Blog: May 2016 Archives

The Fallibility of DNA Evidence

This is a good summary article on the fallibility of DNA evidence. Most interesting to me are the parts on the proprietary algorithms used in DNA matching:

William Thompson points out that Perlin has declined to make public the algorithm that drives the program. “You do have a black-box situation happening here,” Thompson told me. “The data go in, and out comes the solution, and we’re not fully informed of what happened in between.”

Last year, at a murder trial in Pennsylvania where TrueAllele evidence had been introduced, defense attorneys demanded that Perlin turn over the source code for his software, noting that “without it, [the defendant] will be unable to determine if TrueAllele does what Dr. Perlin claims it does.” The judge denied the request.

[…]

When I interviewed Perlin at Cybergenetics headquarters, I raised the matter of transparency. He was visibly annoyed. He noted that he’d published detailed papers on the theory behind TrueAllele, and filed patent applications, too: “We have disclosed not the trade secrets of the source code or the engineering details, but the basic math.”

It’s the same problem as any biometric: we need to know the rates of both false positives and false negatives. And if these algorithms are being used to determine guilt, we have a right to examine them.
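The base-rate effect makes this concrete. Here's an illustrative sketch (all numbers are hypothetical, not measurements of any real DNA system) showing why undisclosed error rates matter so much:

```python
# Illustrative only: how false-positive/false-negative rates interact with
# base rates when a match algorithm is used to flag suspects.
# All numbers here are hypothetical, not measurements of any real DNA system.

def posterior_match_probability(prior, fpr, fnr):
    """P(true match | algorithm reports a match), by Bayes' theorem."""
    tpr = 1.0 - fnr                                  # sensitivity
    p_match = tpr * prior + fpr * (1.0 - prior)      # total probability of a reported match
    return (tpr * prior) / p_match

# A seemingly tiny 1% false-positive rate, applied against a prior of
# 1-in-10,000, still leaves a reported "match" almost always wrong.
p = posterior_match_probability(prior=1e-4, fpr=0.01, fnr=0.05)
print(f"P(true match | reported match) = {p:.4f}")
```

This is exactly why a black-box algorithm is dangerous: without knowing the false-positive rate, a jury cannot evaluate what a reported match actually means.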

EDITED TO ADD (6/13): Three more articles.

Posted on May 31, 2016 at 1:04 PM | 44 Comments

Arresting People for Walking Away from Airport Security

A proposed law in Albany, NY, would make it a crime to walk away from airport screening.

Aside from wondering why county lawmakers are getting involved with what should be national policy, you have to ask: what are these people thinking?

They’re thinking in stories, of course. They have a movie plot in their heads, and they are imagining how this measure solves it.

The law is intended to cover what [Albany County Sheriff Craig] Apple described as a soft spot in the current system that allows passengers to walk away without boarding their flights if security staff flags them for additional scrutiny.

That could include would-be terrorists probing for weaknesses, Apple said, adding that his deputies currently have no legal grounds to question such a person.

Does anyone have any idea what stories these people have in their heads? What sorts of security weaknesses are exposed by walking up to airport security and then walking away?

Posted on May 31, 2016 at 6:35 AM | 90 Comments

Identifying People from their Driving Patterns

People can be identified from their “driver fingerprint”:

…a group of researchers from the University of Washington and the University of California at San Diego found that they could “fingerprint” drivers based only on data they collected from the internal computer network of the vehicle their test subjects were driving, what’s known as a car’s CAN bus. In fact, they found that the data collected from a car’s brake pedal alone could let them correctly distinguish the driver out of 15 individuals about nine times out of ten, after just 15 minutes of driving. With 90 minutes of driving data, or by monitoring more car components, they could pick out the correct driver fully 100 percent of the time.

The paper: “Automobile Driver Fingerprinting,” by Miro Enev, Alex Takahuwa, Karl Koscher, and Tadayoshi Kohno.

Abstract: Today’s automobiles leverage powerful sensors and embedded computers to optimize efficiency, safety, and driver engagement. However, the complexity of possible inferences using in-car sensor data is not well understood. While we do not know of attempts by automotive manufacturers or makers of after-market components (like insurance dongles) to violate privacy, a key question we ask is: could they (or their collection and later accidental leaks of data) violate a driver’s privacy? In the present study, we experimentally investigate the potential to identify individuals using sensor data snippets of their natural driving behavior. More specifically, we record the in-vehicle sensor data on the controller area network (CAN) of a typical modern vehicle (popular 2009 sedan) as each of 15 participants (a) performed a series of maneuvers in an isolated parking lot, and (b) drove the vehicle in traffic along a defined ~50 mile loop through the Seattle metropolitan area. We then split the data into training and testing sets, train an ensemble of classifiers, and evaluate identification accuracy of test data queries by looking at the highest voted candidate when considering all possible one-vs-one comparisons. Our results indicate that, at least among small sets, drivers are indeed distinguishable using only in-car sensors. In particular, we find that it is possible to differentiate our 15 drivers with 100% accuracy when training with all of the available sensors using 90% of driving data from each person. Furthermore, it is possible to reach high identification rates using less than 8 minutes of training data. When more training data is available, it is possible to reach very high identification rates using only a single sensor (e.g., the brake pedal). As an extension, we also demonstrate the feasibility of performing driver identification across multiple days of data collection.
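The core idea is simpler than it sounds. Here's a toy sketch, loosely following the paper's setup: summarize windows of a single signal (say, brake-pedal position) into feature vectors, then match a test window to the closest driver profile. The real study trains an ensemble of classifiers over many CAN-bus signals; this nearest-centroid stand-in, with invented driver parameters, only illustrates why per-driver behavioral statistics are identifying at all:

```python
# Toy driver-identification sketch: per-window summary statistics of a
# single sensor, matched against per-driver "profiles" by nearest centroid.
# Driver parameters and the model itself are hypothetical stand-ins for the
# paper's ensemble of classifiers over real CAN-bus data.
import random
from statistics import mean, stdev

def features(window):
    """Crude per-window features: average level and variability."""
    return (mean(window), stdev(window))

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def identify(profiles, window):
    """Return the driver whose stored profile is nearest to this window."""
    f = features(window)
    return min(profiles, key=lambda d: distance(profiles[d], f))

# Simulate drivers with distinct braking habits (mean pressure, variability).
random.seed(0)
habits = {"driver_a": (0.2, 0.05), "driver_b": (0.5, 0.10), "driver_c": (0.8, 0.05)}

def drive(driver, n=200):
    mu, sigma = habits[driver]
    return [random.gauss(mu, sigma) for _ in range(n)]

profiles = {d: features(drive(d)) for d in habits}          # "training" pass
correct = sum(identify(profiles, drive(d)) == d for d in habits for _ in range(20))
print(f"{correct}/60 test windows attributed to the right driver")
```

With drivers this well separated, identification is essentially perfect, which is the paper's point: ordinary driving behavior is stable enough per person to act as a biometric.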

Posted on May 30, 2016 at 10:10 AM | 32 Comments

Friday Squid Blogging: More Squids

This research paper shows that the number of squids, and the number of cephalopods in general, has been steadily increasing over the past 60 years:

Our analyses revealed that cephalopod abundance has increased over the last six decades, a result consistently replicated across three distinct life history groups: demersal, benthopelagic, and pelagic… This is remarkable given the enormous life-history diversity exhibited across these groups, which were represented in this study by 35 species/genera and six families. Demersal species, for instance, have low dispersal capacity (tens of km) and occupy shelf waters. Benthopelagic species also occupy shelf waters, but have moderate dispersal capacity (hundreds of km) largely facilitated by a paralarval phase. Pelagic species inhabit open oceanic waters and have high dispersal capacity (thousands of km) facilitated by both a paralarval phase and a mobile adult phase.

News articles.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on May 27, 2016 at 4:28 PM | 160 Comments

The Unfalsifiability of Security Claims

Interesting research paper: Cormac Herley, “Unfalsifiability of security claims”:

There is an inherent asymmetry in computer security: things can be declared insecure by observation, but not the reverse. There is no observation that allows us to declare an arbitrary system or technique secure. We show that this implies that claims of necessary conditions for security (and sufficient conditions for insecurity) are unfalsifiable. This in turn implies an asymmetry in self-correction: while the claim that countermeasures are sufficient is always subject to correction, the claim that they are necessary is not. Thus, the response to new information can only be to ratchet upward: newly observed or speculated attack capabilities can argue a countermeasure in, but no possible observation argues one out. Further, when justifications are unfalsifiable, deciding the relative importance of defensive measures reduces to a subjective comparison of assumptions. Relying on such claims is the source of two problems: once we go wrong we stay wrong and errors accumulate, and we have no systematic way to rank or prioritize measures.

This is both true and not true.

Mostly, it’s true. It’s true in cryptography, where we can never say that an algorithm is secure. We can either show how it’s insecure, or say something like: all of these smart people have spent lots of hours trying to break it, and they can’t—but we don’t know what a smarter person who spends even more hours analyzing it will come up with. It’s true in things like airport security, where we can easily point out insecurities but are unable to similarly demonstrate that some measures are unnecessary. And this does lead to a ratcheting up of security, in the absence of constraints like budget or processing speed. It’s easier to demand that everyone take off their shoes for special screening, or that we add another four rounds to the cipher, than to argue the reverse.

But it’s not entirely true. It’s difficult, but we can analyze the cost-effectiveness of different security measures. We can compare them with each other. We can make estimations and decisions and optimizations. It’s just not easy, and often it’s more of an art than a science. But all is not lost.

Still, a very good paper and one worth reading.

Posted on May 27, 2016 at 6:19 AM | 28 Comments

Suckfly

Suckfly seems to be another Chinese nation-state espionage tool, first stealing South Korean certificates and now attacking Indian networks.

Symantec has done a good job of explaining how Suckfly works, and there’s a lot of good detail in the blog posts. My only complaint is its reluctance to disclose who the targets are. It doesn’t name the South Korean companies whose certificates were stolen, and it doesn’t name the Indian companies that were hacked:

Many of the targets we identified were well known commercial organizations located in India. These organizations included:

  • One of India’s largest financial organizations
  • A large e-commerce company
  • The e-commerce company’s primary shipping vendor
  • One of India’s top five IT firms
  • A United States healthcare provider’s Indian business unit
  • Two government organizations

Suckfly spent more time attacking the government networks compared to all but one of the commercial targets. Additionally, one of the two government organizations had the highest infection rate of the Indian targets.

My guess is that Symantec can’t disclose those names, because those are all customers and Symantec has confidentiality obligations towards them. But by leaving this information out, Symantec is harming us all. We have to make decisions on the Internet all the time about who to trust and who to rely on. The more information we have, the better we can make those decisions. And the more companies are publicly called out when their security fails, the more they will try to make security better.

Symantec’s motivation in releasing information about Suckfly is marketing, and that’s fine. There, its interests and the interests of the research community are aligned. But here, the interests diverge, and this is the value of mandatory disclosure laws.

Posted on May 26, 2016 at 6:31 AM | 29 Comments

Companies Not Saving Your Data

There’s a new trend in Silicon Valley startups; companies are not collecting and saving data on their customers:

In Silicon Valley, there’s a new emphasis on putting up barriers to government requests for data. The Apple-FBI case and its aftermath have tech firms racing to employ a variety of tools that would place customer information beyond the reach of a government-ordered search.

The trend is a striking reversal of a long-standing article of faith in the data-hungry tech industry, where companies including Google and the latest start-ups have predicated success on the ability to hoover up as much information as possible about consumers.

Now, some large tech firms are increasingly offering services to consumers that rely far less on collecting data. The sea change is even becoming evident among early-stage companies that see holding so much data as more of a liability than an asset, given the risk that cybercriminals or government investigators might come knocking.

Start-ups that once hesitated to invest in security are now repurposing limited resources to build technical systems to shed data, even if it hinders immediate growth.

The article also talks about companies providing customers with end-to-end encryption.

I believe that all this data isn’t nearly as valuable as the big-data people are promising. Now that companies are recognizing that it is also a liability, I think we’re going to see more rational trade-offs about what to keep—and for how long—and what to discard.

Posted on May 25, 2016 at 2:37 PM | 20 Comments

GCHQ Discloses Two OS X Vulnerabilities to Apple

This is good news:

Communications and Electronics Security Group (CESG), the information security arm of GCHQ, was credited with the discovery of two vulnerabilities that were patched by Apple last week.

The flaws could allow hackers to corrupt memory and cause a denial of service through a crafted app or execute arbitrary code in a privileged context.

The memory handling vulnerabilities (CVE-2016-1822 and CVE-2016-1829) affect OS X El Capitan v10.11 and later operating systems, according to Apple’s 2016-003 security update. The memory corruption vulnerabilities allowed hackers to execute arbitrary code with kernel privileges.

There’s still a lot that needs to be said about this equities process.

Posted on May 24, 2016 at 2:12 PM | 16 Comments

Google Moving Forward on Automatic Logins

Google is trying to bring this to Android developers by the end of the year:

Today, secure logins—like those used by banks or in the enterprise environment—often require more than just a username and password. They tend to also require the entry of a unique PIN, which is generally sent to your phone via SMS or emailed. This is commonly referred to as two-factor authentication, as it combines something you know (your password) with something you have in your possession, like your phone.

With Project Abacus, users would instead unlock devices or sign into applications based on a cumulative “Trust Score.” This score would be calculated using a variety of factors, including your typing patterns, current location, speed and voice patterns, facial recognition, and other things.

Basically, the system replaces traditional authentication—something you know, have, or are—with surveillance. So maybe this is a good idea, and maybe it isn’t. The devil is in the details.
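For a sense of what score-based continuous authentication looks like in practice, here's a hypothetical sketch: fuse several weak signals into one number and compare it to per-action thresholds. The signal names, weights, and thresholds are invented for illustration; Google has not published Abacus's actual model.

```python
# Hypothetical "Trust Score" sketch: combine weak behavioral signals into a
# single confidence value and gate actions on it. Weights and thresholds
# below are invented, not Google's.

WEIGHTS = {"typing_pattern": 0.3, "location": 0.2, "voice": 0.25, "face": 0.25}

def trust_score(signals):
    """Weighted average of per-signal confidences, each in [0, 1]."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

# Sensitive actions demand a higher score than mundane ones.
THRESHOLDS = {"unlock_phone": 0.4, "open_banking_app": 0.8}

def allowed(action, signals):
    return trust_score(signals) >= THRESHOLDS[action]

# Phone is in a familiar place and typing looks right, but the face camera
# sees nothing: enough confidence to unlock, not enough for banking.
signals = {"typing_pattern": 0.9, "location": 1.0, "voice": 0.5, "face": 0.0}
print(allowed("unlock_phone", signals))
print(allowed("open_banking_app", signals))
```

Note what feeds the score: every input is a continuous observation of the user. That's the surveillance trade-off in miniature.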

EDITED TO ADD: It’s being called creepy. But, as we’ve repeatedly learned, creepy is subjective. What’s creepy now is perfectly normal two years later.

Posted on May 24, 2016 at 8:35 AM | 74 Comments

State of Online Tracking

Really interesting research: “Online tracking: A 1-million-site measurement and analysis,” by Steven Englehardt and Arvind Narayanan:

Abstract: We present the largest and most detailed measurement of online tracking conducted to date, based on a crawl of the top 1 million websites. We make 15 types of measurements on each site, including stateful (cookie-based) and stateless (fingerprinting-based) tracking, the effect of browser privacy tools, and the exchange of tracking data between different sites (“cookie syncing”). Our findings include multiple sophisticated fingerprinting techniques never before measured in the wild.

This measurement is made possible by our web privacy measurement tool, OpenWPM, which uses an automated version of a full-fledged consumer browser. It supports parallelism for speed and scale, automatic recovery from failures of the underlying browser, and comprehensive browser instrumentation. OpenWPM is open-source and has already been used as the basis of seven published studies on web privacy and security.
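For readers unfamiliar with the "stateless" tracking the paper measures: no cookie is stored, but hashing a handful of browser attributes yields an identifier that is stable across visits and often nearly unique. A minimal illustration, with invented attribute values (real fingerprinters use far richer signals, like canvas rendering):

```python
# Minimal illustration of stateless (fingerprint-based) tracking: hash
# browser attributes into a stable identifier. Attribute values are
# invented examples; real trackers use many more signals.
import hashlib

def fingerprint(attrs):
    """Hash browser attributes into a stable tracking identifier."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visit_1 = {"user_agent": "Mozilla/5.0 ...", "screen": "2560x1440",
           "timezone": "UTC-8", "fonts": "Arial,Helvetica,Verdana"}
visit_2 = dict(visit_1)                          # same browser, later visit
other_user = dict(visit_1, screen="1920x1080")   # one attribute differs

print(fingerprint(visit_1) == fingerprint(visit_2))    # same user re-identified
print(fingerprint(visit_1) == fingerprint(other_user)) # different user, different ID
```

Clearing cookies does nothing against this; only changing or spoofing the underlying attributes does, which is why the paper's measurements of browser privacy tools matter.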

Summary in this blog post.

Posted on May 23, 2016 at 5:33 AM | 48 Comments

Detecting Explosives

Really interesting article on the difficulties involved with explosive detection at airport security checkpoints.

Abstract: The mid-air bombing of a Somali passenger jet in February was a wake-up call for security agencies and those working in the field of explosive detection. It was also a reminder that terrorist groups from Yemen to Syria to East Africa continue to explore innovative ways to get bombs onto passenger jets by trying to beat detection systems or recruit insiders. The layered state-of-the-art detection systems that are now in place at most airports in the developed world make it very hard for terrorists to sneak bombs onto planes, but the international aviation sector remains vulnerable because many airports in the developing world either have not deployed these technologies or have not provided rigorous training for operators. Technologies and security measures will need to improve to stay one step ahead of innovative terrorists. Given the pattern of recent Islamic State attacks, there is a strong argument for extending state-of-the-art explosive detection systems beyond the aviation sector to locations such as sports arenas and music venues.

I disagree with his conclusions—the last sentence above—but the technical information on explosives detection technology is really interesting.

Posted on May 20, 2016 at 2:06 PM | 24 Comments

Primitive Food Crops and Security

Economists argue that the security needs of various crops are the cause of civilization size:

The argument depends on the differences between how grains and tubers are grown. Crops like wheat are harvested once or twice a year, yielding piles of small, dry grains. These can be stored for long periods of time and are easily transported, or stolen.

Root crops, on the other hand, don’t store well at all. They’re heavy, full of water, and rot quickly once taken out of the ground. Yuca, for instance, grows year-round and in ancient times, people only dug it up right before it was eaten. This provided some protection against theft in ancient times. It’s hard for bandits to make off with your harvest when most of it is in the ground, instead of stockpiled in a granary somewhere.

But the fact that grains posed a security risk may have been a blessing in disguise. The economists believe that societies cultivating crops like wheat and barley may have experienced extra pressure to protect their harvests, galvanizing the creation of warrior classes and the development of complex hierarchies and taxation schemes.

Posted on May 18, 2016 at 9:11 AM | 30 Comments

More NSA Documents from the Snowden Archive

The Intercept is starting to publish a lot more documents. Yesterday they published the first year of an internal newsletter called SIDtoday, along with several articles based on the documents.

The Intercept’s first SIDtoday release comprises 166 articles, including all articles published between March 31, 2003, when SIDtoday began, and June 30, 2003, plus installments of all article series begun during this period through the end of the year. Major topics include the National Security Agency’s role in interrogations, the Iraq War, the war on terror, new leadership in the Signals Intelligence Directorate, and new, popular uses of the internet and of mobile computing devices.

They’re also making the archive available to more researchers.

Posted on May 17, 2016 at 6:18 AM | 82 Comments

Defeating a Tamper-Proof Bottle

Here’s an interesting case of doctored urine-test samples from the Sochi Olympics. Evidence points to someone defeating the tamper resistance of the bottles:

Berlinger bottles come in sets of two: one for the athlete’s “A” sample, which is tested at the Games, and the other for the “B” sample, which is used to corroborate a positive test of the A sample. Metal teeth in the B bottle’s cap lock in place, so it cannot be twisted off.

“The bottles are either destroyed or retain visible traces of tampering if any unauthorized attempt is made to open them,” Berlinger’s website says about the security of the bottles.

The only way to open the bottle, according to Berlinger, is to use a special machine sold by the company for about $2,000; it cracks the bottle’s cap in half, making it apparent that the sample has been touched.

Yet someone figured out how to open the bottles, swap out the liquid, and replace the caps without leaving any visible signs of tampering.

EDITED TO ADD: There’s a new article on how they did it.

In Room 124, Dr. Rodchenkov received the sealed bottles through the hole and handed them to a man who he believed was a Russian intelligence officer. The man took the bottles to a building nearby. Within a few hours, the bottles were returned with the caps loose and unbroken.

One commenter complained that I called the bottles “tamper-proof,” even though I used the more accurate phrase “tamper-resistance” in the post. Yes, that was sloppy.

Posted on May 16, 2016 at 6:03 AM | 68 Comments

More on the Going Dark Debate

Lawfare is turning out to be the go-to blog for policy wonks about various government debates on cybersecurity. There are two good posts this week on the Going Dark debate.

The first is from those of us who wrote the “Keys Under Doormats” paper last year, criticizing the concept of backdoors and key escrow. We were responding to a half-baked proposal on how to give the government access without causing widespread insecurity, and we pointed out where almost all of these sorts of proposals fall short:

1. Watch for systems that rely on a single powerful key or a small set of them.

2. Watch for systems using high-value keys over and over and still claiming not to increase risk.

3. Watch for the claim that the abstract algorithm alone is the measure of system security.

4. Watch for the assumption that scaling anything on the global Internet is easy.

5. Watch for the assumption that national borders are not a factor.

6. Watch for the assumption that human rights and the rule of law prevail throughout the world.

The second is by Susan Landau, and is a response to the ODNI’s response to the “Don’t Panic” report. Our original report said basically that the FBI wasn’t going dark and that surveillance information is everywhere. At a Senate hearing, Sen. Wyden requested that the Office of the Director of National Intelligence respond to the report. It did—not very well, honestly—and Landau responded to that response. She pointed out that there really wasn’t much disagreement: the points the ODNI claimed to take issue with were actually points we made and agreed with.

In the end, the ODNI’s response to our report leaves me somewhat confused. The reality is that the only strong disagreement seems to be with an exaggerated view of one finding. It almost appears as if ODNI is using the Harvard report as an opportunity to say, “Widespread use of encryption will make our work life more difficult.” Of course it will. Widespread use of encryption will also help prevent some of the cybersecurity exploits and attacks we have been experiencing over the last decade. The ODNI letter ignored that issue.

EDITED TO ADD: Related is this article where James Comey defends spending $1M+ on that iPhone vulnerability. There’s some good discussion of the vulnerabilities equities process, and the FBI’s technical lack of sophistication.

Posted on May 13, 2016 at 6:55 AM | 55 Comments

Hacking Gesture-Based Security

Interesting research: Abdul Serwadda, Vir V. Phoha, Zibo Wang, Rajesh Kumar, and Diksha Shukla, “Robotic Robbery on the Touch Screen,” ACM Transactions on Information and System Security, May 2016.

Abstract: Despite the tremendous amount of research fronting the use of touch gestures as a mechanism of continuous authentication on smart phones, very little research has been conducted to evaluate how these systems could behave if attacked by sophisticated adversaries. In this article, we present two Lego-driven robotic attacks on touch-based authentication: a population statistics-driven attack and a user-tailored attack. The population statistics-driven attack is based on patterns gleaned from a large population of users, whereas the user-tailored attack is launched based on samples stolen from the victim. Both attacks are launched by a Lego robot that is trained on how to swipe on the touch screen. Using seven verification algorithms and a large dataset of users, we show that the attacks cause the system’s mean false acceptance rate (FAR) to increase by up to fivefold relative to the mean FAR seen under the standard zero-effort impostor attack. The article demonstrates the threat that robots pose to touch-based authentication and provides compelling evidence as to why the zero-effort attack should cease to be used as the benchmark for touch-based authentication systems.
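The headline metric here is the false acceptance rate (FAR), and the paper's point is that FAR measured against "zero-effort" impostors (random other users) understates risk. A sketch of the bookkeeping, with invented accept/reject outcomes chosen to mirror the paper's up-to-fivefold increase:

```python
# Illustrative FAR computation for a touch-based authenticator. The
# accept/reject outcomes below are invented; only the arithmetic is real.

def far(impostor_attempts):
    """False acceptance rate: fraction of impostor attempts wrongly accepted."""
    return sum(impostor_attempts) / len(impostor_attempts)

# 1 = system wrongly accepted the impostor swipe, 0 = rejected it.
zero_effort = [0] * 97 + [1] * 3        # naive impostors: FAR = 3%
robot_forgery = [0] * 85 + [1] * 15     # statistics-driven robot: FAR = 15%

print(f"zero-effort FAR:  {far(zero_effort):.0%}")
print(f"robot-attack FAR: {far(robot_forgery):.0%}")
print(f"increase: {far(robot_forgery) / far(zero_effort):.0f}x")
```

Same system, same threshold: only the impostor population changed. That's why benchmarking against zero-effort attacks alone gives a false sense of security.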

News article. Slashdot thread.

Posted on May 12, 2016 at 5:31 AM | 11 Comments

FTC Investigating Android Patching Practices

It’s a known truth that most Android vulnerabilities don’t get patched. It’s not Google’s fault. It releases the patches, but the phone carriers don’t push them down to their smartphone users.

Now the Federal Communications Commission and the Federal Trade Commission are investigating, sending letters to major carriers and device makers.

I think this is a good thing. This is a long-existing market failure, and a place where we need government regulation to make us all more secure.

Posted on May 11, 2016 at 2:37 PM | 68 Comments

New Credit Card Scam

A criminal ring was arrested in Malaysia for credit card fraud:

They would visit the online shopping websites and purchase all their items using phony credit card details while the debugging app was activated.

The app would fetch the transaction data from the bank to the online shopping website, and trick the website into believing that the transaction was approved, when in reality, it had been declined by the bank.

The syndicates would later sell the items they had purchased illegally for a much lower price.

The problem here seems to be bad systems design. Why should the user be able to spoof the merchant’s verification protocol with the bank?
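A sketch of both the flaw and the fix: if the merchant learns the bank's decision through anything the customer's device can intercept, a "debugging app" can rewrite DECLINED into APPROVED. The defense is for the merchant to verify an authenticated response from the bank. Field names and the shared key below are hypothetical; real payment APIs use signed server-to-server callbacks for the same reason.

```python
# Why the merchant must authenticate the bank's response rather than trust
# whatever the customer's device relays. Message format and key are
# invented for illustration.
import hashlib
import hmac

BANK_MERCHANT_KEY = b"shared-secret-established-out-of-band"

def sign(status, txn_id, key=BANK_MERCHANT_KEY):
    """MAC the bank would attach to its authorization response."""
    return hmac.new(key, f"{txn_id}:{status}".encode(), hashlib.sha256).hexdigest()

def merchant_accepts(status, txn_id, mac):
    """Merchant trusts the status only if the bank's MAC checks out."""
    expected = sign(status, txn_id)
    return status == "APPROVED" and hmac.compare_digest(mac, expected)

# The bank declines; the attacker's app flips the status in transit but
# cannot forge a MAC for the altered message.
declined_mac = sign("DECLINED", "txn-1001")
print(merchant_accepts("APPROVED", "txn-1001", declined_mac))              # tampered: rejected
print(merchant_accepts("APPROVED", "txn-1001", sign("APPROVED", "txn-1001")))  # genuine: accepted
```

The Malaysian scheme worked precisely because the website skipped this step and believed a status the client controlled.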

Posted on May 11, 2016 at 6:34 AM | 17 Comments

Economist Detained for Doing Math on an Airplane

An economics professor was detained when he was spotted doing math on an airplane:

On Thursday evening, a 40-year-old man—with dark, curly hair, olive skin and an exotic foreign accent—boarded a plane. It was a regional jet making a short, uneventful hop from Philadelphia to nearby Syracuse.

Or so dozens of unsuspecting passengers thought.

The curly-haired man tried to keep to himself, intently if inscrutably scribbling on a notepad he’d brought aboard. His seatmate, a blond-haired, 30-something woman sporting flip-flops and a red tote bag, looked him over. He was wearing navy Diesel jeans and a red Lacoste sweater—a look he would later describe as “simple elegance”—but something about him didn’t seem right to her.

She decided to try out some small talk.

Is Syracuse home? She asked.

No, he replied curtly.

He similarly deflected further questions. He appeared laser-focused—perhaps too laser-focused—on the task at hand, those strange scribblings.

Rebuffed, the woman began reading her book. Or pretending to read, anyway. Shortly after boarding had finished, she flagged down a flight attendant and handed that crew-member a note of her own.

This story ended better than some. Economics professor Guido Menzio (yes, he’s Italian) was taken off the plane, questioned, cleared, and allowed to board with the rest of the passengers two hours later.

This is a result of our stupid “see something, say something” culture. As I repeatedly say: “If you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.”

On the other hand, “Algebra, of course, does have Arabic origins plus math is used to make bombs.” Plus, this fine joke from 2003:

At Heathrow Airport today, an individual, later discovered to be a school teacher, was arrested trying to board a flight while in possession of a compass, a protractor, and a graphical calculator.

Authorities believe she is a member of the notorious al-Gebra movement. She is being charged with carrying weapons of math instruction.

AP story. Slashdot thread.

Seriously, though, I worry that this kind of thing will happen to me. I’m older, and I’m not very Semitic looking, but I am curt to my seatmates and intently focused on what I am doing—which sometimes involves looking at web pages about, and writing about, security and terrorism. I’m sure I’m vaguely suspicious.

EDITED TO ADD: Last month a student was removed from an airplane for speaking Arabic.

Posted on May 9, 2016 at 1:15 PM | 96 Comments

NIST Starts Planning for Post-Quantum Cryptography

Last year, the NSA announced its plans for transitioning to cryptography that is resistant to a quantum computer. Now, it’s NIST’s turn. Its just-released report talks about the importance of algorithm agility and quantum resistance. Sometime soon, it’s going to have a competition for quantum-resistant public-key algorithms:

Creating those newer, safer algorithms is the longer-term goal, Moody says. A key part of this effort will be an open collaboration with the public, which will be invited to devise and vet cryptographic methods that—to the best of experts’ knowledge—will be resistant to quantum attack. NIST plans to launch this collaboration formally sometime in the next few months, but in general, Moody says it will resemble past competitions such as the one for developing the SHA-3 hash algorithm, used in part for authenticating digital messages.

“It will be a long process involving public vetting of quantum-resistant algorithms,” Moody said. “And we’re not expecting to have just one winner. There are several systems in use that could be broken by a quantum computer—public-key encryption and digital signatures, to take two examples—and we will need different solutions for each of those systems.”

The report rightly states that we’re okay in the symmetric cryptography world; the key lengths are long enough.
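The reason symmetric cryptography survives is that the best known quantum attack on key search, Grover's algorithm, gives only a quadratic speedup: searching an n-bit keyspace takes roughly 2^(n/2) steps, halving the effective key length. Doubling key sizes restores the margin, whereas public-key schemes like RSA get no such easy fix because Shor's algorithm breaks them outright. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope for post-quantum symmetric security: Grover's
# algorithm roughly halves the effective key length, so doubling key
# sizes compensates.

def effective_bits(key_bits, quantum=False):
    """Approximate brute-force security level, in bits of work."""
    return key_bits // 2 if quantum else key_bits

for k in (128, 256):
    print(f"{k}-bit key: {effective_bits(k)} bits of security classically, "
          f"~{effective_bits(k, quantum=True)} bits against Grover")
```

So a 256-bit symmetric key retains roughly 128-bit security against a quantum adversary, which is why the report can declare the symmetric world okay while public-key algorithms need replacing.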

This is an excellent development. NIST has done an excellent job with its previous cryptographic standards, giving us a couple of good, strong, well-reviewed, and patent-free algorithms. I have no doubt this process will be equally excellent. (If NIST is keeping a list, aside from post-quantum public-key algorithms, I would like to see competitions for a larger-block-size block cipher and a super-fast stream cipher as well.)

Two news articles.

Posted on May 9, 2016 at 6:19 AM | 43 Comments

White House Report on Big Data Discrimination

The White House has released a report on big-data discrimination. From the blog post:

Using case studies on credit lending, employment, higher education, and criminal justice, the report we are releasing today illustrates how big data techniques can be used to detect bias and prevent discrimination. It also demonstrates the risks involved, particularly how technologies can deliberately or inadvertently perpetuate, exacerbate, or mask discrimination.

The purpose of the report is not to offer remedies to the issues it raises, but rather to identify these issues and prompt conversation, research—and action—among technologists, academics, policy makers, and citizens alike.

The report includes a number of recommendations for advancing work in this nascent field of data and ethics. These include investing in research, broadening and diversifying technical leadership, cross-training, and expanded literacy on data discrimination, bolstering accountability, and creating standards for use within both the government and the private sector. It also calls on computer and data science programs and professionals to promote fairness and opportunity as part of an overall commitment to the responsible and ethical use of data.

Posted on May 6, 2016 at 6:12 AM | 16 Comments

Own a Pair of Clipper Chips

The AT&T TSD was an early 1990s telephone encryption device. It was digital. Voice quality was okay. And it was the device that contained the infamous Clipper Chip, the U.S. government’s first attempt to put a back door into everyone’s communications.

Marcus Ranum is selling a pair on eBay. He has the description wrong, though. The TSD-3600-E is the model with the Clipper Chip in it. The TSD-3600-F is the version with the insecure exportable algorithm.

Posted on May 5, 2016 at 6:31 AM | 15 Comments

Credential Stealing as an Attack Vector

Traditional computer security concerns itself with vulnerabilities. We employ antivirus software to detect malware that exploits vulnerabilities. We have automatic patching systems to fix vulnerabilities. We debate whether the FBI should be permitted to introduce vulnerabilities in our software so it can get access to systems with a warrant. This is all important, but what’s missing is a recognition that software vulnerabilities aren’t the most common attack vector: credential stealing is.

The most common way hackers of all stripes, from criminals to hacktivists to foreign governments, break into networks is by stealing and using a valid credential. Basically, they steal passwords, set up man-in-the-middle attacks to piggy-back on legitimate logins, or engage in cleverer attacks to masquerade as authorized users. It’s a more effective avenue of attack in many ways: it doesn’t involve finding a zero-day or unpatched vulnerability, there’s less chance of discovery, and it gives the attacker more flexibility in technique.

Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) group—basically the country’s chief hacker—gave a rare public talk at a conference in January. In essence, he said that zero-day vulnerabilities are overrated, and credential stealing is how he gets into networks: “A lot of people think that nation states are running their operations on zero days, but it’s not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

This is true for us, and it’s also true for those attacking us. It’s how the Chinese hackers breached the Office of Personnel Management in 2015. The 2013 criminal attack against Target Corporation started when hackers stole the login credentials of the company’s HVAC vendor. Iranian hackers stole US login credentials. And the hacktivist who broke into the cyber-arms manufacturer Hacking Team and published pretty much every proprietary document from that company used stolen credentials.

As Joyce said, stealing a valid credential and using it to access a network is easier, less risky, and ultimately more productive than using an existing vulnerability, even a zero-day.

Our notions of defense need to adapt to this change. First, organizations need to beef up their authentication systems. There are lots of tricks that help here: two-factor authentication, one-time passwords, physical tokens, smartphone-based authentication, and so on. None of these is foolproof, but they all make credential stealing harder.
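The one-time-password idea mentioned above can be sketched in a few lines. Here is a minimal TOTP implementation per RFC 6238, using only the Python standard library; it is an illustrative sketch, not production code, and the secret is the RFC's own published test key:

```python
# Minimal TOTP sketch (RFC 6238), standard library only. Illustrates the
# one-time-password second factor; not production code.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Return the time-based one-time password for a base32-encoded secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59 s.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, t=59))  # prints 287082
```

The server and the user's device derive the same short-lived code from a shared secret, so a stolen static password alone is no longer enough to log in.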

Second, organizations need to invest in breach detection and—most importantly—incident response. Credential-stealing attacks tend to bypass traditional IT security software. But attacks are complex and multi-step. Being able to detect them in progress, and to respond quickly and effectively enough to kick attackers out and restore security, is essential to resilient network security today.
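In outline, this kind of breach detection is baselining normal behavior and flagging deviations. The toy heuristic below is entirely hypothetical (the essay names no specific technique): it flags logins from a network prefix the user has never logged in from before. Real products correlate far more signals, but the shape — baseline, compare, alert — is the same.

```python
# Hypothetical breach-detection sketch: flag logins from a source network
# the user has never used before. Baseline, compare, alert.
from collections import defaultdict

def flag_anomalous_logins(events):
    """events: iterable of (user, source_ip) pairs in time order.
    Returns logins whose crude /24 prefix is new for that user."""
    seen = defaultdict(set)
    alerts = []
    for user, ip in events:
        net = ".".join(ip.split(".")[:3])  # crude /24 prefix
        if seen[user] and net not in seen[user]:
            alerts.append((user, ip))  # established user, brand-new network
        seen[user].add(net)
    return alerts

logins = [("alice", "10.0.0.5"), ("alice", "10.0.0.9"),
          ("alice", "203.0.113.7")]  # stolen credential used remotely
print(flag_anomalous_logins(logins))  # [('alice', '203.0.113.7')]
```

A first-seen alert like this is noisy on its own (travel, VPNs); in practice it is one signal among many feeding an incident-response workflow.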

Vulnerabilities are still critical. Fixing vulnerabilities is still vital for security, and introducing new vulnerabilities into existing systems is still a disaster. But strong authentication and robust incident response are also critical. And an organization that skimps on these will find itself unable to keep its networks secure.

This essay originally appeared on Xconomy.

EDITED TO ADD (5/23): Portuguese translation.

Posted on May 4, 2016 at 6:51 AM · 38 Comments

Julian Sanchez on the Feinstein-Burr Bill

Two excellent posts.

It’s such a badly written bill that I wonder if it’s just there to anchor us to an extreme, so we’re relieved when the actual bill comes along. Me:

“This is the most braindead piece of legislation I’ve ever seen,” Schneier—who has just been appointed a Fellow of the Kennedy School of Government at Harvard—told The Reg. “The person who wrote this either has no idea how technology works or just doesn’t care.”

Posted on May 3, 2016 at 1:10 PM · 31 Comments

Vulnerabilities in Samsung's SmartThings

Interesting research: Earlence Fernandes, Jaeyeon Jung, and Atul Prakash, “Security Analysis of Emerging Smart Home Applications”:

Abstract: Recently, several competing smart home programming frameworks that support third party app development have emerged. These frameworks provide tangible benefits to users, but can also expose users to significant security risks. This paper presents the first in-depth empirical security analysis of one such emerging smart home programming platform. We analyzed Samsung-owned SmartThings, which has the largest number of apps among currently available smart home platforms, and supports a broad range of devices including motion sensors, fire alarms, and door locks. SmartThings hosts the application runtime on a proprietary, closed-source cloud backend, making scrutiny challenging. We overcame the challenge with a static source code analysis of 499 SmartThings apps (called SmartApps) and 132 device handlers, and carefully crafted test cases that revealed many undocumented features of the platform. Our key findings are twofold. First, although SmartThings implements a privilege separation model, we discovered two intrinsic design flaws that lead to significant overprivilege in SmartApps. Our analysis reveals that over 55% of SmartApps in the store are overprivileged due to the capabilities being too coarse-grained. Moreover, once installed, a SmartApp is granted full access to a device even if it specifies needing only limited access to the device. Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock codes. We exploited framework design flaws to construct four proof-of-concept attacks that: (1) secretly planted door lock codes; (2) stole existing door lock codes; (3) disabled vacation mode of the home; and (4) induced a fake fire alarm. We conclude the paper with security lessons for the design of emerging smart home programming frameworks.

Research website. News article—copy and paste into a text editor to avoid the ad blocker blocker.

EDITED TO ADD: Another article.

Posted on May 2, 2016 at 9:01 AM · 19 Comments
