Entries Tagged "deniability"


Duress Codes for Fingerprint Access Control

Mike Specter has an interesting idea on how to make biometric access-control systems more secure: add a duress code. For example, you might configure your iPhone so that either your thumb or your forefinger unlocks the device, but your left middle finger disables the fingerprint mechanism (useful in the US, where being compelled to divulge your password is a Fifth Amendment violation but being forced to place your finger on the fingerprint reader is not) and your right middle finger permanently wipes the phone (useful in other countries where coercion techniques are much more severe).
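
What Specter describes is a policy layer on top of the biometric match. Below is a minimal sketch of that policy in C, assuming a hypothetical sensor that reports which enrolled finger matched; the finger identifiers and action hooks are invented for illustration, and no real phone exposes such an API today.

```c
#include <stdio.h>

/* Hypothetical finger identifiers; a real sensor would report a matched
   template ID after biometric comparison. */
typedef enum {
    FINGER_RIGHT_THUMB,
    FINGER_RIGHT_FOREFINGER,
    FINGER_LEFT_MIDDLE,
    FINGER_RIGHT_MIDDLE,
    FINGER_UNKNOWN
} finger_id;

/* Stub action hooks standing in for the real device operations. */
static void unlock_device(void)            { puts("device unlocked"); }
static void disable_fingerprint_auth(void) { puts("fingerprint auth disabled; passcode required"); }
static void wipe_device(void)              { puts("device wiped"); }

/* Map each enrolled finger to either a normal unlock or a duress action. */
static void handle_fingerprint(finger_id f)
{
    switch (f) {
    case FINGER_RIGHT_THUMB:
    case FINGER_RIGHT_FOREFINGER:
        unlock_device();                 /* normal unlock fingers */
        break;
    case FINGER_LEFT_MIDDLE:
        disable_fingerprint_auth();      /* duress: fall back to the passcode */
        break;
    case FINGER_RIGHT_MIDDLE:
        wipe_device();                   /* duress: destroy the data */
        break;
    default:
        puts("no match");                /* unenrolled finger */
        break;
    }
}

int main(void)
{
    handle_fingerprint(FINGER_LEFT_MIDDLE);  /* simulate a coerced unlock attempt */
    return 0;
}
```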

Posted on January 26, 2017 at 2:03 PM

Was the iOS SSL Flaw Deliberate?

Last October, I speculated on the best ways to go about designing and implementing a software backdoor. I suggested three characteristics of a good backdoor: low chance of discovery, high deniability if discovered, and minimal conspiracy to implement.

The critical iOS vulnerability that Apple patched last week is an excellent example. Look at the code. What caused the vulnerability is a single line of code: a second “goto fail;” statement. Because that second statement isn’t guarded by any conditional, it always executes, jumping past the remaining signature-verification steps so that the function returns success without ever checking the signature.
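
Here is a self-contained C analogue of the flaw (not Apple’s actual source, just the same control-flow pattern): the duplicated goto is not governed by the preceding if, so it always executes, and the function falls through to the fail label with err still zero.

```c
#include <stdio.h>

/* Stand-ins for the real hash/signature steps; each returns 0 on success. */
static int check_hash_update(int ok) { return ok ? 0 : -1; }
static int check_hash_final(int ok)  { return ok ? 0 : -1; }
static int check_signature(int ok)   { return ok ? 0 : -1; }

/* Simplified analogue of the vulnerable verification routine. */
static int verify(int update_ok, int final_ok, int signature_ok)
{
    int err;

    if ((err = check_hash_update(update_ok)) != 0)
        goto fail;
        goto fail;   /* the duplicated line: not part of the if, always taken */
    if ((err = check_hash_final(final_ok)) != 0)       /* never reached */
        goto fail;
    if ((err = check_signature(signature_ok)) != 0)    /* never reached */
        goto fail;

fail:
    return err;      /* err is still 0 here, so verification "succeeds" */
}

int main(void)
{
    /* Even with a bad signature, verify() returns 0 (success). */
    printf("verify with bad signature: %d\n", verify(1, 1, 0));
    return 0;
}
```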

The flaw is subtle, and hard to spot while scanning the code. It’s easy to imagine how this could have happened by error. And it would have been trivially easy for one person to add the vulnerability.

Was this done on purpose? I have no idea. But if I wanted to do something like this on purpose, this is exactly how I would do it.

EDITED TO ADD (2/27): If the Apple auditing system is any good, they would be able to trace this errant goto line not just to the source-code check-in details, but to the specific login that made the change. And they would quickly know whether this was just an error, or a deliberate change by a bad actor. Does anyone know what’s going on inside Apple?

EDITED TO ADD (2/27): Steve Bellovin has a pair of posts where he concludes that if this bug is enemy action, it’s fairly clumsy and unlikely to be the work of professionals.

Posted on February 27, 2014 at 6:03 AM

Tor User Identified by FBI

Eldo Kim e-mailed a bomb threat to Harvard so he could skip a final exam. (It’s just a coincidence that I was on the Harvard campus that day.) Even though he used an anonymous account and Tor, the FBI identified him. Reading the criminal complaint, it seems that the FBI got itself a list of Harvard users who had accessed the Tor network, and went through them one by one to find the one who sent the threat.

This is one of the problems of using a rare security tool. The very thing that gives you plausible deniability also makes you the most likely suspect. The FBI didn’t have to break Tor; they just used conventional police mechanisms to get Kim to confess.

Tor didn’t break; Kim did.

Posted on December 18, 2013 at 9:59 AM

Defending Against Crypto Backdoors

We already know the NSA wants to eavesdrop on the Internet. It has secret agreements with telcos to get direct access to bulk Internet traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to sift through it all. And it can identify ciphertext—encrypted information—and figure out which programs could have created it.

But what the NSA wants is to be able to read that encrypted information in as close to real-time as possible. It wants backdoors, just like the cybercriminals and less benevolent governments do.

And we have to figure out how to make it harder for them, or anyone else, to insert those backdoors.

How the NSA Gets Its Backdoors

The FBI tried to get backdoor access embedded in an AT&T secure telephone system in the mid-1990s. The Clipper Chip included something called a LEAF: a Law Enforcement Access Field. The LEAF carried the key used to encrypt the phone conversation, itself encrypted under a special key known to the FBI, and it was transmitted along with the phone conversation. An FBI eavesdropper could intercept the LEAF, decrypt it, and then use the recovered session key to listen in on the phone call.
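
The data flow is simple enough to sketch. The toy C program below uses XOR as a stand-in cipher purely to show the structure: the session key travels in-band, wrapped under an escrow key the eavesdropper already holds. (The real Clipper Chip used Skipjack, unit keys split between two escrow agencies, and an authentication checksum; none of that detail is modeled here.)

```c
#include <stdio.h>
#include <string.h>

#define KEY_LEN 10   /* Skipjack session keys were 80 bits; length chosen to match */

/* Toy stand-in for a block cipher: XOR with a repeating key. NOT real crypto;
   it only illustrates "encrypt X under key K". */
static void toy_encrypt(const unsigned char *key, const unsigned char *in,
                        unsigned char *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ key[i % KEY_LEN];
}

int main(void)
{
    /* Session key used to encrypt the phone conversation. */
    unsigned char session_key[KEY_LEN] = {1,2,3,4,5,6,7,8,9,10};

    /* Escrow key known to law enforcement. */
    unsigned char escrow_key[KEY_LEN]  = {42,42,42,42,42,42,42,42,42,42};

    /* The LEAF: the session key encrypted under the escrow key, sent in-band
       alongside the encrypted conversation. */
    unsigned char leaf[KEY_LEN];
    toy_encrypt(escrow_key, session_key, leaf, KEY_LEN);

    /* An eavesdropper holding the escrow key recovers the session key from
       the intercepted LEAF (XOR is its own inverse). */
    unsigned char recovered[KEY_LEN];
    toy_encrypt(escrow_key, leaf, recovered, KEY_LEN);

    printf("session key recovered from LEAF: %s\n",
           memcmp(recovered, session_key, KEY_LEN) == 0 ? "yes" : "no");
    return 0;
}
```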

But the Clipper Chip faced severe backlash, and became defunct a few years after being announced.

Having lost that public battle, the NSA decided to get its backdoors through subterfuge: by asking nicely, pressuring, threatening, bribing, or mandating through secret order. The general name for this program is BULLRUN.

Defending against these attacks is difficult. We know from subliminal channel and kleptography research that it’s pretty much impossible to guarantee that a complex piece of software isn’t leaking secret information. We know from Ken Thompson’s famous ACM Turing Award lecture on “trusting trust” that you can never be totally sure whether there’s a security flaw in your software.

Since BULLRUN became public last month, the security community has been examining security flaws discovered over the past several years, looking for signs of deliberate tampering. The Debian random number flaw was probably not deliberate, but the 2003 Linux security vulnerability probably was. The DUAL_EC_DRBG random number generator may or may not have been a backdoor. The SSL 2.0 flaw was probably an honest mistake. The GSM A5/1 encryption algorithm was almost certainly deliberately weakened. All the common RSA moduli out there in the wild: we don’t know. Microsoft’s _NSAKEY looks like a smoking gun, but honestly, we don’t know.

How the NSA Designs Backdoors

While a separate program that sends our data to some IP address somewhere is certainly how any hacker—from the lowliest script kiddie up to the NSA—spies on our computers, it’s too labor-intensive to work in the general case.

For government eavesdroppers like the NSA, subtlety is critical. In particular, three characteristics are important:

  • Low discoverability. The less the backdoor affects the normal operations of the program, the better. Ideally, it shouldn’t affect functionality at all. The smaller the backdoor is, the better. Ideally, it should just look like normal functional code. As a blatant example, an email encryption backdoor that appends a plaintext copy to the encrypted copy is much less desirable than a backdoor that reuses most of the key bits in a public IV (initialization vector); a toy sketch of that IV trick follows this list.
  • High deniability. If discovered, the backdoor should look like a mistake. It could be a single opcode change. Or maybe a “mistyped” constant. Or “accidentally” reusing a single-use key multiple times. This is the main reason I am skeptical about _NSAKEY as a deliberate backdoor, and why so many people don’t believe the DUAL_EC_DRBG backdoor is real: they’re both too obvious.
  • Minimal conspiracy. The more people who know about the backdoor, the more likely the secret is to get out. So any good backdoor should be known to very few people. That’s why the recently described potential vulnerability in Intel’s random number generator worries me so much; one person could make this change during mask generation, and no one else would know.
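
To make the IV example in the first bullet concrete, here is a toy sketch; the mask and field sizes are invented. The “random” IV actually carries key bytes XORed with a constant only the attacker knows, so every message quietly leaks half the key while still looking random to everyone else.

```c
#include <stdio.h>
#include <stdlib.h>

#define KEY_LEN 16
#define IV_LEN  16

/* A fixed mask known only to the attacker who planted the backdoor. */
static const unsigned char leak_mask[8] = {0x5a,0xc3,0x3c,0xa5,0x0f,0xf0,0x69,0x96};

/* An honest implementation would fill the IV with fresh random bytes.
   (rand() is a placeholder here, not a cryptographic RNG.) */
static void honest_iv(unsigned char *iv)
{
    for (int i = 0; i < IV_LEN; i++)
        iv[i] = (unsigned char)(rand() & 0xff);
}

/* The subverted version: the first 8 IV bytes are key bytes XORed with the
   mask; the remaining bytes are random, so the IV still "looks" random. */
static void subverted_iv(const unsigned char *key, unsigned char *iv)
{
    for (int i = 0; i < 8; i++)
        iv[i] = key[i] ^ leak_mask[i];            /* covert key-bit leak */
    for (int i = 8; i < IV_LEN; i++)
        iv[i] = (unsigned char)(rand() & 0xff);
}

int main(void)
{
    const unsigned char key[KEY_LEN] = {
        0x00,0x11,0x22,0x33,0x44,0x55,0x66,0x77,
        0x88,0x99,0xaa,0xbb,0xcc,0xdd,0xee,0xff
    };
    unsigned char iv[IV_LEN];

    subverted_iv(key, iv);

    /* The attacker, knowing the mask, recovers half the key from one IV. */
    printf("leaked key bytes: ");
    for (int i = 0; i < 8; i++)
        printf("%02x", iv[i] ^ leak_mask[i]);
    printf("\n");

    (void)honest_iv;   /* included only for contrast with the honest path */
    return 0;
}
```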

These characteristics imply several things:

  • A closed-source system is safer to subvert, because an open-source system comes with a greater risk of that subversion being discovered. On the other hand, a big open-source system with a lot of developers and sloppy version control is easier to subvert.
  • If a software system only has to interoperate with itself, then it is easier to subvert. For example, a closed VPN encryption system only has to interoperate with other instances of that same proprietary system. This is easier to subvert than an industry-wide VPN standard that has to interoperate with equipment from other vendors.
  • A commercial software system is easier to subvert, because the profit motive provides a strong incentive for the company to go along with the NSA’s requests.
  • Protocols developed by large open standards bodies are harder to influence, because a lot of eyes are paying attention. Systems designed by closed standards bodies are easier to influence, especially if the people involved in the standards don’t really understand security.
  • Systems that send seemingly random information in the clear are easier to subvert. One of the most effective ways of subverting a system is by leaking key information—recall the LEAF—and modifying random nonces or header information is the easiest way to do that.

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of them is foolproof, but they are all useful. I’m sure there are more; this list isn’t meant to be exhaustive, nor the final word on the topic. It’s simply a starting place for discussion. But it won’t work unless customers start demanding software with this sort of transparency.

  • Vendors should make their encryption code public, including the protocol specifications. This will allow others to examine the code for vulnerabilities. It’s true we won’t know for sure if the code we’re seeing is the code that’s actually used in the application, but surreptitious substitution is hard to do, forces the company to outright lie, and increases the number of people required for the conspiracy to work.
  • The community should create independent compatible versions of encryption systems, to verify they are operating properly. I envision companies paying for these independent versions, and universities accepting this sort of work as good practice for their students. And yes, I know this can be very hard in practice.
  • There should be no master secrets. These are just too vulnerable.
  • All random number generators should conform to published and accepted standards. Breaking the random number generator is the easiest difficult-to-detect method of subverting an encryption system. A corollary: we need better published and accepted RNG standards.
  • Encryption protocols should be designed so as not to leak any random information. Nonces should either be treated as part of the key or be public, predictable counters wherever possible; a minimal counter-nonce sketch follows this list. Again, the goal is to make it harder to subtly leak key bits in this information.
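
As a sketch of that last point (sizes and layout are illustrative, loosely modeled on the 96-bit nonces common in AEAD modes): if the protocol defines the nonce as a fixed public context byte plus a message counter, there are no leftover “random” bits in which to hide key material, and the receiver can recompute and verify every byte. The one rule is that the counter must never repeat under the same key.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define NONCE_LEN 12   /* e.g., a 96-bit nonce as used by many AEAD modes */

/* Build a fully predictable nonce: a fixed per-connection context byte
   followed by a big-endian 64-bit message counter. Nothing random, nothing
   covert: the receiver can recompute and check every byte. The counter must
   never be reused under the same key. */
static void make_counter_nonce(uint8_t context, uint64_t msg_counter,
                               unsigned char nonce[NONCE_LEN])
{
    memset(nonce, 0, NONCE_LEN);
    nonce[0] = context;                       /* public, fixed channel/direction ID */
    for (int i = 0; i < 8; i++)               /* counter in the last 8 bytes */
        nonce[NONCE_LEN - 1 - i] = (unsigned char)(msg_counter >> (8 * i));
}

int main(void)
{
    unsigned char nonce[NONCE_LEN];
    for (uint64_t msg = 0; msg < 3; msg++) {
        make_counter_nonce(0x01, msg, nonce);
        printf("message %llu nonce: ", (unsigned long long)msg);
        for (int i = 0; i < NONCE_LEN; i++)
            printf("%02x", nonce[i]);
        printf("\n");
    }
    return 0;
}
```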

This is a hard problem. We don’t have any technical controls that protect users from the authors of their software.

And the current state of software makes the problem even harder: Modern apps chatter endlessly on the Internet, providing noise and cover for covert communications. Feature bloat provides a greater “attack surface” for anyone wanting to install a backdoor.

In general, what we need is assurance: methodologies for ensuring that a piece of software does what it’s supposed to do and nothing more. Unfortunately, we’re terrible at this. Even worse, there’s not a lot of practical research in this area—and it’s hurting us badly right now.

Yes, we need legal prohibitions against the NSA trying to subvert authors and deliberately weaken cryptography. But this isn’t just about the NSA, and legal controls won’t protect against those who don’t follow the law and ignore international agreements. We need to make their job harder by increasing their risk of discovery. Against a risk-averse adversary, it might be good enough.

This essay previously appeared on Wired.com.

EDITED TO ADD: I am looking for other examples of known or plausible instances of intentional vulnerabilities for a paper I am writing on this topic. If you can think of an example, please post a description and reference in the comments below. Please explain why you think the vulnerability could be intentional. Thank you.

Posted on October 22, 2013 at 6:15 AM

Information-Age Law Enforcement Techniques

This is an interesting blog post:

Buried inside a recent United Nations Office on Drugs and Crime report titled Use of Internet for Terrorist Purposes one can carve out details and examples of law enforcement electronic surveillance techniques that are normally kept secret.

[…]

Point 280: International members of the guerrilla group Revolutionary Armed Forces of Colombia (FARC) communicated with their counterparts by hiding messages inside images with steganography and sending the e-mails disguised as spam, deleting the Internet browsing cache afterwards to make sure that the authorities would not get hold of the data. Spanish and Colombian authorities cooperated to break the encryption keys and successfully deciphered the messages.

[…]

Point 198: It explains how an investigator can circumvent TrueCrypt’s plausible-deniability feature (the hidden container), advising computer forensics investigators to check during their analysis whether any volume of data appears to be missing.

[…]

Point 210: Explains how Remote Administration Trojans (RATs) can be introduced into a suspect’s computer to collect data or control it, and makes reference to hardware and software keyloggers as well as packet sniffers.

There’s more at the above link. Here’s the final report.

Posted on December 19, 2012 at 6:47 AM

P2P Privacy

Interesting research:

The team of researchers, which includes graduate students David Choffnes (electrical engineering and computer science) and Dean Malmgren (chemical and biological engineering), and postdoctoral fellow Jordi Duch (chemical and biological engineering), studied connection patterns in the BitTorrent file-sharing network—one of the largest and most popular P2P systems today. They found that over the course of weeks, groups of users formed communities where each member consistently connected with other community members more than with users outside the community.

“This was particularly surprising because BitTorrent is designed to establish connections at random, so there is no a priori reason for such strong communities to exist,” Bustamante says. After identifying this community behavior, the researchers showed that an eavesdropper could classify users into specific communities using a relatively small number of observation points. Indeed, a savvy attacker can correctly extract communities more than 85 percent of the time by observing only 0.01 percent of the total users. Worse yet, this information could be used to launch a “guilt-by-association” attack, where an attacker need only determine the downloading behavior of one user in the community to convincingly argue that all users in the communities are doing the same.

Given the impact of this threat, the researchers developed a technique that prevents accurate classification by intelligently hiding user-intended downloading behavior in a cloud of random downloading. They showed that this approach causes an eavesdropper’s classification to be wrong the majority of the time, providing users with grounds to claim “plausible deniability” if accused.
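
The masking idea can be sketched in a few lines of C; the item names and decoy ratio below are invented, not taken from the paper. Each real request is interleaved with randomly chosen decoy requests, so an observer classifying users by what they download sees a mixture rather than a clean community signature.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical catalogue of items available on the network. */
static const char *catalogue[] = {
    "item-a", "item-b", "item-c", "item-d", "item-e",
    "item-f", "item-g", "item-h", "item-i", "item-j"
};
#define CATALOGUE_SIZE (sizeof(catalogue) / sizeof(catalogue[0]))

/* Request one item the user actually wants, plus `decoys_per_real` randomly
   chosen items, so the observed download set no longer reflects intent. */
static void request_with_cover(const char *wanted, int decoys_per_real)
{
    printf("requesting (real):  %s\n", wanted);
    for (int i = 0; i < decoys_per_real; i++) {
        const char *decoy = catalogue[rand() % CATALOGUE_SIZE];
        printf("requesting (decoy): %s\n", decoy);
    }
}

int main(void)
{
    srand((unsigned)time(NULL));
    request_with_cover("item-c", 3);   /* 3 random decoys per intended download */
    return 0;
}
```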

Posted on April 9, 2009 at 7:07 AM

TrueCrypt's Deniable File System

Together with Tadayoshi Kohno, Steve Gribble, and three of their students at the University of Washington, I have a new paper that breaks the deniable encryption feature of TrueCrypt version 5.1a. Basically, modern operating systems leak information like mad, making deniability a very difficult requirement to satisfy.

ABSTRACT: We examine the security requirements for creating a Deniable File System (DFS), and the efficacy with which the TrueCrypt disk-encryption software meets those requirements. We find that the Windows Vista operating system itself, Microsoft Word, and Google Desktop all compromise the deniability of a TrueCrypt DFS. While staged in the context of TrueCrypt, our research highlights several fundamental challenges to the creation and use of any DFS: even when the file system may be deniable in the pure, mathematical sense, we find that the environment surrounding that file system can undermine its deniability, as well as its contents. Finally, we suggest approaches for overcoming these challenges on modern operating systems like Windows.

The students did most of the actual work. I helped with the basic ideas, and contributed the threat model. Deniability is a very hard feature to achieve.

There are several threat models against which a DFS could potentially be secure:

  • One-Time Access. The attacker has a single snapshot of the disk image. An example would be when the secret police seize Alice’s computer.
  • Intermittent Access. The attacker has several snapshots of the disk image, taken at different times. An example would be border guards who make a copy of Alice’s hard drive every time she enters or leaves the country.
  • Regular Access. The attacker has many snapshots of the disk image, taken in short intervals. An example would be if the secret police break into Alice’s apartment every day when she is away, and make a copy of the disk each time.

Since we wrote our paper, TrueCrypt released version 6.0 of its software, which claims to have addressed many of the issues we’ve uncovered. In the paper, we said:

We analyzed the most current version of TrueCrypt available at the writing of the paper, version 5.1a. We shared a draft of our paper with the TrueCrypt development team in May 2008. TrueCrypt version 6.0 was released in July 2008. We have not analyzed version 6.0, but observe that TrueCrypt v6.0 does take new steps to improve TrueCrypt’s deniability properties (e.g., via the creation of deniable operating systems, which we also recommend in Section 5). We suggest that the breadth of our results for TrueCrypt v5.1a highlight the challenges to creating deniable file systems. Given these potential challenges, we encourage the users not to blindly trust the deniability of such systems. Rather, we encourage further research evaluating the deniability of such systems, as well as research on new yet light-weight methods for improving deniability.

So we can’t say we’ve broken the deniability feature in TrueCrypt 6.0. But, honestly, I wouldn’t trust it.

There have been two news articles (and a Slashdot thread) about the paper.

One talks about a generalization to encrypted partitions. If you don’t encrypt the entire drive, there is the possibility—and it seems very probable—that information about the encrypted partition will leak onto the unencrypted rest of the drive. Whole disk encryption is the smartest option.

Our paper will be presented at the 3rd USENIX Workshop on Hot Topics in Security (HotSec ’08). I’ve written about deniability before.

Posted on July 18, 2008 at 6:56 AM

Using a File Erasure Tool Considered Suspicious

By a California court:

The designer, Carter Bryant, has been accused by Mattel of using Evidence Eliminator on his laptop computer just two days before investigators were due to copy its hard drive.

Carter hasn’t denied that the program was run on his computer, but he said it wasn’t to destroy evidence. He said he had legitimate reasons to use the software.

[…]

But the wiper programs don’t ensure a clean getaway. They leave behind a kind of digital calling card.

“Not only do these programs leave a trace that they were used, they each have a distinctive fingerprint,” Kessler said. “Evidence Eliminator leaves one that’s different from Window Washer, and so on.”

It’s the kind of information that can be brought up in court. And if the digital calling card was left by Evidence Eliminator, it could raise some eyebrows, even if the wiper was used for the most innocent of reasons.

I have often recommended that people use file erasure tools regularly, especially when crossing international borders with their computers. Now we have one more reason to use them regularly: plausible deniability if you’re accused of erasing data to keep it from the police.
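
At its core, such a tool overwrites a file’s blocks before unlinking it. Here is a bare-bones, single-pass sketch in C; note that on journaling file systems, SSDs with wear leveling, and backed-up or copy-on-write storage, even this overwrite may not destroy every copy of the data.

```c
#include <stdio.h>
#include <stdlib.h>

/* Overwrite a file's contents with pseudorandom bytes, then delete it.
   One pass, no guarantees: wear leveling, journaling, and backups can all
   preserve copies of the original data elsewhere. */
static int erase_file(const char *path)
{
    FILE *f = fopen(path, "r+b");
    if (!f)
        return -1;

    if (fseek(f, 0, SEEK_END) != 0) { fclose(f); return -1; }
    long size = ftell(f);
    rewind(f);

    for (long i = 0; i < size; i++)
        fputc(rand() & 0xff, f);       /* illustrative; not a secure RNG */

    fflush(f);
    fclose(f);
    return remove(path);               /* unlink the overwritten file */
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-to-erase>\n", argv[0]);
        return 1;
    }
    return erase_file(argv[1]) == 0 ? 0 : 1;
}
```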

Posted on July 15, 2008 at 1:36 PM
