Entries Tagged "FBI"


Tor User Identified by FBI

Eldo Kim sent an e-mail bomb threat to Harvard so he could skip a final exam. (It’s just a coincidence that I was on the Harvard campus that day.) Even though he used an anonymous account and Tor, the FBI identified him. Reading the criminal complaint, it seems that the FBI got itself a list of Harvard users that accessed the Tor network, and went through them one by one to find the one who sent the threat.

This is one of the problems of using a rare security tool. The very thing that gives you plausible deniability also makes you the most likely suspect. The FBI didn’t have to break Tor; they just used conventional police mechanisms to get Kim to confess.

Tor didn’t break; Kim did.

Posted on December 18, 2013 at 9:59 AM

A Template for Reporting Government Surveillance News Stories

This is from 2006—I blogged it here—but it’s even more true today.

Under a top secret program initiated by the Bush Administration after the Sept. 11 attacks, the [name of agency (FBI, CIA, NSA, etc.)] has been gathering a vast database of [type of records] involving United States citizens.

“This program is a vital tool in the fight against terrorism,” [Bush Administration official] said. “Without it, we would be dangerously unsafe, and the terrorists would have probably killed you and every other American citizen.” The Bush Administration stated that the revelation of this program has severely compromised national security.

We’ve changed administrations—we’ve changed political parties—but nothing has changed.

Posted on November 1, 2013 at 2:26 PM

Defending Against Crypto Backdoors

We already know the NSA wants to eavesdrop on the Internet. It has secret agreements with telcos to get direct access to bulk Internet traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to sift through it all. And it can identify ciphertext—encrypted information—and figure out which programs could have created it.

But what the NSA wants is to be able to read that encrypted information in as close to real-time as possible. It wants backdoors, just like the cybercriminals and less benevolent governments do.

And we have to figure out how to make it harder for them, or anyone else, to insert those backdoors.

How the NSA Gets Its Backdoors

The FBI tried to get backdoor access embedded in an AT&T secure telephone system in the mid-1990s. The Clipper Chip included something called a LEAF: a Law Enforcement Access Field. It was the key used to encrypt the phone conversation, itself encrypted in a special key known to the FBI, and it was transmitted along with the phone conversation. An FBI eavesdropper could intercept the LEAF and decrypt it, then use the data to eavesdrop on the phone call.
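
Here is a minimal sketch of the key-escrow pattern the LEAF embodied, assuming the pyca/cryptography package. Fernet and these names are illustrative stand-ins, not Clipper’s actual Skipjack-based format:

```python
# A minimal sketch of LEAF-style key escrow, assuming pyca/cryptography.
# Illustrative only; not Clipper's real wire format.
from cryptography.fernet import Fernet

LE_ESCROW_KEY = Fernet.generate_key()      # held by law enforcement
escrow = Fernet(LE_ESCROW_KEY)

def place_call(plaintext: bytes) -> tuple[bytes, bytes]:
    # Fresh session key per call; the call is encrypted under it, and a
    # wrapped copy of that key (the "LEAF") travels with the ciphertext.
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(plaintext)
    leaf = escrow.encrypt(session_key)     # the Law Enforcement Access Field
    return ciphertext, leaf

# A wiretapper holding the escrow key unwraps the LEAF and reads the call:
ciphertext, leaf = place_call(b"this call is private... mostly")
session_key = escrow.decrypt(leaf)
print(Fernet(session_key).decrypt(ciphertext))
```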

But the Clipper Chip faced severe backlash, and became defunct a few years after being announced.

Having lost that public battle, the NSA decided to get its backdoors through subterfuge: by asking nicely, pressuring, threatening, bribing, or mandating through secret order. The general name for this program is BULLRUN.

Defending against these attacks is difficult. We know from subliminal channel and kleptography research that it’s pretty much impossible to guarantee that a complex piece of software isn’t leaking secret information. We know from Ken Thompson’s famous ACM Turing Award lecture, “Reflections on Trusting Trust,” that you can never be totally sure if there’s a security flaw in your software.

Since BULLRUN became public last month, the security community has been examining security flaws discovered over the past several years, looking for signs of deliberate tampering. The Debian random number flaw was probably not deliberate, but the 2003 Linux security vulnerability probably was. The DUAL_EC_DRBG random number generator may or may not have been a backdoor. The SSL 2.0 flaw was probably an honest mistake. The GSM A5/1 encryption algorithm was almost certainly deliberately weakened. All the common RSA moduli out there in the wild: we don’t know. Microsoft’s _NSAKEY looks like a smoking gun, but honestly, we don’t know.
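
To see why a subverted random number generator is such a powerful backdoor, here is a toy sketch of the DUAL_EC_DRBG structure: a tiny made-up curve, made-up constants, and no output truncation. It illustrates the published attack idea, not the actual NIST construction.

```python
# Toy model of the DUAL_EC_DRBG backdoor idea. Illustrative only.
p = 2**61 - 1                          # field prime; p % 4 == 3 simplifies sqrt
A, B = 0, 7                            # curve: y^2 = x^3 + 7 over GF(p)

def inv(x):
    return pow(x, p - 2, p)            # modular inverse via Fermat

def add(P1, P2):                       # affine point addition; None = infinity
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    m = ((3 * x1 * x1 + A) * inv(2 * y1) if P1 == P2
         else (y2 - y1) * inv(x2 - x1)) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, Pt):                        # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = add(R, Pt)
        Pt = add(Pt, Pt)
        k >>= 1
    return R

def lift_x(x):                         # recover a curve point from its x-coord
    rhs = (x * x * x + A * x + B) % p
    y = pow(rhs, (p + 1) // 4, p)      # square root, valid since p % 4 == 3
    return (x, y) if y * y % p == rhs else None

# The "standards body" publishes two constants P and Q. Honest users see
# two random-looking points; whoever chose them knows e with P = e*Q.
x0 = 1
while lift_x(x0) is None:
    x0 += 1
Q = lift_x(x0)
e = 0xDEADBEEF                         # the backdoor secret
P = mul(e, Q)

def dual_ec_round(state):              # simplified round: no truncation
    r = mul(state, P)[0]
    return mul(r, P)[0], mul(r, Q)[0]  # (next state, published output)

state, out1 = dual_ec_round(123456789)

# Attacker: lift the public output back onto the curve, multiply by e:
# e * (r*Q) = r * (e*Q) = r*P, whose x-coordinate IS the generator's state.
recovered = mul(e, lift_x(out1))[0]
assert recovered == state              # state recovered from one output
_, out2 = dual_ec_round(state)
_, predicted = dual_ec_round(recovered)
assert predicted == out2               # all future output is now predictable
```

The attacker’s advantage lives entirely in the choice of the published constants; nothing in the code itself looks wrong.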

How the NSA Designs Backdoors

While a separate program that sends our data to some IP address somewhere is certainly how any hacker—from the lowliest script kiddie up to the NSA—spies on our computers, it’s too labor-intensive to work in the general case.

For government eavesdroppers like the NSA, subtlety is critical. In particular, three characteristics are important:

  • Low discoverability. The less the backdoor affects the normal operations of the program, the better. Ideally, it shouldn’t affect functionality at all. The smaller the backdoor is, the better. Ideally, it should just look like normal functional code. As a blatant example, an email encryption backdoor that appends a plaintext copy to the encrypted copy is much less desirable than a backdoor that reuses most of the key bits in a public IV (initialization vector); see the sketch after this list.
  • High deniability. If discovered, the backdoor should look like a mistake. It could be a single opcode change. Or maybe a “mistyped” constant. Or “accidentally” reusing a single-use key multiple times. This is the main reason I am skeptical about _NSAKEY as a deliberate backdoor, and why so many people don’t believe the DUAL_EC_DRBG backdoor is real: they’re both too obvious.
  • Minimal conspiracy. The more people who know about the backdoor, the more likely the secret is to get out. So any good backdoor should be known to very few people. That’s why the recently described potential vulnerability in Intel’s random number generator worries me so much; one person could make this change during mask generation, and no one else would know.
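
To show how small and deniable such a leak can be, here is a minimal sketch of the IV trick from the first bullet; the mask constant and field sizes are hypothetical:

```python
import os

MASK = bytes.fromhex("00112233445566778899aabb")   # hypothetical shared secret

def honest_iv() -> bytes:
    return os.urandom(16)              # what the protocol specifies

def backdoored_iv(key: bytes) -> bytes:
    # Looks random to anyone without MASK, though the prefix repeats under
    # a fixed key: exactly the subtle tell a careful auditor might catch.
    leaked = bytes(k ^ m for k, m in zip(key[:12], MASK))
    return leaked + os.urandom(4)

# A passive eavesdropper who knows MASK recovers 12 key bytes per message:
key = os.urandom(16)
iv = backdoored_iv(key)
recovered = bytes(c ^ m for c, m in zip(iv[:12], MASK))
assert recovered == key[:12]           # only 4 bytes left to brute-force
```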

These characteristics imply several things:

  • A closed-source system is safer to subvert, because an open-source system comes with a greater risk of that subversion being discovered. On the other hand, a big open-source system with a lot of developers and sloppy version control is easier to subvert.
  • If a software system only has to interoperate with itself, then it is easier to subvert. For example, a closed VPN encryption system only has to interoperate with other instances of that same proprietary system. This is easier to subvert than an industry-wide VPN standard that has to interoperate with equipment from other vendors.
  • A commercial software system is easier to subvert, because the profit motive provides a strong incentive for the company to go along with the NSA’s requests.
  • Protocols developed by large open standards bodies are harder to influence, because a lot of eyes are paying attention. Systems designed by closed standards bodies are easier to influence, especially if the people involved in the standards don’t really understand security.
  • Systems that send seemingly random information in the clear are easier to subvert. One of the most effective ways of subverting a system is by leaking key information—recall the LEAF—and modifying random nonces or header information is the easiest way to do that.

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of them is foolproof, but they are all useful. I’m sure there are more; this list isn’t meant to be exhaustive, nor is it the final word on the topic. It’s simply a starting place for discussion. But none of this will work unless customers start demanding software with this sort of transparency.

  • Vendors should make their encryption code public, including the protocol specifications. This will allow others to examine the code for vulnerabilities. It’s true we won’t know for sure if the code we’re seeing is the code that’s actually used in the application, but surreptitious substitution is hard to do, forces the company to outright lie, and increases the number of people required for the conspiracy to work.
  • The community should create independent compatible versions of encryption systems, to verify they are operating properly. I envision companies paying for these independent versions, and universities accepting this sort of work as good practice for their students. And yes, I know this can be very hard in practice.
  • There should be no master secrets. These are just too vulnerable.
  • All random number generators should conform to published and accepted standards. Breaking the random number generator is the easiest difficult-to-detect method of subverting an encryption system. A corollary: we need better published and accepted RNG standards.
  • Encryption protocols should be designed so as not to leak any random information. Nonces should be considered part of the key or, where possible, implemented as public, predictable counters. Again, the goal is to make it harder to subtly leak key bits in this information (see the sketch after this list).
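
Here is a minimal sketch of the counter-nonce strategy, assuming the pyca/cryptography package. With a counter nonce, the wire carries no “random” field in which key bits could be hidden, and any implementation that deviates from the counter is externally visible:

```python
# Minimal counter-nonce sketch, assuming pyca/cryptography.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class CounterNonceChannel:
    def __init__(self, key: bytes):
        self._aead = AESGCM(key)
        self._counter = 0        # must never repeat for this key:
                                 # persist it, or rekey on restart

    def seal(self, plaintext: bytes) -> tuple[bytes, bytes]:
        nonce = self._counter.to_bytes(12, "big")   # fully predictable
        self._counter += 1
        return nonce, self._aead.encrypt(nonce, plaintext, None)

key = AESGCM.generate_key(bit_length=128)
channel = CounterNonceChannel(key)
nonce, ct = channel.seal(b"attack at dawn")
assert nonce == (0).to_bytes(12, "big")   # an auditor can check this on the wire
```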

This is a hard problem. We don’t have any technical controls that protect users from the authors of their software.

And the current state of software makes the problem even harder: Modern apps chatter endlessly on the Internet, providing noise and cover for covert communications. Feature bloat provides a greater “attack surface” for anyone wanting to install a backdoor.

In general, what we need is assurance: methodologies for ensuring that a piece of software does what it’s supposed to do and nothing more. Unfortunately, we’re terrible at this. Even worse, there’s not a lot of practical research in this area—and it’s hurting us badly right now.

Yes, we need legal prohibitions against the NSA trying to subvert authors and deliberately weaken cryptography. But this isn’t just about the NSA, and legal controls won’t protect against those who don’t follow the law and ignore international agreements. We need to make their job harder by increasing their risk of discovery. Against a risk-averse adversary, it might be good enough.

This essay previously appeared on Wired.com.

EDITED TO ADD: I am looking for other examples of known or plausible instances of intentional vulnerabilities for a paper I am writing on this topic. If you can think of an example, please post a description and reference in the comments below. Please explain why you think the vulnerability could be intentional. Thank you.

Posted on October 22, 2013 at 6:15 AM

Silk Road Author Arrested Due to Bad Operational Security

Details of how the FBI found the administrator of Silk Road, a popular black market e-commerce site.

Despite the elaborate technical underpinnings, however, the complaint portrays Ulbricht as a drug lord who made rookie mistakes. In an October 11, 2011 posting to a Bitcoin Talk forum, for instance, a user called “altoid” advertised he was looking for an “IT pro in the Bitcoin community” to work in a venture-backed startup. The post directed applicants to send responses to “rossulbricht at gmail dot com.” It came about nine months after two previous posts—also made by a user, “altoid,” to shroomery.org and Bitcoin Talk—were among the first to advertise a hidden Tor service that operated as a kind of “anonymous amazon.com.” Both of the earlier posts referenced silkroad420.wordpress.com.

If altoid’s solicitation for a Bitcoin-conversant IT Pro wasn’t enough to make Ulbricht a person of interest in the FBI’s ongoing probe, other digital bread crumbs were sure to arouse agents’ suspicions. The Google+ profile tied to the rossulbricht@gmail.com address included a list of favorite videos originating from mises.org, a website of the “Mises Institute.” The site billed itself as the “world center of the Austrian School of economics” and contained a user profile for one Ross Ulbricht. Several Dread Pirate Roberts postings on Silk Road cited the “Austrian Economic theory” and the works of Mises Institute economists Ludwig von Mises and Murray Rothbard in providing the guiding principles for the illicit drug market.

The clues didn’t stop there. In early March 2012 someone created an account on StackOverflow with the username Ross Ulbricht and the rossulbricht@gmail.com address, the criminal complaint alleged. On March 16 at 8:39 in the morning, the account was used to post a message titled “How can I connect to a Tor hidden service using curl in php?” Less than one minute later, the account was updated to change the user name from Ross Ulbricht to “frosty.” Several weeks later, the account was again updated, this time to replace the Ulbricht gmail address with frosty@frosty.com. In July 2013, a forensic analysis of the hard drives used to run one of the Silk Road servers revealed a PHP script based on curl that contained code that was identical to that included in the Stack Overflow discussion, the complaint alleged.
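
For background on the technique the Stack Overflow question describes: Tor exposes a local SOCKS proxy, and any HTTP client pointed at it can reach hidden services. A rough Python analogue of that PHP/curl approach, assuming Tor’s default SOCKS port 9050, the requests and PySocks packages, and a placeholder .onion address:

```python
import requests

# Tor runs a local SOCKS5 proxy (default port 9050). The "socks5h" scheme
# makes the proxy resolve hostnames, which is required for .onion addresses.
TOR_PROXY = "socks5h://127.0.0.1:9050"

resp = requests.get(
    "http://exampleonionaddress.onion/",        # placeholder hidden service
    proxies={"http": TOR_PROXY, "https": TOR_PROXY},
    timeout=60,
)
print(resp.status_code)
```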

We already know that it is next to impossible to maintain privacy and anonymity against a well-funded government adversary.

EDITED TO ADD (10/8): Another article.

Posted on October 7, 2013 at 1:35 PM

More on the NSA Commandeering the Internet

If there’s any confirmation that the U.S. government has commandeered the Internet for worldwide surveillance, it is what happened with Lavabit earlier this month.

Lavabit is—well, was—an e-mail service that offered more privacy than the typical large-Internet-corporation services that most of us use. It was a small company, owned and operated by Ladar Levison, and it was popular among the tech-savvy. NSA whistleblower Edward Snowden was among its half-million users.

Last month, Levison reportedly received an order—probably a National Security Letter—to allow the NSA to eavesdrop on everyone’s e-mail accounts on Lavabit. Rather than “become complicit in crimes against the American people,” he turned the service off. Note that we don’t know for sure that he received an NSL—that’s the order authorized by the Patriot Act that doesn’t require a judge’s signature and prohibits the recipient from talking about it—or what it covered; Levison has said that he had complied with requests for individual e-mail access in the past, but that this was very different.

So far, we just have an extreme moral act in the face of government pressure. It’s what happened next that is the most chilling. The government threatened him with arrest, arguing that shutting down this e-mail service was a violation of the order.

There it is. If you run a business, and the FBI or NSA wants to turn it into a mass surveillance tool, they believe they can do so, solely on their own initiative. They can force you to modify your system. They can do it all in secret and then force your business to keep that secret. Once they do that, you no longer control that part of your business. You can’t shut it down. You can’t terminate part of your service. In a very real sense, it is not your business anymore. It is an arm of the vast U.S. surveillance apparatus, and if your interests conflict with theirs, they win. Your business has been commandeered.

For most Internet companies, this isn’t a problem. They are already engaging in massive surveillance of their customers and users—collecting and using this data is the primary business model of the Internet—so it’s easy to comply with government demands and give the NSA complete access to everything. This is what we learned from Edward Snowden. Through programs like PRISM, BLARNEY and OAKSTAR, the NSA obtained bulk access to services like Gmail and Facebook, and to Internet backbone connections throughout the US and the rest of the world. But if it were a problem for those companies, presumably the government would not allow them to shut down.

To be fair, we don’t know if the government can actually convict someone of closing a business. It might just be part of their coercion tactics. Intimidation and retaliation are part of how the NSA does business.

Former Qwest CEO Joseph Nacchio has a story of what happens to a large company that refuses to cooperate. In February 2001—before the 9/11 terrorist attacks—the NSA approached the four major US telecoms and asked for their cooperation in a secret data collection program, the one we now know to be the bulk metadata collection program exposed by Edward Snowden. Qwest was the only telecom to refuse, leaving the NSA with a hole in its spying efforts. The NSA retaliated by canceling a series of big government contracts with Qwest. The company has since been purchased by CenturyLink, which we presume is more cooperative with NSA demands.

That was before the Patriot Act and National Security Letters. Now, presumably, Nacchio would just comply. Protection rackets are easier when you have the law backing you up.

As the Snowden whistleblowing documents continue to be made public, we’re getting further glimpses into the surveillance state that has been secretly growing around us. The collusion of corporate and government surveillance interests is a big part of this, but so is the government’s resorting to intimidation. Every Lavabit-like service that shuts down—and there have been several—gives us consumers less choice, and pushes us into the large services that cooperate with the NSA. It’s past time we demanded that Congress repeal National Security Letters, give us privacy rights in this new information age, and force meaningful oversight on this rogue agency.

This essay previously appeared in USA Today.

EDITED TO ADD: This essay has been translated into Danish.

Posted on August 30, 2013 at 6:12 AM

Details on NSA/FBI Eavesdropping

We’re starting to see Internet companies talk about the mechanics of how the US government spies on their users. Here, a Utah ISP owner describes his experiences with NSA eavesdropping:

We had to facilitate them to set up a duplicate port to tap in to monitor that customer’s traffic. It was a 2U (two-unit) PC that we ran a mirrored ethernet port to.

[What we ended up with was] a little box in our systems room that was capturing all the traffic to this customer. Everything they were sending and receiving.
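
What the ISP describes is standard port mirroring: the switch copies every frame to and from the customer onto a spare port, and a passive box records it. A rough sketch of what such a capture box might run, assuming the scapy package; the interface name and customer IP are hypothetical:

```python
# Passive capture on a mirrored switch port, assuming scapy.
# Interface name and customer IP are hypothetical placeholders.
from scapy.all import sniff, wrpcap

CUSTOMER_IP = "203.0.113.7"        # hypothetical target
MIRROR_IFACE = "eth1"              # NIC cabled to the switch's mirror port

packets = sniff(iface=MIRROR_IFACE,
                filter=f"host {CUSTOMER_IP}",   # BPF filter, kernel-side
                count=1000)                     # stop after 1000 packets
wrpcap("customer_capture.pcap", packets)        # everything sent and received
```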

Declan McCullagh explains how the NSA coerces companies to cooperate with its surveillance efforts. Basically, the companies want to avoid what happened with the Utah ISP.

Some Internet companies have reluctantly agreed to work with the government to conduct legally authorized surveillance on the theory that negotiations are less objectionable than the alternative—federal agents showing up unannounced with a court order to install their own surveillance device on a sensitive internal network. Those devices, the companies fear, could disrupt operations, introduce security vulnerabilities, or intercept more than is legally permitted.

“Nobody wants it on-premises,” said a representative of a large Internet company who has negotiated surveillance requests with government officials. “Nobody wants a box in their network…[Companies often] find ways to give tools to minimize disclosures, to protect users, to keep the government off the premises, and to come to some reasonable compromise on the capabilities.”

Precedents were established a decade or so ago when the government obtained legal orders compelling companies to install custom eavesdropping hardware on their networks.

And Brewster Kahle of the Internet Archive explains how he successfully fought a National Security Letter.

Posted on July 25, 2013 at 12:27 PM

