Entries Tagged "TLS"

The Logjam (and Another) Vulnerability against Diffie-Hellman Key Exchange

Logjam is a new attack against the Diffie-Hellman key-exchange protocol used in TLS. Basically:

The Logjam attack allows a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography. This allows the attacker to read and modify any data passed over the connection. The attack is reminiscent of the FREAK attack, but is due to a flaw in the TLS protocol rather than an implementation vulnerability, and attacks a Diffie-Hellman key exchange rather than an RSA key exchange. The attack affects any server that supports DHE_EXPORT ciphers, and affects all modern web browsers. 8.4% of the Top 1 Million domains were initially vulnerable.

Here’s the academic paper.

One of the problems with patching the vulnerability is that it breaks things:

On the plus side, the vulnerability has largely been patched thanks to consultation with tech companies like Google, and updates are available now or coming soon for Chrome, Firefox and other browsers. The bad news is that the fix rendered many sites unreachable, including the main website at the University of Michigan, which is home to many of the researchers that found the security hole.

This is a common problem with version downgrade attacks; patching them makes you incompatible with anyone who hasn’t patched. And it’s the vulnerability the media is focusing on.

Much more interesting is the other vulnerability that the researchers found:

Millions of HTTPS, SSH, and VPN servers all use the same prime numbers for Diffie-Hellman key exchange. Practitioners believed this was safe as long as new key exchange messages were generated for every connection. However, the first step in the number field sieve—the most efficient algorithm for breaking a Diffie-Hellman connection—is dependent only on this prime. After this first step, an attacker can quickly break individual connections.
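To see why the shared prime is the weak point, here’s a toy sketch of finite-field Diffie-Hellman in Python. The tiny prime is a stand-in for the fixed 512- and 1024-bit primes real servers shared; the structure is what matters:

```python
# Toy sketch of finite-field Diffie-Hellman. The group parameters p and g
# are public and shared by everyone; only the exponents a and b are fresh
# per connection. The number field sieve's expensive first step depends
# only on p, so one precomputation against a popular prime pays off
# against every connection that ever uses it.
import secrets

p = 2**127 - 1   # toy prime; real servers shared a handful of 512/1024-bit primes
g = 3            # generator, also fixed and public

a = secrets.randbelow(p - 2) + 1   # client's fresh per-connection secret
b = secrets.randbelow(p - 2) + 1   # server's fresh per-connection secret

A = pow(g, a, p)   # sent in the clear
B = pow(g, b, p)   # sent in the clear

assert pow(B, a, p) == pow(A, b, p)   # both sides derive the same shared secret
```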

The researchers believe the NSA has been using this attack:

We carried out this computation against the most common 512-bit prime used for TLS and demonstrate that the Logjam attack can be used to downgrade connections to 80% of TLS servers supporting DHE_EXPORT. We further estimate that an academic team can break a 768-bit prime and that a nation-state can break a 1024-bit prime. Breaking the single, most common 1024-bit prime used by web servers would allow passive eavesdropping on connections to 18% of the Top 1 Million HTTPS domains. A second prime would allow passive decryption of connections to 66% of VPN servers and 26% of SSH servers. A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.
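Those estimates track how the number field sieve scales. Here’s a back-of-the-envelope calculation in Python using the standard heuristic running time L_p[1/3, 1.923] with the o(1) term ignored, so only the ratios mean anything:

```python
# Rough GNFS scaling: exp(1.923 * (ln p)^(1/3) * (ln ln p)^(2/3)),
# the standard heuristic with the o(1) term dropped. The absolute
# numbers are meaningless; the ratios show why 768 bits is within
# academic reach and 1024 bits takes a nation-state.
import math

def gnfs_effort(bits):
    ln_p = bits * math.log(2)
    return math.exp(1.923 * ln_p ** (1 / 3) * math.log(ln_p) ** (2 / 3))

base = gnfs_effort(512)
for bits in (512, 768, 1024):
    print(f"{bits}-bit prime: ~{gnfs_effort(bits) / base:.1e}x the 512-bit effort")
```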

Remember James Bamford’s 2012 comment about the NSA’s cryptanalytic capabilities:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”

[…]

The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. “Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it,” he says. The reason? “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”

And remember Director of National Intelligence James Clapper’s introduction to the 2013 “Black Budget”:

Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.

It’s a reasonable guess that this is what both Bamford’s source and Clapper are talking about. It’s an attack that requires a lot of precomputation—just the sort of thing a national intelligence agency would go for.

But that requirement also speaks to its limitations. The NSA isn’t going to put this capability at collection points like Room 641A at AT&T’s San Francisco office: the precomputation table is too big, and the sensitivity of the capability is too high. More likely, an analyst identifies a target through some other means, and then looks for data by that target in databases like XKEYSCORE. Then he sends whatever ciphertext he finds to the Cryptanalysis and Exploitation Services (CES) group, which decrypts it if it can using this and other techniques.

Ross Anderson wrote about this earlier this month, almost certainly quoting Snowden:

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t.

The analysts are instructed not to think about how this all works. This quote also applied to NSA employees:

Strict guidelines were laid down at the GCHQ complex in Cheltenham, Gloucestershire, on how to discuss projects relating to decryption. Analysts were instructed: “Do not ask about or speculate on sources or methods underpinning Bullrun.”

I remember the same instructions in documents I saw about the NSA’s CES.

Again, the NSA has put surveillance ahead of security. It never bothered to tell us that many of the “secure” encryption systems we were using were not secure. And we don’t know what other national intelligence agencies independently discovered and used this attack.

The good news is that, now that we know reusing prime numbers is a bad idea, we can stop doing it.
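Generating your own parameters is cheap enough that there’s no excuse not to. A sketch using the pyca/cryptography package (assuming a reasonably recent version; it’s the library equivalent of OpenSSL’s dhparam):

```python
# Generate fresh, private DH parameters instead of using a published prime.
# Slow by design, but it's one-time work per server, not per connection.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import dh

params = dh.generate_parameters(generator=2, key_size=2048)
pem = params.parameter_bytes(
    serialization.Encoding.PEM,
    serialization.ParameterFormat.PKCS3,
)
print(pem.decode())   # drop this into the server's DH parameter file
```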

EDITED TO ADD: The DH precomputation easily lends itself to custom ASIC design, and is something that pipelines easily. Using Bitcoin mining hardware as a rough comparison, this means a couple of orders of magnitude speedup.

EDITED TO ADD (5/23): Good analysis of the cryptography.

EDITED TO ADD (5/24): Good explanation by Matthew Green.

Posted on May 21, 2015 at 6:30 AM

A New Free CA

Announcing Let’s Encrypt, a new free certificate authority. This is a joint project of EFF, Mozilla, Cisco, Akamai, and the University of Michigan.

This is an absolutely fantastic idea.

The anchor for any TLS-protected communication is a public-key certificate which demonstrates that the server you’re actually talking to is the server you intended to talk to. For many server operators, getting even a basic server certificate is just too much of a hassle. The application process can be confusing. It usually costs money. It’s tricky to install correctly. It’s a pain to update.

Let’s Encrypt is a new free certificate authority, built on a foundation of cooperation and openness, that lets everyone be up and running with basic server certificates for their domains through a simple one-click process.

[…]

The key principles behind Let’s Encrypt are:

  • Free: Anyone who owns a domain can get a certificate validated for that domain at zero cost.
  • Automatic: The entire enrollment process for certificates occurs painlessly during the server’s native installation or configuration process, while renewal occurs automatically in the background.
  • Secure: Let’s Encrypt will serve as a platform for implementing modern security techniques and best practices.
  • Transparent: All records of certificate issuance and revocation will be available to anyone who wishes to inspect them.
  • Open: The automated issuance and renewal protocol will be an open standard and as much of the software as possible will be open source.
  • Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the entire community, beyond the control of any one organization.
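The “hassle” being automated here is concrete. Just producing the certificate signing request, before any money changes hands or any domain validation happens, looks something like this sketch using the pyca/cryptography package (“example.com” is a placeholder):

```python
# The manual first steps Let's Encrypt automates away: generate a key,
# then build and serialize a certificate signing request for the CA.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    .sign(key, hashes.SHA256())
)
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```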

Slashdot thread. Hacker News thread.

EDITED TO ADD (11/19): Good post. And EFF’s blog post.

Posted on November 18, 2014 at 12:38 PM

Forged SSL Certificates Pervasive on the Internet

About 0.2% of all SSL certificates are forged. This is the first time I’ve ever seen a number based on real data. News article:

Of 3.45 million real-world connections made to Facebook servers using the transport layer security (TLS) or secure sockets layer protocols, 6,845, or about 0.2 percent of them, were established using forged certificates.

Actual paper.

EDITED TO ADD (6/13): I’m mis-characterizing the study. The study really says that 0.2% of HTTPS traffic to Facebook is intercepted and re-signed, and the vast majority of that interception and re-signing happens either on the user’s local computer (by way of trusted security software acting as a scanning proxy) or locally on a private network behind a corporation’s intercepting proxy/firewall. Only a small percentage of intercepted traffic is a result of malware or other nefarious activity.
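This kind of interception is visible to anyone who knows what certificate to expect, because the re-signed certificate has a different fingerprint. A stdlib-only Python sketch of the idea; the pinned value is a placeholder you’d replace with a known-good fingerprint:

```python
# Compare the certificate a server actually presents against a pinned,
# known-good fingerprint. A mismatch means something re-signed the
# connection: a corporate proxy, security software, or an attacker.
import hashlib
import ssl

host = "www.facebook.com"
pem = ssl.get_server_certificate((host, 443))
der = ssl.PEM_cert_to_DER_cert(pem)
fingerprint = hashlib.sha256(der).hexdigest()

PINNED = "replace-with-known-good-sha256-fingerprint"  # placeholder
print("presented fingerprint:", fingerprint)
print("interception suspected:", fingerprint != PINNED)
```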

Posted on May 16, 2014 at 6:43 AM

New RC4 Attack

This is a really clever attack on the RC4 encryption algorithm as used in TLS.

We have found a new attack against TLS that allows an attacker to recover a limited amount of plaintext from a TLS connection when RC4 encryption is used. The attacks arise from statistical flaws in the keystream generated by the RC4 algorithm which become apparent in TLS ciphertexts when the same plaintext is repeatedly encrypted at a fixed location across many TLS sessions.

The attack is very specialized:

The attack is a multi-session attack, which means that we require a target plaintext to be repeatedly sent in the same position in the plaintext stream in multiple TLS sessions. The attack currently only targets the first 256 bytes of the plaintext stream in sessions. Since the first 36 bytes of plaintext are formed from an unpredictable Finished message when SHA-1 is the selected hashing algorithm in the TLS Record Protocol, these first 36 bytes cannot be recovered. This means that the attack can recover 220 bytes of TLS-encrypted plaintext.

The number of sessions needed to reliably recover these plaintext bytes is around 2^30, but already with only 2^24 sessions, certain bytes can be recovered reliably.
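The “statistical flaws” are single-byte biases in the keystream. The best known of them, due to Mantin and Shamir, is that RC4’s second output byte is zero about twice as often as it should be, and a bias toward any fixed value leaks the plaintext byte at that position across many sessions. A short Python sketch that measures it empirically:

```python
# Measure the Mantin-Shamir bias: RC4's second keystream byte is 0 with
# probability about 2/256, double the 1/256 a uniform stream would give.
import os

def rc4_keystream(key, n):
    S = list(range(256))              # key scheduling (KSA)
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0                         # keystream generation (PRGA)
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

trials = 50_000
zeros = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
print(f"P(Z_2 = 0) ~ {zeros / trials:.4f}  (uniform would be {1 / 256:.4f})")
```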

Is this a big deal? Yes and no. The attack requires the identical plaintext to be repeatedly encrypted. Normally, this would make for an impractical attack in the real world, but HTTP messages often have stylized headers that are identical across a conversation—for example, cookies. On the other hand, those are the only bits that can be decrypted. Currently, this attack is pretty raw and unoptimized—so it’s likely to become faster and better.

There’s no reason to panic here. But let’s start to move away from RC4 to something like AES.

There are lots of press articles on the attack.

Posted on March 29, 2013 at 6:59 AM

Man-in-the-Middle Attack Against SSL 3.0/TLS 1.0

It’s the Browser Exploit Against SSL/TLS Tool, or BEAST:

The tool is based on a blockwise-adaptive chosen-plaintext attack, a man-in-the-middle approach that injects segments of plain text sent by the target’s browser into the encrypted request stream to determine the shared key. The code can be injected into the user’s browser through JavaScript associated with a malicious advertisement distributed through a Web ad service or an IFRAME in a linkjacked site, ad, or other scripted elements on a webpage.

Using the known text blocks, BEAST can then use information collected to decrypt the target’s AES-encrypted requests, including encrypted cookies, and then hijack the no-longer secure connection. That decryption happens slowly, however; BEAST currently needs sessions of at least a half-hour to break cookies using keys over 1,000 characters long.

The attack, according to Duong, is capable of intercepting sessions with PayPal and other services that still use TLS 1.0, which would be most secure sites, since follow-on versions of TLS aren’t yet supported in most browsers or Web server implementations.

While Rizzo and Duong believe BEAST is the first attack against SSL 3.0 that decrypts HTTPS requests, the vulnerability that BEAST exploits is well-known; BT chief security technology officer Bruce Schneier and UC Berkeley’s David Wagner pointed out in a 1999 analysis of SSL 3.0 that “SSL will provide a lot of known plain-text to the eavesdropper, but there seems to be no better alternative.” And TLS’s vulnerability to man-in-the-middle attacks was made public in 2009. The IETF’s TLS Working Group published a fix for the problem, but the fix is unsupported by SSL.
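The underlying weakness is that TLS 1.0 uses the last ciphertext block of one record as the IV for the next record, so the attacker knows the “IV” before choosing what gets encrypted next. A toy Python sketch with pycryptodome (all names illustrative) of how a chosen block can then confirm a guess at an earlier secret block:

```python
# Toy model of the predictable-IV flaw behind BEAST. CBC computes
# C = E(P xor IV); if the attacker learns the next IV in advance, the
# block guess xor old_iv xor new_iv re-creates the old cipher input,
# and a matching ciphertext confirms the guess.
import os
from Crypto.Cipher import AES

key = os.urandom(16)   # the victim's session key, unknown to the attacker

def encrypt_record(iv, block16):
    return AES.new(key, AES.MODE_CBC, iv).encrypt(block16)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

iv1 = os.urandom(16)
c1 = encrypt_record(iv1, b"secret-cookie=42")   # attacker observes iv1 and c1

iv2 = c1[-16:]   # TLS 1.0: next record's IV = last ciphertext block, known in advance

guess = b"secret-cookie=42"                     # attacker's guess at the secret
c2 = encrypt_record(iv2, xor(xor(guess, iv1), iv2))
print("guess confirmed:", c2 == c1)
```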

Another article.

EDITED TO ADD: Good analysis.

Posted on September 23, 2011 at 1:37 PM

Firesheep

Firesheep is a new Firefox plugin that makes it easy for you to hijack other people’s social network connections. Basically, Facebook authenticates clients with cookies. If someone is using a public WiFi connection, the cookies are sniffable. Firesheep uses WinPcap to capture and display the authentication information for accounts it sees, allowing you to hijack the connection.
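Once a session cookie has crossed the network in the clear, hijacking is nothing more than replaying it. A sketch with the Python requests package, with the cookie and URL as placeholders for whatever was sniffed:

```python
# Replay a sniffed session cookie. The server has no way to tell this
# request from one made by the victim's own browser.
import requests

sniffed_cookie = "c_user=1234567890; xs=abcdef"   # placeholder captured value
resp = requests.get(
    "http://www.facebook.com/home.php",           # plaintext HTTP, as in 2010
    headers={"Cookie": sniffed_cookie},
)
print(resp.status_code)
```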

Slides from the Toorcon talk.

Protect yourself by forcing the authentication to happen over TLS. Or stop logging in to Facebook from public networks.

EDITED TO ADD (10/27): To protect against this attack, you have to encrypt the entire session—not just the initial authentication.
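On the server side, that means marking the session cookie so browsers refuse to send it over plaintext HTTP at all. A stdlib Python sketch of the relevant Set-Cookie attributes:

```python
# Emit a session cookie that browsers will only send over HTTPS
# (Secure) and that page scripts can't read (HttpOnly).
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-token"    # placeholder value
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True
print(cookie.output())   # Set-Cookie: session=opaque-token; HttpOnly; Secure
```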

EDITED TO ADD (11/4): Foiling Firesheep.

EDITED TO ADD (11/10): More info.

EDITED TO ADD (11/17): Blacksheep detects Firesheep.

Posted on October 27, 2010 at 7:53 AM

NIST Hash Workshop Liveblogging (1)

I’m in Gaithersburg, MD, at the Cryptographic Hash Workshop hosted by NIST. I’m impressed by the turnout; a lot of the right people are here.

Xiaoyun Wang, the cryptographer who broke SHA-1, spoke about her latest results. They are the same results Adi Shamir presented in her name at Crypto this year: a time complexity of 2^63.

(I first wrote about Wang’s results here, and discussed their implications here. I wrote about results from Crypto here. Here are her two papers from Crypto: “Efficient Collision Search Attacks on SHA-0” and “Finding Collisions in the Full SHA-1.”)

Steve Bellovin is now talking about the problems associated with upgrading hash functions. He and his coauthor Eric Rescorla looked at S/MIME, TLS, IPSec (and IKE), and DNSSEC. Basically, these protocols can’t change algorithms overnight; it has to happen gradually, over the course of years. So the protocols need some secure way to “switch hit”: to use both the new and old hash functions during the transition period. This requires some sort of signaling, which the protocols don’t do very well. (Bellovin’s and Rescorla’s paper is here.)
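The signaling problem is easy to state in code: every digest has to carry an explicit algorithm label, so old and new hash functions can coexist during the transition and a verifier knows which one it’s checking. An illustrative Python sketch; the tag format is made up:

```python
# Algorithm-tagged digests: a toy version of the signaling that protocols
# need in order to run old and new hash functions side by side.
import hashlib

def tagged_digest(algorithm, data):
    return f"{algorithm}:{hashlib.new(algorithm, data).hexdigest()}"

def verify(tag, data):
    algorithm, _, digest = tag.partition(":")
    return hashlib.new(algorithm, data).hexdigest() == digest

msg = b"signed message body"
old = tagged_digest("sha1", msg)      # legacy peers still produce this
new = tagged_digest("sha256", msg)    # upgraded peers produce this too
print(verify(old, msg), verify(new, msg))
```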

Posted on October 31, 2005 at 9:02 AM
