The Logjam (and Another) Vulnerability against Diffie-Hellman Key Exchange

Logjam is a new attack against the Diffie-Hellman key-exchange protocol used in TLS. Basically:

The Logjam attack allows a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography. This allows the attacker to read and modify any data passed over the connection. The attack is reminiscent of the FREAK attack, but is due to a flaw in the TLS protocol rather than an implementation vulnerability, and attacks a Diffie-Hellman key exchange rather than an RSA key exchange. The attack affects any server that supports DHE_EXPORT ciphers, and affects all modern web browsers. 8.4% of the Top 1 Million domains were initially vulnerable.

Here’s the academic paper.

One of the problems with patching the vulnerability is that it breaks things:

On the plus side, the vulnerability has largely been patched thanks to consultation with tech companies like Google, and updates are available now or coming soon for Chrome, Firefox and other browsers. The bad news is that the fix rendered many sites unreachable, including the main website at the University of Michigan, which is home to many of the researchers that found the security hole.

This is a common problem with version downgrade attacks; patching them makes you incompatible with anyone who hasn’t patched. And it’s the vulnerability the media is focusing on.

Much more interesting is the other vulnerability that the researchers found:

Millions of HTTPS, SSH, and VPN servers all use the same prime numbers for Diffie-Hellman key exchange. Practitioners believed this was safe as long as new key exchange messages were generated for every connection. However, the first step in the number field sieve—the most efficient algorithm for breaking a Diffie-Hellman connection—is dependent only on this prime. After this first step, an attacker can quickly break individual connections.
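
To make the shared-prime point concrete, here is a minimal toy sketch in Python (with an absurdly small group, nothing like real TLS): both connections below use the same p and g, and only the per-connection exponents change, so the expensive first stage of the number field sieve, which depends only on p, pays off against every connection that uses that prime.

```python
import secrets

# Toy parameters; real TLS groups are 512-3072 bits, and the same few
# published primes are shared by millions of servers.
p, g = 23, 5

def dh_connection(p, g):
    a = secrets.randbelow(p - 2) + 1     # client's ephemeral secret
    b = secrets.randbelow(p - 2) + 1     # server's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)    # the values visible on the wire
    assert pow(B, a, p) == pow(A, b, p)  # both sides compute the same secret
    return A, B

# Two independent "connections": fresh exponents, identical p and g.
# Recovering a or b from A or B is a discrete-log problem modulo p, and
# the heavy precomputation for that problem is done once per p.
print(dh_connection(p, g))
print(dh_connection(p, g))
```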

The researchers believe the NSA has been using this attack:

We carried out this computation against the most common 512-bit prime used for TLS and demonstrate that the Logjam attack can be used to downgrade connections to 80% of TLS servers supporting DHE_EXPORT. We further estimate that an academic team can break a 768-bit prime and that a nation-state can break a 1024-bit prime. Breaking the single, most common 1024-bit prime used by web servers would allow passive eavesdropping on connections to 18% of the Top 1 Million HTTPS domains. A second prime would allow passive decryption of connections to 66% of VPN servers and 26% of SSH servers. A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.

Remember James Bamford’s 2012 comment about the NSA’s cryptanalytic capabilities:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”

[…]

The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. “Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it,” he says. The reason? “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”

And remember Director of National Intelligence James Clapper’s introduction to the 2013 “Black Budget“:

Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.

It’s a reasonable guess that this is what both Bamford’s source and Clapper are talking about. It’s an attack that requires a lot of precomputation—just the sort of thing a national intelligence agency would go for.

But that requirement also speaks to its limitations. The NSA isn’t going to put this capability at collection points like Room 641A at AT&T’s San Francisco office: the precomputation table is too big, and the sensitivity of the capability is too high. More likely, an analyst identifies a target through some other means, and then looks for data by that target in databases like XKEYSCORE. Then he sends whatever ciphertext he finds to the Cryptanalysis and Exploitation Services (CES) group, which decrypts it if it can using this and other techniques.

Ross Anderson wrote about this earlier this month, almost certainly quoting Snowden:

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t.

The analysts are instructed not to think about how this all works. This quote also applied to NSA employees:

Strict guidelines were laid down at the GCHQ complex in Cheltenham, Gloucestershire, on how to discuss projects relating to decryption. Analysts were instructed: “Do not ask about or speculate on sources or methods underpinning Bullrun.”

I remember the same instructions in documents I saw about the NSA’s CES.

Again, the NSA has put surveillance ahead of security. It never bothered to tell us that many of the “secure” encryption systems we were using were not secure. And we don’t know what other national intelligence agencies independently discovered and used this attack.

The good news is now that we know reusing prime numbers is a bad idea, we can stop doing it.

EDITED TO ADD: The DH precomputation easily lends itself to custom ASIC design, and is something that pipelines easily. Using Bitcoin mining hardware as a rough comparison, this means a speedup of a couple of orders of magnitude.

EDITED TO ADD (5/23): Good analysis of the cryptography.

EDITED TO ADD (5/24): Good explanation by Matthew Green.

Posted on May 21, 2015 at 6:30 AM • 35 Comments

Comments

Thomas M May 21, 2015 6:47 AM

What about IPsec VPNs?
When using IPsec, you must configure the “Diffie-Hellman group,” which basically means specifying which of those well-known primes you are using.

Is there any way to avoid this without breaking the VPN?

All of the discussion is about SSL, but I think the DH issue affects IPsec as well.

Zenzero May 21, 2015 6:54 AM

From the site Bruce linked, they have tests for both server and browser:

This will show you whether your browser is vulnerable (at the top; it may take a few seconds):
https://weakdh.org/

And a tool here to test your server:
https://weakdh.org/sysadmin.html

If the NSA in fact used this (and I would be amazed if they didn’t), then hopefully that will send a message to the American people about what their government thinks of them.

Obama said the NSA would not sit on zero-days unless “there is a clear national security or law enforcement need.” How on earth would making most of the world (including Americans) vulnerable to this bug constitute national security?

mike~acker May 21, 2015 7:59 AM

If you stop and think about things for a minute, you’ll realize that the client computer must generate the session key.

Initially the client holds the server’s public key in the X.509 certificate and the server holds the corresponding secret key. This is a half-installed PGP link: it’s only secure from client to server, and authentication is only possible from server to client.

This reveals a huge problem with SSL/TLS: on initial contact the client is supposed to use its local copy of the server’s X.509 certificate in order to authenticate the identity of the server. But X.509 certificates have been compromised. The client should verify the server’s certificate itself, and then sign for it, using its own copy of PGP or GnuPG. This is critical, as it is absolutely necessary for the client to be sure who it is connecting with.

A session key now needs to be generated. The client has to do this because, while the client can initially encrypt a secure message to the server, the server cannot encrypt to the client, as it does not normally hold a public key for the client.

Thus the client computer needs only to generate a 256-bit AES session key, encrypt it to the server, and send it. This will be a symmetric key used only for the session. Anything that does not decrypt using the session key is then either an error or an intrusion: game over.
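
For what it’s worth, the flow described above (client generates a random symmetric key and encrypts it to the server’s public key) is roughly what the non-ephemeral RSA cipher suites in TLS do. Here is a minimal, illustrative sketch with the Python `cryptography` package; it is not how any real TLS stack is structured.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The server's long-term key pair; the public half is what the X.509
# certificate carries.
server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public = server_private.public_key()

# Client side: pick a random 256-bit session key and encrypt it to the server.
session_key = os.urandom(32)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = server_public.encrypt(session_key, oaep)

# Server side: unwrap the session key. Anyone who later steals the server's
# private key can do the same to recorded traffic -- no forward secrecy,
# which is exactly why DHE/ECDHE exist.
assert server_private.decrypt(wrapped, oaep) == session_key
```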

Andrew May 21, 2015 8:15 AM

So, despite what all cryptographers were saying right after Snowden, it turned out that the three-letter agencies actually had an ace up their sleeve.

Maybe it’s time to recall the old rule of thumb from the commie bloc: if you can’t say it to the local party secretary’s face, better not talk about it over the phone. Or in your home. Or in someone else’s home.

Anura May 21, 2015 8:19 AM

Generating your own prime is easy; it only takes a matter of seconds for a 2048-bit key. For ephemeral keys, there is no excuse for not generating a new one on first configuration. Hell, you could theoretically do it on server start if you don’t mind a slight delay.

I use OpenVPN and generating the prime is one of the first steps in all of the guides.
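
For illustration, here is a rough equivalent of the `openssl dhparam 2048` step from those guides, using the Python `cryptography` package; generation time varies a lot from run to run, so treat the timing as a sanity check rather than a benchmark.

```python
import time
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import dh

start = time.time()
params = dh.generate_parameters(generator=2, key_size=2048)  # fresh group
print(f"generated a 2048-bit group in {time.time() - start:.1f} seconds")

# Write the group out in the same PEM format openssl dhparam produces,
# so it can replace a shared, well-known group file.
pem = params.parameter_bytes(serialization.Encoding.PEM,
                             serialization.ParameterFormat.PKCS3)
with open("dh2048.pem", "wb") as f:
    f.write(pem)
```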

We do need to work on getting rid of export mode entirely, of course.

Bruce Schneier May 21, 2015 9:10 AM

“So, despite what all cryptographers were saying right after Snowden, it turned out that the three-letter agencies actually had an ace up their sleeve.”

I don’t think that’s right. This, for example, is me in September 2013:

More likely is that the NSA has some mathematical breakthrough that affects one or more public-key algorithms. There are a lot of mathematical tricks involved in public-key cryptanalysis, and absolutely no theory that provides any limits on how powerful those tricks can be.

Also this, from 2014.

I wasn’t alone; others have said similar things.

Hanno May 21, 2015 9:54 AM

About reusing primes: Actually the situation is a bit more complex.

Looking back a bit at other attacks: last year we had something called the triple handshake attack. It was not really one attack but (like the Logjam paper) a whole range of issues; one of them involved attacking a client by sending it a bad DH group (which could then be used to fake a client-authenticated connection to another host).

We thought we learned from the triple handshake attack that arbitrary primes in DH exchanges are bad. Therefore the IETF WG is working on a standard to fix the primes.

I’m not sure how aware the security community is, but the issue is more complex. There seem to be attacks that can profit from arbitrary primes, while others profit from fixed/shared primes. And to me right now it’s far from clear which one is worse. This clearly needs more investigation.

(One thing I don’t know and maybe Bruce or any other crypto guru can answer: Do we have the same precalculation problem with elliptic curve groups?)

Cassandra May 21, 2015 10:42 AM

I’d hope one outcome of this is to make the use of pre-shared keys easier. Key management, including key distribution and key changes is difficult – but one of the advantages of pre-shared keys is that you can make the stream unhackable (so long as you choose a sufficiently hard-to-guess key e.g. from a ‘sufficiently random’ one-time-pad; and don’t use an encryption protocol designed to leak details of the key).

This means that pre-calculation attacks become impossible, and your vulnerability reduces to ensuring that the key cannot be obtained from your encryption and decryption equipment (which can be difficult if you are protecting against nation-state attackers).

mike~acker May 21, 2015 11:58 AM

I think it was generally conceded a while back that we needed to move on from prime numbers …

If you check on things you’ll find GnuPG 2.1 will support elliptic curves.

Nicholas Weaver May 21, 2015 1:49 PM

@rgaff: YES.

Yes, yes, yes, a thousand times yes: use the NSA’s curves.

The NSA has an attitude that unclassified doesn’t matter: anything they actually care about defending ends up being classified. So they are willing to put all the unclassified systems at risk if it helps their ability to break things (e.g., Dual_EC).

The attitude is completely reversed for the classified space. That is the one space where the NSA does truly value defense over offense, and will never knowingly use a weak algorithm.

Suite B is approved by the NSA for Top Secret communication, with AES256, curve P-384 (aka nistp384 and secp384r1), and SHA-384.

This approval, for Top Secret communication, is the ultimate NSA stamp of approval. They are saying “We believe this is strong, even against us”.
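
As a concrete illustration (my sketch, not anything from the Suite B documentation), an ephemeral ECDH agreement over P-384 with SHA-384 key derivation looks like this with the Python `cryptography` package:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Ephemeral P-384 (secp384r1) key pairs for the two sides.
alice = ec.generate_private_key(ec.SECP384R1())
bob = ec.generate_private_key(ec.SECP384R1())

# Each side combines its private key with the other's public key.
shared = alice.exchange(ec.ECDH(), bob.public_key())
assert shared == bob.exchange(ec.ECDH(), alice.public_key())

# Derive a 256-bit key (e.g., for AES-256) from the shared secret with SHA-384.
key = HKDF(algorithm=hashes.SHA384(), length=32, salt=None,
           info=b"example handshake").derive(shared)
```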

Curious May 21, 2015 2:46 PM

I read the following sentence somewhere:
“There is a law of diminishing returns with RSA key length.” (E.g 1024 bit vs 2048 bit)

What does that mean and is this comparable somehow to the use of ‘public key cryptography’?

Do you believe that? May 21, 2015 3:27 PM

@Nicholas Weaver- I can assure you that is what they have Suite A for.

lily May 21, 2015 3:37 PM

I’m surprised that you didn’t mention the fact that the second vulnerability results in a complete loss of forward secrecy. It effectively reduces the strength of each DH group to the strength of a single RSA key of the same length. Forward secrecy that’s cheaper than generating a new RSA key pair for every connection was the main selling point of DH, so what point is there in using DH over RSA? The only reason I can think of is wanting people to think they have forward secrecy when they really don’t.

Anura May 21, 2015 4:05 PM

@lily

As long as each server generates its own primes, according to the paper, Diffie-Hellman will be more secure than RSA. With 2048-bit keys, you should be good with either algorithm for the near future.

Jesse May 21, 2015 4:11 PM

@Cassandra: pre-shared keys cannot offer you perfect forward secrecy.

If you use a pre-shared key for a year, and then later somebody hacks one end of the connection they can get that key and decrypt all of your past conversations.

Ephemeral keys like those offered by carefully executed DH mean that (1) you never have to store the keys actually used by the stream cipher to disk, so they should be next to impossible to recover in the future, and (2) if one key does somehow magically get recovered, then only the data encrypted using it (for 10 minutes or however long) is breakable, while even the very next set of ephemeral keys is not compromised in turn.
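
A rough sketch of that idea with the Python `cryptography` package (my illustration, not any particular TLS implementation): the group parameters are long-lived, but each session generates fresh private keys that live only in memory and are thrown away once the record keys are derived.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dh
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Done once at setup; this is the part Logjam says you should not share
# with the rest of the world.
params = dh.generate_parameters(generator=2, key_size=2048)

def one_session(params):
    client = params.generate_private_key()   # fresh for every connection
    server = params.generate_private_key()   # fresh for every connection
    shared = client.exchange(server.public_key())
    # The DH private keys are never written to disk and go out of scope when
    # this function returns; only the derived record key is used afterwards.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"record keys").derive(shared)

k1 = one_session(params)
k2 = one_session(params)
assert k1 != k2   # compromising one session's key says nothing about the next
```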

lily May 21, 2015 4:24 PM

@Anura no, it will be no worse than RSA, but also not significantly better. To get forward secrecy, you would have to generate a new prime for every connection, which would be even slower than ephemeral RSA (which no one uses because of how slow it is).

Cassandra May 21, 2015 4:42 PM

@Jesse
Thank you for pointing out the loss of Perfect Forward Secrecy. lily’s point above is pertinent. I don’t know which is better: to think you have perfect forward secrecy when in fact you don’t, because you are compromised by a poor DH implementation, or to have no forward secrecy and take appropriate mitigating actions.

This is partly why I mentioned the difficulty of key management. If changing pre-shared keys were made easy you could arrange matters so that compromise of an individual key compromised the encrypted stream for an arbitrarily short interval.

But, I acknowledge, properly executed 2048 bit DH should be good enough for a while.

Anura May 21, 2015 5:42 PM

@lily

If they can break the 1024-bit primes generated on each server, then they can also break the 1024-bit RSA keys on each server (which may be one key shared by a group of servers). Either renders the protocol insecure. DH is still more secure, because they actually have to break it, whereas with RSA they only need access to the server itself to gain the private key, which lets them read all past messages and is insecure regardless of key length. With 2048-bit DH keys, it is infeasible even for the NSA to break the keys, and the more likely attack would be recovering the RSA private keys to perform man-in-the-middle attacks. By definition this is still perfect forward secrecy, as long as the primes are not breakable, even if the same prime is used everywhere.

uh, Mike May 21, 2015 6:11 PM

!Bitcoin! by the way, provides an interesting empirical cryptology benchmark, eh? Bitcoin mining boxes are basically brute-forcing the hash. Bitcoin mining humans will be trying to break out of the statistical vault.

Immovable object, meet irresistible force.

MarkH May 22, 2015 9:42 AM

@Hanno:

For the sake of clarity …

By my reading, an accessory to the triple handshake vulnerability was not the use of “arbitrary primes”, but rather of composite values for parameter p. It isn’t a matter of bad primes, though Sophie Germain primes are considered the safest.

Accordingly, it would be more correct to say “arbitrary p values” than “arbitrary primes”.

Of course, this doesn’t change the essence of the problem: an attacker can use a bad value for p.

One way to respond to the problem would be to require Sophie Germain primes. These can be dependably checked using pseudoprimality testing. So a checking process might look like:

(1) Is this p either published, or one we have validated from a different party? If so, reject.

(2) Is p a Sophie Germain prime? If not, reject.

Unfortunately, the testing costs significant CPU time (several times more than the DH key agreement protocol). However, once such a value has been checked, it could be safely re-used in future DH agreements between the two parties.
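
As a sketch of check (2), assuming sympy’s probabilistic primality test is acceptable (one terminology note: p is usually called a safe prime when q = (p-1)/2 is also prime; q itself is the Sophie Germain prime):

```python
from sympy import isprime

def is_safe_prime(p: int) -> bool:
    # p must be an odd prime with (p - 1) / 2 prime as well.
    return p % 2 == 1 and isprime(p) and isprime((p - 1) // 2)

# Toy examples: 23 = 2*11 + 1 with 11 prime, so it qualifies; 29 is prime
# but (29 - 1) / 2 = 14 is not, so it does not.
assert is_safe_prime(23)
assert not is_safe_prime(29)
```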

Sam May 22, 2015 4:02 PM

According to the weakdh server test site, this server (schneier.com) “uses a commonly-shared 1024-bit Diffie-Hellman group, and might be in range of being broken by a nation-state. It might be a good idea to generate a unique, 2048-bit group for the site”.

Sure this is mostly a cosmetic issue for public sites that don’t handle sensitive information, but I would think the site of a security professional would have a better image if it was as secure as reasonably possible.

MarkH May 22, 2015 7:55 PM

Also @Hanno:

No expert has yet weighed in on your important question of whether ECC has the same vulnerability from using “known” mathematical groups. So, as a tyro in the subject, here is my very non-expert perspective:

  1. As far as I have discovered, Pollard’s rho algorithm is the fastest known algorithm for solving discrete logs in elliptic curve groups. The rho algorithm has higher computational cost than the sieve algorithms applicable to non-ECC public key cryptography, which is perhaps one reason why ECC is considered to offer equivalent security at smaller key sizes.
  2. As far as I understand Pollard’s rho algorithm, the values computed while working toward a solution depend on the argument (that is, the value whose discrete log is to be found). Accordingly, there would seem to be no opportunity for pre-computation.

I would welcome the views of anybody who understands these matters beyond my very superficial understanding!

lily May 23, 2015 10:27 AM

@Anura
2048-bit primes can’t be broken yet, but they will be broken eventually. The issue is that if someone records traffic to the server now and then breaks the server’s 2048-bit RSA key in 15 years, the worst they can do is attempt to impersonate the server with an expired certificate (assuming the server uses DHE_RSA and not plain RSA). If they break the server’s 2048-bit DH group in 15 years, they can decrypt all that recorded traffic.

Anura May 23, 2015 5:13 PM

@lily

The impact of this isn’t as high as you imply. Most servers do not use PFS and instead use RSA for key exchange (and the ones that do provide PFS use ECDH), but even then PFS does not guarantee your server is secure if your key is too small. Using your own primes still means the NSA has to break those primes, and if the keys themselves are breakable, then any PFS goes out the window. If the primes are breakable on a per-server basis, then it won’t be long until keys are individually breakable in large numbers as well, even if you use a different prime for each session (since that’s only a linear increase in attack time). This is why you always need to ensure that your key length will outlive the usefulness of the data you are encrypting.

Rob May 23, 2015 9:18 PM

I can’t comment on the technical side, but from a political side I can see what I think is the most likely scenario where this ends.

And that’s where you get the NSA penetrated by foreign HUMINT activity. Either an agent recruited from the NSA or even an officer somehow planted into them. Then the enemy agency finds a way to get their hands on the precalculation data and starts using the exact same techniques the NSA does to pop nearly anything open. Now, I get that the sheer quantity of data required in this situation means that you can’t exactly walk off with it on a flash drive. OTOH, agencies have managed to get their hands on whole tank engines in the past. Of course, if the NSA has done something more exotic like proving P=NP, then getting the secret out the door becomes much easier.

However it happens, if the NSA creates a backdoor, or encourages one to remain out there in public, and that backdoor falls into enemy hands, and then that becomes public, then the storm that ensues could justify congress popping the NSA open and drastically reforming them.

Anything short of that, and you’ll be facing the same arguments we’ve been fighting, winning in public, and then losing in private, going all the way back to Jamie Gorelick and the Clipper Chip.

Fascist Nation May 24, 2015 1:29 PM

Apologies if someone already posted the fix for Mozilla-engine users; I looked but didn’t see it. Easy fix: disable the vulnerable ciphers. It doesn’t require a browser restart, as the changes take effect automatically. Pale Moon users can do it from their menu. Worked for me on Firefox and SeaMonkey. Running out of ciphers. 🙂

Test vulnerability:
https://www.ssllabs.com/ssltest/viewMyClient.html#1432491406522&frame_loaded

The fix:
http://forums.mozillazine.org/viewtopic.php?p=14165963#p14165963

Anura May 24, 2015 6:25 PM

I think we need to eventually phase out TLS for a new protocol that doesn’t support export-grade cryptography and gets around the problems described above (like the forging of handshake messages). The protocol should go something like this (a rough sketch in code follows the list):

1) Client requests handshake and sends random authentication token (to prevent replays), which the client may sign with a certificate they send at this point
2) Server responds with certificate and a message signed with that certificate containing the authentication token and the list of supported cipher suites, all of them using either DH or ECDH for key-exchange. Server may also send back a response either requesting or requiring a client certificate if it wasn’t already sent
3) Client picks a cipher suite, sends an ephemeral key and the parameters that it chooses to the server, optionally signed with the client certificate (if signed, it will also include the server response)
4) Server responds with a signed message containing the authentication token, the server’s ephemeral key, and the client’s message from step 3
5) Client and server compute a shared secret, and derive the encryption key and (if necessary) a separate MAC key
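
A very rough, hypothetical sketch of steps 1-5 in Python, using Ed25519 as a stand-in for the server certificate and X25519 for the ephemeral ECDH exchange (the scheme above allows either DH or ECDH); both parties run in one process here, and framing, real certificates, and client authentication are omitted.

```python
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ed25519, x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

RAW = dict(encoding=serialization.Encoding.Raw,
           format=serialization.PublicFormat.Raw)

# Long-term server signing key; stands in for the certificate.
server_identity = ed25519.Ed25519PrivateKey.generate()
server_verify = server_identity.public_key()

# 1) Client -> Server: random anti-replay token.
token = os.urandom(32)

# 2) Server -> Client: signed (token || supported cipher suites).
suites = b"ECDHE-X25519+AES-256-GCM+SHA-384"
msg2 = token + suites
sig2 = server_identity.sign(msg2)

# 3) Client: verify, pick a suite, send its ephemeral key (unsigned here).
server_verify.verify(sig2, msg2)   # raises InvalidSignature if forged
client_eph = x25519.X25519PrivateKey.generate()
msg3 = suites + client_eph.public_key().public_bytes(**RAW)

# 4) Server -> Client: signed (token || server ephemeral || msg3), so any
#    tampering with the client's unsigned message is detected.
server_eph = x25519.X25519PrivateKey.generate()
msg4 = token + server_eph.public_key().public_bytes(**RAW) + msg3
sig4 = server_identity.sign(msg4)
server_verify.verify(sig4, msg4)   # client-side check

# 5) Both sides derive the session key from the ECDH shared secret.
shared = client_eph.exchange(server_eph.public_key())
session_key = HKDF(algorithm=hashes.SHA384(), length=32, salt=token,
                   info=b"handshake sketch").derive(shared)
```

The important property, per step 4, is that the server’s signature covers the client’s unsigned messages, so a downgrade of the cipher-suite list would be caught when the client verifies msg4.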

This does a few things. One, it means that each client has its own prime number; since there are more clients than servers, you can’t target the prime a popular site uses, and instead have to attack each client individually (and each client derives its own prime and can change it at any time). Two, because the server signs all the messages the client sent, the client can make sure none of its unsigned messages have been forged to downgrade the encryption. On top of this, the client decides the cipher suite to use, not the server. This allows security-conscious users to ensure that, by keeping their browser up to date, they only use secure cipher suites, as opposed to giving the server the choice, where it might not choose PFS or might choose a weaker hash function or encryption algorithm.

This does potentially add another problem, however; if the client generates their own prime and decides to use it then the server can use that to uniquely identify them, which might not be good. The client could get around this by simply preferring ECDH or keeping a list of primes, and generating new ones in the background so they are never reused.
