Entries Tagged "NIST"


NIST Starts Planning for Post-Quantum Cryptography

Last year, the NSA announced its plans for transitioning to cryptography that is resistant to a quantum computer. Now, it’s NIST’s turn. Its just-released report talks about the importance of algorithm agility and quantum resistance. Sometime soon, it’s going to have a competition for quantum-resistant public-key algorithms:

Creating those newer, safer algorithms is the longer-term goal, Moody says. A key part of this effort will be an open collaboration with the public, which will be invited to devise and vet cryptographic methods that—to the best of experts’ knowledge—will be resistant to quantum attack. NIST plans to launch this collaboration formally sometime in the next few months, but in general, Moody says it will resemble past competitions such as the one for developing the SHA-3 hash algorithm, used in part for authenticating digital messages.

“It will be a long process involving public vetting of quantum-resistant algorithms,” Moody said. “And we’re not expecting to have just one winner. There are several systems in use that could be broken by a quantum computer—public-key encryption and digital signatures, to take two examples—and we will need different solutions for each of those systems.”

The report rightly states that we’re okay in the symmetric cryptography world; the key lengths are long enough.
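For intuition, here’s the arithmetic behind that claim, as a minimal sketch assuming Grover’s algorithm is the best generic quantum attack on a symmetric cipher:

```python
# Rule of thumb (assuming Grover's search is the best generic quantum attack
# on a k-bit symmetric key): the attack needs ~2^(k/2) work, so effective
# strength is halved and a 256-bit key still leaves a 128-bit margin.
for cipher, key_bits in [("AES-128", 128), ("AES-256", 256)]:
    print(f"{cipher}: ~2^{key_bits // 2} quantum work to brute-force")
```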

This is an excellent development. NIST has done a fine job with its previous cryptographic standards, giving us a couple of good, strong, well-reviewed, and patent-free algorithms. I have no doubt this process will be equally good. (If NIST is keeping a list, aside from post-quantum public-key algorithms, I would like to see competitions for a larger-block-size block cipher and a super-fast stream cipher as well.)

Two news articles.

Posted on May 9, 2016 at 6:19 AM

New NIST Encryption Guidelines

NIST has published a draft of its new standard for encryption use: “NIST Special Publication 800-175B, Guideline for Using Cryptographic Standards in the Federal Government: Cryptographic Mechanisms.” In it, the Escrowed Encryption Standard from the 1990s, FIPS 185, is decertified, and Skipjack, the NSA’s symmetric algorithm from the same period, will also no longer be certified.

I see nothing sinister about decertifying Skipjack. In a world of faster computers and post-quantum thinking, an 80-bit key and 64-bit block no longer cut it.
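A back-of-the-envelope check on the 64-bit block, using the generic birthday bound (not any Skipjack-specific attack):

```python
from math import exp

# With an n-bit block, ciphertext-block collisions (which leak plaintext
# relationships in common modes like CBC) become likely after about 2^(n/2)
# blocks -- the birthday bound. For Skipjack's 64-bit block:
n = 64
blocks = 2 ** (n // 2)                  # ~2^32 blocks of 8 bytes each
data_gib = blocks * 8 / 2**30           # total data encrypted under one key
p_collision = 1 - exp(-blocks * (blocks - 1) / (2 * 2**n))
print(f"~{data_gib:.0f} GiB under one key -> collision probability ~{p_collision:.2f}")
```

That is about 32 GiB of traffic for a roughly 39% collision probability, which is well within reach of ordinary workloads today.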

ETA: My essays from 1998 on Skipjack and KEA.

Posted on March 17, 2016 at 9:54 AM

Back Door in Juniper Firewalls

Juniper has warned about a malicious back door in its firewalls that automatically decrypts VPN traffic. It’s been there for years.

Hopefully details are forthcoming, but the folks at Hacker News have pointed to this page about Juniper’s use of the Dual_EC_DRBG random number generator. For those who don’t immediately recognize that name, it’s the pseudo-random-number generator that was backdoored by the NSA. Basically, the PRNG’s public parameters can be generated from a secret parameter, and anyone who knows that secret can predict the output. In the standard, the NSA chose those parameters. Juniper doesn’t use those tainted parameters. Instead:

ScreenOS does make use of the Dual_EC_DRBG standard, but is designed to not use Dual_EC_DRBG as its primary random number generator. ScreenOS uses it in a way that should not be vulnerable to the possible issue that has been brought to light. Instead of using the NIST recommended curve points it uses self-generated basis points and then takes the output as an input to FIPS/ANSI X.9.31 PRNG, which is the random number generator used in ScreenOS cryptographic operations.

This means that all anyone has to do to break the PRNG is to hack into the firewall and copy or modify those “self-generated basis points.”
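To make the structure of that backdoor concrete, here is a toy sketch in Python. Everything below is invented for illustration: a tiny curve instead of NIST P-256, no output truncation, and arbitrary constants. It shows only the algebraic trick: whoever knows the scalar relating the two points can recover the generator’s internal state from a single output and predict everything after it.

```python
# Toy model of the Dual_EC_DRBG backdoor. All parameters here are made up;
# the real design uses NIST P-256 and truncates outputs (which costs the
# attacker a small brute-force step omitted here).

p = 1000003                  # small prime with p % 4 == 3, so sqrt is one pow
A, B = 0, 7                  # toy curve: y^2 = x^3 + 7 (mod p)

def inv(x):
    return pow(x, p - 2, p)  # modular inverse via Fermat's little theorem

def ec_add(P1, P2):
    if P1 is None: return P2             # None represents the point at infinity
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:  # P + (-P)
        return None
    if P1 == P2:
        m = (3 * x1 * x1 + A) * inv(2 * y1) % p
    else:
        m = (y2 - y1) * inv(x2 - x1) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P1):                       # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P1)
        P1 = ec_add(P1, P1)
        k >>= 1
    return R

def lift_x(x):
    """Recover a curve point with x-coordinate x (the sign ambiguity is harmless)."""
    return (x, pow((x**3 + A * x + B) % p, (p + 1) // 4, p))

# Pick a base point Q, then derive P = e*Q. The scalar e is the trapdoor:
# in the standard, whoever generated the points could have known it.
x = 2
while pow((x**3 + A * x + B) % p, (p - 1) // 2, p) != 1:
    x += 1                               # search for an x that lies on the curve
Q = lift_x(x)
e = 123457
P = ec_mul(e, Q)

def dual_ec_step(s):
    """One round: the next state comes from P, the output comes from Q."""
    return ec_mul(s, P)[0], ec_mul(s, Q)[0]

s1, out1 = dual_ec_step(424242)          # victim generates two outputs
s2, out2 = dual_ec_step(s1)

# Attacker sees out1 and knows e: lift out1 back to the point s0*Q, then
# e*(s0*Q) = s0*(e*Q) = s0*P, whose x-coordinate is the victim's next state.
T = lift_x(out1)
s1_recovered = ec_mul(e, T)[0]
_, predicted = dual_ec_step(s1_recovered)
print("state recovered:", s1_recovered == s1,
      "- next output predicted:", predicted == out2)
```

The same algebra applies no matter who chose the points, which is why controlling Juniper’s “self-generated basis points” is equivalent to owning the backdoor.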

Here’s a good summary of what we know. The conclusion:

Again, assuming this hypothesis is correct then, if it wasn’t the NSA who did this, we have a case where a US government backdoor effort (Dual-EC) laid the groundwork for someone else to attack US interests. Certainly this attack would be a lot easier given the presence of a backdoor-friendly RNG already in place. And I’ve not even discussed the SSH backdoor which, as Wired notes, could have been the work of a different group entirely. That backdoor certainly isn’t NOBUS—Fox-IT claim to have found the backdoor password in six hours.

More details to come, I’m sure.

EDITED TO ADD (12/21): A technical overview of the SSH backdoor.

EDITED TO ADD (12/22): Matthew Green wrote a really good technical post about this.

They then piggybacked on top of it to build a backdoor of their own, something they were able to do because all of the hard work had already been done for them. The end result was a period in which someone—maybe a foreign government—was able to decrypt Juniper traffic in the U.S. and around the world. And all because Juniper had already paved the road.

Another good article.

Posted on December 21, 2015 at 6:52 AM

How NIST Develops Cryptographic Standards

This document gives a good overview of how NIST develops cryptographic standards and guidelines. It’s still in draft, and comments are appreciated.

Given that NIST has been tainted by the NSA’s actions to subvert cryptographic standards and protocols, more transparency in this process is welcome. I think NIST is doing a fine job and that it’s not shilling for the NSA, but it needs to do more to convince the world of that.

Posted on March 4, 2014 at 6:41 AM

Will Keccak = SHA-3?

Last year, NIST selected Keccak as the winner of the SHA-3 hash function competition. Yes, I would rather my own Skein had won, but it was a good choice.

But last August, John Kelsey announced some changes to Keccak in a talk (slides 44-48 are relevant). Basically, the security levels were reduced and some internal changes to the algorithm were made, all in the name of software performance.

Normally, this wouldn’t be a big deal. But in light of the Snowden documents that reveal that the NSA has attempted to intentionally weaken cryptographic standards, this is a huge deal. There is too much mistrust in the air. NIST risks publishing an algorithm that no one will trust and no one (except those forced) will use.

At this point, they simply have to standardize on Keccak as submitted and as selected.

CDT has a great post about this.

Also this Slashdot thread.

EDITED TO ADD (10/5): It’s worth reading the response from the Keccak team on this issue.

I misspoke when I wrote that NIST made “internal changes” to the algorithm. That was sloppy of me. The Keccak permutation remains unchanged. What NIST proposed was reducing the hash function’s capacity in the name of performance. One of Keccak’s nice features is that it’s highly tunable.
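To see what “capacity” means here, a short sketch of the sponge arithmetic (the SHA-3 figures below are the standard parameter choices; the 2013 proposal would have reduced the capacity, to 256 or 512 bits, in exchange for a larger rate):

```python
# Keccak is a sponge: its 1600-bit state is split into a public "rate" r
# (bits absorbed per permutation call, so larger r = faster hashing) and a
# hidden "capacity" c = 1600 - r. Generic security is about c/2 bits.
# These are the final SHA-3 parameter choices (capacity = 2 * output size).
STATE_BITS = 1600
for name, capacity in [("SHA3-224", 448), ("SHA3-256", 512),
                       ("SHA3-384", 768), ("SHA3-512", 1024)]:
    rate = STATE_BITS - capacity
    print(f"{name}: rate {rate} bits, capacity {capacity},"
          f" generic security ~{capacity // 2} bits")
```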

I do not believe that the NIST changes were suggested by the NSA. Nor do I believe that the changes make the algorithm easier to break by the NSA. I believe NIST made the changes in good faith, and the result is a better security/performance trade-off. My problem with the changes isn’t cryptographic, it’s perceptual. There is so little trust in the NSA right now, and that mistrust is reflecting on NIST. I worry that the changed algorithm won’t be accepted by an understandably skeptical security community, and that no one will use SHA-3 as a result.

This is a lousy outcome. NIST has done a great job with cryptographic competitions: both a decade ago with AES and now with SHA-3. This is just another effect of the NSA’s actions draining the trust out of the Internet.

Posted on October 1, 2013 at 10:50 AM

Ed Felten on the NSA Disclosures

Ed Felten has an excellent essay on the damage caused by the NSA secretly breaking the security of Internet systems:

In security, the worst case—the thing you most want to avoid—is thinking you are secure when you’re not. And that’s exactly what the NSA seems to be trying to perpetuate.

Suppose you’re driving a car that has no brakes. If you know you have no brakes, then you can drive very slowly, or just get out and walk. What is deadly is thinking you have the ability to stop, until you stomp on the brake pedal and nothing happens. It’s the same way with security: if you know your communications aren’t secure, you can be careful about what you say; but if you think mistakenly that you’re safe, you’re sure to get in trouble.

So the problem is not (only) that we’re unsafe. It’s that “the N.S.A. wants to keep it that way.” The NSA wants to make sure we remain vulnerable.

Posted on September 12, 2013 at 6:05 AM

When Will We See Collisions for SHA-1?

On a NIST-sponsored hash function mailing list, Jesse Walker (from Intel; also a member of the Skein team) did some back-of-the-envelope calculations to estimate how long it will be before we see a practical collision attack against SHA-1. I’m reprinting his analysis here, so it reaches a broader audience.

According to eBASH, the cost of one block of a SHA-1 operation on already deployed commodity microprocessors is about 2^14 cycles. If Stevens’ attack of 2^60 SHA-1 operations serves as the baseline, then finding a collision costs about 2^14 * 2^60 ~ 2^74 cycles.

A core today provides about 2^31 cycles/sec; the state of the art is 8 = 2^3 cores per processor for a total of 2^3 * 2^31 = 2^34 cycles/sec. A server typically has 4 processors, increasing the total to 2^2 * 2^34 = 2^36 cycles/sec. Since there are about 2^25 sec/year, this means one server delivers about 2^25 * 2^36 = 2^61 cycles per year, which we can call a “server year.”

There is ample evidence that Moore’s law will continue through the mid 2020s. Hence the number of doublings in processor power we can expect between now and 2021 is:

3/1.5 = 2 times by 2015 (3 = 2015 – 2012)

6/1.5 = 4 times by 2018 (6 = 2018 – 2012)

9/1.5 = 6 times by 2021 (9 = 2021 – 2012)

So a commodity server year should be about:

2^61 cycles/year in 2012

2^2 * 2^61 = 2^63 cycles/year by 2015

2^4 * 2^61 = 2^65 cycles/year by 2018

2^6 * 2^61 = 2^67 cycles/year by 2021

Therefore, on commodity hardware, Stevens’ attack should cost approximately:

2^74 / 2^61 = 2^13 server years in 2012

2^74 / 2^63 = 2^11 server years by 2015

2^74 / 2^65 = 2^9 server years by 2018

2^74 / 2^67 = 2^7 server years by 2021

Today Amazon rents compute time on commodity servers for about $0.04/hour ~ $350/year. Assume compute rental fees remain fixed while server capacity keeps pace with Moore’s law. Then, since log2(350) ~ 8.4, the cost of the attack will be approximately:

2^13 * 2^8.4 = 2^21.4 ~ $2.77M in 2012

2^11 * 2^8.4 = 2^19.4 ~ $700K by 2015

2^9 * 2^8.4 = 2^17.4 ~ $173K by 2018

2^7 * 2^8.4 = 2^15.4 ~ $43K by 2021

A collision attack is therefore well within the range of what an organized crime syndicate can practically budget by 2018, and a university research project by 2021.

Since this argument only takes into account commodity hardware and not instruction-set improvements (e.g., ARMv8 specifies SHA-1 instructions), other commodity computing devices with even greater processing power (e.g., GPUs), and custom hardware, the need to transition away from SHA-1 wherever collision resistance is required is probably more urgent than this back-of-the-envelope analysis suggests.

Any increase in the number of cores per CPU, or the number of CPUs per server, also affects these calculations. Also, any improvements in cryptanalysis will further reduce the complexity of this attack.
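For readers who want to poke at the numbers, here’s a small script that reproduces the arithmetic above. All constants are Walker’s 2012 assumptions, not measurements, and the dollar figures come out slightly higher than his because $350 is a little more than 2^8.4:

```python
from math import log2

# Walker's back-of-the-envelope SHA-1 collision cost estimate.
attack_cycles = 2**14 * 2**60            # cycles/compression * Stevens' 2^60 ops
cycles_per_server_year = 2**2 * 2**3 * 2**31 * 2**25
# = 4 CPUs * 8 cores * 2^31 cycles/sec * 2^25 sec/year = 2^61 cycles/year
dollars_per_server_year = 350            # ~2^8.4, 2012 Amazon pricing

for year in (2012, 2015, 2018, 2021):
    doublings = (year - 2012) / 1.5      # assumed Moore's-law doublings
    server_years = attack_cycles / (cycles_per_server_year * 2**doublings)
    cost = server_years * dollars_per_server_year
    print(f"{year}: 2^{log2(server_years):.0f} server years, ~${cost:,.0f}")
```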

The point is that we in the community need to start the migration away from SHA-1 and to SHA-2/SHA-3 now.

Posted on October 5, 2012 at 1:24 PM

Keccak is SHA-3

NIST has just announced that Keccak has been selected as SHA-3.

It’s a fine choice. I’m glad that SHA-3 is nothing like the SHA-2 family; something completely different is good.

Congratulations to the Keccak team. Congratulations—and thank you—to NIST for running a very professional, interesting, and enjoyable competition. The process has greatly increased our understanding of the cryptanalysis of hash functions.

I know I just said that NIST should choose “no award,” mostly because too many options make for a bad standard. I never thought they would listen to me, and—indeed—only made that suggestion after I knew it was too late to stop the choice. Keccak is a fine hash function; I have absolutely no reservations about its security. (Or the security of any of the four SHA-2 functions, for that matter.) I have to think more before I make specific recommendations for specific applications.

Again: great job, NIST. Let’s do a really fast stream cipher next.

Posted on October 2, 2012 at 4:50 PM
