Entries Tagged "NIST"


Will Keccak = SHA-3?

Last year, NIST selected Keccak as the winner of the SHA-3 hash function competition. Yes, I would have rather my own Skein had won, but it was a good choice.

But last August, John Kelsey announced some changes to Keccak in a talk (slides 44-48 are relevant). Basically, the security levels were reduced and some internal changes to the algorithm were made, all in the name of software performance.

Normally, this wouldn’t be a big deal. But in light of the Snowden documents that reveal that the NSA has attempted to intentionally weaken cryptographic standards, this is a huge deal. There is too much mistrust in the air. NIST risks publishing an algorithm that no one will trust and no one (except those forced) will use.

At this point, they simply have to standardize on Keccak as submitted and as selected.

CDT has a great post about this.

Also this Slashdot thread.

EDITED TO ADD (10/5): It’s worth reading the response from the Keccak team on this issue.

I misspoke when I wrote that NIST made “internal changes” to the algorithm. That was sloppy of me. The Keccak permutation remains unchanged. What NIST proposed was reducing the hash function’s capacity in the name of performance. One of Keccak’s nice features is that it’s highly tunable.
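For readers who want numbers, here is a minimal sketch of what that tunability means, using the sponge parameters from the eventual FIPS 202 standard (state size 1600 bits, rate + capacity = 1600, generic security of roughly capacity/2 bits); the reduced capacity shown for the 2013 proposal is my reading of Kelsey's slides:

```python
# Sketch of the rate/capacity trade-off in the Keccak sponge.
# The permutation state is fixed at 1600 bits; rate + capacity = 1600.
# Generic security against collisions and preimages is about c/2 bits,
# so shrinking the capacity buys throughput (a larger rate) at the cost
# of the claimed security level -- this is the knob NIST proposed turning.

STATE_BITS = 1600

def sponge_params(capacity_bits):
    rate = STATE_BITS - capacity_bits
    return {"rate": rate, "capacity": capacity_bits,
            "generic_security": capacity_bits // 2}

# As submitted and later standardized, SHA3-256 uses c = 512:
print(sponge_params(512))  # {'rate': 1088, 'capacity': 512, 'generic_security': 256}
# The 2013 proposal would have used c = 256 for the same output length:
print(sponge_params(256))  # {'rate': 1344, 'capacity': 256, 'generic_security': 128}
```

Note that the permutation itself is untouched in both cases; only the split of the state between rate and capacity changes.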

I do not believe that the NIST changes were suggested by the NSA. Nor do I believe that the changes make the algorithm easier to break by the NSA. I believe NIST made the changes in good faith, and the result is a better security/performance trade-off. My problem with the changes isn’t cryptographic, it’s perceptual. There is so little trust in the NSA right now, and that mistrust is reflecting on NIST. I worry that the changed algorithm won’t be accepted by an understandably skeptical security community, and that no one will use SHA-3 as a result.

This is a lousy outcome. NIST has done a great job with cryptographic competitions: both a decade ago with AES and now with SHA-3. This is just another effect of the NSA’s actions draining the trust out of the Internet.

Posted on October 1, 2013 at 10:50 AM

Ed Felten on the NSA Disclosures

Ed Felten has an excellent essay on the damage caused by the NSA secretly breaking the security of Internet systems:

In security, the worst case—the thing you most want to avoid—is thinking you are secure when you’re not. And that’s exactly what the NSA seems to be trying to perpetuate.

Suppose you’re driving a car that has no brakes. If you know you have no brakes, then you can drive very slowly, or just get out and walk. What is deadly is thinking you have the ability to stop, until you stomp on the brake pedal and nothing happens. It’s the same way with security: if you know your communications aren’t secure, you can be careful about what you say; but if you think mistakenly that you’re safe, you’re sure to get in trouble.

So the problem is not (only) that we’re unsafe. It’s that “the N.S.A. wants to keep it that way.” The NSA wants to make sure we remain vulnerable.

Posted on September 12, 2013 at 6:05 AM

When Will We See Collisions for SHA-1?

On a NIST-sponsored hash function mailing list, Jesse Walker (from Intel; also a member of the Skein team) did some back-of-the-envelope calculations to estimate how long it will be before we see a practical collision attack against SHA-1. I’m reprinting his analysis here, so it reaches a broader audience.

According to E-BASH, the cost of one block of a SHA-1 operation on already deployed commodity microprocessors is about 2^14 cycles. If Stevens’ attack of 2^60 SHA-1 operations serves as the baseline, then finding a collision costs about 2^14 * 2^60 ~ 2^74 cycles.

A core today provides about 2^31 cycles/sec; the state of the art is 8 = 2^3 cores per processor for a total of 2^3 * 2^31 = 2^34 cycles/sec. A server typically has 4 processors, increasing the total to 2^2 * 2^34 = 2^36 cycles/sec. Since there are about 2^25 sec/year, this means one server delivers about 2^25 * 2^36 = 2^61 cycles per year, which we can call a “server year.”

There is ample evidence that Moore’s law will continue through the mid 2020s. Hence the number of doublings in processor power we can expect between now and 2021 is:

3/1.5 = 2 times by 2015 (3 = 2015 – 2012)

6/1.5 = 4 times by 2018 (6 = 2018 – 2012)

9/1.5 = 6 times by 2021 (9 = 2021 – 2012)

So a commodity server year should be about:

2^61 cycles/year in 2012

2^2 * 2^61 = 2^63 cycles/year by 2015

2^4 * 2^61 = 2^65 cycles/year by 2018

2^6 * 2^61 = 2^67 cycles/year by 2021

Therefore, on commodity hardware, Stevens’ attack should cost approximately:

2^74 / 2^61 = 2^13 server years in 2012

2^74 / 2^63 = 2^11 server years by 2015

2^74 / 2^65 = 2^9 server years by 2018

2^74 / 2^67 = 2^7 server years by 2021

Today Amazon rents compute time on commodity servers for about $0.04 / hour ~ $350 /year. Assume compute rental fees remain fixed while server capacity keeps pace with Moore’s law. Then, since log2(350) ~ 8.4 the cost of the attack will be approximately:

2^13 * 2^8.4 = 2^21.4 ~ $2.77M in 2012

2^11 * 2^8.4 = 2^19.4 ~ $700K by 2015

2^9 * 2^8.4 = 2^17.4 ~ $173K by 2018

2^7 * 2^8.4 = 2^15.4 ~ $43K by 2021

A collision attack is therefore well within the range of what an organized crime syndicate can practically budget by 2018, and a university research project by 2021.
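The arithmetic above can be reproduced in a few lines. This is a sketch of the estimate only, using the same power-of-two inputs as the text (2^14 cycles per SHA-1 block, Stevens' 2^60-operation attack, a 2^61-cycle server year in 2012, and a flat $350 per server year):

```python
# Back-of-the-envelope reproduction of the collision-cost estimate above.
import math

attack_cycles = 2**14 * 2**60    # ~2^74 cycles total for Stevens' attack
server_year_2012 = 2**61         # cycles one commodity server delivers per year
cost_per_server_year = 350       # USD; Amazon rental at ~$0.04/hour

for year, doublings in [(2012, 0), (2015, 2), (2018, 4), (2021, 6)]:
    server_years = attack_cycles / (server_year_2012 * 2**doublings)
    dollars = server_years * cost_per_server_year
    print(f"{year}: 2^{math.log2(server_years):.0f} server years, ~${dollars:,.0f}")
```

This prints ~$2.9M for 2012, $717K by 2015, $179K by 2018, and $45K by 2021, in line with the rounded figures above (the text's dollar figures round log2(350) to 8.4 before converting back).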

Since this argument takes into account only commodity hardware, and not instruction-set improvements (e.g., ARMv8 specifies SHA-1 instructions), other commodity computing devices with even greater processing power (e.g., GPUs), or custom hardware, the need to transition away from SHA-1 for applications requiring collision resistance is probably more urgent than this back-of-the-envelope analysis suggests.

Any increase in the number of cores per CPU, or the number of CPUs per server, also affects these calculations. Also, any improvements in cryptanalysis will further reduce the complexity of this attack.

The point is that we in the community need to start the migration away from SHA-1 and to SHA-2/SHA-3 now.

Posted on October 5, 2012 at 1:24 PM

Keccak is SHA-3

NIST has just announced that Keccak has been selected as SHA-3.

It’s a fine choice. I’m glad that SHA-3 is nothing like the SHA-2 family; something completely different is good.

Congratulations to the Keccak team. Congratulations—and thank you—to NIST for running a very professional, interesting, and enjoyable competition. The process has greatly increased our understanding of the cryptanalysis of hash functions.

I know I just said that NIST should choose “no award,” mostly because too many options make for a bad standard. I never thought they would listen to me, and—indeed—only made that suggestion after I knew it was too late to stop the choice. Keccak is a fine hash function; I have absolutely no reservations about its security. (Or the security of any of the four SHA-2 functions, for that matter.) I have to think more before I make specific recommendations for specific applications.

Again: great job, NIST. Let’s do a really fast stream cipher next.

Posted on October 2, 2012 at 4:50 PM

SHA-3 to Be Announced

NIST is about to announce the new hash algorithm that will become SHA-3. This is the result of a six-year competition, and my own Skein is one of the five remaining finalists (out of an initial 64).

It’s probably too late for me to affect the final decision, but I am hoping for “no award.”

It’s not that the new hash functions aren’t any good, it’s that we don’t really need one. When we started this process back in 2006, it looked as if we would be needing a new hash function soon. The SHA family (which is really part of the MD4 and MD5 family) was under increasing pressure from new types of cryptanalysis. We didn’t know how long the various SHA-2 variants would remain secure. But it’s 2012, and SHA-512 is still looking good.

Even worse, none of the SHA-3 candidates is significantly better. Some are faster, but not orders of magnitude faster. Some are smaller in hardware, but not orders of magnitude smaller. When SHA-3 is announced, I’m going to recommend that, unless the improvements are critical to their application, people stick with the tried and true SHA-512. At least for a while.

I don’t think NIST is going to announce “no award”; I think it’s going to pick one. And of the five remaining, I don’t really have a favorite. Of course I want Skein to win, but that’s out of personal pride, not for some objective reason. And while I like some more than others, I think any would be okay.

Well, maybe there’s one reason NIST should choose Skein. Skein isn’t just a hash function, it’s the large-block cipher Threefish and a mechanism to turn it into a hash function. I think the world actually needs a large-block cipher, and if NIST chooses Skein, we’ll get one.

Posted on September 24, 2012 at 6:59 AM

NIST Defines New Versions of SHA-512

NIST has just defined two new versions of SHA-512. They’re SHA-512/224 and SHA-512/256: 224- and 256-bit truncations of SHA-512 with a new IV. They’ve done this because SHA-512 is faster than SHA-256 on 64-bit CPUs, so these new SHA variants will be faster.

This is a good thing, and exactly what we did in the design of Skein. We defined different outputs for the same state size, because it makes sense to decouple the internal workings of the hash function from the output size.
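A short sketch makes the "new IV" point concrete: because SHA-512/256 starts from a different initialization vector, its digest is not simply the first 256 bits of SHA-512. Python's hashlib exposes sha512_256 only when the underlying OpenSSL build provides it, so this example guards for availability:

```python
# SHA-512/256 is more than a truncation: it uses a distinct IV, so its
# output differs from simply cutting a SHA-512 digest down to 256 bits.
import hashlib

msg = b"abc"

# Naive approach: take the first 256 bits (64 hex chars) of SHA-512.
naive_truncation = hashlib.sha512(msg).hexdigest()[:64]
print("truncated SHA-512:", naive_truncation)

# Real SHA-512/256, where the OpenSSL build provides it:
if "sha512_256" in hashlib.algorithms_available:
    real = hashlib.new("sha512_256", msg).hexdigest()
    print("SHA-512/256:      ", real)
    assert real != naive_truncation  # different IV -> different digest
```

The separate IV also domain-separates the variants, so a SHA-512/256 digest can never be mistaken for (or extended into) a prefix of a full SHA-512 digest of the same message.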

Posted on February 18, 2011 at 6:22 AM
