Keccak is SHA-3

NIST has just announced that Keccak has been selected as SHA-3.

It’s a fine choice. I’m glad that SHA-3 is nothing like the SHA-2 family; something completely different is good.

Congratulations to the Keccak team. Congratulations—and thank you—to NIST for running a very professional, interesting, and enjoyable competition. The process has considerably increased our understanding of the cryptanalysis of hash functions.

I know I just said that NIST should choose “no award,” mostly because too many options make for a bad standard. I never thought they would listen to me, and—indeed—only made that suggestion after I knew it was too late to stop the choice. Keccak is a fine hash function; I have absolutely no reservations about its security. (Or the security of any of the four SHA-2 functions, for that matter.) I have to think more before I make specific recommendations for specific applications.

Again: great job, NIST. Let’s do a really fast stream cipher next.

Posted on October 2, 2012 at 4:50 PM

Comments

Hanno October 2, 2012 5:24 PM

About the “what to do next”: wouldn’t it be time for another public key algorithm? I just read these days about further advances in quantum computing, and I think it may be time to switch to a Shor-attack-safe algorithm.

W October 2, 2012 5:25 PM

Personally I’m not happy with their choice. The inferior software performance makes it quite unattractive to me. Blake seemed like a better choice with great software performance and decent hardware performance.

And it seems like PBKDF2-SHA-3 for key strengthening and password hashing should be avoided, since a hardware>software speed gap increases the attacker’s advantage.

Steve October 2, 2012 5:37 PM

NIST should do a PBKDF3 competition (with a standard for storing hashed passwords). With PBKDF2, attackers usually have an advantage of at least 4x on the same hardware.
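For concreteness, here is a minimal sketch of PBKDF2-style password hashing using Python’s standard-library `hashlib.pbkdf2_hmac` (available from Python 3.4 on; the iteration count is illustrative and should be tuned to your hardware and threat model):

```python
# A minimal PBKDF2 password-hashing sketch. Store (salt, iterations, dk)
# per user; the salt is random per password.
import hashlib, hmac, os

def hash_password(password: str, iterations: int = 100_000):
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, dk

def check_password(password: str, salt: bytes, iterations: int, dk: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, dk)  # constant-time comparison

salt, n, dk = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, n, dk)
```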

Albert October 2, 2012 5:49 PM

I was rooting for Keccak. I think SHA-3 finalists were all good; I didn’t think there was a “loser” among them.

Davin C. Enigl October 2, 2012 6:26 PM

Keccak won!? That is hilarious. 🙂 Keccak has only one compression function, but longer bit-length outputs require smaller message blocks to be processed by each compression-function call. Thus, Keccak runs slower for longer bit-length output variants.

Keccak-512 fits a 64-byte message, with padding, into a single message block, and it has no extra finalization round, resulting in speeds just as fast as the other finalists’. Other algorithms need two blocks or process a significantly larger 128-byte message block.

Keccak-256 uses 1088-bit input blocks (the 576-bit block is Keccak-512’s rate), so the 64-byte message was also handled within one block.
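To make the block-size arithmetic concrete, a quick sketch assuming the parameters of the Keccak SHA-3 submission, where the 1600-bit state splits into rate + capacity and capacity = 2 × output length:

```python
# Rate (message bits absorbed per permutation call) for the Keccak SHA-3
# submission's proposed variants on the 1600-bit state.
STATE_BITS = 1600

def keccak_rate_bits(output_bits: int) -> int:
    return STATE_BITS - 2 * output_bits  # capacity = 2 * output length

for n in (224, 256, 384, 512):
    r = keccak_rate_bits(n)
    print(f"Keccak-{n}: rate = {r} bits ({r // 8} bytes per block)")
# Keccak-512's rate is 576 bits, so a 64-byte (512-bit) message plus padding
# still fits in one block; Keccak-256's rate is 1088 bits.
```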

ARM with NEON vector unit: This is a fast CPU, and the ARM core’s NEON vector unit supports 64-bit operations. Skein was the fastest. Blake-256, Blake-512 and SHA-256 came in as close seconds. Keccak-256 was half the speed of Skein, Keccak-512 was three times slower, and JH and Grøstl were four to nine times slower than Skein.

Well, I don’t need speed, since our most pressing need is password hashing. The only thing I can think of is that NIST chose the most off-beat mathematics to get as far away as possible from the MD5 architecture.

OK, I love the Keccak sponge function mathematics. I’ll have plenty of fun with it, but it sure will be a long, long time before we stop using SHA-512 and its truncated forms.

Hats off to the Keccak team, and especially congratulations to Joan Daemen, who is also a co-author of our current AES, originally called Rijndael. He has now co-authored both our standard encryption algorithm (AES) and our standard hash algorithm (SHA-3). Big win for us all.

schroeder October 2, 2012 6:31 PM

And it seems like PBKDF2-SHA-3 for key strengthening and password hashing should be avoided, since a hardware>software speed gap increases the attacker’s advantage.

W, fast hardware won’t give the attackers an advantage if the users have it too (SHA-1/MD5 acceleration is available on some systems, mostly non-x86 IIRC). I don’t think the types of attacks published for these algorithms really weaken them for this use case anyway.

Alan Kaminsky October 2, 2012 6:40 PM

NIST’s announcement on the hash-forum mailing list said in part: “NIST chose Keccak over the four other excellent finalists for its elegant design, large security margin, good general performance, excellent efficiency in hardware implementations, and for its flexibility. . . . Additionally, Keccak complements the existing SHA-2 family of hash algorithms well. NIST remains confident in the security of SHA-2 which is now widely implemented, and the SHA-2 hash algorithms will continue to be used for the foreseeable future, as indicated in the NIST hash policy statement. One benefit that Keccak offers as the SHA-3 winner is its difference in design and implementation properties from that of SHA-2. It seems very unlikely that a single new cryptanalytic attack or approach could threaten both algorithms.”

So NIST did not choose Keccak because it performs better overall than SHA-2 (it doesn’t) or because it is more secure than SHA-2 (it isn’t). NIST chose Keccak because its design (sponge construction) is so different from SHA-2’s design (Merkle–Damgård construction), as insurance in the unlikely event that SHA-2 is ever broken.

EhNoNameUs October 2, 2012 7:16 PM

Where might an interested observer go to learn about the differences between Keccak and the other finalists that motivated the choice of Keccak over other alternatives such as Skein?

Perseids October 2, 2012 8:21 PM

@Hanno
Take a look at lattice based cryptosystems like NTRUEncrypt. Those are believed to be secure even against quantum computing attacks.
But really, even though I am in favour of developing those algorithms today, in the hope that they will have proven trustworthy by the time we need them, quantum computing is far from being useful in the foreseeable future.

Nick P October 2, 2012 8:50 PM

I think I’ll probably be using Skein anyway due to its performance benefits. Legacy or higher risk stuff stays on SHA512 variants.

Side note: I think the worrying about quantum attacks is wasted effort, similar to the average developer picking the best cipher for SSL or file encryption. Usually, attackers sidestep its security and hit one of the many easier vectors. We’d get a higher payoff defending those vectors.

Also, note to Schneier: eSTREAM produced some nice stream ciphers. I also second a better password hashing scheme contest.

aikimark October 2, 2012 9:23 PM

@Bruce

(Or the security of any of the four SHA-2 functions, for that matter.)

Did you mean to write:
(Or the security of any of the four SHA-3 finalists, for that matter.)

Jrd October 2, 2012 10:09 PM

@aikimark,

No, I think he did mean SHA-2. He’s saying that he has no reservations about SHA-224, SHA-256, SHA-384 or SHA-512, which is why he didn’t think a SHA-3 was necessary.

Randall October 2, 2012 11:35 PM

Via the announcement, I noticed that NIST stated as policy a while back that there’s ‘no need or plan to transition applications from SHA-2 to SHA-3.’ Sounds like what you’re looking for:

http://csrc.nist.gov/groups/ST/hash/policy.html

Said it before, but even if SHA-2 isn’t looking creaky like we feared, now that we’ve gone through the process we might as well use the output. (Suitability for small hardware and difference from SHA-2, for starters.)

I hope the tree hashing and authenticated encryption modes the authors apparently designed get use too.

Nick P October 3, 2012 1:32 AM

@ Randall

“Said it before, but even if SHA-2 isn’t looking creaky like we feared, now that we’ve gone through the process we might as well use the output. (Suitability for small hardware and difference from SHA-2, for starters.)”

We can always use it for diversity purposes. The advantage of implementing crypto “suites” rather than a single standard algorithm is that you can randomly select an algorithm that others will still be able to use, while the adversary has that many more algorithms to optimize their breaking efforts against.
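As a minimal sketch of that idea (the suite contents and function names here are hypothetical): pick a hash at random from a whitelist both sides support and send its name along with the digest, so peers interoperate while an attacker must prepare optimized attacks against every member.

```python
# Hypothetical "hash suite" sketch: the algorithm name travels with the
# digest, so any member of the agreed suite can be chosen at random.
import hashlib, secrets

SUITE = ("sha256", "sha512", "sha3_256")  # sha3_* needs Python 3.6+

def suite_digest(data: bytes):
    name = secrets.choice(SUITE)          # random per-message selection
    return name, hashlib.new(name, data).hexdigest()

def suite_verify(data: bytes, name: str, digest: str) -> bool:
    if name not in SUITE:                 # reject anything outside the suite
        return False
    return hashlib.new(name, data).hexdigest() == digest

name, digest = suite_digest(b"message")
assert suite_verify(b"message", name, digest)
```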

NordicStudent October 3, 2012 2:39 AM

Hey you native English speakers out there. Could you please specify how to pronounce “Keccak”?
Thanks!

Nathan October 3, 2012 3:17 AM

Whoa, is it that time again? It feels as if the competition was announced yesterday… and here we are, already. Time sure flies when you’re having fun!

Anyways, congratulations to the Keccak team.

Clive Robinson October 3, 2012 3:44 AM

@ Scott Paeth,

As for stream ciphers, is there anything wrong with Europe’s eStream contest?

Yes, they’ve not been approved by NIST, so they won’t get into a FIPS, so they probably won’t be used…

I know that’s a horrible thing to say, but it’s the way of the world. Unless ‘Europe’ gets its act together to force the issue by mandating only EU-approved cipher use, we will continue to end up using NIST-only, and that will be to the detriment of all.

Not being nasty, but the US is not that good at providing standards that remain useful in the long term. As such it’s a bit like what used to happen with telephone standards during the 1960s through to the 90s: every country had its own marginally different standards for various reasons (often as part of “protectionism”), and this caused lasting harm.

With NIST’s comments they appear to have finally woken up to the idea that “one size fits all” does not work when it comes to primitives.

What we actually need is standard frameworks into which any primitive (that fits certain base requirements) can be dropped and used in any of a series of standard modes, which are in effect themselves primitives in a framework into which new modes can be dropped. And in turn these lower frameworks would drop into a larger, more general framework.

But importantly, to meet the standards, the end devices need to be able to support the “plug-n-play” primitives and modes.


Bruce Schneier October 3, 2012 4:02 AM

NIST will publish a full report with their detailed analysis. They have not yet.

Bruce Schneier October 3, 2012 4:04 AM

“Is there a class of application for which the current stream cipher options are too slow?”

To me, almost as important as the actual standard is the research that the process encourages. I think the community has a lot to learn about the design and analysis of stream ciphers that work at speeds faster than RC4. And RC4 is optimized for 8-bit processors — we could use some more modern designs.

Bruce Schneier October 3, 2012 4:06 AM

“@aikimark, No, I think he did mean SHA-2. He’s saying that he has no reservations about SHA-224, SHA-256, SHA-384 or SHA-512, which is why he didn’t think a SHA-3 was necessary.”

Right.

Hanno October 3, 2012 8:43 AM

@Perseids:
I’m aware of NTRU, but it has two problems:
1. It’s patented.
2. I don’t see it used widely, and thus it’s probably not analyzed widely.
A NIST competition for a new post-quantum public key standard would probably give any selected algorithm the necessary attention: crypto experts would take a close look at it and point out any failures. (Knowing that other suggestions for pq-algorithms, like SFLASH, have already failed.)

Bram Cohen October 3, 2012 9:35 AM

Exactly one post ago you were saying that no winner should be picked because none of the SHA-3 candidates were a big enough improvement over SHA-2, even after all the work that has gone into it. Then at the end of this post you suggest, in all seriousness, that there should be a new competition for a stream cipher. There’s this thing called counter mode which you might want to learn about.
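Counter mode does just that: it turns a block cipher into a stream cipher by encrypting successive counter values to produce a keystream. A minimal sketch using the third-party pyca/cryptography package (an assumed dependency; recent versions let you omit the backend argument):

```python
# AES in CTR mode used as a stream cipher: the keystream is AES applied to
# an incrementing counter, XORed with the plaintext. Never reuse a
# key/nonce pair.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)    # AES-256 key
nonce = os.urandom(16)  # initial counter block

enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = enc.update(b"attack at dawn") + enc.finalize()

dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
assert dec.update(ciphertext) + dec.finalize() == b"attack at dawn"
```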

Zooko Wilcox-O'Hearn October 3, 2012 9:41 AM

For “what to do next”, how about a really fast hash function?

That may sound flippant, but it is something that is actually sorely needed, and it seems feasible. How about a hash function that is faster than MD5, and much faster than SHA-256, in software on 32-bit ARM?

Interestingly, we already have many examples of hash functions that take about 11 to 15 cycles per byte on ARM and are not known to be insecure. Also interesting is the fact that if you take Keccak, Skein, or Blake and cut them down to a reduced-round version that is enough to perfectly resist all known attacks, they each come out at about 13 cpb on ARM. So 11–15 cpb seems quite feasible from that perspective, and a very fast hash function running at maybe 2 or 4 cpb on ARM is also not out of the realm of possibility.

Here are my notes on that:

https://plus.google.com/108313527900507320366/posts/4ZPRdvpzBTJ

Okay, now about your suggestion of a stream cipher: I’m surprised you say that! The eStream project already gave us Salsa20, which seems quite serviceable, seems to have a great security margin, and is already within about a factor of two of the fastest stream ciphers (http://bench.cr.yp.to/results-stream.html). Is there anything wrong with Salsa20? How much better could we hope to do?

Perseids October 3, 2012 9:45 AM

@Hanno
Concerning 1: As far as I know it is only patented until 2021.
Concerning 2: I believe there is some active research in the direction of provable security going on. See for example “Making NTRU as Secure as Worst-Case Problems over Ideal Lattices” by Stehlé and Steinfeld.
Also I don’t think such a competition works well for asymmetric cryptography. Hash functions and block ciphers are more of a dark art, whereas public key cryptography is even more math heavy and takes more luck and time to find and analyse.

other human October 3, 2012 10:01 AM

From an unnamed source within the NIST:

On October 4, 2012, NIST is announcing a new open competition for a new hash function standard to be called SHA-4 to replace all previous obsolete hash standards.

Zooko Wilcox-O'Hearn October 3, 2012 10:07 AM

@Hanno, @Perseids: you folks would probably be interested in the “post-quantum crypto” field: http://www.pqcrypto.org/

There is a direct tie-in to hash functions here: one of the approaches to post-quantum digital signatures is hash-based digital signatures. In my opinion, the argument for the security of hash-based digital signatures is strictly better than the argument for the security of any other scheme. (Because a hash-based digital signature scheme can be broken only if second-preimages can be found on the hash function, but every other scheme can be broken if either second-preimages can be found on a hash function or an attack can be found on the signature function. This observation is due to David-Sarah Hopwood.)
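To illustrate why the security argument reduces to the hash alone, here is a toy Lamport one-time signature over SHA-256 (illustrative only; real hash-based schemes such as Merkle signatures add many-time use and much smaller keys):

```python
# Toy Lamport one-time signature: forging requires defeating the hash.
# WARNING: each key pair may sign at most one message.
import hashlib, os

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]  # 256 secret pairs
    pk = [(H(a), H(b)) for a, b in sk]                           # their hashes
    return sk, pk

def sign(sk, message):
    d = int.from_bytes(H(message), "big")
    return [sk[i][(d >> i) & 1] for i in range(256)]  # reveal one secret per bit

def verify(pk, message, sig):
    d = int.from_bytes(H(message), "big")
    return all(H(sig[i]) == pk[i][(d >> i) & 1] for i in range(256))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
assert not verify(pk, b"goodbye", sig)
```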

So, one promising approach to practical and secure post-quantum digital signatures is to start with an efficient hash function and build on top of that. Unfortunately Keccak (SHA-3) is not nearly efficient enough for practical use in that way in the near term.

Hash-based digital signatures are one of two reasons why I’m hoping for much more efficient hash functions (as mentioned in my previous comment). The other reason is bulk data processing, such as when storing large files in an integrity-checking and deduplicating filesystem such as ZFS, a distributed integrity-checking filesystem such as Tahoe-LAFS (which I am a designer of), or an integrity-checking distributed revision control system such as git.

We’ve posted some notes about post-quantum crypto in our “100 Year Cryptography” project on the Tahoe-LAFS wiki:

https://tahoe-lafs.org/trac/tahoe-lafs/wiki/OneHundredYearCryptography

(Spoiler: the main thing we learned from the “100 Year Cryptography” project is that you don’t need to do anything about hash functions or digital signatures in the short term in order to achieve 100-year-secure cryptography, but you do need to do something about encryption in the short term. That’s because an attacker in the distant future will no longer have access to a victim who will accept old-style hashes or digital signatures, but he will still have access to old ciphertexts.)

Alan Kaminsky October 3, 2012 10:17 AM

@Zooko: For “what to do next”, how about a really fast hash function?

Take a look at:

J. Aumasson and D. Bernstein. SipHash: a fast short-input PRF. Cryptology ePrint Archive, Report 2012/351, September 19, 2012. http://eprint.iacr.org/2012/351

SipHash is a keyed hash function that is claimed to be (second) preimage resistant, but not collision resistant. There are many hash function applications that require preimage resistance but not collision resistance; a message authentication code (MAC) is one example. In such applications, using a collision resistant hash function is overkill, and using a simpler and faster preimage resistant hash is perfectly secure.
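For a feel of how small SipHash is, here is a from-memory pure-Python sketch of SipHash-2-4 (a sketch, not a vetted implementation; check it against the paper’s test vectors before trusting it):

```python
# SipHash-2-4 sketch: 2 SipRounds per 8-byte word, 4 finalization rounds.
MASK = 0xFFFFFFFFFFFFFFFF

def _rotl(x, b):
    return ((x << b) | (x >> (64 - b))) & MASK

def _round(v0, v1, v2, v3):
    v0 = (v0 + v1) & MASK; v1 = _rotl(v1, 13); v1 ^= v0; v0 = _rotl(v0, 32)
    v2 = (v2 + v3) & MASK; v3 = _rotl(v3, 16); v3 ^= v2
    v0 = (v0 + v3) & MASK; v3 = _rotl(v3, 21); v3 ^= v0
    v2 = (v2 + v1) & MASK; v1 = _rotl(v1, 17); v1 ^= v2; v2 = _rotl(v2, 32)
    return v0, v1, v2, v3

def siphash24(key: bytes, msg: bytes) -> int:
    k0 = int.from_bytes(key[:8], "little")
    k1 = int.from_bytes(key[8:], "little")
    v0, v1 = k0 ^ 0x736F6D6570736575, k1 ^ 0x646F72616E646F6D
    v2, v3 = k0 ^ 0x6C7967656E657261, k1 ^ 0x7465646279746573
    # Pad to a multiple of 8 bytes; the final byte is the length mod 256.
    padded = msg + b"\x00" * (7 - len(msg) % 8) + bytes([len(msg) & 0xFF])
    for i in range(0, len(padded), 8):
        m = int.from_bytes(padded[i:i + 8], "little")
        v3 ^= m
        v0, v1, v2, v3 = _round(*_round(v0, v1, v2, v3))
        v0 ^= m
    v2 ^= 0xFF
    for _ in range(4):
        v0, v1, v2, v3 = _round(v0, v1, v2, v3)
    return v0 ^ v1 ^ v2 ^ v3

# Test vector from the SipHash paper (key 00..0f, message 00..0e):
assert siphash24(bytes(range(16)), bytes(range(15))) == 0xA129CA6149BE45E5
```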

BTW, Aumasson and Bernstein were authors of SHA-3 candidates (finalist BLAKE and second-round candidate CubeHash, respectively).

Ravinwind October 3, 2012 10:26 AM

My first choice was Blake, because of its speed and its stream-cipher (Salsa20) base. But when I saw Skein’s documentation, I changed my mind to Skein, because of its block-cipher design; I know block-cipher designs. I love pySkein (Python). Look up the eBASH and XBX data: Blake and Skein are consistently the fastest, and Keccak is third fastest. Also, in embedded systems Blake and Skein have the smallest size, so they fit in small systems better than Keccak. NIST chose Keccak anyway, despite its cons, for its pro: the dissimilarity to SHA-2 (which is still based on the “old” MD5 lineage). Switch SHAs when needed; there are now 13 SHAs to choose from, plus the non-standard ones developed during the SHA-3 competition. The day of field-programmable gate arrays (FPGAs) is here; reprogram when necessary.

Curby October 3, 2012 11:19 AM

Then at the end of this post you suggest, in all seriousness, that there should be a new competition for a stream cipher.

Stream ciphers have their place; even if block ciphers can emulate the behavior of stream ciphers, they generally cannot compete on performance.

If you only have a hammer, everything looks like a nail. A toolset should be well-rounded and well-maintained, with new tools periodically replacing obsolete tools.

fast hardware won’t give the attackers an advantage if the users have it too

W was arguing that many users won’t have it too. My hunch is many people use hash functions without hardware acceleration, and many attackers hit hash functions with hardware acceleration. A smart implementation of a secure hash function will resist such attacks, but the point remains that the attackers have a speed advantage in the real world.

W October 3, 2012 12:36 PM

I did a bit of additional reading, and it seems like the difference between Keccak and SHA-2 is smaller than I expected for password hashing.
While the throughput for Keccak is much larger than the throughput for SHA-2, the difference in throughput per area, which is what an attacker cares about, is much smaller. It seems to be worse by a factor of two or so.

Still, since scrypt and bcrypt are much better than PBKDF2 with any popular hash function, the point is kind of moot. The choice of SHA-3 should not depend on how good it is for password hashing.

Instead we should push for standardizing scrypt and getting people to use it. There is some effort underway to get an RFC describing scrypt.
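For reference, a sketch of deriving a key with scrypt via `hashlib.scrypt` (in Python’s standard library from 3.6 on, backed by OpenSSL; the cost parameters below are common interactive-login suggestions, not a recommendation):

```python
import hashlib, os

salt = os.urandom(16)
# n = CPU/memory cost, r = block size, p = parallelism; this setting uses
# 128 * r * n = 16 MiB of memory, which is the point: memory hurts ASICs.
key = hashlib.scrypt(b"password", salt=salt, n=2**14, r=8, p=1, dklen=32)
print(key.hex())
```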


The thing I care far more about is how fast a hash is in software and how secure it is. In that regard both Blake and Skein are far better than Keccak. Pity Blake didn’t make it.

David Leppik October 3, 2012 1:26 PM

@Steve: I’m with you; we need a replacement competition for PBKDF2, if for no other reason than to promote better password security and to shine some light on scrypt, which, the Internet claims, remains too new to trust.

I’m a software developer, not a full-time security guy, and yet I’ve had to deal with weak password hashing several times in the last few months. Developers and IT guys either (a) don’t realize it’s a problem, or (b) don’t realize that there’s a solution. Bad passwords aren’t going away, so the only solution is to make password guessing more expensive.

Nick P October 3, 2012 5:34 PM

@ W

I didn’t know about scrypt. Thanks for mentioning it. It sounds a lot like what I’ve been looking for in a PBKDF. I’ve often said we need to design one that is provably difficult to implement quickly in hardware and software. Critics appear to say it’s better to make a fast one that we iterate. I’d note it hasn’t been better so far….

I thought it was neat that they used the same approach as one of my old designs: using a decent amount of randomly accessed memory to make the computation more expensive. They do a bit more than that, though. I hope they keep the good work coming.

Scrypt Slides
http://www.tarsnap.com/scrypt/scrypt-slides.pdf

OFF TOPIC from the same site

I couldn’t resist posting this. They integrate Scrypt into their backup service “for the truly paranoid.” They have a page where they explain reasons that they charge for “picodollars per byte.” Reason 3:

“Specifying prices in picodollars reinforces the point that if you have very small backups, you can pay very small amounts. Unlike some people, I don’t believe in rounding up to $0.01 — the Tarsnap accounting code keeps track of everything in attodollars and when it internally converts storage prices from picodollars per month to attodollars per day it rounds the prices down.”

Picodollar charging, attodollar accounting, and rounding amounts down: I think that’s three firsts in the service-oriented technology industry. I figure the cell phone companies and cloud providers should adopt the picodollar model, too. Now, we just have to come up with a way to explain why it’s better that way for them. 😉

Mike Hamburg October 4, 2012 1:14 AM

A fast new stream cipher, eh? Of course there is Salsa20 and its younger sibling ChaCha20, but if you’d like something with better hardware performance, perhaps Keccak itself is the way to go?

Still, a NIST / Suite B standard would be nice.

Sensible Crypto October 5, 2012 3:11 AM

I actually think that NIST should standardize a new block cipher, rather than a stream cipher. It should be AES with the related-key attacks mitigated and the block size increased to 256 bits.

A common hard drive will soon store hundreds of terabytes, and collisions will be more and more likely with only 128-bit blocks. Disk-encryption re-keying every x terabytes doesn’t sound like a good idea.

greg October 8, 2012 5:59 PM

@Sensible Crypto
Well, with the birthday limit, and assuming a block size of 8 bits (i.e., a key per byte), I get a “collision” every 18 million terabytes. But then we lose a bit with different modes, I guess… but AFAIK it’s still 2^64 for a 128-bit block. And there is Threefish…
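A quick check of the arithmetic (the birthday bound expects a collision after roughly 2^(n/2) blocks for an n-bit block cipher):

```python
blocks = 2 ** 64           # birthday bound for a 128-bit block cipher
print(blocks * 16 / 1e12)  # ~2.95e8 TB counting full 16-byte blocks
print(blocks / 1e12)       # ~1.8e7 TB counting single bytes, which is
                           # where the "18 million terabytes" comes from
```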

@Hanno
As someone who once worked on quantum fancy-pants stuff, I assure you that you have nothing to worry about.

First of all, qubits don’t scale: a 1024-qubit computer cannot be used for 1025-qubit factorization. Second, adding a qubit is 2x harder than in the previous generation. Thirdly, and often left out, you also need to scale the number of operations before decoherence for it to work.

Right now the number of operations is totally laughable, and the claims of “7 qubits” are absolutely misleading.

More or less, quantum computers exchange an algorithm not in P (is it exponential or super-polynomial?) for rather high-order polynomial complexity, at the cost of exponentially-difficult-to-build hardware.

We will find a factoring method in P long before quantum computers matter in breaking public key systems. Right now side-channel attacks are pretty good anyway.

Random832 October 9, 2012 3:21 PM

@Nick P Now, we just have to come up with a way to explain why it’s better that way for them. 😉

“2.0 × 10^15 attodollars” is unambiguous in precisely the way that “.002 cents” isn’t.

Shevek October 10, 2012 5:31 AM

Does anybody know about the official name and its variants?

“SHA3” looks OK for, let’s say, the 256-bit or 512-bit variant. But if you want to specify the bit length, “SHA3256” or “SHA3-256” are a bit confusing and too similar to the actual SHA256.

Mike Amling October 19, 2012 8:39 PM

I read that Keccak’s round function is reversible. That’s pretty obvious for the rho, pi and iota transforms. For chi to be reversible, x’s range has to be odd, and since Keccak’s x has a range of 5, that’s fine.
What is the criterion for the theta step to be reversible? I ran some experiments for small x, y and z ranges. Theta where [x][y][z]’s ranges are [3][3][2] is not reversible, and [3][3][3] is not reversible. Where is the demonstration that theta for [5][5][64] is reversible?
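The kind of experiment described can be scripted directly: treat theta (as defined in the Keccak reference) as a linear map over GF(2), apply it to every basis vector, and check whether the resulting matrix has full rank. A sketch:

```python
# Theta as a GF(2)-linear map on an X*Y*Z-bit state; invertible iff the
# matrix of basis-vector images has full rank.
def theta(bits, X, Y, Z):
    idx = lambda x, y, z: (x * Y + y) * Z + z
    C = [[0] * Z for _ in range(X)]  # column (sheet) parities
    for x in range(X):
        for z in range(Z):
            for y in range(Y):
                C[x][z] ^= bits[idx(x, y, z)]
    out = bits[:]
    for x in range(X):
        for z in range(Z):
            d = C[(x - 1) % X][z] ^ C[(x + 1) % X][(z - 1) % Z]
            for y in range(Y):
                out[idx(x, y, z)] ^= d
    return out

def theta_invertible(X, Y, Z):
    n = X * Y * Z
    rows = []
    for i in range(n):  # image of each basis vector, packed into an int
        e = [0] * n
        e[i] = 1
        rows.append(int("".join(map(str, theta(e, X, Y, Z))), 2))
    rank = 0            # Gaussian elimination over GF(2)
    for col in range(n):
        pivot = next((r for r in range(rank, n)
                      if (rows[r] >> (n - 1 - col)) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(n):
            if r != rank and (rows[r] >> (n - 1 - col)) & 1:
                rows[r] ^= rows[rank]
        rank += 1
    return rank == n

# [3][3][2] and [3][3][3] come out non-invertible, as reported above; the
# full [5][5][64] case takes a minute or two in pure Python.
for dims in ((3, 3, 2), (3, 3, 3), (5, 5, 64)):
    print(dims, theta_invertible(*dims))
```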

Mike Hamburg October 31, 2012 3:58 PM

@TJ: I think that article is about half-right.

Rijndael was a good choice for AES. It’s fast in big CPUs, small CPUs and in hardware, and it’s simpler than the other competitors. NIST weren’t as worried about side-channel attacks then as they are now, so the difficulty of masking table lookups and the difficulty of masking power consumption for the S-box in hardware were overlooked. By the way, my implementation of AES with vector permutations (“vpaes”, which is now used by OpenSSL when SSE3 is available but AES-NI is not) is fast and timing-attack-resistant even when encrypting one block at a time. It’s slightly slower than Kasper-Schwabe, though, and like Kasper-Schwabe it requires SSE3 or AltiVec.

For SHA-3, the basic reason for the choice of Keccak is that SHA-2 is good enough. It wasn’t broken as NIST had feared, and it performs well enough in software and even in hardware. So a replacement isn’t needed. This is why Bruce suggested that NIST should declare no winner. Instead, NIST went for Keccak because it’s very different from SHA-2.

GCM is a somewhat strange choice, though. It’s clean, with its field size exactly matching the AES block size, but that doesn’t seem worth the software performance penalty.
