Schneier on Security
A blog covering security and security technology.
September 24, 2012
SHA-3 to Be Announced
NIST is about to announce the new hash algorithm that will become SHA-3. This is the result of a six-year competition, and my own Skein is one of the five remaining finalists (out of an initial 64).
It's probably too late for me to affect the final decision, but I am hoping for "no award."
It's not that the new hash functions aren't any good, it's that we don't really need one. When we started this process back in 2006, it looked as if we would be needing a new hash function soon. The SHA family (which is really part of the MD4 and MD5 family) was under increasing pressure from new types of cryptanalysis. We didn't know how long the various SHA-2 variants would remain secure. But it's 2012, and SHA-512 is still looking good.
Even worse, none of the SHA-3 candidates is significantly better. Some are faster, but not orders of magnitude faster. Some are smaller in hardware, but not orders of magnitude smaller. When SHA-3 is announced, I'm going to recommend that, unless the improvements are critical to their application, people stick with the tried and true SHA-512. At least for a while.
I don't think NIST is going to announce "no award"; I think it's going to pick one. And of the five remaining, I don't really have a favorite. Of course I want Skein to win, but that's out of personal pride, not for some objective reason. And while I like some more than others, I think any would be okay.
Well, maybe there's one reason NIST should choose Skein. Skein isn't just a hash function, it's the large-block cipher Threefish and a mechanism to turn it into a hash function. I think the world actually needs a large-block cipher, and if NIST chooses Skein, we'll get one.
Posted on September 24, 2012 at 6:59 AM
• 67 Comments
I'm curious as to why you think the world needs a large-block cipher.
This is why you'd be rubbish in the Olympics...
...you'd say "Hey, I'm about as fast as the rest - so whoever wins, wins."
...and you ask "How is this very fast running in a big circle going to benefit mankind?"
(plus you'd probably not look great in skintight Spandex)
Why not 4 (or 8) out of initial 64?
Because our hands have 5 fingers.
Possibly because the advancers and failures at each round are chosen objectively, rather than by an arbitrary quota or by one-on-one competition
"Why not 4 (or 8) out of initial 64?"
Standards are better with fewer options. Already there are too many hash function options -- more won't help.
"I'm curious as to why you think the world needs a large-block cipher."
There are applications for encryption algorithms where the block size is the limiting factor. For example, there are security problems when the number of blocks encrypted with a key approaches 2^(n/2), where n is the block size.
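That limit is the birthday bound: among randomly distributed n-bit values, a collision becomes likely after only about 2^(n/2) of them. A toy simulation (illustrative only, using small values rather than real cipher blocks) shows the effect:

```python
import random

def values_until_repeat(n_bits, rng):
    """Draw random n-bit values until one repeats; return the count drawn."""
    seen = set()
    while True:
        v = rng.getrandbits(n_bits)
        if v in seen:
            return len(seen) + 1
        seen.add(v)

rng = random.Random(2012)
trials = [values_until_repeat(24, rng) for _ in range(200)]
avg = sum(trials) / len(trials)
# With 24-bit values, a repeat shows up after roughly 2**12 = 4096 draws
# on average -- far sooner than the 2**24 possible values.
```

For a 128-bit block cipher that bound is 2^64 blocks, which is reachable in high-volume applications; a large-block cipher pushes it far out of reach.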
"This is why you'd be rubbish in the Olympics... ...you'd say 'Hey, I'm about as fast as the rest - so whoever wins, wins.' ...and you ask 'How does this very fast running in a big circle going to benefit mankind?'"
This isn't just a contest; it's a new standard. I'm sure we can figure out some way to pick a winner -- and the winner would be better than SHA-2. My issue is that the winner isn't enough better than the old standard to mandate a switch.
Dom De Vitto: "(plus you'd probably not look great in skintight Spandex)"
Don't you mean "Skeintight Spandex"?
Wondering about the definition of "about" here, for NIST has been playing with our nerves for 3 months now; they succeeded in shifting our concerns (at least mine) from "what SHA-3?" to "when SHA-3?"...
If there isn't a hashing algorithm called "Corned Beef", there should be.
I'm surprised the new options aren't that much better. SHA-2 was formalized in 2001, but the functions were known well before that.
While cryptography moves cautiously, though not to a fault, I'd have expected more advances in those years.
@Zombie John: Oh, well done, Sir!
Wait, you say:
Well, maybe there's one reason NIST should choose Skein. Skein isn't just a hash function, it's the large-block cipher Threefish and a mechanism to turn it into a hash function.
Then you say:
I think the world actually needs a large-block cipher, and if NIST chooses Skein, we'll get one.
But the world already has a large-block cipher, Skein. What's to stop the world from using it as is? Why does it need a magic NIST stamp of approval to become usable?
(Probably a silly question, but I'm seriously wondering. After all, if Skein is one of the final five, that means it has presumably gotten a lot of attention, and doesn't have any bad problems.)
I for one will be happy that with SHA-3 the recommended hash functions won't be susceptible to length extension attacks any more. It's one of the subtleties that few people get about SHA-1 and SHA-2 (see for example all the broken h(key||message) MACs).
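The broken construction and its standard fix are a few lines of Python (the key and message here are just placeholders):

```python
import hashlib
import hmac

key = b"server-side secret"
msg = b"amount=100&to=alice"

# Vulnerable: with Merkle-Damgard hashes (MD5, SHA-1, SHA-2), an attacker
# who sees sha256(key + msg) can compute a valid tag for msg plus appended
# data without knowing the key -- a length extension attack.
naive_tag = hashlib.sha256(key + msg).hexdigest()

# The standard fix: HMAC's nested construction is not extendable.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Verify tags with a constant-time comparison to avoid timing leaks.
valid = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())
```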
With Skein as Sha3, Threefish will get continuous attention from cryptographers world wide and a substantial amount of documentation and implementation. There is no better way to spread knowledge and trust in the algorithm.
This blog is fascinating. I use an application that utilizes Blowfish and even though the application itself is aging, I've kept using it in your honor!
What is chosen by NIST, and indeed whether anything is chosen at all, does affect a lot of people.
As Perseids said, mindshare is part of it; I remember that Rijndael was the winner of the AES competition, but I don't know who the other finalists were.
FIPS standards matter because they affect what software devs actually choose to implement in their software. For many companies, govt. agencies are at least SOME part of the customer base and therefore adhering to these standards is a requirement.
Aren't all the SHA-3 candidates designed to not be vulnerable to the hash length extension attacks that someone gets bitten by every now and then using the existing options?
Any idea how long it will be until they announce it?
'SHA-2 is good enough' just suggests 'take your time switching to SHA-3', because SHA-3 will still be an improvement in security margin and performance, and a well-studied algorithm. If you've got it, might as well use it, even if the benefit is small.
I also think it's worth advocating for making more modes of operation standard (including the ones talked about in the Skein paper, whether the winning hash is Skein or not). Authenticated encryption, tree hashing, and randomized hashing all have concrete uses (respectively: better performance everywhere, better performance on SIMD/parallel architectures, and signatures that don't require collision resistance). If a tweakable cipher comes out of this, I'd hope the government eventually standardizes its use for encryption, and standardizes some operation modes that take advantage of the tweak parameter.
The need for SHA-3 isn't as urgent as the need for AES, but there's still some standardization work we'll someday really wish we'd done; might as well get started before we need it.
@ Bruce Schneier
I'm also in favor of keeping the algorithms that are field-proven in the field. Even your Blowfish algorithm still gets serious use in the form of Bcrypt. MD5 is still used for its speed boost where collisions aren't an issue. I haven't heard about anyone cracking IDEA, RIPEMD-160/320, or Whirlpool. Plenty more examples.
I'd take it a step further, though. (And have in the past.) I'm always in favor of diversity at many levels to limit the damage of automated attacks. The diversity scheme here would be to let all of the SHA-3 candidates in and have each endpoint pick one randomly (maybe just the server). If done this way, a toolkit with an automated attack via a bad hash algorithm would have a 1 out of 5 chance of working.
This can help protect encrypted volumes from brute forcing too, TrueCrypt style. In TC, there are several different ciphers, several hash algorithms, and even the option of combining ciphers. Volume metadata doesn't tell what was used for encryption: the software tries a bunch of combinations with your key until it figures it out. Naturally, a brute force effort requires many more resources if the crypto was chosen randomly. Throw in a few SHA-3 candidates and the situation gets worse for the attacker.
And so on. Of course, with high volume/scale apps, pre-selected methods are better than dynamically determined for performance and cost reasons.
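A sketch of that diversity idea using only standardized hashes (the pool and selection policy here are made up for illustration; the sha3_* names require Python 3.6+):

```python
import hashlib
import random

# Hypothetical pool of standardized hashes. Each endpoint (or just the
# server) commits to one at setup time, so an automated attack against
# any single algorithm only hits a fraction of deployments.
POOL = ("sha256", "sha512", "sha3_256", "sha3_512")

def pick_algorithm(rng=None):
    """Randomly select a hash name from the pool for this endpoint."""
    rng = rng or random.SystemRandom()
    return rng.choice(POOL)

algo = pick_algorithm()
digest = hashlib.new(algo, b"handshake transcript").hexdigest()
```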
It's not that the new hash functions aren't any good, it's that we don't really need one.
SHA-512 is still looking good.
Even worse, none of the SHA-3 candidates is significantly better.
Open-PGP needs one.
In PGP and GnuPG, key generation uses SHA-1 inextricably in the fingerprint.
The general consensus is to hold off on overhauling the key generation process until a replacement hash is agreed upon, which, in turn, is waiting for NIST.
If NIST doesn't pick any, then people will probably go ahead with SHA-512.
If NIST does pick one, then there will probably be further delay to see how the 'new' one holds up in application use,
and SHA-512 won't be implemented, because "why do the overhaul twice, let's wait until the new one is 'vetted' by public use and scrutiny".
A paradoxical consequence of how a security advance can result in perpetuating use of an existing security-flawed step.
That adds about 2-3 bits of security at best and shaves off way more than that (assuming one of the functions is found to be weak) at worst.
Zombie John: Some cryptographers came up with the same joke--one of the other SHA-3 finalists is called Grøstl.
Played around with PySkein and Threefish was pretty blazing fast on my welfare debian system
A little off-topic here, but without waiting for SHA-3 to be awarded, is there a definitive hashing algorithm that I should be using for passwords with a new Ruby on Rails application that I am building? I *was* planning on using SHA-1, but should I be using SHA-256, SHA-512, or Bcrypt, or ??? My research was inconclusive. Advice appreciated.
"it's 2012, and SHA-512 is still looking good."
Cloudcracker.com accepts "SHA-512 unix" passwords for cracking. Is this different from the general SHA-512?
Grøstl has the advantage of having the best name of the five finalists.
Does anyone know if any operating systems or programs plan on implementing Threefish?
Hey Bruce, when you said:
"there are security problems when the number of blocks encrypted with a key approaches 2^n, where n is the block size."
I think you might've meant 2^(n/2).
Good day, sir.
I think the question on all of our minds is "when". When will SHA3 be announced? Were you given special information the rest of us don't have access to? Or, is it more of the same: "end of Q4 2012"?
Note especially: "As such general hashing algorithms (eg, MD5, SHA-1/256/512) are not recommended for password storage. Instead an algorithm specifically designed for the purpose should be used such as bcrypt, PBKDF2 or scrypt."
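Following that recommendation with nothing but the Python standard library, PBKDF2 looks like this (the iteration count is an illustrative figure; tune it for your own hardware):

```python
import hashlib
import hmac
import os

password = b"correct horse battery staple"
salt = os.urandom(16)        # unique per user; stored next to the hash
iterations = 100_000         # raise until hashing takes tens of milliseconds

stored_hash = hashlib.pbkdf2_hmac("sha512", password, salt, iterations)

def verify(candidate: bytes) -> bool:
    """Recompute with the stored salt/iterations and compare in constant time."""
    attempt = hashlib.pbkdf2_hmac("sha512", candidate, salt, iterations)
    return hmac.compare_digest(attempt, stored_hash)

ok = verify(password)
bad = verify(b"wrong password")
```

The salt defeats precomputed tables, and the iteration count makes each guess in a dictionary attack proportionally more expensive.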
The thing I want most is a standardization of common features, in particular tree-hashes, personalization and one-pass MACs.
I'm working in the distributed storage area (Tahoe-LAFS etc.), and one of the things that's really missing for these applications is a standardized tree-hash, to avoid each of them cooking up their own incompatible one.
Concerning the choice of primitive what is most important for me is that collisions against the 256 bit variant don't become viable in the next decades. So I for one welcome a conservative security margin, even if it's at the expense of a few clocks per byte.
"Cloudcracker.com accepts "SHA-512 unix" passwords for cracking. Is this different from the general SHA-512? "
They probably use a dictionary attack. So it has nothing to do with any flaw in the hash algorithm.
Unless you count it as a flaw that it's fast; but then, it isn't really meant for one-way encryption of passwords in the first place. To make dictionary attacks harder you need to pick an appropriately slow hashing algorithm (or slow it down by e.g. repeatedly applying it several thousand times.)
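The "repeatedly applying it" idea looks like this in Python (a naive sketch for illustration; vetted designs such as PBKDF2, bcrypt, and scrypt do this with extra structure and should be preferred in practice):

```python
import hashlib

def stretched(password: bytes, salt: bytes, rounds: int = 50_000) -> bytes:
    """Naive key stretching: feed the digest back through the hash many
    times, so each password guess costs the attacker `rounds` hashes."""
    digest = salt + password
    for _ in range(rounds):
        digest = hashlib.sha512(digest).digest()
    return digest

h1 = stretched(b"hunter2", b"salt-a")
h2 = stretched(b"hunter2", b"salt-b")  # same password, different salt
```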
"I think you might've meant 2^(n/2)."
Yes. Fixed. Thanks.
"When will SHA3 be announced? Were you given special information the rest of us don't have access to?"
I have no inside information on when SHA-3 will be announced. My guess is that they've made the decision, and are going over the final rationale again and again.
My guess is that it won't be Skein.
Do you see a need for moving towards replacing AES moreso than the SHA family at this time?
I know the attacks on AES aren't urgent, but you have said before that it's better to start replacing them before it gets that way and I'm curious to see which you think should have priority.
Beautiful. Much obliged.
Thank you too!
I'd say that the SHA-3 finalists have enough advantages to count as a significant improvement on SHA-2, it's just a pity that all the advantages could not be found in one candidate.
In particular there seems to be an inherent trade-off between efficiency on general purpose CPUs and on dedicated hardware - none of the candidates really shine in both categories. There are also varying performances going from 8-bit, 32-bit, 64-bit and on to larger SIMD units.
On the topic of orders of magnitude improvements:
Take the eBASH listing for the Intel Core i7-2600K. The slowest of the SHA-2s is SHA-256 at about 17 cycles per byte.
The fastest SHA-3 entry, Edon-R-512, is at 2.5 cpb. The fastest non-broken entry, BMW-512, at 3.6 cpb.
The fastest finalist is Blake-512, at 5.8 cpb, followed by Skein at 6.4 cpb.
Long story short, an order of magnitude improvement would call for a hash roughly four times faster than Skein.
Is this realistic with current designs, by tweaking and by better utilization of vector units, or would it take some new construction to retain the required security margins? As far as I can tell, both Skein and Blake already make pretty good use of the available resources.
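Those cycles-per-byte figures come from careful benchmarking like eBASH, but a ballpark estimate is easy to reproduce (a sketch; the clock frequency is an assumption you must set for your own CPU, and results are rough at best):

```python
import hashlib
import time

def approx_cycles_per_byte(name: str, cpu_hz: float = 3.4e9, mib: int = 16) -> float:
    """Hash a large buffer once and convert elapsed time to cycles/byte.
    cpu_hz is assumed, not measured -- treat the result as ballpark only."""
    data = b"\x00" * (mib * 2**20)
    start = time.perf_counter()
    hashlib.new(name, data).digest()
    elapsed = time.perf_counter() - start
    return elapsed * cpu_hz / len(data)

sha256_cpb = approx_cycles_per_byte("sha256")
sha512_cpb = approx_cycles_per_byte("sha512")
```

On most 64-bit machines SHA-512 comes out faster per byte than SHA-256, which is why the comparisons above use SHA-512 figures for the 512-bit candidates.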
Another reason to finalize SHA-3 now is the amount of time it takes people to adopt a new hash. There are still quite a few applications using MD5 around!
The Open Web Application Security Project (OWASP) resource you posted is very good.
Thanks for sharing.
@ David and Zaphod
You're welcome. Always glad to be helpful. :)
The sponge function stuff of Keccak, beautifully simple yet incredibly complex, made it my personal favorite, for entirely subjective reasons: I'm merely interested in cryptography, so I don't think of myself having any valuable opinion about this competition.
I did implement Keccak in a crypto-related simple proof-of-concept software project though, just for the lulz.
I don't know if we need a new hashing primitive, but Skein has something more, in that it also specifies how to use it for real-world applications (MAC, KDF, PRNG...), in particular thanks to its many possible inputs (e.g. key input). Granted, you can do this with any hash function, but I liked that the Skein paper actually addressed this.
We don't need a new hashing primitive, but we need standardized, well-known constructs for solving real-world problems.
Unless NSA knows of some weakness in SHA-2... In that case encouraging people to consider SHA-3 could be a wise precaution lest PRC catch up.
I've thought long and hard about disagreeing with Bruce Schneier, and I still think not endorsing a new algorithm is bunk.
Here are the pros for endorsing a new algorithm:
1) People who need an algorithm endorsed by NIST can use SHA-3, which is likely to be more secure, faster, and/or use less hardware than SHA-1 or SHA-2
2) SHA-1 is broken
3) SHA-2 will probably break before SHA-3
The only con I can think of is that it adds one more choice of NIST endorsed algorithm that we'll have to watch out for.
It's great that we have the opportunity to take a small, incremental step forward, rather than having to leap out of a pit. This is just how it goes with somewhat mature technologies. Maybe in 50 years, we'll come up with hash algorithms that will last for decades without being broken, and without any significant benefit to use newer ones, but we just aren't there yet.
"Yes. Fixed. Thanks."
Actually you fixed it the wrong way: it's 2^(n/2) for block size n, not 2^n for block size n/2.
@zombiejohn I love the corned beef name. I would love to see that come into play. "Suddenly Mary's Kitchen sales have gone up, unknown reasons why"
Bruce, Thanks for posting this education post.
I suspect the problem is that the original designers of the SHA-2 family (NSA?) were actually rather good at it. If a public process competition with some of the leading minds in cryptography can't make more than marginal improvements for algorithms in general purpose use*, there probably will not be an order of magnitude improvement available.
Therefore, the choice is between sticking it out with what we've got or using something marginally better. Whether any of the finalists in the SHA-3 process fulfills that criterion is another story.
On the plus side, there was a flurry of effort and attention on cryptanalysis of hash algorithms, so whatever choice NIST makes, the process was not in vain.
* - if my memory serves me right, some of the algorithms are significantly better as compared to the SHA-2 family on the right hardware, perhaps that might be enough motivation to use them.
I disagree (and I rarely disagree with you!). You don't wait to build a fire escape until the building is on fire. Similarly, we need a good alternative hash algorithm now, not when disaster strikes.
I believe that, in general, we should always have two widely-implemented crypto algorithms for any important purpose. That way, if one breaks, everyone just switches their configuration to the other one. If you only have one algorithm... you have nothing to switch to. It can take a very long time to deploy things "everywhere", and it takes far longer to get agreement on what the alternatives should be. Doing it in a calm, careful way is far more likely to produce good results.
The history of cryptography has not been kind, in the sense that many algorithms that were once considered secure have been found not to be. Always having 2 algorithms seem prudent, given that history. And yes, it's possible that a future break will break both common algorithms. But if the algorithms are intentionally chosen to use different approaches, that is much less likely.
Today, symmetric key encryption is widely implemented in AES. But lots of people still implement other algorithms, such as 3DES. 3DES is really slow, but there's no known MAJOR break in it, so in a pinch people could switch to it. There are other encryption algorithms obviously; the important point is that all sending and receiving parties have to implement the same algorithms for a given message BEFORE they can be used.
Similarly, we have known concerns about SHA-2, SHA-256, and SHA-512. Maybe there will never be a problem. So what? Build the fire escape NOW, thank you.
To "Dom de Vitto": The outcome of an Olympic event doesn't result in the entire government infrastructure being overhauled, at an enormous cost to taxpayers. Announcing a new federal encryption standard does, however. The author makes the point that we're not getting much bang for our buck, and that even that low bang is solving a non-problem. That makes sense to me.
you can already use Threefish if you download py-skein... played around with it, seems pretty solid
openbsd has a thread somewhere on misc where they talked about implementing it
For the sake of argument, if sha512 weren't holding up so well, would the SHA 3 competition be good enough to replace it for a while? Security is of course an arms race, but these things are also implemented slowly, so chosen today I would suspect it will take 10 years before it sees mass adoption. So the real question is, speculatively, will any of the SHA 3's be holding up 10 years from now, where sha512 might not be.
... be good enough to replace it for a while?
The problem with the question is what do you mean by "a while"?
You go on to say,
... speculatively will any of the SHA 3's be holding up 10 years from now, where sha512 might not be.
Ten years is actually a very short time when you are talking about "standards" especially when you are talking about a major investment of money such as infrastructure or government use.
On a more personal level think about the "utility meters" in your intake closet or medical Implanted Electronic Devices (IEDs) such as heart pacemakers, 25 years service life time would be the minimum for these devices.
Now ask yourself: do you really want some hacker with a grudge being able to get at your "smart meter" from the other side of the world and doing a Stuxnet number on your heating/AC? Or how about a drive-by serial killer changing the settings in your smart IED such that your pacemaker or insulin pump etc. makes you very sick or dead?
A problem NIST and other US-originated standards issuers have is the "one ring" mentality. That is, there should be only one method forever, or until it is so broken some form of chaos starts to happen. It flies in the face of what we know happens: things get old and break, usually at the most inconvenient of times.
So in High Reliability and High Availability systems we have developed ways to quickly swap out parts that break and replace them with stronger or more effective parts.
Sadly this does not appear to happen in other areas of life, due in the main to the "cost minimization" that is a major hallmark of unregulated markets' drive to "be efficient", and it almost always ends in a "race for the bottom" which promotes very small short-term gain over very large long-term loss.
Thus what happens in embedded systems such as smart meters and medical IEDs is that parts get "baked in" with no hope of replacement, only replacement of the whole unit. Now unlike physical defects, which have a time distribution, information defects in software etc. make all units fail at the same time, and in the case of security they all become vulnerable at the same time. How long do you think it would take to replace all the smart meters in the US, with a population of over 300 million? Then ask how much damage can be done in that time?
But also think about the cost of such an event, and who should pay? The current "free market" answer is that the customer or society should pay, not the manufacturer. That is, the manufacturer externalises the long-term large cost of their very deliberately chosen marginal short-term "efficiency gains" on the excuse that the customer should have (unlike the manufacturer) the ability to see into the future and make what is in effect an "omnipotent buying decision"...
Thus NIST and other standards bodies need to produce standards that take this into account and enforce the ability of in-place upgrade of "information parts" as part of the standards compliance process, as what is in effect a "social good".
However to do this we first need to "abstract" out the essential essence of various very low level components (AES, SHA-3, etc) into a common interface, but not just as an idealised form but also as an extensible form.
That is as Bruce has noted AES whilst having a block width suitable for many applications does not have a sufficient block width for some. So any standard should not "hard code" in restrictions at the interface, it should have an inbuilt ability to be extensible in some way.
However there is a downside to this: as information parts become broken, how do you stop them being used without making the systems unusable? That is, how do you manage transition reliably? This is an area we are currently getting to grips with, as experience with revocation lists for PK certs etc. has shown our initial ideas are usually far from ideal.
For those who wonder why I worry about "infrastructure" attacks, this is just one of the latest reasons,
Note the attribution (albeit questionable) to the same or similar group who did the number on RSA a little while ago.
Even if SHA-2 is considered unbreakable for the foreseeable future, we should adopt an even stronger encryption scheme. When the alien mothership swings into earth orbit and trains its antennae on our military C³ grid, we will wish we had done so when there was still time.
@A Nonny Bunny "Unless you count it as a flaw that it's fast; but then, it isn't really meant for one-way encryption of passwords in the first place."
Well, obviously that _is_ a flaw, for this purpose. Maybe there should be a standard for password hashing. I think there are several hashes that are [varying degrees of] explicitly meant for the purpose of hashing passwords. Does anyone know if there are any that common implementations of unix login tools can use out of the box?
Here is the abstract of my SHA-3 report. It may be modified slightly before publication:
Blake and Skein hash algorithms perform equivalently. They are faster than the other three SHA-3 finalists. Keccak showed the greatest speed variability; JH and Skein the least. Grøstl, JH, and Keccak were, in most measurement experiments, the slowest. Kerckhoffs's design philosophy is acceptable for all algorithms: security depends on the "key" message, not the secrecy of the algorithm. All five finalists were previously found to be secure, with best attacks close to brute-force difficulty. Blake and Skein would both be acceptable for Federal Information Processing Standard (FIPS) 180, Secure Hash Standard. Skein's documentation and submission data provided a more complete system than Blake's; therefore we recommend Skein over Blake and the other finalists.
Bruce Schneier's "no-new-standard" is correct IMO. Here is why: the Secure Hash Standard, FIPS 180-4 of March 2012, does not disambiguate SHA-2 from what will now add SHA-3. (That must be addressed.)
FIPS 180-5 (the next in line, or 6) will include SHA-3 -- it's been mandated by the Federal Register notice of Nov. 2, 2007.
We need to re-label SHA-224 etc. as SHA-2-224 etc., with reference to "SHA-2", and add SHA-3. Or something like Skein256, Blake512, etc. AHS (Advanced Hash Standard) has been proposed.
The current standard hash list has seven:
1. SHA-1
2. SHA-224
3. SHA-256
4. SHA-384
5. SHA-512
6. SHA-512/224 (truncated)
7. SHA-512/256 (truncated)
SHA-3 will add six more (AFAIK):
8. SHA-3-224
9. SHA-3-256
10. SHA-3-384
11. SHA-3-512
12. SHA-3-512/224 (truncated)
13. SHA-3-512/256 (truncated)
Thirteen "Standard" hashes to choose from -- too many. SHA-1, even though "cracked and bleeding", will still be needed, so we can't end-of-life any of them.
to: Zombie John
There is --- it's called SPAM...
Bruce, you initially postulated in September that the SHA-3 contestants are not overly speedier, though Xu Guo's ASIC evaluation PDF [en.wikipedia.org/wiki/Keccak] reveals that Keccak, in hardware, leaves the competition pretty much in the dust. What are your thoughts on this observation?
@Enigl : Surprisingly, part of the reason we don't really need a new hash, is that SHA-1 is a lot less "cracked and bleeding" than it was some 5 to 6 years ago.
It's a unique situation, but SHA-1 can be said to be a crypto algorithm that reversed the usual path of becoming weaker and weaker as time goes on: all of the strongest weakness claims have since been retracted.
HashClash claimed a new attack at the end of last year, at 2^61, but we're still waiting.
The whole point is, at the moment most security devices give you the option of MD5 (which is dead) or SHA-1, which is dying but still functionally useful.
We need SHA-3 now, so that we have a standard for interoperability that vendors can build in, so that when SHA-1 dies, our devices already have the ability to failover from a dead SHA-1 to something new, like we had from MD5 to SHA.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.