NIST Defines New Versions of SHA-512

NIST has just defined two new versions of SHA-512. They’re SHA-512/224 and SHA-512/256: 224- and 256-bit truncations of SHA-512, each with its own IV. They’ve done this because SHA-512 is faster than SHA-256 on 64-bit CPUs, so these new variants will be faster than SHA-224 and SHA-256 at the same output lengths.
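
A minimal sketch of what this looks like in practice, assuming a Python build whose OpenSSL exposes the new variant; because of the distinct IV, the result is not simply a prefix of the full SHA-512 digest:

    import hashlib

    msg = b"abc"

    # SHA-512/256 is only available if the underlying OpenSSL build provides it,
    # so check rather than assuming the constructor exists.
    if "sha512_256" in hashlib.algorithms_available:
        print("SHA-512/256:   ", hashlib.new("sha512_256", msg).hexdigest())

    # Truncating plain SHA-512 to 256 bits gives a different value, because
    # SHA-512/256 starts from its own initial value (IV).
    print("SHA-512[0:256]:", hashlib.sha512(msg).hexdigest()[:64])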

This is a good thing, and exactly what we did in the design of Skein. We defined different outputs for the same state size, because it makes sense to decouple the internal workings of the hash function from the output size.

Posted on February 18, 2011 at 6:22 AM

Comments

Paeniteo February 18, 2011 7:11 AM

@Richard:
“SHA-2” is a collective term for the hash functions SHA-224, SHA-256, SHA-384, and SHA-512.

The new IV looks like something that makes implementation unnecessarily difficult. Truncating ‘plain’ SHA-512 to the desired bit size would have allowed trivial reuse of existing implementations. Would this approach have introduced a cryptographic weakness?

Also, I am not convinced that it is a good thing to have another hash function at this time, just to gain some performance enhancements on 64-bit platforms. IMHO we will see SHA-3 before this is widely adopted.

Richard Hartmann February 18, 2011 7:49 AM

Paeniteo: I am well aware of this, which is exactly why I asked the question above.

As soon as SHA3 is out and offers 256-, 512-, etc. bit outputs as well, those distinctions will become very relevant.

Paeniteo February 18, 2011 9:09 AM

We will see how they deal with the potential ambiguities once SHA3 is out.

For myself, I would not have problems in differentiating between SHA-256, SHA-512/256 (both now being part of the SHA-2 family), and SHA3-256.

It is unclear whether NIST will (need to) make the internal state size of the hash function visible in the externally visible algorithm name. IMHO they should go with fewer algorithms (i.e., have one SHA3-256, instead of SHA3-512/256 and SHA3-256/256).

Christian February 18, 2011 9:28 AM

Is this really useful? SHA-3 seems not to be that far away. I doubt many libraries will implement this soon enough to be usable. Or is there any benefit to using SHA-2 when SHA-3 will be there?

Nick P February 18, 2011 9:49 AM

@ Bruce Schneier

I’ve always been a fan of using two algorithms on a given piece of data. For example, SHA-256 and RIPEMD-160, or SHA-256 and Whirlpool. It increases the computational burden, but modern systems and networks are fast enough that adding a second hash step is usually unnoticeable for most applications.

Cryptophone uses a similar concept for their crypto algorithms, combining AES and Twofish just in case one gets broken. Personally, though, I doubt that will happen, and I find hash functions to be riskier because cryptographers don’t seem to understand them as well. I think using two hash functions is a much more secure strategy than trying to build one good one. This paid off in the pre-SHA-1 days when I used MD5 and a second hash. All of those MD5 attacks fell flat on their face when the final (combined) hash didn’t match. (Recently I’ve been thinking of applying this concept to authentication, combining RSA with one of the new quantum-resistant schemes.)

Bruce, what are the security implications of combining hashes in various ways? Truncating two outputs to 128 bits each? Just XORing them together? Ideally you’d store both in full form, but I wonder what security tradeoffs come with the preceding options, which might be useful to reduce network congestion, storage, etc.
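
For concreteness, the mechanics of the three options mentioned here (full concatenation, XOR, and truncation to 128 bits each) might look like the sketch below; SHA-256 and RIPEMD-160 are just the pair named above, and whether hashlib exposes ripemd160 depends on the OpenSSL build:

    import hashlib

    def dual_digests(data: bytes):
        # Compute both digests independently; ripemd160 availability
        # depends on the underlying OpenSSL build.
        a = hashlib.sha256(data).digest()                 # 32 bytes
        b = hashlib.new("ripemd160", data).digest()       # 20 bytes
        return a, b

    def combine_concat(a: bytes, b: bytes) -> bytes:
        return a + b                                      # keep both in full

    def combine_xor(a: bytes, b: bytes) -> bytes:
        n = min(len(a), len(b))
        return bytes(x ^ y for x, y in zip(a[:n], b[:n]))

    def combine_truncate(a: bytes, b: bytes) -> bytes:
        return a[:16] + b[:16]                            # 128 bits from each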

MikeA February 18, 2011 12:27 PM

“Is NIST down?”

Maybe they share a first octet with some guy who has a recording of himself singing Happy Birthday, or a picture of his baby in the bath…

(“We’re ICE, we don’t need to obey the constitution”)

aikimark February 18, 2011 12:35 PM

Maybe Bruce’s blog is driving so much web traffic to NIST that it just seems like a DDoS attack.

aikimark February 18, 2011 12:36 PM

Maybe that should be a Bruce Fact…

“Bruce is so influential that a blog link is indistinguishable from a DDoS attack”

Dirk Praet February 18, 2011 3:38 PM

nist.gov still down. Any other links to the publication? Since more and more people and appliances are on 64-bit, faster is better.

Nick P February 18, 2011 11:11 PM

@ BF Skinner

I also grabbed that one the minute I saw it. It’s good to see the government publishing standards that indicate things below the OS are important. Now we need one about processor errata. Witness a mass exodus from x86. More likely a bunch of bugfixing. I can’t wait to see penetrate and patch move to hardware.

@ Richard Steven Hack

LOL

Anonymous: “We have a script that will download your papers hundreds of times a minute for every one of us with broadband. No business owner will read your security guidelines. Developers will not know how to implement crypto properly. BIOS’s will provide an easy avenue of attack.”

(NIST responds)

Anonymous: “WHAT DO YOU MEAN THAT DOESN’T CHANGE ANYTHING!?”

Sala February 19, 2011 2:56 AM

I wonder how long before we’ll see people writing password hashing code that uses these hashes. Faster is better, right?

[/sarcasm]

Actually, you want a really slow hash. So don’t use any of these, use a hash meant for password hashing so it takes forever to crack passwords stored with it.

Chris Drost February 19, 2011 7:57 AM

@Nick P:

What you’re asking about are called “combiners” of hash functions: a combiner assumes that some property holds of one or the other hash, and uses it to construct a similar property for the combined output.

The most well-studied is concatenation. If at least one of A and B is collision-resistant, then A(m) | B(m) is collision-resistant. On the other hand, if either A or B is differentiable from a random oracle, then so is A | B: just ignore the part that doesn’t help you to differentiate it.

On the other hand, A xor B is indifferentiable if either A or B is indifferentiable, as long as they’re substantially independent — but there is no similar property for collisions!

To see this, assume that xor were strong with regard to collisions: you give it two independent hash functions, and even if one of them admits easy collisions, so long as the other one doesn’t, the output resists collisions as well. We’ll call the secure one H(m) and the insecure one I(m). You xor-combine them into J(m) = H(m) xor I(m), and you get a nice function J which is collision-resistant, by assumption.

J(m) and H(m) are both independent collision-resistant functions, and so we attempt to xor-combine them, because by assumption this cannot hurt. Yet we get I(m) as output, which is insecure. This is a contradiction. Thus XOR cannot in general be strong against collisions — maybe for some independent functions you get lucky, but not for all of them.
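
The contradiction can be made concrete with a toy sketch; here SHA-256 and MD5 merely stand in for the hypothetical H and I:

    import hashlib

    def H(m: bytes) -> bytes:                 # stand-in for the "secure" hash
        return hashlib.sha256(m).digest()

    def I(m: bytes) -> bytes:                 # stand-in for the "insecure" hash
        return hashlib.md5(m).digest() * 2    # repeated to 32 bytes for the XOR

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def J(m: bytes) -> bytes:                 # the XOR combiner of H and I
        return xor(H(m), I(m))

    m = b"example"
    # XOR-combining J with H gives back I, so any collision in I is also a
    # collision in the "combined" output.
    assert xor(J(m), H(m)) == I(m)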

There exists a published if somewhat-ugly construction which uses two n-bit hash functions, produces 2n bits of output, and provably preserves all of the (n-bit) hash function properties which you might care about. It queries both of the hash functions four times:

http://www.cdc.informatik.tu-darmstadt.de/~alehmann/publications/MPRCombinersRevisited.pdf

There is also a general impossibility result in the literature: given two n-bit hash functions, a combiner must produce at least 2n bits in order to have provable n-bit collision resistance when either one fails. In general, since you always pay this 2n price, it would be nicer to just have a single robust hash function with 2n bits of security.

Just think about it: you could use SHA1 | RIPEMD-160, and get a 320-bit hash function with 80-bit collision security even if SHA1 or RIPEMD-160 is broken. But why not just use SHA-512? Even if SHA-512 were “broken” in the security sense — even if it has less than 256-bit collision security — it will take a 2^176 break before it reaches the amount of security that you actually seem to require. For most applications, the extra 24 bytes of storage required to store 512-bit hashes versus 320-bit hashes is utterly negligible.

And if it weren’t negligible — if you were in the slim margin where 24 bytes of overhead really matters — you could use SHA-256 to save 8 bytes, and attackers would need a 2^48 break on SHA-256 instead of a 2^1 break on both RIPEMD and SHA1, to break your security parameters.

What I’m saying is, primitive-combination is okay but wasteful, and you could use the wasted bits to solve the same security problems via overkill. Why not use the overkill, if you have it?

Nick P February 19, 2011 12:13 PM

@ Chris Drost

Thanks for your very enlightening reply. Fortunately, I was using concatenation rather than XOR. I was concerned XOR might negatively impact collision properties so I just transmitted both hash values as is, to be checked independently. I appreciate your link to the Darmstadt paper, as it seems to solve this problem pretty well.

You asked why I don’t just choose an overkill function like SHA-512. Well, it depends on the threat model, and in mine that’s far from overkill. My concern isn’t that someone will brute-force the algorithm or something. My concern is that future research might find a new class of attacks that makes these algorithms weak. This has happened over the decades for symmetric and asymmetric ciphers and may happen with hash function research.

I can’t find the link right now, but I remember reading Bruce or someone mention that there’s plenty they don’t understand about hash functions [compared to other primitives]. They expected to learn a lot during the SHA-3 competition and improve their cryptanalysis. The only defense against this threat is using more than one algorithm and hoping both don’t get broken. It should also be noted that I only do this in applications requiring the highest security. For most applications I just use AES-256, SHA-256 or SHA-512, and a good implementation of RSA-2048. I plan to update my default strategy to use the new SHA-512/256 standard.

Clive Robinson February 19, 2011 4:58 PM

@ Nick P,

“My concern isn’t that someone will brute-force the algorithm or something. My concern is that future research might find a new class of attacks that makes these algorithms weak.”

Yup that’s been my concern for a very long time.

And it’s why I keep mentioning that NIST need to get their act together on robust and secure frameworks for the likes of secure comms, etc., before it’s too late.

If NIST mandate a sensible, extensible, robust and secure framework that is “mandatory” for use with “certified systems”, it will make life a lot simpler, cheaper and more secure for us all in the long run.

Although such a framework will increase the base crypto code size and require the use of mutable memory, making the system more expensive, it gets rid of the very real security threat of legacy embedded systems.

Which, as I’ve pointed out, is going to happen due to “lowest possible price” thinking on the likes of critical infrastructure…

That is, if you have significant infrastructure you are going to get embedded systems as a given these days (think electricity/gas/water meters, etc.).

And due to “free market ideals” and the “short term thinking” of management only interested in the next quarter’s profit, these embedded systems will be designed for the minimum possible cost.

Which in turn means lowest-possible-price components to meet the specification, which in turn will have minimal functionality.

Although quite inexpensive themselves, these embedded systems have attendant very high installation/replacement costs. Further, due to other resource limitations (skilled manpower), in many cases they could take up to ten years to “swap out” on an upgrade cycle.

Thus you have systems designed with a mean life of 25 years (but which can easily be 30 or more in practice), produced at the lowest possible cost. Which means that unless “mandated”, these systems will have crypto hard-coded in that cannot be upgraded…

This has a knock-on effect due to other “free market” forces, which means that all crypto systems for non-embedded systems will be written to be all things to all people, as well as easy to use.

Thus they will have weak or broken crypto algorithms and modes built in for “legacy support”. And due to “ease of use”, the likes of plain-text “auto negotiation” will be used to establish secure communications using a mode and algorithm supported by both end points.

Although the system will be designed to start with the most secure algorithms, it will negotiate downwards. Thus a system in the middle could spoof the end points’ plain-text auto-negotiation down to the weakest or most broken mode, or even plain text. Thus the user would be unaware that what they think is a secure link is anything but.
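
A toy sketch of that downgrade scenario (hypothetical algorithm names, not any real protocol):

    # Endpoints advertise what they support in plain text, strongest first,
    # and pick the first mutually supported entry.
    PREFERENCE = ["SHA-512/256", "SHA-256", "SHA-1", "PLAINTEXT"]

    def negotiate(offered, supported):
        for alg in offered:
            if alg in supported:
                return alg
        return "PLAINTEXT"

    honest_offer = PREFERENCE
    # A system in the middle rewrites the unauthenticated offer...
    tampered_offer = [a for a in honest_offer if a in ("SHA-1", "PLAINTEXT")]

    print(negotiate(honest_offer, PREFERENCE))    # SHA-512/256
    print(negotiate(tampered_offer, PREFERENCE))  # SHA-1: silently downgraded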

By mandating a secure, upgradable framework, NIST can prevent this problem (and quite a few others) from occurring long before it starts.

It could effectively remove weak or broken algorithms and modes quickly and efficiently and, importantly, add new algorithms and modes as they become available.

Anyway, this is nothing I haven’t said before, but…

Dave February 20, 2011 3:30 AM

Oh great, another pointless fashion statement that implementers will have to deal with, because there aren’t already enough cipher suites in SSL/TLS, SSH, and so on.

Nick P February 20, 2011 1:33 PM

@ Dave

Actually, there is a good point: increased security and performance (on 64-bit systems). People who use crypto in production systems may experience cost savings in day to day use. Because this is a truncation scheme, it’s actually easier on implementers than implementing two different hash functions.

Paeniteo February 21, 2011 3:08 AM

@Sala: “Actually, you want a really slow hash.”

You can make any hash function as slow as you want by iterating it a few thousand times. 😉
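
A naive sketch of that kind of iteration (illustrative only; a construction designed for password hashing, as discussed above, is still preferable):

    import hashlib

    def stretched_hash(password: bytes, salt: bytes, iterations: int = 100_000) -> bytes:
        # Key stretching by simply iterating a fast hash over itself.
        h = hashlib.sha256(salt + password).digest()
        for _ in range(iterations):
            h = hashlib.sha256(h + password).digest()
        return h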

Paeniteo February 21, 2011 6:41 AM

@Richard: “no one will be able to guarantee that you will not weaken your hash with this”

Vice versa, no one will be able to guarantee you that a slow hash function is stronger than an iterated fast hash function.

I won’t argue against gut feelings, however 😉

RFC2898 November 2, 2012 1:23 PM

Allow me to translate “iterating it a few thousand times” to “using a well known HMAC based iterative technique like PKCS#5/PBKDF2/RFC2898 in the high tens of thousands to billions of times, doubling the iteration count every couple of years or so”.
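
For reference, a minimal sketch of exactly that, using the PBKDF2-HMAC implementation in Python’s standard library; the iteration count is a placeholder to be tuned:

    import hashlib, os

    password = b"example password"
    salt = os.urandom(16)

    # PBKDF2 (PKCS#5 / RFC 2898) with HMAC-SHA-256.
    dk = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, dklen=32)
    print(dk.hex())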

I’m just hoping for official, better test vectors for RFC2898 with modern hash functions (including SHA-512/256 and the SHA3 variants).

Gavan Schneider December 16, 2012 7:49 PM

Thanks all for an interesting discussion.

I note the tension between perils in combining crypto techniques (i.e., unknown confounding which reduces security), versus the temptation to preempt one or other technique getting badly broken.

To my mind there may still be merit in a combined hash iff the hashes can be decoupled.

One form of this would be splitting the initial passphrase and hashing each part. Xor of the results should not confound anything since the hash functions have had different inputs, but the ‘strength’ will only be half of the input entropy.

Another (maybe less easily analysed) decoupling would be to use the input entropy in two ways, e.g.,

hash := hashfn1(salt1|pwd) xor hashfn2(pwd|salt2)

The belt-and-braces option would have this resulting string hashed again, as follows:

hash := hashfn3(salt3|hash)

Suggest the salts should be non-trivial, viz., 256+ bits.

For the case where a slower system is needed to protect a password file from brute force attack, consider:

hash := hashfn1(salt1|pwd)
for ii = 1 to N {                          — where N is ‘sufficient’
    hash := hashfn2(salt2|text(ii)|hash)
}
hash := hashfn3(hash|salt3|pwd)            — save the perceived ‘best’ hash for last
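
Read literally, the loop above might look like this in Python; the concrete hash functions and iteration count here are arbitrary stand-ins, not a recommendation:

    import hashlib

    def combined_password_hash(pwd: bytes, salt1: bytes, salt2: bytes,
                               salt3: bytes, n: int = 100_000) -> bytes:
        # hashfn1/2/3 are placeholders; SHA-256, SHA-512 and SHA3-256 simply
        # stand in for three "different" hash functions.
        h = hashlib.sha256(salt1 + pwd).digest()
        for ii in range(1, n + 1):
            h = hashlib.sha512(salt2 + str(ii).encode() + h).digest()
        # save the perceived 'best' hash for last
        return hashlib.sha3_256(h + salt3 + pwd).digest()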

Obviously the detail should be changed to reflect the views of those who have the expertise.

The above is based on the conjecture that combining hash function outputs will not have confounding problems if hashing strings that are sufficiently different. The value of ‘sufficiently different’ is not defined, but I am guessing here that 256+ bits might be enough.

Apologies if this idea is very old and/or well discussed elsewhere.
