NIST Hash Workshop Liveblogging (4)

This morning we heard a variety of talks about hash function design. All are esoteric and interesting, and too subtle to summarize here. Hopefully the papers will be online soon; keep checking the conference website.

Lots of interesting ideas, but no real discussion about trade-offs. But it’s the trade-offs that are important. It’s easy to design a good hash function, given no performance constraints. But we need to trade off performance with security. When confronted with a clever idea, like Ron Rivest’s dithering trick, we need to decide if this is a good use of time. The question is not whether we should use dithering. The question is whether dithering is the best thing we can do with (I’m making these numbers up) a 20% performance degradation. Is dithering better than adding 20% more rounds? This is the kind of analysis we did when designing Twofish, and it’s the correct analysis here as well.

Bart Preneel pointed out the obvious: if SHA-1 had double the number of rounds, this workshop wouldn’t be happening. If MD5 had double the number of rounds, that hash function would still be secure. Maybe we’ve just been too optimistic about how strong hash functions are.

The other thing we need to be doing is providing answers to developers. It’s not enough to express concern about SHA-256, or wonder how much better the attacks on SHA-1 will become. Developers need to know what hash function to use in their designs. They need an answer today. (SHA-256 is what I tell people.) They’ll need an answer in a year. They’ll need an answer in four years. Maybe the answers will be the same, and maybe they’ll be different. But if we don’t give them answers, they’ll make something up. They won’t wait for us.
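For a developer who just wants the answer today, that recommendation is straightforward to act on. A minimal sketch in Python (`hashlib` is in the standard library; the message here is an arbitrary example):

```python
import hashlib

# Hash a message with SHA-256, the interim recommendation above.
digest = hashlib.sha256(b"the quick brown fox").hexdigest()
print(digest)  # 64 hex characters = 256 bits of output
```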

And while it’s true that we don’t have any real theory of hash functions, and it’s true that anything we choose will be based partly on faith, we have no choice but to choose.

And finally, I think we need to stimulate research more. Whether it’s a competition or a series of conferences, we need new ideas for design and analysis. Designs beget analyses beget designs beget analyses…. We need a whole bunch of new hash functions to beat up; that’s how we’ll learn to design better ones.

Posted on November 1, 2005 at 11:19 AM


Paul Crowley November 1, 2005 12:10 PM

The comment about number of rounds isn’t the whole story. Even if both had twice as many rounds, MD5 would still be weak simply because 128 bits is no longer enough for a hash function to be secure, and SHA-1 would still be vulnerable to the recent crop of attacks, like Kelsey’s and your long-message attack.

Hash functions are generally bounded in the size of input they can hash, so I don’t understand this motivation for Rivest’s construction. The countermeasure that seems most natural to me is that of significantly increasing the internal state of the hash function, and adding an extra final step which generates the (smaller) hash value from the internal state. This would seem to offer potential both for better performance and better security. However, adding a block counter as well does seem very appealing.
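The wide-internal-state idea can be sketched concretely. In this toy Python illustration, SHA-512’s 64-byte output stands in for a large internal state, and the final narrowing step is simple truncation; the function name and the choice of SHA-512 are illustrative assumptions, not a vetted design:

```python
import hashlib

def wide_then_narrow(data: bytes, out_len: int = 32) -> bytes:
    # Keep a wide internal result (SHA-512's 64 bytes stand in for
    # a large internal state), then derive the smaller published
    # hash from it in an extra final step -- here, truncation.
    wide = hashlib.sha512(data).digest()
    return wide[:out_len]

h = wide_then_narrow(b"hello")
print(len(h))  # 32 bytes published, from 64 bytes of internal state
```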

denis bider November 1, 2005 12:19 PM

The following thought occurs to me: if a hash needs to be designed that’s going to be good for 20 years, this isn’t going to happen by engaging in security vs. speed tradeoffs.

I think that, in many systems, a hash would be preferred that is slower but doesn’t need to be replaced on the order of every 5 years.

It’s no problem replacing the hash function in a protocol – where its use is ephemeral – but when you’ve got 10-year-old signatures that you can no longer trust because the hash function got broken in the meantime, well… that’s just bad design.

We DO need hashes that will last decades. How else can electronically signed documents be trusted in the long term?

Bruce Schneier November 1, 2005 1:08 PM

“I think that, in many systems, a hash would be preferred that is slower but doesn’t need to be replaced on the order of every 5 years.”

Unfortunately, security is largely an externality. Imagine two different products, one using a slower hash function. That slower product will look bad in comparison reviews. It won’t sell as well. Companies pay attention to the short term, not to potential problems ten years from now.

This is where a standard comes in. If there’s a hashing standard that everyone follows, then everyone is in the same boat performance-wise.

Anonymous November 1, 2005 1:36 PM

“Imagine two different products, one using a slower hash function. That slower product will look bad in comparison reviews.”

In my experience with customers, the reverse is actually true. Customers do understand the performance ramifications of security, perhaps even too well. They will excuse a product’s poor performance, even when it is not due to encryption, because they simply assume, “oh well, I guess it must be slow because of the encryption; well, if it’s for encryption, that’s fine.”

In reality the encryption is very fast and the product has some unrelated, silly performance bottleneck.

“This is where a standard comes in. If there’s a hashing standard that everyone follows, then everyone is in the same boat performance-wise.”

We need two hashing standards, then. We need one that will be fast, and one that aims for decades of security.

Protocols will use the fast hash standard for the MACs, but long-term signatures will be made with the long-term one.

This is the right way. People will understand.
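That split can be illustrated with Python’s standard library. SHA-1 stands in for the hypothetical fast standard and SHA-512 for the hypothetical long-term one; both choices, and the keys and messages, are assumptions for illustration only:

```python
import hashlib
import hmac

# Ephemeral protocol MAC: the key is short-lived, so speed matters
# and the fast hash standard would be used (SHA-1 as a stand-in).
session_key = b"per-connection secret"
tag = hmac.new(session_key, b"protocol record", hashlib.sha1).hexdigest()

# Long-term signature input: the digest must stay trustworthy for
# decades, so the conservative standard applies (SHA-512 stand-in).
doc_digest = hashlib.sha512(b"ten-year contract text").hexdigest()

print(len(tag), len(doc_digest))  # 40 and 128 hex characters
```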

Anonymous November 1, 2005 1:37 PM

The previous comment was by me. My browser cleared the Name field when I hit the back button accidentally…

Adam Montville November 1, 2005 1:58 PM

I agree with Denis: we need a long term solution. I also agree with Bruce: we need flexibility. The implication is that we need a virtual toolbag of good, diversely applicable algorithms.

It seems to me that the best long-term way to achieve this is to get to work on hash algorithm theory. Are there any hash theory resources that aren’t esoteric?

Timmy303 November 1, 2005 3:53 PM

I can’t believe the multiple-standard sneakiness is happening again. This is so old.

1) The idea that a given algorithm chosen as a standard for a certain purpose might not meet requirements for other unforeseen purposes doesn’t mean that TWO algorithms are needed, but rather that development of another category of standard is called for that addresses that other purpose. Nobody’s screaming that Rijndael (NIST’s puzzling choice for AES) is inadequate because it doesn’t also work as a skyscraper I-beam. There are things that do that too; it wasn’t supposed to be one of those things, for God’s sake.

2) The notion that two algorithms are more secure than one (because there will always be one stronger one that takes longer to be broken) doesn’t mean that any weaknesses have been addressed. This argument assumes that developers are somehow forced to use a standard. Even assuming they will constrain themselves to an accepted standard, they can’t use both at the same time, so there is always the chance that developers pick the weaker algorithm, regardless of which is actually stronger.

Grant Gould November 1, 2005 3:58 PM

It’s delightful to see someone talking about the performance implications of hashing, because I think we’re headed for a train-wreck the way things are going.

On most J2ME devices, network data can come in faster than the processor can compute SHA-1 over it (most devices have native SHA-1, but it’s inaccessible to programmers, so you have to do SHA-1 in Java). That means that, to use the device’s full network bandwidth, some or all of the data in a given networking protocol must be left vulnerable to man-in-the-middle attacks. The thought that we need to move from this to something even more complex than SHA-1 is truly frightening.

My company recently entirely dropped most of the security features from our network protocols simply because it was impossible to keep the UI responsive otherwise on modern mobile devices. I’m sure we aren’t the only ones doing this. The performance requirements of hashing are creating disasters-waiting-to-happen all over mobile networking space.

Ryan November 1, 2005 5:25 PM

Bruce, I understand recommending SHA-256 because it’s a longer/more iterated version of the beast we already know to be “practically” secure.

But what about WHIRLPOOL? Has it been studied enough to be considered secure yet? It seems to have been around for as long as the SHA-2 family, and was selected by ISO and NESSIE as a standard.

I would think the AES-like block cipher design would make WHIRLPOOL quite computationally efficient compared with 160+ iterations of something like SHA. And the design seems quite different from the MD(n)/SHA(n) family of hashes, which could be a good thing.

Adam Montville November 1, 2005 6:14 PM


I don’t think it’s sneakiness. I consider the following to be a fact: not all hash algorithms are the same.

We wouldn’t use 16d nails to secure composite shingles any more than we would use roofing nails to support a wall frame. But, they’re all nails.

The point is, we use different, yet similar, items depending upon our specific needs. If we need to install a roof, we use roofing nails. If we need to frame a house, we use 16d nails.

While a single, universal cryptographic hash algorithm would be fantastic, I still believe that niche algorithms should be considered. Is that sneaky? Not in my opinion.

Anonymous November 24, 2005 2:05 PM

What about using two different but fast hashes?

What about using one hash that has some parameter that is changeable to give better performance or more security, something analogous to key size in RSA?
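As a toy sketch of such a knob in Python: iterating a fixed hash is only a crude stand-in for a design whose internal round count is tunable, and it does not repair a broken compression function, but it shows the shape of the trade-off. The function name and parameter are hypothetical:

```python
import hashlib

def iterated_sha256(data: bytes, work: int = 1) -> bytes:
    # 'work' is the hypothetical tunable parameter: more passes
    # cost more time, analogous to choosing a larger RSA key.
    h = hashlib.sha256(data).digest()
    for _ in range(work - 1):
        h = hashlib.sha256(h).digest()
    return h

fast = iterated_sha256(b"msg", work=1)     # cheapest setting
slow = iterated_sha256(b"msg", work=1000)  # 1000x the hashing work
```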



