NIST Hash Workshop Liveblogging (3)

I continue to be impressed by the turnout at this workshop. There are lots of people here whom I haven’t seen in a long time. It’s like a cryptographers’ family reunion.

The afternoon was devoted to cryptanalysis papers. Nothing earth-shattering; a lot of stuff that’s real interesting to me and not very exciting to summarize.

The list of papers is here. NIST promises to put the actual papers online, but they make no promises as to when.

Right now there is a panel discussing how secure SHA-256 is. “How likely is SHA-256 to resist attack for the next ten years?” Some think it will be secure for that long, others think it will fall in five years or so. One person pointed out that if SHA-256 lasts ten years, it will be a world record for a hash function. The consensus is that any new hash function needs to last twenty years, though. It really seems unlikely that any hash function will last that long.

But the real issue is whether there will be any practical attacks. No one knows. Certainly there will be new cryptanalytic techniques developed, especially now that hash functions are a newly hot area for research. But will SHA-256 ever have an attack that’s faster than 2^80?

Everyone thinks that SHA-1 with 160 rounds is a safer choice than SHA-256 truncated to 160 bits. The devil you know, I guess.
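For the curious, “SHA-256 truncated to 160 bits” just means computing the full SHA-256 digest and keeping the first 20 bytes. A minimal Python sketch of that naive truncation (an illustration only, not an endorsement of either option):

```python
import hashlib

def sha256_truncated_160(data: bytes) -> bytes:
    """Return the first 160 bits (20 bytes) of the SHA-256 digest."""
    return hashlib.sha256(data).digest()[:20]

def sha1_160(data: bytes) -> bytes:
    """Standard SHA-1, which is already 160 bits."""
    return hashlib.sha1(data).digest()

msg = b"example message"
print(sha256_truncated_160(msg).hex())   # 40 hex characters
print(sha1_160(msg).hex())               # 40 hex characters
```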

Niels Ferguson, in a comment from the floor, strongly suggested that NIST publish whatever analysis on SHA-256 it has. Since this is most likely by the NSA and classified, it would be a big deal. But I agree that it’s essential for us to fully evaluate the hash function.

Tom Berson, in another comment, suggested that NIST not migrate to a single hash function, but certify multiple alternatives. This has the interesting side effect of forcing the algorithm agility issue. (We had this same debate regarding AES. Negatives are: 1) you’re likely to have a system that is as strong as the weakest choice, and 2) industry will hate it.)

If there’s a moral out of the first day of this workshop, it’s that algorithm agility is an essential feature in any Internet protocol.

Posted on October 31, 2005 at 4:00 PM

Comments

Pat Cahalan October 31, 2005 4:53 PM

Mildly off-topic, but…

“If there’s a moral out of the first day of this workshop, it’s that algorithm agility is an essential feature in any Internet protocol.”

Agility in Internet protocols goes beyond algorithms… look at how abused SMTP is compared to its original design. No compression, and yet now it is used to shovel files in the tens (or hundreds) of MB size range… and this is something that 90% of general Internet users not only don’t think twice about, but get generally irritated when they hit a limitation…

Hal October 31, 2005 4:55 PM

As for 160-round SHA-1 vs. truncated SHA-256: normally SHA-1 is 80 rounds, so this version would be half as fast. Perhaps the comparison was made on the assumption that SHA-1 is twice as fast as SHA-256, so halving the speed of SHA-1 would be a fair match to SHA-256, performance-wise.

According to the benchmarks at http://www.eskimo.com/~weidai/benchmarks.html, SHA-1 is more like 55% faster than SHA-256, not twice as fast. A better comparison, then, would be 128-round SHA-1 vs truncated SHA-256. SHA-1 might still win but perhaps not so overwhelmingly.
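If you want a rough, machine-dependent version of that comparison, here is a quick Python sketch; it measures whatever your local hash library does, not the Crypto++ benchmarks linked above:

```python
import hashlib
import time

def throughput_mb_s(name: str, megabytes: int = 64) -> float:
    """Hash `megabytes` MB of zero bytes and return throughput in MB/s."""
    block = bytes(1024 * 1024)          # one megabyte of zeros
    h = hashlib.new(name)
    start = time.perf_counter()
    for _ in range(megabytes):
        h.update(block)
    h.digest()
    return megabytes / (time.perf_counter() - start)

for algo in ("sha1", "sha256"):
    print(f"{algo}: {throughput_mb_s(algo):.0f} MB/s")
```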

greg October 31, 2005 6:01 PM

When it comes to a hash function, I really want a secure one first, or a set of secure primitives first. A fast one is much less important. It’s cheaper to buy hardware than to be insecure. Well, at least in my case.

Davi Ottenheimer October 31, 2005 7:10 PM

Bruce, sounds like you’re really “hashing” things out. Sorry, couldn’t resist.

“industry will hate it”

The industry hates anything that isn’t easy and cheap, for the very good reason that it isn’t easy and cheap.

Not to be too tongue-in-cheek about it, but I think most people have come to realize that there really (really) is no one-time solution and encryption projects need to assume from the start that agility and upgradeability are necessary, even for hash functions.

Davi Ottenheimer November 1, 2005 12:13 AM

@ viewer

When cryptographic hash functions are found to be too weak for certain circumstances — fail to provide a required level of collision resistance — they need to be replaced. The more widely deployed an extant hash function (e.g. MD5), the more you need the “agility” to transition to the new one without being forced to replace the protocol or system itself.

There was a lot of discussion about this earlier in the year during and after the RSA conference. In fact, Burt Kaliski suggests X.509 and SSL/TLS as examples of protocols with agility:

http://www.computerworld.com/printthis/2005/0,4814,101788,00.html

You can imagine how agility really makes a huge difference to consumers when regulations/audits start to get specific (e.g. the PCI) and state something like “x is too weak and therefore must be upgraded on all systems”.

If you want examples of pain/suffering from lack-of-agility, I find the whole WEP disaster really annoying and telnet (in spite of its lingering pervasiveness) is doomed…not to mention the infamous decades of ATM protocol dependence on DES.

Iang November 1, 2005 2:11 AM

Normally agility is best considered at the overall protocol suite level, to reduce complexity and mucking around for implementors and operators. But my experience has been that even then, agility on hashes has been required over and beyond cipher suite agility due to the MD5-SHA0-SHA1-SHA256 progression over the last decade and the wider use that hashes can play in other areas of the application.

Which is to say that if a protocol hasn’t got the agility it needs by now then something else is going on. No big negative here.

Bill P. Godfrey November 1, 2005 8:29 AM

I’ve seen software that “prefers” to use SHA-1 based signatures, but will quite happily use MD5 if the other side is an older version that only knows about MD5.

The vendors are thinking about adding SHA-256 and removing MD5 for the next version, but I’d imagine they would get complaints that it “no longer works”. I suspect there will be a check box reading “[ ] Allow MD5” for a few years to come.
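Something like the following sketch of that preference-with-fallback behavior; the algorithm names and the “Allow MD5” switch are illustrative, not taken from any particular product:

```python
# Strongest first; "md5" is kept only for legacy interoperability.
PREFERENCE = ["sha256", "sha1", "md5"]

def pick_signature_hash(peer_supports: set, allow_md5: bool = False) -> str:
    """Choose the strongest hash both sides accept, honoring the MD5 switch."""
    for algo in PREFERENCE:
        if algo == "md5" and not allow_md5:
            continue
        if algo in peer_supports:
            return algo
    raise ValueError("no mutually acceptable hash algorithm")

print(pick_signature_hash({"sha1", "md5"}))            # -> sha1
print(pick_signature_hash({"md5"}, allow_md5=True))    # -> md5
```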

timeless November 1, 2005 8:48 AM

“…algorithm agility is an essential feature in any Internet protocol.”

Agreed. While this is important for connection oriented protocols like TLS and IPSec (where the data is not persisted for any length of time), it is even more important for document oriented protocols like SMIME or XMLDigSig where the data can be persisted for great lengths of time (i.e. as long as 30 years in the case of documents like digitally signed mortgages, employee benefits documents, etc.).

Zooko November 1, 2005 9:33 AM

An underappreciated fact about protocol agility is that designing a negotiation meta-protocol in from the start is probably unnecessary and is certainly error-prone. Instead, including the right kind of version numbers early enough and in the right way is sufficient for graceful upgrade.

Consider a simple protocol which uses SHA-1 exclusively, and includes protocol version numbers early in the protocol (and of course securely bound to all messages). Now you can invent a new version of the protocol, which uses a new hash function. (For that matter, the new version of the protocol could perform entirely different steps, different patterns of interaction, etc.). Now you can deploy “bi-lingual” software which detects whether the peer is capable of speaking the new protocol, and if not speaks the old protocol.

The only requirements are (a) the original protocol includes a protocol version number early enough (and securely bound), (b) the original protocol has “space” for transmission of a newer version number which is optional — i.e. which has no effect on older, mono-lingual software.

A simple example is that you transmit two numbers: what version number of the protocol is the newest version that I can speak, and what version number of the protocol is the oldest version that I can speak.
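A minimal sketch of that two-number exchange (the version numbers still have to be securely bound into the rest of the protocol, which is omitted here):

```python
def negotiate_version(my_oldest: int, my_newest: int,
                      peer_oldest: int, peer_newest: int) -> int:
    """Each side advertises the oldest and newest protocol versions it speaks;
    agree on the highest version inside both ranges, or fail."""
    agreed = min(my_newest, peer_newest)
    if agreed < max(my_oldest, peer_oldest):
        raise ValueError("no common protocol version")
    return agreed

# A bilingual v1/v2 peer talking to a v1-only peer falls back to v1;
# two bilingual peers agree on v2.
print(negotiate_version(1, 2, 1, 1))   # -> 1
print(negotiate_version(1, 2, 1, 2))   # -> 2
```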

Note that both sides have to expose these version numbers to the other side before any other required protocol step which might be unacceptable in the new version of the protocol. It’s not that hard in practice, although it might impose an extra round-trip in some cases, or else it might involve sending an unnecessary “protocol v1” message which will then be discarded after both sides realize that they are able to speak protocol v2.

Regards,

Zooko

Zooko November 1, 2005 9:36 AM

P.S. Perhaps I should have mentioned that the importance of binding the protocol version numbers to the messages is in part a lesson learned from “Chosen Protocol Attack” by Kelsey, Schneier, Wagner, although also of course from older sources such as “Prudent Engineering Practice for Cryptographic Protocols” by Abadi, Needham.

Davi Ottenheimer November 1, 2005 10:05 AM

@ Zooko

Yes, SSL does that via a “cipher suite”. I made a cryptic reference to this above, but a good example is the Payment Card Industry (PCI) Data Security Standard self-assessment questionnaire, which asks (section 4.2) “If SSL is used for transmission of sensitive cardholder data, is it using version 3.0 with 128-bit encryption?”

http://usa.visa.com/download/business/accepting_visa/ops_risk_management/cisp_PCI_Self_Assessment_Questionnaire.doc

Not the best wording, I admit, but if you want to answer “yes, or better” you have to ensure you disable the SSLv2 suite, which includes:
RC4-MD5
EXP-RC4-MD5
RC2-MD5
EXP-RC2-MD5
IDEA-CBC-MD5
DES-CBC-MD5
DES-CBC3-MD5

And since it’s trivial to make the config change/verify only SSLv3 and TLSv1 ciphers are allowed, you probably will not be given much slack for complaining about the “cost of compliance”.
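For concreteness, here is roughly what that restriction looks like when done programmatically; a sketch using Python’s ssl module with OpenSSL-style option and cipher-string names (the exact directives vary by server software):

```python
import ssl

# Build a server-side context that refuses SSLv2 and drops MD5-based and
# export-grade cipher suites; exactly which suites remain depends on the
# OpenSSL build behind the ssl module.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.options |= ssl.OP_NO_SSLv2                 # shut off the SSLv2 suites listed above
context.set_ciphers("HIGH:!MD5:!EXPORT:!aNULL")    # no MD5-MAC or export ciphers

# context.load_cert_chain("server.crt", "server.key")   # hypothetical file names
```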

Zooko November 1, 2005 1:30 PM

Davi:

There’s a difference between having a negotiable algorithm, the negotiation of which is explicitly part of the original version of the protocol, versus instead allowing the new version of the protocol to be arbitrarily different but to “switch-hit”.

Anyway, I see that Bellovin and Rescorla have gone into great detail on the subject and that I should read their paper before writing any more.

Paul Crowley November 2, 2005 7:09 AM

I’m not sure I see how agility helps. If we have two parties that wish to communicate, and they can both use either MD5 or Whirlpool (say) preferring the latter, a man-in-the-middle who can completely defeat MD5 can always convince each party that the other party can only use MD5 in a protocol rollback attack.

One solution to this would be to define an insanely conservative hash function, which can be securely used to negotiate a faster hash function.

Bruce, I seem to remember that you used to be against algorithmic agility in Internet protocols – what brought about the change of heart?

Zooko November 2, 2005 9:25 AM

Paul:

Those rollback attacks can be prevented (up to sufficiently bad breakage of the crypto primitives) by verifying the negotiation after it is complete. This does mean that if there is sufficiently bad breakage of the crypto primitives then the “switch-hitting” or “bilingual” clients are vulnerable, but that problem seems to be inherent in the notion of a bilingual client supporting a sufficiently broken crypto primitive.

That is: you can’t have both backwards-compatible support for old crypto primitives when talking to old clients, and safety against sufficiently bad breakage in those old crypto primitives when talking to new clients. However, you can have safety against partial breakage/weakness. (details available on request…)
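A sketch of what “verifying the negotiation after it is complete” can look like; this is essentially the idea behind TLS-style finished messages, and the function and variable names are illustrative:

```python
import hashlib
import hmac

def negotiation_mac(session_key: bytes, transcript: bytes) -> bytes:
    """MAC over everything each side advertised during negotiation,
    keyed with the key agreed at the end of the handshake."""
    return hmac.new(session_key, transcript, hashlib.sha256).digest()

def negotiation_ok(session_key: bytes, my_transcript: bytes,
                   peer_mac: bytes) -> bool:
    """If a middleman altered the advertised versions, the two transcripts
    differ and the MACs will not match (unless the MAC or the key agreement
    itself is broken)."""
    return hmac.compare_digest(negotiation_mac(session_key, my_transcript),
                               peer_mac)
```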

Finally, looking outside the pure ab initio peer-to-peer situation, if you have certain kinds of authentication, you might be able to bootstrap so that you have positive confirmation that your peer supports a newer crypto primitive. Although obviously that authentication itself will depend on some crypto primitives… The Bellovin and Rescorla paper talks about some of that.

Oh, one more detail — if you have a sufficiently sophisticated user, you could display the negotiated crypto suite so that a rollback attack could be noticed by the user.

Hm… Wow! I just realized that the current practice of abstracting the crypto suite away from the rest of the functionality can accidentally preclude us from an improvement here! If new functionality (new protocol features, new application behavior) were bundled in with new crypto primitives in an upgrade, then the user would be more likely to notice a roll-back attack.

This observation doesn’t make much sense when you are thinking of pure transport layer abstractions like TLS, but consider instead a less layered protocol such as a VoIP encryption scheme (I’m thinking of Phil Zimmermann’s Zfone, which I worked on, but the same applies to Skype), or an interactive game protocol, or a peer-to-peer file sharing system, etc. In those cases, where user-visible features might be tied to protocol upgrades, hardcoding that protocol v1 uses SHA-1 and protocol v2 uses SHA-256, while v2 at the same time offers new user-visible features, can help against roll-back attacks.

Well this was fun shooting from the hip here. I hate posting in blog comments because I don’t know if anyone will ever read what I wrote, but oh well.

Paul Crowley November 2, 2005 11:25 AM

I read it! Actually I was checking back to see if anyone responded. (I’ve also just remembered that I owe you an email – sorry!)

I agree with much of what you say, but I disagree with what you say about trying to involve the user in checking for protocol rollback – I think that will rarely work.

Rather, I’d like to see the information about how to authenticate a given party include information about what algorithms you can use for that purpose. Then, if I try to contact you and I get told to roll back to SHA-1, I know that someone’s trying to impersonate you, because (e.g.) your public key fingerprint says that you support Whirlpool. It all fits into Zooko’s Triangle 🙂
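A sketch of that idea, with the supported algorithms carried alongside the peer’s fingerprint (field names and values here are placeholders):

```python
# The record you use to authenticate a peer (their key fingerprint, say) also
# lists the hash functions they support, so a network-supplied "please fall
# back to SHA-1" can be rejected outright.
KNOWN_PEER = {
    "fingerprint": "placeholder-fingerprint",
    "supported_hashes": {"sha256", "whirlpool"},
}

def accept_negotiated_hash(negotiated: str) -> bool:
    """Reject any hash the authenticated peer is not known to support."""
    return negotiated in KNOWN_PEER["supported_hashes"]

print(accept_negotiated_hash("sha1"))     # -> False: looks like a rollback
print(accept_negotiated_hash("sha256"))   # -> True
```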

Zooko November 2, 2005 2:29 PM

Yes, I think that the authentication approach is a good one.

The bit about letting the user see what algorithms are in use is interesting to me because it requires what network engineers call a “layer violation”. I have this long-standing suspicion that layering is preventing us from making progress on some deep problems in cryptography…

Thanks for checking back to see if anyone has responded! 🙂

Rafael Hashimoto November 7, 2005 11:16 AM

If SHA-256 is truncated to 160 bits, wouldn’t there be 2^96 messages producing the same digest? Wouldn’t this message collision distribution over the domain be unpredictable?
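For what it’s worth, the 2^96 figure is just the pigeonhole count for the truncation itself: there are 2^256 possible SHA-256 outputs and only 2^160 truncated values, so on average

```latex
\frac{2^{256}}{2^{160}} = 2^{256-160} = 2^{96}
```

full digests share each truncated value. The same counting applies to any 160-bit hash, so truncation by itself doesn’t change the expected collision structure.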

John Macdonald November 17, 2005 10:32 AM

It seems to me that using multiple hash algorithms at once would be useful. So, the hash code would be the concatenation of the hash code computed by the separate algorithms. Surely (I think, but perhaps I’m missing something obvious to an expert) this would not be as insecure as the weakest of the algorithms but would be at least as strong as any of the individual algorithms. To create a collision, one has to find fudge data that causes all of the algorithms to compute their original result, all on the same data. Suppose you’ve found the appropriate fudge data to match the first hash. Changing that fudge data to match the second hash would break the matching of the first hash. You would have to find an entire (huge) family of fudge data codes for the first, so that you could extract one member of that family that also fudges the second hash. If one of the hashes being used is unbreakable by the person trying to create the collision, it doesn’t matter if they can create a hash code that does collide for all of the others. So, this does not fail to the weakest link problem – all of the links are connected end-to-end and not just to each other.
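A minimal sketch of the concatenation John describes; a collision for the combined value requires colliding both hashes on the same input (though Joux’s multicollision result suggests concatenation buys little beyond the strength of the better hash):

```python
import hashlib

def combined_digest(data: bytes) -> bytes:
    """Concatenate SHA-1 (20 bytes) and SHA-256 (32 bytes): 52 bytes total.
    A collision here is simultaneously a SHA-1 and a SHA-256 collision."""
    return hashlib.sha1(data).digest() + hashlib.sha256(data).digest()

print(len(combined_digest(b"example")))   # -> 52
```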

Chris Kendon November 23, 2005 6:52 AM

“Everyone thinks that SHA-1 with 160 rounds is a safer choice than SHA-256 truncated to 160 bits. The devil you know, I guess.”

Neither would be safe for long term use. Moore’s law will probably reach 2^80 in 20-30 years, so collision resistance will break then. And any attack that beats brute force will break it sooner.

Whatever successor is chosen will need at least 2^128 collision resistance to provide an adequate safety margin. An attack that finds collisions in full SHA-256 in “only” 2^100 steps would not be useful in the real world. Though for documents that need to last 30+ years I suggest SHA-512 for a bigger safety margin.
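For reference, those numbers come from the birthday bound: an ideal n-bit hash resists collision search only up to roughly 2^(n/2) work,

```latex
\text{collision cost} \approx 2^{n/2}:\qquad
n = 160 \Rightarrow 2^{80},\qquad
n = 256 \Rightarrow 2^{128},\qquad
n = 512 \Rightarrow 2^{256}.
```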
