Schneier on Security
A blog covering security and security technology.
October 31, 2005
NIST Hash Workshop Liveblogging (1)
I'm in Gaithersburg, MD, at the Cryptographic Hash Workshop hosted by NIST. I'm impressed by the turnout; a lot of the right people are here.
Xiaoyun Wang, the cryptographer who broke SHA-1, spoke about her latest results. They are the same results Adi Shamir presented in her name at Crypto this year: a time complexity of 2^63.
(I first wrote about Wang's results here, and discussed their implications here. I wrote about results from Crypto here. Here are her two papers from Crypto: "Efficient Collision Search Attacks on SHA-0" and "Finding Collisions in the Full SHA-1.")
Steve Bellovin is now talking about the problems associated with upgrading hash functions. He and his coauthor Eric Rescorla looked at S/MIME, TLS, IPSec (and IKE), and DNSSEC. Basically, these protocols can't change algorithms overnight; it has to happen gradually, over the course of years. So the protocols need some secure way to "switch hit": to use both the new and old hash functions during the transition period. This requires some sort of signaling, which the protocols don't do very well. (Bellovin's and Rescorla's paper is here.)
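The transition problem Bellovin and Rescorla describe can be sketched in a few lines. This is a hypothetical illustration, not code from their paper: during the changeover, a signer publishes digests under both the old and new hash algorithms, tagged by name, and a verifier picks the strongest algorithm it understands. The signaling is exactly the part they note real protocols handle poorly.

```python
import hashlib

# Hypothetical transition scheme (illustrative only, not from the paper):
# publish digests under both hashes; the verifier prefers the new one.
OLD, NEW = "sha1", "sha256"

def make_digests(message: bytes) -> dict:
    """Tag each digest with its algorithm name so the peer can choose."""
    return {alg: hashlib.new(alg, message).hexdigest() for alg in (OLD, NEW)}

def verify(message: bytes, digests: dict, supported: set) -> bool:
    """Check the message against the strongest hash both sides support."""
    for alg in (NEW, OLD):  # prefer the new algorithm when available
        if alg in supported and alg in digests:
            return hashlib.new(alg, message).hexdigest() == digests[alg]
    return False  # no algorithm in common: negotiation fails

msg = b"example record"
sigs = make_digests(msg)
assert verify(msg, sigs, {"sha1", "sha256"})  # upgraded peer uses SHA-256
assert verify(msg, sigs, {"sha1"})            # legacy peer still verifies
```

Note the weakness this sketch shares with the real protocols: as long as legacy peers are tolerated, an attacker who can strip the SHA-256 entry can force verification down to SHA-1, so the signaling itself has to be authenticated.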
Posted on October 31, 2005 at 9:02 AM
I just RTFP, and there's a non-technical aspect that's overlooked with regard to IPSec: interoperability is so flaky that, as a practical matter, no real negotiation takes place. During implementation we (and anybody with an ounce of sense) lock the protocols into a restricted set of encryption/hashing methods, etc., and basically leave nothing to chance. Even then you occasionally have to sacrifice an animal during a full moon to get the thing to work, or run a packet debug session, complain to the vendor that they're parsing some parameter improperly, and then argue about the exact interpretation of the RFCs. Personally, I verify working connections by running a debug session, but I'm more anal-retentive than most. Even when the luxury of a homogeneous implementation presents itself, I (and everyone I know) lock things down just out of habit. One should never let computers make important decisions anyway; that way us humans can still make the big bucks :)
Also, generally speaking this is an area that's easy to budget upgrades for if you push a little bit of (appropriate) FUD at management. If you're using FOSS the expense is trivial, and if you've got the budget for Crisco, &c. then they're already trained to shell out.
So yes, there's a theoretical problem with two hosts doing a poor job of negotiating which functions to use, but at a practical level this is largely moot as long as the admin has half a clue (and if (s)he doesn't, then this is the least of their worries).
schneier wrote: "They are the same results Adi Shamir presented in her name at Crypto this year: a time complexity of 2^63."
Can you tell me how to interpret this result? For example, if I run an exhaustive search for a string whose hash collides (i.e., equals a given hash whose original string I don't know), how much time will it take me to find the string on a processor with an x GHz clock speed?
I'd be very thankful if someone could enlighten me.
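Two things are worth separating here. The 2^63 figure is the cost of finding *some* colliding pair of messages; matching one *fixed* hash whose input you don't know is a preimage search, which for SHA-1 still costs on the order of 2^160 and is untouched by Wang's attack. For the collision figure itself, a back-of-envelope estimate follows; the clock speed and cycles-per-trial numbers are illustrative assumptions, not benchmarks.

```python
# Back-of-envelope estimate with assumed numbers (not a benchmark):
# suppose a 3 GHz machine spends 1000 cycles per candidate evaluation.
clock_hz = 3e9           # assumed clock speed
cycles_per_trial = 1000  # assumed cost of one attack step
attempts = 2 ** 63       # Wang's collision-attack complexity

seconds = attempts * cycles_per_trial / clock_hz
years = seconds / (365.25 * 24 * 3600)
print(f"About {years:,.0f} years on a single machine")
```

Under these assumptions the answer comes out near 10^5 years for one machine, which is why the attack matters mainly for well-funded adversaries who can parallelize across many processors, not for a lone desktop.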
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.