Entries Tagged "SHA-1"
A New Secure Hash Standard
The U.S. National Institute of Standards and Technology is having a competition for a new cryptographic hash function.
This matters. The phrase “one-way hash function” might sound arcane and geeky, but hash functions are the workhorses of modern cryptography. They provide web security in SSL. They help with key management in e-mail and voice encryption: PGP, Skype, all the others. They help make it harder to guess passwords. They’re used in virtual private networks, help provide DNS security and ensure that your automatic software updates are legitimate. They provide all sorts of security functions in your operating system. Every time you do something with security on the internet, a hash function is involved somewhere.
Basically, a hash function is a fingerprint function. It takes a variable-length input—anywhere from a single byte to a file terabytes in length—and converts it to a fixed-length string: 20 bytes, for example.
One-way hash functions are supposed to have two properties. First, they’re one-way. This means that it is easy to take an input and compute the hash value, but it’s impossible to take a hash value and recreate the original input. By “impossible” I mean “can’t be done in any reasonable amount of time.”
Second, they’re collision-free. This means that even though there are an infinite number of inputs for every hash value, you’re never going to find two of them. Again, “never” is defined as above. The cryptographic reasoning behind these two properties is subtle, but any cryptographic text talks about them.
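As a concrete illustration (my own, not from the essay), here is a minimal Python sketch using the standard library's hashlib: inputs of wildly different lengths all map to a fixed-length digest, and even a tiny change to the input produces a completely unrelated-looking fingerprint.

```python
import hashlib

# Inputs of very different lengths...
short_msg = b"x"
long_msg = b"A" * 10_000_000  # ten million bytes

# ...all hash to a fixed-length digest (20 bytes for SHA-1, 32 for SHA-256).
print(hashlib.sha1(short_msg).hexdigest())
print(hashlib.sha1(long_msg).hexdigest())
print(len(hashlib.sha256(long_msg).digest()))  # 32

# A one-character change in the input yields an unrelated-looking digest,
# which is part of what makes collisions so hard to find in practice.
print(hashlib.sha256(b"pay Bob $10").hexdigest())
print(hashlib.sha256(b"pay Bob $11").hexdigest())
```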
The hash function you’re most likely to use routinely is SHA-1. Invented by the National Security Agency, it’s been around since 1995. Recently, though, there have been some pretty impressive cryptanalytic attacks against the algorithm. The best attack is barely on the edge of feasibility, and not effective against all applications of SHA-1. But there’s an old saying inside the NSA: “Attacks always get better; they never get worse.” It’s past time to abandon SHA-1.
There are near-term alternatives—a related algorithm called SHA-256 is the most obvious—but they’re all based on the family of hash functions first developed in 1992. We’ve learned a lot more about the topic in the past 15 years, and can certainly do better.
Why the National Institute of Standards and Technology, or NIST, though? Because it has exactly the experience and reputation we want. We were in the same position with encryption functions in 1997. We needed to replace the Data Encryption Standard, but it wasn’t obvious what should replace it. NIST decided to orchestrate a worldwide competition for a new encryption algorithm. There were 15 submissions from 10 countries—I was part of the group that submitted Twofish—and after four years of analysis and cryptanalysis, NIST chose the algorithm Rijndael to become the Advanced Encryption Standard (.pdf), or AES.
The AES competition was the most fun I’ve ever had in cryptography. Think of it as a giant cryptographic demolition derby: A bunch of us put our best work into the ring, and then we beat on each other until there was only one standing. It was really more academic and structured than that, but the process stimulated a lot of research in block-cipher design and cryptanalysis. I personally learned an enormous amount about those topics from the AES competition, and we as a community benefited immeasurably.
NIST did a great job managing the AES process, so it’s the perfect choice to do the same thing with hash functions. And it’s doing just that (.pdf). Last year and the year before, NIST sponsored two workshops to discuss the requirements for a new hash function, and last month it announced a competition to choose a replacement for SHA-1. Submissions will be due in fall 2008, and a single standard is scheduled to be chosen by the end of 2011.
Yes, this is a reasonable schedule. Designing a secure hash function seems harder than designing a secure encryption algorithm, although we don’t know whether this is inherently true of the mathematics or simply a result of our imperfect knowledge. Producing a new secure hash standard is going to take a while. Luckily, we have an interim solution in SHA-256.
Now, if you’ll excuse me, the Twofish team needs to reconstitute and get to work on an Advanced Hash Standard submission.
This essay originally appeared on Wired.com.
EDITED TO ADD (2/8): Every time I write about one-way hash functions, I get responses from people claiming they can’t possibly be secure because an infinite number of texts hash to the same short (160-bit, in the case of SHA-1) hash value. Yes, of course an infinite number of texts hash to the same value; that’s the way the function works. But the odds of it happening naturally are less than the odds of all the air molecules bunching up in the corner of the room and suffocating you, and you can’t force it to happen either. Right now, several groups are trying to implement Xiaoyun Wang’s attack against SHA-1. I predict one of them will find two texts that hash to the same value this year—it will demonstrate that the hash function is broken and be really big news.
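To put a rough number on "the odds of it happening naturally": the birthday bound says the chance that any two of n random inputs share a b-bit hash value is roughly n(n-1)/2^(b+1). A back-of-the-envelope sketch (my own illustration, with made-up input counts):

```python
# Approximate birthday-bound probability that any two of n random inputs
# collide in a b-bit hash: p ~ n*(n-1) / 2^(b+1).
def collision_probability(n, bits):
    return n * (n - 1) / 2 ** (bits + 1)

# Even a trillion (2^40) naturally occurring documents hashed with a
# 160-bit function: effectively never (~4e-25).
print(collision_probability(2**40, 160))

# An attacker willing to compute about 2^80 hashes expects a collision by
# brute force alone; Wang's attack needs far less work than that.
print(collision_probability(2**80, 160))   # ~0.5
```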
SHA-1 Cracked?
Slashdot is reporting on this article claiming that SHA-1 has been cracked.
The reality is more complicated.
Seagate Encrypted Drive
Seagate has announced a product called DriveTrust, which provides hardware-based encryption on the drive itself. The technology is proprietary, but they use standard algorithms: AES and triple-DES, RSA, and SHA-1. Details on the key management are sketchy, but the system requires a pre-boot password and/or combination of biometrics to access the disk. And Seagate is working on some sort of enterprise-wide key management system to make it easier to deploy the technology company-wide.
The first target market is laptop computers. No computer manufacturer has announced support for DriveTrust yet.
Notes from the Hash Function Workshop
Last month, NIST hosted the Second Hash Workshop, primarily as a vehicle for discussing a replacement strategy for SHA-1. (I liveblogged NIST’s first Cryptographic Hash Workshop here, here, here, here, and here.)
As I’ve written about before, there are some impressive cryptanalytic results against SHA-1. These attacks are still not practical, and the hash function is still operationally secure, but it makes sense for NIST to start looking at replacement strategies—before these attacks get worse.
The conference covered a wide variety of topics (see the agenda for details) on hash function design, hash function attacks, hash function features, and so on.
Perhaps the most interesting part was a panel discussion called “SHA-256 Today and Maybe Something Else in a Few Years: Effects on Research and Design.” Moderated by Paul Hoffman (VPN Consortium) and Arjen Lenstra (École Polytechnique Fédérale de Lausanne), the panel consisted of Niels Ferguson (Microsoft), Antoine Joux (Université de Versailles Saint-Quentin-en-Yvelines), Bart Preneel (Katholieke Universiteit Leuven), Ron Rivest (MIT), and Adi Shamir (Weizmann Institute of Science).
Paul Hoffman has posted a composite set of notes from the panel discussion. If you’re interested in the current state of hash function research, it’s well worth reading.
My opinion is that we need a new hash function, and that a NIST-sponsored contest is a great way to stimulate research in the area. I think we need one function and one function only, because users won’t know how to choose between different functions. (It would be smart to design the function with a couple of parameters that can be easily changed to increase security—increase the number of rounds, for example—but it shouldn’t be a variable that users have to decide whether or not to change.) And I think it needs to be secure in the broadest definitions we can come up with: hash functions are the workhorse of cryptographic protocols, and they’re used in all sorts of places for all sorts of reasons in all sorts of applications. We can’t limit the use of hash functions, so we can’t put one out there that’s only secure if used in a certain way.
NIST Hash Workshop Liveblogging (5)
The afternoon started with three brand new hash functions: FORK-256, DHA-256, and VSH. VSH (Very Smooth Hash) was the interesting one; it’s based on factoring and the discrete logarithm problem, like public-key encryption, and not on bit-twiddling like symmetric encryption. I have no idea if it’s any good, but it’s cool to see something so different.
I think we need different. So many of our hash functions look pretty much the same: MD4, MD5, SHA-0, SHA-1, RIPE-MD, HAVAL, SHA-256, SHA-512. And everything is basically a block cipher in Davies-Meyer mode. I want some completely different designs. I want hash functions based on stream ciphers. I want more functions based on number theory.
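For readers who haven't seen the construction: "a block cipher in Davies-Meyer mode" means each compression step computes H_{i+1} = E(key = message block, plaintext = H_i) XOR H_i, chained over the padded message. Below is a toy sketch of that shape, using XTEA purely as a stand-in cipher; it is illustrative only (a 64-bit state is hopelessly small for a real hash) and is not any of the functions named above.

```python
import struct

MASK32 = 0xFFFFFFFF

def xtea_encrypt(v0, v1, key_words, rounds=32):
    """Encrypt one 64-bit block (two 32-bit words) under a 128-bit key."""
    delta, s = 0x9E3779B9, 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key_words[s & 3]))) & MASK32
        s = (s + delta) & MASK32
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key_words[(s >> 11) & 3]))) & MASK32
    return v0, v1

def toy_davies_meyer_hash(message: bytes) -> bytes:
    """Merkle-Damgard chaining with a Davies-Meyer compression function.

    Each 16-byte message block becomes the cipher key, the running 8-byte
    state is the plaintext, and the old state is XORed back in (the
    Davies-Meyer feed-forward) so the step is hard to run backwards.
    """
    # Simple length-padding: append 0x80, zeros, then the 64-bit bit length.
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % 16)
    padded += struct.pack(">Q", 8 * len(message))

    h0, h1 = 0x67452301, 0xEFCDAB89   # arbitrary initial chaining value
    for i in range(0, len(padded), 16):
        key_words = struct.unpack(">4I", padded[i:i + 16])
        e0, e1 = xtea_encrypt(h0, h1, key_words)
        h0, h1 = e0 ^ h0, e1 ^ h1      # feed-forward
    return struct.pack(">2I", h0, h1)

print(toy_davies_meyer_hash(b"hello world").hex())
```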
The final session was an open discussion about what to do next. There was much debate about how soon we need a new hash function, how long we should rely on SHA-1 or SHA-256, etc.
Hashing is hard. At the ultra-high-level hand-waving level, it takes a lot more clock cycles per message byte to hash than it does to encrypt. No one has any theory to account for this, but it seems like the lack of any secrets in a hash function makes it a harder problem. This may be an artifact of our lack of knowledge, but I think there’s a grain of fundamental truth buried here.
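One way to see the cycles-per-byte gap on your own machine is to time a hash against bulk block-cipher encryption over the same buffer. This is a rough sketch rather than a proper benchmark; it assumes the third-party cryptography package for AES-CTR (the standard library has no block cipher), and the ratio you see will depend heavily on whether your CPU has dedicated AES or SHA instructions.

```python
import hashlib
import os
import time

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

data = os.urandom(64 * 1024 * 1024)  # 64 MB of random input

# Time a full SHA-256 pass over the buffer.
start = time.perf_counter()
hashlib.sha256(data).digest()
hash_time = time.perf_counter() - start

# Time AES-256-CTR encryption of the same buffer.
encryptor = Cipher(algorithms.AES(os.urandom(32)), modes.CTR(os.urandom(16))).encryptor()
start = time.perf_counter()
encryptor.update(data)
encrypt_time = time.perf_counter() - start

print(f"SHA-256: {hash_time:.3f}s   AES-256-CTR: {encrypt_time:.3f}s")
print(f"hash/encrypt time ratio: {hash_time / encrypt_time:.1f}x")
```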
And hash functions are used everywhere. Hash functions are the workhorse of cryptography; they’re sprinkled all over security protocols. They’re used all the time, in all sorts of weird ways, for all sorts of weird purposes. We cryptographers think of them as good hygiene, kind of like condoms.
So we need a fast answer for immediate applications.
We also need “SHA2,” whatever that will look like. And a design competition is the best way to get a SHA2. (Niels Ferguson pointed out that the AES process was the best cryptographic invention of the past decade.)
Unfortunately, we’re in no position to have an AES-like competition to replace SHA right now. We simply don’t know enough about designing hash functions. What we need is research, random research all over the map. Designs beget analyses beget designs beget analyses…. Right now we need a bunch of mediocre hash function designs. We need a posse of hotshot graduate students breaking them and making names for themselves. We need new tricks and new tools. Hash functions are a hot area of research right now, but anything we can do to stoke that will pay off in the future.
NIST is thinking of hosting another hash workshop right after Crypto next year. That would be a good thing.
I need to get to work on a hash function based on Phelix.
NIST Hash Workshop Liveblogging (4)
This morning we heard a variety of talks about hash function design. All are esoteric and interesting, and too subtle to summarize here. Hopefully the papers will be online soon; keep checking the conference website.
Lots of interesting ideas, but no real discussion about trade-offs. But it’s the trade-offs that are important. It’s easy to design a good hash function, given no performance constraints. But we need to trade off performance with security. When confronted with a clever idea, like Ron Rivest’s dithering trick, we need to decide if this is a good use of time. The question is not whether we should use dithering. The question is whether dithering is the best thing we can do with (I’m making these numbers up) a 20% performance degradation. Is dithering better than adding 20% more rounds? This is the kind of analysis we did when designing Twofish, and it’s the correct analysis here as well.
Bart Preneel pointed out the obvious: if SHA-1 had double the number of rounds, this workshop wouldn’t be happening. If MD5 had double the number of rounds, that hash function would still be secure. Maybe we’ve just been too optimistic about how strong hash functions are.
The other thing we need to be doing is providing answers to developers. It’s not enough to express concern about SHA-256, or wonder how much better the attacks on SHA-1 will become. Developers need to know what hash function to use in their designs. They need an answer today. (SHA-256 is what I tell people.) They’ll need an answer in a year. They’ll need an answer in four years. Maybe the answers will be the same, and maybe they’ll be different. But if we don’t give them answers, they’ll make something up. They won’t wait for us.
And while it’s true that we don’t have any real theory of hash functions, and it’s true that anything we choose will be based partly on faith, we have no choice but to choose.
And finally, I think we need to stimulate research more. Whether it’s a competition or a series of conferences, we need new ideas for design and analysis. Designs beget analyses beget designs beget analyses…. We need a whole bunch of new hash functions to beat up; that’s how we’ll learn to design better ones.
NIST Hash Workshop Liveblogging (3)
I continue to be impressed by the turnout at this workshop. There are lots of people here whom I haven’t seen in a long time. It’s like a cryptographers’ family reunion.
The afternoon was devoted to cryptanalysis papers. Nothing earth-shattering; a lot of stuff that’s real interesting to me and not very exciting to summarize.
The list of papers is here. NIST promises to put the actual papers online, but they make no promises as to when.
Right now there is a panel discussing how secure SHA-256 is. “How likely is SHA-256 to resist attack for the next ten years?” Some think it will be secure for that long, others think it will fall in five years or so. One person pointed out that if SHA-256 lasts ten years, it will be a world record for a hash function. The consensus is that any new hash function needs to last twenty years, though. It really seems unlikely that any hash function will last that long.
But the real issue is whether there will be any practical attacks. No one knows. Certainly there will be new cryptanalytic techniques developed, especially now that hash functions are a newly hot area for research. But will SHA-256 ever have an attack that’s faster than 2^80?
Everyone thinks that SHA-1 with 160 rounds is a safer choice than SHA-256 truncated to 160 bits. The devil you know, I guess.
Niels Ferguson, in a comment from the floor, strongly suggested that NIST publish whatever analysis on SHA-256 it has. Since this is most likely by the NSA and classified, it would be a big deal. But I agree that it’s essential for us to fully evaluate the hash function.
Tom Berson, in another comment, suggested that NIST not migrate to a single hash function, but certify multiple alternatives. This has the interesting side effect of forcing the algorithm agility issue. (We had this same debate regarding AES. Negatives are: 1) you’re likely to have a system that is as strong as the weakest choice, and 2) industry will hate it.)
If there’s a moral out of the first day of this workshop, it’s that algorithm agility is an essential feature in any Internet protocol.
NIST Hash Workshop Liveblogging (2)
In the morning we had a series of interesting papers: “Strengthening Digital Signatures via Randomized Hashing,” by Halevi and Krawczyk; “Herding Hash Functions and the Nostradamus Attack,” by Kelsey and Kohno; and “Collision-Resistant usage of MD5 and SHA-1 via Message Preprocessing,” by Szydlo and Yin. The first and third papers are suggestions for modifying SHA-1 to make it more secure. The second paper discusses some fascinating and cool, but still theoretical, attacks on hash functions.
The last session before lunch was a panel discussion: “SHA-1: Practical Security Implications of Continued Use.” The panel stressed that these are collision attacks and not pre-image attacks, and that many protocols simply don’t care. Collision attacks are important for digital signatures, but less so for other uses of hash functions. On the other hand, this difference is only understood by cryptographers; there are issues if the public believes that SHA-1 is “broken.”
Niels Ferguson pointed out that the big problem is MD5, which is still used everywhere. (Hell, DES is still everywhere.) It takes much longer to upgrade algorithms on the Internet than most people believe; Steve Bellovin says it takes about one year to get the change through the IETF, and another five to seven years to get it deployed. And that’s after we all figure out which algorithm to use.
Georg Illies gave a perspective from Germany, where there is a digital-signature law in effect. In addition to the technology, there are legal considerations that make it harder to switch.
The panel seemed to agree that it’s still safe to use SHA-1 today, but that we need to start migrating to something better. It’s way easier to change algorithms when you’re not in the middle of a panic.
There was more talk about algorithm agility. This problem is larger than SHA. Our Internet protocols simply don’t have a secure methodology for migrating from one cryptographic algorithm to another.
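One concrete ingredient of agility is making digests self-describing, so a protocol can keep verifying values made under an old algorithm while producing new ones under its current preference. A minimal sketch of the idea (my own illustration, not any particular protocol's wire format):

```python
import hashlib

# Registry of acceptable algorithms; a deployment would shrink this list
# over time as old algorithms are retired.
ALGORITHMS = {"sha1": hashlib.sha1, "sha256": hashlib.sha256}
PREFERRED = "sha256"

def make_digest(data: bytes) -> str:
    """Produce a self-describing digest, e.g. 'sha256:9f86d0...'."""
    return f"{PREFERRED}:{ALGORITHMS[PREFERRED](data).hexdigest()}"

def verify_digest(data: bytes, tagged: str) -> bool:
    """Verify a digest produced under any still-accepted algorithm."""
    name, _, hexval = tagged.partition(":")
    if name not in ALGORITHMS:
        return False                      # unknown or retired algorithm
    return ALGORITHMS[name](data).hexdigest() == hexval

old = "sha1:" + hashlib.sha1(b"example").hexdigest()   # legacy value
print(verify_digest(b"example", old), verify_digest(b"example", make_digest(b"example")))
```

Retiring an algorithm then means deleting one registry entry rather than redesigning the message format.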
Bottom line: Don’t use SHA-1 for anything new, and start moving away from it as soon as possible. To SHA-256, probably.
And now it’s lunchtime.
NIST Hash Workshop Liveblogging (1)
I’m in Gaithersburg, MD, at the Cryptographic Hash Workshop hosted by NIST. I’m impressed by the turnout; a lot of the right people are here.
Xiaoyun Wang, the cryptographer who broke SHA-1, spoke about her latest results. They are the same results Adi Shamir presented in her name at Crypto this year: a time complexity of 2^63.
(I first wrote about Wang’s results here, and discussed their implications here. I wrote about results from Crypto here. Here are her two papers from Crypto: “Efficient Collision Search Attacks on SHA-0” and “Finding Collisions in the Full SHA-1.”)
Steve Bellovin is now talking about the problems associated with upgrading hash functions. He and his coauthor Eric Rescorla looked at S/MIME, TLS, IPSec (and IKE), and DNSSEC. Basically, these protocols can’t change algorithms overnight; it has to happen gradually, over the course of years. So the protocols need some secure way to “switch hit”: to use both the new and old hash functions during the transition period. This requires some sort of signaling, which the protocols don’t do very well. (Bellovin’s and Rescorla’s paper is here.)