Entries Tagged "cryptanalysis"

Twofish Cryptanalysis Rumors

Recently I have been hearing some odd “Twofish has been broken” rumors. I thought I’d quell them once and for all.

Rumors of the death of Twofish have been greatly exaggerated.

The analysis in question is by Shiho Moriai and Yiqun Lisa Yin, who published their results in Japan in 2000. Recently, someone either got a copy of the paper or heard about the results, and rumors started spreading.

Here’s the actual paper. It presents no cryptanalytic attacks, only some hypothesized differential characteristics. Moriai and Yin discovered byte-sized truncated differentials for 12- and 16-round Twofish (the full cipher has 16 rounds), but were unable to use them in any sort of attack. They also discovered a larger, 5-round truncated differential. No one has been able to convert these differentials into an attack, and Twofish is nowhere near broken. On the other hand, they are excellent and interesting results — and it’s a really good paper.

In more detail, here are the paper’s three results:

  1. The authors show a 12-round truncated differential characteristic that predicts that the 2nd byte of the ciphertext difference will be 0 when the plaintext difference is all zeros except for its last byte. They say the characteristic holds with probability 2^-40.9. Note that for an ideal cipher, we expect the 2nd byte of the ciphertext difference to be 0 with probability 2^-8, just by chance. Of course, 2^-8 is much, much larger than 2^-40.9. Therefore, this characteristic by itself is not particularly useful in a distinguishing attack.

    One possible interpretation of their result would be to conjecture that the 2nd byte of the ciphertext difference will be 0 with probability 2^-8 + 2^-40.9 for Twofish, but only 2^-8 for an ideal cipher. Their characteristic is just one path. If one is lucky, perhaps all the other paths behave randomly and contribute roughly 2^-8 to the total probability of getting a 0 in the 2nd byte of the ciphertext difference. Perhaps. One might conjecture that, anyway.

    It is not at all clear whether this conjecture is true, and the authors are careful not to claim it. If it were true, it might lead to a theoretical distinguishing attack using 2^75 chosen plaintexts or so (a very rough estimate; see the back-of-the-envelope sketch after this list). But I’m not at all sure that the conjecture is true.

  2. They show a 16-round truncated differential that predicts that the 2nd byte of the ciphertext difference will be 0 (under the same input difference). Their characteristic holds with probability 2^-57.3 (they say). Again, this is not very useful.

    Analogously to the first result, one might conjecture that the 2nd byte of the ciphertext difference will be 0 with probability 2^-8 + 2^-57.3 for Twofish, but probability 2^-8 for an ideal cipher. If this were true, one might be able to mount a distinguishing attack with 2^100 chosen plaintexts or so (another very rough estimate). But I have no idea whether the conjecture is true.

  3. They also show a 5-round truncated differential characteristic that predicts that an input difference that is non-zero everywhere except in its 9th byte will lead to an output difference of the same form. This characteristic has probability 2^-119.988896, they say (but they also say that they have made some approximations, and the actual probability could be a little smaller or a little larger). Compared to an ideal cipher, where one would expect this to happen by chance with probability 2^-120, this isn’t very interesting. It’s hard to imagine how it could be useful in a distinguishing attack.
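
To see where rough estimates like 2^75 and 2^100 come from, a standard rule of thumb says that distinguishing an event of probability p + ε from one of probability p takes on the order of p/ε^2 samples. The sketch below is my own back-of-the-envelope check, not anything from the Moriai-Yin paper; it assumes the conjectures above hold and that all the other paths behave randomly, and it lands within a couple of orders of magnitude of the rough estimates above, which is about all such numbers are good for.

```python
import math

def samples_to_distinguish(p, eps):
    # Rule of thumb: telling an event of probability p + eps apart from one
    # of probability p takes on the order of p*(1-p)/eps^2 samples.
    return p * (1 - p) / eps**2

p = 2.0**-8  # chance that a fixed byte of the ciphertext difference is 0 for an ideal cipher

# Result 1: conjectured extra probability 2^-40.9 from the 12-round characteristic
print(math.log2(samples_to_distinguish(p, 2.0**-40.9)))  # ~73.8, i.e. around 2^74 texts

# Result 2: conjectured extra probability 2^-57.3 from the 16-round characteristic
print(math.log2(samples_to_distinguish(p, 2.0**-57.3)))  # ~106.6, i.e. around 2^107 texts
```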

The paper theorizes that all of these characteristics might be useful in an attack, but I would be very careful about drawing any conclusions. It can be very tricky to go from a single-path characteristic, whose probability is much smaller than the chance of the same event happening in an ideal cipher, to a real attack. The problem is in the part where you say “let’s just assume all other paths behave randomly.” Often the other paths do not behave randomly, and attacks that look promising fall flat on their faces.

We simply don’t know whether these truncated differentials would be useful in a distinguishing attack. But what we do know is that even if everything works out perfectly to the cryptanalyst’s benefit, and if an attack is possible, then such an attack is likely to require a totally unrealistic number of chosen plaintexts. 2^100 plaintexts is something like a billion billion DVDs’ worth of data, or a T1 line running for a million times the age of the universe. (Note that these numbers might be off by a factor of 1,000 or so. But honestly, who cares? The numbers are so huge as to be irrelevant.) And even with all that data, a distinguishing attack is not the same as a key recovery attack.
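
Here is a quick sanity check of those comparisons. The constants (16-byte Twofish blocks, 4.7 GB per DVD, 1.544 Mbit/s for a T1) are my own assumptions, not figures from the post, and the results differ from the prose figures by a couple of orders of magnitude, which is exactly the kind of slop the factor-of-1,000 caveat covers; either way, the numbers are absurd.

```python
# Rough scale of 2^100 chosen plaintexts, assuming 16-byte Twofish blocks,
# 4.7 GB DVDs, and a 1.544 Mbit/s T1 line.
total_bytes = 2**100 * 16                      # about 2 x 10^31 bytes

print(f"{total_bytes / 4.7e9:.1e} DVDs")       # on the order of 10^21 DVDs

t1_bytes_per_sec = 1.544e6 / 8
seconds = total_bytes / t1_bytes_per_sec
age_of_universe_sec = 13.8e9 * 3.156e7         # ~13.8 billion years, in seconds
print(f"{seconds / age_of_universe_sec:.1e} times the age of the universe")
```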

Again, I am not trying to belittle the results. Moriai and Yin did some great work here, and they deserve all kinds of credit for it. But even from a theoretical perspective, Twofish isn’t even remotely broken. There have been no extensions to these results since they were published five years ago. The best Twofish cryptanalysis is still the work we did during the design process: available on the Twofish home page.

Posted on November 23, 2005 at 12:15 PM

NIST Hash Workshop Liveblogging (3)

I continue to be impressed by the turnout at this workshop. There are lots of people here whom I haven’t seen in a long time. It’s like a cryptographers’ family reunion.

The afternoon was devoted to cryptanalysis papers. Nothing earth-shattering; a lot of stuff that’s really interesting to me but not very exciting to summarize.

The list of papers is here. NIST promises to put the actual papers online, but they make no promises as to when.

Right now there is a panel discussing how secure SHA-256 is. “How likely is SHA-256 to resist attack for the next ten years?” Some think it will be secure for that long; others think it will fall in five years or so. One person pointed out that if SHA-256 lasts ten years, it will be a world record for a hash function. The consensus is that any new hash function needs to last twenty years, though. It really seems unlikely that any hash function will last that long.

But the real issue is whether there will be any practical attacks. No one knows. Certainly there will be new cryptanalytic techniques developed, especially now that hash functions are a newly hot area for research. But will SHA-256 ever have an attack that’s faster than 2^80?

Everyone thinks that SHA-1 with 160 rounds is a safer choice than SHA-256 truncated to 160 bits. The devil you know, I guess.

Niels Ferguson, in a comment from the floor, strongly suggested that NIST publish whatever analysis on SHA-256 it has. Since this is most likely by the NSA and classified, it would be a big deal. But I agree that it’s essential for us to fully evaluate the hash function.

Tom Berson, in another comment, suggested that NIST not migrate to a single hash function, but instead certify multiple alternatives. This has the interesting side effect of forcing the algorithm agility issue. (We had this same debate regarding AES. Negatives are: 1) you’re likely to end up with a system that is only as strong as the weakest choice, and 2) industry will hate it.)

If there’s a moral out of the first day of this workshop, it’s that algorithm agility is an essential feature in any Internet protocol.

Posted on October 31, 2005 at 4:00 PM

NIST Hash Workshop Liveblogging (1)

I’m in Gaithersburg, MD, at the Cryptographic Hash Workshop hosted by NIST. I’m impressed by the turnout; a lot of the right people are here.

Xiaoyun Wang, the cryptographer who broke SHA-1, spoke about her latest results. They are the same results Adi Shamir presented in her name at Crypto this year: a time complexity of 2^63.

(I first wrote about Wang’s results here, and discussed their implications here. I wrote about results from Crypto here. Here are her two papers from Crypto: “Efficient Collision Search Attacks on SHA-0” and “Finding Collisions in the Full SHA-1.”)

Steve Bellovin is now talking about the problems associated with upgrading hash functions. He and his coauthor Eric Rescorla looked at S/MIME, TLS, IPSec (and IKE), and DNSSEC. Basically, these protocols can’t change algorithms overnight; it has to happen gradually, over the course of years. So the protocols need some secure way to “switch hit”: to use both the new and old hash functions during the transition period. This requires some sort of signaling, which the protocols don’t do very well. (Bellovin’s and Rescorla’s paper is here.)
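
As a toy illustration of the kind of signaling involved (this is not Bellovin and Rescorla’s design; the message format and the ACCEPTED set below are invented for the example), a message can carry an explicit algorithm identifier, and verifiers can accept a configurable set of hash functions that shrinks as the migration completes:

```python
import hashlib
import hmac

# Hypothetical wire format: the sender names the hash it used, so verifiers can
# accept both the old and the new function during the transition period.
ACCEPTED = {"sha1": hashlib.sha1, "sha256": hashlib.sha256}  # shrink this as migration ends

def tag(key: bytes, msg: bytes, alg: str) -> dict:
    return {"alg": alg, "msg": msg, "mac": hmac.new(key, msg, ACCEPTED[alg]).hexdigest()}

def verify(key: bytes, packet: dict) -> bool:
    alg = packet["alg"]
    if alg not in ACCEPTED:  # the signaling: algorithms we no longer trust are rejected
        return False
    expected = hmac.new(key, packet["msg"], ACCEPTED[alg]).hexdigest()
    return hmac.compare_digest(expected, packet["mac"])

key = b"shared secret"
print(verify(key, tag(key, b"hello", "sha1")))    # True while sha1 is still accepted
print(verify(key, tag(key, b"hello", "sha256")))  # True after the migration too
```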

Posted on October 31, 2005 at 9:02 AM

New Cryptanalytic Results Against SHA-1

Xiaoyun Wang, one of the team of Chinese cryptographers that successfully broke SHA-0 and SHA-1, along with Andrew Yao and Frances Yao, announced new results against SHA-1 yesterday at Crypto’s rump session. (Actually, Adi Shamir announced the results in their name, since she and her student did not receive U.S. visas in time to attend the conference.)

Shamir presented few details — and there’s no paper — but the time complexity of the new attack is 2^63. (Their previous result was 2^69; brute force is 2^80.) He did say that he expected Wang and her students to improve this result over the next few months: the modifications to their published attack are still new, and more improvements are likely. There is no reason to believe that 2^63 is anything like a lower limit.

But an attack that’s faster than 2^64 is a significant milestone. We’ve already done massive computations with complexity 2^64. Now that the SHA-1 collision search is squarely in the realm of feasibility, some research group will try to implement it. Writing working software will both uncover hidden problems with the attack and illuminate hidden improvements. And while a paper describing an attack against SHA-1 is damaging, software that produces actual collisions is even more so.

The story of SHA-1 is not over. Again, I repeat the saying I’ve heard comes from inside the NSA: “Attacks always get better; they never get worse.”

Meanwhile, NIST is holding a workshop in late October to discuss what the security community should do now. The NIST Hash Function Workshop should be interesting, indeed. (Here is one paper that examines the effect of these attacks on S/MIME, TLS, and IPsec.)

EDITED TO ADD: Here are Xiaoyun Wang’s two papers from Crypto this week: “Efficient Collision Search Attacks on SHA-0” and “Finding Collisions in the Full SHA-1.” And here are the rest of her papers.

Posted on August 17, 2005 at 2:06 PM

SHA Cryptanalysis Paper Online

In February, I wrote about a group of Chinese researchers who broke the SHA-1 hash function. That posting was based on a short note from the researchers. Since then, many people have written me asking about the research and the actual paper, some questioning the validity of the research because of the lack of documentation.

The paper did exist; I saw a copy. They will present it at the Crypto conference in August. I believe they didn’t post it because Crypto requires that submitted papers not be previously published, and they misunderstood that to mean that it couldn’t be widely distributed in any way.

Now there’s a copy of the paper on the web. You can read “Finding Collisions in the Full SHA-1,” by Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu, here.

Posted on June 24, 2005 at 12:46 PM

AES Timing Attack

Nice timing attack against AES.

For those of you who don’t know, timing attacks are an example of side-channel cryptanalysis: cryptanalysis that uses additional information leaked by the physical or software implementation of a cryptographic algorithm. I wrote about them here.

What’s the big idea here?

There are two ways to look at a cryptographic primitive, whether it’s a block cipher, a digital signature function, or whatever. The first is as a chunk of math. The second is as a physical (or software) implementation of that math.

Traditionally, cryptanalysis has been directed solely against the math. Differential and linear cryptanalysis are good examples of this: high-powered mathematical tools that can be used to break different block ciphers.

On the other hand, timing attacks, power analysis, and fault analysis all make assumptions about the implementation, and use additional information garnered from attacking those implementations. Failure analysis assumes a one-bit feedback from the implementation — was the message successfully decrypted? — in order to break the underlying cryptographic primitive. Timing attacks assume that an attacker knows how long a particular encryption operation takes.
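
As a toy illustration of the general idea (this is not the cache-timing attack against AES table lookups that the paper describes; it is a made-up example of a data-dependent timing leak), consider a byte-by-byte comparison that returns as soon as it finds a mismatch. The time it takes to reject a guess leaks how many leading bytes were correct:

```python
import time

SECRET = b"sixteen byte key"

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns as soon as a byte differs, so the running time depends on
    # how long the matching prefix is -- that dependence is the leak.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def time_guess(guess: bytes, trials: int = 200_000) -> float:
    start = time.perf_counter()
    for _ in range(trials):
        naive_equal(guess, SECRET)
    return time.perf_counter() - start

# A guess sharing a long prefix with the secret takes measurably longer to
# reject than one that fails on the first byte, letting an attacker recover
# the secret one byte at a time.
print(time_guess(b"sixteen byte kXX"))  # long matching prefix: slower
print(time_guess(b"XXXXXXXXXXXXXXXX"))  # mismatch on the first byte: faster
```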

Posted on May 17, 2005 at 10:05 AM

Cryptanalysis of SHA-1

On Tuesday, I blogged about a new cryptanalytic result — the first attack faster than brute-force against SHA-1. I wrote about SHA, and the need to replace it, last September. Aside from the details of the new attack, everything I said then still stands. I’ll quote from that article, adding new material where appropriate.

One-way hash functions are a cryptographic construct used in many applications. They are used in conjunction with public-key algorithms for both encryption and digital signatures. They are used in integrity checking. They are used in authentication. They have all sorts of applications in a great many different protocols. Much more than encryption algorithms, one-way hash functions are the workhorses of modern cryptography.

In 1990, Ron Rivest invented the hash function MD4. In 1992, he improved on MD4 and developed another hash function: MD5. In 1993, the National Security Agency published a hash function very similar to MD5, called SHA (Secure Hash Algorithm). Then, in 1995, citing a newly discovered weakness that it refused to elaborate on, the NSA made a change to SHA. The new algorithm was called SHA-1. Today, the most popular hash function is SHA-1, with MD5 still being used in older applications.

One-way hash functions are supposed to have two properties. One, they’re one way. This means that it is easy to take a message and compute the hash value, but it’s impossible to take a hash value and recreate the original message. (By “impossible” I mean “can’t be done in any reasonable amount of time.”) Two, they’re collision free. This means that it is impossible to find two messages that hash to the same hash value. The cryptographic reasoning behind these two properties is subtle, and I invite curious readers to learn more in my book Applied Cryptography.

Breaking a hash function means showing that either — or both — of those properties are not true.

Earlier this week, three Chinese cryptographers showed that SHA-1 is not collision-free. That is, they developed an algorithm for finding collisions faster than brute force.

SHA-1 produces a 160-bit hash. That is, every message hashes down to a 160-bit number. Given that there are an infinite number of messages that hash to each possible value, there are an infinite number of possible collisions. But because the number of possible hashes is so large, the odds of finding one by chance are negligibly small (one in 2^80, to be exact). If you hashed 2^80 random messages, you’d find one pair that hashed to the same value. That’s the “brute force” way of finding collisions, and it depends solely on the length of the hash value. “Breaking” the hash function means being able to find collisions faster than that. And that’s what the Chinese did.
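
To make the brute-force baseline concrete, here is a toy birthday search against SHA-1 truncated to 32 bits; the truncation is purely for demonstration and says nothing about SHA-1 itself. By the same square-root argument, a collision in the truncated hash shows up after roughly 2^16 messages, just as the full 160-bit output needs roughly 2^80.

```python
import hashlib
from itertools import count

def truncated_sha1(msg: bytes, nbytes: int = 4) -> bytes:
    # Keep only the first nbytes of the digest so the birthday bound
    # (about 2^(8 * nbytes / 2) trials) is small enough to run quickly.
    return hashlib.sha1(msg).digest()[:nbytes]

def birthday_collision(nbytes: int = 4):
    seen = {}
    for i in count():
        msg = str(i).encode()
        h = truncated_sha1(msg, nbytes)
        if h in seen:
            return seen[h], msg, i  # two distinct messages with the same truncated hash
        seen[h] = msg

m1, m2, tries = birthday_collision()
print(f"collision after {tries + 1} messages: {m1!r} and {m2!r}")
```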

They can find collisions in SHA-1 in 2^69 calculations, about 2,000 times faster than brute force. Right now, that is just on the far edge of feasibility with current technology. Two comparable massive computations illustrate that point.

In 1999, a group of cryptographers built a DES cracker. It was able to perform 2^56 DES operations in 56 hours. The machine cost $250K to build, although duplicates could be made in the $50K-$75K range. Extrapolating that machine using Moore’s Law, a similar machine built today could perform 2^60 calculations in 56 hours, and 2^69 calculations in three and a quarter years. Or, a machine that cost $25M-$38M could do 2^69 calculations in the same 56 hours.

On the software side, the main comparable is a 2^64 keysearch done by distributed.net that finished in 2002. One article put it this way: “Over the course of the competition, some 331,252 users participated by allowing their unused processor cycles to be used for key discovery. After 1,757 days (4.81 years), a participant in Japan discovered the winning key.” Moore’s Law means that today the calculation would have taken one quarter the time — or have required one quarter the number of computers — so today a 2^69 computation would take eight times as long, or require eight times the computers.
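
For the curious, here is how that arithmetic works out, assuming speed doubles (or cost halves) every 18 months; that doubling period is my assumption, not a figure from the post.

```python
# Back-of-the-envelope check of the extrapolations above, assuming speed
# doubles (or cost halves) every 18 months.

doublings = 6 / 1.5                   # 1999 -> 2005 is 6 years: 4 doublings
print(2**doublings)                   # 16x faster, so 2^56 ops -> 2^60 ops in 56 hours

# 2^69 is 2^9 = 512 times more work than 2^60:
print(56 * 2**9 / (24 * 365))         # ~3.3 years on one such machine
print(512 * 50_000, 512 * 75_000)     # or $25.6M-$38.4M of machines to finish in 56 hours

# distributed.net: 2^64 keys in 4.81 years, finishing in 2002. Three more years
# of Moore's Law (two doublings) cut the time by 4x, and 2^69 is 32x more work,
# so 32 / 4 = 8 times as long (or 8 times the machines) today.
print(4.81 * 32 / 4)                  # ~38 years
```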

The magnitude of these results depends on who you are. If you’re a cryptographer, this is a huge deal. While not revolutionary, these results are substantial advances in the field. The techniques described by the researchers are likely to have other applications, and we’ll be better able to design secure systems as a result. This is how the science of cryptography advances: we learn how to design new algorithms by breaking other algorithms. Additionally, algorithms from the NSA are considered a sort of alien technology: they come from a superior race with no explanations. Any successful cryptanalysis against an NSA algorithm is an interesting data point in the eternal question of how good they really are in there.

For the average Internet user, this news is not a cause for panic. No one is going to be breaking digital signatures or reading encrypted messages anytime soon. The electronic world is no less secure after these announcements than it was before.

But there’s an old saying inside the NSA: “Attacks always get better; they never get worse.” Just as this week’s attack builds on other papers describing attacks against simplified versions of SHA-1, SHA-0, MD4, and MD5, other researchers will build on this result. The attack against SHA-1 will continue to improve, as others read about it and develop faster tricks, optimizations, etc. And Moore’s Law will continue to march forward, making even the existing attack faster and more affordable.

Jon Callas, PGP’s CTO, put it best: “It’s time to walk, but not run, to the fire exits. You don’t see smoke, but the fire alarms have gone off.” That’s basically what I said last August.

It’s time for us all to migrate away from SHA-1.

Luckily, there are alternatives. The National Institute of Standards and Technology already has standards for longer — and harder to break — hash functions: SHA-224, SHA-256, SHA-384, and SHA-512. They’re already government standards, and can already be used. This is a good stopgap, but I’d like to see more.

I’d like to see NIST orchestrate a worldwide competition for a new hash function, like they did for the new encryption algorithm, AES, to replace DES. NIST should issue a call for algorithms, and conduct a series of analysis rounds, where the community analyzes the various proposals with the intent of establishing a new standard.

Most of the hash functions we have, and all the ones in widespread use, are based on the general principles of MD4. Clearly we’ve learned a lot about hash functions in the past decade, and I think we can start applying that knowledge to create something even more secure.

Hash functions are the least-well-understood cryptographic primitive, and hashing techniques are much less developed than encryption techniques. Regularly there are surprising cryptographic results in hashing. I have a paper, written with John Kelsey, that describes an algorithm to find second preimages with SHA-1 — a technique that generalizes to almost all other hash functions — in 2^106 calculations: much less than the 2^160 calculations for brute force. This attack is completely theoretical and not even remotely practical, but it demonstrates that we still have a lot to learn about hashing.

It is clear from rereading what I wrote last September that I expected this to happen, but not nearly this quickly and not nearly this impressively. The Chinese cryptographers deserve a lot of credit for their work, and we need to get to work replacing SHA.

Posted on February 18, 2005 at 11:24 PM
