Twofish Cryptanalysis Rumors

Recently I have been hearing some odd “Twofish has been broken” rumors. I thought I’d quell them once and for all.

Rumors of the death of Twofish have been greatly exaggerated.

The analysis in question is by Shiho Moriai and Yiqun Lisa Yin, who published their results in Japan in 2000. Recently, someone either got a copy of the paper or heard about the results, and rumors started spreading.

Here’s the actual paper. It presents no cryptanalytic attacks, only some hypothesized differential characteristics. Moriai and Yin discovered byte-sized truncated differentials for 12- and 16-round Twofish (the full cipher has 16 rounds), but were unable to use them in any sort of attack. They also discovered a larger, 5-round truncated differential. No one has been able to convert these differentials into an attack, and Twofish is nowhere near broken. On the other hand, they are excellent and interesting results—and it’s a really good paper.

In more detail, here are the paper’s three results:

  1. The authors show a 12-round truncated differential characteristic that predicts that the 2nd byte of the ciphertext difference will be 0 when the plaintext difference is all-zeros except for its last byte. They say the characteristic holds with probability 2^-40.9. Note that for an ideal cipher, we expect the 2nd byte of the ciphertext difference to be 0 with probability 2^-8, just by chance. Of course, 2^-8 is much, much larger than 2^-40.9. Therefore, this is not particularly useful in a distinguishing attack.

    One possible interpretation of their result would be to conjecture that the 2nd byte of the ciphertext difference will be 0 with probability 2^-8 + 2^-40.9 for Twofish, but only 2^-8 for an ideal cipher. Their characteristic is just one path. If one is lucky, perhaps all other paths behave randomly and contribute the baseline 2^-8 to the total probability of getting a 0 in the 2nd byte of the ciphertext difference. Perhaps. One might conjecture that, anyway.

    It is not at all clear whether this conjecture is true, and the authors are careful not to claim it. If it were true, it might lead to a theoretical distinguishing attack using 2^75 chosen plaintexts or so (a very rough estimate; a back-of-the-envelope sketch of where numbers like this come from follows this list). But I’m not at all sure that the conjecture is true.

  2. They show a 16-round truncated differential that predicts that the 2nd byte of the ciphertext difference will be 0 (under the same input difference). Their characteristic holds with probability 2^-57.3 (they say). Again, this is not very useful.

    Analogously to the first result, one might conjecture that the 2nd byte of the ciphertext difference will be 0 with probability 2^-8 + 2^-57.3 for Twofish, but probability 2^-8 for an ideal cipher. If this were true, one might be able to mount a distinguishing attack with 2^100 chosen plaintexts or so (another very rough estimate). But I have no idea whether the conjecture is true.

  3. They also show a 5-round truncated differential characteristic that predicts that an input difference that is non-zero everywhere except in its 9th byte will lead to an output difference of the same form. This characteristic has probability 2^-119.988896, they say (but they also say that they have made some approximations, and the actual probability may be a little smaller or a little larger). Compared to an ideal cipher, where one would expect this to happen by chance with probability 2^-120, this isn’t very interesting. It’s hard to imagine how this could be useful in a distinguishing attack.
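As a rough back-of-the-envelope sketch (mine, not the paper’s), here is where numbers like 2^75 and 2^100 come from. It simply applies the standard rule of thumb that detecting a bias of ε on top of a base probability p takes on the order of p(1-p)/ε^2 samples, and it assumes the conjectured biases above are real, which, again, no one knows:

```python
import math

def log2_samples_needed(log2_base_prob, log2_bias):
    """Rough log2 of the number of chosen-plaintext pairs needed to detect
    a bias of 2^log2_bias on top of a base probability of 2^log2_base_prob,
    using the p*(1-p)/eps^2 rule of thumb."""
    p = 2.0 ** log2_base_prob
    eps = 2.0 ** log2_bias
    return math.log2(p * (1.0 - p) / eps ** 2)

# 12-round characteristic: base probability 2^-8, conjectured extra bias 2^-40.9
print(log2_samples_needed(-8, -40.9))   # ~73.8, i.e. roughly 2^74 texts
# 16-round characteristic: base probability 2^-8, conjectured extra bias 2^-57.3
print(log2_samples_needed(-8, -57.3))   # ~106.6, i.e. roughly 2^107 texts
```

Both figures land in the same general ballpark as the rough estimates above, with all the same caveats attached.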

The paper theorizes that all of these characteristics might be useful in an attack, but I would be very careful about drawing any conclusions. It can be very tricky to go from a single-path characteristic, whose probability is much smaller than the chance of the same event happening in an ideal cipher, to a real attack. The problem is in the part where you say “let’s just assume all other paths behave randomly.” Often the other paths do not behave randomly, and attacks that look promising fall flat on their faces.

We simply don’t know whether these truncated differentials would be useful in a distinguishing attack. But what we do know is that even if everything works out perfectly to the cryptanalyst’s benefit, and an attack is possible, such an attack is likely to require a totally unrealistic number of chosen plaintexts. 2^100 plaintexts is something like a billion billion DVDs’ worth of data, or a T1 line running for a million times the age of the universe. (Note that these numbers might be off by a factor of 1,000 or so. But honestly, who cares? The numbers are so huge as to be irrelevant.) And even with all that data, a distinguishing attack is not the same as a key recovery attack.
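Here is a quick arithmetic sanity check of those magnitudes, using my own round assumptions: 16-byte (128-bit) Twofish blocks, 4.7 GB per single-layer DVD, a 1.544 Mbit/s T1 line, and a universe roughly 13.8 billion years old:

```python
# Order-of-magnitude check of the figures above; assumptions as stated.
BLOCK_BYTES = 16                        # one 128-bit Twofish block
total_bytes = 2**100 * BLOCK_BYTES      # 2^100 chosen-plaintext blocks

dvds = total_bytes / 4.7e9                          # single-layer DVD capacity
t1_seconds = total_bytes / (1.544e6 / 8)            # T1 throughput in bytes/sec
universe_seconds = 13.8e9 * 365.25 * 24 * 3600      # age of the universe in seconds

print(f"DVDs needed: {dvds:.1e}")                                       # ~4e21
print(f"T1 time / age of universe: {t1_seconds / universe_seconds:.1e}")  # ~2e8
```

The exact results come out a few orders of magnitude above the round figures in the text, which is within the factor-of-1,000-or-so slop already acknowledged. Either way, the numbers are absurd.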

Again, I am not trying to belittle the results. Moriai and Yin did some great work here, and they deserve all kinds of credit for it. But even from a theoretical perspective, Twofish isn’t even remotely broken. There have been no extensions to these results since they were published five years ago. The best Twofish cryptanalysis is still the work we did during the design process, which is available on the Twofish home page.

Posted on November 23, 2005 at 12:15 PM

Comments

Jarrod November 23, 2005 12:47 PM

If someone did break Twofish, would you be saddened at all? I’m sure you would be impressed, but would there be even a twinge of disappointment?

Just curious.

Bruce Schneier November 23, 2005 12:51 PM

Of course I would be disappointed. But I would be really impressed. And I would likely learn a lot from the analysis.

This is the way cryptography works. Algorithms get proposed, and they get broken. One of my algorithms, MacGuffin, was broken during the rump session of the workshop where it was presented. Embarrassing, yes. But that’s okay.

Breaking is actually much more satisfying than designing. With designing, there’s always an element of doing the best you can and hoping for the best. With analysis, the proof is in the pudding.

If truth be known, I would like to be the one to break Twofish.

Mithrandir November 23, 2005 1:26 PM

What, exactly, is a distinguishing attack? A brief search seems to imply that it just allows one to distinguish between a ciphertext stream and a stream of random bits, but doesn’t actually break an algorithm.

stacy November 23, 2005 1:36 PM

“[…] the proof is in the pudding.”

Nope, not there. I looked! Sorry, just one of my pet peeves… the expression is

The proof of the pudding is in the eating.

Jim November 23, 2005 1:41 PM

Tongue-in-cheek suggestion: Please refer to any new algorithms with names such as TITANIUM or ATOMIC. Something very strong sounding. I have recently had internal audit complaining to me about the use of Pretty Good Privacy in our organization: “Shouldn’t we be using something stronger than just ‘pretty good’?”

A Prohias November 23, 2005 2:27 PM

Bruce, here you say the best cryptanalysis was the work during the design process and provide a link. On that page you say the best cryptanalysis is the work by Moriai and Yin. Sorry – just a niggling nit.

Thanks for your vision in making Twofish open, uncopyrighted, and patent-free. This is why I gladly buy your books rather than downloading them for free from P2P networks 😉

Tim Vail November 23, 2005 2:33 PM

Jim — it is certainly true that the name you give can have some impact on how someone perceives it. However, that is a double-edged sword, because what happens if this hypothetical algorithm — Titanium — winds up being broken? Then the name becomes a misrepresentation, and we do not want it to stick with the algorithm…do we?

Really, I think the name is all just marketing. Any good auditor should know that Pretty Good Privacy is probably one of the best the organization could be using. That is what security should be about — keeping up with what is currently considered the best.

Rick November 23, 2005 2:37 PM

Stacy said:

“[…] the proof is in the pudding.”

Nope, not there. I looked! Sorry, just one of my pet peeves… the expression is

The proof of the pudding is in the eating.

And the word proof here is in the sense of test, as opposed to confirmation. Like in proving ground.

Another common and much-misunderstood phrase where prove is used in this sense is:

The exception proves the rule.

If you read this as the exception tests the rule it makes sense; if you read it as the exception confirms the rule it’s a contradiction!

Ithika November 23, 2005 3:02 PM

The exception proves the rule.

If you read this as the exception tests the rule it makes sense; if you read it as the exception confirms the rule it’s a contradiction!

I’m afraid it’s not that either. The “exception that proves the rule” uses the commonly understood meanings of those words. In this case it is a legal term (an abbreviated translation from the Latin ‘exceptio probat regulam in casibus non exceptis’). It loosely means “this exception being made explicit means that the rule holds in all other cases”.

More details can be found (at the bottom of the page) in The Straight Dope’s answer to this question. And it’s one of the few where Cecil initially got it wrong! 🙂

http://www.straightdope.com/classics/a3_201.html

RandomOlderPerson November 23, 2005 3:52 PM

“Blowfish was before Twofish. I kind of liked Redfish and Quadfish”

No! After Twofish and Redfish, the only thing that can come next is Bluefish!

(By the way, what happened to Onefish?)

Bruce Schneier November 23, 2005 3:58 PM

“What, exactly, is a distinguishing attack? A brief search seems to imply that it just allows one to distinguish between a ciphertext stream and a stream of random bits, but doesn’t actually break an algorithm.”

Yes. That’s exactly what a distinguishing attack is. One of the properties of a good algorithm is that its output is indistinguishable from a random stream of bits. A successful distinguishing attack breaks that property.

The attack does not recover the key, nor does it allow anyone to decrypt any ciphertext. But it’s still a pretty serious attack, and I would recommend that any algorithms that permit such an attack be replaced. Like the attack against SHA-1, it wouldn’t be a crisis, but it would be a good idea to start upgrading.
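To make that concrete, here is a toy sketch of the statistical test behind a distinguishing attack. It is not specific to Twofish and doesn’t run any real cipher; it just simulates an output byte that is zero slightly more often than 1/256 (a bias of 2^-10, chosen so the experiment finishes quickly; any real bias in Twofish, if one exists at all, would be astronomically smaller):

```python
import random

def zero_byte_count(biased, n, base=1/256, bias=1/1024):
    """Count how many of n samples have the target output byte equal to zero.
    The real-vs-ideal difference is simulated directly as a small extra bias."""
    p = base + (bias if biased else 0.0)
    return sum(random.random() < p for _ in range(n))

n = 2_000_000                          # plenty of samples for a bias of 2^-10
threshold = n * (1/256 + 1/2048)       # midpoint between the two expected counts

print("real  cipher flagged:", zero_byte_count(True, n) > threshold)   # usually True
print("ideal cipher flagged:", zero_byte_count(False, n) > threshold)  # usually False
```

The smaller the bias, the more samples the test needs, which is where the enormous chosen-plaintext counts discussed above come from.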

Bruce Schneier November 23, 2005 3:59 PM

“Bruce, here you say the best cryptanalysis was the work during the design process and provide a link. On that page you say the best cryptanalysis is the work by Moriai and Yin. Sorry – just a niggling nit.”

Oops. I’ll fix the Twofish page.

Bruce Schneier November 23, 2005 4:00 PM

“Please refer to any new algorithms with names such as TITANIUM or ATOMIC. Something very strong sounding.”

We’re cryptographers. Marketing is down the hall.

free-as-in-steak-knives November 23, 2005 4:06 PM

@Tim Vail,

Names can be more important than you think in the minds of those who deal with perceptions rather than reality (i.e. marketing, management, politics, etc.)

Open source software proponents are still fighting with the “free” label, trying to explain the difference between free-as-in-beer (gratis) and free-as-in-speech (libre).

Bruce, you want to break Twofish yourself? Aren’t you afraid the tin-foil-hatters are going to accuse you of “doing a DES/NSA” and hiding a deliberate weakness?

Jarrod November 23, 2005 4:24 PM

“Of course I would be disappointed. But I would be really impressed. And I would likely learn a lot from the analysis.”

I can see how this would be. I was wondering because sometimes I come up with some really neat bit of code (or so I think), and someone better at it than I points out a flaw. I don’t get disappointed, but rather excited at the possibility of learning from the mistakes.

“Breaking is actually much more satisfying than designing.”

Cryptography, like most of security, is a battle of wits, and like any other battle of wits, breaking an algorithm represents a victory in this battle. Victory is usually more satisfying than planning a defense.

Paul Crowley November 23, 2005 5:11 PM

I much prefer names like Twofish, Rijndael, and Phelix to Blowfish, MARS, Serpent, or TITANIUM and ATOMIC, because in the former case every Google hit is about the cipher.

Philip Storry November 23, 2005 6:32 PM

@Jim,

“Shouldn’t we be using something stronger than just ‘pretty good’?”

The correct reply to this is:
“Yeah, but there are probably untrained humans involved. ‘Pretty Good’ is about the best we can hope for…”

😉

David Harmon November 23, 2005 6:48 PM

Not to belabor a proverb, but “The exception proves the rule” has developed two distinct meanings. Ithika, you have the older legal definition. Rick has given the newer scientific definition. The latter comes from the scientific habit of looking for revealing special cases, because those can often distinguish between competing theories.

As a quick example: Newtonian physics implies that light, with mass 0, should not be affected by (nor exert) gravity, because gravitational force is proportional to both masses involved. Einsteinian physics says it should, because even massless particles are affected by spatial curvature. By experiment, light is indeed affected by gravity, so Einstein “wins”, and Newton gets demoted to an approximation.

Jeff Flowers November 23, 2005 11:48 PM

Not specifically in regard to Twofish, but if you needed to encrypt a sensitive file on your home computer, what method would you choose in today’s world? And could you trust the encryption options provided by a commercial entity, such as Microsoft? (I would be afraid that they have a backdoor somewhere.)

Moz November 24, 2005 2:02 AM

No, I like names that express the vagueness of the protection they offer. “pretty good privacy” is just that – it’s pretty good, we think. Well, we hope. Kind of. If you use it right. And the hardware is secured. And the users are trained right. And no-one stuffs up. Mostly. Until someone breaks it.

Calling something “the ultimate bombproof uncrackable encryptonometer, now with 3162 bit keys!” is likely to lead to people assuming it’s so secure that they don’t have to worry. Just read the Risks Digest and see how much of a problem this sort of thing is already…

Bruce Schneier November 24, 2005 8:51 AM

“Not specifically in regard to Twofish, but if you needed to encrypt a sensitive file on your home computer, what method would you choose in today’s world?”

I use PGP Disk.

Roger November 24, 2005 4:15 PM

Thanks for the observations Bruce.

Since one of the explicit design goals of the AES candidates — the reason for a 128-bit block — was resistance to codebook attacks on the order of 2^64 blocks, I think distinguishing attacks requiring 2^100 chosen blocks can’t even be regarded as certificational.

Vincent November 25, 2005 3:55 PM

“Rumors of the death of Twofish have been greatly exaggerated.”

Seriously? You mean “As opposed to the information about the algebraic attacks on Rijndael, which has been entirely accurate,” don’t you?

He who draws the sword …
😉

Bruce Schneier November 25, 2005 4:25 PM

Vincent, is that you? Or is it some other Vincent? If it’s you, e-mail me. I am currently writing a new essay on Rijndael, and I have a few questions.

If it’s another Vincent: While there are some weird algebraic properties of Rijndael, no one has anything even remotely resembling an attack based on them. The biggest Rijndael skeptic right now is Arjen Lenstra, and he doesn’t have anything except a seriously bad feeling.

Right now, the most interesting positive development regarding Rijndael is its endorsement by the NSA, not just for Secret information but for Top Secret as well:

http://www.nsa.gov/ia/industry/crypto_suite_b.cfm

To me, this says a lot.

Ari Heikkinen November 29, 2005 1:44 PM

About Rijndael, I still don’t understand why its simple algebraic structure would be any problem. I thought the very idea was that the security of any encryption scheme must depend only on the key and not the algorithm (and it makes sense too, because say you’re at war, your enemy is sure to get access to your hardware on the battlefield and thus get the algorithm eventually).

Now, if an algorithm is clean, simple, and easy to understand (for everyone), in my opinion, it’s only a good thing. Relying on something that’s complex and hard to analyze would be like relying on security through obscurity to me. Going back to that war analogy: if you rely on complexity, your enemy might be able to monitor your government and citizens because their experts were good enough to come up with an attack while your own experts (or perhaps even students who were given a chance to analyze the algorithm before use) missed it because of the algorithm’s complexity.

More Coffee (I like coffee) February 21, 2007 6:18 PM

The thing that is so cool with Twofish is that it’s free, made by a group of professionals, and has a history to back it up. I just never understood why it was 256 bits instead of 448 like Blowfish???
I guess Twofish is probably a significantly different cipher.

David H. Bronke June 10, 2012 6:55 PM

I just never understood why it was 256 bits instead of 448 like Blowfish???
I guess Twofish is probably a significantly different cipher.

@More Coffee (I like coffee): I’m assuming you were referring to the key length here. As far as I understand it (and I’m far from an expert, so feel free to correct me if I’m wrong) the length of the key is only one of several factors that influence the effective amount of security afforded by an algorithm. There’s also the overall design of the algorithm itself, the number of rounds, and in block ciphers, the block size. Both Blowfish and Twofish have variable key sizes (32–448 bits for Blowfish; 128, 192, or 256 bits for Twofish) and Twofish’s block size is twice that of Blowfish (128 bits as opposed to 64), so there are several factors in how secure each is, and it varies depending on what key size you choose. From what I understand, Twofish definitely is considered more secure than Blowfish overall.

Sansom October 1, 2013 1:10 AM

“Right now, the most interesting positive development regarding Rijndael is its endorsement by the NSA, not just for Secret information but for Top Secret as well”

And 8 years, and a few revelations later, do you STILL feel that was a positive development???

Clive Robinson October 1, 2013 7:08 AM

@ Sansom,

The Crypto-Suite-B link is long since broken.

However if I remember correctly it was only ever TS certified for “data at rest”.

And that in effect means the “algorithm”, not “the practical implementation”, is certified. It’s been known since before the ink dried on the NIST AES standard that there were very real and fairly easy-to-exploit side-channel issues with the majority of practical implementations of AES that allowed key recovery.

Now whilst NIST organised the contest, what is not known is the level of NSA input. The NSA must have known that an efficient/fast implementation of any potential AES entrant would have significant side-channel issues, but chose not to make it known, at the very least. Nor did they point out that making the entrant software implementations freely downloadable would almost certainly result in them being used in every application due to “cut-n-paste” programming.

The result, as we now know, is that few if any software implementations of AES were secure enough to be used in “online mode”, and whilst this is slowly changing, the legacy/orphan code issue is very much still with us and will be for some time to come.

It’s one reason I’ve always said to do the crypto on an “offline mode”, “air gapped” computer that is reserved only for doing the crypto….

ASmith November 14, 2021 1:54 PM

I recall several years ago (current timeframe is the end of 2021), Schneier mentioned do not use 2-fish for secure encryption, upgrade to 3-fish for adequate security. Perhaps Schneier can elaborate on that. Was that to hype 3-fish or were the attacks on 2-fish mounting to the point of lowering the safety level to the point of needing to upgrade to 3-fish for fuller security?

Clive Robinson November 14, 2021 5:24 PM

@ ASmith,

Schneier mentioned do not use 2-fish for secure encryption, upgrade to 3-fish for adequate security.

It’s true of all ciphers that they “age”, and sometimes very far from slowly or gracefully (see the history of DES and FEAL, which in effect put the stake through DES’s heart).

There are two main reasons ciphers age,

1, The reducing cost of resources.
2, The increase in knowledge.

The first kind of follows a power law, so you know approximately when “known attacks” are going to start looking feasible. This is true of, say, RSA, where at one point 512 bits was considered excessive, but these days people mention 8192 bits might not be enough.

Then there is the increase in knowledge. FEAL was the cipher that got so badly mauled it looked like a blood sport to people outside the field of endeavor. The upside of this was a couple of quite powerful attacks that were new to the open community. These answered certain questions about the design of DES, and in turn changed the focus of cipher design, obsoleting some designs of the time (not just FEAL).

Sometimes new knowledge gives rise to new technology. David Deutsch[1] of Oxford and Cambridge universities gave us quantum computing when he came up with a description of a quantum Turing machine. As importantly, he came up with the first algorithm designed to run on a quantum computer.

The upshot: others came up with algorithms that affect ciphers. So now we are in a race between the development of quantum computing and new cryptographic algorithms. The result: many ciphers, and the other algorithms that surround them, need replacing.

Whilst QC is a bit of a shock, it is unlikely to be the last.

It’s why in the past I’ve argued that NIST’s time would be better spent on coming up with standards for frameworks rather than on yet another crypto contest.

[1] https://en.wikipedia.org/wiki/David_Deutsch
