Another New AES Attack

A new and very impressive attack against AES has just been announced.

Over the past couple of months, there have been two new cryptanalysis papers on AES (the second blogged about here). The attacks presented in the papers are not practical—they’re far too complex, they’re related-key attacks, and they’re against larger-key versions and not the 128-bit version that most implementations use—but they are impressive pieces of work all the same.

This new attack, by Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, and Adi Shamir, is much more devastating. It is a completely practical attack against ten-round AES-256:

Abstract.
AES is the best known and most widely used block cipher. Its three versions (AES-128, AES-192, and AES-256) differ in their key sizes (128 bits, 192 bits and 256 bits) and in their number of rounds (10, 12, and 14, respectively). In the case of AES-128, there is no known attack which is faster than the 2^128 complexity of exhaustive search. However, AES-192 and AES-256 were recently shown to be breakable by attacks which require 2^176 and 2^119 time, respectively. While these complexities are much faster than exhaustive search, they are completely non-practical, and do not seem to pose any real threat to the security of AES-based systems.

In this paper we describe several attacks which can break with practical complexity variants of AES-256 whose number of rounds are comparable to that of AES-128. One of our attacks uses only two related keys and 2^39 time to recover the complete 256-bit key of a 9-round version of AES-256 (the best previous attack on this variant required 4 related keys and 2^120 time). Another attack can break a 10-round version of AES-256 in 2^45 time, but it uses a stronger type of related subkey attack (the best previous attack on this variant required 64 related keys and 2^172 time).

They also describe an attack against 11-round AES-256 that requires 2^70 time—almost practical.

These new results greatly improve on the Biryukov, Khovratovich, and Nikolic papers mentioned above, and a paper I wrote with six others in 2000, where we describe a related-key attack against 9-round AES-256 (then called Rijndael) in 2^224 time. (This again proves the cryptographer’s adage: attacks always get better, they never get worse.)

By any definition of the term, this is a huge result.

There are three reasons not to panic:

  • The attack exploits the fact that the key schedule for the 256-bit version is pretty lousy—something we pointed out in our 2000 paper—but doesn’t extend to AES with a 128-bit key.
  • It’s a related-key attack, which requires the cryptanalyst to have access to plaintexts encrypted with multiple keys that are related in a specific way.
  • The attack only breaks 11 rounds of AES-256. Full AES-256 has 14 rounds.

Not much comfort there, I agree. But it’s what we have.

Cryptography is all about safety margins. If you can break n rounds of a cipher, you design it with 2n or 3n rounds. What we’re learning is that the safety margin of AES is much less than previously believed. And while there is no reason to scrap AES in favor of another algorithm, NIST should increase the number of rounds of all three AES variants. At this point, I suggest AES-128 at 16 rounds, AES-192 at 20 rounds, and AES-256 at 28 rounds. Or maybe even more; we don’t want to be revising the standard again and again.

And for new applications I suggest that people don’t use AES-256. AES-128 provides more than enough security margin for the foreseeable future. But if you’re already using AES-256, there’s no reason to change.

The paper I have is still a draft. It is being circulated among cryptographers, and should be online in a couple of days. I will post the link as soon as I have it.

UPDATED TO ADD (8/3): The paper is public.

Posted on July 30, 2009 at 9:26 AM

Comments

aes July 30, 2009 10:16 AM

I suggest AES-128 at 16 rounds, AES-192 at 20 rounds, and AES-256 at 28 rounds.

This would mean that AES would be almost twice as slow, so AES-128 would run at about 25-30 cpb. I would be concerned that users would start rejecting the standard in favour of faster alternatives.

peleus July 30, 2009 10:20 AM

But didn’t we already assume the NSA had known about these attacks all along? Kind of like what happened with DES for a decade before we realized we should use triple DES instead. Regardless, AES will probably remain viable for a good long time for your average user’s needs. Whereas persons with high security needs will/should continue to rely on a security scheme consisting of various tools rather than a single security solution such as cryptography.

aes July 30, 2009 10:22 AM

I suggest AES-128 at 16 rounds, AES-192 at 20 rounds, and AES-256 at 28 rounds.

And what about the key schedule and the S-box (algebraic properties)? Should they be fixed too, just in case?

Casey July 30, 2009 10:41 AM

If we do have to increase the number of rounds due to security concerns, I wonder if this means we need to take a more serious look at Serpent. IIRC, this was regarded as the most secure of the algorithms submitted to AES, and I believe the main point for rejection was speed (don’t quote me on this, I may very well be wrong).

I’m not saying that we should replace AES now or in the near future. I’m just suggesting that we may want to have a backup plan and if adding more rounds to AES is the only solution, then maybe Serpent gives us a better safety margin.

Doug July 30, 2009 10:52 AM

So if I read this right (haven’t seen the paper yet), AES-128 is actually harder to break than AES-256 due to the nature of this attack?

Chris July 30, 2009 11:23 AM

But didn’t we already assume the NSA had known about these attacks all along?

DES was developed by the NSA internally and was not subject to much analysis prior to its release. While it’s debated whether the NSA thought it was faulty to begin with (the “key size” argument), its development was not an open discussion.

AES on the other hand was developed from dozens of openly submitted algorithms that were analysed by any party that wanted to take a look. Rijndael was selected because it seemed to be the best at the time. But there was no conspiracy to push Rijndael as the algorithm because it was known to have flaws. As good as they are, it’s hard to assume that the NSA’s cryptographers are so good that they would find things that the rest of the world’s crypto experts would miss.

Bobby G. July 30, 2009 11:39 AM

What this shows to me is that we never can completely trust that there aren’t unknown better attacks out there. While we can’t assume the NSA (or similar agencies) have better experts, we can chain together a few likelihoods that could be uncomfortable for some. Agencies like this have deep pockets not directly focused on profit and are quite likely to have experts as good as or better than others. Agencies like this are quite unlikely to publicly announce their abilities. A saving grace, I would think, is that they can’t actually use any such ability in legal cases without obviously making it known they have the ability. Hence, they may be limited to clandestine use.

K. Signal Eingang July 30, 2009 12:13 PM

It looks to me like this does show AES-256 to be weaker than 128, but only if the listed preconditions are met – that is, that the encryption method is carried out for fewer rounds than standard, and the attacker already has some relatively hard-to-get information. I would be curious to know how one knows how many rounds are being used, though, and whether it’s important for the attacker to know this number or not.

Blue Moon July 30, 2009 12:20 PM

I think this new attack on AES may cause NIST to shy away from hashes that are based on AES, even though there may be no direct relationship between the attack and the hash.

Gabriel Jägenstedt July 30, 2009 12:21 PM

@ K. Signal: Well, one easy way to check is by having a look in the source code of whatever implementation you are running. For example, the loop-aes implementation clearly uses 14 rounds, and I would guess most other implementations do the same.

However, I’m unsure what effect the fact that loop-aes uses 65 keys rather than one has on this attack.

Bruce Schneier July 30, 2009 1:13 PM

“This would mean that AES would be almost twice as slow…”

No. It would be 60% slower. Right now AES-128 has ten rounds.

Security doesn’t come for free.

Bruce Schneier July 30, 2009 1:15 PM

“But didn’t we already assume the NSA had known about these attacks all along?”

Actually, the NSA approved AES for use up to — I think — SECRET, so they either didn’t know about the attack or didn’t care. Or wanted us all to think they couldn’t break it, and were willing to sacrifice their secrets to perpetuate that belief.

Wheels within wheels….

Bruce Schneier July 30, 2009 1:17 PM

“And what about the key schedule and the S-box (algebraic properties)? Should they be fixed too, just in case?”

Fixing the 256-bit key schedule would be nice, but that’s a more complicated change. Adding rounds is quick and easy, and requires very little thought.

Bruce Schneier July 30, 2009 1:20 PM

“If we do have to increase the number of rounds due to security concerns, I wonder if this means we need to take a more serious look at Serpent…. I’m not saying that we should replace AES now or in the near future. I’m just suggesting that we may want to have a backup plan and if adding more rounds to AES is the only solution, then maybe Serpent gives us a better safety margin.”

We could certainly abandon AES in favor of either Serpent or Twofish. But those algorithms are both over a decade old, and designed for 32-bit CPUs.

I would much rather extend the life of AES now, and have NIST — or someone — hold a new competition for a replacement algorithm after they finish with SHA-3.

Bruce Schneier July 30, 2009 1:22 PM

“So if I read this right (haven’t seen the paper yet), AES-128 is actually harder to break than AES-256 due to the nature of this attack?”

Yes and no. Neither can be broken. There are no attacks against any AES variants that are better than brute force; all of these attacks are against reduced-round variants.

That being said, the key schedule for AES-256 is very poor. I would recommend that people use AES-128 and not AES-256.

Bruce Schneier July 30, 2009 1:24 PM

“What this shows to me is that we never can completely trust that there aren’t unknown better attacks out there.”

Of course.

All we can ever say about the security of a symmetric cipher is that we can’t break it, and that no one else we know has admitted to being able to break it, either.

Bruce Schneier July 30, 2009 1:28 PM

“I would be curious to know how one knows how many rounds are being used, though, and whether it’s important for the attacker to know this number or not.”

The number of rounds is fixed and not part of the key. It’s public information.

In all cryptanalytic attacks, we assume that the attacker knows all the details of the algorithm. We assume they can disassemble the hardware, reverse-engineer the chips, recover the source code, and so on.

Jeremy July 30, 2009 1:31 PM

“Actually, the NSA approved AES for use up to — I think — SECRET, so they either didn’t know about the attack or didn’t care.”

If I recall correctly, they approved AES-192 and AES-256 for TOP SECRET.

I notice you’ve commented that this attack doesn’t extend to AES-128, but haven’t mentioned AES-192. Is that more similar to AES-128 or AES-256?

Bruce Schneier July 30, 2009 1:33 PM

“I notice you’ve commented that this attack doesn’t extend to AES-128, but haven’t mentioned AES-192. Is that more similar to AES-128 or AES-256?”

The new attack does not extend to AES-192, but some of the attacks from the previous few months — still wildly infeasible — do. Given that new results are still coming in fast and furious, I’m not willing to comment on AES-192 yet.

Bruce Schneier July 30, 2009 2:00 PM

“Do you think it’s time to start the work on Threefish?”

As a second-round SHA-3 candidate, I would hope the work has already been started on Threefish.

rm July 30, 2009 2:00 PM

Pretty sick stuff, best thing I’ve read in a while. Lot of lessons to be learned right here.

rm July 30, 2009 2:10 PM

Got me thinking again, remembering your epic book ‘Secrets and Lies’. I know plenty of SSL cert providers that use AES-256, because the number is “higher” than 128, so it must be “better”.

Gotta love security.

Erik July 30, 2009 2:16 PM

“At this point, I suggest AES-128 at 16 rounds, AES-192 at 20 rounds, and AES-256 at 28 rounds.”

A possibly “stupid” question from a non-cryptographer: Why do you need more rounds with longer keys? And how did you come up with these seemingly arbitrary numbers for more rounds?

BR, Erik

Hal July 30, 2009 2:39 PM

I like the idea of increasing AES rounds. Next year Intel is coming out with processors with AES acceleration, what they call AES-NI. This has a single instruction which implements an AES round. It makes AES so fast it is practically free; the other processing typically dominates.

Increasing the number of AES rounds still allows using these instructions, you just put in a few more of them. AES would still be super fast on the new hardware.

Another option to consider would be to in effect double the number of rounds, by standardizing single-key double-AES. (That is, C = AES(K,AES(K,P)) to encrypt P to C.) This would allow the existing investment in AES hardware encryption infrastructure to be maintained. And you’d still get good speed with Intel AES-NI.
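In code, Hal’s construction is just function composition. A minimal sketch, assuming the pycryptodome library and showing a single 16-byte ECB block purely to illustrate the construction (not a complete encryption scheme):

```python
# Sketch of single-key double-AES: C = AES(K, AES(K, P)).
# Assumes pycryptodome; one 16-byte block in ECB mode, illustration only.
from Crypto.Cipher import AES

def double_aes_encrypt_block(key: bytes, block: bytes) -> bytes:
    cipher = AES.new(key, AES.MODE_ECB)
    return cipher.encrypt(cipher.encrypt(block))

def double_aes_decrypt_block(key: bytes, block: bytes) -> bytes:
    cipher = AES.new(key, AES.MODE_ECB)
    return cipher.decrypt(cipher.decrypt(block))

key = bytes(16)                      # demo key; 16 bytes selects AES-128
plaintext = b"sixteen byte msg"
ciphertext = double_aes_encrypt_block(key, plaintext)
assert double_aes_decrypt_block(key, ciphertext) == plaintext
```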

Peter Maxwell July 30, 2009 2:41 PM

Before the paper is published: is there any more information on the relationship between the number of related keys, the number of rounds, and the attack complexity?

Is the attack an algebraic attack?

vedaal July 30, 2009 2:53 PM

a little OT ;

as vulnerabilities keep being discovered in AES, and SHA,

can we have an update on Solitaire?

(even with the bias described by Paul Crowley, it still seems secure, but how much work would it take to break,
relative to the attacks against AES)

also,
was there ever a winner for the Cory Doctorow Crypto Wedding Ring contest?

David Johnston July 30, 2009 2:58 PM

You can’t go round just adding rounds to the AES-128 specification. Its 10 rounds are baked into a lot of hardware implementations in a lot of silicon products.

NIST could specify some new supa-dupa-AES-128, but it would have to be a new specification of a new algorithm. The existing algorithm simply can’t be changed in the millions of products using it.

I know of no hardware implementation that has a ‘more rounds’ register. It would just create problems when you try to get it through FIPS.

Bruce Schneier July 30, 2009 3:05 PM

“A possibly ‘stupid’ question from a non-cryptographer: Why do you need more rounds with longer keys? And how did you come up with these seemingly arbitrary numbers for more rounds?”

Good questions, actually. The meta-answer to both is: diffusion. When block ciphers get broken, they’re invariably broken due to diffusion failures. What more rounds give you is more diffusion.

AES-256 needs more rounds for two reasons. One, the key is twice as long, so it takes more rounds to get complete diffusion of the key. The AES-256 key schedule is particularly bad in this regard, so it takes a lot more rounds. Fourteen rounds just isn’t enough.

And two, no one cares about attacks greater than brute force. Since AES-256 is much harder to brute force, it needs to resist more complex attacks than AES-128.

As to your second question, choosing the number of rounds for a cipher is a combination of experience and guesswork. The AES rounds of 10, 12, and 14 were arbitrary, but represented the designers’ best guess as to what would be secure. In my 2000 paper, I recommended increasing the number of rounds considerably, based on my best guess at the time.

The round recommendations I gave above — 16, 20, 28 — are designed to restore AES’s security cushion. They’re off the top of my head, and certainly not the last word on the topic. I’m sure this third paper isn’t the last word on these attacks — there’ll be further improvements in the coming months — and further discussion is required.

Bruce Schneier July 30, 2009 3:07 PM

“You can’t go round just adding rounds to the AES-128 specification. Its 10 rounds are baked into a lot of hardware implementations in a lot of silicon products.”

Agreed. We would have to migrate the specification slowly and carefully. But as hard as it would be to add rounds to AES, replacing AES with a whole new algorithm would be even harder.

Bruce Schneier July 30, 2009 3:09 PM

“Another option to consider would be to in effect double the number of rounds, by standardizing single-key double-AES.”

This is a clever idea. It halves the speed, but it’s much easier to standardize on.

aes July 30, 2009 3:18 PM

“Another option to consider would be to in effect double the number of rounds, by standardizing single-key double-AES.”

This is a clever idea. It halves the speed, but it’s much easier to standardize on.

But is it secure? Double-DES can be broken with a meet-in-the-middle attack. As for having triple-AES, I think that’s way too slow.

Bruce Schneier July 30, 2009 3:20 PM

“But is it secure? Double-DES can be broken with a meet-in-the-middle attack.”

Single-key double-AES. The meet-in-the-middle attack breaks 2-key variants.

Bobby G. July 30, 2009 3:48 PM

I’m curious about what form these attacks generally take. They are way beyond my math ability, but are they usually in the form of a mathematical proof? Or do they usually present an algorithm for how to implement the attack? I would expect that they never actually demonstrate an attack, as that would take a very long time even with significant CPU resources. Or am I wrong on that?

Hal July 30, 2009 3:51 PM

The reason for double DES was to make the key longer. Regular DES has only a 56-bit key. Doubling it was aimed at giving 2×56, or 112-bit, strength. Meet-in-the-middle attacks defeat this goal, giving two-key double DES still only about 56-bit key strength.

Doubling AES would have a different goal: not to increase the key size, but to make the cipher stronger against cryptanalysis by in effect giving it more rounds. Meet in the middle wouldn’t apply.

However it occurs to me that vanilla double-AES, with the same key, is not the same as AES with twice as many rounds, because the key schedule would be different. You could approximate the effect of double-round AES by making the key for the 2nd AES instance be based on the final round key from the first instance. That would be hard though because hardware implementations won’t spit out that final round key.

Definitely need more thought on this one.
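For readers who haven’t seen meet-in-the-middle before, here is a toy demonstration of the generic attack against two-key double encryption, using a deliberately weak 16-bit stand-in cipher so it runs instantly (this is the textbook construction, not the actual double-DES attack):

```python
# Toy meet-in-the-middle against two-key double encryption.
# The 16-bit "cipher" below is NOT secure; it only needs to be invertible.
from collections import defaultdict

def toy_encrypt(key: int, block: int) -> int:
    x = block
    for r in range(4):
        x ^= key
        x = ((x << 3) | (x >> 13)) & 0xFFFF   # rotate left by 3
        x = (x + r) & 0xFFFF
    return x

def toy_decrypt(key: int, block: int) -> int:
    x = block
    for r in reversed(range(4)):
        x = (x - r) & 0xFFFF
        x = ((x >> 3) | (x << 13)) & 0xFFFF   # rotate right by 3
        x ^= key
    return x

def meet_in_the_middle(p: int, c: int):
    # Encrypt p under every possible K1; index the middle values.
    forward = defaultdict(list)
    for k1 in range(1 << 16):
        forward[toy_encrypt(k1, p)].append(k1)
    # Decrypt c under every K2; a middle-value match yields a candidate pair.
    return [(k1, k2) for k2 in range(1 << 16)
            for k1 in forward.get(toy_decrypt(k2, c), [])]

k1, k2, p = 0x1234, 0xBEEF, 0x0055
c = toy_encrypt(k2, toy_encrypt(k1, p))
candidates = meet_in_the_middle(p, c)
assert (k1, k2) in candidates
# Cost: ~2 * 2^16 cipher operations instead of 2^32; a second known
# plaintext/ciphertext pair would filter out the false positives.
```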

Daniel July 30, 2009 4:12 PM

“Cryptography is all about safety margins. ”

In the AES competition, some teams advocated using a security margin – something like: # of rounds divided by # of rounds of best attack – as a selection criterion, which probably would have favored e.g. Serpent over Rijndael. NIST didn’t follow that idea.

Bruce, with hindsight, and with SHA-3 coming up: Is the computed security margin a useful selection criterion?

Randall July 30, 2009 4:34 PM

@Bruce: Since the attacks have been against the AES key schedule, perhaps we should be considering key schedule improvements instead of raw increases in the number of rounds.

The “W” cipher used by Whirlpool, for instance, uses the round function itself in the key schedule. One of its authors was an AES designer, and it’s been published for a while.

Just tweaking the key schedule preserves speed in applications that use a single key to encrypt lots of data, and it’s seemingly more insurance against future key-schedule attacks than an increase in the round count would be.

Maybe we should be asking NIST to take proposals for an AES-2, an incremental strengthening (but not total redesign) of AES?

Randall July 30, 2009 5:03 PM

A baroque, but somewhat backwards-compatible option: munge the raw key so that an attacker has less control over what’s fed into the weak key schedule. I’m talking about baking this into the definition of tweaked AES, not a protocol-level change.

The hash function could be AES-CTR, SHA-2/3 with reduced(?) rounds, or something special-purpose. It just needs to be strong enough that attackers definitely can’t feed systematic differences into the AES key schedule, and needs to be fast and simple and easy to drop into all the environments that use AES now.
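A concrete, purely hypothetical sketch of the munging idea, assuming SHA-256 as the munging hash and the pycryptodome library:

```python
# Randall's key-munging idea (sketch): hash the raw key before it reaches
# the AES key schedule, so an attacker cannot feed chosen differences into
# the weak AES-256 schedule. SHA-256 and pycryptodome are assumptions here.
import hashlib
from Crypto.Cipher import AES

def munged_aes256(raw_key: bytes):
    schedule_key = hashlib.sha256(raw_key).digest()   # 32 bytes -> AES-256
    return AES.new(schedule_key, AES.MODE_ECB)

cipher = munged_aes256(b"any-length user key material")
ct = cipher.encrypt(b"sixteen byte msg")
```

Related keys chosen by an attacker would then differ unpredictably after hashing, which is exactly the structure the related-key attacks need to be denied.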

Mark July 30, 2009 5:31 PM

@Chris: “DES was developed by the NSA internally and was not subject to much analysis prior to it being released.”

Actually, DES was designed by IBM as a modification to the Lucifer cipher. NSA reviewed it and suggested changes to the S-boxes and reducing the key size from 80 bits to 56 bits. So the NSA had a hand in the result, but it’s not an NSA design per se. (See http://en.wikipedia.org/wiki/Data_Encryption_Standard, e.g.)

Al July 30, 2009 6:25 PM

It was gonna happen. It both scares me and makes me feel safer. On one hand, billions of people rely on it for protecting bank data, SSNs, DMV databases, etc.

On the other, it needs to be tested. We need people like this so we aren’t lured into a false sense of security. More out of paranoia than anything else, I use triple crypto (TrueCrypt): AES, Serpent, Twofish. If I had complete discretion over software selection on a legal professional’s laptop, I’d push for this.

The system encryption will be only AES (for speed reasons). But all crypto containers within will be the triple.

Western Infidels July 30, 2009 6:33 PM

@Brad Conte: And they called me crazy when I chose AES-Twofish for my TrueCrypt partition.

Someone correct me if I’m wrong:

If I understand this right, because it’s a related-key attack, it’s not useful for attacking a single encrypted text like a TrueCrypt volume. To make the attack work, an attacker needs access to lots of pairs of plaintext and corresponding ciphertext encrypted with keys that are mathematically related to the target key.

With a TrueCrypt volume, the attacker would generally have access to the ciphertext only, and would have no obvious way of generating any related keys.

I think the main danger is for network systems, where lots of keys are generated automatically, and where it might be possible to harvest lots of plaintext/ciphertext pairs automatically.
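To make “mathematically related” concrete: in the usual related-key model, the attacker obtains encryptions under the unknown key K and under keys differing from K by a known XOR difference. A sketch of the kind of data such an attacker collects (the difference value and library here are illustrative only, not the paper’s actual attack):

```python
# Illustration of the related-key *model* only, not the new attack itself.
import os
from Crypto.Cipher import AES

K = os.urandom(32)                        # the victim's secret AES-256 key
delta = bytes([0x80] + [0] * 31)          # a known key difference (made up)
K_related = bytes(a ^ b for a, b in zip(K, delta))

pt = b"sixteen byte msg"
ct_pair = (AES.new(K, AES.MODE_ECB).encrypt(pt),
           AES.new(K_related, AES.MODE_ECB).encrypt(pt))
# A related-key attack exploits statistical relations between such pairs;
# getting a real system to encrypt under K XOR delta is the hard part.
```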

Scott Contini July 30, 2009 9:26 PM

Maybe one could consider changes to AES-192 or AES-256 based upon the number of rounds or the key schedule, but those who are suggesting throwing out the whole algorithm are over-reacting.

I posted this on sci.crypt a few days ago and I think it is worth reposting here:

When Rijndael was chosen as the AES, there were three concerns that people voiced about the algorithm:

  1. The key schedule looks dangerous. For example, see the paper “Improved Cryptanalysis of Rijndael” by Ferguson et al. or “Strengthening the Key Schedule of the AES” by L. May et al.
  2. The number of rounds for the 192- and 256-bit versions of the cipher. I remember Shamir saying at a conference that he was concerned about the number of rounds for more than 128-bit key size, and that he expected there would in the future be a theoretical attack on AES-192 or AES-256 which would not threaten it in practice but would cause bad publicity. He suggested that AES-192 have 15 rounds and AES-256 have 20 rounds (if I remember correctly). Shamir’s prediction was 100% correct.
  3. Algebraic attacks. I think the first paper on this topic was “A Simple Algebraic Representation of Rijndael” by Ferguson et al., but then later came the papers by Courtois and Pieprzyk, and Murphy and Robshaw.

It was decided not to change the original proposal. But it turns out that changes according to #1 or #2 above would’ve been a good idea and perhaps would have resisted the present attacks.

(Both Ferguson et al. papers had Schneier as one of the coauthors.)

Scott Contini July 30, 2009 10:01 PM

I hear a lot of talk about “security margins”. I just remind people that SHA-1 had a very large security margin until Wang et al finally figured out how to attack it, and then the design fell quickly. In this case, the difficulty was figuring out how to analyse the darn thing, so the apparent security margin was giving a false sense of strength. Security margins are great to have, but they cannot be solely relied upon. Ideally we want a design that not only has a good security margin, but also has some other good reasons for trusting it. For example, the nice provable resistance to differential and linear cryptanalysis that Rijndael offers is a very good reason (too bad that this design philosophy did not carry over to the key schedule).

Brad Conte July 30, 2009 11:02 PM

@ Western Infidels

If I understand this right, because it’s a related-key attack, it’s not useful for attacking a single encrypted text like a TrueCrypt volume. To make the attack work, an attacker needs access to lots of pairs of plaintext and corresponding ciphertext encrypted with keys that are mathematically related to the target key.

It is somewhat plausible. If TrueCrypt had a flaw in generating keys, I created multiple volumes with it in succession, and an attacker was able to access the contents of a couple of the volumes (say I left the volumes mounted and left the room), it may apply.

But your point is valid: TrueCrypt volumes are not the subject of interest here. However, my intent was humor, not to proclaim the superiority of layering AES and Twofish. 😉

David Wagner July 31, 2009 12:37 AM

I guess I don’t agree with the recommendation to increase the number of rounds in the AES. The AES is still fine, as long as you use it properly. These attacks only apply in a related-key threat model. But for most applications of AES (e.g., for encryption), as long as you use AES properly, related-key attacks are not possible. This does require designing your protocol to avoid introducing the opportunity for related-key attacks, but that is a solved problem these days.

If you want to put it in the terms of theoretical cryptography: as a pseudorandom permutation (PRP), AES is still perfectly fine. Stick to protocols that only count on AES to be a PRP, and you’ll be fine. If you’re using AES in some way so that the overall system’s security can only be justified by assuming that AES is a so-called “ideal cipher” (basically, the block cipher equivalent of the random oracle model), then yeah, you may be in trouble — but in my opinion, that was never all that great an idea anyway, given the earlier results on the AES key schedule.

About the only major usage of AES that’s likely to be affected (as far as I can see right now) is using AES in certain hashing modes of operation (e.g., Davies-Meyer). But there’s probably not much need to use AES in that way in most systems, given the existence of dedicated hash functions that are designed for this purpose.

So, bottom line: great research, but I’m not convinced we need to change the standard.
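For readers unfamiliar with the hashing mode Wagner names: in Davies-Meyer the message block is used as the cipher key, which is exactly why key-schedule weaknesses matter there. A minimal sketch, assuming pycryptodome, AES-128, and input already padded to 16-byte blocks:

```python
# Davies-Meyer compression with AES-128 (sketch):
#     H_i = E(m_i, H_{i-1}) XOR H_{i-1}
# A real design would add padding and length encoding (Merkle-Damgard).
from Crypto.Cipher import AES

def davies_meyer_aes128(message: bytes, iv: bytes = bytes(16)) -> bytes:
    assert len(message) % 16 == 0, "sketch assumes pre-padded 16-byte blocks"
    h = iv
    for i in range(0, len(message), 16):
        block = message[i:i + 16]          # message block doubles as the key
        e = AES.new(block, AES.MODE_ECB).encrypt(h)
        h = bytes(a ^ b for a, b in zip(e, h))
    return h

digest = davies_meyer_aes128(b"sixteen byte msg" * 2)
```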

Clive Robinson July 31, 2009 2:58 AM

@ Bruce Schneier,

“Actually, the NSA approved AES for use up to — I think — SECRET, so they either didn’t know about the attack or didn’t care. Or wanted us all to think they couldn’t break it, and were willing to sacrifice their secrets to perpetuate that belief.”

On a historical note,

It is known that the US put into service a number of cipher systems that had issues to do with “weak keys”.

That is, part of the key space was very weak, part of it strong, and other parts in between.

There have been various suggestions as to why; the most compelling was that the equipment would fall into enemy hands and either be used by them or copied and used. Thus the argument: “We know the keys to avoid but they don’t.”

It also became known that Crypto AG had “advice” from the NSA in the design of systems to go to other countries.

This sort of “reuse” mentality made it into Capstone, with the final system being moderately strong but even minor changes significantly weakening it.

So, as we would say in the UK, “they’ve previous” when it comes to weak-key systems.

Clive Robinson July 31, 2009 3:33 AM

Whilst in historical mode, a question raises its head again:

“WHEN is NIST going to wake up and smell the coffee?”

DES had a problem in that it was a “one size fits all” solution. It was known and recognised as a serious “engineering” issue long before DES by those involved with Gov Crypto, and it quickly became clear when crypto became part of more standard engineering.

NIST at least paid lip service to the size issue in AES, but it went about it the wrong way, which is why we have this and previous threads.

What was also known long before AES is that it’s not just an issue of “size” but also “style”.

That is, some systems require only a modicum of security, others a lot.

Likewise, some systems have to be extraordinarily cheap; for others, money is almost no object.

For instance, transport payment cards and other low value payment cards need to be very cheap to be effective to roll out. They do not need to be secure for one hundred years.

Likewise, “consumer” product design follows Moore’s Law, and designs are obsolete within four years at most.

AES did almost nothing for low value payment cards, which is one of the reasons such crap crypto was used in them, and we get threads about how they are being broken.

Again, this was a well known issue.

You just have to look at the battle between Sky and the card cloners etc. to see this applied to “consumer grade crypto”.

NIST needs to stop this “Champion of Champions” type competition, which is arguably harmful to industry, and actually get on with what it is supposed to do, which is help, not hinder.

I have said for a very long time that probably the most important thing NIST should do for industry is develop a “framework” for crypto and other security where “size”, “style” and “obsolescence” are clearly recognised as major problems to be addressed.

Then they can get back to having “crypto competitions” for the different slots in the framework that industry needs now and in the foreseeable future.

And before people say “it’s not practical”, I’ll tell you in advance: “wake up and smell the coffee”.

In Europe we have a darn sight more experience of system interoperation than you do in the US, which is one reason you have GSM phones that work as advertised, and that get features added as and when industry thinks they will generate revenue.

Clive Robinson July 31, 2009 4:15 AM

@ Brad Conte, Western Infidels,

“And they called me crazy when I chose AES-Twofish for my TrueCrypt partition.”

Hard drive encryption is actually a very hard problem that has a great deal more to do with crypto modes than it does with individual crypto types such as DES, AES, Twofish, etc.

For instance, you have two competing problems:

1, Access speed to blocks
2, Attacks in depth.

Historically, the solution to the first would be to use some kind of stream cipher with offset capability.

However, the last place on earth you would use a stream cipher is one where attacks in depth are possible.

Likewise, historically the solution you would use for the second problem would be a block cipher in chaining mode.

But for fast access, that’s the last thing you would do on a hard drive or a volume on it.

One possible solution would be to use a collection of modes, like chaining on individual blocks with IVs based on another mode such as CTR. But that in turn has its own issues.

So, to Western Infidels’ question,

“If I understand this right, because it’s a related-key attack, it’s not useful for attacking a single encrypted text like a TrueCrypt volume. To make the attack work, an attacker needs access to lots of pairs of plaintext and corresponding ciphertext encrypted with keys that are mathematically related to the target key.”

In a single encrypted container (especially with snapshots), most of those conditions are likely to be true, depending on the modes used, how the HD encryption designer went about their task, and what the business drivers were/are.

And remember, as with WEP, the designers may be expert in their own field of endeavour but not in crypto, as Apple appears to have recently demonstrated with their 3GS iPhone (like Bruce, I want to see a lot more on that attack).

Which brings us around to Western Infidels’ differentiation between networks and hard drives,

“I think the main danger is for network systems, where lots of keys are generated automatically, and where it might be possible to harvest lots of plaintext/ciphertext pairs automatically.”

This is also true of HD encryption on very large volumes such as storage arrays.

It is because of this and many, many other issues that storage crypto is a very hard problem, probably more so than “databases”, which have on the face of it a subset of the problems of HD crypto. But you should hear that little voice asking “hey, aren’t they stored on HDs?” 😉

The point being that none of the issues, or the solutions to them, can be considered in isolation, and you drop into that delightful N^2/2 problem at the design stage…

As Brad noted,

“TrueCrypt volumes are not the subject of interest here.”

Nor is HD crypto in general; it’s a subject that would take up many threads, books, etc. (and already has done so).

BF Skinner July 31, 2009 5:31 AM

@peleus “But didn’t we already assume the NSA had known about these attacks all along”

Where’s the NSA’s incentive to withhold findings? Did I hear wrong that AES has been cleared for encryption up to TOP SECRET? I knew about use at SECRET.

@Clive “Crypto AG had “advice” from the NSA ”

Arguably true for AG, and I’ve heard the story and it sounds right to me. But there’s a difference between crippling the cover of other people’s communications and doin’ it to yourself. While NSA may have a mandate to read US mail going overseas (they won’t say), AES is the mandated standard for ALL gov’t encryption (presumably outside of military comms). And if they’ve deliberately allowed US information that, if compromised, will cause exceptionally grave damage to the US to be protected by weak ciphers, that’s beyond wheels within wheels. It’s likely criminal, or treason.

Clive Robinson July 31, 2009 5:46 AM

@ Bruce,

You can tell it’s a slow day waiting on the NHS…

With regard to your quick comment,

“Adding rounds is quick and easy, and requires very little thought.”

I’m going to sound picky, but I have to disagree with what you’ve said there.

It should be “requires very little resources”.

As was shown with 2-DES, too little “thought” gave rise to a new attack.

The real issue is that crypto and its protocols are a very specialised field of endeavour, much like quantum physics in many ways.

The people who use the products of both of those fields are other scientists and engineers.

Scientists have depth (but rarely breadth) and engineers have breadth (and sometimes depth).

Engineers by and large treat the products of both crypto and quantum as “off the shelf” building blocks.

That is, as a “resource” which they take on spec as “doing what it says on the packet”.

Therefore, any thought they give it is in most cases about how to efficiently put it into whatever they are currently designing. They generally don’t question its fundamental behaviour unless problems arise, or associated activity such as drawing up “user/test cases” focuses a strong light on it.

I know it’s wrong, but at the end of the day you only have 24 hours, of which “the man” pays you for eight, even though you’ve worked 12 just to meet the demands of “the man” and keep your job.

Clive Robinson July 31, 2009 6:10 AM

“Cryptography is all about safety margins.”

Whilst true it hides a lot of thorny issues out of sight.

First off, a deliberate (as opposed to accidental) margin implies that the designer knows that it is required.

And it follows from that that they also have reason to believe they know “why”.

The first problem is that the “why” covers known attacks and probable attacks, not unknown or not-understood attacks.

The second problem is, as Bruce has observed with hashes in the past, that strengthening against one type of attack has a habit of weakening against other attacks.

That is, there are “trade offs” not just with speed and efficiency but also between aspects of the strength of a crypto algorithm.

It may be (although I can’t immediately see a valid argument) that increasing the amount of perceived “diffusion” from increased rounds might have the opposite effect when the “diffusion” is examined in a different manner.

And the crypto designer needs to keep all this in mind when selecting which margins and by how much.

Which is equivalent to playing an unknown game with unknown equipment on an unknown ground in the dark against an unknown opponent.

As the Chinese curse has it “may you live in interesting times”.

Mark R July 31, 2009 6:29 AM

Western Infidels said:

“I think the main danger is for network systems, where lots of keys are generated automatically, and where it might be possible to harvest lots of plaintext/ciphertext pairs automatically.”

Am I right in thinking a web server running SSL/TLS fits this description perfectly?

Clive Robinson July 31, 2009 7:03 AM

@ BF Skinner,

“But there’s a difference between crippling the cover of other people’s communications and doin’ it to yourself.”

Ahhh, I did not make myself clear.

The joy of having a system with 20% of the key space being very strong and 20% being very weak is that, “if you know”, you stick to the keys in the “very strong” group. You just design the system such that that 20% of the keyspace is going to cover your conceivable needs.

Your opposition, however, if they do not know, will use keys from across the full keyspace in a random way. So sometimes they will use very strong keys, but just as likely they will use very weak keys (1 in 5 messages).

The advantage for the NSA is not just in getting access to those 20% of messages under the very weak keys. It’s all the info in those messages that enables you to develop “signals knowledge” that will provide significant help in breaking the 60% of messages encrypted under the strong keys.

The important fact that you may not have been aware of (and I did not mention) is that all KeyMat used by the US Gov comes from the NSA, either directly or indirectly through the systems they have developed. Thus this would ensure that the US Gov only ever used the very strong keys.

So no, they would not be leaving themselves open to charges of treason.

As I also said, the ill-fated Capstone system used a crypto design that, as a number of observers have said, is very brittle. That is, when the individual components of the crypto algorithm are used the way the NSA put them together, it is strong. But even apparently very minor changes weaken the result dramatically.

I find it hard to believe that the NSA arrived at such a design just by accident.

Also, the LEAF “oops” smells like a red herring in the sun. No lack of respect to those who were responsible for putting it in, but it was a bit too obvious, which is why a “get around” was probably spotted so quickly. The advantage to the NSA is that it gives the majority a false sense of security and, if used, is not an impediment to their activities.

I suspect that, when examined, you will find there are also exploitable problems in the key space, with very strong and weak keys in equal measure along with the bulk being strong.

In (open) practice there are currently only three practical ways to attack a crypto system that is in use:

1, key space attacks
2, side channel attacks
3, in depth attacks.

As the last is well known, and for high-level communications largely mitigated by various cipher modes, it is not a likely avenue of attack except in “consumer” grade communications and other “consumer” uses of crypto (see my comments about HD crypto).

The second is an implementation issue that, although well known prior to AES, was ignored during AES (loop unrolling / cache / etc). As a result there are very practical attacks against “consumer grade” AES systems that are in use. Such attacks however are not likely to be of any use against Gov-level systems, simply because the issue is so well known to them (EmSec/TEMPEST).

Which leaves “key space attacks” as one of the areas the likes of the NSA and GCHQ are going to be playing in when it comes to second-tier governments and their crypto systems.

And as I noted, “the NSA has previous” in this area, although it is not widely known.

Now, before you write me off as a “conspiracy nut”, try putting yourself in the NSA’s shoes. They have two very conflicting primary purposes: “protect US” comms, “break non-US” comms.

It has been known since long before the NSA existed that your systems are “known to the opposition”, and even in modern times people will pass across info to the opposition for purely “human” reasons.

Ask yourself the question “how do I do both tasks, knowing that my systems are known to the opposition?”

Then think about how nicely “key space” attacks answer the problem…

Oh, and if you can think of a better way, feel free to give voice to it, as the “open crypto” community will thank you for it. Not least because it will give them a new field of research 😉

David July 31, 2009 8:23 AM

I don’t understand Bruce’s comment about AES-256 being much harder to brute force than AES-128.

Brute-forcing a 128-bit key will require, on average, roughly 1.7 x 10^38 tries. If we assume the attack is limited to ten billion years, that’s roughly 3 x 10^17 seconds to attack it in. That means it’s necessary to try about 5 x 10^20 keys per second to get an average of one 128-bit key broken each ten billion years. If a machine can try one key a nanosecond, that means we need over five hundred billion machines, and we have to have means to power them after the Sun dies. If we want to brute-force it in ten years instead, that’s about 5 x 10^20 machines.

I think that’s effectively impossible. Saying that a 256-bit key is harder to brute-force is like saying that it’s harder to swim the Pacific than the Atlantic.
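The arithmetic above is easy to check mechanically; a back-of-envelope sketch (figures rounded):

```python
# Back-of-envelope check of the brute-force figures above.
avg_tries = 2 ** 127                     # average work for a 128-bit key
seconds = 10e9 * 365.25 * 24 * 3600      # ten billion years, in seconds
keys_per_sec = avg_tries / seconds       # ~5.4e20 keys/second needed
machines = keys_per_sec / 1e9            # at one key per ns: ~5.4e11 machines
print(f"{avg_tries:.1e} tries, {keys_per_sec:.1e} keys/s, {machines:.1e} machines")
```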

Sam Trenholme July 31, 2009 9:45 AM

Bruce,

I remember a few years ago you made a comment to the effect that “lower powered computers do not go away, but end up in other places”; for example, 8-bit processors end up in control systems for cars and what not.

Indeed, while modern computers by and large do have 64-bit processors, many Netbook CPUs do not have this support, nor do the ARM processors used by the majority of cell phones. In addition, 64-bit software is still not very common, and there are software support issues for people trying to use a 64-bit operating system; most Windows installations today are still 32-bit.

This in mind, I’m a little disappointed that more effort hasn’t been made to make a bona fide 32-bit version of Threefish/Skein. The Skein paper points out this is possible. From section 5.4:

“All versions of Skein are specified with 64-bit words. The word size can be seen as a tunable parameter; we can define a Skein variant with 32-bit words. This variant would run much faster on 32-bit CPUs, but significantly slower on 64-bit CPUs.”

One thing I would like to see is better support for legacy 32-bit processors. This is the one reason I prefer Keccak over Skein, since their paper does describe how to make a 32-bit version of the hash.

Randall July 31, 2009 1:36 PM

@Sam: You can get 20 cpb for Skein in 32-bit code today. If it were parallelized, you could get a little over 10 cpb. That’s because all 32-bit desktop processors (and even some ARM chips) have instructions that can do math on pairs of 64-bit numbers.

I think the Skein team made a reasonable decision. Skein doesn’t have any real deal-breakers for low-end hardware — no 64-bit multiplications or 8×64 S-boxes or such. And using 64-bit additions is a big speed gain on NIST’s 64-bit reference platform. Wish they’d default to the tree mode, but you can’t have everything.

Randall July 31, 2009 1:43 PM

Er, I said “all 32-bit desktop processors” have those instructions, but I mean all reasonably recent x86 processors, both desktop and mobile (Intel Atom, etc.). There might be a few embedded processors that don’t have it still. The 64-bit instructions in SSE2 first appeared in the Pentium 4 in 2001.

Doubtful July 31, 2009 5:22 PM

Here’s a thread on this on the TrueCrypt forum. A guy called “Hiu” over there is really laying into Bruce’s coverage of this topic. Since I’m not an expert, I would like to see some sort of debate between both camps. A non-expert really can’t get any sort of meaningful understanding if you have one side talking here and the other side on another forum.

http://forums.truecrypt.org/viewtopic.php?t=16870

bob d July 31, 2009 6:41 PM

Bruce,

$50 and a bottle of 1989 La Tour says that in five years AES* will be cracked via algebraic attack 🙂

Joe C July 31, 2009 7:14 PM

Read an interesting comment about this on Slashdot… thought I would copy-paste it here. Bruce, I believe you sort of alluded to this in one of your BlackHat/DefCon talks, in stating there is some reason to believe the NSA might have better means for calculating security margins in advance.

Quote:

“The best attack against DES breaks 15 out of 16 rounds faster than brute force. However for the full 16 rounds, the best attack against DES is brute force. Likewise, the best attack against SKIPJACK breaks 31 out of 32 rounds. In both cases NSA was fairly involved with the development of the algorithms and they just happen to have no “security margin”. Perhaps that means NSA was ignorant of the methods (such as impossible differential cryptanalysis) that the academic sector developed. Perhaps it means that NSA is willing to play fast and loose with securing government communications. Or maybe, just maybe, it means that NSA knows exactly how strong the algorithms are and doesn’t need to rely on ad-hoc measures like “security margins”. I don’t know, but the fact that AES-192/256 is specified for Top Secret while AES-128 is for Secret makes me suspect that NSA knows far more about the real security levels of AES than the keysize would indicate.”

Source: http://tech.slashdot.org/comments.pl?sid=1322099&cid=28904549

iv July 31, 2009 7:24 PM

I’m interested in cryptography solely as an end-user / system implementor, not a designer, so the following post comes from that viewpoint.

For a long time (since before AES) whenever I needed crypto, I used Blowfish. After AES appeared I still used Blowfish out of inertia and have only recently started to use AES more than Blowfish.

What interests me is: how come Blowfish, designed in 1993 (at least according to http://en.wikipedia.org/wiki/Blowfish_%28cipher%29), appears to be more secure than AES today? I’ve read that its worst features are the complex (long) key schedule and the 64-bit block size, but apparently the key schedule is a huge part of what keeps it safe, and just how bad is a 64-bit block size really in practice?

Doubtful July 31, 2009 7:55 PM

@ bob d

I was going to post something in the same vein. However, I’m going to say the opposite. I’ll bet you the full 14 rounds are still secure (though not necessarily unbroken) in 20+ years.

Here’s my problem with all of this. This thread is unnecessarily alarmist. Bruce criticizes people all the time for the exact same type of thing he’s doing here. 11 rounds are broken (though still secure for now) using an impractical attack. 12, 13, and 14 rounds are still unbroken.

So, the way TrueCrypt implements AES still requires an exhaustive brute force attack to break it.

I’m usually a fan of this site, but full-round AES is NOT broken.

However, knowing the people at TrueCrypt, they’ll probably relegate AES to legacy, even though there’s no reason to do so other than the alarmism.

Doug Coulter July 31, 2009 8:33 PM

Unless the NSA has completely lost it since I worked with them and DOD back in the day, things like “this is good enough for secret, or top secret” aren’t how it’s done, and for good reasons that I can’t see have changed much. This was a while back, understand.

Some security needs to be long lived (perhaps forever) and some can tolerate short lifetime. For example, communication among fighter pilots or tankers may only need to be a secret for a little longer than a mission, or a conflict at most, while some things need to stay secret “forever”. In the cases I’m aware of, providing a super level of security for a moving platform just used too much computational power (at the time) to be practical, and it didn’t seem to matter if someone could reliably crack it in a few weeks anyway — they were vaporized by then. And any information the adversary might have gotten was of purely academic or historical interest by that time.

On the other hand, some things, like methods and the names of certain people in a network, you’d hope would be secure against a concentrated attack on the cipher by anyone interested, even an adversary with the resources of a state.

So the metric used back then (70’s and 80’s) was more like “how long can this last against a concerted attack”, not in any way “how secret is the stuff we want protected”. Operational security is certainly very important, but it doesn’t matter much after the operation is done, so can use a code that’s breakable, but that takes enough time to break so as not to matter to the operation.

Albert July 31, 2009 9:28 PM

When I worked at RSA, back when the AES competition was going on, I talked to quite a few cryptographers, and they all were fairly confident that Rijndael would win. I was voting for Serpent. But I had made a comment that, because we weren’t as familiar with the Rijndael Square structure as we were with, say, a Feistel-based structure, there might come a time within 8 to 10 years when we might be looking at TRIPLE-AES, as we did with Triple-DES.

I think with this new attack, the time has come…

Clive Robinson August 1, 2009 2:20 AM

@ Doug Coulter,

“Unless the NSA has completely lost it since I worked with them and DOD back in the day, things like “this is good enough for secret, or top secret” isn’t how it’s done, and for good reasons that I can’t see have changed much.”

Nor has it changed in other places that I’m aware of (though like you it’s been a few years).

One security parameter that used to be an “accepted norm” was “if we can’t break it we don’t use it”. This was because its strength was unknown, and that was a major cause of concern for exactly the reasons you stated.

Oh, and inconvenient as they have always been, the NSA and others still print up OTPs of various forms in large quantities.

And other people have realised that OTP is still a good way to go. I recently saw a prototype of a USB device that was an OTP system.

The design is quite “cute”, and I was surprised at how much thought went into it. It managed to be both conservative and innovative. The amount of keymat it holds is surprisingly large (and no, it did not use flash memory for storing the pad) and would be more than sufficient for a very busy MS-Office-using person on a short trip.

The only issue with it being “really cute” is the current EmSec (EMC style) protection, but I don’t think that is going to be an impossible problem to solve 😉

Clive Robinson August 1, 2009 3:23 AM

@ Doubtful,

“Here’s my problem with all of this. This thread is unnecessarily alarmist. Bruce criticizes people all the time for the exact same type of thing he’s doing here.”

Some of the posts may be, but I don’t think Bruce is. In fact, he may actually be being conservative.

You need to be mindful of the intended audience Bruce is addressing in this particular thread. It is not the ordinary blog readers but those who take a much more in-depth view.

As with any sufficiently complex subject, it has a language of its own, where words and phrases have specific meanings that are different from their looser everyday meanings. And that is always going to be a bit of a problem on an open blog, especially one that encourages non-experts and experts to participate on an equal basis.

What is concerning Bruce, and what he is talking about, is the speed at which the (known) effective security of AES is dropping.

In essence, he has updated a line on a graph in his head with the latest results and seen that it crosses the axis a lot sooner than it did before. And this is significant not of itself but as an indicator of what might be to come (have a look at the history of FEAL for why).

Whilst currently of no practical concern to Joe Public, it may become one soon if a new class of attack comes out of the work. But it is of concern to those “in the field of endeavour”, as it affects new designs currently in progress.

So you need to view it against an individual’s needs.

Now, some people have little need of security margins: their data or application has a short life of a couple of years.

Other people need much longer protection on their data, say 25 years (the length of a mortgage, for instance).

And some have need for 999 years (the length of some land leases).

And of course there will be some for which the time to the “end of the universe” is a little too short.

As Bruce has made clear in the past, those practicing in the crypto field have different views from Joe Public on what is and is not broken.

If you look back you will see that the definition is one where an attack is less than “brute force”.

It’s a nice simple definition, but it is timescale-independent, and there is the problem for most people.

Now, other things being equal, the time for a “brute force” attack doubles with each additional bit of key length. Therefore a “break” of a 256-bit system compared to a 128-bit system is going to be 2^128 (3.4E+38) times as hard and take that many times as long.

However, not all things are equal; sometimes different attacks can be refined or combined quite quickly, and then the relation of key size to work may actually be linear, not exponential; that is, a 256-bit key being only 2^7 (128) times stronger.

Then there is Moore’s law and a whole host of other things to take into account before a “real time line” figure can be reached that Joe Public will have a conceptual feel for.

Bruce is being cautious and saying AES appears to be succumbing to new attacks faster than was originally thought by the designers.

Which is of concern to those designing new crypto primitives (i.e., hashes at the moment) around AES or the AES structures.

He has not said that Joe Public needs to worry about using AES for what it was designed for.

Clive Robinson August 1, 2009 4:00 AM

@ Doubtful,

“A guy called “Hiu” over there is really laying into Bruce’s coverage of this topic.”

I had a quick look over there at “hiu’s” comments, and it appears that he and Bruce are coming at the issue from different angles.

Bruce is saying that, due to problems in the key scheduling in 256-bit AES, the number of rounds should be increased, or people should stick with 128-bit AES, which has a different/better key scheduling.

Hiu appears to think Bruce is arguing just about a weakness in the rounds…

So Hiu is arguing against what he thinks Bruce has said, NOT something that Bruce has actually said.

Hmm, based on that discrepancy of viewpoint, you would have an argument that would be both pointless and potentially endless.

Roger August 1, 2009 6:40 AM

I was going to say something very similar to Doug’s remark. It might explain why the NSA would approve AES-192 and AES-256 for TOP SECRET, even though AES-256 appears to be weaker than AES-128 against certain classes of attack. (The alternative explanation, of course, is that they didn’t know of these attacks.)

If a brute force attack on a 128 bit key is the best you can do, then for the time being that cipher is totally unbreakable. For that matter, so is a 96 bit key.

However at some point in the distant future advances in computational technology (e.g. quantum computing, which effectively halves the length of a symmetric key) may make that cipher vulnerable to key searches. Thus if a message must remain secure for many decades to come, it would seem prudent to use a key length that is at least double what you might otherwise think is suitable.

There are other attacks, which may be faster than a brute-force search. Even if they are not currently practicable, we might wonder if this could be an issue for messages that must remain secure for a very long time. The answer is no, they are probably not a problem. Pretty well all such attacks require extremely large numbers of CT/PT pairs with special properties, related keys, or some other kind of interaction with the system. Only very weak ciphers are vulnerable to better-than-brute-force attacks that require only a small number of CTs, no PT, and one key. So if you currently practice good habits with your key management, then these sorts of attacks will probably never become a threat, even if they become computationally feasible, because the attacker cannot reach back into the past and get you to encrypt some chosen PTs for him.

Thus for short term secrets, we may be concerned with smarter attacks, but the attacker only gets to use current computational power. All 3 versions of AES seem to be OK in that regard. For long term secrets, the attacker gets to use fabulous, scarcely imaginable computational power, but it is unlikely he can launch any attack more efficient than QC brute force. AES-192 and AES-256 are OK in that regard; AES-128 is not.

Roger August 1, 2009 8:00 AM

@Doubtful:

Here’s a thread on this on the TrueCrypt forum. A guy called “Hiu” over there is really laying into Bruce’s coverage of this topic. Since I’m not an expert, I would like to see some sort of debate between both camps.

So would I, but I can’t register to post there without giving a paid email address — FTS.

Anyway, pretty well everything hiu says is either exaggerated, misleading, misinformed, or misrepresents what Bruce has said. Not to mention just plain rude. Examples:
1. hiu wrote: “Schneier should follow his advice and stop writing those sensationalistic FUD-creating articles. ”

After Bruce wrote such sensationalistic stuff as “The attacks … are not practical” and “There are three reasons not to panic”.

2. hiu wrote: “- There are currently no attacks on the full AES-128 or the full AES-256 (faster than brute force).”

This is false, and I hope hiu is not involved with TrueCrypt development if he is happy to throw around such strong opinions without staying abreast of the research. In fact, the related key attack on AES-256 which was published by Biryukov and Khovratovich last month does break the FULL AES-256 faster than brute force — in fact, even faster than brute force on AES-128.

This attack is actually explicitly referred to in the main blog posting above, leading us to the inevitable conclusion that hiu doesn’t bother to actually read things before venting his spleen.

You may wonder why we care about a new attack on a reduced-round version when there is an existing attack on the full version. Quite simply, the new attack gets through nearly all the rounds while cutting the work factor by a factor of 2^49. If (and this is pure speculation) the last three rounds could be pushed out at the same exponential growth rate as the first 11, we might see a full-round attack with complexity 2^89. That would be 1 billion times faster in just one month. Ouch.

3. hiu wrote: “Because since the very beginning, 9 out of 10 rounds of AES128 are broken. That’s right.” and, when challenged for a source, replied: “The paper was written by the Twofish team including Schneier in the year 2000. See page 5: http://csrc.nist.gov/archive/aes/round2/comments/20000515-bschneier.pdf”

In fact, page 5 of this paper makes no such claim! Rijndael (it wasn’t yet called AES) is not even mentioned on that page!

Possibly hiu means page 4, where a table notes a safety factor of 1.11 for Rijndael. However, hiu seems to think (without reading the original paper to which this refers) that this means Rijndael-128. If he had read the text leading up to it on page 3 more carefully, or, heaven forbid, read the actual paper it cites (a paper co-authored by Bruce and available on this very website), he would have realised that the attack is actually against 9 rounds of Rijndael-256, now known as AES-256: the very cipher currently being criticised. Thus, if he actually understood his own reference, he would see that it blows the ground out from under his own feet.

I could go on, but life is short, and there are plenty more like hiu on the internets …

ConfusedOne August 1, 2009 8:41 AM

Am I missing something? Why is single-key double AES more secure than double-key double AES?

The Meet-In-The-Middle attack is even easier with the same key, is it not? The original attack needs to store all the results from encrypting a plain text with all keys and then comparing it to the decryption with all keys. This storage is quite unlikely to be feasible with 256 key bits, so I would think the Meet-In-The-Middle attack is infeasible for double-key double-AES.

However, if you know that the key is the same for both steps, you do not need to store all the intermediate (‘middle’) results: you can try every key, encrypting the plaintext and decrypting the ciphertext, and compare directly. If the results are equal, you have found the correct key, using only O(1) space instead of O(2^n), and the same amount of time: 2 times more than single AES.
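
To make ConfusedOne’s point concrete, here is a minimal sketch with a toy 16-bit keyed permutation standing in for AES, so both searches finish instantly. Everything here (the names, the toy cipher) is invented for illustration:

    def toy_encrypt(block, key):
        # Stand-in 16-bit keyed permutation; NOT a real cipher.
        return (((block + key) & 0xFFFF) * 40503) & 0xFFFF

    def toy_decrypt(block, key):
        # 40503 * 30599 == 1 (mod 2^16), so this inverts toy_encrypt.
        return ((block * 30599) - key) & 0xFFFF

    def crack_same_key(pt, ct):
        # Single-key double encryption: re-encrypt the known plaintext
        # under every candidate key and compare. O(1) memory.
        return [k for k in range(1 << 16)
                if toy_encrypt(toy_encrypt(pt, k), k) == ct]

    def crack_two_keys(pt, ct):
        # Meet-in-the-middle for two independent keys: tabulate every
        # "middle" value E(pt, k1), which needs a table the size of the
        # key space, then look up D(ct, k2) for every k2.
        middle = {}
        for k1 in range(1 << 16):
            middle.setdefault(toy_encrypt(pt, k1), []).append(k1)
        return [(k1, k2) for k2 in range(1 << 16)
                for k1 in middle.get(toy_decrypt(ct, k2), [])]

    key = 0xBEEF
    ct = toy_encrypt(toy_encrypt(0x1234, key), key)
    print(key in crack_same_key(0x1234, ct))  # True (other keys may also match)

With a real 128-bit block, a second known pair would weed out the false positives; the point is only that the same-key case needs no table at all, while the two-key case needs storage the size of the key space.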

Sherwood Botsford August 1, 2009 9:04 AM

It behoves the makers of software to make the crypt units modular.

Suppose that tomorrow someone comes up with a scheme that breaks all forms of AES in time short enough to be useful.

Every AES dependent system in the world is no longer secure.

It is unlikely however that all the crypt systems will be broken simultaneously.

Suppose that at any given time a software package that uses cryptography has 3-6 different modules. If AES breaks, then either you, your software vendor, or the other end of the secure connection marks AES as broken, and your software uses Twofish instead. When Twofish breaks, you use red-fish, then blue-fish.

In economics you diversify your portfolio. You don’t want to have all your money in Exxon when the Valdez hits the rocks.

In ecology, population dynamics are more stable with increasing numbers of species. In the arctic when the lemming population crashes so does the owl population. In Alberta when the rabbits come down with tularaemia and crash, the coyotes eat more mice. (And lamb…)

The Irish potato famine was caused in part by using the same variety everywhere. The spuds were genetically identical. Now when they are planting willow in high density as biofuel, they recommend using at least 5 different unrelated strains: A pest that affects one strain badly can take out only 20% of the supply.

Vancouver planted one variety of cherry tree as a street tree. Block after block was planted with this tree. When they got some fungus it spread like wildfire. Now they don’t plant more than 2 blocks of a given variety, then leave a buffer.

One of the virtues of multiple computer operating systems is increased resilience. The attack that breaks Windows doesn’t subvert my Linux box. The root kit that subverts my Linux box doesn’t work on my wife’s Mac. The exploit for BIND 9 doesn’t work on djbdns or MaraDNS.

So too in cryptography: you want multiple ciphers in place and ready to use. When the great Next Best Thing crypto contest concludes, I hope that NIST recommends that 5 be accepted as standards, and that they mandate a framework for modularity so they can be dropped in place and verified.
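
A minimal sketch of such a pluggable framework (the registry and its API are invented for illustration; no real standard is being quoted):

    # Hypothetical cipher registry: callers state their preferences, and
    # a broken cipher is retired with one flag flip, not a code change.
    CIPHERS = {}  # name -> (encrypt_function, is_broken)

    def register(name, encrypt_fn):
        CIPHERS[name] = (encrypt_fn, False)

    def mark_broken(name):
        fn, _ = CIPHERS[name]
        CIPHERS[name] = (fn, True)

    def encrypt(preferences, key, plaintext):
        # Walk the preference list, skipping anything marked broken.
        for name in preferences:
            fn, broken = CIPHERS.get(name, (None, True))
            if not broken:
                return name, fn(key, plaintext)
        raise RuntimeError("no usable cipher registered")

    register("AES", lambda k, p: b"<aes ciphertext>")        # stand-ins
    register("Twofish", lambda k, p: b"<twofish ciphertext>")
    mark_broken("AES")                                       # the bad day
    print(encrypt(["AES", "Twofish"], b"k", b"p")[0])        # Twofish

A real framework would also record the chosen cipher in the message or volume header, so the other end knows what to decrypt with.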


The other concept is the use of dummy payloads. Consider the following:

I want to keep a secret from the NSA. My source and my destination exchange messages constantly, all encrypted. Most of the plaintext is the output of /dev/random or another noise generator. Firstly, this makes it tricky to tell if you have successfully decrypted the message. Secondly, as long as encryption is faster than cracking, you can waste far more of their time than yours. Thirdly, if typical cracking systems work on the first N blocks of a message to decide whether it is interesting, then having a real message with a Y-block preamble of /dev/random means either that they have to up the ante and pick a new N > Y, or they will miss real messages.

Dummy payloads can help defeat traffic analysis too. E.g., if you can access some of the machines using an onion router, and can determine a high level of correlation in activity on machines A and B, then you know there is a link. On the other hand, if the network is set up to exchange dummy payloads whenever traffic permits, or to generate spikes in traffic between arbitrary pairs of units, then traffic analysis becomes less useful unless you can tell real packets from dummy packets.


The use of honeypot transactions can help in a security audit. Example:

As a large bank, I ask employees to install package X on their home computers. X exchanges transactions with the bank computer, but does so using fictitious accounts. The bank computer watches for attempts to access these accounts. All attempts to access account Fictitious 121 should come from X121. A successful attempt to access an account from elsewhere shows that the system is broken.

Pierce Wetter August 1, 2009 11:10 AM

If key scheduling is the issue and the attack only works with related keys, what if you just SHA-256 the original key first? That would make it very very hard to create related keys.
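
A sketch of what Pierce describes, assuming the third-party pycryptodome package for the AES call (the hash itself is just the standard library). Hashing the key destroys any chosen relation between two user-supplied keys, though as the replies below note, whether that helps depends on the rest of the design:

    import hashlib
    from Crypto.Cipher import AES             # pycryptodome, an assumed dependency
    from Crypto.Random import get_random_bytes

    def encrypt_with_hashed_key(user_key: bytes, plaintext: bytes):
        # Derive the actual AES key by hashing the supplied key, so any
        # attacker-chosen relation between user keys is destroyed.
        aes_key = hashlib.sha256(user_key).digest()  # 32 bytes -> AES-256
        nonce = get_random_bytes(12)
        cipher = AES.new(aes_key, AES.MODE_GCM, nonce=nonce)
        ct, tag = cipher.encrypt_and_digest(plaintext)
        return nonce, ct, tag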

Doubtful August 1, 2009 3:43 PM

@Roger

This is false, and I hope hiu is not involved with TrueCrypt development if he is happy to throw around such strong opinions without staying abreast of the research. In fact, the related key attack on AES-256 which was published by Biryukov and Khovratovich last month does break the FULL AES-256 faster than brute force — in fact, even faster than brute force on AES-128.

Doesn’t this only apply when AES is used as a hash? I don’t think this applies to the way it is used in real life (e.g. TrueCrypt, PGP), so I think hiu is actually correct.

Also, I don’t believe this attack applies to real-life usage. My understanding is that even if TrueCrypt used 11 round AES (instead of the 14 they use), this attack wouldn’t apply. So, I have to ask the question, is 11 round AES actually broken if no one uses AES in a way that would be susceptible to this attack?

So, my question is, how do you actually define broken? Shouldn’t it apply to real-world usage and not require data that you can’t actually obtain in real life?

Doubtful August 1, 2009 4:06 PM

Let me clarify what I said above. This is my understanding (a non-expert perspective):

The way TrueCrypt uses 14 round AES still requires an exhaustive brute force attack, thus it is not broken. I believe that neither of the attacks that came out in the last month apply to the way TrueCrypt and other OTFE software uses AES-256.

If TrueCrypt were to use 11 round AES but keep all other implementation the same, I believe it would still require an exhaustive brute force attack, thus it is not broken for this application. My understanding of this new attack is that it requires information that cannot be obtained in real life.

If the above is correct, then is 11 round AES actually broken?

Thanks

Clive Robinson August 1, 2009 5:05 PM

@ Pierce Wetter,

“If key scheduling is the issue and the attack only works with related keys, what if you just SHA-256 the original key first? That would make it very very hard to create related keys.”

The simple answer is yes.

This is the basis of a number of “generator” modes for block ciphers.

Perhaps the simplest to understand is CTR mode, where you have a simple binary counter at the input and you encrypt under a randomly selected key.
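
For instance, a bare-bones CTR keystream generator (a sketch only, again assuming pycryptodome; a real deployment must never reuse a nonce/counter pair under the same key):

    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes

    def ctr_keystream(key, nonce, nblocks):
        # Encrypt a simple binary counter under the key; the outputs form
        # the keystream, which is XORed with the plaintext to encrypt.
        ecb = AES.new(key, AES.MODE_ECB)
        for counter in range(nblocks):
            block = nonce + counter.to_bytes(4, "big")  # 12 + 4 = 16 bytes
            yield ecb.encrypt(block)

    key = get_random_bytes(32)    # the randomly selected key
    nonce = get_random_bytes(12)
    keystream = b"".join(ctr_keystream(key, nonce, 4))  # 64 bytes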

Other modes involve feeding back the output to the input and encrypting.

Some modes have issues with certain types of ciphers. If I remember correctly, DES in feedback mode does not work too well for all feedback sizes (you’d need to look them up).

And this is the rub with simple answers to simple questions: they tend to be general, not specific, and some specifics have exceptions whilst others do not, or behave differently… 8(

It is one of the reasons that engineers (and I include myself in this) need to be wary of what they are doing with cipher blocks, because a system they design might meet the test cases they can think up whilst badly failing other tests known to a domain expert (sadly there are not enough of those to go around, and it really is a case of what, not who, you know).

It is interesting to note that some scientists from related fields, when watching or being involved with an impromptu crypto discussion, have indicated that it felt like a blood sport (it’s not, but then neither do cryptographers take prisoners 😉

Witek Baryluk August 1, 2009 6:11 PM

I always thought that high-speed hardware implementations use round pipelining, where the number of rounds doesn’t really hurt performance (it only increases the number of gates linearly, because the loop over rounds is unrolled). The enc/dec bandwidth is the same (only latency and power consumption increase).

But software implementations and very constrained hardware implementations (like smartcards, where every gate matters) will have a problem (though on the other hand, increasing the number of rounds there is simpler).

So, considering this, why would anybody choose a small number of rounds? Why didn’t they just choose 32 for all of them, just for safety? Specialized hardware implementations would still be fast due to pipelining; for smartcards it doesn’t really matter, as they process just a few packets per second; and software implementations might only be about 2 times slower. This puts NIST and the NSA in a bad light again.

Roger August 1, 2009 6:36 PM

@Doubtful:

Doesn’t this only apply when AES is used as a hash?

Absolutely not; it is a related key attack, same as this one. It is a key recovery attack on the block cipher itself. There are two ways you might be getting confused here. Firstly, the attack uses the “correcting disturbance” method, which was originally invented for attacks on hash functions. However here that basic method has been ingeniously modified for use as an attack on a cipher.

Secondly, related key attacks are often too casually dismissed as being impractical, since they can be prevented by a good key management system. However the area where they become devastating is in certain constructions for turning the cipher into a hash. In that case, related key attacks on the underlying cipher have in practice been used to find collisions on the hash (e.g.: see cracking of Microsoft’s X-box.)

That does not mean, however, that the attack will “only apply when … used as a hash”. Use as a hash is simply a mode in which the attacker can easily obtain the related key conditions he wants, and the defender can do nothing to prevent it. However if your key management is not good enough, the attack may still be applicable in other circumstances.

I don’t think this applies to the way it is used in real life (e.g. TrueCrypt, PGP),

You should be very careful about saying “the way it is used in real life (e.g. TrueCrypt, PGP)”. AES is now one of the most widely deployed ciphers in the world, used in thousands of products. We have no idea how many of these might or might not be vulnerable to related key attacks. Most of these have been implemented by generalist software engineers (or in some cases, hardware engineers) who have only a basic understanding of cryptography. When a cipher is certified by the crypto community, they expect to treat it as a magic black box that takes two inputs and spits out security. In fact in some cases it may be incorporated into standards that obligate the implementers to treat it that way! So when you start adding caveats like “oh, XXX version cannot be used where YYY applies”, you have a problem.

In particular, hardware devices often have room to fit only one cryptographic primitive, so it gets reused for many things. There are certain block cipher hash constructions for which a key size that is double the block size, is just perfect. Thus if I were a betting man I would wager that there are applications out there using AES-256 as a hash.
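
To make that concrete: in a Davies-Meyer-style hash construction (named explicitly further down the thread), the message block is used as the cipher key, so whoever chooses the message chooses the key and gets related keys for free. A toy sketch, reusing toy_encrypt from the sketch earlier in the thread:

    def davies_meyer_hash(message_blocks, iv=0xCAFE):
        # Davies-Meyer: H_i = E(H_{i-1}) under key m_i, XORed with
        # H_{i-1}. The MESSAGE block is the cipher KEY here, which is
        # why message control equals key control.
        h = iv
        for m in message_blocks:
            h = toy_encrypt(h, m) ^ h
        return h

    print(hex(davies_meyer_hash([0x0001, 0x0002, 0x0003])))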

so I think hiu is actually correct.

hiu didn’t write “attacks that apply only in this particular application”, he wrote “- There are currently NO attacks on the full AES-128 or the full AES-256 (faster than brute force).” (emphasis added.) This is not correct. Such an attack is known, and was widely publicised in the relevant community just 1 month ago. hiu’s statement is a load of baloney, and since it was so easy for him to know better, it indicates that he shoots his mouth off (or his keyboard, actually) about things he has not researched very carefully.

Also, I don’t believe this attack applies to real-life usage.

Ta-da! You’re absolutely correct. But I have to point out, that is exactly what everyone else has been saying too: the attacks are not currently practical.

The problem here is that there are no attacks at all that are currently practical against any AES variant, including AES-128. Let me emphasise this: AES-128 is not just ok-ish security for the little guy whose secrets don’t really matter; AES-128 is a formidably powerful cipher, suitable for protecting national secrets against immensely capable opponents who are prepared to spend billions of dollars attacking it. So what the heck is AES-256 for if AES-128 is practically unbreakable? Well, AES-256 is for two things:

a) chumps who say “ooh, it has a bigger key, I must use that!”
If option a. applies to you, go right ahead and keep using AES-256; no problems will ensue (so long as you use good key management, and don’t use it in a hash!)

b) ultra-high security applications that are concerned about the attack capabilities of advanced national intelligence agencies many decades in the future, using computing technology we could barely dream of today.
But if option b. applies, you have a problem. You don’t know how powerful your hypothetical future enemies will be, so you chose a stupendously, ridiculously enormous extra margin of safety. It now turns out that in certain circumstances, under attack by hypothetical super-powerful future enemies, you don’t get any extra safety at all. You would actually have been better off with AES-128. That’s a problem. It begins to look as if you really have to have a good handle on exactly what attack model you are resisting before it would be sensible to choose AES-256 over AES-128.

The way TrueCrypt uses 14 round AES still requires an exhaustive brute force attack, thus it is not broken. I believe that neither of the attacks that came out in the last month apply to the way TrueCrypt and other OTFE software uses AES-256.

The attack against 11 rounds of AES-256 which Bruce announced above (but hasn’t been published yet), and the attack against all 14 rounds published last month, are both related key attacks. If TrueCrypt is not vulnerable to related key attacks, then it is ok. I have seen the bulk encryption modes of TrueCrypt, and they seem to be immune to related key attacks. I haven’t looked in detail at all the fiddly little extra bits; someone who thoroughly understands the concept will need to go over them all in detail to ensure that they are OK, but I expect they probably are.

I absolutely could not assert that this is true for all OTFE software.

If TrueCrypt were to use 11 round AES but keep all other implementation the same, I believe it would still require an exhaustive brute force attack, thus it is not broken for this application.

That is probably correct, but an exhaustive code audit by experts would be required before you could guarantee it.

My understanding of this new attack is that it requires information that cannot be obtained in real life.

This is probably true for the specific case of TrueCrypt. There are almost certainly implementations for which it is not true.

If the above is correct, then is 11 round AES actually broken?

Absolutely. If there were people using AES-256 with 11 rounds (and there aren’t, so we are here talking about pure hypotheticals), then it is likely that some of those implementations would be vulnerable to related key attacks. The attack against the 11 round version is probably already computationally feasible for some opponents, and in less than a decade it will be feasible for distributed community efforts like distributed.net. In this hypothetical parallel world in which people have deployed 11 round AES-256 applications, we would be sounding klaxons, starting code audits, issuing work-arounds, and generally recommending that everyone immediately switch to AES-128.

Back in the real world where 11 round AES-256 is a purely academic construct, the real reason we are worried about it is that it indicates work on AES-256 is progressing so rapidly that people are beginning to worry about non-academic, practical breaks emerging much sooner than the c. 10 year window it takes to revise, implement and deploy new standards.

Doubtful August 1, 2009 6:44 PM

At the risk of overposting, can anyone tell me how a related-key can apply in real life to the world of block ciphers?

Throw hashes out, as AES was not designed to be a hash, but a block cipher. Will a related-key attack ever be applicable to the real world? Because every other forum I’ve been to has dismissed these attacks as mostly irrelevant.

Does the related-key attack require some faulty implementation in the software?

Doubtful August 1, 2009 6:48 PM

Sorry, I wrote the above post before reading Roger’s post. It wasn’t meant to be an insult.

Doubtful August 1, 2009 7:09 PM

@Roger

Absolutely. If there were people using AES-256 with 11 rounds (and there aren’t, so we are here talking about pure hypotheticals), then it is likely that some of those implementations would be vulnerable to related key attacks. The attack against the 11 round version is probably already computationally feasible for some opponents, and in less than a decade it will be feasible for distributed community efforts like distributed.net. In this hypothetical parallel world in which people have deployed 11 round AES-256 applications, we would be sounding klaxons, starting code audits, issuing work-arounds, and generally recommending that everyone immediately switch to AES-128.

In this parallel world, wouldn’t it be just as easy to verify that your software is immune to related-key attacks? You’ve already stated that if AES-256 is implemented properly, as is probably the case with TrueCrypt, then the software can be immune to related-key attacks. Are you saying that it’s such a daunting task to do this that switching to AES-128 would be preferable?

And if AES-128 were preferable to AES-256, couldn’t we just use Twofish or Serpent? It seems to me these two would be preferable to AES-128 because of their 256 bit key and because they’ve both received a good amount of scrutiny (although not as much as AES).

Back in the real world where 11 round AES-256 is a purely academic construct, the real reason we are worried about it is that it indicates work on AES-256 is progressing so rapidly that people are beginning to worry about non-academic, practical breaks emerging much sooner than the c. 10 year window it takes to revise, implement and deploy new standards.

Are you sure that the worry is due to fast progress? It could be entirely coincidental that there was a release of two papers within one month of each other with related-key attacks, the latter being much better than the former. It’s entirely possible that we could go for years without a better attack. It’s been more than a decade since Rijndael was released for public review. It’s entirely possible this recent flurry of activity is coincidental. I think the timing of these two attacks could be as big a factor as anything else.

Brucefan August 1, 2009 10:44 PM

Hiu is a fool for sure, look what nonsense he stated in that truecrypt forum:

A single round of AES256 is as strong as a single round of AES128.

I guess he doesn’t understand key diffusion.

Makes me not want to ever consider TrueCrypt if it has such an ignoramus working on its code.

Doubtful August 1, 2009 11:07 PM

@Brucefan

Makes me not want to ever consider TrueCrypt if it has such an ignoramus working on its code.

I don’t think he’s involved with TrueCrypt. He’s just a member (with a bit of a mean streak), as far as I can tell. I agree that the way he talks to people leaves something to be desired. That style is okay if you’re never wrong. But you’re leaving yourself open for a major thumping if you even make a small mistake.

Bruce Schneier August 2, 2009 1:50 AM

“Since the attacks have been against the AES key schedule, perhaps we should be considering key schedule improvements instead of raw increases in the number of rounds.”

Increasing rounds is quick and easy. Any more complicated change will require actual cryptanalysis, and will take much much longer. If we’re going through that much work, we might as well take advantage of another decade of cryptographic research and choose a new algorithm altogether.

Bruce Schneier August 2, 2009 1:51 AM

“If I understand this right, because it’s a related-key attack, it’s not useful for attacking a single encrypted text like a TrueCrypt volume.”

Correct. That and the attack doesn’t work against full 14-round AES-256, or against AES-128 at all.

Bruce Schneier August 2, 2009 1:53 AM

“Saying that a 256-bit key is harder to brute-force is like saying that it’s harder to swim the Pacific than the Atlantic.”

Agreed, but cryptographers compare completely impractical attacks all the time. (There is a school of thought that agrees with you, though.)

Bruce Schneier August 2, 2009 2:01 AM

“Here’s a thread on this on the TrueCrypt forum. A guy called “Hiu” over there is really laying into Bruce’s coverage of this topic.”

He seems to be misrepresenting a lot of what I wrote above. I tried to write the post to prevent panic, not foment it.

Bruce Schneier August 2, 2009 2:03 AM

“This thread is unnecessarily alarmist. Bruce criticizes people all the time for the exact same type of thing he’s doing here. 11 rounds are broken (though still secure for now) using an impractical attack. 12, 13, and 14 rounds are still unbroken.”

I sure hope it’s not. I’m not trying to be alarmist.

We cryptographers like to change things slowly and deliberately, before there is cause for alarm. It’s much better that way.

Bruce Schneier August 2, 2009 2:05 AM

“I guess I don’t agree with the recommendation to increase the number of rounds in the AES. The AES is still fine, as long as you use it properly. These attacks only apply in a related-key threat model. But for most applications of AES (e.g., for encryption), as long as you use AES properly, related-key attacks are not possible. This does require designing your protocol to avoid introducing the opportunity for related-key attacks, but that is a solved problem these days.”

I agree with your analysis, but think that a simple change is prudent. Think of it as analogous to the change from SHA to SHA-1: a quick change made in response to a potential future weakness. It was better to make that change before it was required.

aes August 2, 2009 3:04 AM

Think of it as analogous to the change from SHA to SHA-1: a quick change made in response to a potential future weakness.

The change from SHA-0 to SHA-1 was quick, but it required a radical intervention into the algorithm’s structure itself, not just increasing the nr. of rounds. The exact thing you are arguing against in your previous comments.

Bruce Schneier August 2, 2009 9:14 AM

“The change from SHA-0 to SHA-1 was quick, but it required a radical intervention into the algorithm’s structure itself, not just increasing the nr. of rounds. The exact thing you are arguing against in your previous comments.”

Exactly. This is much, much easier than even that.

Bruce Schneier August 2, 2009 9:16 AM

“Are you sure that the worry is due to fast progress? It could be entirely coincidental that there was a release of two papers within one month of each other with related-key attacks, the latter being much better than the former. It’s entirely possible that we could go for years without a better attack. It’s been more than a decade since Rijndael was released for public review. It’s entirely possible this recent flurry of activity is coincidental. I think the timing of these two attacks could be as big a factor as anything else.”

Well, it’s the same team — basically — behind the three papers, so it’s not a coincidence. What we’re seeing is a new vector of attack being exploited. I don’t think we’ve reached the end of the potential for this new attack vector.

Bruce Schneier August 2, 2009 9:20 AM

“And if AES-128 were preferable to AES-256, couldn’t we just use Twofish or Serpent? It seems to me these two would be preferable to AES-128….”

There is a lot of benefit to having and using a single standard. I don’t think we should abandon AES in favor of other algorithms.

buherator August 2, 2009 10:45 AM

Thanks for the post, it was really informative!

Do you think that this could possibly affect 802.11i (WPA2) systems in the future?

Doubtful August 2, 2009 11:36 AM

@Bruce Schneier

There is a lot of benefit to having and using a single standard. I don’t think we should abandon AES in favor of other algorithms.

I know people a lot better than I know cryptography, and I think it’s going to be a very tough sell to get people to adopt AES-128. This is going to seem like a downgrade to a lot of people, even if they don’t really understand the science behind it. If the decision is based on consumer demand at all, I would be surprised to see anywhere near a universal adoption of this. I don’t speak for the TrueCrypt developers, but I have a hard time believing that they will offer AES-128 as a single cipher (i.e. not as part of a cascade). They may decide to drop AES-256 (or not- I have no idea) and only provide it as legacy. But, if they do that, my guess is they would simply stick with Twofish and Serpent.

The reason I say this is because the people on the TrueCrypt forums who have some knowledge of cryptography (but not an expert knowledge) are going to see that they were using 256 bit keys before. A 128 bit key will be unacceptable to a lot of these people.

Also, I wish that everyone wasn’t talking on different forums, taking entirely different perspectives. A debate would really help people like me understand this better. Since I’m not an expert, I’ll post something that Justin Troutman (a cryptography expert) said at Wilders Security:

These related-key attacks, in a nutshell, assume that an adversary has access to different plaintexts encrypted under different, related, keys, which can be ruled out if you do things right. From the way it looks now, the attacks would apply if the AES were used as a hash function, in Davies-Meyer mode, for example, which is essentially the construct on which MD5, SHA-1, and SHA-2 are based. However, we don’t need to use the AES this way, because we already have dedicated hash functions that meet security requirements for which the AES was never intended.

We use the AES for purposes like message encryption and message authentication, where modes, such as CTR (i.e., encryption for confidentiality) and CMAC (i.e., authentication for integrity) assume the AES to be a PRP; this is still just fine. On the other hand, if we expect the AES to behave like an ideal block cipher, as might be assumed in Davies-Meyer mode, then we might have a problem. So, TrueCrypt and PGP aren’t susceptible. Furthermore, I can’t think of any application that is.

aes August 2, 2009 12:48 PM

Exactly. This is much, much easier than even that.

No, no, I mean when you argued that changing the key schedule was too radical:
“Fixing the 256-bit key schedule would be nice, but that’s a more complicated change.”

If NSA dared to change the structure of SHA-0, why shouldn’t we change the key schedule of AES-192 and AES-256?
If the NSA had just increased the nr. of rounds (with no added shifts), SHA-1 would probably have been much weaker than it is now. If I remember correctly, Biham and Chen showed in 2004 that SHA-0 with a larger nr. of rounds is weaker than with the default 80 rounds.

Clive Robinson August 2, 2009 2:45 PM

@ Bruce,

One of the problems so far is that most of the issues and answers have been given in sound bites, in ways that “Joe Public” is unfamiliar with, and in a disconnected fashion.

So if you could do a quick FAQ-type Q&A for “Joe Public”, it might take a lot of the wind out of the arguments etc.

As I see it the following “Joe Public” questions have been or will be asked but due to various reasons the answers have not been given or are not getting through clearly.

So my attempt at Qs and some As is,

The No.1 question for “Joe Public” is,

“Is it broken?”

With “broken” taken in the normal practical human meaning, not the more specialized meaning used by cryptographers.

The answer from your comments is a little long, but in essence is,

1.A) Because this attack is based on the part of AES256 that expands the key for use, which in AES256 is significantly different from AES128’s, AES128 is, as far as is currently known, completely unaffected by this attack.

1.B) Further, AES256 is not broken in the normal sense either, because this attack only partially extends previous attacks and there is still a reasonable margin of safety yet to be used up.

Which leads onto question No.2 by “Joe Public”,

“how soon before it will be broken?”

The answer to which is,

2.A) For most uses of AES256 this attack currently will not lead to a break.

2.B) Nor is it likely to in the near future, except perhaps for some specialized uses of AES. And that is very dependent on whether the attack can be extended to all the current rounds (doubtful), and even then only with selected texts under related keys.

Which leads onto Joe Public question No.3,

“How many selected texts and how are the keys related?”

These are the questions that have so far not been answered in a way that Joe Public can easily comprehend. So can the answers be put in a way that can be?

3.) ???

Joe Public has noticed that various people have said that the “related keys” problem has been solved, but no further information has been forthcoming. So, Joe Public question No.4,

“what are the methods that stop related keys being used?”

4.) ???

And question No.5,

“On the assumption that the attack can be extended to all the rounds, will the attack actually be practical to use in the normal sense of the word?”

5.) ???

John August 2, 2009 6:25 PM

Well, we use a program that uses three different encryption methods, with a different passphrase for each, encrypting each encrypted block again with the next method. I am not an expert on encryption, but it seemed to be a good idea to multi-encrypt our data this way. Thanks to this discussion, I changed the methods from 2Fish/AES/RC6 to 2Fish/AES/Serpent.

BW August 3, 2009 4:45 AM

Maybe the NSA wanted to read the rest of the government’s documents; it was adopted, after all, post-9/11 during the Bush years. If the NSA could get Bush to approve warrantless wiretaps, don’t you think they could get him to allow a broken encryption algorithm to be used, as long as it let us read everyone else’s stuff? The logic is so screwed up that it just might have happened.

Nathan Russell August 3, 2009 3:29 PM

Bruce, does it still make sense to use AES-256 because of concerns about Grover’s Algorithm? Or is that far too theoretical and long-term to be a concern?

Bruce Schneier August 3, 2009 5:44 PM

“If NSA dared to change the structure of SHA-0, why shouldn’t we change the key schedule of AES-192 and AES-256?”

Presumably the NSA had a large internal review process before they made that change. For us out in the academic and public world, such a review process would be hard. It’s easier — and just as effective — to increase the number of rounds.

Uri August 17, 2009 11:28 AM

The problem with AES is caused by its cryptographically weak key schedule. IMHO the correct (though at this time impractical) solution is to employ a key schedule algorithm that obfuscates the relationship between the bits of the round subkeys.

This was discussed back in the 1990s; see “Key schedule” on Wikipedia.

Uri August 17, 2009 11:37 AM

Wanted to add: cipher designers face a dilemma, good key agility vs. an elaborate but secure key schedule. Invariably they choose the faster and simpler key schedule over a slower but more secure alternative.

Justin Wells August 21, 2009 11:42 AM

Doesn’t the NSA recommend that AES be used in GCM mode? I am not an expert so someone else needs to clarify, but isn’t GCM one of those modes that uses AES only as a random generator, and so not susceptible to this attack?

I guess the big question, for those of us who liked what we thought was the margin of safety provided by AES256 over AES128 (now apparently gone), is: how do we get our margin of safety back?

Up thread someone suggested that doing a SHA256 hash on the key before using it with AES would resolve the problem. Is that true?

If I stick with AES256 in GCM mode and I SHA256 the key before passing it to the cipher, am I safe from this attack?

dave October 11, 2009 3:15 AM

In other words, AES256 is still good for disk encryption using a single key (such as TrueCrypt), and AES128 is better for a stream cipher such as the one implemented in OpenSSH.

“Will the use of CBC reduce the effectiveness of these attacks?” In OpenSSH, using CBC is a bad idea [google: openssh cbc exploit]. CTR (counter mode) is the preferred mode for OpenSSH; I don’t know if it is for other applications. And I know CBC without the use of ESSIV in disk encryption is vulnerable to attacks such as watermarking.

So the best configuration for OpenSSH is: Ciphers aes128-ctr; MACs hmac-ripemd160 (since SHA-1 is broken, and your other options are MD5 and even “more broken” algorithms).

Max Sala October 19, 2009 1:43 AM

@John at August 2, 2009 6:25 PM
“Well, we use a program that uses three different encryption methods, with a different passphrase for each, encrypting each encrypted block again with the next method. I am not an expert on encryption, but it seemed to be a good idea to multi-encrypt our data this way. Thanks to this discussion, I changed the methods from 2Fish/AES/RC6 to 2Fish/AES/Serpent.”

Serpent is very similar to AES; in fact they’re both translation-based ciphers under my restricted definition [1]. If AES is going down the sink, Serpent (and PRESENT) will likely follow. However, I don’t think AES is under any threat from a practical attack, not now and not in the near future.

Max
[1]-> arXiv:0806.4135 (to appear in AAECC)

CW January 5, 2010 7:52 AM

Unfortunately, additional rounds require additional round keys, and there is a limit on the number of round keys that can be expanded from the seed key, based on the number of round constants defined in the spec.

If you try to expand the number of rounds to those suggested, the spec breaks because there are not enough round constants to do so.

CW January 5, 2010 7:38 PM

The max number of rounds for AES, based on the key expansion method and round constants, seems to be the integer part of

((KeySizeInBits/128) * 11) – 1
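
Taking CW’s formula at face value (it is the commenter’s claim about the round-constant table, not a quote from FIPS-197), the limits per key size work out as follows:

    # Evaluate CW's claimed bound against the standard round counts.
    for bits in (128, 192, 256):
        standard = 6 + bits // 32               # 10, 12, 14 rounds
        claimed_max = int((bits / 128) * 11 - 1)
        print(f"AES-{bits}: standard {standard}, claimed max {claimed_max}")
    # AES-128: standard 10, claimed max 10
    # AES-192: standard 12, claimed max 15
    # AES-256: standard 14, claimed max 21

On this reading, the 16/20/28 round counts suggested in the post would indeed require extending the round-constant table, not just looping more.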

Mike February 24, 2010 1:43 PM

The issue here is time and compute power. This places breaking the key within the reach of private parties with a reasonable amount of computational horsepower, no longer the domain of governments alone. This is a bigger problem than suggested and it cannot be ignored.

Sandy Harris April 17, 2010 6:43 AM

This attack works against AES-256 but not AES-128, because, as Bruce puts it, the AES-256 key schedule is “pretty lousy”.

So is this a solution, given a 256-bit key? Set up AES-128 and run its key schedule with 128 of the key bits. Use the other 128 to initialise a counter. Encrypt that counter with AES-128 to generate a set of round keys. Use those for the AES-256 encryption.

This might be too expensive, of course. Also, you end up with the round keys being related, but that’s true for any key schedule.
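
A sketch of Sandy’s derivation step (my reading of the proposal, not a vetted construction; pycryptodome assumed, as in earlier sketches). AES-256’s 14 rounds consume 15 round keys of 16 bytes each:

    from Crypto.Cipher import AES

    def derive_round_keys(key256):
        # key256 is 32 bytes: half drives AES-128, half seeds a counter.
        k, ctr0 = key256[:16], int.from_bytes(key256[16:], "big")
        ecb = AES.new(k, AES.MODE_ECB)
        return [ecb.encrypt(((ctr0 + i) % (1 << 128)).to_bytes(16, "big"))
                for i in range(15)]  # 15 x 16 bytes = 240 bytes

The derived keys would then have to be fed to an AES-256 implementation that accepts an externally supplied round-key array, which standard libraries do not expose; that is part of the expense Sandy concedes.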

See http://eprint.iacr.org/2008/473 for another approach to deriving a stronger 256-bit cipher from AES-128.

Clive Robinson April 17, 2010 11:22 PM

@ Sandy Harris,

“This might be too expensive, of course. Also, you end up with the round keys being related, but that’s true for any key schedule.”

Hmm related at what level…

Take a block cipher like TEA, AES or DES and throw the key expansion for the rounds away (it’s inefficient in DES and broken in AES256 anyway ;).

Replace it with a stream cipher such as ARC4, where you map part of the ARC4 S-array into the block cipher where the expanded key would have gone.

Then use a BBS generator to give you an additional value to add in when updating the J pointer in the ARC4 update function.
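
A toy rendering of that generator (my reading of Clive’s description, not his code; the BBS modulus here is absurdly small and for illustration only):

    def rc4_bbs_keystream(key, bbs_seed, n):
        # Standard ARC4 key schedule.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Blum Blum Shub: x -> x^2 mod M, with M = p*q and p, q = 3 mod 4.
        M = 499 * 547                    # toy modulus, NOT secure
        x = bbs_seed % M
        i = j = 0
        out = bytearray()
        for _ in range(n):
            x = (x * x) % M
            i = (i + 1) % 256
            j = (j + S[i] + x) % 256     # BBS output folded into the J pointer
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    # e.g. 176 bytes could stand in for the 11 round keys of AES-128:
    round_key_material = rc4_bbs_keystream(b"some key", 123457, 176)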

I came to this odd arrangement in reverse back around ’95, when looking for a way to improve on an existing design I’d done to generate many, many keys using a True Random Number Generator (TRNG). The original idea used a card shuffling algorithm not dissimilar to ARC4 to spread the entropy of the TRNG across a 1 Kbyte state array. The ARC4 algorithm had been let out of the bag back in late ’94, and as it had been “blessed” by various crypto experts I decided to use it. I modified it by adding the TRNG output as an additional offset to the J pointer. Part of the ARC4 state array was passed to a modified DES block cipher that had already been written such that the key expansion was done separately (and passed as a pointer to an array holding the expanded keys for the DES rounds). I modified the design again in late ’98 to use the BBS generator instead of the TRNG to make a “software only” version.

As far as I could tell at the time the output from the ARC4+BBS generator passed all the standard tests.

Frank July 8, 2010 4:21 PM

Bruce,

Can you comment on this article? — http://eprint.iacr.org/2009/531

Or the better question is does it in any way change your opinion of using AES-128 over AES-256?

Please forgive me, I am not in the cryptography field; I’m just trying to make an informed decision on the best cipher to use.

Rob August 6, 2010 7:29 AM

Practical attacks vs. non-practical attacks, to panic or not to panic… OK, reality for you:

Your driver’s license number expires in 8 years. Your credit card number will change in 4 years. Your social security number will be useful for, we hope, around 120 years (being generous here).

So any data of mine containing those kinds of items has to be protected for a shelf life of those time frames.

There are very few who have need of much more than that.

Even as far as the country goes, there are few “secrets” that need to be protected for more than a single man’s lifetime.

Now the other side of the equation.

This is the theoretical side, where reputations are made, where neither the data nor the algorithm protecting the data is important. The name of the guy who wrote the algorithm, or the attack, is the important thing here.

This is where encryption must last longer than the established timeline for the end of the solar system, preferably the galaxy…

So which is right? BOTH.

Just as practical science needs its theorists, so we have the same issues here.

Just my thoughts.

smu johnson August 11, 2010 2:00 AM

@ Bruce Schneier at August 2, 2009 9:20 AM
“There is a lot of benefit to having and using a single standard. I don’t think we should abandon AES in favor of other algorithms.”

This is a great debate right here. I would very much like to hear the reasons why we should still stick with AES despite all this worry about it being cracked in the upcoming months/years. I’m no expert, but it seems like a no-brainer for me to pick Serpent or Twofish over AES and 1) not have to worry about any of this lousy round implementation stuff, and 2) not have to check the “Is AES broken?” blog pages every month. To me it seems that life is too short to stick with AES on this recommendation and continue to worry about a break coming around the corner.

On the other hand… it could be that picking a standard (like AES) is best because it will get more attention and flaws will likely be discovered sooner. But by the same token, this seems like a good reason to use something else. If the non-standardized AES finalist candidates draw less cryptanalysis attention, and are already considered by experts to have more of a security margin, why not just use them instead? It seems that my AES-encrypted files will stop being secure far sooner by sticking with this “use the standard” paradigm.

To me, it seems that if a “cipher break” time race took place, AES might break before the Twofish or Serpent ciphers would. If that’s true, why the insistence on faithfully sticking with a standard?

I’m asking sincerely, as I am not an expert, and this very question has haunted me for the last month or two.

Thanks for reading, I hope to hear a reply.

Clive Robinson August 11, 2010 2:46 AM

@ Smu Johnson,

“To me, it seems that if a “cipher break” time race took place, the AES might break before Twofish or Serpent ciphers would. If that’s true, why the insistance on faithfully sticking with a standard?”

I was once told,

There are two winners in every race: the one who came in first, and the one selling the tickets. And as we know, there is always going to be someone who beats the guy who came in first, next time or the time after, but the guy selling the tickets wins every time…

The simple fact is AES won the pentathlon but has already been beaten in each individual discipline. It’s an all-rounder, not the fastest sprinter etc. And history tells us at some point it will fail to be the best all-rounder, lose its title and eventually retire.

This may be upsetting for some of the supporters, but that’s the way life is.

But remember the second guy: he does not care, he’s going to keep on selling tickets as long as people want to buy them.

It is why I say the likes of NIST should up their game from putting all their weight behind the winner and should “own the stadium”: that is, put in place a standard framework with plug-in modules.

Thus if AES is for whatever reason not suitable, you can use an alternative just by using a different plug-in.

Also, more importantly, if the AES plug-in you are using is found to be deficient for some reason (as has happened with “timing attacks”), you can swap it out for one that is not.

The problem with software is the “built-in” mentality in small and embedded systems such as utility meters: they can have a lifetime of 25 years or more, and the devices are worth only a fraction of the cost of installing them.

Thus the security of the utility network depends on the weakest link: if an adversary can find the AES key because of an implementation fault in an old meter, the network is compromised at that point for as long as the faulty implementation remains in use.

If, however, an appropriate framework standard is in place, utility meter manufacturers will work to the standard, as otherwise they significantly limit their market potential.

Thus getting a good framework is way more important than getting a good algorithm.

Pierre November 6, 2010 8:08 AM

“Cryptography is all about safety margins”

It explains why everybody is telling the (stupid) story about how to hide the forest with a couple of trees.

Cryptography should rather be about delivering Shannon’s “perfect security”.

I can’t believe that people who dedicated their whole life to the subject can’t see how the unicity principle translates into far more viable methods than the one-time pad.

If the world is in love with the concept of security by obscurity this can only be to happily practice shoulder-surfing.

This is just sad.

get your facts right November 7, 2010 8:59 PM

“The Bureau of Information Resource Management’s Radio Programs Branch
(IRM/OPS/ITI/LWS/RPB) provides all overseas missions two-way radios
equipped with Digital Encryption Standard (DES) or Advance Encryption
Standard (AES). These encryption algorithms provide limited protection
from unauthorized interception of voice communications and are only
approved for the transmission of Department of State Sensitive But
Unclassified (SBU) and Department of Defense For Official Use Only (FOUO)
communications. Under no circumstances should DES- or AES-equipped
radios be used for the transmission of classified information, as defined by
Executive Order 12958.”

http://www.state.gov/documents/organization/89272.pdf

Alan Taylor November 9, 2010 1:23 PM

It is understood that the NSA has approved AES 256 for use with TOP SECRET traffic. This only applies to NSA prior approved applications and cipher operating modes. The NSA will usually supply the approved cipher, and in the case of “enhanced” AES 256 this includes enhanced key schedules and management. The AES 256 enhanced algorithm supplied by the NSA is not available to the commercial sector.

The majority of US State Department equipment (laptops, radios, satellite links, etc.) should have NSA-approved embedded encryption technology. This technology is hardware based, not software based, and takes the form of NSA-approved chipsets.

NSA approved chipsets are manufactured and supplied by NSA approved manufacturers and suppliers. One of the most prominent suppliers of NSA cipher chip sets is Harris RF Communications 1680 University Avenue, Rochester NY.

Harris Sierra II chipsets are supplied with NSA Suite A and Suite B cipher algorithms. The Sierra II chipset can be embedded into a variety of communications equipment and terminals. (See attached PDF.)

In addition, Harris supplies NSA-approved SECNET wireless interface modules for travelling diplomats. The chipset can be programmed with additional NSA classified cipher algorithms depending upon usage/deployment and application.

The data over at Cryptome looks a little dated. I would suspect that some of the equipment did not conform to NSA cryptographic requirements.

Concerning your comments about the security of AES, I will not speculate on this. However, it is common knowledge that in terms of outright security the SERPENT 32-round block cipher was rated higher than AES during the selection process. However, AES prevailed as it was deemed faster and more easily integrated into smart card applications.

Alan Taylor
PGPBOARD Administrator
London, England

David Oftedal November 29, 2010 4:07 AM

So this seems to be getting some attention:

http://eprint.iacr.org/2010/594

“In this paper we analyze the case of AES and present an attack which is capable of recovering the full secret key in almost realtime for AES-128, requiring only a very limited number of observed encryptions. Unlike most other attacks, ours neither needs to know the ciphertext, nor does it need to know any information about the plaintext (such as its distribution, etc.). Moreover, for the first time we also show how the plaintext can be recovered without having access to the ciphertext. Further, our spy process can be run under an unprivileged user account.”

Time to go back to ROT-13?

Clive Robinson November 29, 2010 6:25 AM

@ David Oftedal,

“In this paper we analyze the case of AES and present an attack which is capable of recovering the full secret key in almost realtime for AES-128…”

The fact that AES is susceptible to side-channel cache attacks has been known since before it was certified (it’s why I preferred another European candidate).

This particular attack is good (very good in some respects) and, as Bruce has noted in the past, attacks only get better with time.

Speaking of “things noted in the past”, I have been saying for quite some time now that the two big areas of attack are going to be,

1, Side Channel attacks.
2, Known plaintext attacks.

The only real solution to (1), side channel attacks, is, as has been known for over 60 years, “don’t use cipher systems online”.

As the above statement has caused problems with readers on the blog in the past, I will say that by “online” I mean a machine that is coupled in some way to an external monitorable point, either whilst using the cipher or (as could be done with the spyware) at some later point in time, such that any info obtained whilst “monitoring” the use of the cipher is leaked out.

Practically, this means using a fully isolated (offline) machine that is only used to encipher/decipher, with only the ciphertext files copied to an “online” system for TX/RX.

As for (2), known plaintext attacks can be used against large volumes of “data at rest”, such as encrypted hard drives, where poor design allows “attacks in depth”, or where the system is effectively ECB, or approximates some other weak system like a stream cipher with key re-use.

Coyote March 3, 2011 7:34 AM

Thanks a lot for this paper, very interesting.
Just a question now: what’s the state of attacks on AES nowadays? Still the same as in 2009? Do you still recommend the use of AES-128?

john August 4, 2011 1:27 AM

I would like to know which of TWOFISH and AES is the more secure algorithm against these attacks, and also whether Blowfish can be included in that comparison.

Andy August 4, 2011 2:57 AM

an = angle

buf[0+an] = a[0]
buf[4+an] = t[0]
buf[1+an] = a[1]
buf[5+an] = t[1]
…16
buf[0+an] = a[0]
buf[4+an] = t[0]
buf[1+an] = a[1]
buf[5+an] = t[1]
…32
buf[0+an] = a[0]
buf[4+an] = t[0]
buf[1+an] = a[1]
buf[5+an] = t[1]
…64
buf[0+an] = a[0]
buf[4+an] = t[0]
buf[1+an] = a[1]
buf[5+an] = t[1]
…128…up say 4 bil, max limit

[0][1][2][3][4][5]
[0] [1] [2]
1&1 = 0
0&1 = 1,

link,nonlink

Andy August 4, 2011 3:02 AM

Tin foil hat stuff: it can also predict the future. You don’t need to know the values of the vars, just how many vars there are, and do a brute-force run. If you change one of the values, say from zero to one (Monday), and it rained on Monday, that var is the value of Monday… at 12 o’clock change another one to zero or whatnot, and at the bottom of the tree, yes or no.

Noob October 16, 2011 9:31 AM

Hi everyone, I’ve been reading all the posts and it’s confusing to me. This might be a silly question, but I am currently using TrueCrypt with a 20-character password. Is that safe against cracking? What if I increase it to 50 characters, would that make any difference? It’s going to be tough trying to remember it though 🙁 Thanks if anyone can answer.

Clive Robinson October 16, 2011 12:32 PM

@ noob,

“… with a 20-character password. Is that safe against cracking? What if I increase it to 50 characters”

It’s a simple enough question but the answer is hard.

The usual argument is that password cracking is done by “brute force”: starting from A-B, AA-BB, … and stepping through every single combination of upper case, lower case, numbers and valid punctuation characters until the password or phrase is found.

If that were the case you would have something like 70^20 passwords to go through, which would give you a reasonable level of security.

However the human brain is weak and most people are hard pushed to remember a purely random string of more than 7 numbers or five or six alpha chars.

Thus people use some “clever” method to make a long phrase memorable. The problem with this is that it can reduce the entropy value of each character immensely: instead of each char having around 6.1 bits of entropy, giving your 20-char passphrase around 122 bits, it can fall to as little as 1.4 bits per char, giving you as little as 28 bits of entropy.
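
The arithmetic behind those figures (the 70-character alphabet and the 1.4 bits/char floor are Clive’s assumptions, not measured values):

    import math

    alphabet = 70   # upper + lower + digits + punctuation (assumed)
    length = 20

    random_bits = length * math.log2(alphabet)  # fully random password
    patterned_bits = length * 1.4               # low-entropy estimate

    print(f"random:    {random_bits:.1f} bits")     # ~122.6 bits
    print(f"patterned: {patterned_bits:.1f} bits")  # 28.0 bits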

Thus the important phrase here is “can reduce”: if your potential enemy knows how you are remembering your passphrase, they can use the method to reduce the search space accordingly.

So my advice is to either find your own personal way to remember a 20-char password that is random or apparently random, or take Bruce’s standard advice about writing it down on a piece of paper you keep in your wallet. Better still, find some way to combine the two methods in some “unique to you” way.

Alan Taylor February 4, 2012 8:36 AM

This is such an informative thread. Concerning the NSA’s use of AES for “Top Secret” traffic. Be aware that the NSA uses an enhanced version of AES for NSA prior approved applications, and cipher operating modes. The NSA will usually supply the approved cipher, and in the case of “enhanced” AES 256 this includes enhanced key schedules and management. The AES 256 enhanced algorithm supplied by the NSA is not available to the commercial sector.

See: http://www.pgpboard.com/viewtopic.php?f=2&t=206&p=346

Regards
AT

Chris December 26, 2013 4:34 AM

Given that the attack on AES-192 requires 2^176 time, doesn’t this still make it more secure than the 2^128 time required to attack AES-128?

Art January 4, 2015 4:14 AM

Funny it’s 2015 now. The original blog was in 2009. 6 years later AES isn’t any less secure than it was 6 years ago.

There are still no known practical attacks that can break AES, and it’s universally agreed AES 256 is stronger.

Clive Robinson January 4, 2015 10:18 AM

@ Art,

There is still no known practical attacks that can break AES.

Err, that is badly worded. Whilst there are no practical attacks publicly known against the AES algorithm, the same is not true of practical implementations. As for what the various IC agencies might additionally know, your guess is as good as most others’.

Art January 5, 2015 2:45 AM

@Clive Robinson

IC agencies are not smarter than the entire human intelligence and academic world combined. Given there are still no practical attacks against AES from the academic world, it’s safe to assume the IC agencies don’t have one either. IC agencies are not aliens with different brains than the entire academic world combined.

The original 2009 blog entry was alarmist. Related-key attacks are no concern for properly implemented software. Back in 2009, dozens of people posted on various boards claiming that Schneier says 128-bit AES is better than 256-bit AES.

The fact is that I am writing right now in 2015, six years later, and there has been zero progress in showing AES is anywhere even close to being broken.

What progress has been made in 6 years that weakens AES in any practical way? Nothing. Zero. Well, except the biclique attack, which requires 2^88 bits of data (38 trillion terabytes) and weakens AES-128 to a still impractical 2^126. Even that attack, given the amount of data required, is probably slower in practice than brute force.

So no progress whatsoever since this blog post.

“Attacks always get stronger”, but apparently sometimes they just get stuck.

Normand Martel May 9, 2015 8:33 PM

I really like the idea of keeping the same algorithm but increasing the number of rounds. I’ve written AES in MPLAB, and increasing the number of rounds only requires minor changes and adding a few returns for Rcon. AES/Rijndael is already very fast, and I sincerely agree that adding rounds is a very viable compromise, considering that AES might then attain robustness levels approaching Serpent and Twofish.

Art November 3, 2015 8:23 PM

Normand Martel, wrong, dude. Ignore this alarmist 6-year-old blog post by Schneier. There is not one iota of evidence AES 256 is weaker than AES 128. The related-key attacks are irrelevant, as properly designed software doesn’t use related keys.

By the way, there is no reason to believe the 14 round AES 256 is in any danger of being broken for the next several hundred years, if ever.

Sebastian June 2, 2019 7:18 AM

I’d give my left nut to see Bruce answer some of the latter-day comments on this thread; there are some killer points and questions.
