New Attack on AES

“Biclique Cryptanalysis of the Full AES,” by Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger.

Abstract. Since Rijndael was chosen as the Advanced Encryption Standard, improving upon 7-round attacks on the 128-bit key variant or upon 8-round attacks on the 192/256-bit key variants has been one of the most difficult challenges in the cryptanalysis of block ciphers for more than a decade. In this paper we present a novel technique of block cipher cryptanalysis with bicliques, which leads to the following results:

  • The first key recovery attack on the full AES-128 with computational complexity 2^126.1.
  • The first key recovery attack on the full AES-192 with computational complexity 2^189.7.
  • The first key recovery attack on the full AES-256 with computational complexity 2^254.4.
  • Attacks with lower complexity on the reduced-round versions of AES not considered before, including an attack on 8-round AES-128 with complexity 2^124.9.
  • Preimage attacks on compression functions based on the full AES versions.

In contrast to most shortcut attacks on AES variants, we do not need to assume related-keys. Most of our attacks only need a very small part of the codebook and have small memory requirements, and are practically verified to a large extent. As our attacks are of high computational complexity, they do not threaten the practical use of AES in any way.
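
To put those complexities in perspective, simple arithmetic on the numbers quoted above gives the speedup over brute force:

    \[ 2^{128} / 2^{126.1} = 2^{1.9} \approx 3.7, \qquad 2^{192} / 2^{189.7} = 2^{2.3} \approx 4.9, \qquad 2^{256} / 2^{254.4} = 2^{1.6} \approx 3.0 \]

In other words, the biclique attack is only about three to five times faster than exhaustive key search.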

This is what I wrote about AES in 2009. I still agree with my advice:

Cryptography is all about safety margins. If you can break n rounds of a cipher, you design it with 2n or 3n rounds. What we’re learning is that the safety margin of AES is much less than previously believed. And while there is no reason to scrap AES in favor of another algorithm, NIST should increase the number of rounds of all three AES variants. At this point, I suggest AES-128 at 16 rounds, AES-192 at 20 rounds, and AES-256 at 28 rounds. Or maybe even more; we don’t want to be revising the standard again and again.

And for new applications I suggest that people don’t use AES-256. AES-128 provides more than enough security margin for the foreseeable future. But if you’re already using AES-256, there’s no reason to change.

The advice about AES-256 was because of a 2009 attack, not this result.

Again, I repeat the saying I’ve heard came from inside the NSA: “Attacks always get better; they never get worse.”

Posted on August 18, 2011 at 6:12 AM • 74 Comments

Comments

ChristianO August 18, 2011 7:38 AM

@Paeniteo

Is there a proof that Triple-AES, done like Triple-DES, would improve the strength of the encryption?

jake August 18, 2011 7:45 AM

is twofish vulnerable to the same attack?

it’s a shame that support for twofish is so limited amongst crypto software suites. i do use it with bestcrypt’s BCVE.

jmdesp August 18, 2011 7:45 AM

As some have noted, the attack requires a 2^88 indexed data space for the storage of bicliques. I don’t know how many accesses to it you need to do per tested key, but it sounds like in some scenarios the time realistically needed to access it makes the complexity gain not so useful anymore. In this regard, I think it’s interesting to remember that an important part of the RSA-768 attack was to do more calculations initially in exchange for a smaller data space in the final step.

I’ll wait for the opinion of better cryptographers than me, but this is one more reason to think that this attack, however much of a mathematical breakthrough it is, falls on the most theoretical side of theoretical attacks.

Louis August 18, 2011 7:50 AM

In my mind, encryption should always follow the most strenuous demands technology can offer, thus pushing the limit every day.

An encryption data standard should be as hard to attack today [on present-day equipment] as it is now easy to attack yesterday’s [with the same equipment].

The nice thing about the above paragraph is that I can state it again in 3-5 years [or two?] from now, when there will be a debate about AES being broken and a new standard being recommended/sought

Furthermore, maybe we could have “year suffixes” on standards to imply the necessary bracing for the times, i.e. AES-2012, which could eventually be broken in 2015, by which time we will be using AES-2014. (One can replace “AES” with whatever encryption TLA.)

In such a context, as a security officer, I would recommend 12-random-character passwords on ZIP files using AES-2012 for secrets that must last two years, but would crank it up to 24 random characters using AES-2012 for secrets that should last, say, 5 years.
(PS. I know increasing character count is not the answer to everything, but it does help)

Paeniteo August 18, 2011 8:03 AM

@ChristianO: I would have to look it up in Applied Cryptography to get the exact security properties of triple encryption right, but AFAIR they are general statements and not at all limited to DES.

It is just so that practically nobody has bothered to use triple encryption with anything other than DES, since all newer ciphers have keylengths that should make such a thing unnecessary.

phred14 August 18, 2011 8:29 AM

@Paeniteo: I thought the reason for Triple-DES was that some peculiarity of DES made Double-DES worse, or at least no better than DES. Was the need for tripling indeed specific to DES, so that Double-AES would confer benefit, or was the need for tripling generic?

x August 18, 2011 8:31 AM

“As our attacks are of high computational complexity, they do not threaten the practical use of AES in any way.”

Until quantum computers are so commonplace that crooks can also obtain one.

Bruce Schneier August 18, 2011 8:34 AM

“Until quantum computers are so commonplace that crooks can also obtain one.”

A quantum computer will reduce the complexity of an attack by a factor of a square root. So it will effectively halve the keyspace; that’s all.

Mike B August 18, 2011 8:38 AM

I am sure one can find a counter-example of an attack that gets worse. Might be a corner case, but generalizations like that rarely hold up.

Clive Robinson August 18, 2011 8:44 AM

@ Bruce,

“Or maybe even more; we don’t want to be revising the standard again and again.”

We don’t want to be revising standards at all if we can help it.

One powerful attack method against protocols and standards is the “fall back attack”: when they have multiple revisions, an attacker can force a “race to the bottom”.

Basically, a person who is in the middle of the channel at the “negotiation phase” can cause both ends to fall back to the weakest algorithm both ends support.

Now because this is “autonegotiated” and software is supposed to be user friendly, often the user will not be warned that this has happened, even when the weakest link is “plaintext”. The software developer mantra is “The users want to communicate, don’t get in their way”.

That aside, OK, so technically AES is broken in that the attacks are better than brute force, but does this actually matter?

The answer is it depends on the lifetime of the required confidentiality or authentication of the communication.

For most communications the confidentiality is only required for the lifetime of a password etc., because the value of the confidentiality of the communication is in the authentication, not the transaction.

That is, you don’t really care if other people know the communication is to pay your quarterly electricity bill, nor do you really care if people know it’s 375.27USD. However what you do really care about is that only you can authorize a movement of money out of your account.

So on the assumption that your bank uses a very weak password authentication, what you care about most is the confidentiality of the password. So if you change your password considerably more frequently than the symmetric cipher (AES in this case) takes to break, then in this case it’s not a problem.

However, what if the authentication needs to be long lived, as in a 999-year lease on a property? Then you might be concerned.

Usually the solution to this sort of issue is to use a chain of orthogonal ciphers. That is, you use a series arrangement of two or more cipher algorithms that use fundamentally different methods to achieve the plaintext to ciphertext mapping.

For probably 99.9% of uses of AES this attack is unimportant, and for those where it might have an effect there are known solutions that don’t actually require the current AES standard to be changed now or in the near future.

However this attack is different from many previous attacks on AES, in that generally other previous attacks on AES have required an infeasibly large amount of ciphertext material under a given key, and in normal practice key changes happen considerably more frequently than would allow this amount of ciphertext to accrue.

As the article authors say,

“Most of our attacks only need a very small part of the codebook and have small memory requirements”.

This sounds a lot more practical than previous attacks, thus those with very high bandwidth communications channels with infrequent key changes might need to think about changing their habits.

This choice will largely depend on the actual computational resource requirement (CPU cycles) of the attack. The authors say,

“As our attacks are of high computational complexity, they do not threaten the practical use of AES in any way”

However, two things to note: first, CPU power per buck tends to double quite quickly (about every 18 months at one point). And secondly, as Bruce always mentions, attacks don’t get worse with time; usually they get a lot better quite quickly.

So it could be argued that for a safety margin the time this attack requires to be effective halves every year for any given unit of cost invested.

Does this attack matter even over a 999-year lease? Probably not.

Has August 18, 2011 8:59 AM

Silly question, but when they say “the first … attack”, do they mean discovered or actually performed?

Clive Robinson August 18, 2011 9:03 AM

@ phread14,

“I thought the reason for Triple-DES was that some peculiarity of DES made Double-DES worse, or at least no better than DES.”

There was indeed an issue with Double-DES, not that it matters that much now.

The main advantage of Triple-DES was backwards compatibility with single DES in hardware designs.

If you think about it, for more than just a few bits of block cipher width the number of maps available usually vastly exceeds the largest number expressible by that number of bits (i.e. (2^N)! is considerably greater than 2^N).

The question then falls back to how many of those maps are actually of use for a cipher system and, more importantly, whether re-encrypting produces an overall weak map (that is, are some multiples of encryption effectively an encryption followed by a decryption over some or all of the map).
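
As a back-of-the-envelope illustration of how large that gap is (Stirling’s approximation, not a figure from the comment above):

    \[ \log_2\big((2^{N})!\big) \approx 2^{N}(N - \log_2 e) \;\Rightarrow\; \log_2\big((2^{64})!\big) \approx 2^{64} \times 62.6 \approx 2^{70} \]

So a key able to select any permutation of 64-bit blocks would need roughly 2^70 bits, against the 56 bits DES actually uses.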

Jose August 18, 2011 11:35 AM

Certainly one big headache, but headache and nothing more.
I will propose Twofish + the new NIST hash winner as the new standard, if possible, as soon as they can. For high security users, the use of “open keys” that couldn’t be cracked or brute-forced. And for the paranoids, well, the uncomfortable OTP one-time-pad encryption system. God bless the crypto world and good solutions will be coming to the final end user…. Jose

salach August 18, 2011 11:50 AM

A question to Bruce, following his 2009 advice to increase the number of rounds:

Obviously increasing the number of rounds adds to the safety margin, but is it really the optimal way to do it? There are performance issues, and maybe other means, less elegant, would do a better job without the performance penalty?

csharptest August 18, 2011 12:10 PM

Last year I took your 2009 article to heart and began experimenting with a managed implementation for extending the rounds of the built-in RijndaelManaged implementation. Though the end result worked well enough, it also caused me to start asking a lot of questions about the practicality of applying this in the ‘wild’. I just completed a blog post with some of my thoughts on the subject (http://csharptest.net/?p=523).

At present I still can’t find any practical application of using AES with extended rounds. It seems to me most people just don’t have anything to store that is that sensitive that they should be concerned. So I want to turn the question to you, Bruce, given the ability to do this, when should we (software developers) deviate from the accepted standard? Or is it the case that the standard remains good enough for now and we should wait until the standard changes?

Bruce Schneier August 18, 2011 12:11 PM

“A question to Bruce, following his 2009 advice to increase the number of rounds: Obviously increasing the number of rounds adds to the safety margin, but is it really the optimal way to do it? There are performance issues, and maybe other means, less elegant, would do a better job without the performance penalty?”

Depends on what you mean by “optimal.” It’s the easiest way to do it: it gets the job done with the minimal new analysis. Anything else would require a multi-year analysis process.

To reiterate: I don’t think there’s any danger of a practical attack against AES for a long time now. Which is why the community should start thinking about migrating now.

jmdesp August 18, 2011 1:32 PM

@Schneier : the 2^88 data space required is not an unreasonable reason for some to believe the practical cost of breaking AES may not necessarily be changed by this attack. So I’m not sure you’re completely objective when you say we should move now. I’d wait a few years to check if further advances are made on both the complexity and the data space. Then yes, I’d move.

@Mike B : The one thing like that that may happen is that we later realize some published attack was far less realistic than initially believed. And that has happened. For example, the people who announced attacks on SHA-1 around 2^52 have withdrawn their claims, and Stephane Manuel now estimates the difficulty more around 2^65/2^69, based on a methodology he admits could be yet too optimistic (in a 2009 paper; I didn’t read his 2010 one, which is gated).

Nick P August 18, 2011 2:49 PM

I’m not concerned about this attack. I think suggesting we move to another cipher or change up the infrastructure is alarmist. Organizations requiring multi-decade data protection, like classified TS/SCI facilities, should increase the number of rounds for their data or use alternative algorithms. For anything I want protected over decades, I use triple encryption with AES-Twofish-Serpent, Serpent 256 or a carefully orchestrated one-time pad scheme. It depends on the value of the data, its structure, projected age, potential attackers and access requirements.

@ Bruce Schneier & other crypto pro’s

I’ve also been toying with improving crypto security/performance tradeoff by first encrypting with Salsa20, then AES-256. This is just double encryption. However, this is a non-crypto geek’s attempt at improving security posture. Salsa20 is a very fast stream cipher that randomizes the data stream. Then, AES-256 encrypts it with a different key. My theory is that attacks should be harder if the plaintext is random-looking, at very least harder due to prior encryption. So, the use of AES is to take advantage of hardware acceleration, existing implementations & standardization. Salsa20 to cover for issues with AES & increase strength in general. Is this a naive analysis with no practical merit or am I onto something?
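
For what it’s worth, a minimal sketch of that layering, assuming the PyCryptodome library and glossing over key management and authentication entirely:

    # Sketch only: cascade Salsa20 then AES-256-CTR with independent keys and nonces.
    # Assumes PyCryptodome (pip install pycryptodome); no MAC/authentication shown.
    from Crypto.Cipher import Salsa20, AES
    from Crypto.Random import get_random_bytes

    def cascade_encrypt(plaintext, salsa_key, aes_key):
        s = Salsa20.new(key=salsa_key)          # 32-byte key, random 8-byte nonce
        inner = s.encrypt(plaintext)            # first pass: fast stream cipher
        a = AES.new(aes_key, AES.MODE_CTR)      # 32-byte key, random CTR nonce
        return s.nonce, a.nonce, a.encrypt(inner)   # second pass: AES-256

    def cascade_decrypt(s_nonce, a_nonce, ct, salsa_key, aes_key):
        inner = AES.new(aes_key, AES.MODE_CTR, nonce=a_nonce).decrypt(ct)
        return Salsa20.new(key=salsa_key, nonce=s_nonce).decrypt(inner)

    salsa_key, aes_key = get_random_bytes(32), get_random_bytes(32)
    packet = cascade_encrypt(b"attack at dawn", salsa_key, aes_key)
    assert cascade_decrypt(*packet, salsa_key, aes_key) == b"attack at dawn"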

Nick P August 18, 2011 2:58 PM

@ Bruce Schneier

Another thing: IDEA’s patent has expired in Europe & will in US by January 2012. You once said it was your favorite cipher. I haven’t been following it, but the Wikipedia article shows no practical attacks have been found on it. Would this still be a good alternative to AES where 128-bit keys are acceptable? I mean, it’s been around for a long time & was targeted by NSA thanks to its use in PGP. I’ve seen no evidence anyone has cracked it in the nearly 20 years it’s been in use.

Obscurity + No practical attacks + 20 years field-tested + 128bit key + public domain (soon) = good choice for stuff not requiring or trusting standardized algorithms? Or should I just keep using Twofish, Serpent, and the new ESTREAM stream ciphers?

Dr. T August 18, 2011 3:58 PM

@Nick P: “… For anything I want protected over decades…”

Since almost no digital storage medium will last for decades, there is no need for encryption that will last decades (which probably isn’t possible). Encrypt files now with a commonly used method and archive them. Five or ten years from now, decrypt the archives, re-encrypt them with the latest standard, and re-archive them on fresh media. This archiving process increases the likelihood of retaining data integrity and decreases the likelihood of forgetting decryption passwords!

Paul Crowley August 18, 2011 4:15 PM

I seem to remember that at a panel session of the Third AES Conference, you recommended that if Twofish was not selected, that they select either Serpent or “18-round Rijndael”. Immediately after it was selected, you stated that you did not expect practical attacks against the new AES to arise but you feared certificational attacks that reduced confidence in the cipher. You were right on both counts!

tommy August 18, 2011 4:32 PM

@ Bruce:

1) Would cascading, say, twofish and AES produce a measurable improvement, and at what cost in overhead? I ask because Nick P.’s choice of Salsa was one that I believe is not so well known as Twofish, though I could be mistaken.

2) Given the huge lead time for NIST competitions before a winner is chosen, perhaps NIST should announce the opening of a competition for AES’s successor as soon as they announce that Skein has been chosen for the hash? (thinking positively here)

hope August 18, 2011 6:46 PM

Is there any way to predict the lifetime of a cipher?
Anything might be able to be brute-forced by some person or company, and if that’s the only security part of a cipher then wouldn’t they all be broken?

Bruce Schneier August 18, 2011 7:32 PM

“So I’m not sure you’re completely objective when you say we should move now. I’d wait a few year to check if further advances are made on both the complexity and the data space. Then yes I’d move.”

I’m not saying we should move now. I don’t think anyone should change their cipher choice over the 2007 attack or this new attack. My advice is only for new implementations.

The cheapest time to change ciphers is before the system is fielded. And the easiest time to update a standard is before the old one becomes obsolete.

Bruce Schneier August 18, 2011 7:33 PM

“I seem to remember that at a panel session of the Third AES Conference, you recommended that if Twofish was not selected, that they select either Serpent or “18-round Rijndael”. Immediately after it was selected, you stated that you did not expect practical attacks against the new AES to arise but you feared certificational attacks that reduced confidence in the cipher. You were right on both counts!”

I remember that, too.

Bruce Schneier August 18, 2011 7:34 PM

“I’m not concerned about this attack. I think suggesting we move to another cipher or change up the infrastructure is alarmist. Organizations requiring multi-decade data protection, like classified TS/SCI facilities, should increase the number of rounds for their data or use alternative algorithms.”

Agreed.

Clive Robinson August 18, 2011 7:43 PM

@ Nick P,

“I’ve also been toying with improving crypto security/performance tradeoff by first encrypting with Salsa20, then AES-256”

It is one simple way to do it but there are others (have a look at Ross Anderson’s dancing bear).

However, even with hardware systems there are other ways of using the stream cipher. If you think about it, a basic block cipher has two inputs and one output. You can encrypt either the plaintext in or the ciphertext out to achieve a similar effect, or you can encrypt/replace the key with the stream cipher to make the use of the block cipher in codebook mode effectively polyalphabetic.

However if you are using software implementations then you can do a whole lot more…

If you consider a block cipher, it usually consists of two halves: the key expansion system and the encryption rounds.

The plaintext goes into the rounds and emerges as ciphertext; to do this a round breaks the plaintext into two or more parts and uses a reversible mixing process to mix each part with the output of a (hopefully) one-way function.

The most recognisable round is the basic Horst Feistel round consisting of two half rounds, where the input to the one-way function is one half of the plaintext and the appropriate half-round key.

As a rough approximation, the key expansion half of the block cipher has to produce at least N×R bits, where N is the bit width of the cipher block and R the number of rounds.

If you consider how a stream cipher is used, it effectively “whitens” the data by using a reversible mixing process such as a simple XOR, addition mod 2^N, or multiplication in a field.

Thus the stream cipher can be added to a half round in one of a number of places,

1, By whitening the data in (a) across the full width, (b) across the half to be mixed, or (c) across the half going into the one-way function.
2, By (a) replacing, (b) whitening, or (c) augmenting the round key into the one-way function.
3, By whitening the output from the one-way function before it goes into the mix.
4, By whitening the data out (a) across the full width, (b) across the half from the mix, or (c) across the half that went into the one-way function.

Importantly, you can use one or more of these points. Some of the points appear to be logically the same, and can be if the mix function is the same as the whitening function. For instance, if the mix function and the whitening function are XOR, it does not matter if the whitening is applied to either input or the output of the mix function. However, if the mix function and whitening function are different, this equivalence probably does not hold. Thus you could use a mix function and two or more whitening functions that are all different.

What I have done in the past is to use the stream cipher to whiten all the round keys, with as many bits from the stream cipher as there are from the key expansion system. This might seem like a waste of bits, but it has low overhead to generate. Also I used the XOR whitening function on one half-round key and add mod 2^n on the other half-round key.
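
A toy sketch of that round-key whitening, with made-up 32-bit round keys and keystream words standing in for a real key schedule and stream cipher:

    # Toy sketch: whiten expanded round keys with stream-cipher keystream words,
    # XOR on even half-rounds and addition mod 2^32 on odd half-rounds.
    def whiten_round_keys(round_keys, keystream_words):
        whitened = []
        for i, (rk, ks) in enumerate(zip(round_keys, keystream_words)):
            if i % 2 == 0:
                whitened.append(rk ^ ks)                 # XOR whitening
            else:
                whitened.append((rk + ks) & 0xFFFFFFFF)  # add mod 2^32 whitening
        return whitened

    # Hypothetical values; a real system would draw keystream_words from a
    # stream cipher, one word per half-round key produced by key expansion.
    round_keys      = [0x01234567, 0x89ABCDEF, 0xDEADBEEF, 0xCAFEF00D]
    keystream_words = [0x11111111, 0x22222222, 0x33333333, 0x44444444]
    print([hex(k) for k in whiten_round_keys(round_keys, keystream_words)])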

Clive Robinson August 18, 2011 8:14 PM

@ tommy,

“Given the huge lead time for NIST competitions before a winner is chosen, perhaps NIST should announce the opening of a competition for AES’s successor as soon as they announce that Skein has been chosen for the hash?”

Unless something dramatic and completely unexpected happens to reduce the complexity of this attack, or another significant attack arises, there is actually no need to do anything for a considerable period of time.

In my view there are one heck of a lot more useful things for NIST to be doing than organising another crypto primitive contest, such as producing a standard for a framework where basic primitives can be dropped in, etc.

Also, if they do organise another primitive competition, I think they should look for three or more orthogonal algorithms to be selected as winners.

This will reduce the chance that any new attack will affect all of them.

Like many people I was not keen on Rijndael and preferred Serpent, for many practical reasons not related to crypto theory, closely followed by Twofish. And yes, I still don’t like Rijndael even though it has been officially blessed.

Joe August 18, 2011 9:19 PM

@phred14
“Was the need for tripling indeed specific to DES […] or was the need for tripling generic?”

It’s generic. Doubling any cipher (at least in that manner) only adds one bit of security.
See https://secure.wikimedia.org/wikipedia/en/wiki/Meet-in-the-middle_attack
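
A toy demonstration of the meet-in-the-middle idea, using a hypothetical 8-bit “cipher” (nothing like DES) so the tables stay tiny:

    # Meet-in-the-middle on double encryption with two 8-bit keys:
    # roughly 2 * 2^8 work plus a 2^8-entry table, instead of 2^16 work.
    def toy_encrypt(block, key):                 # stand-in 8-bit block cipher
        return ((block ^ key) * 7 + key) % 256

    def toy_decrypt(block, key):
        inv7 = 183                               # 183 * 7 == 1 (mod 256)
        return (((block - key) * inv7) % 256) ^ key

    def double_encrypt(block, k1, k2):
        return toy_encrypt(toy_encrypt(block, k1), k2)

    def mitm(plain, cipher):
        # Build a table of all E_k1(P), then test each D_k2(C) against it.
        # A second known plaintext/ciphertext pair would weed out false hits.
        forward = {}
        for k1 in range(256):
            forward.setdefault(toy_encrypt(plain, k1), []).append(k1)
        for k2 in range(256):
            for k1 in forward.get(toy_decrypt(cipher, k2), []):
                yield k1, k2

    plain, k1, k2 = 0x42, 0x5A, 0xC3
    cipher = double_encrypt(plain, k1, k2)
    print((k1, k2) in set(mitm(plain, cipher)))  # True: the real pair is found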

@Mike B
“I am sure one can find a counter-example of an attack that gets worse. Might be a corner case, but generalizations like that rarely hold up.”

If I understand correctly, it’s not that any new attack will always be as good or better than previous ones, it’s that you still have the old ones. If you can crack a key in time = x today, you can crack it in time <= x tomorrow.

tommy August 18, 2011 11:59 PM

@ All:

“Attacks always get better; they never get worse.”

The reason seems obvious, but I don’t remember anyone saying it: if you invent a weaker attack (one that breaks no new ground in method), you don’t bother to publish it. I kind of considered the statement a tautology.

@ Clive Robinson:

“Unless something dramatic and compleatly unexpected happens to reduce the complexity of this attack or another significant attack arises there is actually no need to do anything for a considerable period of time”

When, exactly, are attacks “expected”, especially “dramatic” and “significant” ones (major breaks)?

What Bruce said at 7:32 pm:

“And the easiest time to update a standard is before the old one becomes obsolete.”

mithrandir August 19, 2011 1:15 AM

Would not using a hash such as Tiger, SHA-2, or Whirlpool, and a cascade of algorithms (Serpent, Twofish and AES), be secure, leaving no traces on the machine?

hope August 19, 2011 2:38 AM

Using the IV in AES should block off maths attacks with a decent random number. Mixing /dev/random with 10101010 or 01010101 should increase the strength to close to 100% as long as an attacker doesn’t know whether it is 10 or 01, and stop them gaining data with every wrong and right random number; or rotate the 32 bytes in the encrypted data with one another, so the output is more monkeys-on-a-typewriter Shakespeare.

Jimmy August 19, 2011 4:17 AM

NSA: “Attacks always get better; they never get worse.”

That NSA source goes by the name Captain Obvious.

Clive Robinson August 19, 2011 9:47 AM

@ tommy,

Bruce’s statement,

“And the easiest time to update a standard is before the old one becomes obsolete.”

is effectively the same as my,

“Unless something dramatic and completely unexpected happens to reduce the complexity of this attack or another significant attack arises there is actually no need to do anything for a considerable period of time.”

Bruce is saying it is best to attempt to do something before something else happens. But he has no idea when that something else is going to happen nor what it will be.

What I’m saying is unless something unexpected happens before we can predict it we don’t actually have to do something for quite some period of time. So like Bruce I have no idea when something is going to happen nor what it will be either.

The thing about the unknown is you cannot prepare for it because it is “unknown”. As Bruce notes, increasing the number of rounds is a simple solution to this particular type of attack.

However conceivably an attack might arise in say the key expansion algorithm that makes increasing the number of rounds ineffective as a solution.

So it boils down to “it’s nice to prepare in advance” (Bruce’s point) but “unless you know what to prepare for you may well be wasting your and other people’s time”… Thus “currently there is little point doing anything specific, because it probably does not affect you” (my point).

That said, Bruce is commenting on what the crypto designers should be doing (start thinking), and I’m commenting on what the majority of users need to do in the foreseeable future (nothing).

I hope that makes it clear.

Randall August 19, 2011 1:09 PM

Echo @jmdesp about the complexity question — unclear if the attack would, in fact, require fewer resources than brute force, or just trade fewer resources spent on encryption for more spent on memory/lookups. If it’s harder than brute force in the end, it’s still not over the certificational-break line.

(In another sense, it’s clearly harder than brute force, in that getting 2^40 plaintexts is more of a practical hurdle than doing 5x more encryptions in any real application. But academics tend not to worry about such things, and understandably not.)

Other thing is that it seems like a total bear to extend — there’s nothing to do here analogous to finding an improved differential/linear characteristic or peeling off 1-2 more rounds some clever way. An attack that clearly breaks 6 rounds and might be extensible is scarier to me than an attack that slightly speeds up brute force on 8 or 12 rounds but doesn’t fully break any version of the cipher and doesn’t seem extensible. This attack may be a little like the complementation property of DES that way — shouldn’t be possible, but doesn’t matter much in the bigger picture.

Of course, it’s still quite brilliant, and structural properties like this are an important place to keep looking when AES clearly is designed pretty well to resist differential/linear attack.

Finally, it’s yet another attack that depends on AES’s key schedule’s flaws. That tells me that an AES-style key schedule isn’t the way — either design your system not to rely on key schedule diffusion at all (Skein), or make the key schedule beefy enough that it’s very likely to do everything you need (Blowfish, and SHA-2 is much closer to this ideal than SHA-1).

John August 19, 2011 2:23 PM

@Dr. T
“there is no need for encryption that will last decades”

I disagree with that assertion. Someone who needs to keep some data secret for many decades will be better off using encryption that will protect it for at least that amount of time than using something weaker. If they use encryption that will only protect it for 10 years, an attacker who somehow acquires a copy of that encrypted data will only need 10 years to decrypt it, no matter how you choose to re-encrypt it later on.

cryptonoob August 19, 2011 4:32 PM

Is this an over-over…over kill?

Encrypt with, say, 10 different ciphers. Use 64 chars or more passwords if allowed. And repeat if still feeling paranoid. |-D

Wouldn’t that guard against all kinds of attacks, even quantum computers?

Wolf August 19, 2011 6:42 PM

@cryptonoob

Speed and the ability to implement a cipher in cheap/simple hardware is important.

amanfromMars August 20, 2011 1:23 AM

“A quantum computer will reduce the complexity of an attack by a factor of a square root. So it will effectively halve the keyspace; that’s all.” …… Posted by: Bruce Schneier at August 18, 2011 8:34 AM

Well, surely Bruce that is far too a refined and definitive statement to make, for it appears to make no allowance for both the very real future possibility and the most very likely probability of a quantum computer being nothing at all like the virtualising bits and bytes machines of yesterday/today. And what is NOT to say that Man is discovered to be an alien morph of a quantum computing machine as IT exercises global program control over as yet blissfully unaware and disenabled human being assets with Sublime Text Deliveries and Manipulative Streaming Media.

A quantum computer programmed with Markov Chain Monte Carlo algorithms does not waste time and processing power breaking encryption protecting Vulnerable Destructive Operating Systems, it simply extremely effectively virtually decrypts past hidden and IMPertinent events by completely ignoring recognition of the need for the exercise and instead quantum leaps forward to Other Times with SMARTer Places in CyberSpaces, in order to Present a Future Current devoid of past follies with their attendant perverse and ultimately self-destructive corrupting complexities …… which would probably have been the sole primary, sub-prime reason/flawed original justification for secrecy protection in the first place.

In a Brave New World Order with Nothing to Hide is there Nothing to Fear, because Secrets have been Realised to have caused and delivered Fear and are Anathema to Virtually Intelligent Advanced Intelligence and Quantum Operating Systems with HyperRadioProActive IT. Such SMARTer IT Systems, having no corrupting secrets, have no need of wasting time and effort and processing power on hiding projects which stunt growth and hinder progress. All information and intelligence is freely available for viewing and upon demand/request…… on WYSIWYGIWYSeed Machines and C42 Quantum Control Systems for AI and Quantum Computers, currently betatesting present live contemporary systems for remote grooming and virtual transfer/pragmatic steal of their leading input source elements/core components?

The consensus opinion from contributors on this thread appears to be, that unless something unexpected happens, things are safe enough as they are, with just incremental changes to existing used protocols and algorithms, and that the unexpected is unknown and thus cannot presently be guarded against or factored into the security equation?

Does that not say that all systems are catastrophically exposed and vulnerable to stealthy exploitation by unknown actors and that there is no security or protection against such actors? With that, I would concur.

cryptonoob August 20, 2011 4:33 AM

If there’s a corp that needs to make sure the secrets stay secret, surely they don’t care about hardware resources.

Wouldn’t 10-100 different ciphers guard against anything?

How about 1000?

Where is the threshold beyond which more ciphers don’t matter? Two or three?

Clive Robinson August 20, 2011 7:00 AM

@ Cryptonoob,

“Where is the threshold beyond which more ciphers don’t matter? Two or three”

Simple answer “one” as it should be.

There are quite a few crypto systems out there that will give you any desired level of security for periods long past anybody’s caring.

For instance, the OTP is secure forever under two conditions: the key stream is truly random, and it is never revealed in any way.

However, simple as it is, it’s not particularly practical. For instance, all you are really doing is moving the security issue from the plaintext to the crypto stream.

From the security aspect the issue with crypto systems is effectively threefold. The first is expanding a secret key in a known algorithm to form a secure map or stream for the plaintext to ciphertext conversion. Secondly, using that algorithm in a system that ensures the continuation of the secrecy so far obtained by the algorithm. And the third, and most important these days, is key management and security.

However “secrecy” is not the only consideration in a crypto system; other things such as efficiency, speed, power consumption, physical size and all sorts of other resource issues etc. raise their heads to muddy the waters.

And like it or not these reflect back all the way to the “key” in terms of size etc. So right from the first step the design of a crypto system is a compromise.

The question is which compromises affect what, in what manner, and what effect that has on the design goals of the system.

The fun starts when you realise we don’t really know how some things compromise other things. Thus at any point in time we can say,

‘We believe system X is going to be resistant to attack for Y period within our current knowledge’

However, another thing we can say is that the current known laws of physics put constraints on what is achievable by even the most well-equipped adversary. One such is that the processing of a bit of information requires a certain minimum amount of energy. In some cases the energy is recoverable; in most it’s not. Therefore we know that this places constraints on such things as chip sizes and clock speeds. Thus we know Moore’s Law has a hard limit, and although human ingenuity will find ways of moving around it a bit, we are trapped by the laws of physics as we currently understand them, even with quantum computers.

Thus we can (and have) designed cipher systems where the key size is such that we know it cannot be brute-force searched in less than a certain period of time (measured in time beyond that of the expected life of the universe in some cases 😉 ). In Bruce’s book Applied Cryptography he gives a number of physical constraints and then makes a joke about “dark matter”.

The problem is such systems are in most cases not practical to use in everyday systems.

Also, crypto strength is not really relevant in most cases; see “Rubber Hose Cryptanalysis” and “Information leakage by side channels”.

MarkH August 20, 2011 8:52 AM

@cryptonoob:

In what I think of as the “modern era” of cryptography (since about 1970), “strong cryptography” has proven to be just that: strong.

It is deeply impressive to me that after all these years, the world knows no practical attack on DES other than exhaustive keysearch (i.e. “brute force”). Because of increasing computer power, the 56-bit strength of DES eventually became crackable, but the design of the cipher has shown no exploitable weakness. Unless a practical quantum computer can be made, 3DES will probably remain secure for generations.

The cryptanalytical work on AES is very impressive, but has not yet yielded any practical attack. Except for the highly specialized related-key attacks, there is no attack on real (as opposed to reduced-round) AES that is any better than exhaustive search.

Unless quantum computers of practical scale can be realized (still a very doubtful proposition), in the year 2111 it will probably remain impossible for anyone to decrypt an AES ciphertext without prior knowledge of the key.

Gweihir August 20, 2011 11:32 AM

@MarkH: As Bruce states above, a quantum computer is singularly unusable for breaking block ciphers, as it gives you only one bit for free. So forget that approach.

Factorization might fare a bit better, but it is far easier to scale up. And quantum computers cannot break down problems. You always have to fit in the whole thing. If your quantum computer is just one bit too small, it becomes completely useless.

Bottom line: Forget quantum computers for crypto-breaking in any practical sense. And I have a feeling that unlike conventional computers, the effort for scaling quantum computers up will be exponential anyways.

MarkH August 20, 2011 12:43 PM

@Gweihir, who wrote “a quantum computer is singularly unusable for breaking block ciphers, as it gives you only one bit for free”

I don’t pretend to understand quantum computers — maybe, if I’m not too old for it, I’ll try to learn about them when they make a practical example — but here is how I understand their relation to block ciphers. My apologies for any errors!

Grover’s algorithm for quantum computers is relevant to block ciphers, because it can be used to improve the speed of an exhaustive search. As Bruce posted, it “will reduce the complexity of an attack by a factor of a square root.” For a block cipher with key length of n bits, exhaustive search time is roughly O(2^n). A square-root improvement reduces the time to O(2^(n/2)), so a 128-bit key would be reduced to the effective strength of a 64-bit key.

The difference isn’t one bit, but rather halving the number of bits. It is for this reason, that doubling key lengths (for symmetric ciphers) may be expected to protect against the advent of a practical quantum computer, even if all quantum computer scaling problems are overcome.

Because the maximum practical computation today is considered to be roughly 2^60, and the doubling time seems to be not much less than one year, a 2^128 attack may not be feasible for generations. For this reason, users of good block ciphers with 256-bit keys need not worry about the possibility of quantum computers.
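
Spelling that out (standard Grover arithmetic, nothing specific to AES):

    \[ T_{\text{classical}} = O(2^{n}), \qquad T_{\text{Grover}} = O(2^{n/2}), \qquad \sqrt{2^{2n}} = 2^{n} \]

So a 128-bit key drops to about 2^64 quantum evaluations and a 256-bit key to about 2^128, which is exactly why doubling the key length restores the original classical margin.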


On the other side of the coin, if those users of block ciphers with 256-bit keys are relying on public-key cryptography for distribution or establishment of block cipher keys, then a quantum computer would be a threat.

Personally, I guess that the scaling problems of quantum computers may never be overcome. But if they are, progress is likely to be gradual: even if a practical 1024-qbit computer could be made, a 2048-qbit machine would be a LOT harder than that. Right now, achieved coherence times for quantum computers are vanishingly short, but they would have to grow to many minutes (!!!) for these hypothetical large computations.

So doubling of key lengths for public key ciphers, while not offering the lasting protection expected for block ciphers (see above), is still likely to buy a few extra years in case a practical quantum computer ever comes into being.

Nick P August 20, 2011 1:42 PM

@ MarkH

Good analysis. You forgot to mention there are quantum resistant public key cryptosystems now. Particularly, NTRU seems like it has decent security after they added the perturbation technique. I’ve been wanting to see someone like Bruce or Matt Blaze take a fresh look at it & give us a solid set of recommendations regarding its use.

I’ve always pushed a dual-signature scheme using two different asymmetric algorithms for high security apps where performance wouldn’t be affected (email, file transfer & real-time if hardware accelerated). The 256-bit symmetric key would be split using a secret splitting scheme. The two secret pieces would be sent over two different cryptosystems, maybe one resistant to quantum attacks. This makes things much worse for the attacker & only adds an extra signature for the defender. (Not so bad on modern processors or with hardware acceleration in devices.)
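
A sketch of just the XOR secret-splitting step (the two transport cryptosystems themselves are out of scope here):

    # Split a 256-bit session key into two shares; each share alone is
    # statistically independent of the key, so both channels must be broken.
    import os

    def split_key(key):
        share1 = os.urandom(len(key))                        # random pad
        share2 = bytes(a ^ b for a, b in zip(key, share1))   # key XOR pad
        return share1, share2

    def recombine(share1, share2):
        return bytes(a ^ b for a, b in zip(share1, share2))

    session_key = os.urandom(32)
    s1, s2 = split_key(session_key)   # send s1 via cryptosystem A, s2 via B
    assert recombine(s1, s2) == session_key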

Clive Robinson August 20, 2011 3:22 PM

@ Mark H,

“I don’t pretend to understand quantum computers — maybe, if I’m not too old for it, I’ll try to learn about them when they make a practical example”

You appear to have a better grip on them than most 8)

Just remember that most of the productive work on quantum computers has not been done by physicists but by engineers.

And the one thing that engineers excel at is proving scientists wrong (see Arthur C Clarke’s comments on distinguished but elderly scientists and their “impossible” predictions 😉 ).

So my guess is that if and when we get QCs to the point they start to become useful, an engineer will come along with some “divide and conquer algorithm” to get around any pesky limits 😉

I’m not saying it will happen (I’m neither distinguished enough or old enough 😉 but as Douglas Adams once observed about “million to one chances” they tend to happen “nine times out of ten”.

cryptonoob August 20, 2011 4:04 PM

I understand that one cipher is absolutely sufficient.

But aren’t multiple ciphers defense in depth?

Why wouldn’t you at least double encrypt every data area? Like first you enter a lobby – one security check – and then before you can enter a floor, you need to insert a key in the elevator, and one more when entering a room, and of course the computer there is encrypted. I recall reading that Apple has 6 levels of physical security until you get to the room where the next iGottahaveit gets crafted!

Isn’t it better to overkill a bit – or a lot – if it from a resource standpoint is negligible?

Another q: can you do processing on encrypted data already, like in the cloud?

Btw, thanks for your highly knowledgeable answers guys, and ultra-respect to Bruce – reading his books, understanding little, but at least that this is fundamentally necessary stuff for our world, any world where information is transmitted, manipulated and stored.

Nick P August 20, 2011 4:28 PM

@ cryptonoob

Double or triple encryption is a good way to overcome flaws in any one cipher and increase the work required of an attacker, so long as each algorithm produces random-looking output, different algorithms are used, and each has its own key. TrueCrypt offers this option and Cryptophone uses AES and Twofish together. Personally I think a strong AES candidate is enough, because attackers usually crack your system, not your crypto.

Clive Robinson August 21, 2011 12:25 AM

@ Cryptonoob,

“But aren’t multiple ciphers defense in depth”

Only if the ciphers are orthogonal to each other or give access to other “unrelated” maps or streams.

A simple example is the XOR function: no matter how many times you apply it in succession, the applications all collapse down to the equivalent of applying just a single XOR function. Likewise addition mod N: no matter how many numbers you add, it collapses down to the equivalent of a single addition.

However, using an XOR function and then applying addition mod N is usually (there are special exceptions) not collapsible down to a single function.
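
A quick check of that collapse with toy 8-bit values:

    # Chained XORs collapse to a single XOR with the combined key, whereas
    # XOR followed by addition mod 256 does not (for these values) collapse.
    p, k1, k2 = 0x5C, 0xA3, 0x17
    assert (p ^ k1) ^ k2 == p ^ (k1 ^ k2)                    # two XORs == one XOR
    assert ((p ^ k1) + k2) % 256 != (p ^ k1 ^ k2)            # XOR then add differs
    assert ((p ^ k1) + k2) % 256 != (p + (k1 ^ k2)) % 256    # ...in either grouping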

Now if you think about the Feistel round (used in DES and many other systems), the structure in a half round is to XOR half the plaintext block with the output of a one-way function. The next half round does the other half of the plaintext block.

Thus DES is just a series of XOR functions in series, alternating on each half of the plaintext block into each round. So if you could “pre-compute” the values from the one-way functions, DES would collapse down to just two XORs, one on either half of the plaintext input block.

The problem is Horst Feistel thought of this and worked out a nifty little trick to stop you doing that. He made the input to the one-way functions in each half round consist of two inputs, the first being the half-round key derived from the cipher key, the second being the other half of the plaintext block into the half round. The result is that the output of each Feistel round is intimately related to the plaintext into the round as well as the key, thus to collapse the XOR functions down you would need to know in advance not just the key but the plaintext as well.

The reason that putting DES encryptions in series (correctly, as in 3DES) gives you greater strength is because the number of maps defined by the keyspace (i.e. the number of keys) is much, much smaller than the number of maps that exist. DES is a 64-bit cipher with a 56-bit keyspace imposed by the NSA, which makes the number of maps available only 2^56. However the total number of maps available at a 64-bit block width is not the 2^64 of the normally expected keyspace but (2^64)!.

The question then becomes how you get at these other maps to enlarge the available key space. Well, the answer is by putting DES in series, BUT if you don’t do it properly you only increase the number of maps by a very small amount, not by the number of extra bits you have added to the keyspace.

There is also a second gotcha on the maps: a great many are very, very weak, in that some are very little different from the null map (where the ciphertext is identical to the plaintext). You have to ensure that the design of the cipher is such that chaining the ciphers together does not produce one of the weak maps.

Ensuring this is difficult; however, if you use a second cipher with an orthogonal design then the chance of bringing up a weak map is very much smaller.

With regards to,

“Isn’t it better to overkill a bit – or a lot – if it from a resource standpoint is negligible”

The simple answer is: only to a point. The number of maps has a limit imposed by the text block width. That is, a 64-bit-block cipher has a maximum number of maps of (2^64)!. However, depending on the cipher structure, you may never get more than a few no matter how many times you chain the cipher.

However, there is a third gotcha waiting for you. If you take a very simple cipher, the XOR, each time you XOR again with the same key value you get back to the plaintext. Now there are two things to note: the first being that it’s always an even number of applications of the cipher function, the second is that it produces the inverse map and thus decrypts the encryption.

Thus two weaknesses: firstly, some ciphers when chained using the same key can effectively decrypt back to plaintext, thus reducing all the previously built-up strength to zero. Secondly, other ciphers will after a fixed number of steps produce the same map as would have been produced by a single encryption; if this occurs for all keys after the same chain length then again you have thrown away all the previous strength built up, leaving just that of a single encryption.

So in some cases you might be giving an adversary a short cut into attacking the system.

With regards,

“Can you do processing on encrypted data”

Simple answer is “yes but not usefully”.

If you encrypt using a stream cipher, then you can usually combine the encrypted data with another value, using the same operation the encryption used, just as you could if the data were not encrypted.

So if your stream cipher used addition mod N, you could add a plaintext value to the encrypted value, and the result after decryption would be the same as if you had added the two together as plaintext.

Likewise, if the second number were also encrypted, you would decrypt the result twice: once with the key of the first stream and once with the key used for the second number.
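
Concretely, with a toy additive (mod 256) stream cipher:

    # Adding values carries through this kind of encryption; other operations
    # generally do not, which is the limitation mentioned just below.
    M = 256
    def enc(x, k): return (x + k) % M
    def dec(c, k): return (c - k) % M

    x, y, k1, k2 = 37, 18, 200, 91
    # plaintext value added to an encrypted one: one decryption recovers x + y
    assert dec(enc(x, k1) + y, k1) == (x + y) % M
    # two encrypted values added: decrypt the sum under both keys
    assert dec(dec(enc(x, k1) + enc(y, k2), k1), k2) == (x + y) % M
    # multiplication does not carry through the same way
    assert dec(enc(x, k1) * y % M, k1) != (x * y) % M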

However you (usually) cannot usefully do other operations.

You also then get into another area of security in that you have to ask if you are leaking information about the encrypted data by the way you manipulate it even though it is still in ciphertext.

Cryptonoob August 21, 2011 5:11 AM

If you can’t compute with encrypted data, is there a way to ensure that the one who has access to the hardware (the clustered cloud computing provider) won’t get access to your data?

I’m thinking:
Encrypt data before sending it to the CCCP. Data to be computed is split into the smallest possible parts and processed on separate machines of the cluster (even multiple providers), and the results (that require less or practically no computation) are combined back on your system.

They compute in fully inaccessible virtual machines that process only data with your passcode, running on encrypted filesystems.

Cryptonoob August 21, 2011 5:33 AM

This would seem to depend on the inaccessibility of the virtual machine’s memory, which AFAIK isn’t encrypted.

Is encrypting the virtual machine’s RAM, where the computations take place, i.e. where the program runs, possible, even in theory?

Clive Robinson August 21, 2011 6:21 AM

@ Cryptonoob, Nick P,

With regards to my earlier comments about Moore’s Law, it just so happens the Register has just put up an article on that very subject,

http://www.theregister.co.uk/2011/08/20/microprocessors_may_face_trouble_ahead/

It’s important background reading for anybody in the “Systems Security” field, because only having four FAB plants doing 20nm lithography has some very serious implications, as does the fact that the market is now highly stratified.

Clive Robinson August 21, 2011 6:56 AM

@ Cryptonoob

“If you can’t compute with encrypted data, is there a way to ensure that the one who has access to the hardware won’t get access to your data?”

The simple answer has always been in the past “If someone gets access to the console it’s game over”.

However that is not of necessity true as you appear to be thinking,

“I’m thinking: Encrypt data before sending it to the CCCP. Data to be computed is split into the smallest possible parts and processed on separate machines of the cluster (even multiple providers), and the results (that require less or practically no computation) are combined back on your system.”

I’m a bit ahead of you on this, in that I’m thinking of multiple low-power, simple-functionality CPU cores on chips. As has been pointed out by Robert T, you can get several thousand 8051-equivalent CPUs onto a chip these days. However I’m thinking of using stack-orientated CPUs that have no knowledge of anything other than the CPU internal registers, the stack heads, and memory arrays/buffers controlled by a security hypervisor. That is, each core is segregated from memory resources by a Memory Management Unit (MMU) that, unlike conventional designs, is not controlled by the CPU using it but by the security hypervisor. Communication is via “post box” buffers in memory, whereby the CPU core writes to an output buffer, reads from an input buffer, and is controlled by simple arbitration logic.

The design has other features, and the intent is to put each CPU core in a “jail” with strictly controlled resources such that malware does not have an easy time and cannot hide itself.

With a few slight changes the design could be changed to implement a system that with the use of PUF’s could start getting close to what you want…

Which brings me to your final question,

“Is encrypting the virtual machine’s RAM, where the computations take place, i.e. where the program runs, possible, even in theory”

Yes and it’s actually not as difficult as you think.

First, put a crypto engine between the CPU core and its memory; secondly, have each CPU core hold its own PK pair internal to the core. You get sent the public key for each CPU core, you encrypt the program and data with a symmetric cipher (say AES), and then encrypt the keys along with other info under the respective public keys. The wrapped-up package gets sent to the hypervisor, which unbundles the various parts for the different cores into main memory. The hypervisor then sends the public-key-encrypted AES keys to the respective cores to put into their crypto engines.
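
A software analogue of that key-wrapping step, assuming PyCryptodome (in the hardware scheme above the private key would of course live inside the CPU core rather than in software):

    # Sketch: wrap a per-program AES key under a core's public key, the way the
    # hypervisor would hand each core encrypted code/data plus a wrapped key.
    from Crypto.PublicKey import RSA
    from Crypto.Cipher import PKCS1_OAEP, AES
    from Crypto.Random import get_random_bytes

    core_keypair = RSA.generate(2048)        # stands in for the core's PK pair
    program_key  = get_random_bytes(32)      # AES-256 key for this core's image
    wrapped_key  = PKCS1_OAEP.new(core_keypair.publickey()).encrypt(program_key)

    ctr = AES.new(program_key, AES.MODE_CTR)
    encrypted_image = ctr.encrypt(b"...program text and data...")

    # Inside the core: unwrap with the private key, then stream-decrypt memory.
    unwrapped = PKCS1_OAEP.new(core_keypair).decrypt(wrapped_key)
    plain = AES.new(unwrapped, AES.MODE_CTR, nonce=ctr.nonce).decrypt(encrypted_image)
    assert plain == b"...program text and data..."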

The system is not perfect but it goes a long way towards what you want.

However there is a problem, with “tapeout costs” being up in the million bucks and above bracket these days I can’t see many takers for the investment (I could be wrong but…).

Nick P August 21, 2011 2:35 PM

@ Clive Robinson & cryptonoob

“With regards to my earlier comments about Moore’s Law, it just so happens the Register has just put up an article on that very subject,”

This has been evident for a long time. I was looking into “spintronics” and optical computing a few years ago. They’ve made quite a bit of progress in those areas. I think the spin-based computers have the most promise so far as performance goes. Imagine finally being able to ditch the fundamental 0 or 1 model for 0-3 or 0-7. Oh the possibilities…

“It’s important backround reading for anybody in the “Systems Security” field, because only having four FAB plants doing 20nm lithography has some very serious implications as well as the fact the market is now highly stratified.”

RobertT and I discussed this at length. I was concerned that having only a dozen fabs or so increases the opportunity for wholesale subversion. He corrected me & noted that only four fabs produce virtually all mobile phone chips. Hence, compromise (or heavily pay) four companies and you can get subverted chips on demand. Yikes. It’s why I’ve promoted government-sponsored construction of a 20nm fab onshore & operated in a DOD-certified way. Sure, the NSA might build in backdoors for those chips, but we’ve already narrowed the subversion possibility down to one organization. That’s progress.

“The design has other features, and the intent is to put each CPU core in a “jail” with strictly controlled resources such that malware does not have an easy time and cannot hide itself.” (clive)

“Is encrypting the virtual machine’s RAM, where the computations take place, i.e. where the program runs, possible, even in theory” (cryptonoob)

Clive’s prison model doesn’t help in the cloud computing scenario because you’re trying to prevent subversion by an adversary with unlimited access to the hardware. Since VM’s move freely amongst hardware, it’s also difficult to tell if they are cloning it and taking down hardware to get snapshots of what’s going on inside. The only way to let an untrusted 3rd party process your data is relying on special chips that (1) are tamper-resistant, (2) encrypt all data leaving the chip, (3) authenticate any software entering the chip, and (4) leave open no backdoors related to administration. That said, several schemes have been designed. (Not used by any cloud computing vendor, of course.)

Clive’s hinting at one I mentioned a while back. The first group in was MIT’s Aegis processor, which authenticated program code, encrypted stuff in RAM, and did an integrity check on any incoming data to detect tampering. The next step was the SP, SecureCore and Bastion family of architectures (Google Bastion) that integrated Aegis-style protections with a legacy processor core (i.e. SPARC8) & a separation kernel (google it) to provide maximum protection from hardware & software threats. Recently, a new group has proposed an architecture called SecureMe that they say has advantages over Bastion. (Clive, you need to look into SecureMe. I remember I liked reading the paper.) The old solution, refreshed a few times, was carrying out certain critical computations in an IBM 4758 crypto coprocessor. Those things were tamper-resistant as hell, but still had weaknesses that I recall.

So, cryptonoob, if you want to trust the privacy of your data, keep it as close to you and in as few hands as possible. If you want to trust admins less, then use one of these exotic architectures, trusted logging, least privilege, and third-party monitoring of admin activities. Of course, you will probably have to design the SOC yourself and pay top dollar due to low volume. You can’t get that kind of security cheap after all. 😉

Nick P August 21, 2011 2:56 PM

@ Clive Robinson

That was actually a really nice article that summed up almost all of the key issues, except subversion or on-chip security. (Unsurprising…) I think it’s going to be hard to cram all of the legacy functionality of a 4G smartphone and something like Bastion onto one chip. At the least, the constant use of crypto and integrity-checking might kill the battery. The only solution I can immediately think of is those chip technologies that turn off power to unused parts of the chip. On standby, it could turn most things off while keeping the security- and baseband-related functionality on.

The other interesting tidbit was heterogeneous computing. I’ve always been a big supporter of that. Back in your day (:P), computer hardware was as diverse as software. Although a burden for developers, I always thought this was a good thing. Instead of just x86, you also had very reliable cheap RISC processors and even an Intel platform supporting capability-based security. The lack of on-chip and on-board integration of functionality meant more flexibility in choosing the parts. We lost that over time.

The closest things we’ve had to it are high-end gaming/workstation PC’s and high-performance computing applications. In the gaming area, people regularly use special-purpose cards like GPU’s, PhysX and network offloading to boost performance. The HPC people have used FPGA’s, GPU’s, non-Ethernet interconnects (Myrinet/InfiniBand all the way!), NUMA for huge memory (I’d love an Altix), and even special-purpose minimized OS’s for the compute nodes. I’d note that the latter is the performance-seeking equivalent of your prison approach: the nodes have just enough OS to boot, run diagnostics, receive programs to execute, and communicate the results of execution. Much better than a whole Linux or Windows OS per node. 😉

So, I for one welcome our new [heterogeneous] masters. I’d like a programmable FPGA or high-performance DSP in every general-purpose processor. I’d also like more high-level development environments like RapidMind that make it easier to develop for heterogeneous systems.

Cryptonoob August 21, 2011 4:09 PM

@Nick P and Clive

Thanks for the responses.

So 100% security CAN be achieved, if money is NO issue using custom hardware – even building your own fab costing billions – even when you process off-site and let other people handle the hardware?

Nick P August 21, 2011 8:44 PM

@ Cryptonoob

I hate to dash your hopes but there is no absolute security. All we can do is increase the assurance level. That is, the confidence level we can have that the resulting product will meet its security goals. The most high assurance secure systems ever built weren’t 100% secure & even that IBM coprocessor had flaws.

Security is a continuum that flows upward and sideways. Vertically, it goes like this (today): hardware (from VHDL to transistors); firmware; hypervisor layer; kernel mode OS/middleware/apps; privileged user-mode OS/middleware/apps; regular user-mode apps. Sideways, you’re looking at a computer, the user, the network, emanations, communication protocols and the outer network.

The old Orange Book and new Common Criteria establish how to do security at the computer and network levels, although they stopped short. We do know what it takes, though. Basically, you need a precise mathematical model of the requirements, security policy, high level design, and low level design. All of these must be proven to correspond. A verified translation from low level design to machine code must be performed. The hardware must be developed similarly. A covert channel analysis must be performed to show no illicit information flows occur that defeat confidentiality. Exhaustive testing and regular power-on tests are needed to catch hardware failures.

Designing a system with this level of assurance at each level is infeasible from both a financial and marketing standpoint. For believable proofs, the system must be very minimal and designed in a structured, layered way that permits verification. This is rarely done and contrary to what the market demands. Full verification of a useful system all the way down to the hardware would cost no less than $25 million per instance of the system. Each modification must be re-analyzed and evaluated to see how it changes things, meaning a change to even a small system takes months and hundreds of thousands of dollars. And all this doesn’t account for a fully subverted development team or partially subverted fab. 😉

Hence, 100% security cannot be achieved and the highest security won’t be achieved in practice most of the time, even if one is protecting high-value assets. The defender always gravitates toward something that is “good enough” or feels secure. There are many important economic, psychological and legal motivations to NOT go for perfect security. Anyone wanting it is essentially on their own and better have a shitload of cash. Just developing an ASIC from scratch costs around $30 million, with the TCB probably costing another $25-30 million. Evaluations are typically around $50 million at that level, more if you want it faster than 2-5 years. So, again, good luck.

RobertT August 22, 2011 1:05 AM

“RobertT and I discussed this at length. …He corrected me & noted that only four fabs produce virtually all mobile phone chips. Hence, compromise (or heavily pay) four companies and you can get subverted chips on demand. Yikes…”

Actually I seem to remember also mentioning that almost all smart phones use some variant of the ARM Cortex core. So subverting just ONE processor core (ARM) would give you direct hardware subversion of the cryptographic engines for all smartphones.

At the time I also remember saying (or maybe just thinking) that I believed I could sneak exploit hardware past every review committee and right into ARM’s reference database. I’m confident of this, because the very concept of hardware side channels is so poorly understood. The exact hardware meant to protect against DPA can also (if subverted) leak the very information it is meant to protect. In the case of ARM, as a commercial entity, their databases are trade secrets, so if the compromise is buried deep within the “hard macro” core, then it will never be openly reviewed. Now that’s YIKES…

Today there are also really only four sources for smartphone chipsets (Qualcomm, MediaTek, Broadcom, STMicro).

Take a close look at the critical team members for these companies and you will find an interesting selection of ex-members of the NSA, GCHQ, DPSD, Stasi, FSB, BND, Mossad… heck, I can even name at least one DSD guy. If you just look at the complexity of QAM-on-OFDM radio systems (3G and 4G) you’ll understand that there are very good reasons why guys with histories in secure military radio are now playing critical roles in these smartphone companies. But I do find myself asking: who is really signing their paycheck?

Jose August 23, 2011 11:23 PM

Again, the OTP (one-time pad) and public-key encryption systems save the potatoes from the oven once more…

JOSE

Joe August 24, 2011 11:53 AM

I would like to secure my fully encrypted laptop for decades and this attack worries me a little.

I have been researching full disk encryption products and all of them use AES (PGP, SecurStar, TrueCrypt). Please note that I am talking about full disk encryption (of the operating system) and not about external device encryption, which allows different algorithms to be used.

Can anyone please recommend a full disk encryption product that uses something other than AES?

Thank you

MarkH August 24, 2011 2:16 PM

@RobertT et al.:

I don’t quite follow this concern about integrated circuit foundries (fabs). My picture (please correct me, if I’m wrong) is that the chip designers design the masks (physical layouts) at rather enormous expense, and send these designs to the fabs to get chips manufactured.

If my picture is true to life, then a fab wanting to tamper with a design would have to reverse-engineer the mask designs to the functional circuit level, redesign the logic, and then redesign the masks accordingly.

Supposing that a fab followed this nefarious course (flawlessly, without making a mistake that breaks the original design and reveals the tampering to the customer) … the customer could still make an electron microscope inspection of any chip, and have a good opportunity to detect the tampering.

Am I missing something here?!?!?

Clive Robinson August 24, 2011 5:48 PM

@ MarkH,

“I don’t quite follow this concern about integrated circuit foundries (fabs)”

Nick P and Robert T mainly, with Richard Steven Hack and myself supporting, have been having a conversation about why the China APT may be operating at a much lower level than you think, i.e. at the silicon level. And we have been having it on and off for quite some time now.

What has spiced it up is news that some US agencies have woken up to this (if they are reading: hi, welcome, and draw up a chair) and have started a research program to detect “backdoors in silicon”.

It has been said that you need fewer than two thousand gates on the silicon to effectively trojan-horse it; others think less, and we know that for some chips just a handful would be sufficient due to the bulk of the required functionality already being on chip.

Now you have to understand that effectively nobody designs at the transistor level on silicon any longer, or for that matter at the gate level. Most design does not even start at the macro level but at the functional-block level. Those designing the chips “buy in a library of functional blocks” such as an ARM CPU core; because these blocks are proprietary, the designers don’t get to know what is inside them…

So even if they could flip the lid and get the chip under an electron microscope what are they going to see?

Now you have to realise that for some “Systems On a Chip” the chip is 20mm by 20mm and the lithography is 40nm. I’ll leave you to do the math but it’s a lot of transistors of which very very few are needed to make a gate or memory cell. It would simply not be feasible to view such a chip to look for extra tracking.
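To put a rough (and purely illustrative) number on it, here is a back-of-envelope sketch in Python; the transistor density is my own ballpark assumption for a 40nm process, not a measured figure:

```python
# Back-of-envelope: how big a haystack is a 20mm x 20mm SoC, and how small
# is a ~2000-gate trojan inside it?  The density figure is an assumption
# for illustration only; real 40nm designs vary with their logic/SRAM mix.
die_area_mm2 = 20 * 20                      # 400 mm^2
transistors_per_mm2 = 4_000_000             # assumed ballpark for 40nm
total_transistors = die_area_mm2 * transistors_per_mm2

trojan_gates = 2_000
trojan_transistors = trojan_gates * 4       # roughly 4 transistors per gate

print(f"Total transistors : {total_transistors:,}")   # ~1.6 billion
print(f"Trojan transistors: {trojan_transistors:,}")
print(f"Trojan fraction   : {trojan_transistors / total_transistors:.6%}")
```

On those assumed numbers the trojan is around five millionths of the die, which is why inspecting for it is not realistic.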

And as Robert T has indicated there are ways and means to hide this extra functionality so it cannot be easily seen (I’ll let him explain again).

What Nick P has been doing is trying to see how you would prevent the adding in of hidden extras.

The thing is that currently there is very little or no security in the design-to-silicon process, and the only checking done is functional, with known test parameters.

If you want to know how weak that process can be, think back to the then “all in-house shop” Intel and the “Pentium bug”, where a mistake in a lookup table mucked up the functioning of the floating-point arithmetic.

The thing is that these days there are only about four fab plants making silicon at the small-nm scales, and they are mainly located in areas over which the US has no control, staffed by employees who are mainly Asian, a sufficient number of them ethnic Chinese who may or may not be under the influence of Beijing.

Now the question then moves on to “yeah, so what can they do?” Well, that’s where we would get into a very long chat about side channels of various forms.

Suffice it to say we know that the AES competition was flawed in that the emphasis was on “efficiency and security”, not just “security”, and no consideration was given to side-channel attacks, although the NSA, who acted as consultants to NIST, were without doubt only too aware of them and the dangers they present.

So what happened is that during the competition the dual goal of “efficiency and security” got split apart. Security was looked at in the abstract algorithm and not in the workings of the code examples. The code examples were, as a result, highly efficient on the reference platforms…

Although the algorithm is secure (even after this latest attack), the code examples were far from secure. Within a couple of weeks of the AES winner being announced, an attack was shown on a practical implementation which enabled the key to be recovered by another machine on a network…

The problem was that the code examples were highly optimised for efficiency, and this involved things like “loop unrolling” and large lookup tables, which inevitably meant key-dependent “cache hits” on modern “efficient” CPUs. These “cache hits” could be “exercised” remotely; as a result, a carefully crafted attack could see the time differences caused by the pattern of bits in the key.

Thus in practice you could determine, from the time delays caused by cache hits, what the actual AES key was.
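For anyone who wants to see the principle rather than take it on trust, here is a deliberately simplified toy simulation (my own sketch, not code from any real AES library): a single key-dependent table lookup, a fake cache and synthetic hit/miss timings are enough to recover part of a key byte.

```python
# Toy simulation of a cache-timing leak from key-dependent table lookups.
# This is NOT real AES: one 256-entry "T-table", a fake cache with 16-entry
# lines, and synthetic timings.  It only shows why lookups indexed by
# (plaintext XOR key) can leak key bits through timing.

LINE_SIZE = 16                       # table entries per cache line
SECRET_KEY_BYTE = 0x3A               # the secret the "attacker" recovers

def lookup_time(index, cache):
    """Synthetic access time: fast on a cache hit, slow on a miss."""
    line = index // LINE_SIZE
    if line in cache:
        return 1                     # cache hit
    cache.add(line)
    return 10                        # cache miss (memory fetch)

def first_round_lookup(plaintext_byte, cache):
    """One key-dependent table lookup, as in the first AES round."""
    return lookup_time(plaintext_byte ^ SECRET_KEY_BYTE, cache)

# The attacker primes the cache so only table line 0 is resident, then
# times an encryption of every possible plaintext byte.
fast_plaintexts = []
for pt in range(256):
    cache = {0}                      # prime: only line 0 is cached
    if first_round_lookup(pt, cache) == 1:
        fast_plaintexts.append(pt)   # hit => (pt ^ key) lies in line 0

# Every fast plaintext satisfies (pt ^ key) // LINE_SIZE == 0, so they all
# share the key byte's high nibble.
recovered = fast_plaintexts[0] & 0xF0
print(f"Recovered high nibble of key byte: {recovered:#04x}")
print(f"Actual high nibble of key byte   : {SECRET_KEY_BYTE & 0xF0:#04x}")
```

Real attacks have to cope with noise and take statistics over many measurements, but the underlying leak is the same.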

Now the problem was that most people implementing AES in their code libraries simply took the reference examples, which were free of restriction, from the competition site.

Even some highly respected open-source libraries had them in. Sadly, by the time people started to flush out the bad code, many, many products were out there, and are still out there. Worse, it is known that there are code libraries (many of them commercial) still in use that have not updated to safer versions of the AES code, so bad executables are still being produced…

Now consider that with just a very few gates you could make a switch in the CPU core, one that you could turn on and off remotely, that could alter time delays in critical instruction execution. Thus code that had been written to stop a time-based side channel from leaking key bits would leak key bits again. In by far the majority of cases this small change would have no effect on the way the code executed and would thus go unnoticed.

There are many, many more ways the security of a design could be compromised by having hidden extras on the silicon.

One area I will let Robert T explain is the complex modulation schemes used for “efficient bandwidth” usage in the likes of mobile phone and other communications systems. They are designed to also be error tolerant, which means they have redundancy within them, and where there is redundancy there is the easy possibility of side channels…

Oh, and the NSA are aware of side channels and AES: if you look at their Inline Media Encryptor for encrypting hard disks and the like, it is only certified “when the data is at rest” (i.e. when it’s not encrypting or decrypting).

MarkH August 24, 2011 8:00 PM

@Clive:

I still don’t “get it.” If an IC designer/vendor sends mask designs to the fab, it seems to me that they would have a high probability of detecting a change, even if it involved only a few dozen transistors out of a million-transistor chip.

And again, it seems to me that such “mask editing” would be an extraordinarily difficult and expensive undertaking indeed.

Clive points out that designers make extensive use of large functional blocks (usually IP licensed from another firm, yes?) But if something like an IP CPU core is back-doored, what does that have to do with foundries? Whether there are 4 fabs, or 400, the vulnerability to insecure IP blocks would be the same. Or am I missing something?

Again, my knowledge of these processes is very scant. Please straighten me out if I’m misunderstanding how it all works.

I’m not suggesting that it is implausible for security to be deliberately compromised at the Si level. If the chip designer/vendor wants to do so, this should be no problem!

But to secretly implant such defects against the vendor’s will — how is this done without a “conspiracy theory” scenario, in which numerous people in a variety of separate organizations would all have to know, or at least have the possibility to discover, the implantation? And it would only take one of them to blow the whole thing wide open.

RobertT August 24, 2011 8:42 PM

“If my picture is true to life, then a fab wanting to tamper with a design, would have to reverse-engineer the mask designs to the functional circuit level, redesign the logic, and then redesign the masks accordingly.”

More or less a correct statement.

Although the fab typically knows in advance which hardware macros are being incorporated on which chips, so they have time to do the reverse engineering in advance of the mask order arriving.

Fortunately the task is not as difficult as it might first seem, because the engineer trying to create an exploit knows exactly which signals he wants to monitor. Signals that leak a lot of information are things like the carry-out of adder and multiplier blocks. So just routing this output signal to a string of flip-flops, to create a record of the carry-outs for the last 100 calculations, is a huge security compromise.

On top of this you need to create some way to read this data; ideally you will try to use an existing serial-to-parallel circuit so that the exploit only needs a mux to switch the serial source and some trigger method (code sequences, maybe undocumented instruction execution, or other..)
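To make concrete why a bare carry-out bit is such a gift, here is a toy arithmetic sketch (pure Python, with a hypothetical 8-bit adder whose carry-out the attacker can read back; it does not correspond to any real chip or instruction set): eight chosen additions are enough to binary-search an 8-bit secret operand.

```python
# Toy illustration: exposing an adder's carry-out leaks its secret operand.
# We assume a hypothetical 8-bit adder whose carry-out bit the attacker can
# observe while supplying the other operand.  Eight queries then recover
# the secret byte by binary search.

SECRET = 0xB7    # unknown to the attacker

def carry_out(addend):
    """Carry-out bit of the 8-bit addition SECRET + addend (the 'leak')."""
    return (SECRET + addend) >> 8 & 1

# carry_out(256 - m) == 1  iff  SECRET >= m, so each query halves the range.
lo, hi = 0, 255                  # SECRET lies somewhere in [lo, hi]
for _ in range(8):
    mid = (lo + hi + 1) // 2
    if carry_out(256 - mid):     # carry set iff SECRET >= mid
        lo = mid
    else:
        hi = mid - 1

print(f"Recovered secret: {lo:#04x} (actual {SECRET:#04x})")
```

In a real exploit the read-back path would be something like the mux-plus-serial-register trick just described; the arithmetic is only there to show how little raw leakage is needed.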

At the fab level these edits need to be done correctly with no mistakes (unless they somehow have access to the original design database). If I wanted to make such an edit invisible I would probably use the salicide layer to create a connection. Most people would not even suspect it even if they found an error somehow. The other way to create an error relies on tricking the fab’s OPC rules into creating a connection where none should be, but this is really a design team member intentionally compromising the chip database, with a compromise that will pass all the review tools (LVS) and formal verification methods BUT intentionally screws up the OPC stage. Interestingly, a couple of weeks ago I got an email offering to sell me an OPC “exploit”.

Could the customer see the changes with an electron microscope? Maybe, especially if you were silly enough to use the upper layers of metal. But remember this cannot be seen with any optical microscope, so to discover an M1 change the customer would need to polish back 7 layers of metal to get a clean surface to scan. But a smart exploit would use the minimum possible change, say by adding a single M1/M2 via.

RobertT August 24, 2011 11:20 PM

@MarkH
“And again, it seems to me that such “mask editing” would be an extraordinarily difficult and expensive undertaking indeed.”

First off you need to understand that I’m not suggesting that a few script kiddies sitting in Russia can compromise a chip mask. This is intentional and executed by someone in a trusted position, probably at the fab, who is very skilled and knowledgeable. However it is not necessarily expensive, because the customer is buying the mask anyway; all you need to do is edit the layout database after the customer signs off. From the old database (customer supplied) you can do formal verification against the new “edited” structure, and you can also run all the same design/layout/timing rules as the design house, so there is no reason to believe that the resultant chip will not work correctly. Additionally you do not necessarily need the compromise to be present from day one, because masks can get dirty or broken and the fab will just buy another mask, so the compromise can be inserted into an in-production product. For this the fab employee has potentially months to make the changes AND, because it is already a production product, the original design team has moved on, so no one is looking closely for problems.

How much is this information worth?
Think of the value you create if EVERY smartphone in the world is compromised at the chip level and leaks its encryption keys to you.
It has to be cheaper than running Silkworth. It also works just as well in South Africa as it does in North America. Now what is that capability worth?

If you need some help estimating the value, I’d suggest you read the European Parliament inquiry into Echelon.

I’ve also said several times that I have never heard of an actual chip being compromised at the fab. BUT it is possible, especially if someone on the design team added support circuits and specifically routed a wire across a critical signal in such a manner that a single via could short the two signals. This gives the design/layout person plausible deniability, with a database that is completely clean, including passing formal verification, yet by adding or moving just ONE via suddenly critical information is leaking.

BTW, only those who know of the compromise know to look at the circuits into which the information is leaking. Think of something simple like a timer/counter that serially loads the leaked data and starts without resetting. Anyone who discovered this might curse at the stupid engineer who didn’t clear the T/C before starting it, but would they ever suspect that they had discovered an intentional side channel?

I think one step that you are also not understanding is that the chip’s masks are not made from the customer-supplied “layout database”; this database is used to construct a “size adjusted” and OPC-corrected database which the original customer never sees. Typically this second database is still called Mebes data, although most fabs don’t actually use the Mebes format today. I’d attack the data at the Mebes level.

hhhobbit August 26, 2011 8:25 PM

Increase the rounds unless somebody really has a better idea. I will stay out of that one and let you other people hash it out. Just let us idiot users (I am talking about me) know if we need to decrypt with the old software and then re-encrypt with the new software, if that is necessary. The attacks get better instead of getting worse? Well, if they got worse nobody would admit they went backwards.

In this day and age, with 90%-plus of people using Windows, I suggest you are far more likely to have some keylogger snatching the symmetric password, the encrypted data, and maybe even the unencrypted data than you are to have a cryptanalyst cracking even AES-256 with just a few encrypted samples. If it is an encrypted disk, it is just like there is no encryption at all while it is running. You don’t believe me on the keylogger angle? Look at this URL, which I just received from SANS:

http://www.infosecurity-us.com/view/20335/

Almost all the malware I study invariably has a key logger component. This is especially true for all of the banker trojans. The weakest link isn’t the algorithms – it is the PEBKAC factor, … and Windows. Why brute-force anything if you can do it much easier another way? If I don’t miss my guess somebody will be using the password “pencil.” Actually it is even more likely they won’t even be using encryption at all. Ask Maine if they did use encryption.

Daniel Ferrer September 15, 2011 11:17 AM

AES.
There is a whole business of breaking AES with side-channel attacks. Even before it was picked, there was this technical report that was skipped over by the selection process: “A timing attack against Rijndael” (1999), by François Koeune and Jean-Jacques Quisquater.

A lot of detail about breaking AES is at the AES Lounge:
http://www.iaik.tugraz.at/content/research/krypto/aes/

Algorithm theory has its place, but most of us live in the real world with real-world security concerns. Time to set up and put the two together – always.
Thanks, Daniel Ferrer.

MarkH September 15, 2011 4:13 PM

@Daniel Ferrer:

The comparative ease of AES timing attacks is a major weakness. However, this doesn’t unconditionally disqualify AES for use in real-world security — it depends on the application.

For the purposes of traditional confidentiality (the inability to read an intercepted message), AES is (as far as the world now knows) very strong, and will be for a long time.

Side-channel attacks — including timing attacks — require significant access to the encryption or decryption processes. This is equivalent to having an enemy spy in the office with the cipher clerks. In my opinion, if an adversary has such access, confidentiality is gravely threatened, even if the cipher algorithm in use doesn’t have known vulnerabilities to timing attacks.

Unfortunately, for common everyday applications (such as SSL/TLS), we must generally assume that an adversary is inside the computer, hope that he can’t get more direct access to secrets (using common-as-dirt methods such as keyloggers), and prefer a cipher algorithm that we think is proof against timing attacks.

But for other cryptographic applications, where ciphertexts are stored or archived over time, and may be subject to interception by parties who lack access to the cipher process, AES remains a useful tool.

I’m not arguing that we shouldn’t prefer a cipher without this weakness — rather, that practical judgments concerning security depend on the application.

Mike Rodriguez September 16, 2011 4:25 PM

Use Triple-AES and be done with it

I don’t think it’s currently known if AES is a group. The answer bears on whether or not T-AES will add any security.
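For what it’s worth, here is a minimal single-block sketch of what “Triple AES” would look like if built EDE-style by analogy with Triple DES, using the pyca/cryptography package (my own illustration; it only shows the construction and says nothing about whether the cascade actually adds security, which is exactly the open question above):

```python
# Minimal single-block sketch of EDE-style "Triple AES": encrypt with k1,
# decrypt with k2, encrypt with k3, by analogy with Triple DES.
# Construction only; no claim is made about added security.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ecb(key, block, encrypt=True):
    """Encrypt or decrypt one 16-byte block with AES in ECB mode."""
    cipher = Cipher(algorithms.AES(key), modes.ECB())
    op = cipher.encryptor() if encrypt else cipher.decryptor()
    return op.update(block) + op.finalize()

def triple_aes_encrypt(k1, k2, k3, block):
    return aes_ecb(k3, aes_ecb(k2, aes_ecb(k1, block), encrypt=False))

def triple_aes_decrypt(k1, k2, k3, block):
    return aes_ecb(k1, aes_ecb(k2, aes_ecb(k3, block, encrypt=False)),
                   encrypt=False)

k1, k2, k3 = (os.urandom(16) for _ in range(3))
pt = b"sixteen byte msg"          # exactly one 128-bit block
ct = triple_aes_encrypt(k1, k2, k3, pt)
assert triple_aes_decrypt(k1, k2, k3, ct) == pt
```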

Ryan December 31, 2014 4:03 PM

Well, if you somehow manage to encrypt 2^88 chosen plaintexts for an attacker without rekeying by 2^64 blocks, maybe you deserve to lose several bits of security.

Art November 12, 2015 12:44 PM

In 2011 this biclique attack required 2^126.2 operations to recover an AES-128 key. For AES-192 and AES-256, 2^190.2 and 2^254.6 operations were needed.

In 2015, the results were improved to 2^126.0 for AES-128, 2^189.9 for AES-192 and 2^254.3 for AES-256.
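A quick sanity check of what those exponents mean in practice (the only assumption is that the baseline is exhaustive key search at 2^128, 2^192 and 2^256 operations):

```python
# Speedup of the 2015 biclique figures over plain exhaustive key search.
attacks = {
    "AES-128": (128, 126.0),
    "AES-192": (192, 189.9),
    "AES-256": (256, 254.3),
}
for cipher, (brute_force_exp, biclique_exp) in attacks.items():
    speedup = 2 ** (brute_force_exp - biclique_exp)
    print(f"{cipher}: about {speedup:.1f}x faster than brute force")
# Roughly 4x, 4x and 3x -- a rounding error against key spaces this large.
```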

In other words, nothing has really changed in 4 years with biclique attacks.

In practical terms, AES still stands just as secure as it did in 2000.

sima January 23, 2017 9:57 AM

Hello
I am searching for the time complexity of AES in a comparable unit, like T(mul) for modular multiplication, but I can’t find it. Would you please guide me?
