FREAK: Security Rollback Attack Against SSL

This week, we learned about an attack called “FREAK”—“Factoring Attack on RSA-EXPORT Keys”—that can break the encryption of many websites. Basically, some sites’ implementations of secure sockets layer technology, or SSL, contain both strong encryption algorithms and weak encryption algorithms. Connections are supposed to use the strong algorithms, but in many cases an attacker can force the website to use the weaker encryption algorithms and then decrypt the traffic. From Ars Technica:

In recent days, a scan of more than 14 million websites that support the secure sockets layer or transport layer security protocols found that more than 36 percent of them were vulnerable to the decryption attacks. The exploit takes about seven hours to carry out and costs as little as $100 per site.

This is a general class of attack I call “security rollback” attacks. Basically, the attacker forces the system users to revert to a less secure version of their protocol. Think about the last time you used your credit card. The verification procedure involved the retailer’s computer connecting with the credit card company. What if you snuck around to the back of the building and severed the retailer’s phone lines? Most likely, the retailer would have still accepted your card, but defaulted to making a manual impression of it and maybe looking at your signature. The result: you’ll have a much easier time using a stolen card.

In this case, the security flaw was designed in deliberately. Matthew Green writes:

Back in the early 1990s when SSL was first invented at Netscape Corporation, the United States maintained a rigorous regime of export controls for encryption systems. In order to distribute crypto outside of the U.S., companies were required to deliberately “weaken” the strength of encryption keys. For RSA encryption, this implied a maximum allowed key length of 512 bits.

The 512-bit export grade encryption was a compromise between dumb and dumber. In theory it was designed to ensure that the NSA would have the ability to “access” communications, while allegedly providing crypto that was still “good enough” for commercial use. Or if you prefer modern terms, think of it as the original “golden master key.”

The need to support export-grade ciphers led to some technical challenges. Since U.S. servers needed to support both strong and weak crypto, the SSL designers used a “cipher suite” negotiation mechanism to identify the best cipher both parties could support. In theory this would allow “strong” clients to negotiate “strong” ciphersuites with servers that supported them, while still providing compatibility to the broken foreign clients.

And that’s the problem. The weak algorithms are still there, and can be exploited by attackers.
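As an illustration of the configuration side of this, here is a minimal sketch (an assumption-laden example, not the fix itself, since FREAK exploited client implementation bugs) of a TLS client that restricts its own offer to high-strength cipher suites using OpenSSL cipher-string syntax, then reports what was actually negotiated:

```python
import socket
import ssl

# A sketch only: the real remedy for FREAK is patched TLS libraries and
# servers that no longer offer RSA_EXPORT suites. This client refuses
# low/export-grade suites in its own offer and prints the negotiated result.
host = "example.com"   # illustrative host
ctx = ssl.create_default_context()
ctx.set_ciphers("HIGH:!aNULL:!eNULL")   # OpenSSL cipher-string syntax

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        name, version, bits = tls.cipher()
        print(f"Negotiated {name} ({version}), {bits}-bit symmetric strength")
```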

Fixes are coming. Companies like Apple are quickly rolling out patches. But the vulnerability has been around for over a decade, and has almost certainly been used by national intelligence agencies and criminals alike.

This is the generic problem with government-mandated backdoors, key escrow, “golden keys,” or whatever you want to call them. We don’t know how to design a third-party access system that checks for morality; once we build in such access, we then have to ensure that only the good guys can do it. And we can’t. Or, to quote the Economist: “…mathematics applies to just and unjust alike; a flaw that can be exploited by Western governments is vulnerable to anyone who finds it.”

This essay previously appeared on the Lawfare blog.

EDITED TO ADD: Microsoft Windows is vulnerable.

Posted on March 6, 2015 at 10:46 AM

Comments

David Penfold March 6, 2015 10:57 AM

FWIW, this vuln is not present in LibreSSL, as it was one of the many legacy components that got the chop.

Anura March 6, 2015 12:15 PM

Firefox/NSS also removed the legacy algorithms long ago. Server administrators have been told to disable SSL 2.0 for years; this was hardly an unknown attack, but it’s not surprising that so many people are still vulnerable.

Richard Schwartz March 6, 2015 12:15 PM

I’m trying to understand how much of the blame here lies with the fact that the old export keys existed, versus how much lies in the fact that people have been too lazy to remove them from their configurations. How much longer before this same vulnerability becomes a real problem for the old 1024-bit keys?

Martin March 6, 2015 12:59 PM

Encryption works great, until it doesn’t.
Encryption works great, as long as no one makes a mistake.
Encryption works great, unless something goes wrong.
Encryption works great, as long as everything works right.

And the wonder of peer-review means no one will say whether it’s good enough or not, no one is willing to sign their name, until AFTER something wrong is found out and then here they come!

And the wonder of peer-review means no one gives a shit unless they’re paid a lot of money and are absolved of all responsibility.

Encryption is like a collision-warning system that goes off AFTER the plane crashes.

Curious March 6, 2015 1:25 PM

Hi, the first time I read about this was earlier today, on Krebs on Security I think. I didn’t quite understand it then and I still don’t understand it now, because the mechanisms are not discussed. Why so?

OK, so encryption fallback I understand fully; it has been used against MANY things, for instance EDACS radio systems, to name something that I am familiar with.

However, if in my browser I disable SSL3 and weak ciphers such as RC4 and DES, and Qualys SSL Labs says things are fine, is it still possible to use this method?

Today I used Qualys SSL Labs to verify that my browser is in fact doing what I want it to do, and in the rare cases where some idiot still doesn’t use proper encryption, I have a plugin called Cipherfox that lets me click to accept RC4. Today that is in fact EXTREMELY rare, and I report it to the idiots every time. So should you.

Please can you explain whether this is some other fallback method that I am not aware of, and what the mechanics of it are? If there is something new here, then I think this is a big thing and needs not only to be discussed but to be fixed NOW!
If it’s not new, then why are we talking about it now? Ditching SSL3 and RC4 etc. is nothing new, really.

Martin March 6, 2015 1:29 PM

Did you hear about the police dept. with the monster-sized budget? A full staff, everyone wore crisp new uniforms, the patrol cars were spotless, shiny, new.

All those resources, and they promised to serve and protect. Until a crime took place. Then, they always blamed the victim “you should have called sooner, there’s nothing we can do.” Anytime someone called for help they were told “unless you can provide the social security number of the suspect and their DOB and provide detailed color photos of them, there’s nothing we can do.”

The name of that police dept is “Encryption.”

MarkH March 6, 2015 1:30 PM

I must challenge the quote from The Economist: “a flaw that can be exploited by Western governments is vulnerable to anyone who finds it.”

The infamous Dual_EC_DRBG (believed to be backdoored by NSA) is a counterexample. That parameter Q was not chosen at random is a flaw, but even if everyone else in the world knows it, they can’t exploit the flaw without knowing the index of Q.

By definition of the Dual_EC_DRBG curve, Q must be a power of P, so “anyone” could break it by taking the discrete logarithm, even if Q were chosen at random … however, this is believed (with good foundation) to be computationally infeasible.

Hence, the backdooring of Dual_EC_DRBG has no effect on its security with respect to parties not knowing NSA’s secret index.

Certainly, in most cases, intentional weakening of a cryptosystem by a particular agency reduces its security against other actors, but Dual_EC_DRBG is an instance in which non-NSA exploiters have no attack vector if NSA keeps its “magic number” secret.
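To make that concrete, here is a hedged sketch of the suspected trapdoor structure, following the public Shumow–Ferguson description (the notation is illustrative; e below is the inverse, modulo the group order, of the index mentioned above):

```latex
% s : current internal state,  P, Q : the fixed curve points,
% x(.) : x-coordinate,  trunc(.) : drop the top 16 bits.
\begin{align*}
  s' &= x(sP)                                   && \text{(state update)} \\
  r  &= \operatorname{trunc}\bigl(x(s'Q)\bigr)  && \text{(published output)}
\end{align*}
```

If someone knows a secret e with eQ = P, then from the full point R = s'Q (recoverable from r by guessing the 16 dropped bits) they can compute eR = s'(eQ) = s'P, whose x-coordinate is the state after the next update, so every subsequent output follows. With a genuinely random Q, no such e is known, and finding one is a discrete-logarithm problem.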

Anon March 6, 2015 1:38 PM

@Richard Schwartz:
I ran some rough numbers for you. Using the number field sieve as my complexity function, then, if my math is right, it takes over 4500 times longer to factor a 1024 bit key than a 512 bit key. Using Moore’s law as a working assumption, this same attack should be possible on 1024 bit keys in about 25 years.
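Here is roughly how one might run that estimate in code (a sketch using the standard heuristic GNFS running-time formula; the result is sensitive to which constants you plug in and what you assume about Moore’s law, so treat it as order-of-magnitude only, and it need not match the rough figures above):

```python
import math

def gnfs_log_work(bits: int) -> float:
    """Natural log of the heuristic GNFS running time
    L_n[1/3, (64/9)^(1/3)] for an n-bit modulus."""
    ln_n = bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)

ratio = math.exp(gnfs_log_work(1024) - gnfs_log_work(512))
doublings = math.log2(ratio)
print(f"1024-bit vs 512-bit factoring work ratio: ~{ratio:.1e}")
print(f"Moore's-law doublings needed to close the gap: ~{doublings:.0f}")
print(f"At one doubling every 18 months: ~{doublings * 1.5:.0f} years")
```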

I’m not advocating that anyone switch to 512-bit keys here, but last I checked, the reality is that we’re still a decent way away from factoring even 512-bit keys on a normal desktop setup. The 7 hours we’re being quoted is using Amazon cloud to do the heavy number crunching. So yes, while this is a problem for places that have access to super computers, even 512 bit is not yet something that’s breakable by any pimply faced kid with a laptop.

Anura March 6, 2015 1:48 PM

@Anon

While I agree that only people with significant resources will be able to break 1024-bit RSA…

The 7 hours we’re being quoted is using Amazon cloud to do the heavy number crunching. So yes, while this is a problem for places that have access to super computers, even 512 bit is not yet something that’s breakable by any pimply faced kid with a laptop.

Last I checked, anyone with enough cash has access to the Amazon cloud, which is not a super computer. Also, if Amazon cloud is too expensive for you, there’s always botnets.

Curious March 6, 2015 1:51 PM

Well anyhow, if this thing is just about the ciphers that are enabled by default in your browser and you are using Firefox, google it, go to about:config, security.ssl3, and make your best choices; there are lots of whitepapers around this issue, in fact.

Also, some plugins address this issue; one that I use sometimes is called ssleuth, where you can disable and enable the settings without going into about:config.

Read a little bit about it and do some research.
I am curious though, because it almost sounds like there is some other built-in fallback mode involved here that we are not aware of.
Hmm, that would be very interesting to hear about and would raise a lot of eyebrows.

keiner March 6, 2015 1:54 PM

@curious

I use Cipherfox too, and to tell the truth: up until some months ago I had to allow RC4 for amazon.de, and even nowadays for ebay.de…

Any questions?

Curious March 6, 2015 1:59 PM

Hi, OK, I have read the story a few more times and I think I finally got it; this is a major thing. Very interesting! Unbelievable stuff.

Personally, not that I care too much, but I wouldn’t touch any software produced in the United States from now on. And I will definitely make sure that my customers understand the message as well. Have a nice day.

Anon March 6, 2015 2:07 PM

Again, I’m not arguing that 512 bits is a good idea. It’s not.
I’m just trying to keep things in a little perspective here.
Factoring keys – even small ones like 512 bits – is still an expensive proposition. If the traffic that is being protected this way is only worth a few cents on average, the attacker will lose money trying to decrypt it. Not all secrets require particularly good encryption to be safe. I’m just saying that 512 bits is not yet at the stage where everyone can download an app for their phone and get a decryption in any reasonable amount of time (without spending money to outsource the computation like this researcher did).

Heck – 1024-bit encryption is not safe now if an attacker is willing to pay Amazon cloud for the next 3.7 years or so of computing power. It all depends on how much security you need.

Gerard van Vooren March 6, 2015 2:22 PM

Maybe it is time to say goodbye to SSL/TLS. It has been battle tested … and has failed, hard, in numerous places. Not only the protocol, but also the philosophy behind it, turns out to be a scam.

keiner March 6, 2015 2:29 PM

From hardware design, BIOS/UEFI, network protocols, WLAN, and Bluetooth to the firmware of your HDD, SSD, DVD, Blu-ray, you name it: WHAT IS NOT A SCAM THESE DAYS?

Anura March 6, 2015 2:49 PM

@Gerard van Vooren

I agree, although it’s about a decade away from getting fixed if we start today. Honestly, I’d like to see a whole suite of small, independently verifiable interfaces, and algorithms that conform to those interfaces: how to encode and authenticate data, how to do a handshake, how to exchange keymat, how to derive keys from the keymat, etc. High-level protocols would be built on those interfaces, and would also be easy to verify, as they depend only on the interfaces, not the underlying algorithms.
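As a purely illustrative sketch of what such a suite of small interfaces might look like (the names and signatures here are invented for the example, not an existing standard):

```python
from typing import Protocol

# Invented interface names for illustration; concrete algorithms would be
# separate, small, independently verifiable implementations of these.

class KeyExchange(Protocol):
    def public_message(self) -> bytes: ...
    def shared_keymat(self, peer_message: bytes) -> bytes: ...

class KeyDerivation(Protocol):
    def derive(self, keymat: bytes, label: bytes, length: int) -> bytes: ...

class AuthenticatedEncryption(Protocol):
    def seal(self, key: bytes, nonce: bytes, plaintext: bytes, aad: bytes) -> bytes: ...
    def open(self, key: bytes, nonce: bytes, ciphertext: bytes, aad: bytes) -> bytes: ...

# A handshake protocol written only against these interfaces could be reviewed
# on its own, independently of whichever concrete algorithms plug into them.
```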

Mike March 6, 2015 3:26 PM

@ Anon
Factoring keys – even small ones like 512 bits – is still an expensive proposition. If the traffic that is being protected this way is only worth a few cents on average, the attacker will lose money trying to decrypt it.


Heh, maybe it’s just me, but I read that and was immediately thinking about making some traffic generator that spews out shit and crap traffic.
Mikey

serf ib March 6, 2015 5:37 PM

Grasping concepts can be like grasping the right handful of sand.

Total surveillance is the problem!

Any known security exploit is not a problem for the security conscious or those operating a proper security model; it’s a problem for the public.

Unknown security exploits and backdoors are a problem, especially in the total surveillance state.

MrC March 6, 2015 7:42 PM

@ MarkH:

Re: Dual_EC_DRBG

  1. NSA couldn’t prevent Snowden from publicly depantsing them. Didn’t even see it coming. Why should we have any faith they can keep index-of-Q secret?

  2. Perhaps I’m wrong, but my understanding is that they’ve created a “break once, f*ck everybody forever” problem that wouldn’t exist with a random Q. Sure, you’ve got a helluva math problem that would require tying up major-nation-state level computational resources for a few years to solve, but you’ve also made a prize that’s worth doing exactly that. I’d be surprised if the Chinese weren’t already working on it.

Lawrence D’Oliveiro March 6, 2015 9:13 PM

“We don’t know how to design a third-party access system that checks for morality.”

Even if we could, would we be allowed to?

Supposing there was a magic sign over the data-access portal saying “Let only those who are pure of heart enter here”. Given some of the past record of certain law-enforcement personnel, how many of them do you think would be able to get in?

Gerard van Vooren March 7, 2015 3:25 AM

@ Anura

I like things to be simple. Something such as MinimaLT will work if implemented correctly.

On the other hand, we do live in a world where complexity rules because of competition, design by committee, backwards compatibility and TLA influences. We also live in a world where no-one is allowed to know everything about computers but we all use these machines.

The future that I prefer is one where it is easy as pie and secure by default when two computers are connected. This would mean a non-proprietary, platform independent networking protocol that is not designed by a committee. Something like a 9P and MinimaLT combo.

I agree however that there has to be a larger scope such as key storage and identity verification because the NSA usually just bypasses the encryption. The overall picture is indeed a set of “small, independently verifiable interfaces and algorithms that conform to those interfaces”.

MarkH March 7, 2015 5:01 AM

@MrC:

Whether the Dual_EC_DRBG parameter “Q” is chosen randomly, or generated by NSA using a known exponent (as we all think likely), the work of a “cracker” is exactly the same: they must compute the discrete logarithm of Q.

If each instance of the generator had its own randomly chosen Q, then so much the better for security … but of course that would have prevented back-dooring, too.


But my “big point” is, that it is possible to construct a back door that doesn’t compromise the system to other parties.

The world has lots of keys that are apparently kept confidential. Perhaps even NSA can do this; the stuff Snowden pilfered almost certainly does not represent the organization’s highest level of information protection.

Clive Robinson March 7, 2015 6:19 AM

@ MarkH,

All Deterministic Random Bit Generators can be used as a back door one way or another; that much is the logical conclusion of “deterministic.” That is, if a second party knows the key, the state, or one of the primes, etc., then it’s game over: they can fully determine your next bit out of the generator.

For it to become a workable backdoor it must somehow leak sufficient information to the second party.

The problem with you saying,

I must challenge the quote from The Economist: “a flaw that can be exploited by Western governments is vulnerable to anyone who finds it.”

And then talking about the Dual_EC_DRBG is that you are confusing knowing that something can exist with showing that it does exist.

We know it is possible to backdoor ALL deterministic generators, which the Economist refers to as “flaws”; what we don’t know is whether it has actually happened, and thus it has not yet been found publicly.

But even if it were found, is it actually a serious issue? No, because any sensible crypto designer would know such a possibility exists for any deterministic generator. Thus they would take steps to mitigate either the leaking of information or the predictable, deterministic nature of the generator.

What raised red flags over Dual EC was the very ham-fisted behaviour of the NSA representative to NIST, and that RSA would have known the generator needed mitigating but chose not to do so. The third strike that tipped opinion was the large cash payment to RSA to make it not just the default generator, but also not particularly easy to change…

But the fact remains that we know the generator has flaws, yet importantly nobody has publicly gone looking for them. Importantly, they don’t need to: there are better generators, and mitigation is easy.

And that’s the important point: as a back door, hamstringing a random generator is not very effective or reliable, because it is way too obvious and way too easy to get around.

The place to put backdoors is in the actual encryption chain, preferably where it’s next to impossible to mitigate.

MikeA March 7, 2015 11:36 AM

So it looks like we have two choices:

1) Constant software updates under the assumption that “This time for sure!” (as recommended by the “honeymoon effect” paper), with the corollary that software providers (yes, even “free” software, I’m looking at you Canonical) regularly drop support for older hardware, thus forcing hardware “upgrades” to new backdoored (UEFI, RasPi GPU code, Baseband, SSD Firmware, whatever) hardware, trading software backdoors for hardware ones.

2) Try to keep older, less likely to be backdoored systems usable, which implies understanding the relevant software well enough to apply equivalent patches to the software (if we have access to it, e.g. LibreSSL). This is marginally more possible, but no picnic, especially as all our friends and relatives chase the latest shiny thing and harass us about being luddites.

Or, of course, choice three: drop all methods of communication and social connection, and go live in a cave until we have eaten the entire bat and bug population.

MarkH March 7, 2015 5:54 PM

@Clive,

With respect, I disagree on two major points:

  1. No, deterministic RBGs are not inherently a vector for back-door compromise!

For example, Dual_EC_DRBG could easily be modified in several ways to “close” the potential backdoor. For instance, if the iteration output used the least significant half of the coordinate bits (instead of all but 16), it would apparently be infeasible to predict future outputs even with the presumed non-random Q value. There are other simple changes that would make a secure variant.

It is (by definition) true that a deterministic RBG is broken if its internal state can be inferred by an attacker, but deterministic RBGs can be, and have been, designed to well conceal their internal state from parties who have access to very long sequences of output bits.

That the discovery of internal state breaks a deterministic RBG is certainly a weakness relative to a good physical RBG (which as you well know, is much harder to make than it might seem at first glance). But that is not the same as a backdoor! If I can discover somebody’s symmetric encryption key (the internal state of their symmetric encryption process), I can read their traffic; that reality does not constitute a backdoor!

  2. “hamstringing a random generator is not very effective or reliable”

Well, that’s an opinion. Some respected cryptographers, including Bruce, have opined more than once that the RBG is the preferred point of attack, one reason being that if the RBG is compromised, encrypting the data five times with world’s five strongest distinct ciphers is no defense.

Evidently, somebody at NSA thought so — as you correctly observe, their apparent sabotage of Dual_EC_DRBG was bloody ham-fisted. It may be that they risked such a clumsy (but, thanks to RSA Labs, successful) maneuver because the potential breaking of world-wide crypto traffic was so enormous.

Wael March 7, 2015 10:14 PM

@MarkH,

It is (by definition) true that a deterministic RBG is broken if its internal state can be inferred by an attacker […] If I can discover somebody’s symmetric encryption key (the internal state of their symmetric encryption process), I can read their traffic; that reality does not constitute a backdoor!

It’s also true, by definition, that a backdoor is: “Business done out of public view”, “A feature or defect of a computer that allows surreptitious unauthorized access to data”. So it’s still a backdoor, even in the situation you describe above.

MarkH March 8, 2015 3:10 AM

@Wael,

As I read the position you just stated, all computational cryptosystems (essentially, all except quantum encryption for data-in-motion) are “backdoored” because at each step in the encryption process they possess a definite internal state.

Did I understand incorrectly?

Wael March 8, 2015 8:12 AM

@MarkH,

Did I understand incorrectly?

No, you didn’t.

I am saying a defect is a backdoor if it’s used to compromise systems, regardless of whether the defect was introduced deliberately or discovered at a later stage.

Clive Robinson March 8, 2015 12:04 PM

@ MikeA,

Or, of course, choice three: drop all methods of communication and social connection, and go live in a cave until we have eaten the entire bat and bug population.

But will there be enough energy within walking distance of the cave to ensure they are all safely cooked? Otherwise it might be a risky option 😉

Wael March 8, 2015 12:39 PM

@Clive Robinson, @MikeA,

and go live in a cave until we have eaten the entire bat and bug population…

Yup, living in a cave is what you whiners should do 🙂 Remember that older generations lived on honey and locusts or milk and dates or even just dates and water, and managed to survive without eating GM bats.

Clive Robinson March 8, 2015 2:52 PM

@ MarkH,

… deterministic RBGs are not inherently a vector for back-door compromise

Yes they are: they are fully deterministic; go back and read the first two paragraphs of my post.

Also, there is a lot of difference between an inherent weakness that has not yet been exploited and one that has been exploited, which is what paragraph two makes clear.

Further,

But that is not the same as a backdoor! If I can discover somebody’s symmetric encryption key (the internal state of their symmetric encryption process), I can read their traffic; that reality does not constitute a backdoor

It does if there is a process or flaw by which the key or plaintext is leaked.

Arguably, using any block cipher in a simple code book mode is a backdoor, not because it leaks the key but because it leaks information about the plaintext, which can then be used to find the key –if practical– or the original plaintext, identifying the source, etc.

It’s why cipher system designers mitigate the issue by various methods such as cipher chaining, where the plaintext or ciphertext from previous blocks is mixed into the current plaintext or ciphertext. These mitigations are now so generic that we don’t think about why they are needed; we just see them as standard modes in which to use a cipher.

Importantly, this mixing, if done correctly, adds a truly random element from the plaintext, which helps reduce the exploitable elements of the deterministic block cipher by adding uncertainty to the cipher output. A deterministic RNG does not have this simple but important mitigation.
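A minimal illustration of that code-book point, using the third-party Python cryptography package (the key and message are arbitrary): ECB reveals when two plaintext blocks are identical, while CBC chaining with a random IV hides that structure.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
plaintext = b"ATTACK AT DAWN!!" * 2          # two identical 16-byte blocks

# ECB ("simple code book"): identical plaintext blocks give identical ciphertext blocks.
ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ct_ecb = ecb.update(plaintext) + ecb.finalize()
print("ECB ciphertext blocks identical:", ct_ecb[:16] == ct_ecb[16:])   # True

# CBC chaining with a random IV mixes previous ciphertext into each block.
iv = os.urandom(16)
cbc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ct_cbc = cbc.update(plaintext) + cbc.finalize()
print("CBC ciphertext blocks identical:", ct_cbc[:16] == ct_cbc[16:])   # False
```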

So onto your second point,

Well, that’s an opinion. Some respected cryptographers, including Bruce, have opined more than once that the RBG is the preferred point of attack…

So have I; however, the fact that it has in the past been preferred does not mean that doing it is “effective or reliable,” or that it will remain preferred or even possible in the near future. Arguably, with the ham-fisted behaviour and the ensuing debacle over Dual EC, it’s probably no longer preferred, as a spotlight has been very vividly shone on that class of attack vectors.

The real reason it was preferred comes down to the failings of the people who design many commercial crypto systems, who either fail for some reason to mitigate a known weakness or, as RSA did, choose to deliberately ignore it.

In part this is due to the failings of academia: although the problem with deterministic RNGs has long been known, and was certainly identified as a security issue back in the last century, it has not been given the prominence it deserves. Which means that software developers who have not undergone basic crypto-specific training by practical cryptographers are often unaware of the issue.

Another issue is NIST standards: some are “descriptive,” others “what must be done,” whilst others are “what should be done as a minimum.” The AES standard is actually “descriptive”; that is, it accurately describes all the logical steps of how the block cipher should function, so any deviation, addition or subtraction will result in it not functioning. However, the modes of AES are “what must be done” to be compatible with other implementations, but they are effectively generic and thus work with different block ciphers. Deterministic RNGs do not need to be compatible with other systems –unless they are inadvisably going to be used as stream ciphers etc– thus those standards fall into the “what should be done as a minimum” category. That is, designers are free to make additions, including mitigating obvious and known risks.

Which brings us back to,

For example, Dual_EC_DRBG could easily be modified in several ways to “close” the potential backdoor. For example, if the iteration output used the least significant half of the coordinate bits (instead of all but 16), it would apparently be infeasible to predict future outputs even with the presumed non-random Q value. There are other simple changes that would make a secure variant.

Any of your mitigations would close not just the suspected backdoor in this deterministic RNG but several others, without affecting the security of, or compatibility with, other systems. However, if it were being used as a stream cipher, none of the mitigations would be possible, because they would break compatibility. As for the number-of-bits versus theoretical-security issue with deterministic RNGs, this is not new; the idea was discussed in depth last century with the BBS generator and what went before it.

That said, as you originally pointed out, even without the mitigations it is currently in all probability secure against anyone other than its designers; however, this can change at any point in time, either by luck or by considered and considerable effort, and we are unlikely to know when or where this first happens, if it has not happened already.

Which brings us back to why people should mitigate any deterministic RNG: it’s relatively trivial to do, it does not break compatibility, and with a little care it is very unlikely to weaken security whilst it breaks any standards-based backdoor.

Now that the idea of a standards-based backdoor in deterministic RNGs is well and truly out in the open, there is really no excuse for designers not to add mitigation, and anybody not doing so should find people looking at them with incredulous stares, as their design choices would be regarded as unprofessional.

That said, those writing closed source software will no doubt carry on doing favours for the NSA or others in the darkness of their cubicles. It does not matter whether it is done through ignorance, accident or deliberate choice; the result is the same: security is lost and people will get hurt. Whilst this might appear to be a recommendation for open source, it’s not; it’s a recommendation for transparency and sufficient verification. It just happens that both of these tend to be more easily accomplished with open source, and for most users are impossible with closed source.

And when mitigation becomes general practice, backdooring deterministic RNGs will probably cease to be a preferred practice. A new preferred practice will no doubt arise, to join the legion of other attacks based on standards and protocols, of which FREAK is just one. Whilst Bruce calls it a “security rollback” attack, I’ve frequently referred to such attacks as “fall back” attacks, which, if you google this site, you will find I have warned about on several occasions over a considerable period of time.

As in many things, a common nomenclature would provide a common baseline of agreement on the meaning of terms, and thus make the lives of others easier and more harmonious.

MarkH March 8, 2015 5:28 PM

Well, I guess we are using completely incompatible definitions of “backdoor”.

To me, the definition of “backdoor” is a means for bypassing normal or ostensible safeguards.

Example 1: Dual_EC_DRBG with non-random Q gives an attacker an easy way to predict all future output bits from a modest sequence of past output bits.

Example 2: If the Intel on-chip non-deterministic RBG were made with intentional bias, that would reduce the search space for a party knowing the key used to “hash” the biased raw data.

Those are back doors. And, by the way, they are both examples of backdoors not accessible to parties who don’t know the secret number.

Blum Blum Shub is a fully deterministic RBG. As far as the world knows, predicting future output bits from an arbitrarily long sequence of past output bits is as expensive as factoring the semiprime modulus, which for large moduli is beyond the capacity of even the most powerful attackers.

According to Clive, Blum Blum Shub is inherently backdoored because it is deterministic.

Where is the means of bypass?

I really don’t get it.

Clive says it’s backdoored, because you lose confidentiality if someone can “see into” the internal state of the computation, even if the algorithm has no usable leakage path*.

By that reasoning, every symmetric and public key encryption algorithm that has ever existed is backdoored, because if I can “open the lid” of the computer and inspect its bit values during the computation, then confidentiality is lost.

I really don’t get it.


*Note that I distinguish a leakage path of the algorithm from a leakage path of the computation. Without exception, all computational cryptography may be vulnerable to side-channel analysis or other attacks on the computational process. Again, if that constitutes a backdoor, then ALL computational cryptosystems are (by Clive’s definition) backdoored.

It looks like we have a new Nummulosphere hypothesis.

Dirk Praet March 8, 2015 6:43 PM

@ MarkH

To me, the definition of “backdoor” is a means for bypassing normal or ostensible safeguards.

IMHO, a backdoor is an intentionally created subversion to allow a party with exact knowledge thereof to either directly or indirectly bypass the protection mechanism(s) of a control. As compared to a non-deliberate design or implementation flaw resulting in an exploitable vulnerability. Disguising the former as the latter is a common technique to hide behind plausible deniability.

Clive Robinson March 9, 2015 5:38 AM

@ MarkH,

I take a generalised security engineering viewpoint, not a theoretical one divorced from the realities of the art of secure system design.

As I have pointed out, all fully deterministic RNGs have the fundamental flaw that their output is predictable if sufficient information is known by an adversary.

Further, to prevent sufficient information becoming available to an adversary, mitigation methods must be used.

Hopefully those two facts are sufficiently self-evident to most, and thus do not require further explanation.

I define a backdoor as some method, or lack thereof, that allows information to leak to an adversary, which can or might assist in making the output of the generator predictable, in part or in whole, now or at some future point in time.

You will note that this is a system design view that is not limited to knowledge at the time of the design, nor does it preclude the enemy making modifications to the system after design, when it is in active use. Which, if followed, prevents catastrophic system security failures when new information becomes known, and thus allows a graceful period for the systems to be updated or replaced.

I find any lesser view to be not just lacking, but actually quite likely to cause the system to be insecure in the reality of use under adversarial conditions. That is, theoretical security is not enough; it needs to be practical security for real-world conditions when faced with an overwhelmingly resourced adversary.

At the end of the day it’s not cosy academic reputations that are on the line, but real people’s lives and liberty.

You say,

To me, the definition of “backdoor” is a means for bypassing normal or ostensible safeguards.

And then you immediately restrict it to just the theoretical aspects of the algorithm, thus effectively contradicting yourself, and end up claiming “I really don’t get it”…

You use as an example the BBS generator and make some interesting claims about it, but neglect to mention that there are constraints on it.

The actual generator, as the authors took care to point out, is only theoretically secure if certain constraints are met. These constraints include:

1. Selecting primes that satisfy a set of relationship constraints.
2. The size of the primes has to meet a minimum length that increases over time.
3. There are constraints on the number of bits that can be output from any single step of the generator.

These constraints are the minimum mitigations you must put on the BBS generator for it to be theoretically secure (a toy sketch follows below).
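A toy sketch of the generator under those constraints (the primes here are far too small to be secure and are chosen purely for illustration):

```python
def bbs_bits(seed: int, p: int, q: int, k: int) -> list[int]:
    """Toy Blum Blum Shub: k output bits, taking only one low-order bit
    per squaring step (constraint 3)."""
    n = p * q
    x = seed % n
    bits = []
    for _ in range(k):
        x = pow(x, 2, n)
        bits.append(x & 1)
    return bits

# Constraint 1: p and q are primes congruent to 3 mod 4 (with further structural
# conditions in the full design); constraint 2: real parameters must be vastly larger.
print(bbs_bits(seed=159, p=499, q=547, k=16))
```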

By your own admission, similar constraints/mitigations actually apply to Dual_EC_DRBG, “but” they were not put in place by the designers, and thus it is suspected of being insecure.

Thus, if the designers of the BBS generator had neglected to make clear in the design such constraints as keeping the number of output bits per computation to a secure level, then it too would leak sufficient information to an adversary.

In effect, both generators are insecure; it is the degree of mitigation the designers put in place that makes the difference between “secure in operation” and potentially “insecure in operation.”

Importantly, though, as it’s an algorithm in both cases, it’s only the security of the algorithm that its designers can “sufficiently” mitigate in its specification. However, if they don’t “sufficiently” mitigate, then it will be insecure in practical use, UNLESS the designers of systems that use the specification add further mitigation of their own.

And it is this lack of “sufficiency” of mitigation in the case of the Dual EC design –not its designers– that has ultimately thrown such a long, dark shadow over NIST’s procedures for producing the standard concerned that NIST has publicly removed it, rather than modifying the mitigations to ensure sufficiency.

But as I’ve indicated, all deterministic RNGs have mitigations, even those using well-found crypto primitives.

Take, for instance, AES in CTR mode. Obviously the security rests on the strength of the algorithm –which is believed to be well found– and on the values of the key and counter. However, if you talk to the appropriate people, they will tell you that further mitigations are required. The first of these is that the selection of not just the key but also the counter value should be “random but not reused.” Further, the number of encryptions under a given key must be a fraction of those available. Others indicate that you should only use some of the output bits, with others indicating this should be done by randomly selecting from the generator output stream, etc.

Importantly, some say –myself included– that you should use two or more unrelated generators and mix the outputs in some fashion, as this not only makes individual generator insufficiencies a moot point, it also buys you time to update or replace a generator if it is found to be deficient in some way.

As I’ve previously stated, the advantage RNGs have over other crypto primitives is that, as a designer, you can add as many mitigations as you wish; they don’t affect compatibility in systems that need to comply with other standards for interoperability. And in the general scheme of things, mixing two or more unrelated generators adds very little extra overhead for a massive increase in practical security margin (a minimal sketch follows).
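A rough sketch of that mixing idea (the construction and names are illustrative, not a standard; in practice the second generator would be seeded from an entropy source genuinely unrelated to the first):

```python
import hashlib
import os
import secrets

def hash_counter_drbg(seed: bytes, nbytes: int) -> bytes:
    """A simple hash-based deterministic generator (generator B)."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

def mixed_random(nbytes: int) -> bytes:
    a = os.urandom(nbytes)                                  # generator A: OS CSPRNG
    b = hash_counter_drbg(secrets.token_bytes(32), nbytes)  # generator B, own seed
    return bytes(x ^ y for x, y in zip(a, b))               # XOR-mix the outputs

print(mixed_random(16).hex())
```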

Ton March 9, 2015 6:06 AM

Hi Bruce, although it was initially believed only to threaten mobile devices and Mac computers, it is now understood that even Windows PCs are vulnerable to this attack. Thanks for the detailed explanation.

MarkH March 9, 2015 1:10 PM

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.”
