Entries Tagged "random numbers"


The Strange Story of Dual_EC_DRBG

Random numbers are critical for cryptography: for encryption keys, random authentication challenges, initialization vectors, nonces, key-agreement schemes, generating prime numbers and so on. Break the random-number generator, and most of the time you break the entire security system. Which is why you should worry about a new random-number standard that includes an algorithm that is slow, badly designed and just might contain a backdoor for the National Security Agency.

Generating random numbers isn’t easy, and researchers have discovered lots of problems and attacks over the years. A recent paper found a flaw in the Windows 2000 random-number generator. Another paper found flaws in the Linux random-number generator. Back in 1996, an early version of SSL was broken because of flaws in its random-number generator. With John Kelsey and Niels Ferguson in 1999, I co-authored Yarrow, a random-number generator based on our own cryptanalysis work. I improved this design four years later — and renamed it Fortuna — in the book Practical Cryptography, which I co-authored with Ferguson.

The U.S. government released a new official standard for random-number generators this year, and it will likely be followed by software and hardware developers around the world. Called NIST Special Publication 800-90 (.pdf), the 130-page document contains four different approved techniques, called DRBGs, or “Deterministic Random Bit Generators.” All four are based on existing cryptographic primitives. One is based on hash functions, one on HMAC, one on block ciphers and one on elliptic curves. It’s smart cryptographic design to use only a few well-trusted cryptographic primitives, so building a random-number generator out of existing parts is a good thing.
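To make that concrete, here is a minimal Python sketch of the HMAC_DRBG construction, one of the four approved generators. It follows the update/generate structure of the standard but omits reseeding, prediction resistance, personalization strings, and the standard's mandatory limits and health tests, so treat it as an illustration rather than a compliant implementation.

# Minimal sketch of SP 800-90's HMAC_DRBG (SHA-256 variant).
# Omits reseeding, prediction resistance, and the standard's
# limits and health checks; illustrative only, not compliant.
import hmac
import hashlib

class HmacDrbgSketch:
    def __init__(self, seed: bytes):
        self.K = b"\x00" * 32          # initial key per the spec
        self.V = b"\x01" * 32          # initial value per the spec
        self._update(seed)

    def _hmac(self, key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def _update(self, data: bytes = b"") -> None:
        # The spec's HMAC_DRBG_Update function.
        self.K = self._hmac(self.K, self.V + b"\x00" + data)
        self.V = self._hmac(self.K, self.V)
        if data:
            self.K = self._hmac(self.K, self.V + b"\x01" + data)
            self.V = self._hmac(self.K, self.V)

    def generate(self, n: int) -> bytes:
        # Stretch V through repeated HMAC calls, then re-key.
        out = b""
        while len(out) < n:
            self.V = self._hmac(self.K, self.V)
            out += self.V
        self._update()
        return out[:n]

drbg = HmacDrbgSketch(b"entropy-from-a-real-source-goes-here")
print(drbg.generate(32).hex())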

But one of those generators — the one based on elliptic curves — is not like the others. Called Dual_EC_DRBG, not only is it a mouthful to say, it’s also three orders of magnitude slower than its peers. It’s in the standard only because it’s been championed by the NSA, which first proposed it years ago in a related standardization project at the American National Standards Institute.

The NSA has always been intimately involved in U.S. cryptography standards — it is, after all, expert in making and breaking secret codes. So the agency’s participation in the NIST (the U.S. Commerce Department’s National Institute of Standards and Technology) standard is not sinister in itself. It’s only when you look under the hood at the NSA’s contribution that questions arise.

Problems with Dual_EC_DRBG were first described in early 2006. The math is complicated, but the general point is that the random numbers it produces have a small bias. The problem isn’t large enough to make the algorithm unusable — and Appendix E of the NIST standard describes an optional work-around to avoid the issue — but it’s cause for concern. Cryptographers are a conservative bunch: We don’t like to use algorithms that have even a whiff of a problem.

But today there’s an even bigger stink brewing around Dual_EC_DRBG. In an informal presentation (.pdf) at the CRYPTO 2007 conference in August, Dan Shumow and Niels Ferguson showed that the algorithm contains a weakness that can only be described as a backdoor.

This is how it works: There are a bunch of constants — fixed numbers — in the standard used to define the algorithm’s elliptic curve. These constants are listed in Appendix A of the NIST publication, but nowhere is it explained where they came from.

What Shumow and Ferguson showed is that these numbers have a relationship with a second, secret set of numbers that can act as a kind of skeleton key. If you know the secret numbers, you can predict the output of the random-number generator after collecting just 32 bytes of its output. To put that in real terms, you only need to monitor one TLS internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.
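To see the mechanics, here is a toy Python demonstration of the Shumow-Ferguson observation on a tiny textbook curve. The real generator uses the NIST P-256 curve and truncates 16 bits of each output; the curve, the secret d, and the state values below are all invented for illustration.

# Toy demo of the Shumow-Ferguson observation, on the tiny curve
# y^2 = x^3 + 2x + 2 over GF(17) (group order 19). The real
# Dual_EC_DRBG uses NIST P-256 and truncates each output.

def inv(x, p):
    return pow(x, p - 2, p)          # modular inverse, p prime

def ec_add(P1, P2, a, p):
    # Point addition; None represents the point at infinity.
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * inv(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, pt, a, p):
    # Double-and-add scalar multiplication.
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt, a, p)
        pt = ec_add(pt, pt, a, p)
        k >>= 1
    return acc

p, a, b = 17, 2, 2
Q = (5, 1)                           # public point Q
d = 7                                # the secret: P = d * Q
P = ec_mul(d, Q, a, p)

# One (untruncated) step of Dual_EC_DRBG:
s0 = 3                               # secret internal state
s1 = ec_mul(s0, P, a, p)[0]          # next state: x(s0 * P)
out = ec_mul(s1, Q, a, p)[0]         # published output: x(s1 * Q)

# The attack: lift the output back to a curve point R with x(R) = out.
# Then d * R = +/- s1 * (d * Q) = +/- s1 * P, and since a point and its
# inverse share an x-coordinate, x(d * R) is exactly the NEXT state.
curve_points = [(x, y) for x in range(p) for y in range(p)
                if (y * y - x ** 3 - a * x - b) % p == 0]
R = next(pt for pt in curve_points if pt[0] == out)
recovered = ec_mul(d, R, a, p)[0]
print(recovered == ec_mul(s1, P, a, p)[0])   # True: future states known

The point is structural: whoever produced Q from P (or vice versa) by scalar multiplication could have retained the scalar, and that scalar converts one block of output into the generator's entire future.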

The researchers don’t know what the secret numbers are. But because of the way the algorithm works, the person who produced the constants might know; he had the mathematical opportunity to produce the constants and the secret numbers in tandem.

Of course, we have no way of knowing whether the NSA knows the secret numbers that break Dual_EC_DRBG. We have no way of knowing whether an NSA employee working on his own came up with the constants — and has the secret numbers. We don’t know if someone from NIST, or someone in the ANSI working group, has them. Maybe nobody does.

We don’t know where the constants came from in the first place. We only know that whoever came up with them could have the key to this backdoor. And we know there’s no way for NIST — or anyone else — to prove otherwise.

This is scary stuff indeed.

Even if no one knows the secret numbers, the fact that the backdoor is present makes Dual_EC_DRBG very fragile. If someone were to solve just one instance of the algorithm’s elliptic-curve problem, he would effectively have the keys to the kingdom. He could then use it for whatever nefarious purpose he wanted. Or he could publish his result, and render every implementation of the random-number generator completely insecure.

It’s possible to implement Dual_EC_DRBG in such a way as to protect it against this backdoor, by generating new constants with another secure random-number generator and then publishing the seed. This method is even in the NIST document, in Appendix A. But the procedure is optional, and my guess is that most implementations of Dual_EC_DRBG won’t bother.
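The Appendix A idea is essentially a hash-and-increment search: derive a candidate x-coordinate from a published seed, and keep incrementing a counter until the result lands on the curve. A minimal sketch, reusing the toy curve from the demonstration above (the actual procedure runs on P-256 and includes further checks):

# Sketch of verifiable constant generation: derive Q from a
# published seed by hash-and-increment, so nobody could have
# chosen Q with a known relationship to P. Toy curve and seed.
import hashlib

p, a, b = 17, 2, 2                   # same toy curve as above

def point_from_seed(seed: bytes):
    counter = 0
    while True:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        x = int.from_bytes(digest, "big") % p
        rhs = (x ** 3 + a * x + b) % p
        for y in range(p):           # brute-force square root (toy field)
            if y * y % p == rhs:
                return (x, y)        # a curve point nobody "chose"
        counter += 1

print(point_from_seed(b"published, auditable seed"))

Because the seed is public and the search is deterministic, anyone can re-run it and confirm the constants were not selected to hide a known discrete logarithm.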

If this story leaves you confused, join the club. I don’t understand why the NSA was so insistent about including Dual_EC_DRBG in the standard. It makes no sense as a trap door: It’s public, and rather obvious. It makes no sense from an engineering perspective: It’s too slow for anyone to willingly use it. And it makes no sense from a backwards-compatibility perspective: Swapping one random-number generator for another is easy.

My recommendation, if you’re in need of a random-number generator, is not to use Dual_EC_DRBG under any circumstances. If you have to use something in SP 800-90, use CTR_DRBG or Hash_DRBG.

In the meantime, both NIST and the NSA have some explaining to do.

This essay originally appeared on Wired.com.

Posted on November 15, 2007 at 6:08 AM

Randomness at Airport Security

Now this seems to be a great idea:

Security officials at Los Angeles International Airport now have a new weapon in their fight against terrorism: complete, baffling randomness. Anxious to thwart future terror attacks in the early stages while plotters are casing the airport, LAX security patrols have begun using a new software program called ARMOR, NEWSWEEK has learned, to make the placement of security checkpoints completely unpredictable. Now all airport security officials have to do is press a button labeled “Randomize,” and they can throw a sort of digital cloak of invisibility over where they place the cops’ antiterror checkpoints on any given day.
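As a toy sketch of what such a “Randomize” button might do, consider sampling checkpoint placements afresh each day. The location names below are invented, and published descriptions of ARMOR suggest it uses game-theoretic weighting rather than a uniform draw; this only illustrates the unpredictability idea.

# Toy sketch of randomized checkpoint placement. Location names
# are invented; the real ARMOR system is reported to weight its
# choices game-theoretically rather than drawing uniformly.
import random

locations = ["Terminal 1", "Terminal 2", "Terminal 3", "Tom Bradley",
             "Parking Structure A", "Arrivals Curb", "Departures Curb"]

def randomize(num_checkpoints=3):
    return random.sample(locations, num_checkpoints)

print(randomize())   # a different, unpredictable placement each day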

Posted on October 5, 2007 at 6:52 AM

TSA Uses Monte Carlo Simulations to Weigh Airplane Risks

Does this make sense to anyone?

TSA said Boeing would use its Monte Carlo simulation model “to identify U.S. commercial aviation system vulnerabilities against a wide variety of attack scenarios.”

The Monte Carlo method refers to several ways of using randomly generated numbers fed into a computer simulation many times to estimate the likelihood of an event, specialists in the field say.

The Monte Carlo method plays an important role in many statistical techniques used to characterize risks, such as the probabilistic risk analysis approach used to evaluate possible problems at a nuclear power plant and their consequences.

Boeing engineers have pushed the mathematical usefulness of the Monte Carlo method forward largely by applying the technique to evaluating the risks and consequences of aircraft component failures.

A DHS source said the work of the U.S. Commercial Aviation Partnership, a group of government and industry organizations, had made TSA officials aware of the potential applicability of the Monte Carlo method to building an RMAT for the air travel system.

A paper by four Boeing technologists and a TSA official describing the RMAT model appeared recently in Interfaces, a scholarly journal covering operations research.
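For readers unfamiliar with the technique, here is a minimal illustration of Monte Carlo estimation: simulate a toy redundant system many times with randomly drawn component failures, and count how often the whole thing fails. All the numbers are invented; nothing here comes from the RMAT model.

# Minimal Monte Carlo estimation: the probability that a
# two-out-of-three redundant system fails, given per-component
# failure probabilities, estimated by repeated random simulation.
import random

def system_fails(p_fail=0.05):
    failures = sum(random.random() < p_fail for _ in range(3))
    return failures >= 2          # system fails if 2+ components fail

trials = 100_000
estimate = sum(system_fails() for _ in range(trials)) / trials
print(f"estimated failure probability: {estimate:.5f}")
# Exact answer: 3*p^2*(1-p) + p^3 = 0.00725 for p = 0.05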

I can’t imagine how random simulations are going to be all that useful in evaluating airplane threats, as the adversary we’re worried about isn’t particularly random — and, in fact, is motivated to target his attacks directly at the weak points in any security measures.

Maybe “chatter” has tipped the TSA off to a Muta al-Stochastic.

Posted on June 22, 2007 at 12:58 PM

A Million Random Digits

The Rand Corporation published A Million Random Digits with 100,000 Normal Deviates back in 1955, when generating random numbers was hard.

The random digits in the book were produced by rerandomization of a basic table generated by an electronic roulette wheel. Briefly, a random frequency pulse source, providing on the average about 100,000 pulses per second, was gated about once per second by a constant frequency pulse. Pulse standardization circuits passed the pulses through a 5-place binary counter. In principle the machine was a 32-place roulette wheel which made, on the average, about 3000 revolutions per trial and produced one number per second. A binary-to-decimal converter was used which converted 20 of the 32 numbers (the other twelve were discarded) and retained only the final digit of two-digit numbers; this final digit was fed into an IBM punch to produce finally a punched card table of random digits.
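A Python rendering of the pipeline that description lays out may help: sample the 5-place (32-state) counter, discard twelve of the values, and keep the final decimal digit of the remaining two-digit range. Exactly which twenty values were retained is my assumption; the report does not say.

# Sketch of the digit pipeline RAND describes: a 5-bit counter
# (a "32-place roulette wheel") is sampled, twelve of the 32
# values are discarded, and the final decimal digit of the
# remaining 0-19 range is kept. Each digit 0-9 then corresponds
# to exactly two counter values, so the output is uniform.
import secrets

def rand_digit():
    while True:
        spin = secrets.randbelow(32)   # sample the 5-place counter
        if spin < 20:                  # discard the other twelve values
            return spin % 10           # final digit of the two-digit number

row = " ".join("".join(str(rand_digit()) for _ in range(5)) for _ in range(10))
print(row)   # five-digit groups, in the style of the book's table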

I have a copy of the original book; it’s one of my library’s prize possessions. I had no idea that the book was reprinted in 2002; it’s available on Amazon. But even if you don’t buy it, go to the Amazon page and read the user reviews. They’re hysterical.

This is what I said in Applied Cryptography:

The meat of the book is the “Table of Random Digits.” It lists them in five-digit groups — “10097 32533 76520 13586 …” — 50 on a line and 50 lines on a page. The table goes on for 400 pages and, except for a particularly racy section on page 283 which reads “69696,” makes for a boring read.

Posted on October 13, 2006 at 12:12 PM

Quasar Encryption

Does anyone have the faintest clue what they’re talking about here? If I had to guess, it’s just another random-number generator. It definitely doesn’t sound like two telescopes pointing at the same piece of sky can construct the same key — now that would be cool.

The National Institute of Information and Communications Technology is trying to patent a system of encryption using electromagnetic waves from Quasars.

According to The Nihon Keizai Shimbun, this technology takes cosmic radio waves received through a radio telescope, encrypts them and then retransmits them. Because cosmic waves are irregular, it is virtually impossible for others to decipher them. A spokesman is quoted as saying that the system could be used for the transmission of state secrets and other sensitive information.

The radio telescope can decipher the information by observing the cosmic wave patterns emitted by a particular quasar selected in advance. Even if the encrypted data is stolen, it is impossible to read it without the appropriate quasar’s radio signals.

The only way to really break the code is to know which radio telescope the coder is using and what Quasar it is pointing at. Only then do you have a slim chance of decoding it.

I can see the story on the home page of Nikkei.net Interactive, but can’t get at the story without a login.

Posted on March 27, 2006 at 1:21 PM

Snake-Oil Research in Nature

Snake-oil isn’t only in commercial products. Here’s a piece of research published (behind a paywall) in Nature that’s just full of it.

The article suggests using chaos in an electro-optical system to generate a pseudo-random light sequence, which is then added to the message to protect it from interception. Now, the idea of using chaos to build encryption systems has been tried many times in the cryptographic community, and has always failed. But the authors of the Nature article show no signs of familiarity with prior cryptographic work.

The published system has the obvious problem that it does not include any form of message authentication, so it will be trivial to send spoofed messages or tamper with messages while they are in transit.
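For contrast, here is the kind of message authentication the system omits, sketched with a standard HMAC; the key and messages below are placeholders.

# What the system omits: a message authentication code. A standard
# HMAC over the transmitted data lets the receiver reject spoofed
# or tampered messages. Sketch only, with a placeholder key.
import hmac
import hashlib

key = b"shared-secret-key"                 # placeholder

def tag(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    return hmac.compare_digest(tag(message), mac)

msg = b"data from the optical channel"
assert verify(msg, tag(msg))               # genuine message accepted
assert not verify(msg + b"x", tag(msg))    # tampering detected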

But a closer examination of the paper’s figures suggests a far more fundamental problem. There’s no key. Anyone with a valid receiver can decode the ciphertext. No key equals no security, and what you have left is a totally broken system.

I e-mailed Claudio R. Mirasso, the corresponding author, about the lack of any key, and got this reply: “To extract the message from the chaotic carrier you need to replicate the carrier itself. This can only be done by a laser that matches the emitter characteristics within, let’s say, 2-5%. Semiconductor lasers with such similarity have to be carefully selected from the same wafer. Even though you have to test them because they can still be too different and do not synchronize. We talk about a hardware key. Also the operating conditions (current, feedback length and coupling strength) are part of the key.”

Let me translate that. He’s saying that there is a hardware key baked into the system at fabrication. (It comes from manufacturing deviations in the lasers.) There’s no way to change the key in the field. There’s no way to recover security if any of the transmitters/receivers are lost or stolen. And they don’t know how hard it would be for an attacker to build a compatible receiver, or even a tunable receiver that could listen to a variety of encodings.

This paper would never get past peer review in any competent cryptography journal or conference. I’m surprised it was accepted in Nature, a fiercely competitive journal. I don’t know why Nature is taking articles on topics that are outside its usual competence, but it looks to me like Nature got burnt here by a lack of expertise in the area.

To be fair, the paper very carefully skirts the issue of security, and claims hardly anything: “Additionally, chaotic carriers offer a certain degree of intrinsic privacy, which could complement (via robust hardware encryption) both classical (software based) and quantum cryptography systems.” Now that “certain degree of intrinsic privacy” is approximately zero. But other than that, they’re very careful how they word their claims.

For instance, the abstract says: “Chaotic signals have been proposed as broadband information carriers with the potential of providing a high level of robustness and privacy in data transmission.” But there’s no disclosure that this proposal is bogus, from a privacy perspective. And the next-to-last paragraph says “Building on this, it should be possible to develop reliable cost-effective secure communication systems that exploit deeper properties of chaotic dynamics.” No disclosure that “chaotic dynamics” is actually irrelevant to the “secure” part. The last paragraph talks about “smart encryption techniques” (referencing a paper that talks about chaos encryption), “developing active eavesdropper-evasion strategies” (whatever that means), and so on. It’s just enough that if you don’t parse their words carefully and don’t already know the area well, you might come away with the impression that this is a major advance in secure communications. A more prominent disclaimer would have helped.

Communications security was listed as one of the motivations for studying this communications technique. To list this as a motivation, without explaining that their experimental setup is actually useless for communications security, is questionable at best.

Meanwhile, the press has written articles that convey the wrong impression. Science News has an article that lauds this as a big achievement for communications privacy.

It talks about it as a “new encryption strategy,” “chaos-encrypted communication,” “1 gigabyte of chaos-encrypted information per second.” It’s obvious that the communications security aspect is what Science News is writing about. If the authors knew that their scheme is useless for communications security, they didn’t explain that very well.

There is also a New Scientist article titled “Let chaos keep your secrets safe” that characterizes this as a “new cryptographic technique,” but I can’t get a copy of the full article.

Here are two more articles that discuss its security benefits. In the latter, Mirasso says “the main task we have for the future” is to “define, test, and calibrate the security that our system can offer.”

And their project web page says that “the continuous increase of computer speed threatens the safety” of traditional cryptography (which is bogus) and suggests using physical-layer chaos as a way to solve this. That’s listed as the goal of the project.

There’s a lesson here. This is research undertaken by researchers with no prior track record in cryptography, submitted to a journal with no background in cryptography, and reviewed by reviewers with who knows what kind of experience in cryptography. Cryptography is a subtle subject, and trying to design new cryptosystems without the necessary experience and training in the field is a quick route to insecurity.

And what’s up with Nature? Cryptographers with no training in physics know better than to think they are competent to evaluate physics research. If a physics paper were submitted to a cryptography journal, the authors would likely be gently redirected to a physics journal — we wouldn’t want our cryptography conferences to accept a paper on a subject they aren’t competent to evaluate. Why would Nature expect the situation to be any different when physicists try to do cryptography research?

Posted on December 7, 2005 at 6:36 AM
