Entries Tagged "random numbers"


Back Door in Juniper Firewalls

Juniper has warned about a malicious back door in its firewalls that allows attackers who can monitor VPN traffic to decrypt it. It’s been there for years.

Hopefully details are forthcoming, but the folks at Hacker News have pointed to this page about Juniper’s use of the Dual_EC_DRBG random number generator. For those who don’t immediately recognize that name, it’s the pseudo-random-number generator that was backdoored by the NSA. Basically, the standard specifies two public constants (elliptic-curve points), and whoever generates those constants can retain a secret relationship between them that makes the generator’s entire output stream predictable from a small sample of it. In the standard, the NSA chose those constants. Juniper doesn’t use those tainted constants. Instead:

ScreenOS does make use of the Dual_EC_DRBG standard, but is designed to not use Dual_EC_DRBG as its primary random number generator. ScreenOS uses it in a way that should not be vulnerable to the possible issue that has been brought to light. Instead of using the NIST recommended curve points it uses self-generated basis points and then takes the output as an input to FIPS/ANSI X.9.31 PRNG, which is the random number generator used in ScreenOS cryptographic operations.

This means that all anyone has to do to break the PRNG is to hack into the firewall and copy or modify those “self-generated basis points.”
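To make the trapdoor concrete, here is a toy sketch of the structure. It uses ordinary modular exponentiation in place of the real Dual_EC_DRBG's elliptic-curve arithmetic and skips the output truncation, and every value in it is invented, but the shape of the back door is the same: two public constants, plus a secret relationship between them that lets whoever chose them predict all future output from a single output block.

```python
# Toy analogue of the Dual_EC_DRBG trapdoor, using modular exponentiation
# instead of elliptic-curve points. All parameters are made up; the real
# generator uses NIST P-256 and truncates its output.
from math import gcd

p = 2**31 - 1                        # a convenient prime modulus (toy size)
g = 7                                # public constant #1 (plays the role of the point P)
d = next(k for k in range(65537, p) if gcd(k, p - 1) == 1)   # the designer's secret
h = pow(g, d, p)                     # public constant #2 (plays the role of Q = d*P)
e = pow(d, -1, p - 1)                # the trapdoor: maps outputs back to states


def prng_step(state):
    """One generator step: returns the new internal state and one output word."""
    state = pow(g, state, p)         # next state   (x(s*P) in the real Dual_EC)
    output = pow(h, state, p)        # output word  (x(s*Q) in the real Dual_EC)
    return state, output


# The victim seeds the generator with a value the attacker never sees
# and produces two outputs.
state = 123456789
state, out1 = prng_step(state)
state, out2 = prng_step(state)

# The attacker sees only out1 but knows the trapdoor value e, so it can
# recover the generator's *next* internal state...
recovered_state = pow(out1, e, p)    # out1^e = g^(d*s*e) = g^s = the next state

# ...and from there predict out2 and every output after it.
predicted = pow(h, recovered_state, p)
print(out2 == predicted)             # True
```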

Here’s a good summary of what we know. The conclusion:

Again, assuming this hypothesis is correct then, if it wasn’t the NSA who did this, we have a case where a US government backdoor effort (Dual-EC) laid the groundwork for someone else to attack US interests. Certainly this attack would be a lot easier given the presence of a backdoor-friendly RNG already in place. And I’ve not even discussed the SSH backdoor which, as Wired notes, could have been the work of a different group entirely. That backdoor certainly isn’t NOBUS—Fox-IT claim to have found the backdoor password in six hours.

More details to come, I’m sure.

EDITED TO ADD (12/21): A technical overview of the SSH backdoor.

EDITED TO ADD (12/22): Matthew Green wrote a really good technical post about this.

They then piggybacked on top of it to build a backdoor of their own, something they were able to do because all of the hard work had already been done for them. The end result was a period in which someone—maybe a foreign government—was able to decrypt Juniper traffic in the U.S. and around the world. And all because Juniper had already paved the road.

Another good article.

Posted on December 21, 2015 at 6:52 AM

Risks of Not Understanding a One-Way Function

New York City officials anonymized license plate data by hashing the individual plate numbers with MD5. (I know, they shouldn’t have used MD5, but ignore that for a moment.) Because they didn’t attach long random strings to the plate numbers—i.e., salt—it was trivially easy to hash all valid license plate numbers and deanonymize all the data.
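To see just how trivial the deanonymization is, here is a minimal sketch of the dictionary attack. The plate format (one letter plus four digits) is invented to keep the demo instant; the real data uses other formats, but the attack is identical: hash every valid plate once and invert the “anonymized” values by table lookup.

```python
# Sketch of the deanonymization: because the space of valid plates is tiny,
# hash every possible plate once and invert the "anonymized" hashes with a
# dictionary lookup. One letter + four digits (260,000 plates) keeps this
# instant; even three letters + four digits is only ~176 million plates.
import hashlib
from string import ascii_uppercase

def md5_hex(plate: str) -> str:
    return hashlib.md5(plate.encode()).hexdigest()

# Build the lookup table: hash -> plate.
lookup = {
    md5_hex(f"{letter}{number:04d}"): f"{letter}{number:04d}"
    for letter in ascii_uppercase
    for number in range(10000)
}

# An "anonymized" record from the released data (hypothetical value).
anonymized_hash = md5_hex("Q1234")
print(lookup[anonymized_hash])      # recovers the plate: Q1234
```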

Of course, this technique is not news.

Ars Technica article. Hacker News thread.

Posted on June 25, 2014 at 6:36 AM

The Security of the Fortuna PRNG

Providing random numbers on computers can be very difficult. Back in 2003, Niels Ferguson and I designed Fortuna as a secure PRNG. Particularly important is how it collects entropy from various sources and processes on the computer and mixes it all together.
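For readers who haven't seen the design, here is a much-simplified sketch of Fortuna's accumulator idea. The real specification uses an AES-CTR generator, minimum pool sizes, and reseed rate limits; this toy shows only the two core mechanisms: entropy events are spread round-robin over 32 pools, and pool i contributes to every 2^i-th reseed, so even slowly arriving entropy eventually pushes out an attacker who has learned the internal state.

```python
# Much-simplified sketch of Fortuna's entropy accumulator (not the full
# Ferguson/Schneier specification). SHA-256 stands in for the AES-based
# generator, and reseed timing rules are omitted.
import hashlib

NUM_POOLS = 32

class TinyFortuna:
    def __init__(self):
        self.pools = [hashlib.sha256() for _ in range(NUM_POOLS)]
        self.key = b"\x00" * 32
        self.reseed_count = 0
        self.next_pool = 0

    def add_event(self, source_id: int, data: bytes) -> None:
        """Feed a small entropy event into the next pool, round-robin."""
        self.pools[self.next_pool].update(bytes([source_id & 0xff, len(data) & 0xff]) + data)
        self.next_pool = (self.next_pool + 1) % NUM_POOLS

    def reseed(self) -> None:
        """Mix the scheduled pools into the generator key."""
        self.reseed_count += 1
        material = self.key
        for i in range(NUM_POOLS):
            if self.reseed_count % (1 << i) == 0:   # pool i joins every 2**i-th reseed
                material += self.pools[i].digest()
                self.pools[i] = hashlib.sha256()    # empty the pool after use
        self.key = hashlib.sha256(material).digest()

    def random_bytes(self, n: int) -> bytes:
        """Generate output by hashing key + counter (stand-in for AES-CTR)."""
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(self.key + counter.to_bytes(16, "big")).digest()
            counter += 1
        self.key = hashlib.sha256(self.key + b"rekey").digest()  # forward secrecy
        return out[:n]

rng = TinyFortuna()
for i in range(64):
    rng.add_event(source_id=1, data=i.to_bytes(2, "big"))   # e.g. timing jitter
rng.reseed()
print(rng.random_bytes(16).hex())
```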

While Fortuna is widely used, there hadn’t been any real analysis of the system. This has now changed. A new paper by Yevgeniy Dodis, Adi Shamir, Noah Stephens-Davidowitz, and Daniel Wichs provides a theoretical model of entropy collection in PRNGs. They analyze Fortuna, find it good but not optimal, and then propose an optimal construction of their own.

Excellent, and long-needed, research.

Posted on March 11, 2014 at 6:28 AM

NSA Spying: Whom Do You Believe?

On Friday, Reuters reported that RSA entered into a secret contract to make Dual_EC_DRBG the default random number generator in the BSAFE toolkit. Dual_EC_DRBG is now known to have been backdoored by the NSA.

Yesterday, RSA denied it:

Recent press coverage has asserted that RSA entered into a “secret contract” with the NSA to incorporate a known flawed random number generator into its BSAFE encryption libraries. We categorically deny this allegation.

[…]

We made the decision to use Dual EC DRBG as the default in BSAFE toolkits in 2004, in the context of an industry-wide effort to develop newer, stronger methods of encryption. At that time, the NSA had a trusted role in the community-wide effort to strengthen, not weaken, encryption.

We know from both Mark Klein and Edward Snowden—and pretty much everything else about the NSA—that the NSA directly taps the trunk lines of AT&T (and pretty much every other telecom carrier). On Friday, AT&T denied it:

In its statement, AT&T sought to push back against the notion that it provides the government with such access. “We do not allow any government agency to connect directly to our network to gather, review or retrieve our customers’ information,” said Watts.

I’ve written before about how the NSA has corroded our trust in the Internet and communications technologies. The debates over these companies’ statements, and over exactly how they are using and abusing individual words to lie while claiming they are not lying, are a manifestation of that.

Me again:

This sort of thing can destroy our country. Trust is essential in our society. And if we can’t trust either our government or the corporations that have intimate access into so much of our lives, society suffers. Study after study demonstrates the value of living in a high-trust society and the costs of living in a low-trust one.

Rebuilding trust is not easy, as anyone who has betrayed or been betrayed by a friend or lover knows, but the path involves transparency, oversight and accountability. Transparency first involves coming clean. Not a little bit at a time, not only when you have to, but complete disclosure about everything. Then it involves continuing disclosure. No more secret rulings by secret courts about secret laws. No more secret programs whose costs and benefits remain hidden.

Oversight involves meaningful constraints on the NSA, the FBI and others. This will be a combination of things: a court system that acts as a third-party advocate for the rule of law rather than a rubber-stamp organization, a legislature that understands what these organizations are doing and regularly debates requests for increased power, and vibrant public-sector watchdog groups that analyze and debate the government’s actions.

Accountability means that those who break the law, lie to Congress or deceive the American people are held accountable. The NSA has gone rogue, and while it’s probably not possible to prosecute people for what they did under the enormous veil of secrecy it currently enjoys, we need to make it clear that this behavior will not be tolerated in the future. Accountability also means voting, which means voters need to know what our leaders are doing in our name.

This is the only way we can restore trust. A market economy doesn’t work unless consumers can make intelligent buying decisions based on accurate product information. That’s why we have agencies like the FDA, truth-in-packaging laws and prohibitions against false advertising.

We no longer know whom to trust. This is the greatest damage the NSA has done to the Internet, and will be the hardest to fix.

EDITED TO ADD (12/23): The requested removal of an NSA employee from an IETF group co-chairmanship is another manifestation of this mistrust.

Posted on December 23, 2013 at 6:26 AM

Insecurities in the Linux /dev/random

New paper: “Security Analysis of Pseudo-Random Number Generators with Input: /dev/random is not Robust,” by Yevgeniy Dodis, David Pointcheval, Sylvain Ruhault, Damien Vergnaud, and Daniel Wichs.

Abstract: A pseudo-random number generator (PRNG) is a deterministic algorithm that produces numbers whose distribution is indistinguishable from uniform. A formal security model for PRNGs with input was proposed in 2005 by Barak and Halevi (BH). This model involves an internal state that is refreshed with a (potentially biased) external random source, and a cryptographic function that outputs random numbers from the continually refreshed internal state. In this work we extend the BH model to also include a new security property capturing how it should accumulate the entropy of the input data into the internal state after state compromise. This property states that a good PRNG should be able to eventually recover from compromise even if the entropy is injected into the system at a very slow pace, and expresses the real-life expected behavior of existing PRNG designs. Unfortunately, we show that neither the model nor the specific PRNG construction proposed by Barak and Halevi meet this new property, despite meeting a weaker robustness notion introduced by BH. From a practical side, we also give a precise assessment of the security of the two Linux PRNGs, /dev/random and /dev/urandom. In particular, we show several attacks proving that these PRNGs are not robust according to our definition, and do not accumulate entropy properly. These attacks are due to the vulnerabilities of the entropy estimator and the internal mixing function of the Linux PRNGs. These attacks against the Linux PRNG show that it does not satisfy the “robustness” notion of security, but it remains unclear if these attacks lead to actual exploitable vulnerabilities in practice. Finally, we propose a simple and very efficient PRNG construction that is provably robust in our new and stronger adversarial model. We present benchmarks between this construction and the Linux PRNG that show that this construction is on average more efficient when recovering from a compromised internal state and when generating cryptographic keys. We therefore recommend using this construction whenever a PRNG with input is used for cryptography.
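The core of the robustness problem is easier to see in a toy than in the formal model. The sketch below is not the paper's construction or its security game, just an illustration of the state-tracking issue it studies: if a generator whose state has been compromised keeps producing output while entropy arrives only a byte at a time, an attacker can brute-force each refresh and stay synchronized indefinitely.

```python
# Toy illustration (not the paper's model) of why slowly trickled entropy
# doesn't help a compromised PRNG that keeps producing output: the attacker
# brute-forces each low-entropy refresh and never loses track of the state.
import hashlib
import os

def refresh(state: bytes, event: bytes) -> bytes:
    return hashlib.sha256(state + event).digest()

def output(state: bytes) -> bytes:
    return hashlib.sha256(state + b"out").digest()

# The attacker has learned the current state (the compromise).
state = os.urandom(32)
tracked = state

for _ in range(100):
    # Each step, the system adds one byte of "entropy" (256 possibilities)
    # and then immediately hands out random bytes.
    event = os.urandom(1)
    state = refresh(state, event)
    observed = output(state)

    # The attacker tries all 256 possible events and keeps the one whose
    # output matches what it observed.
    tracked = next(
        refresh(tracked, bytes([b]))
        for b in range(256)
        if output(refresh(tracked, bytes([b]))) == observed
    )

print(tracked == state)   # True: 100 bytes of trickled entropy, still compromised
```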

Posted on October 14, 2013 at 1:06 PM

A New Postal Privacy Product

The idea is basically to use indirection to hide physical addresses. You would get a random number to give to your correspondents, and the post office would use that number to determine your real address. No security against government surveillance, but potentially valuable nonetheless.
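Mechanically, the scheme is just a private lookup table keyed by an unguessable identifier. A minimal sketch, with invented names and sizes:

```python
# Minimal sketch of the indirection idea: the carrier keeps a private table
# mapping unguessable random codes to street addresses; correspondents only
# ever see the code.
import secrets

directory = {}                    # held only by the post office

def issue_code(street_address: str) -> str:
    code = secrets.token_hex(8)   # 64-bit random identifier
    directory[code] = street_address
    return code

def route(code: str) -> str:
    return directory[code]        # only the carrier can resolve it

code = issue_code("123 Main St, Anytown")
print("give this to correspondents:", code)
print("carrier delivers to:", route(code))
```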

Here are a bunch of documents.

I honestly have no idea what’s going on. It seems to be something the US government is considering, but it was not proposed by the US Postal Service. This guy is proposing the service.

EDITED TO ADD (10/11): Sai has contacted me and asked that people refrain from linking to or writing about this for now, until he posts some more/better information. I’ll update this post with a new link when he sends it to me.

EDITED TO ADD (10/17): Sai has again contacted me, saying that he has posted the more/better information, and that the one true link for the proposal is here.

Posted on October 9, 2013 at 1:08 PM

Surreptitiously Tampering with Computer Chips

This is really interesting research: “Stealthy Dopant-Level Hardware Trojans.” Basically, you can tamper with a logic gate to be either stuck-on or stuck-off by changing the doping of one transistor. This sort of sabotage is undetectable by functional testing or optical inspection. And it can be done at mask generation—very late in the design process—since it does not require adding circuits, changing the circuit layout, or anything else. All this makes it really hard to detect.

The paper talks about several uses for this type of sabotage, but the most interesting—and devastating—is to modify a chip’s random number generator. This technique could, for example, reduce the amount of entropy in Intel’s hardware random number generator from 128 bits to 32 bits. This could be done without triggering any of the built-in self-tests, without disabling any of the built-in self-tests, and without failing any randomness tests.
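To put that reduction in perspective, here is a sketch of what an attacker does with a “128-bit” value that actually has only 32 bits of entropy. The seed space is shrunk to 2^20 so the demo finishes in seconds; the search is structurally identical at 2^32, which is still well within reach of ordinary hardware.

```python
# What "128 bits of output but only 32 bits of entropy" buys an attacker:
# the key is one of ~4 billion possibilities, a trivial brute force. The
# seed space here is 2**20 so the demo runs in seconds.
import hashlib
import secrets

SEED_BITS = 20                                 # stand-in for the sabotaged 32

def sabotaged_rng(seed: int) -> bytes:
    """Looks like a 128-bit value, but is fully determined by a small seed."""
    return hashlib.sha256(seed.to_bytes(4, "big")).digest()[:16]

# Victim generates a "random" 128-bit key.
key = sabotaged_rng(secrets.randbelow(1 << SEED_BITS))

# Attacker enumerates every possible seed until the key is reproduced.
recovered = next(
    candidate
    for s in range(1 << SEED_BITS)
    if (candidate := sabotaged_rng(s)) == key
)
print(recovered == key)   # True
```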

I have no idea if the NSA convinced Intel to do this with the hardware random number generator it embedded into its CPU chips, but I do know that it could. And I was always leery of Intel strongly pushing for applications to use the output of its hardware RNG directly and not putting it through some strong software PRNG like Fortuna. And now Theodore Ts’o writes this about Linux: “I am so glad I resisted pressure from Intel engineers to let /dev/random rely only on the RDRAND instruction.”
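Here is a sketch of that general mix-don't-trust approach: hardware RNG output goes into a hash pool along with other, independent sources, and callers only ever see the pool's output. The get_rdrand() function is a stand-in for the real instruction, which Python can't issue directly; every other detail is illustrative.

```python
# Sketch of mixing a hardware RNG into a pool instead of exposing it directly.
import hashlib
import os
import time

def get_rdrand() -> bytes:
    """Placeholder for hardware RNG output."""
    return os.urandom(8)

class MixedPool:
    def __init__(self):
        self._pool = hashlib.sha256()

    def stir(self) -> None:
        self._pool.update(get_rdrand())                       # hardware source
        self._pool.update(os.urandom(8))                      # OS entropy
        self._pool.update(time.time_ns().to_bytes(8, "big"))  # timing jitter
        # Even if one source is sabotaged or constant, the output is no more
        # predictable than the best remaining source allows.

    def random_bytes(self, n: int) -> bytes:
        self.stir()
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(self._pool.digest() + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:n]

pool = MixedPool()
print(pool.random_bytes(16).hex())
```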

Yes, this is a conspiracy theory. But I’m not willing to discount such things anymore. That’s the worst thing about the NSA’s actions. We have no idea whom we can trust.

Posted on September 16, 2013 at 1:25 PM

TSA Considering Implementing Randomized Security

For a change, here’s a good idea by the TSA:

TSA has just issued a Request for Information (RFI) to prospective vendors who could develop and supply such randomizers, which TSA expects to deploy at CAT X through CAT IV airports throughout the United States.

“The Randomizers would be used to route passengers randomly to different checkpoint lines,” says the agency’s RFI.

The article lists a bunch of requirements by the TSA for the device.

I’ve seen something like this at customs in, I think, India. Every passenger walks up to a kiosk and presses a button. If the green light turns on, he walks through. If the red light turns on, his bags get searched. Presumably the customs officials can set the search percentage.
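A randomizer like this is only a few lines of code, assuming a cryptographically strong random source so the outcome can't be predicted or influenced. The 10 percent rate below is an invented example:

```python
# Minimal sketch of a screening randomizer with an adjustable selection rate.
import secrets

SEARCH_PERCENT = 10   # example value; presumably officials would tune it

def select_for_search() -> bool:
    """True = red light (search), False = green light (walk through)."""
    return secrets.randbelow(100) < SEARCH_PERCENT

for passenger in range(20):
    print(passenger, "search" if select_for_search() else "proceed")
```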

Automatic randomized screening is a good idea. It’s free from bias or profiling, and it can’t be gamed; both properties make it more secure. Note that this is just an RFI from the TSA. An actual program might be years away, and it might not be implemented well. But it’s certainly a start.

EDITED TO ADD (7/19): This is an opposing view. Basically, it rests on the argument that profiling makes sense and that randomized screening means you can’t profile. It’s a position I’ve argued against before.

EDITED TO ADD (8/10): Another argument that profiling does not work.

Posted on July 19, 2013 at 2:45 PM
