Blog: September 2013 Archives

Senator Feinstein Admits the NSA Taps the Internet Backbone

We know from the Snowden documents (and other sources) that the NSA taps the Internet backbone through secret agreements with major US telcos, but the US government still hasn’t admitted it.

In late August, the Obama administration declassified a ruling from the Foreign Intelligence Surveillance Court. Footnote 3 reads:

The term ‘upstream collection’ refers to NSA’s interception of Internet communications as they transit [LONG REDACTED CLAUSE], [REDACTED], rather than to acquisitions directly from Internet service providers such as [LIST OF REDACTED THINGS, PRESUMABLY THE PRISM DOWNSTREAM COMPANIES].

Here’s one analysis of the document.

On Thursday, Senator Dianne Feinstein filled in some of the details:

Upstream collection…occurs when NSA obtains internet communications, such as e-mails, from certain US companies that operate the Internet background [sic, she means “backbone”], i.e., the companies that own and operate the domestic telecommunications lines over which internet traffic flows.

Note that we knew this in 2006:

One thing the NSA wanted was access to the growing fraction of global telecommunications that passed through junctions on U.S. territory. According to former senator Bob Graham (D-Fla.), who chaired the Intelligence Committee at the time, briefers told him in Cheney’s office in October 2002 that Bush had authorized the agency to tap into those junctions. That decision, Graham said in an interview first reported in The Washington Post on Dec. 18, allowed the NSA to intercept “conversations that . . . went through a transit facility inside the United States.”

And this in 2007:

[The Program] requires the NSA, as noted by Rep. Peter Hoekstra, “to steal light off of different cables” in order to acquire the “information that’s most important to us” Interview with Rep. Peter Hoekstra by Paul Gigot, Lack of Intelligence: Congress Dawdles on Terrorist Wiretapping, JOURNAL EDITORIAL REPORT, FOX NEWS CHANNEL (Aug. 6, 2007) at 2.

So we knew it already, but now we know it even more. So why won’t President Obama admit it?

EDITED TO ADD (9/28): Another article on this.

EDITED TO ADD (9/30): Also, there’s Mark Klein’s revelations from 2006.

Posted on September 28, 2013 at 6:10 AM • 112 Comments

Friday Squid Blogging: A Squid that Fishes

The Grimalditeuthis bonplandi is the only known squid to use its tentacles to fish:

Its tentacles are thin and fragile, and almost always break off when it’s captured. For ages, people thought it lacked tentacles altogether until a full specimen was found in the stomach of a fish. Weirder still, its clubs have neither suckers nor hooks. Instead, they are flanked by a pair of leaf-shaped membranes. Why?

Now, after observing a live individual off the coast of California, Hendrik-Jan Hoving from the Monterey Bay Aquarium Research Institute (MBARI) in California thinks he knows how the squid uses its feeble tentacles. They’re not grasping limbs, but fishing lures. By waving the membranes, the squid uses its clubs to mimic the movements of small animals and attract its prey.

Academic paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on September 27, 2013 at 4:53 PM • 79 Comments

Paradoxes of Big Data

Interesting paper: “Three Paradoxes of Big Data,” by Neil M. Richards and Jonathan H. King, Stanford Law Review Online, 2013.

Abstract: Big data is all the rage. Its proponents tout the use of sophisticated analytics to mine large data sets for insight as the solution to many of our society’s problems. These big data evangelists insist that data-driven decisionmaking can now give us better predictions in areas ranging from college admissions to dating to hiring to medicine to national security and crime prevention. But much of the rhetoric of big data contains no meaningful analysis of its potential perils, only the promise. We don’t deny that big data holds substantial potential for the future, and that large dataset analysis has important uses today. But we would like to sound a cautionary note and pause to consider big data’s potential more critically. In particular, we want to highlight three paradoxes in the current rhetoric about big data to help move us toward a more complete understanding of the big data picture. First, while big data pervasively collects all manner of private information, the operations of big data itself are almost entirely shrouded in legal and commercial secrecy. We call this the Transparency Paradox. Second, though big data evangelists talk in terms of miraculous outcomes, this rhetoric ignores the fact that big data seeks to identify at the expense of individual and collective identity. We call this the Identity Paradox. And third, the rhetoric of big data is characterized by its power to transform society, but big data has power effects of its own, which privilege large government and corporate entities at the expense of ordinary individuals. We call this the Power Paradox. Recognizing the paradoxes of big data, which show its perils alongside its potential, will help us to better understand this revolution. It may also allow us to craft solutions to produce a revolution that will be as good as its evangelists predict.

EDITED TO ADD (10/11): Here’s an HTML version of the paper.

Posted on September 26, 2013 at 6:58 AM • 47 Comments

Apple's iPhone Fingerprint Reader Successfully Hacked

Nice hack from the Chaos Computer Club:

The method follows the steps outlined in this how-to with materials that can be found in almost every household: First, the fingerprint of the enrolled user is photographed with 2400 dpi resolution. The resulting image is then cleaned up, inverted and laser printed with 1200 dpi onto transparent sheet with a thick toner setting. Finally, pink latex milk or white woodglue is smeared into the pattern created by the toner onto the transparent sheet. After it cures, the thin latex sheet is lifted from the sheet, breathed on to make it a tiny bit moist and then placed onto the sensor to unlock the phone. This process has been used with minor refinements and variations against the vast majority of fingerprint sensors on the market.

I’m not surprised. In my essay on Apple’s technology, I wrote: “I’m sure that someone with a good enough copy of your fingerprint and some rudimentary materials engineering capability—or maybe just a good enough printer—can authenticate his way into your iPhone.”

I don’t agree with CCC’s conclusion, though:

“We hope that this finally puts to rest the illusions people have about fingerprint biometrics. It is plain stupid to use something that you can’t change and that you leave everywhere every day as a security token”, said Frank Rieger, spokesperson of the CCC. “The public should no longer be fooled by the biometrics industry with false security claims. Biometrics is fundamentally a technology designed for oppression and control, not for securing everyday device access.”

Apple is trying to balance security with convenience. This is a cell phone, not an ICBM launcher or even a bank account withdrawal device. Apple is offering an option to replace a four-digit PIN—something that a lot of iPhone users don’t even bother with—with a fingerprint. Despite its drawbacks, I think it’s a good trade-off for a lot of people.

EDITED TO ADD (10/13): The print for the CCC hack was lifted from the iPhone.

Posted on September 24, 2013 at 9:20 AM • 71 Comments

NSA Job Opening

The NSA is looking for a Civil Liberties & Privacy Officer. It appears to be an internal posting.

The NSA Civil Liberties & Privacy Officer (CLPO) is conceived as a completely new role, combining the separate responsibilities of NSA’s existing Civil Liberties and Privacy (CL/P) protection programs under a single official. The CLPO will serve as the primary advisor to the Director of NSA for ensuring that privacy is protected and civil liberties are maintained by all of NSA’s missions, programs, policies and technologies. This new position is focused on the future, designed to directly enhance decision making and to ensure that CL/P protections continue to be baked into NSA’s future operations, technologies, tradecraft, and policies. The NSA CLPO will consult regularly with the Office of the Director of National Intelligence CLPO, privacy and civil liberties officials from the Department of Defense and the Department of Justice, as well as other U.S. government, private sector, public advocacy groups and foreign partners.

EDITED TO ADD (9/23): Better link here that allows new registration for prospective applicants—it’s Job ID 1039797.

Posted on September 23, 2013 at 1:14 PM • 47 Comments

Metadata Equals Surveillance

Back in June, when the contents of Edward Snowden’s cache of NSA documents were just starting to be revealed and we learned about the NSA collecting phone metadata of every American, many people—including President Obama—discounted the seriousness of the NSA’s actions by saying that it’s just metadata.

Lots and lots of people effectively demolished that trivialization, but the arguments are generally subtle and hard to convey quickly and simply. I have a more compact argument: metadata equals surveillance.

Imagine you hired a detective to eavesdrop on someone. He might plant a bug in their office. He might tap their phone. He might open their mail. The result would be the details of that person’s communications. That’s the “data.”

Now imagine you hired that same detective to surveil that person. The result would be details of what he did: where he went, who he talked to, what he looked at, what he purchased—how he spent his day. That’s all metadata.

When the government collects metadata on people, the government puts them under surveillance. When the government collects metadata on the entire country, they put everyone under surveillance. When Google does it, they do the same thing. Metadata equals surveillance; it’s that simple.

EDITED TO ADD (10/12): According to Snowden, the administration is partially basing its bulk collection of metadata on an interpretation by the FISC of Section 215 of the Patriot Act.

EDITED TO ADD (10/28): This post has been translated into Portuguese.

Posted on September 23, 2013 at 6:21 AM • 126 Comments

Friday Squid Blogging: How Bacteria Terraform a Squid

Fascinating:

The bacterium Vibrio fischeri is a squid terraformer. Although it can live independently in seawater, it also colonises the body of the adorable Hawaiian bobtail squid. The squid nourishes the bacteria with nutrients and the bacteria, in turn, act as an invisibility cloak. They produce a dim light that matches the moonlight shining down from above, masking the squid’s silhouette from predators watching from below. With its light-emitting microbes, the squid becomes less visible.

Margaret McFall-Ngai from the University of Wisconsin has been studying this partnership for almost 25 years and her team, led by postdoc Natacha Kremer, have now uncovered its very first moments. They’ve shown how the incoming bacteria activate the squid’s genes to create a world that’s more suitable for their kind. And remarkably, it takes just five of these microbial pioneers to start the terraforming (teuthoforming?) process.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on September 20, 2013 at 4:25 PM • 63 Comments

Legally Justifying NSA Surveillance of Americans

Kit Walsh has an interesting blog post where he looks at how existing law can be used to justify the surveillance of Americans.

Just to challenge ourselves, we’ll ignore the several statutory provisions and other doctrines that allow for spying without court oversight, such as urgent collection, gathering information not considered protected by the Fourth Amendment, the wartime spying provision, or the president’s “inherent authority” for warrantless spying. Let’s also ignore the fact that we have general wiretaps ala the Verizon order on phone metadata and Internet traffic that we can fish through in secret. Let’s actually try to get this by the FISA Court under 50 U.S.C. §§ 1801-1805 for electronic surveillance or § 1861 for documents and records.

Posted on September 20, 2013 at 12:01 PM • 24 Comments

Google Knows Every Wi-Fi Password in the World

This article points out that as people log into Wi-Fi networks from their Android phones and back up those passwords, along with everything else, to Google’s cloud, Google is amassing an enormous database of the world’s Wi-Fi passwords. And while it’s not every Wi-Fi password in the world, it’s almost certainly a large percentage of them.

Leaving aside Google’s intentions regarding this database, it is certainly something that the US government could force Google to turn over with a National Security Letter.

Something else to think about.

Posted on September 20, 2013 at 7:05 AM • 92 Comments

Yochai Benkler on the NSA

Excellent essay:

We have learned that in pursuit of its bureaucratic mission to obtain signals intelligence in a pervasively networked world, the NSA has mounted a systematic campaign against the foundations of American power: constitutional checks and balances, technological leadership, and market entrepreneurship. The NSA scandal is no longer about privacy, or a particular violation of constitutional or legislative obligations. The American body politic is suffering a severe case of auto-immune disease: our defense system is attacking other critical systems of our body.

Posted on September 18, 2013 at 7:06 AM • 59 Comments

The Limitations of Intelligence

We recently learned that US intelligence agencies had at least three days’ warning that Syrian President Bashar al-Assad was preparing to launch a chemical attack on his own people, but weren’t able to stop it. At least that’s what an intelligence briefing from the White House reveals. With the combined abilities of our national intelligence apparatus—the CIA, NSA, National Reconnaissance Office and all the rest—it’s not surprising that we had advance notice. It’s not known whether the US shared what it knew.

More interestingly, the US government did not choose to act on that knowledge (for example, launch a preemptive strike), which left some wondering why.

There are several possible explanations, all of which point to a fundamental problem with intelligence information and our national intelligence apparatuses.

The first possibility is that we may have had the data, but didn’t fully understand what it meant. This is the proverbial connect-the-dots problem. As we’ve learned again and again, connecting the dots is hard. Our intelligence services collect billions of individual pieces of data every day. After the fact, it’s easy to walk backward through the data and notice all the individual pieces that point to what actually happened. Before the fact, though, it’s much more difficult. The overwhelming majority of those bits of data point in random directions, or nowhere at all. Almost all the dots don’t connect to anything.

Rather than thinking of intelligence as a connect-the-dots picture, think of it as a million unnumbered pictures superimposed on top of each other. Which picture is the relevant one? We have no idea. Turning that data into actual information is an extraordinarily difficult problem, and one that the vast scope of our data-gathering programs makes even more difficult.

The second possible explanation is that while we had some information about al-Assad’s plans, we didn’t have enough confirmation to act on that information. This is probably the most likely explanation. We can’t act on inklings, hunches, or possibilities. We probably can’t even act on probabilities; we have to be sure. But when it comes to intelligence, it’s hard to be sure. There could always be something else going on—something we’re not able to eavesdrop on, spy on, or see from our satellites. Again, our knowledge is most obvious after the fact.

The third is that while we were sure of our information, we couldn’t act because that would reveal “sources and methods.” This is probably the most frustrating explanation. Imagine we are able to eavesdrop on al-Assad’s most private conversations with his generals and aides, and are absolutely sure of his plans. If we act on them, we reveal that we are eavesdropping. As a result, he’s likely to change how he communicates, costing us our ability to eavesdrop. It might sound perverse, but often the fact that we are able to successfully spy on someone is a bigger secret than the information we learn from that spying.

This dynamic was vitally important during World War II. During the war, the British were able to break the German Enigma encryption machine and eavesdrop on German military communications. But while the Allies knew a lot, they would only act on information they learned when there was another plausible way they could have learned it. They even occasionally manufactured plausible explanations. It was just too risky to tip the Germans off that their encryption machines’ code had been broken.

The fourth possibility is that there was nothing useful we could have done. And it is hard to imagine how we could have prevented the use of chemical weapons in Syria. We couldn’t have launched a preemptive strike, and it’s probable that it wouldn’t have been effective. The only feasible action would be to alert the opposition—and that, too, might not have accomplished anything. Or perhaps there wasn’t sufficient agreement for any one course of action—so, by default, nothing was done.

All of these explanations point out the limitations of intelligence. The NSA serves as an example. The agency measures its success by the amount of data collected, not by information synthesized or knowledge gained. But it’s knowledge that matters.

The NSA’s belief that more data is always good, and that it’s worth doing anything in order to collect it, is wrong. There are diminishing returns, and the NSA almost certainly passed that point long ago. But the idea of trade-offs does not seem to be part of its thinking.

The NSA missed the Boston Marathon bombers, even though the suspects left a really sloppy Internet trail and the older brother was on the terrorist watch list. With all the eavesdropping the NSA is doing on the world, you would think the least it could manage would be keeping track of people on the terrorist watch list. Apparently not.

I don’t know how the CIA measures its success, but it failed to predict the end of the Cold War.

More data does not necessarily mean better information. It’s much easier to look backward than to predict. Information does not necessarily enable the government to act. Even when we know something, protecting the methods of collection can be more valuable than the possibility of taking action based on gathered information. But there’s not a lot of value to intelligence that can’t be used for action. These are the paradoxes of intelligence, and it’s time we started remembering them.

Of course, we need organizations like the CIA, the NSA, the NRO and all the rest. Intelligence is a vital component of national security, and can be invaluable in both wartime and peacetime. But it is just one security tool among many, and there are significant costs and limitations.

We’ve just learned from the recently leaked “black budget” that we’re spending $52 billion annually on national intelligence. We need to take a serious look at what kind of value we’re getting for our money, and whether it’s worth it.

This essay previously appeared on CNN.com.

Posted on September 17, 2013 at 6:15 AM • 70 Comments

Surreptitiously Tampering with Computer Chips

This is really interesting research: “Stealthy Dopant-Level Hardware Trojans.” Basically, you can tamper with a logic gate to be either stuck-on or stuck-off by changing the doping of one transistor. This sort of sabotage is undetectable by functional testing or optical inspection. And it can be done at mask generation—very late in the design process—since it does not require adding circuits, changing the circuit layout, or anything else. All this makes it really hard to detect.

The paper talks about several uses for this type of sabotage, but the most interesting—and devastating—is to modify a chip’s random number generator. This technique could, for example, reduce the amount of entropy in Intel’s hardware random number generator from 128 bits to 32 bits. This could be done without triggering any of the built-in self-tests, without disabling any of the built-in self-tests, and without failing any randomness tests.
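
To make the scale of that weakening concrete, here is a minimal, hypothetical Python sketch (it is not Intel’s design or the paper’s dopant-level circuit): if a generator’s entire output is a deterministic function of only 32 bits of hidden state, an attacker who observes a little output can recover that state by exhaustive search, since 2^32 candidates is a trivial amount of work for modern hardware.

    import hashlib

    def sabotaged_rng(seed32, nbytes):
        # All "randomness" is a deterministic function of a 32-bit seed.
        out = b""
        counter = 0
        while len(out) < nbytes:
            out += hashlib.sha256(seed32.to_bytes(4, "big") +
                                  counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:nbytes]

    def recover_seed(observed, search_limit):
        # 2**32 candidates is a small search for compiled code or a GPU;
        # the limit here only keeps this Python sketch fast.
        for candidate in range(search_limit):
            if sabotaged_rng(candidate, len(observed)) == observed:
                return candidate
        return None

    secret_seed = 123456                        # the hidden 32-bit state
    observed = sabotaged_rng(secret_seed, 16)   # attacker sees 16 "random" bytes
    print(recover_seed(observed, 1000000))      # prints 123456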

I have no idea if the NSA convinced Intel to do this with the hardware random number generator it embedded into its CPU chips, but I do know that it could. And I was always leery of Intel strongly pushing for applications to use the output of its hardware RNG directly and not putting it through some strong software PRNG like Fortuna. And now Theodore Ts’o writes this about Linux: “I am so glad I resisted pressure from Intel engineers to let /dev/random rely only on the RDRAND instruction.”
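
The design principle behind that resistance can be sketched briefly. The following is an illustration of the mixing idea only, not the actual /dev/random or Fortuna code, and the function names are invented: output from the hardware RNG is hashed into a pool together with other sources, so even a fully backdoored hardware source cannot push the result’s entropy below what the honest inputs contribute.

    import hashlib
    import os
    import time

    def read_hardware_rng(nbytes):
        # Stand-in for RDRAND output; a kernel would issue the CPU
        # instruction here. Treat this source as potentially untrustworthy.
        return os.urandom(nbytes)

    def mixed_random(nbytes):
        pool = hashlib.sha512()
        pool.update(read_hardware_rng(64))                       # possibly backdoored source
        pool.update(os.urandom(64))                              # OS entropy
        pool.update(time.perf_counter_ns().to_bytes(8, "big"))   # a little timing jitter
        out = b""
        counter = 0
        while len(out) < nbytes:
            out += hashlib.sha512(pool.digest() +
                                  counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:nbytes]

    print(mixed_random(32).hex())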

Yes, this is a conspiracy theory. But I’m not willing to discount such things anymore. That’s the worst thing about the NSA’s actions. We have no idea whom we can trust.

Posted on September 16, 2013 at 1:25 PM • 243 Comments

Reforming the NSA

Leaks from the whistleblower Edward Snowden have catapulted the NSA into newspaper headlines and demonstrated that it has become one of the most powerful government agencies in the country. From the secret court rulings that allow it to collect data on all Americans to its systematic subversion of the entire Internet as a surveillance platform, the NSA has amassed an enormous amount of power.

There are two basic schools of thought about how this came to pass. The first focuses on the agency’s power. Like J. Edgar Hoover, NSA Director Keith Alexander has become so powerful as to be above the law. He is able to get away with what he does because neither political party—and nowhere near enough individual lawmakers—dare cross him. Longtime NSA watcher James Bamford recently quoted a CIA official: “We jokingly referred to him as Emperor Alexander—with good cause, because whatever Keith wants, Keith gets.”

Possibly the best evidence for this position is how well Alexander has weathered the Snowden leaks. The NSA’s most intimate secrets are front-page headlines, week after week. Morale at the agency is in shambles. Revelation after revelation has demonstrated that Alexander has exceeded his authority, deceived Congress, and possibly broken the law. Tens of thousands of additional top-secret documents are still to come. Alexander has admitted that he still doesn’t know what Snowden took with him and wouldn’t have known about the leak at all had Snowden not gone public. He has no idea who else might have stolen secrets before Snowden, or who such insiders might have provided them to. Alexander had no contingency plans in place to deal with this sort of security breach, and even now—four months after Snowden fled the country—still has no coherent response to all this.

For an organization that prides itself on secrecy and security, this is what failure looks like. It is a testament to Alexander’s power that he still has a job.

The second school of thought is that it’s the administration’s fault—not just the present one, but the most recent several. According to this theory, the NSA is simply doing its job. If there’s a problem with the NSA’s actions, it’s because the rules it’s operating under are bad. Like the military, the NSA is merely an instrument of national policy. Blaming the NSA for creating a surveillance state is comparable to blaming the US military for the conduct of the Iraq war. Alexander is performing the mission given to him as best he can, under the rules he has been given, with the sort of zeal you’d expect from someone promoted into that position. And the NSA’s power predated his directorship.

Former NSA Director Michael Hayden exemplifies this in a quote from late July: “Give me the box you will allow me to operate in. I’m going to play to the very edges of that box.”

This doesn’t necessarily mean the administration is deliberately giving the NSA too big a box. More likely, it’s simply that the laws aren’t keeping pace with technology. Every year, technology gives us possibilities that our laws simply don’t cover clearly. And whenever there’s a gray area, the NSA interprets whatever law there is to give them the most expansive authority. They simply run rings around the secret court that rules on these things. My guess is that while they have clearly broken the spirit of the law, it’ll be harder to demonstrate that they broke the letter of the law.

In football terms, the first school of thought says the NSA is out of bounds. The second says the field is too big. I believe that both perspectives have some truth to them, and that the real problem comes from their combination.

Regardless of how we got here, the NSA can’t reform itself. Change cannot come from within; it has to come from above. It’s the job of government: of Congress, of the courts, and of the president. These are the people who have the ability to investigate how things became so bad, rein in the rogue agency, and establish new systems of transparency, oversight, and accountability.

Any solution we devise will make the NSA less efficient at its eavesdropping job. That’s a trade-off we should be willing to make, just as we accept reduced police efficiency caused by requiring warrants for searches and warning suspects that they have the right to an attorney before answering police questions. We do this because we realize that a too-powerful police force is itself a danger, and we need to balance our need for public safety with our aversion to a police state.

The same reasoning needs to apply to the NSA. We want it to eavesdrop on our enemies, but it needs to do so in a way that doesn’t trample on the constitutional rights of Americans, or fundamentally jeopardize their privacy or security. This means that sometimes the NSA won’t get to eavesdrop, just as the protections we put in place to restrain police sometimes result in a criminal getting away. This is a trade-off we need to make willingly and openly, because overall we are safer that way.

Once we do this, there needs to be a cultural change within the NSA. Like at the FBI and CIA after past abuses, the NSA needs new leadership committed to changing its culture. And giving up power.

Our society can handle the occasional terrorist act; we’re resilient, and—if we decided to act that way—indomitable. But a government agency that is above the law… it’s hard to see how America and its freedoms can survive that.

This essay previously appeared on TheAtlantic.com, with the unfortunate title of “Zero Sum: Americans Must Sacrifice Some Security to Reform the NSA.” After I complained, they changed the title to “The NSA-Reform Paradox: Stop Domestic Spying, Get More Security.”

Posted on September 16, 2013 at 6:55 AM • 62 Comments

Take Back the Internet

Government and industry have betrayed the Internet, and us.

By subverting the Internet at every level to make it a vast, multi-layered and robust surveillance platform, the NSA has undermined a fundamental social contract. The companies that build and manage our Internet infrastructure, the companies that create and sell us our hardware and software, or the companies that host our data: we can no longer trust them to be ethical Internet stewards.

This is not the Internet the world needs, or the Internet its creators envisioned. We need to take it back.

And by we, I mean the engineering community.

Yes, this is primarily a political problem, a policy matter that requires political intervention.

But this is also an engineering problem, and there are several things engineers can—and should—do.

One, we should expose. If you do not have a security clearance, and if you have not received a National Security Letter, you are not bound by federal confidentiality requirements or a gag order. If you have been contacted by the NSA to subvert a product or protocol, you need to come forward with your story. Your obligations to your employer don’t cover illegal or unethical activity. If you work with classified data and are truly brave, expose what you know. We need whistleblowers.

We need to know exactly how the NSA and other agencies are subverting routers, switches, the Internet backbone, encryption technologies and cloud systems. I already have five stories from people like you, and I’ve just started collecting. I want 50. There’s safety in numbers, and this form of civil disobedience is the moral thing to do.

Two, we can design. We need to figure out how to re-engineer the Internet to prevent this kind of wholesale spying. We need new techniques to prevent communications intermediaries from leaking private information.

We can make surveillance expensive again. In particular, we need open protocols, open implementations, open systems—these will be harder for the NSA to subvert.

The Internet Engineering Task Force, the group that defines the standards that make the internet run, has a meeting planned for early November in Vancouver. This group needs to dedicate its next meeting to this task. This is an emergency, and demands an emergency response.

Three, we can influence governance. I have resisted saying this up to now, and I am saddened to say it, but the US has proved to be an unethical steward of the Internet. The UK is no better. The NSA’s actions are legitimizing the internet abuses by China, Russia, Iran and others. We need to figure out new means of internet governance, ones that make it harder for powerful tech countries to monitor everything. For example, we need to demand transparency, oversight, and accountability from our governments and corporations.

Unfortunately, this is going to play directly into the hands of totalitarian governments that want to control their country’s Internet for even more extreme forms of surveillance. We need to figure out how to prevent that, too. We need to avoid the mistakes of the International Telecommunication Union, which has become a forum to legitimize bad government behavior, and create truly international governance that can’t be dominated or abused by any one country.

Generations from now, when people look back on these early decades of the Internet, I hope they will not be disappointed in us. We can ensure that they don’t only if each of us makes this a priority, and engages in the debate. We have a moral duty to do this, and we have no time to lose.

Dismantling the surveillance state won’t be easy. Has any country that engaged in mass surveillance of its own citizens voluntarily given up that capability? Has any mass surveillance country avoided becoming totalitarian? Whatever happens, we’re going to be breaking new ground.

Again, the politics of this is a bigger task than the engineering, but the engineering is critical. We need to demand that real technologists be involved in any key government decision making on these issues. We’ve had enough of lawyers and politicians not fully understanding technology; we need technologists at the table when we build tech policy.

To the engineers, I say this: we built the Internet, and some of us have helped to subvert it. Now, those of us who love liberty have to fix it.

This essay previously appeared in the Guardian.

EDITED TO ADD: Slashdot thread. An opposing view to my call to action. And I agree with this, even though the author presents this as an opposing view to mine.

EDITED TO ADD: This essay has been translated into German.

Posted on September 15, 2013 at 11:53 AM • 66 Comments

How to Remain Secure Against the NSA

Now that we have enough details about how the NSA eavesdrops on the Internet, including today’s disclosures of the NSA’s deliberate weakening of cryptographic systems, we can finally start to figure out how to protect ourselves.

For the past two weeks, I have been working with the Guardian on NSA stories, and have read hundreds of top-secret NSA documents provided by whistleblower Edward Snowden. I wasn’t part of today’s story—it was in process well before I showed up—but everything I read confirms what the Guardian is reporting.

At this point, I feel I can provide some advice for keeping secure against such an adversary.

The primary way the NSA eavesdrops on Internet communications is in the network. That’s where their capabilities best scale. They have invested in enormous programs to automatically collect and analyze network traffic. Anything that requires them to attack individual endpoint computers is significantly more costly and risky for them, and they will do those things carefully and sparingly.

Leveraging its secret agreements with telecommunications companies—all the US and UK ones, and many other “partners” around the world—the NSA gets access to the communications trunks that move Internet traffic. In cases where it doesn’t have that sort of friendly access, it does its best to surreptitiously monitor communications channels: tapping undersea cables, intercepting satellite communications, and so on.

That’s an enormous amount of data, and the NSA has equivalently enormous capabilities to quickly sift through it all, looking for interesting traffic. “Interesting” can be defined in many ways: by the source, the destination, the content, the individuals involved, and so on. This data is funneled into the vast NSA system for future analysis.

The NSA collects much more metadata about Internet traffic: who is talking to whom, when, how much, and by what mode of communication. Metadata is a lot easier to store and analyze than content. It can be extremely personal to the individual, and is enormously valuable intelligence.

The Signals Intelligence Directorate is in charge of data collection, and the resources it devotes to this are staggering. I read status report after status report about these programs, discussing capabilities, operational details, planned upgrades, and so on. Each individual problem—recovering electronic signals from fiber, keeping up with the terabyte streams as they go by, filtering out the interesting stuff—has its own group dedicated to solving it. Its reach is global.

The NSA also attacks network devices directly: routers, switches, firewalls, etc. Most of these devices have surveillance capabilities already built in; the trick is to surreptitiously turn them on. This is an especially fruitful avenue of attack; routers are updated less frequently, tend not to have security software installed on them, and are generally ignored as a vulnerability.

The NSA also devotes considerable resources to attacking endpoint computers. This kind of thing is done by its TAO—Tailored Access Operations—group. TAO has a menu of exploits it can serve up against your computer—whether you’re running Windows, Mac OS, Linux, iOS, or something else—and a variety of tricks to get them on to your computer. Your anti-virus software won’t detect them, and you’d have trouble finding them even if you knew where to look. These are hacker tools designed by hackers with an essentially unlimited budget. What I took away from reading the Snowden documents was that if the NSA wants in to your computer, it’s in. Period.

The NSA deals with any encrypted data it encounters more by subverting the underlying cryptography than by leveraging any secret mathematical breakthroughs. First, there’s a lot of bad cryptography out there. If it finds an Internet connection protected by MS-CHAP, for example, that’s easy to break and recover the key. It exploits poorly chosen user passwords, using the same dictionary attacks hackers use in the unclassified world.
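
A dictionary attack is nothing exotic; here is a toy Python sketch of the generic, unclassified technique (the hash, password, and wordlist are invented for illustration and say nothing about the NSA’s actual tooling):

    import hashlib

    # A captured, unsalted password hash; real attacks use huge wordlists,
    # mangling rules, and GPUs, but the idea is the same.
    captured_hash = hashlib.md5(b"letmein1").hexdigest()

    wordlist = ["password", "123456", "qwerty", "letmein1", "monkey"]

    def dictionary_attack(target_hash, candidates):
        for word in candidates:
            if hashlib.md5(word.encode()).hexdigest() == target_hash:
                return word
        return None

    print(dictionary_attack(captured_hash, wordlist))   # prints: letmein1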

As was revealed today, the NSA also works with security product vendors to ensure that commercial encryption products are broken in secret ways that only it knows about. We know this has happened historically: CryptoAG and Lotus Notes are the most public examples, and there is evidence of a back door in Windows. A few people have told me some recent stories about their experiences, and I plan to write about them soon. Basically, the NSA asks companies to subtly change their products in undetectable ways: making the random number generator less random, leaking the key somehow, adding a common exponent to a public-key exchange protocol, and so on. If the back door is discovered, it’s explained away as a mistake. And as we now know, the NSA has enjoyed enormous success from this program.

TAO also hacks into computers to recover long-term keys. So if you’re running a VPN that uses a complex shared secret to protect your data and the NSA decides it cares, it might try to steal that secret. This kind of thing is only done against high-value targets.

How do you communicate securely against such an adversary? Snowden said it in an online Q&A soon after he made his first document public: “Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on.”

I believe this is true, despite today’s revelations and tantalizing hints of “groundbreaking cryptanalytic capabilities” made by James Clapper, the director of national intelligence, in another top-secret document. Those capabilities involve deliberately weakening the cryptography.

Snowden’s follow-on sentence is equally important: “Unfortunately, endpoint security is so terrifically weak that NSA can frequently find ways around it.”

Endpoint means the software you’re using, the computer you’re using it on, and the local network you’re using it in. If the NSA can modify the encryption algorithm or drop a Trojan on your computer, all the cryptography in the world doesn’t matter at all. If you want to remain secure against the NSA, you need to do your best to ensure that the encryption can operate unimpeded.

With all this in mind, I have five pieces of advice:

  1. Hide in the network. Implement hidden services. Use Tor to anonymize yourself. Yes, the NSA targets Tor users, but it’s work for them. The less obvious you are, the safer you are.
  2. Encrypt your communications. Use TLS. Use IPsec. Again, while it’s true that the NSA targets encrypted connections—and it may have explicit exploits against these protocols—you’re much better protected than if you communicate in the clear.
  3. Assume that while your computer can be compromised, it would take work and risk on the part of the NSA—so it probably isn’t. If you have something really important, use an air gap. Since I started working with the Snowden documents, I bought a new computer that has never been connected to the Internet. If I want to transfer a file, I encrypt the file on the secure computer and walk it over to my Internet computer, using a USB stick. To decrypt something, I reverse the process. (There is a sketch of this encrypt-and-carry step after this list.) This might not be bulletproof, but it’s pretty good.
  4. Be suspicious of commercial encryption software, especially from large vendors. My guess is that most encryption products from large US companies have NSA-friendly back doors, and many foreign ones probably do as well. It’s prudent to assume that foreign products also have foreign-installed backdoors. Closed-source software is easier for the NSA to backdoor than open-source software. Systems relying on master secrets are vulnerable to the NSA, through either legal or more clandestine means.
  5. Try to use public-domain encryption that has to be compatible with other implementations. For example, it’s harder for the NSA to backdoor TLS than BitLocker, because any vendor’s TLS has to be compatible with every other vendor’s TLS, while BitLocker only has to be compatible with itself, giving the NSA a lot more freedom to make changes. And because BitLocker is proprietary, it’s far less likely those changes will be discovered. Prefer symmetric cryptography over public-key cryptography. Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.
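
As mentioned in point 3 above, here is a minimal sketch of the encrypt-and-carry step. It assumes GnuPG is installed and uses its symmetric mode; the file names are placeholders, and this is one reasonable way to do it rather than a prescription.

    import subprocess

    def encrypt_for_transfer(path):
        # Run on the air-gapped machine. Prompts for a passphrase and writes
        # path + ".gpg", which is what goes onto the USB stick.
        subprocess.run(
            ["gpg", "--symmetric", "--cipher-algo", "AES256",
             "--output", path + ".gpg", path],
            check=True,
        )

    def decrypt_after_transfer(encrypted_path, output_path):
        # Run on the Internet-connected machine (or in the reverse direction).
        subprocess.run(
            ["gpg", "--decrypt", "--output", output_path, encrypted_path],
            check=True,
        )

    # encrypt_for_transfer("draft.txt")                    # produces draft.txt.gpg
    # decrypt_after_transfer("draft.txt.gpg", "draft.txt")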

Since I started working with Snowden’s documents, I have been using GPG, Silent Circle, Tails, OTR, TrueCrypt, BleachBit, and a few other things I’m not going to write about. There’s an undocumented encryption feature in my Password Safe program from the command line; I’ve been using that as well.

I understand that most of this is impossible for the typical Internet user. Even I don’t use all these tools for most everything I am working on. And I’m still primarily on Windows, unfortunately. Linux would be safer.

The NSA has turned the fabric of the Internet into a vast surveillance platform, but they are not magical. They’re limited by the same economic realities as the rest of us, and our best defense is to make surveillance of us as expensive as possible.

Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it. That’s how you can remain secure even in the face of the NSA.

This essay previously appeared in the Guardian.

EDITED TO ADD: Reddit thread.

Someone somewhere commented that the NSA’s “groundbreaking cryptanalytic capabilities” could include a practical attack on RC4. I don’t know one way or the other, but that’s a good speculation.

Posted on September 15, 2013 at 8:11 AM • 88 Comments

New NSA Leak Shows MITM Attacks Against Major Internet Services

The Brazilian television show “Fantastico” exposed an NSA training presentation that discusses how the agency runs man-in-the-middle attacks on the Internet. The point of the story was that the NSA engages in economic espionage against Petrobras, the giant Brazilian oil company, but I’m more interested in the tactical details.

The video on the webpage is long, and includes what I assume is a dramatization of an NSA classroom, but a few screen shots are important. The pages from the training presentation describe how the NSA’s MITM attack works:

However, in some cases GCHQ and the NSA appear to have taken a more aggressive and controversial route—on at least one occasion bypassing the need to approach Google directly by performing a man-in-the-middle attack to impersonate Google security certificates. One document published by Fantastico, apparently taken from an NSA presentation that also contains some GCHQ slides, describes “how the attack was done” to apparently snoop on SSL traffic. The document illustrates with a diagram how one of the agencies appears to have hacked into a target’s Internet router and covertly redirected targeted Google traffic using a fake security certificate so it could intercept the information in unencrypted format.

Documents from GCHQ’s “network exploitation” unit show that it operates a program called “FLYING PIG” that was started up in response to an increasing use of SSL encryption by email providers like Yahoo, Google, and Hotmail. The FLYING PIG system appears to allow it to identify information related to use of the anonymity browser Tor (it has the option to query “Tor events”) and also allows spies to collect information about specific SSL encryption certificates.

It’s that first link—also here—that shows the MITM attack against Google and its users.
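
One generic countermeasure to this kind of certificate substitution is pinning: record the fingerprint of the certificate you expect over a path you trust, and refuse to proceed if the server presents anything else. The sketch below is illustrative only, not what Google or the agencies deployed; the hostname and pinned value are placeholders.

    import hashlib
    import ssl

    def server_cert_fingerprint(host, port=443):
        # Fetch whatever certificate the server (or a man-in-the-middle)
        # presents and hash its DER encoding.
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    # Placeholder: the real pin would be recorded out-of-band over a trusted path.
    EXPECTED_FINGERPRINT = "0" * 64

    fingerprint = server_cert_fingerprint("www.google.com")
    if fingerprint != EXPECTED_FINGERPRINT:
        print("WARNING: certificate does not match the pin (possible MITM, or a rotated cert)")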

Another screenshot implies that the 2011 DigiNotar hack was either the work of the NSA or exploited by the NSA.

Here’s another story on this.

Posted on September 13, 2013 at 6:23 AM • 141 Comments

Ed Felten on the NSA Disclosures

Ed Felten has an excellent essay on the damage caused by the NSA secretly breaking the security of Internet systems:

In security, the worst case—the thing you most want to avoid—is thinking you are secure when you’re not. And that’s exactly what the NSA seems to be trying to perpetuate.

Suppose you’re driving a car that has no brakes. If you know you have no brakes, then you can drive very slowly, or just get out and walk. What is deadly is thinking you have the ability to stop, until you stomp on the brake pedal and nothing happens. It’s the same way with security: if you know your communications aren’t secure, you can be careful about what you say; but if you think mistakenly that you’re safe, you’re sure to get in trouble.

So the problem is not (only) that we’re unsafe. It’s that “the N.S.A. wants to keep it that way.” The NSA wants to make sure we remain vulnerable.

Posted on September 12, 2013 at 6:05 AM • 45 Comments

iPhone Fingerprint Authentication

When Apple bought AuthenTec for its biometrics technology—reported as one of its most expensive purchases—there was a lot of speculation about how the company would incorporate biometrics in its product line. Many speculate that the new Apple iPhone to be announced tomorrow will come with a fingerprint authentication system, and there are several ways it could work, such as swiping your finger over a slit-sized reader to have the phone recognize you.

Apple would be smart to add biometric technology to the iPhone. Fingerprint authentication is a good balance between convenience and security for a mobile device.

Biometric systems are seductive, but the reality isn’t that simple. They have complicated security properties. For example, they are not keys. Your fingerprint isn’t a secret; you leave it everywhere you touch.

And fingerprint readers have a long history of vulnerabilities as well. Some are better than others. The simplest ones just check the ridges of a finger; some of those can be fooled with a good photocopy. Others check for pores as well. The better ones verify pulse, or finger temperature. Fooling them with rubber fingers is harder, but often possible. A Japanese researcher had good luck doing this over a decade ago with the gelatin mixture that’s used to make Gummi bears.

The best system I’ve ever seen was at the entry gates of a secure government facility. Maybe you could have fooled it with a fake finger, but a Marine guard with a big gun was making sure you didn’t get the opportunity to try. Disney World uses a similar system at its park gates—but without the Marine guards.

A biometric system that authenticates you and you alone is easier to design than a biometric system that is supposed to identify unknown people. That is, the question “Is this the finger belonging to the owner of this iPhone?” is a much easier question for the system to answer than “Whose finger is this?”

There are two ways an authentication system can fail. It can mistakenly allow an unauthorized person access, or it can mistakenly deny access to an authorized person. In any consumer system, the second failure is far worse than the first. Yes, it can be problematic if an iPhone fingerprint system occasionally allows someone else access to your phone. But it’s much worse if you can’t reliably access your own phone—you’d junk the system after a week.
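
To see that trade-off in numbers, here is a toy sketch with invented match scores: the matcher compares a scanned finger against the enrolled template and produces a similarity score, and the acceptance threshold trades false rejects (the owner locked out) against false accepts (a stranger let in).

    genuine_scores  = [0.91, 0.84, 0.62, 0.88, 0.79, 0.95, 0.58, 0.90]  # owner's finger
    impostor_scores = [0.12, 0.33, 0.41, 0.07, 0.55, 0.28, 0.61, 0.19]  # other people's fingers

    def error_rates(threshold):
        # False reject rate: genuine attempts that score below the threshold.
        frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
        # False accept rate: impostor attempts that score at or above it.
        far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
        return frr, far

    for t in (0.5, 0.7, 0.9):
        frr, far = error_rates(t)
        print("threshold %.1f: false reject rate %.0f%%, false accept rate %.0f%%"
              % (t, frr * 100, far * 100))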

If it’s true that Apple’s new iPhone will have biometric security, the designers have presumably erred on the side of ensuring that the user can always get in. Failures will be more common in cold weather, when your fingers are shriveled from just getting out of the shower, and so on. But there will certainly still be the traditional PIN system to fall back on.

So…can biometric authentication be hacked?

Almost certainly. I’m sure that someone with a good enough copy of your fingerprint and some rudimentary materials engineering capability—or maybe just a good enough printer—can authenticate his way into your iPhone. But, honestly, if some bad guy has your iPhone and your fingerprint, you’ve probably got bigger problems to worry about.

The final problem with biometric systems is the database. If the system is centralized, there will be a large database of biometric information that’s vulnerable to hacking. A system by Apple will almost certainly be local—you authenticate yourself to the phone, not to any network—so there’s no requirement for a centralized fingerprint database.

Apple’s move is likely to bring fingerprint readers into the mainstream. But all applications are not equal. It’s fine if your fingers unlock your phone. It’s a different matter entirely if your fingerprint is used to authenticate your iCloud account. The centralized database required for that application would create an enormous security risk.

This essay previously appeared on Wired.com.

EDITED TO ADD: The new iPhone does have a fingerprint reader.

Posted on September 11, 2013 at 6:43 AM • 69 Comments

The TSA Is Legally Allowed to Lie to Us

The TSA does not have to tell the truth:

Can the TSA (or local governments as directed by the TSA) lie in response to a FOIA request?

Sure, no problem! Even the NSA responds that they “can’t confirm or deny the existence” of classified things for which admitting or denying existence would (allegedly, of course) damage national security. But the TSA? U.S. District Judge Joan A. Lenard granted the TSA the special privilege of not needing to go that route, rubber-stamping the decision of the TSA and the airport authority to write to me that no CCTV footage of the incident existed when, in fact, it did. This footage is non-classified and its existence is admitted by over a dozen visible camera domes and even signage that the area is being recorded. Beyond that, the TSA regularly releases checkpoint video when it doesn’t show them doing something wrong (for example, here’s CCTV of me beating their body scanners). But if it shows evidence of misconduct? Just go ahead and lie.

EDITED TO ADD (9/14): This is an overstatement.

Posted on September 10, 2013 at 6:55 AM • 35 Comments

Government Secrecy and the Generation Gap

Big-government secrets require a lot of secret-keepers. As of October 2012, almost 5m people in the US have security clearances, with 1.4m at the top-secret level or higher, according to the Office of the Director of National Intelligence.

Most of these people do not have access to as much information as Edward Snowden, the former National Security Agency contractor turned leaker, or even Chelsea Manning, the former US army soldier previously known as Bradley who was convicted for giving material to WikiLeaks. But a lot of them do—and that may prove the Achilles heel of government. Keeping secrets is an act of loyalty as much as anything else, and that sort of loyalty is becoming harder to find in the younger generations. If the NSA and other intelligence bodies are going to survive in their present form, they are going to have to figure out how to reduce the number of secrets.

As the writer Charles Stross has explained, the old way of keeping intelligence secrets was to make it part of a life-long culture. The intelligence world would recruit people early in their careers and give them jobs for life. It was a private club, one filled with code words and secret knowledge.

You can see part of this in Mr Snowden’s leaked documents. The NSA has its own lingo—the documents are riddled with codenames—its own conferences, its own awards and recognitions. An intelligence career meant that you had access to a new world, one to which “normal” people on the outside were completely oblivious. Membership of the private club meant people were loyal to their organisations, which were in turn loyal back to them.

Those days are gone. Yes, there are still the codenames and the secret knowledge, but a lot of the loyalty is gone. Many jobs in intelligence are now outsourced, and there is no job-for-life culture in the corporate world any more. Workforces are flexible, jobs are interchangeable and people are expendable.

Sure, it is possible to build a career in the classified world of government contracting, but there are no guarantees. Younger people grew up knowing this: there are no employment guarantees anywhere. They see it in their friends. They see it all around them.

Many will also believe in openness, especially the hacker types the NSA needs to recruit. They believe that information wants to be free, and that security comes from public knowledge and debate. Yes, there are important reasons why some intelligence secrets need to be secret, and the NSA culture reinforces secrecy daily. But this is a crowd that is used to radical openness. They have been writing about themselves on the internet for years. They have said very personal things on Twitter; they have had embarrassing photographs of themselves posted on Facebook. They have been dumped by a lover in public. They have overshared in the most compromising ways—and they have got through it. It is a tougher sell convincing this crowd that government secrecy trumps the public’s right to know.

Psychologically, it is hard to be a whistleblower. There is an enormous amount of pressure to be loyal to our peer group: to conform to their beliefs, and not to let them down. Loyalty is a natural human trait; it is one of the social mechanisms we use to thrive in our complex social world. This is why good people sometimes do bad things at work.

When someone becomes a whistleblower, he or she is deliberately eschewing that loyalty. In essence, they are deciding that allegiance to society at large trumps that to peers at work. That is the difficult part. They know their work buddies by name, but “society at large” is amorphous and anonymous. Believing that your bosses ultimately do not care about you makes that switch easier.

Whistleblowing is the civil disobedience of the information age. It is a way that someone without power can make a difference. And in the information age—when everything is stored on computers and potentially accessible with a few keystrokes and mouse clicks—whistleblowing is easier than ever.

Mr Snowden is 30 years old; Manning 25. They are members of the generation we taught not to expect anything long-term from their employers. As such, employers should not expect anything long-term from them. It is still hard to be a whistleblower, but for this generation it is a whole lot easier.

A lot has been written about the problem of over-classification in US government. It has long been thought of as anti-democratic and a barrier to government oversight. Now we know that it is also a security risk. Organizations such as the NSA need to change their culture of secrecy, and concentrate their security efforts on what truly needs to remain secret. Their default practice of classifying everything is not going to work any more.

Hey, NSA, you’ve got a problem.

This essay previously appeared in the Financial Times.

EDITED TO ADD (9/14): Blog comments on this essay are particularly interesting.

Posted on September 9, 2013 at 1:30 PM • 51 Comments

Excess Automobile Deaths as a Result of 9/11

People commented about a point I made in a recent essay:

In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.

Yes, that’s wrong. Where I said “months,” I should have said “years.”

I got the sound bite from John Mueller and Mark G. Stewart’s book, Terror, Security, and Money. This is footnote 19 from Chapter 1:

The inconvenience of extra passenger screening and added costs at airports after 9/11 cause many short-haul passengers to drive to their destination instead, and, since airline travel is far safer than car travel, this has led to an increase of 500 U.S. traffic fatalities per year. Using DHS-mandated value of statistical life at $6.5 million, this equates to a loss of $3.2 billion per year, or $32 billion over the period 2002 to 2011 (Blalock et al. 2007).

The authors make the same point in this earlier (and shorter) essay:

Increased delays and added costs at U.S. airports due to new security procedures provide incentive for many short-haul passengers to drive to their destination rather than flying, and, since driving is far riskier than air travel, the extra automobile traffic generated has been estimated in one study to result in 500 or more extra road fatalities per year.

The references are:

  • Garrick Blalock, Vrinda Kadiyali, and Daniel H. Simon. 2007. “The Impact of Post-9/11 Airport Security Measures on the Demand for Air Travel.” Journal of Law and Economics 50(4) November: 731–755.
  • Garrick Blalock, Vrinda Kadiyali, and Daniel H. Simon. 2009. “Driving Fatalities after 9/11: A Hidden Cost of Terrorism.” Applied Economics 41(14): 1717–1729.

Business Week makes the same point here.

There’s also this reference:

  • Michael Sivak and Michael J. Flannagan. 2004. “Consequences for road traffic fatalities of the reduction in flying following September 11, 2001.” Transportation Research Part F: Traffic Psychology and Behavior 7 (4).

Abstract: Gigerenzer (Gigerenzer, G. (2004). Dread risk, September 11, and fatal traffic accidents. Psychological Science, 15, 286–287) argued that the increased fear of flying in the U.S. after September 11 resulted in a partial shift from flying to driving on rural interstate highways, with a consequent increase of 353 road traffic fatalities for October through December 2001. We reevaluated the consequences of September 11 by utilizing the trends in road traffic fatalities from 2000 to 2001 for January through August. We also examined which road types and traffic participants contributed most to the increased road fatalities. We conclude that (1) the partial modal shift after September 11 resulted in 1018 additional road fatalities for the three months in question, which is substantially more than estimated by Gigerenzer, (2) the major part of the increased toll occurred on local roads, arguing against a simple modal shift from flying to driving to the same destinations, (3) driver fatalities did not increase more than in proportion to passenger fatalities, and (4) pedestrians and bicyclists bore a disproportionate share of the increased fatalities.

This is another analysis.

Posted on September 9, 2013 at 6:20 AM43 Comments

Conspiracy Theories and the NSA

I’ve recently seen two articles speculating on the NSA’s capability, and practice, of spying on members of Congress and other elected officials. The evidence is all circumstantial and smacks of conspiracy thinking—and I have no idea whether any of it is true or not—but it’s a good illustration of what happens when trust in a public institution fails.

The NSA has repeatedly lied about the extent of its spying program. James R. Clapper, the director of national intelligence, has lied about it to Congress. Top-secret documents provided by Edward Snowden, and reported on by the Guardian and other newspapers, repeatedly show that the NSA’s surveillance systems are monitoring the communications of American citizens. The DEA has used this information to apprehend drug smugglers, then lied about it in court. The IRS has used this information to find tax cheats, then lied about it. It’s even been used to arrest a copyright violator. It seems that every time there is an allegation against the NSA, no matter how outlandish, it turns out to be true.

Guardian reporter Glenn Greenwald has been playing this well, dribbling the information out one scandal at a time. It’s looking more and more as if the NSA doesn’t know what Snowden took. It’s hard for someone to lie convincingly if he doesn’t know what the opposition actually knows.

All of this denying and lying results in us not trusting anything the NSA says, anything the president says about the NSA, or anything companies say about their involvement with the NSA. We know secrecy corrupts, and we see that corruption. There’s simply no credibility, and—the real problem—no way for us to verify anything these people might say.

It’s a perfect environment for conspiracy theories to take root: no trust, assuming the worst, no way to verify the facts. Think JFK assassination theories. Think 9/11 conspiracies. Think UFOs. For all we know, the NSA might be spying on elected officials. Edward Snowden said that he had the ability to spy on anyone in the U.S., in real time, from his desk. His remarks were belittled, but it turns out he was right.

This is not going to improve anytime soon. Greenwald and other reporters are still poring over Snowden’s documents, and will continue to report stories about NSA overreach, lawbreaking, abuses, and privacy violations well into next year. The “independent” review that Obama promised of these surveillance programs will not help, because it will lack both the power to discover everything the NSA is doing and the ability to relay that information to the public.

It’s time to start cleaning up this mess. We need a special prosecutor, one not tied to the military, the corporations complicit in these programs, or the current political leadership, whether Democrat or Republican. This prosecutor needs free rein to go through the NSA’s files and discover the full extent of what the agency is doing, as well as enough technical staff who have the capability to understand it. He needs the power to subpoena government officials and take their sworn testimony. He needs the ability to bring criminal indictments where appropriate. And, of course, he needs the requisite security clearance to see it all.

We also need something like South Africa’s Truth and Reconciliation Commission, where both government and corporate employees can come forward and tell their stories about NSA eavesdropping without fear of reprisal.

Yes, this will overturn the paradigm of keeping everything the NSA does secret, but Snowden and the reporters he’s shared documents with have already done that. The secrets are going to come out, and the journalists doing the outing are not going to be sympathetic to the NSA. If the agency were smart, it’d realize that the best thing it could do would be to get ahead of the leaks.

The result needs to be a public report about the NSA’s abuses, detailed enough that public watchdog groups can be convinced that everything is known. Only then can our country go about cleaning up the mess: shutting down programs, reforming the Foreign Intelligence Surveillance Act system, and reforming surveillance law to make it absolutely clear that even the NSA cannot eavesdrop on Americans without a warrant.

Comparisons are springing up between today’s NSA and the FBI of the 1950s and 1960s, and between NSA Director Keith Alexander and J. Edgar Hoover. We never managed to rein in Hoover’s FBI—it took his death for change to occur. I don’t think we’ll get so lucky with the NSA. While Alexander has enormous personal power, much of his power comes from the institution he leads. When he is replaced, that institution will remain.

Trust is essential for society to function. Without it, conspiracy theories naturally take hold. Even worse, without it we fail as a country and as a culture. It’s time to reinstitute the ideals of democracy: The government works for the people, open government is the best way to protect against government abuse, and a government keeping secrets from its people is a rare exception, not the norm.

This essay originally appeared on TheAtlantic.com.

Posted on September 6, 2013 at 11:08 AM80 Comments

The NSA's Cryptographic Capabilities

The latest Snowden document is the US intelligence “black budget.” There’s a lot of information in the few pages the Washington Post decided to publish, including an introduction by Director of National Intelligence James Clapper. In it, he drops a tantalizing hint: “Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.”

Honestly, I’m skeptical. Whatever the NSA has up its top-secret sleeves, the mathematics of cryptography will still be the most secure part of any encryption system. I worry a lot more about poorly designed cryptographic products, software bugs, bad passwords, companies that collaborate with the NSA to leak all or part of the keys, and insecure computers and networks. Those are where the real vulnerabilities are, and where the NSA spends the bulk of its efforts.

This isn’t the first time we’ve heard this rumor. In a WIRED article last year, longtime NSA-watcher James Bamford wrote:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US.

We have no further information from Clapper, Snowden, or this other source of Bamford’s. But we can speculate.

Perhaps the NSA has some new mathematics that breaks one or more of the popular encryption algorithms: AES, Twofish, Serpent, triple-DES. It wouldn’t be the first time this happened. Back in the 1970s, the NSA knew of a cryptanalytic technique called “differential cryptanalysis” that was unknown in the academic world. That technique broke a variety of academic and commercial algorithms that we all thought secure. We learned better in the early 1990s, and now design algorithms to be resistant to that technique.

It’s very probable that the NSA has newer techniques that remain undiscovered in academia. Even so, such techniques are unlikely to result in a practical attack that can break actual encrypted plaintext.

The naive way to break an encryption algorithm is to brute-force the key. The complexity of that attack is 2^n, where n is the key length. All cryptanalytic attacks can be viewed as shortcuts to that method. And since the efficacy of a brute-force attack is a direct function of key length, these attacks effectively shorten the key. So if, for example, the best attack against DES has a complexity of 2^39, that effectively shortens DES’s 56-bit key by 17 bits.

That’s a really good attack, by the way.

Right now the upper practical limit on brute force is somewhere under 80 bits. However, using that as a guide gives us some indication as to how good an attack has to be to break any of the modern algorithms. These days, encryption algorithms have, at a minimum, 128-bit keys. That means any NSA cryptanalytic breakthrough has to reduce the effective key length by at least 48 bits in order to be practical.
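To make that arithmetic concrete, here is a minimal sketch; the DES numbers and the 80-bit ceiling are the ones from the text, and the rest is just bookkeeping:

    # An attack of complexity 2^c against a k-bit key effectively shortens
    # the key to c bits (a shortcut can never be worse than brute force).
    def effective_key_bits(key_length_bits, attack_complexity_log2):
        return min(key_length_bits, attack_complexity_log2)

    # DES: 56-bit key, best attack around 2^39, so the attack shaves off 17 bits.
    print(56 - effective_key_bits(56, 39))   # 17

    # With brute force practical only below ~80 bits, an attack on a 128-bit
    # algorithm must shave off at least 48 bits before it becomes practical.
    BRUTE_FORCE_LIMIT_BITS = 80
    print(128 - BRUTE_FORCE_LIMIT_BITS)      # 48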

There’s more, though. That DES attack requires an impractical 70 terabytes of known plaintext encrypted with the key we’re trying to break. Other mathematical attacks require similar amounts of data. In order to be effective in decrypting actual operational traffic, the NSA needs an attack that can be executed with the known plaintext in a common MS-Word header: much, much less.

So while the NSA certainly has symmetric cryptanalysis capabilities that we in the academic world do not, converting that into practical attacks on the sorts of data it is likely to encounter seems so impossible as to be fanciful.

More likely is that the NSA has some mathematical breakthrough that affects one or more public-key algorithms. There are a lot of mathematical tricks involved in public-key cryptanalysis, and absolutely no theory that provides any limits on how powerful those tricks can be.

Breakthroughs in factoring have occurred regularly over the past several decades, allowing us to break ever-larger public keys. Much of the public-key cryptography we use today involves elliptic curves, something that is even more ripe for mathematical breakthroughs. It is not unreasonable to assume that the NSA has some techniques in this area that we in the academic world do not. Certainly the fact that the NSA is pushing elliptic-curve cryptography is some indication that it can break them more easily.

If we think that’s the case, the fix is easy: increase the key lengths.

Assuming the hypothetical NSA breakthroughs don’t totally break public-key cryptography—and that’s a very reasonable assumption—it’s pretty easy to stay a few steps ahead of the NSA by using ever-longer keys. We’re already trying to phase out 1024-bit RSA keys in favor of 2048-bit keys. Perhaps we need to jump even further ahead and consider 3072-bit keys. And maybe we should be even more paranoid about elliptic curves and use key lengths above 500 bits.
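As a concrete illustration, moving to longer keys is usually a one-parameter change. A minimal sketch using the Python cryptography package (my choice of library here is an assumption; any mainstream library exposes the same knob):

    from cryptography.hazmat.primitives.asymmetric import rsa

    # Jumping from 2048-bit to 3072-bit RSA is just a parameter change; the
    # price is slower key generation and slower private-key operations.
    # (Older versions of this library also require a backend argument.)
    key_2048 = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    key_3072 = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    print(key_2048.key_size, key_3072.key_size)   # 2048 3072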

One last blue-sky possibility: a quantum computer. Quantum computers are still toys in the academic world, but have the theoretical ability to quickly break common public-key algorithms—regardless of key length—and to effectively halve the key length of any symmetric algorithm. I think it extraordinarily unlikely that the NSA has built a quantum computer capable of performing the magnitude of calculation necessary to do this, but it’s possible. The defense is easy, if annoying: stick with symmetric cryptography based on shared secrets, and use 256-bit keys.
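That symmetric fallback is also straightforward in practice. A sketch, again using the Python cryptography package’s AES-GCM interface (the library and mode are my assumptions, not anything from the original essay):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # A 256-bit AES key retains roughly 128 bits of strength even against
    # Grover's algorithm, which halves the effective symmetric key length.
    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)   # 96-bit nonce; never reuse a nonce with the same key
    ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"attack at dawn"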

There’s a saying inside the NSA: “Cryptanalysis always gets better. It never gets worse.” It’s naive to assume that, in 2013, we have discovered all the mathematical breakthroughs in cryptography that can ever be discovered. There’s a lot more out there, and there will be for centuries.

And the NSA is in a privileged position: It can make use of everything discovered and openly published by the academic world, as well as everything discovered by it in secret.

The NSA has a lot of people thinking about this problem full-time. According to the black budget summary, 35,000 people and $11 billion annually are part of the Department of Defense-wide Consolidated Cryptologic Program. Of that, 4 percent—or $440 million—goes to “Research and Technology.”

That’s an enormous amount of money; probably more than everyone else on the planet spends on cryptography research put together. I’m sure that results in a lot of interesting—and occasionally groundbreaking—cryptanalytic research results, maybe some of it even practical.

Still, I trust the mathematics.

This essay originally appeared on Wired.com.

EDITED TO ADD: That was written before I could talk about this.

EDITED TO ADD: The Economist expresses a similar sentiment.

Posted on September 6, 2013 at 6:30 AM55 Comments

The NSA Is Breaking Most Encryption on the Internet

The new Snowden revelations are explosive. Basically, the NSA is able to decrypt most of the Internet. They’re doing it primarily by cheating, not by mathematics.

It’s joint reporting between the Guardian, the New York Times, and ProPublica.

I have been working with Glenn Greenwald on the Snowden documents, and I have seen a lot of them. These are my two essays on today’s revelations.

Remember this: The math is good, but math has no agency. Code has agency, and the code has been subverted.

EDITED TO ADD (9/6): Someone somewhere commented that the NSA’s “groundbreaking cryptanalytic capabilities” could include a practical attack on RC4. I don’t know one way or the other, but that’s a good speculation.

EDITED TO ADD (9/6): Relevant Slashdot and Reddit threads.

EDITED TO ADD (9/13): An opposing view to my call to action.

Posted on September 5, 2013 at 2:46 PM393 Comments

The Effect of Money on Trust

Money reduces trust in small groups, but increases it in larger groups. Basically, the introduction of money allows society to scale.

The team devised an experiment where subjects in small and large groups had the option to give gifts in exchange for tokens.

They found that there was a social cost to introducing this incentive. When all tokens were “spent”, a potential gift-giver was less likely to help than they had been in a setting where tokens had not yet been introduced.

The same effect was found in smaller groups, who were less generous when there was the option of receiving a token.

“Subjects basically latched on to monetary exchange, and stopped helping unless they received immediate compensation in a form of an intrinsically worthless object [a token].

“Using money does help large societies to achieve larger levels of co-operation than smaller societies, but it does so at a cost of displacing norm[s] of voluntary help that is the bread and butter of smaller societies, in which everyone knows each other,” said Prof Camera.

But he said that this negative result was not found in larger anonymous groups of 32; instead, co-operation increased with the use of tokens.

“This is exciting because we introduced something that adds nothing to the economy, but it helped participants converge on a behaviour that is more trustworthy.”

He added that the study reflected monetary exchange in daily life: “Global interaction expands the set of trade opportunities, but it dilutes the level of information about others’ past behaviour. In this sense, one can view tokens in our experiment as a parable for global monetary exchange.”

Posted on September 5, 2013 at 1:57 PM10 Comments

Human-Machine Trust Failures

I jacked a visitor’s badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they’re enabled when you check in at building security. You’re supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.

I kept the badge. I used my body as a shield, and the chain made a satisfying noise when it hit bottom. The guard let me through the gate.

The person after me had problems, though. Some part of the system knew something was wrong, and wouldn’t let her out. Eventually, the guard had to manually override something.

My point in telling this story is not to demonstrate how I beat the EEOB’s security—I’m sure the badge was quickly deactivated and showed up in some missing-badge log next to my name—but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn’t adequately explain it to the guards. The guards knew it but didn’t know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.

In any hybrid security system, the human portion needs to trust the machine portion. To do so, both must understand the expected behavior for every state—how the system can fail and what those failures look like. The machine must be able to communicate its state and have the capacity to alert the humans when a state transition doesn’t happen as expected. Things will go wrong, either by accident or as the result of an attack, and the humans are going to need to troubleshoot the system in real time—that requires understanding on both parts. Each time things go wrong, and the machine portion doesn’t communicate well, the human portion trusts it a little less.
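One way to read that requirement in code: the machine keeps an explicit list of expected state transitions, and anything outside that list is surfaced to the humans instead of silently absorbed. A toy sketch, entirely my own illustration and not any real badge system:

    # Toy checkpoint model: expected transitions are explicit, and anything
    # unexpected produces an alert a human can understand and act on.
    EXPECTED_TRANSITIONS = {
        ("badge_issued", "badge_returned"),
        ("badge_returned", "exit_opened"),
    }

    def process_event(state, event, alert):
        if (state, event) not in EXPECTED_TRANSITIONS:
            # Don't just refuse; tell the human what looked wrong.
            alert(f"unexpected transition: {state} -> {event}")
        return event

    state = "badge_issued"
    state = process_event(state, "exit_opened", alert=print)   # badge never returned: alert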

This problem is not specific to security systems, but inducing this sort of confusion is a good way to attack systems. When the attackers understand the system—especially the machine part—better than the humans in the system do, they can create a failure to exploit. Many social engineering attacks fall into this category. Failures also happen the other way. We’ve all experienced trust without understanding, when the human part of the system defers to the machine, even though it makes no sense: “The computer is always right.”

Humans and machines have different strengths. Humans are flexible and can do creative thinking in ways that machines cannot. But they’re easily fooled. Machines are more rigid and can handle state changes and process flows much better than humans can. But they’re bad at dealing with exceptions. If humans are to serve as security sensors, they need to understand what is being sensed. (That’s why “if you see something, say something” fails so often.) If a machine automatically processes input, it needs to clearly flag anything unexpected.

The more machine security is automated, and the more the machine is expected to enforce security without human intervention, the greater the impact of a successful attack. If this sounds like an argument for interface simplicity, it is. The machine design will be necessarily more complicated: more resilience, more error handling, and more internal checking. But the human/computer communication needs to be clear and straightforward. That’s the best way to give humans the trust and understanding they need in the machine part of any security system.

This essay previously appeared in IEEE Security & Privacy.

Posted on September 5, 2013 at 8:32 AM31 Comments

Syrian Electronic Army Cyberattacks

The Syrian Electronic Army attacked again this week, compromising the websites of the New York Times, Twitter, the Huffington Post, and others.

Political hacking isn’t new. Hackers were breaking into systems for political reasons long before commerce and criminals discovered the Internet. Over the years, we’ve seen U.K. vs. Ireland, Israel vs. Arab states, Russia vs. its former Soviet republics, India vs. Pakistan, and US vs. China.

There was a big one in 2007, when the government of Estonia was attacked in cyberspace following a diplomatic incident with Russia. It was hyped as the first cyberwar, but the Kremlin denied any Russian government involvement. The only individuals positively identified were young ethnic Russians living in Estonia.

Poke at any of these international incidents, and what you find are kids playing politics. The Syrian Electronic Army doesn’t seem to be an actual army. We don’t even know if they’re Syrian. And—to be fair—I don’t know their ages. Looking at the details of their attacks, it’s pretty clear they didn’t target the New York Times and others directly. They reportedly hacked into an Australian domain name registrar called Melbourne IT, and used that access to disrupt service at a bunch of big-name sites.

We saw this same tactic last year from Anonymous: hack around at random, then retcon a political reason why the sites they successfully broke into deserved it. It makes them look a lot more skilled than they actually are.

This isn’t to say that cyberattacks by governments aren’t an issue, or that cyberwar is something to be ignored. Attacks from China reportedly are a mix of government-executed military attacks, government-sponsored independent attackers, and random hacking groups that work with tacit government approval. The US also engages in active cyberattacks around the world. Together with Israel, the US employed a sophisticated computer virus (Stuxnet) to attack Iran in 2010.

For the typical company, defending against these attacks doesn’t require anything different from what you’ve traditionally been doing to secure yourself in cyberspace. If your network is secure, you’re secure against amateur geopoliticians who just want to help their side.

This essay originally appeared on the Wall Street Journal’s website.

Posted on September 3, 2013 at 1:45 PM20 Comments

Our Newfound Fear of Risk

We’re afraid of risk. It’s a normal part of life, but we’re increasingly unwilling to accept it at any level. So we turn to technology to protect us. The problem is that technological security measures aren’t free. They cost money, of course, but they cost other things as well. They often don’t provide the security they advertise, and—paradoxically—they often increase risk somewhere else. This problem is particularly stark when the risk involves another person: crime, terrorism, and so on. While technology has made us much safer against natural risks like accidents and disease, it works less well against man-made risks.

Three examples:

  1. We have allowed the police to turn themselves into a paramilitary organization. They deploy SWAT teams multiple times a day, almost always in nondangerous situations. They tase people at the slightest provocation, often when it’s not warranted. Unprovoked shootings are on the rise. One result of these measures is that honest mistakes—a wrong address on a warrant, a misunderstanding—result in the terrorizing of innocent people, and more death in what were once nonviolent confrontations with police.
  2. We accept zero-tolerance policies in schools. This results in ridiculous situations, where young children are suspended for pointing gun-shaped fingers at other students or drawing pictures of guns with crayons, and high-school students are disciplined for giving each other over-the-counter pain relievers. The cost of these policies is enormous, both in the dollars it takes to implement them and in their long-lasting effects on students.
  3. We have spent over one trillion dollars and thousands of lives fighting terrorism in the past decade—including the wars in Iraq and Afghanistan—money that could have been better used in all sorts of ways. We now know that the NSA has turned into a massive domestic surveillance organization, and that its data is also used by other government organizations, which then lie about it. Our foreign policy has changed for the worse: we spy on everyone, we trample human rights abroad, our drones kill indiscriminately, and our diplomatic outposts have either closed down or become fortresses. In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.

There are lots more examples, but the general point is that we tend to fixate on a particular risk and then do everything we can to mitigate it, including giving up our freedoms and liberties.

There’s a subtle psychological explanation. Risk tolerance is both cultural and dependent on the environment around us. As we have advanced technologically as a society, we have reduced many of the risks that have been with us for millennia. Fatal childhood diseases are things of the past, many adult diseases are curable, accidents are rarer and more survivable, buildings collapse less often, death by violence has declined considerably, and so on. All over the world—among the wealthier of us who live in peaceful Western countries—our lives have become safer.

Our notions of risk are not absolute; they’re based more on how far they are from whatever we think of as “normal.” So as our perception of what is normal gets safer, the remaining risks stand out more. When your population is dying of the plague, protecting yourself from the occasional thief or murderer is a luxury. When everyone is healthy, it becomes a necessity.

Some of this fear results from imperfect risk perception. We’re bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are—and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.

Some of this fear stems from the fact that we put people in charge of just one aspect of the risk equation. No one wants to be the senior officer who didn’t approve the SWAT team for the one subpoena delivery that resulted in an officer being shot. No one wants to be the school principal who didn’t discipline—no matter how benign the infraction—the one student who became a shooter. No one wants to be the president who rolled back counterterrorism measures, just in time to have a plot succeed. Those in charge will be naturally risk averse, since they personally shoulder so much of the burden.

We also expect that science and technology should be able to mitigate these risks, as they mitigate so many others. There’s a fundamental problem at the intersection of these security measures with science and technology; it has to do with the types of risk they’re arrayed against. Most of the risks we face in life are against nature: disease, accident, weather, random chance. As our science has improved—medicine is the big one, but other sciences as well—we become better at mitigating and recovering from those sorts of risks.

Security measures combat a very different sort of risk: a risk stemming from another person. People are intelligent, and they can adapt to new security measures in ways nature cannot. An earthquake isn’t able to figure out how to topple structures constructed under some new and safer building code, and an automobile won’t invent a new form of accident that undermines medical advances that have made existing accidents more survivable. But a terrorist will change his tactics and targets in response to new security measures. An otherwise innocent person will change his behavior in response to a police force that compels compliance at the threat of a Taser. We will all change, living in a surveillance state.

When you implement measures to mitigate the effects of the random risks of the world, you’re safer as a result. When you implement measures to reduce the risks from your fellow human beings, the human beings adapt and you get less risk reduction than you’d expect—and you also get more side effects, because we all adapt.

We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society. The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.

This essay previously appeared on Forbes.com.

EDITED TO ADD (8/5): Slashdot thread.

Posted on September 3, 2013 at 6:41 AM86 Comments
