Entries Tagged "whistleblowers"


Why the Government Should Help Leakers

In the Information Age, it’s easier than ever to steal and publish data. Corporations and governments have to adjust to their secrets being exposed, regularly.

When massive amounts of government documents are leaked, journalists sift through them to determine which pieces of information are newsworthy, and confer with government agencies over what needs to be redacted.

Managing this reality is going to require that governments actively engage with members of the press who receive leaked secrets, helping them secure those secrets—even while being unable to prevent them from publishing. It might seem abhorrent to help those who are seeking to bring your secrets to light, but it’s the best way to ensure that the things that truly need to be secret remain secret, even as everything else becomes public.

The WikiLeaks cables serve as an excellent example of how a government should not deal with massive leaks of classified information.

WikiLeaks has said it asked US authorities for help in determining what should be redacted before publication of documents, although some government officials have challenged that statement. WikiLeaks’ media partners did redact many documents, but eventually all 250,000 unredacted cables were released to the world as a result of a mistake.

The damage was nowhere near as serious as government officials initially claimed, but it was avoidable.

Fast-forward to today, and we have an even bigger trove of classified documents. What Edward Snowden took—“exfiltrated” is the National Security Agency term—dwarfs the State Department cables, and contains considerably more important secrets. But again, the US government is doing nothing to prevent a massive data dump.

The government engages with the press on individual stories. The Guardian, the Washington Post, and the New York Times are all redacting the original Snowden documents based on discussions with the government. This isn’t new. The US press regularly consults with the government before publishing something that might be damaging. In 2006, the New York Times consulted with both the NSA and the Bush administration before publishing Mark Klein’s whistle-blowing about the NSA’s eavesdropping on AT&T trunk circuits. In all these cases, the goal is to minimize actual harm to US security while ensuring the press can still report stories in the public interest, even if the government doesn’t want it to.

In today’s world of reduced secrecy, whistleblowing as civil disobedience, and massive document exfiltrations, negotiations over individual stories aren’t enough. The government needs to develop a protocol to actively help news organizations expose their secrets safely and responsibly.

Here’s what should have happened as soon as Snowden’s whistle-blowing became public. The government should have told the reporters and publications with the classified documents something like this: “OK, you have them. We know that we can’t undo the leak. But please let us help. Let us help you secure the documents as you write your stories, and securely dispose of the documents when you’re done.”

The people who have access to the Snowden documents say they don’t want them to be made public in their raw form or to get in the hands of rival governments. But accidents happen, and reporters are not trained in military secrecy practices.

Copies of some of the Snowden documents are being circulated to journalists and others. With each copy, each person, each day, there’s a greater chance that, once again, someone will make a mistake and some—or all—of the raw documents will appear on the Internet. A formal system of working with whistle-blowers could prevent that.

I’m sure the suggestion sounds odious to a government that is actively engaging in a war on whistle-blowers, and that views Snowden as a criminal and the reporters writing these stories as “helping the terrorists.” But it makes sense. Harvard law professor Jonathan Zittrain compares this to plea bargaining.

The police regularly negotiate lenient sentences or probation for confessed criminals in order to convict more important criminals. They make deals with all sorts of unsavory people, giving them benefits they don’t deserve, because the result is a greater good.

In the Snowden case, an agreement would safeguard the most important of the NSA’s secrets from other nations’ intelligence agencies. It would help ensure that the truly secret information not be exposed. It would protect US interests.

Why would reporters agree to this? Two reasons. One, they actually do want these documents secured while they look for stories to publish. And two, it would be a public demonstration of that desire.

Why wouldn’t the government just collect all the documents under the pretense of securing them and then delete them? For the same reason they don’t renege on plea bargains: No one would trust them next time. And, of course, because smart reporters will probably keep encrypted backups under their own control.

We’re nowhere near the point where this system could be put into practice, but it’s worth thinking about how it could work. The government would need to establish a semi-independent group, called, say, a Leak Management unit, which could act as an intermediary. Since it would be isolated from the agencies that were the source of the leak, its officials would be less vested and—this is important—less angry over the leak. Over time, it would build a reputation and develop protocols that reporters could rely on. Leaks will be more common in the future, but they’ll still be rare. Expecting each agency to develop expertise in this process is unrealistic.

If there were sufficient trust between the press and the government, this could work. And everyone would benefit.

This essay previously appeared on CNN.com.

Posted on November 8, 2013 at 6:58 AM

The Battle for Power on the Internet

We’re in the middle of an epic battle for power in cyberspace. On one side are the traditional, organized, institutional powers such as governments and large multinational corporations. On the other are the distributed and nimble: grassroots movements, dissident groups, hackers, and criminals. Initially, the Internet empowered the second side. It gave them a place to coordinate and communicate efficiently, and made them seem unbeatable. But now, the more traditional institutional powers are winning, and winning big. How these two sides fare in the long term, and the fate of the rest of us who don’t fall into either group, is an open question—and one vitally important to the future of the Internet.

In the Internet’s early days, there was a lot of talk about its “natural laws”—how it would upend traditional power blocs, empower the masses, and spread freedom throughout the world. The international nature of the Internet circumvented national laws. Anonymity was easy. Censorship was impossible. Police were clueless about cybercrime. And bigger changes seemed inevitable. Digital cash would undermine national sovereignty. Citizen journalism would topple traditional media, corporate PR, and political parties. Easy digital copying would destroy the traditional movie and music industries. Web marketing would allow even the smallest companies to compete against corporate giants. It really would be a new world order.

This was a utopian vision, but some of it did come to pass. Internet marketing has transformed commerce. The entertainment industries have been transformed by things like MySpace and YouTube, and are now more open to outsiders. Mass media has changed dramatically, and some of the most influential people in the media have come from the blogging world. There are new ways to organize politically and run elections. Crowdfunding has made it possible to finance tens of thousands of projects, and crowdsourcing has made more types of projects possible. Facebook and Twitter really did help topple governments.

But that is just one side of the Internet’s disruptive character. The Internet has emboldened traditional power as well.

On the corporate side, power is consolidating, a result of two current trends in computing. First, the rise of cloud computing means that we no longer have control of our data. Our e-mail, photos, calendars, address books, messages, and documents are on servers belonging to Google, Apple, Microsoft, Facebook, and so on. And second, we are increasingly accessing our data using devices that we have much less control over: iPhones, iPads, Android phones, Kindles, ChromeBooks, and so on. Unlike traditional operating systems, those devices are controlled much more tightly by the vendors, who limit what software can run, what they can do, how they’re updated, and so on. Even Windows 8 and Apple’s Mountain Lion operating system are heading in the direction of more vendor control.

I have previously characterized this model of computing as “feudal.” Users pledge their allegiance to more powerful companies who, in turn, promise to protect them from both sysadmin duties and security threats. It’s a metaphor that’s rich in history and in fiction, and a model that’s increasingly permeating computing today.

Medieval feudalism was a hierarchical political system, with obligations in both directions. Lords offered protection, and vassals offered service. The lord-peasant relationship was similar, with a much greater power differential. It was a response to a dangerous world.

Feudal security consolidates power in the hands of the few. Internet companies, like lords before them, act in their own self-interest. They use their relationship with us to increase their profits, sometimes at our expense. They act arbitrarily. They make mistakes. They’re deliberately—and incidentally—changing social norms. Medieval feudalism gave the lords vast powers over the landless peasants; we’re seeing the same thing on the Internet.

It’s not all bad, of course. We, especially those of us who are not technical, like the convenience, redundancy, portability, automation, and shareability of vendor-managed devices. We like cloud backup. We like automatic updates. We like not having to deal with security ourselves. We like that Facebook just works—from any device, anywhere.

Government power is also increasing on the Internet. There is more government surveillance than ever before. There is more government censorship than ever before. There is more government propaganda, and an increasing number of governments are controlling what their users can and cannot do on the Internet. Totalitarian governments are embracing a growing “cyber sovereignty” movement to further consolidate their power. And the cyberwar arms race is on, pumping an enormous amount of money into cyber-weapons and consolidated cyber-defenses, further increasing government power.

In many cases, the interests of corporate and government powers are aligning. Both corporations and governments benefit from ubiquitous surveillance, and the NSA is using Google, Facebook, Verizon, and others to get access to data it couldn’t otherwise. The entertainment industry is looking to governments to enforce its antiquated business models. Commercial security equipment from companies like BlueCoat and Sophos is being used by oppressive governments to surveil and censor their citizens. The same facial recognition technology that Disney uses in its theme parks can also identify protesters in China and Occupy Wall Street activists in New York. Think of it as a public/private surveillance partnership.

What happened? How, in those early Internet years, did we get the future so wrong?

The truth is that technology magnifies power in general, but rates of adoption are different. The unorganized, the distributed, the marginal, the dissidents, the powerless, the criminal: they can make use of new technologies very quickly. And when those groups discovered the Internet, suddenly they had power. But later, when the already-powerful big institutions finally figured out how to harness the Internet, they had more power to magnify. That’s the difference: the distributed were more nimble and were faster to make use of their new power, while the institutional were slower but were able to use their power more effectively.

So while the Syrian dissidents used Facebook to organize, the Syrian government used Facebook to identify dissidents to arrest.

All isn’t lost for distributed power, though. For institutional power, the Internet is a change in degree, but for distributed power, it’s a qualitative one. The Internet gives decentralized groups—for the first time—the ability to coordinate. This can have incredible ramifications, as we saw in the SOPA/PIPA debate, Gezi, Brazil, and the rising use of crowdfunding. It can invert power dynamics, even in the presence of surveillance, censorship, and use control. But aside from political coordination, the Internet allows for social coordination as well—to unite, for example, ethnic diasporas, gender minorities, sufferers of rare diseases, and people with obscure interests.

This isn’t static: Technological advances continue to provide advantage to the nimble. I discussed this trend in my book Liars and Outliers. If you think of security as an arms race between attackers and defenders, any technological advance gives one side or the other a temporary advantage. But most of the time, a new technology benefits the nimble first. They are not hindered by bureaucracy—and sometimes not by laws or ethics, either. They can evolve faster.

We saw it with the Internet. As soon as the Internet started being used for commerce, a new breed of cybercriminal emerged, immediately able to take advantage of the new technology. It took police a decade to catch up. And we saw it on social media, as political dissidents made use of its organizational powers before totalitarian regimes did.

This delay is what I call a “security gap.” It’s greater when there’s more technology, and in times of rapid technological change. Basically, if there are more innovations to exploit, there will be more damage resulting from society’s inability to keep up with exploiters of all of them. And since our world is one in which there’s more technology than ever before, and a faster rate of technological change than ever before, we should expect to see a greater security gap than ever before. In other words, there will be an increasing time period during which nimble distributed powers can make use of new technologies before slow institutional powers can make better use of those technologies.

This is the battle: quick vs. strong. To return to medieval metaphors, you can think of a nimble distributed power—whether marginal, dissident, or criminal—as Robin Hood; and ponderous institutional powers—both government and corporate—as the feudal lords.

So who wins? Which type of power dominates in the coming decades?

Right now, it looks like traditional power. Ubiquitous surveillance means that it’s easier for the government to identify dissidents than it is for the dissidents to remain anonymous. Data monitoring means it’s easier for the Great Firewall of China to block data than it is for people to circumvent it. The way we all use the Internet makes it much easier for the NSA to spy on everyone than it is for anyone to maintain privacy. And even though it is easy to circumvent digital copy protection, most users still can’t do it.

The problem is that leveraging Internet power requires technical expertise. Those with sufficient ability will be able to stay ahead of institutional powers. Whether it’s setting up your own e-mail server, effectively using encryption and anonymity tools, or breaking copy protection, there will always be technologies that can evade institutional powers. This is why cybercrime is still pervasive, even as police savvy increases; why technically capable whistleblowers can do so much damage; and why organizations like Anonymous are still a viable social and political force. Assuming technology continues to advance—and there’s no reason to believe it won’t—there will always be a security gap in which technically advanced Robin Hoods can operate.

Most people, though, are stuck in the middle. These are people who don’t have the technical ability to evade large governments and corporations, avoid the criminal and hacker groups who prey on them, or join any resistance or dissident movements. These are the people who accept default configuration options, arbitrary terms of service, NSA-installed back doors, and the occasional complete loss of their data. These are the people who get increasingly isolated as government and corporate power align. In the feudal world, these are the hapless peasants. And it’s even worse when the feudal lords—or any powers—fight each other. As anyone watching Game of Thrones knows, peasants get trampled when powers fight: when Facebook, Google, Apple, and Amazon fight it out in the market; when the US, EU, China, and Russia fight it out in geopolitics; or when it’s the US vs. “the terrorists” or China vs. its dissidents.

The abuse will only get worse as technology continues to advance. In the battle between institutional power and distributed power, more technology means more damage. We’ve already seen this: Cybercriminals can rob more people more quickly than criminals who have to physically visit everyone they rob. Digital pirates can make more copies of more things much more quickly than their analog forebears. And we’ll see it in the future: 3D printers mean that the computer restriction debate will soon involve guns, not movies. Big data will mean that more companies will be able to identify and track you more easily. It’s the same problem as the “weapons of mass destruction” fear: terrorists with nuclear or biological weapons can do a lot more damage than terrorists with conventional explosives. And by the same token, terrorists with large-scale cyberweapons can potentially do more damage than terrorists with those same bombs.

It’s a numbers game. Very broadly, because of the way humans behave as a species and as a society, every society is going to have a certain amount of crime. And there’s a particular crime rate society is willing to tolerate. With historically inefficient criminals, we were willing to live with some percentage of criminals in our society. As technology makes each individual criminal more powerful, the percentage we can tolerate decreases. Again, remember the “weapons of mass destruction” debate: As the amount of damage each individual terrorist can do increases, we need to do increasingly more to prevent even a single terrorist from succeeding.

The more destabilizing the technologies, the greater the rhetoric of fear, and the stronger institutional powers will get. This means increasingly repressive security measures, even if the security gap means that such measures become increasingly ineffective. And it will squeeze the peasants in the middle even more.

Without the protection of his own feudal lord, the peasant was subject to abuse both by criminals and by other feudal lords. Today, both corporations and the government—and often the two in cahoots—are using their power to their own advantage, trampling on our rights in the process. And without the technical savvy to become Robin Hoods ourselves, we have no recourse but to submit to whatever the ruling institutional power wants.

So what happens as technology increases? Is a police state the only effective way to control distributed power and keep our society safe? Or do the fringe elements inevitably destroy society as technology increases their power? Probably neither doomsday scenario will come to pass, but figuring out a stable middle ground is hard. These questions are complicated, and dependent on future technological advances that we cannot predict. But they are primarily political questions, and any solutions will be political.

In the short term, we need more transparency and oversight. The more we know of what institutional powers are doing, the more we can trust that they are not abusing their authority. We have long known this to be true in government, but we have increasingly ignored it in our fear of terrorism and other modern threats. This is also true for corporate power. Unfortunately, market dynamics will not necessarily force corporations to be transparent; we need laws to do that. The same is true for decentralized power; transparency is how we’ll differentiate political dissidents from criminal organizations.

Oversight is also critically important, and is another long-understood mechanism for checking power. This can be a combination of things: courts that act as third-party advocates for the rule of law rather than rubber-stamp organizations, legislatures that understand the technologies and how they affect power balances, and vibrant public-sector press and watchdog groups that analyze and debate the actions of those wielding power.

Transparency and oversight give us the confidence to trust institutional powers to fight the bad side of distributed power, while still allowing the good side to flourish. For if we’re going to entrust our security to institutional powers, we need to know they will act in our interests and not abuse that power. Otherwise, democracy fails.

In the longer term, we need to work to reduce power differences. The key to all of this is access to data. On the Internet, data is power. To the extent the powerless have access to it, they gain in power. To the extent that the already powerful have access to it, they further consolidate their power. As we look to reducing power imbalances, we have to look at data: data privacy for individuals, mandatory disclosure laws for corporations, and open government laws.

Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad-hoc and one-sided. Those in power have a lot of rights, but increasingly few responsibilities or limits. We need to rebalance this relationship. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people. In addition to reining in government power once again, we need similar restrictions on corporate power: a new Magna Carta focused on the institutions that abuse power in the 21st century.

Today’s Internet is a fortuitous accident: a combination of an initial lack of commercial interests, government benign neglect, military requirements for survivability and resilience, and computer engineers building open systems that worked simply and easily.

We’re at the beginning of some critical debates about the future of the Internet: the proper role of law enforcement, the character of ubiquitous surveillance, the collection and retention of our entire life’s history, how automatic algorithms should judge us, government control over the Internet, cyberwar rules of engagement, national sovereignty on the Internet, limitations on the power of corporations over our data, the ramifications of information consumerism, and so on.

Data is the pollution problem of the information age. All computer processes produce it. It stays around. How we deal with it—how we reuse and recycle it, who has access to it, how we dispose of it, and what laws regulate it—is central to how the information age functions. And I believe that just as we look back at the early decades of the industrial age and wonder how society could ignore pollution in their rush to build an industrial world, our grandchildren will look back at us during these early decades of the information age and judge us on how we dealt with the rebalancing of power resulting from all this new data.

This won’t be an easy period for us as we try to work these issues out. Historically, no shift in power has ever been easy. Corporations have turned our personal data into an enormous revenue generator, and they’re not going to back down. Neither will governments, who have harnessed that same data for their own purposes. But we have a duty to tackle this problem.

I can’t tell you what the result will be. These are all complicated issues, and require meaningful debate, international cooperation, and innovative solutions. We need to decide on the proper balance between institutional and decentralized power, and how to build tools that amplify what is good in each while suppressing the bad.

This essay previously appeared in the Atlantic.

EDITED TO ADD (11/5): This essay has been translated into Danish.

Posted on October 30, 2013 at 6:50 AM

Understanding the Threats in Cyberspace

The primary difficulty of cyber security isn’t technology—it’s policy. The Internet mirrors real-world society, which makes security policy online as complicated as it is in the real world. Protecting critical infrastructure against cyber-attack is just one of cyberspace’s many security challenges, so it’s important to understand them all before any one of them can be solved.

The list of bad actors in cyberspace is long, and spans a wide range of motives and capabilities. At the extreme end there’s cyberwar: destructive actions by governments during a war. When government policymakers like David Omand think of cyber-attacks, that’s what comes to mind. Cyberwar is conducted by capable and well-funded groups and involves military operations against both military and civilian targets. Along much the same lines are non-nation state actors who conduct terrorist operations. Although less capable and well-funded, they are often talked about in the same breath as true cyberwar.

Much more common are the domestic and international criminals who run the gamut from lone individuals to organized crime. They can be very capable and well-funded and will continue to inflict significant economic damage.

Threats from peacetime governments have been seen increasingly in the news. The US worries about Chinese espionage against Western targets, and we’re also seeing US surveillance of pretty much everyone in the world, including Americans inside the US. The National Security Agency (NSA) is probably the most capable and well-funded espionage organization in the world, and we’re still learning about the full extent of its sometimes illegal operations.

Hacktivists are a different threat. Their actions range from Internet-age acts of civil disobedience to the inflicting of actual damage. This is hard to generalize about because the individuals and groups in this category vary so much in skill, funding and motivation. Hackers falling under the “Anonymous” aegis—it really isn’t correct to call them a group—come under this category, as does WikiLeaks. Most of these attackers are outside the organization, although whistleblowing—the civil disobedience of the information age—generally involves insiders like Edward Snowden.

This list of potential network attackers isn’t exhaustive. Depending on who you are and what your organization does, you might be also concerned with espionage cyber-attacks by the media, rival corporations or even the corporations we entrust with our data.

The issue here, and why it affects policy, is that protecting against these various threats can lead to contradictory requirements. In the US, the NSA’s post-9/11 mission to protect the country from terrorists has transformed it into a domestic surveillance organization. The NSA’s need to protect its own information systems from outside attack opened it up to attacks from within. Do the corporate security products we buy to protect ourselves against cybercrime contain backdoors that allow for government spying? European countries may condemn the US for spying on its own citizens, but do they do the same thing?

All these questions are especially difficult because military and security organizations along with corporations tend to hype particular threats. For example, cyberwar and cyberterrorism are greatly overblown as threats—because they result in massive government programs with huge budgets and power—while cybercrime is largely downplayed.

We need greater transparency, oversight and accountability on both the government and corporate sides before we can move forward. With the secrecy that surrounds cyber-attack and cyberdefense, it’s hard to be optimistic.

This essay previously appeared in Europe’s World.

Posted on October 28, 2013 at 6:39 AM

SecureDrop

SecureDrop is an open-source whistleblower support system, originally written by Aaron Swartz and now run by the Freedom of the Press Foundation. The first instance of this system was named StrongBox and is being run by The New Yorker. To further add to the naming confusion, Aaron Swartz called the system DeadDrop when he wrote the code.

I participated in a detailed security audit of the StrongBox implementation, along with some great researchers from the University of Washington and Jake Applebaum. The problems we found were largely procedural, and things that the Freedom of the Press Foundation is working to fix.

The Freedom of the Press Foundation is not running any instances of SecureDrop itself. It has about a half dozen major news organizations lined up, and will be helping them install their own starting the first week of November. So hopefully any would-be whistleblowers will soon have their choice of news organizations to securely communicate with.

Strong technical whistleblower protection is essential, especially given President Obama’s war on whistleblowers. I hope this system is broadly implemented and extensively used.

Posted on October 17, 2013 at 7:15 AM

The NSA's New Risk Analysis

As I recently reported in the Guardian, the NSA has secret servers on the Internet that hack into other computers, codenamed FOXACID. These servers provide an excellent demonstration of how the NSA approaches risk management, and expose flaws in how the agency thinks about the secrecy of its own programs.

Here are the FOXACID basics: By the time the NSA tricks a target into visiting one of those servers, it already knows exactly who that target is, who wants him eavesdropped on, and the expected value of the data it hopes to receive. Based on that information, the server can automatically decide what exploit to serve the target, taking into account the risks associated with attacking the target, as well as the benefits of a successful attack. According to a top-secret operational procedures manual provided by Edward Snowden, an exploit named Validator might be the default, but the NSA has a variety of options. The documentation mentions United Rake, Peddle Cheap, Packet Wrench, and Beach Head—all delivered from a FOXACID subsystem called Ferret Cannon. Oh how I love some of these code names. (On the other hand, EGOTISTICALGIRAFFE has to be the dumbest code name ever.)

Snowden explained this to Guardian reporter Glenn Greenwald in Hong Kong. If the target is a high-value one, FOXACID might run a rare zero-day exploit that it developed or purchased. If the target is technically sophisticated, FOXACID might decide that there’s too much chance for discovery, and keeping the zero-day exploit a secret is more important. If the target is a low-value one, FOXACID might run an exploit that’s less valuable. If the target is low-value and technically sophisticated, FOXACID might even run an already-known vulnerability.

We know that the NSA receives advance warning from Microsoft of vulnerabilities that will soon be patched; there’s not much of a loss if an exploit based on that vulnerability is discovered. FOXACID has tiers of exploits it can run, and uses a complicated trade-off system to determine which one to run against any particular target.
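The tiered trade-off described above can be sketched as a simple decision table. This is purely illustrative: the tier labels, the two-valued inputs, and the function itself are my own invention for clarity, not anything taken from the leaked documents.

```python
# Illustrative sketch of the exploit-selection trade-off described above.
# All names, categories, and the mapping are hypothetical.

def choose_exploit_tier(target_value, target_sophistication):
    """Pick an exploit tier, balancing payoff against the risk of
    burning a valuable tool if the target detects the attack.

    target_value: 'high' or 'low' -- expected value of the intelligence.
    target_sophistication: 'high' or 'low' -- likelihood the target
    detects and analyzes the exploit.
    """
    if target_value == 'high' and target_sophistication == 'low':
        # Big payoff, low risk of discovery: spend a rare zero-day.
        return 'rare zero-day exploit'
    if target_value == 'high' and target_sophistication == 'high':
        # Keeping the zero-day secret may be worth more than this
        # one target, so use something less precious.
        return 'mid-tier exploit'
    if target_value == 'low' and target_sophistication == 'low':
        return 'less valuable exploit'
    # Low value and a sophisticated target: risk nothing secret at all.
    return 'already-known vulnerability'
```

The point of the sketch is only that the choice is mechanical: once the value and risk estimates exist, selecting an exploit is a lookup, not a judgment call by the operator.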

This cost-benefit analysis doesn’t end at successful exploitation. According to Snowden, the TAO—that’s Tailored Access Operations—operators running the FOXACID system have a detailed flowchart, with tons of rules about when to stop. If something doesn’t work, stop. If they detect a PSP, a personal security product, stop. If anything goes weird, stop. This is how the NSA avoids detection, and also how it takes mid-level computer operators and turns them into what they call “cyberwarriors.” It’s not that they’re skilled hackers, it’s that the procedures do the work for them.
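The operator flowchart described above amounts to a series of abort checks, any one of which ends the operation. A minimal sketch, with the caveat that the condition names are invented and the real flowchart is far more detailed:

```python
# Hypothetical sketch of the "stop rules" flowchart described above.
# Each check maps to one of the abort conditions in the reporting;
# the dict keys and structure are invented for illustration.

def should_continue(status: dict) -> bool:
    """Return False the moment any abort condition is met."""
    abort_conditions = [
        status.get("exploit_failed", False),   # if something doesn't work, stop
        status.get("psp_detected", False),     # personal security product found, stop
        status.get("anything_weird", False),   # unexplained behavior on target, stop
    ]
    return not any(abort_conditions)
```

This is the sense in which the procedures, not the operators, do the work: the caution is encoded in the checklist rather than left to individual judgment.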

And they’re super cautious about what they do.

While the NSA excels at performing this cost-benefit analysis at the tactical level, it’s far less competent at doing the same thing at the policy level. The organization seems to be good enough at assessing the risk of discovery—for example, if the target of an intelligence-gathering effort discovers that effort—but seems to have completely ignored the risk of those efforts becoming front-page news.

It’s not just in the U.S., where newspapers are heavy with reports of the NSA spying on every Verizon customer, spying on domestic e-mail users, and secretly working to cripple commercial cryptography systems, but also around the world, most notably in Brazil, Belgium, and the European Union. All of these operations have caused significant blowback—for the NSA, for the U.S., and for the Internet as a whole.

The NSA spent decades operating in almost complete secrecy, but those days are over. As the corporate world learned years ago, secrets are hard to keep in the information age, and openness is a safer strategy. The tendency to classify everything means that the NSA won’t be able to sort what really needs to remain secret from everything else. The younger generation is more used to radical transparency than secrecy, and is less invested in the national security state. And whistleblowing is the civil disobedience of our time.

At this point, the NSA has to assume that all of its operations will become public, probably sooner than it would like. It has to start taking that into account when weighing the costs and benefits of those operations. And it now has to be just as cautious about new eavesdropping operations as it is about using FOXACID exploits against users.

This essay previously appeared in the Atlantic.

Posted on October 9, 2013 at 6:28 AM

Reforming the NSA

Leaks from the whistleblower Edward Snowden have catapulted the NSA into newspaper headlines and demonstrated that it has become one of the most powerful government agencies in the country. From the secret court rulings that allow it to collect data on all Americans to its systematic subversion of the entire Internet as a surveillance platform, the NSA has amassed an enormous amount of power.

There are two basic schools of thought about how this came to pass. The first focuses on the agency’s power. Like J. Edgar Hoover, NSA Director Keith Alexander has become so powerful as to be above the law. He is able to get away with what he does because neither political party—and nowhere near enough individual lawmakers—dare cross him. Longtime NSA watcher James Bamford recently quoted a CIA official: “We jokingly referred to him as Emperor Alexander—with good cause, because whatever Keith wants, Keith gets.”

Possibly the best evidence for this position is how well Alexander has weathered the Snowden leaks. The NSA’s most intimate secrets are front-page headlines, week after week. Morale at the agency is in shambles. Revelation after revelation has demonstrated that Alexander has exceeded his authority, deceived Congress, and possibly broken the law. Tens of thousands of additional top-secret documents are still waiting to be published. Alexander has admitted that he still doesn’t know what Snowden took with him and wouldn’t have known about the leak at all had Snowden not gone public. He has no idea who else might have stolen secrets before Snowden, or who such insiders might have provided them to. Alexander had no contingency plans in place to deal with this sort of security breach, and even now—four months after Snowden fled the country—still has no coherent response to all this.

For an organization that prides itself on secrecy and security, this is what failure looks like. It is a testament to Alexander’s power that he still has a job.

The second school of thought is that it’s the administration’s fault—not just the present one, but the most recent several. According to this theory, the NSA is simply doing its job. If there’s a problem with the NSA’s actions, it’s because the rules it’s operating under are bad. Like the military, the NSA is merely an instrument of national policy. Blaming the NSA for creating a surveillance state is comparable to blaming the US military for the conduct of the Iraq war. Alexander is performing the mission given to him as best he can, under the rules he has been given, with the sort of zeal you’d expect from someone promoted into that position. And the NSA’s power predated his directorship.

Former NSA Director Michael Hayden exemplifies this in a quote from late July: “Give me the box you will allow me to operate in. I’m going to play to the very edges of that box.”

This doesn’t necessarily mean the administration is deliberately giving the NSA too big a box. More likely, it’s simply that the laws aren’t keeping pace with technology. Every year, technology gives us possibilities that our laws simply don’t cover clearly. And whenever there’s a gray area, the NSA interprets whatever law there is to give them the most expansive authority. They simply run rings around the secret court that rules on these things. My guess is that while they have clearly broken the spirit of the law, it’ll be harder to demonstrate that they broke the letter of the law.

In football terms, the first school of thought says the NSA is out of bounds. The second says the field is too big. I believe that both perspectives have some truth to them, and that the real problem comes from their combination.

Regardless of how we got here, the NSA can’t reform itself. Change cannot come from within; it has to come from above. It’s the job of government: of Congress, of the courts, and of the president. These are the people who have the ability to investigate how things became so bad, rein in the rogue agency, and establish new systems of transparency, oversight, and accountability.

Any solution we devise will make the NSA less efficient at its eavesdropping job. That’s a trade-off we should be willing to make, just as we accept reduced police efficiency caused by requiring warrants for searches and warning suspects that they have the right to an attorney before answering police questions. We do this because we realize that a too-powerful police force is itself a danger, and we need to balance our need for public safety with our aversion to a police state.

The same reasoning needs to apply to the NSA. We want it to eavesdrop on our enemies, but it needs to do so in a way that doesn’t trample on the constitutional rights of Americans, or fundamentally jeopardize their privacy or security. This means that sometimes the NSA won’t get to eavesdrop, just as the protections we put in place to restrain police sometimes result in a criminal getting away. This is a trade-off we need to make willingly and openly, because overall we are safer that way.

Once we do this, there needs to be a cultural change within the NSA. Like at the FBI and CIA after past abuses, the NSA needs new leadership committed to changing its culture. And giving up power.

Our society can handle the occasional terrorist act; we’re resilient, and—if we decided to act that way—indomitable. But a government agency that is above the law… it’s hard to see how America and its freedoms can survive that.

This essay previously appeared on TheAtlantic.com, with the unfortunate title of “Zero Sum: Americans Must Sacrifice Some Security to Reform the NSA.” After I complained, they changed the title to “The NSA-Reform Paradox: Stop Domestic Spying, Get More Security.”

Posted on September 16, 2013 at 6:55 AM

Take Back the Internet

Government and industry have betrayed the Internet, and us.

By subverting the Internet at every level to make it a vast, multi-layered and robust surveillance platform, the NSA has undermined a fundamental social contract. The companies that build and manage our Internet infrastructure, the companies that create and sell us our hardware and software, or the companies that host our data: we can no longer trust them to be ethical Internet stewards.

This is not the Internet the world needs, or the Internet its creators envisioned. We need to take it back.

And by we, I mean the engineering community.

Yes, this is primarily a political problem, a policy matter that requires political intervention.

But this is also an engineering problem, and there are several things engineers can—and should—do.

One, we should expose. If you do not have a security clearance, and if you have not received a National Security Letter, you are not bound by a federal confidentiality requirement or a gag order. If you have been contacted by the NSA to subvert a product or protocol, you need to come forward with your story. Your obligations to your employer don’t cover illegal or unethical activity. If you work with classified data and are truly brave, expose what you know. We need whistleblowers.

We need to know exactly how the NSA and other agencies are subverting routers, switches, the Internet backbone, encryption technologies and cloud systems. I already have five stories from people like you, and I’ve just started collecting. I want 50. There’s safety in numbers, and this form of civil disobedience is the moral thing to do.

Two, we can design. We need to figure out how to re-engineer the Internet to prevent this kind of wholesale spying. We need new techniques to prevent communications intermediaries from leaking private information.

We can make surveillance expensive again. In particular, we need open protocols, open implementations, open systems—these will be harder for the NSA to subvert.

The Internet Engineering Task Force, the group that defines the standards that make the internet run, has a meeting planned for early November in Vancouver. This group needs to dedicate its next meeting to this task. This is an emergency, and demands an emergency response.

Three, we can influence governance. I have resisted saying this up to now, and I am saddened to say it, but the US has proved to be an unethical steward of the Internet. The UK is no better. The NSA’s actions are legitimizing the internet abuses by China, Russia, Iran and others. We need to figure out new means of internet governance, ones that makes it harder for powerful tech countries to monitor everything. For example, we need to demand transparency, oversight, and accountability from our governments and corporations.

Unfortunately, this is going to play directly into the hands of totalitarian governments that want to control their country’s Internet for even more extreme forms of surveillance. We need to figure out how to prevent that, too. We need to avoid the mistakes of the International Telecommunications Union, which has become a forum to legitimize bad government behavior, and create truly international governance that can’t be dominated or abused by any one country.

Generations from now, when people look back on these early decades of the Internet, I hope they will not be disappointed in us. We can ensure that they don’t only if each of us makes this a priority, and engages in the debate. We have a moral duty to do this, and we have no time to lose.

Dismantling the surveillance state won’t be easy. Has any country that engaged in mass surveillance of its own citizens voluntarily given up that capability? Has any mass surveillance country avoided becoming totalitarian? Whatever happens, we’re going to be breaking new ground.

Again, the politics of this is a bigger task than the engineering, but the engineering is critical. We need to demand that real technologists be involved in any key government decision making on these issues. We’ve had enough of lawyers and politicians not fully understanding technology; we need technologists at the table when we build tech policy.

To the engineers, I say this: we built the Internet, and some of us have helped to subvert it. Now, those of us who love liberty have to fix it.

This essay previously appeared in the Guardian.

EDITED TO ADD: Slashdot thread. An opposing view to my call to action. And I agree with this, even though the author presents this as an opposing view to mine.

EDITED TO ADD: This essay has been translated into German.

Posted on September 15, 2013 at 11:53 AM

Government Secrecy and the Generation Gap

Big-government secrets require a lot of secret-keepers. As of October 2012, almost 5m people in the US have security clearances, with 1.4m at the top-secret level or higher, according to the Office of the Director of National Intelligence.

Most of these people do not have access to as much information as Edward Snowden, the former National Security Agency contractor turned leaker, or even Chelsea Manning, the former US army soldier previously known as Bradley who was convicted for giving material to WikiLeaks. But a lot of them do—and that may prove the Achilles heel of government. Keeping secrets is an act of loyalty as much as anything else, and that sort of loyalty is becoming harder to find in the younger generations. If the NSA and other intelligence bodies are going to survive in their present form, they are going to have to figure out how to reduce the number of secrets.

As the writer Charles Stross has explained, the old way of keeping intelligence secrets was to make it part of a life-long culture. The intelligence world would recruit people early in their careers and give them jobs for life. It was a private club, one filled with code words and secret knowledge.

You can see part of this in Mr Snowden’s leaked documents. The NSA has its own lingo—the documents are riddled with codenames—its own conferences, its own awards and recognitions. An intelligence career meant that you had access to a new world, one to which “normal” people on the outside were completely oblivious. Membership of the private club meant people were loyal to their organisations, which were in turn loyal back to them.

Those days are gone. Yes, there are still the codenames and the secret knowledge, but a lot of the loyalty is gone. Many jobs in intelligence are now outsourced, and there is no job-for-life culture in the corporate world any more. Workforces are flexible, jobs are interchangeable and people are expendable.

Sure, it is possible to build a career in the classified world of government contracting, but there are no guarantees. Younger people grew up knowing this: there are no employment guarantees anywhere. They see it in their friends. They see it all around them.

Many will also believe in openness, especially the hacker types the NSA needs to recruit. They believe that information wants to be free, and that security comes from public knowledge and debate. Yes, there are important reasons why some intelligence secrets need to be secret, and the NSA culture reinforces secrecy daily. But this is a crowd that is used to radical openness. They have been writing about themselves on the internet for years. They have said very personal things on Twitter; they have had embarrassing photographs of themselves posted on Facebook. They have been dumped by a lover in public. They have overshared in the most compromising ways—and they have got through it. It is a tougher sell convincing this crowd that government secrecy trumps the public’s right to know.

Psychologically, it is hard to be a whistleblower. There is an enormous amount of pressure to be loyal to our peer group: to conform to their beliefs, and not to let them down. Loyalty is a natural human trait; it is one of the social mechanisms we use to thrive in our complex social world. This is why good people sometimes do bad things at work.

When someone becomes a whistleblower, he or she is deliberately eschewing that loyalty. In essence, they are deciding that allegiance to society at large trumps that to peers at work. That is the difficult part. They know their work buddies by name, but “society at large” is amorphous and anonymous. Believing that your bosses ultimately do not care about you makes that switch easier.

Whistleblowing is the civil disobedience of the information age. It is a way that someone without power can make a difference. And in the information age—when everything is stored on computers and potentially accessible with a few keystrokes and mouse clicks—whistleblowing is easier than ever.

Mr Snowden is 30 years old; Manning 25. They are members of the generation we taught not to expect anything long-term from their employers. As such, employers should not expect anything long-term from them. It is still hard to be a whistleblower, but for this generation it is a whole lot easier.

A lot has been written about the problem of over-classification in US government. It has long been thought of as anti-democratic and a barrier to government oversight. Now we know that it is also a security risk. Organizations such as the NSA need to change their culture of secrecy, and concentrate their security efforts on what truly needs to remain secret. Their default practice of classifying everything is not going to work any more.

Hey, NSA, you’ve got a problem.

This essay previously appeared in the Financial Times.

EDITED TO ADD (9/14): Blog comments on this essay are particularly interesting.

Posted on September 9, 2013 at 1:30 PM

More on the NSA Commandeering the Internet

If there’s any confirmation that the U.S. government has commandeered the Internet for worldwide surveillance, it is what happened with Lavabit earlier this month.

Lavabit is—well, was—an e-mail service that offered more privacy than the typical large-Internet-corporation services that most of us use. It was a small company, owned and operated by Ladar Levison, and it was popular among the tech-savvy. NSA whistleblower Edward Snowden was among its half-million users.

Last month, Levison reportedly received an order—probably a National Security Letter—to allow the NSA to eavesdrop on everyone’s e-mail accounts on Lavabit. Rather than “become complicit in crimes against the American people,” he turned the service off. Note that we don’t know for sure that he received an NSL—that’s the order authorized by the Patriot Act that doesn’t require a judge’s signature and prohibits the recipient from talking about it—or what it covered, but Levison has said that he had complied with requests for individual e-mail access in the past, but this was very different.

So far, we just have an extreme moral act in the face of government pressure. It’s what happened next that is the most chilling. The government threatened him with arrest, arguing that shutting down this e-mail service was a violation of the order.

There it is. If you run a business, and the FBI or NSA want to turn it into a mass surveillance tool, they believe they can do so, solely on their own initiative. They can force you to modify your system. They can do it all in secret and then force your business to keep that secret. Once they do that, you no longer control that part of your business. You can’t shut it down. You can’t terminate part of your service. In a very real sense, it is not your business anymore. It is an arm of the vast U.S. surveillance apparatus, and if your interest conflicts with theirs then they win. Your business has been commandeered.

For most Internet companies, this isn’t a problem. They are already engaging in massive surveillance of their customers and users—collecting and using this data is the primary business model of the Internet—so it’s easy to comply with government demands and give the NSA complete access to everything. This is what we learned from Edward Snowden. Through programs like PRISM, BLARNEY and OAKSTAR, the NSA obtained bulk access to services like Gmail and Facebook, and to Internet backbone connections throughout the US and the rest of the world. But if it were a problem for those companies, presumably the government would not allow them to shut down.

To be fair, we don’t know if the government can actually convict someone of closing a business. It might just be part of their coercion tactics. Intimidation and retaliation are part of how the NSA does business.

Former Qwest CEO Joseph Nacchio has a story of what happens to a large company that refuses to cooperate. In February 2001—before the 9/11 terrorist attacks—the NSA approached the four major US telecoms and asked for their cooperation in a secret data collection program, the one we now know to be the bulk metadata collection program exposed by Edward Snowden. Qwest was the only telecom to refuse, leaving the NSA with a hole in its spying efforts. The NSA retaliated by canceling a series of big government contracts with Qwest. The company has since been purchased by CenturyLink, which we presume is more cooperative with NSA demands.

That was before the Patriot Act and National Security Letters. Now, presumably, Nacchio would just comply. Protection rackets are easier when you have the law backing you up.

As the Snowden whistleblowing documents continue to be made public, we’re getting further glimpses into the surveillance state that has been secretly growing around us. The collusion of corporate and government surveillance interests is a big part of this, but so is the government’s resorting to intimidation. Every Lavabit-like service that shuts down—and there have been several—gives us consumers less choice, and pushes us into the large services that cooperate with the NSA. It’s past time we demanded that Congress repeal National Security Letters, give us privacy rights in this new information age, and force meaningful oversight on this rogue agency.

This essay previously appeared in USA Today.

EDITED TO ADD: This essay has been translated into Danish.

Posted on August 30, 2013 at 6:12 AM

Detaining David Miranda

Last Sunday, David Miranda was detained while changing planes at London Heathrow Airport by British authorities for nine hours under a controversial British law—the maximum time allowable without making an arrest. There has been much made of the fact that he’s the partner of Glenn Greenwald, the Guardian reporter whom Edward Snowden trusted with many of his NSA documents and the most prolific reporter of the surveillance abuses disclosed in those documents. There’s less discussion of what I feel was the real reason for Miranda’s detention. He was ferrying documents between Greenwald and Laura Poitras, a filmmaker and his co-reporter on Snowden and his information. These documents were on several USB memory sticks he had with him. He had already carried documents from Greenwald in Rio de Janeiro to Poitras in Berlin, and was on his way back with different documents when he was detained.

The memory sticks were encrypted, of course, and Miranda did not know the key. This didn’t stop the British authorities from repeatedly asking for the key, and from confiscating the memory sticks along with his other electronics.

The incident prompted a major outcry in the UK. The UK’s Terrorism Act has always been controversial, and this clear misuse—it was intended to give authorities the right to detain and question suspected terrorists—is prompting new calls for its review. Certainly the UK police will be more reluctant to misuse the law again in this manner.

I have to admit this story has me puzzled. Why would the British do something like this? What did they hope to gain, and why did they think it worth the cost? And—of course—were the British acting on their own under the Official Secrets Act, or were they acting on behalf of the United States? (My initial assumption was that they were acting on behalf of the US, but after the bizarre story of the British GCHQ demanding the destruction of Guardian computers last month, I’m not sure anymore.)

We do know the British were waiting for Miranda. It’s reasonable to assume they knew his itinerary, and had good reason to suspect that he was ferrying documents back and forth between Greenwald and Poitras. These documents could be source documents provided by Snowden, new documents that the two were working on either separately or together, or both. That being said, it’s inconceivable that the memory sticks would contain the only copies of these documents. Poitras retained copies of everything she gave Miranda. So the British authorities couldn’t possibly destroy the documents; the best they could hope for is that they would be able to read them.

Is it truly possible that the NSA doesn’t already know what Snowden has? They claim they don’t, but after Snowden’s name became public, the NSA would have conducted the mother of all audits. It would try to figure out what computer systems Snowden had access to, and therefore what documents he could have accessed. Hopefully, the audit information would give more detail, such as which documents he downloaded. I have a hard time believing that its internal auditing systems would be so bad that it wouldn’t be able to discover this.

So if the NSA knows what Snowden has, or what he could have, then the most it could learn from the USB sticks is what Greenwald and Poitras are currently working on, or thinking about working on. But presumably the things the two of them are working on are the things they’re going to publish next. Did the intelligence agencies really do all this simply for a few weeks’ heads-up on what was coming? Given how ham-handedly the NSA has handled PR as each document was exposed, it seems implausible that it wanted advance knowledge so it could work on a response. It’s been two months since the first Snowden revelation, and it still doesn’t have a decent PR story.

Furthermore, the UK authorities must have known that the data would be encrypted. Greenwald might have been a crypto newbie at the start of the Snowden affair, but Poitras is known to be good at security. The two have been communicating securely by e-mail when they do communicate. Maybe the UK authorities thought there was a good chance that one of them would make a security mistake, or that Miranda would be carrying paper documents.

Another possibility is that this was just intimidation. If so, it’s misguided. Anyone who regularly reads Greenwald could have told them that he would not have been intimidated—and, in fact, he expressed the exact opposite sentiment—and anyone who follows Poitras knows that she is even more strident in her views. Going after the loved ones of state enemies is a typically thuggish tactic, but it’s not a very good one in this case. The Snowden documents will get released. There’s no way to put this cat back in the bag, not even by killing the principal players.

It could possibly have been intended to intimidate others who are helping Greenwald and Poitras, or the Guardian and its advertisers. This will have some effect. Lavabit, Silent Circle, and now Groklaw have all been successfully intimidated. Certainly others have as well. But public opinion is shifting against the intelligence community. I don’t think it will intimidate future whistleblowers. If the treatment of Chelsea Manning didn’t discourage them, nothing will.

This leaves one last possible explanation—those in power were angry and impulsively acted on that anger. They’re lashing out: sending a message and demonstrating that they’re not to be messed with—that the normal rules of polite conduct don’t apply to people who screw with them. That’s probably the scariest explanation of all. Both the US and UK intelligence apparatuses have enormous money and power, and they have already demonstrated that they are willing to ignore their own laws. Once they start wielding that power unthinkingly, it could get really bad for everyone.

And it’s not going to be good for them, either. They seem to want Snowden so badly that they’ll burn the world down to get him. But every time they act impulsively aggressive—convincing the governments of Portugal and France to block the plane carrying the Bolivian president because they thought Snowden was on it is another example—they lose a small amount of moral authority around the world, and some ability to act in the same way again. The more pressure Snowden feels, the more likely he is to give up on releasing the documents slowly and responsibly, and publish all of them at once—the same way that WikiLeaks published the US State Department cables.

Just this week, the Wall Street Journal reported on some new NSA secret programs that are spying on Americans. It got the information from “interviews with current and former intelligence and government officials and people from companies that help build or operate the systems, or provide data,” not from Snowden. This is only the beginning. The media will not be intimidated. I will not be intimidated. But it scares me that the NSA is so blind that it doesn’t see it.

This essay previously appeared on TheAtlantic.com.

EDITED TO ADD: I’ve been thinking about it, and there’s a good chance that the NSA doesn’t know what Snowden has. He was a sysadmin. He had access. Most of the audits and controls protect against normal users; someone with root access is going to be able to bypass a lot of them. And he had the technical chops to cover his tracks when he couldn’t just evade the auditing systems.

The AP makes an excellent point about this:

The disclosure undermines the Obama administration’s assurances to Congress and the public that the NSA surveillance programs can’t be abused because its spying systems are so aggressively monitored and audited for oversight purposes: If Snowden could defeat the NSA’s own tripwires and internal burglar alarms, how many other employees or contractors could do the same?

And, to be clear, I didn’t mean to say that intimidation wasn’t the government’s motive. I believe it was, and that it was poorly thought out intimidation: lashing out in anger, rather than from some Machiavellian strategy. (Here’s a similar view.) If they wanted Miranda’s electronics, they could have confiscated them and sent him on his way in fifteen minutes. Holding him for nine hours—the absolute maximum they could under the current law—was intimidation.

I am reminded of the phone call the Guardian received from the British government. The exact quote reported was: “You’ve had your fun. Now we want the stuff back.” That’s something you would tell your child. And that’s the power dynamic that’s going on here.

EDITED TO ADD (8/27): Jay Rosen has an excellent essay on this.

EDITED TO ADD (9/12): Other editors react.

Posted on August 27, 2013 at 6:39 AM
