Entries Tagged "cyberwar"


Attack Attribution in Cyberspace

When you’re attacked by a missile, you can follow its trajectory back to where it was launched from. When you’re attacked in cyberspace, figuring out who did it is much harder. The reality of international aggression in cyberspace will change how we approach defense.

Many of us in the computer-security field are skeptical of the US government’s claim that it has positively identified North Korea as the perpetrator of the massive Sony hack in November 2014. The FBI’s evidence is circumstantial and not very convincing. The attackers never mentioned the movie that became the centerpiece of the hack until the press did. More likely, the culprits are random hackers who have loved to hate Sony for over a decade, or possibly a disgruntled insider.

On the other hand, most people believe that the FBI would not sound so sure unless it was convinced. And President Obama would not have imposed sanctions against North Korea if he weren’t convinced. This implies that there’s classified evidence as well. A couple of weeks ago, I wrote for the Atlantic, “The NSA has been trying to eavesdrop on North Korea’s government communications since the Korean War, and it’s reasonable to assume that its analysts are in pretty deep. The agency might have intelligence on the planning process for the hack. It might, say, have phone calls discussing the project, weekly PowerPoint status reports, or even Kim Jong Un’s sign-off on the plan. On the other hand, maybe not. I could have written the same thing about Iraq’s weapons-of-mass-destruction program in the run-up to the 2003 invasion of that country, and we all know how wrong the government was about that.”

The NSA is extremely reluctant to reveal its intelligence capabilities—or what it refers to as “sources and methods”—against North Korea simply to convince all of us of its conclusion, because by revealing them, it tips North Korea off to its insecurities. At the same time, we rightly have reason to be skeptical of the government’s unequivocal attribution of the attack without seeing the evidence. Iraq’s mythical weapons of mass destruction are only the most recent example of a major intelligence failure. American history is littered with examples of claimed secret intelligence pointing us toward aggression against other countries, only for us to learn later that the evidence was wrong.

Cyberspace exacerbates this in two ways. First, it is very difficult to attribute attacks in cyberspace. Packets don’t come with return addresses, and you can never be sure that what you think is the originating computer hasn’t itself been hacked. Even worse, it’s hard to tell the difference between attacks carried out by a couple of lone hackers and ones where a nation-state military is responsible. When we do know who did it, it’s usually because a lone hacker admitted it or because there was a months-long forensic investigation.
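The “no return addresses” point is concrete: the source field in an IP header is just bytes the sender fills in, and nothing in the protocol verifies it. A minimal sketch in Python (the addresses are hypothetical, and the checksum is left at zero for brevity):

```python
import socket
import struct

def build_ipv4_header(src: str, dst: str) -> bytes:
    """Pack a minimal IPv4 header; the source address is whatever the sender claims."""
    version_ihl = (4 << 4) | 5          # IPv4, header length = 5 x 32-bit words
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, 20, 0, 0,       # TOS, total length, ID, fragment fields
        64, socket.IPPROTO_ICMP, 0,     # TTL, protocol, checksum (zeroed here)
        socket.inet_aton(src),          # the forged "return address"
        socket.inet_aton(dst),
    )

def claimed_source(header: bytes) -> str:
    """What the victim's logs record: the self-reported source field."""
    return socket.inet_ntoa(header[12:16])

# The header asserts a source the sender simply made up.
hdr = build_ipv4_header("203.0.113.7", "198.51.100.1")
print(claimed_source(hdr))  # 203.0.113.7 -- proves nothing about the true origin
```

Even when the source address is genuine, it may only point at the last hacked machine in a chain, which is why attribution falls back on slow forensic work rather than packet headers.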

Second, in cyberspace, it is much easier to attack than to defend. The primary defense we have against military attacks in cyberspace is counterattack and the threat of counterattack that leads to deterrence.

What this all means is that it’s in the US’s best interest to claim omniscient powers of attribution. More than anything else, those in charge want to signal to other countries that they cannot get away with attacking the US: If they try something, we will know. And we will retaliate, swiftly and effectively. This is also why the US has been cagey about whether it caused North Korea’s Internet outage in late December.

It can be an effective bluff, but only if you get away with it. Otherwise, you lose credibility. The FBI is already starting to equivocate, saying others might have been involved in the attack, possibly hired by North Korea. If the real attackers surface and can demonstrate that they acted independently, it will be obvious that the FBI and NSA were overconfident in their attribution. Already, the FBI has lost significant credibility.

The only way out of this, with respect to the Sony hack and any other incident of cyber-aggression in which we’re expected to support retaliatory action, is for the government to be much more forthcoming about its evidence. The secrecy of the NSA’s sources and methods is going to have to take a backseat to the public’s right to know. And in cyberspace, we’re going to have to accept the uncomfortable fact that there’s a lot we don’t know.

This essay previously appeared in Time.

Posted on January 8, 2015 at 6:34 AM

Attributing the Sony Attack

No one has admitted taking down North Korea’s Internet. It could have been an act of retaliation by the US government, but it could just as well have been an ordinary DDoS attack. The follow-on attack against Sony PlayStation definitely seems to be the work of hackers unaffiliated with a government.

Not knowing who did what isn’t new. It’s called the “attribution problem,” and it plagues Internet security. But as governments increasingly get involved in cyberspace attacks, it has policy implications as well. Last year, I wrote:

Ordinarily, you could determine who the attacker was by the weaponry. When you saw a tank driving down your street, you knew the military was involved because only the military could afford tanks. Cyberspace is different. In cyberspace, technology is broadly spreading its capability, and everyone is using the same weaponry: hackers, criminals, politically motivated hacktivists, national spies, militaries, even the potential cyberterrorist. They are all exploiting the same vulnerabilities, using the same sort of hacking tools, engaging in the same attack tactics, and leaving the same traces behind. They all eavesdrop or steal data. They all engage in denial-of-service attacks. They all probe cyberdefenses and do their best to cover their tracks.

Despite this, knowing the attacker is vitally important. As members of society, we have several different types of organizations that can defend us from an attack. We can call the police or the military. We can call on our national anti-terrorist agency and our corporate lawyers. Or we can defend ourselves with a variety of commercial products and services. Depending on the situation, all of these are reasonable choices.

The legal regime in which any defense operates depends on two things: who is attacking you and why. Unfortunately, when you are being attacked in cyberspace, the two things you often do not know are who is attacking you and why. It is not that everything can be defined as cyberwar; it is that we are increasingly seeing warlike tactics used in broader cyberconflicts. This makes defense and national cyberdefense policy difficult.

In 2007, the Israeli Air Force bombed and destroyed the al-Kibar nuclear facility in Syria. The Syrian government immediately knew who did it, because airplanes are hard to disguise. In 2010, the US and Israel jointly damaged Iran’s Natanz nuclear facility. But this time they used a cyberweapon, Stuxnet, and no one knew who did it until details were leaked years later. China routinely denies its cyberespionage activities. And a 2009 cyberattack against the United States and South Korea was blamed on North Korea even though it may have originated from either London or Miami.

When it’s possible to identify the origins of cyberattacks—as forensic experts were able to do with many of the Chinese attacks against US networks—it’s as a result of months of detailed analysis and investigation. That kind of time frame doesn’t help at the moment of attack, when you have to decide within milliseconds how your network is going to react and within days how your country is going to react. This, in part, explains the relative disarray within the Obama administration over what to do about North Korea. Officials in the US government and international institutions simply don’t have the legal or even the conceptual framework to deal with these types of scenarios.

The blurring of lines between individual actors and national governments has been happening more and more in cyberspace. What has been called the first cyberwar, Russia vs. Estonia in 2007, was partly the work of a 20-year-old ethnic Russian living in Tallinn, and partly the work of a pro-Kremlin youth group associated with the Russian government. Many of the Chinese hackers targeting Western networks seem to be unaffiliated with the Chinese government. And in 2011, the hacker group Anonymous threatened NATO.

It’s a strange future we live in when we can’t tell the difference between random hackers and major governments, or when those same random hackers can credibly threaten international military organizations.

This is why people around the world should care about the Sony hack. In this future, we’re going to see an even greater blurring of traditional lines between police, military, and private actions as technology broadly distributes attack capabilities across a variety of actors. This attribution difficulty is here to stay, at least for the foreseeable future.

If North Korea is responsible for the cyberattack, how is the situation different than a North Korean agent breaking into Sony’s office, photocopying a lot of papers, and making them available to the public? Is Chinese corporate espionage a problem for governments to solve, or should we let corporations defend themselves? Should the National Security Agency defend US corporate networks, or only US military networks? How much should we allow organizations like the NSA to insist that we trust them without proof when they claim to have classified evidence that they don’t want to disclose? How should we react to one government imposing sanctions on another based on this secret evidence? More importantly, when we don’t know who is launching an attack or why, who is in charge of the response and under what legal system should those in charge operate?

We need to figure all of this out. We need national guidelines to determine when the military should get involved and when it’s a police matter, as well as what sorts of proportional responses are available in each instance. We need international agreements defining what counts as cyberwar and what does not. And, most of all right now, we need to tone down all the cyberwar rhetoric. Breaking into the offices of a company and photocopying their paperwork is not an act of war, no matter who did it. Neither is doing the same thing over the Internet. Let’s save the big words for when it matters.

This essay previously appeared on TheAtlantic.com.

Jack Goldsmith responded to this essay.

Posted on January 7, 2015 at 11:16 AM

Attributing Cyberattacks

New paper: “Attributing Cyber Attacks,” by Thomas Rid and Ben Buchanan:

Abstract: Who did it? Attribution is fundamental. Human lives and the security of the state may depend on ascribing agency to an agent. In the context of computer network intrusions, attribution is commonly seen as one of the most intractable technical problems, as either solvable or not solvable, and as dependent mainly on the available forensic evidence. But is it? Is this a productive understanding of attribution? This article argues that attribution is what states make of it. To show how, we introduce the Q Model: designed to explain, guide, and improve the making of attribution. Matching an offender to an offence is an exercise in minimising uncertainty on three levels: tactically, attribution is an art as well as a science; operationally, attribution is a nuanced process not a black-and-white problem; and strategically, attribution is a function of what is at stake politically. Successful attribution requires a range of skills on all levels, careful management, time, leadership, stress-testing, prudent communication, and recognising limitations and challenges.

Posted on January 6, 2015 at 6:50 AM

Reacting to the Sony Hack

First we thought North Korea was behind the Sony cyberattacks. Then we thought it was a couple of hacker guys with an axe to grind. Now we think North Korea is behind it again, but the connection is still tenuous. There have been accusations of cyberterrorism, and even cyberwar. I’ve heard calls for us to strike back, with actual missiles and bombs. We’re collectively pegging the hype meter, and the best thing we can do is calm down and take a deep breath.

First, this is not an act of terrorism. There has been no senseless violence. No innocents are coming home in body bags. Yes, a company is seriously embarrassed—and financially hurt—by all of its information leaking to the public. But posting unreleased movies online is not terrorism. It’s not even close.

Nor is this an act of war. Stealing and publishing a company’s proprietary information is not an act of war. We wouldn’t be talking about going to war if someone snuck in and photocopied everything, and it makes equally little sense to talk about it when someone does it over the internet. The threshold of war is much, much higher, and we’re not going to respond to this militarily. Over the years, North Korea has performed far more aggressive acts against US and South Korean soldiers. We didn’t go to war then, and we’re not going to war now.

Finally, we don’t know these attacks were sanctioned by the North Korean government. The US government has made statements linking the attacks to North Korea, but hasn’t officially blamed the government, nor have officials provided any evidence of the linkage. We knew about North Korea’s cyberattack capabilities long before this attack, but it might not be the government at all. This wouldn’t be the first time a nationalistic cyberattack was launched without government sanction. We have lots of examples of these sorts of attacks being conducted by regular hackers with nationalistic pride. Kids playing politics, I call them. This may be that, and it could also be a random hacker who just has it out for Sony.

Remember, the hackers didn’t start talking about The Interview until the press did. Maybe the NSA has some secret information pinning this attack on the North Korean government, but unless the agency comes forward with the evidence, we should remain skeptical. We don’t know who did this, and we may never find out. I personally think it is a disgruntled ex-employee, but I don’t have any more evidence than anyone else does.

What we have is a very extreme case of hacking. By “extreme” I mean the quantity of the information stolen from Sony’s networks, not the quality of the attack. The attackers seem to have been good, but no more than that. Sony made its situation worse by having substandard security.

Sony’s reaction has all the markings of a company without any sort of coherent plan. Near as I can tell, every Sony executive is in full panic mode. They’re certainly facing dozens of lawsuits: from shareholders, from companies who invested in those movies, from employees who had their medical and financial data exposed, from everyone who was affected. They’re probably facing government fines, for leaking financial and medical information, and possibly for colluding with other studios to attack Google.

If previous major hacks are any guide, there will be multiple senior executives fired over this; everyone at Sony is probably scared for their jobs. In this sort of situation, the interests of the corporation are not the same as the interests of the people running the corporation. This might go a long way to explain some of the reactions we’ve seen.

Pulling The Interview was exactly the wrong thing to do, as there was no credible threat and it just emboldens the hackers. But it’s the kind of response you get when you don’t have a plan.

Politically motivated hacking isn’t new, and the Sony hack is not unprecedented. In 2011 the hacker group Anonymous did something similar to the internet-security company HBGary Federal, exposing corporate secrets and internal emails. This sort of thing has been possible for decades, although it’s gotten increasingly damaging as more corporate information goes online. It will happen again; there’s no doubt about that.

But it hasn’t happened very often, and that’s not likely to change. Most hackers are garden-variety criminals, less interested in internal emails and corporate secrets and more interested in personal information and credit card numbers that they can monetize. Their attacks are opportunistic, and very different from the targeted attack Sony fell victim to.

When a hacker releases personal data on an individual, it’s called doxing. We don’t have a name for it when it happens to a company, but it’s what happened to Sony. Companies need to wake up to the possibility that a whistleblower, a civic-minded hacker, or just someone who is out to embarrass them will hack their networks and publish their proprietary data. They need to recognize that their chatty private emails and their internal memos might be front-page news.

In a world where everything happens online, including what we think of as ephemeral conversation, everything is potentially subject to public scrutiny. Companies need to make sure their computer and network security is up to snuff, and their incident response and crisis management plans can handle this sort of thing. But they should also remember how rare this sort of attack is, and not panic.

This essay previously appeared on Vice Motherboard.

EDITED TO ADD (12/25): Reddit thread.

Posted on December 22, 2014 at 6:08 AM

Regin Malware

Last week, we learned about a striking piece of malware called Regin that has been infecting computer networks worldwide since 2008. It’s more sophisticated than any known criminal malware, and everyone believes a government is behind it. No country has taken credit for Regin, but there’s substantial evidence that it was built and operated by the United States.

This isn’t the first government malware discovered. GhostNet is believed to be Chinese. Red October and Turla are believed to be Russian. The Mask is probably Spanish. Stuxnet and Flame are probably from the U.S. All these were discovered in the past five years, and named by researchers who inferred their creators from clues such as who the malware targeted.

I dislike the “cyberwar” metaphor for espionage and hacking, but there is a war of sorts going on in cyberspace. Countries are using these weapons against each other. This affects all of us not just because we might be citizens of one of these countries, but because we are all potentially collateral damage. Most of the varieties of malware listed above have been used against nongovernment targets, such as national infrastructure, corporations, and NGOs. Sometimes these attacks are accidental, but often they are deliberate.

For their defense, civilian networks must rely on commercial security products and services. We largely rely on antivirus products from companies such as Symantec, Kaspersky, and F-Secure. These products continuously scan our computers, looking for malware, deleting it, and alerting us as they find it. We expect these companies to act in our interests, and never deliberately fail to protect us from a known threat.

This is why the recent disclosure of Regin is so disquieting. The first public announcement of Regin was from Symantec, on November 23. The company said that its researchers had been studying it for about a year, and announced its existence because they knew of another source that was going to announce it. That source was a news site, the Intercept, which described Regin and its U.S. connections the following day. Both Kaspersky and F-Secure soon published their own findings. Both stated that they had been tracking Regin for years. All three of the antivirus companies were able to find samples of it in their files dating back to 2008 or 2009.

So why did these companies all keep Regin a secret for so long? And why did they leave us vulnerable for all this time?

To get an answer, we have to disentangle two things. Near as we can tell, all the companies had added signatures for Regin to their detection database long before last month. The VirusTotal website has a signature for Regin as of 2011. Both Microsoft security and F-Secure started detecting and removing it that year as well. Symantec has protected its users against Regin since 2013, although it certainly added the VirusTotal signature in 2011.
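In its simplest form, a detection signature is just a fingerprint of a known sample: the vendor ships the fingerprint to every installed scanner, and the scanner compares files against it. A toy sketch of that idea (the sample bytes and database are made up for illustration):

```python
import hashlib

# Hypothetical signature database: SHA-256 digests of known malware samples,
# standing in for the entries vendors push to deployed scanners.
KNOWN_BAD = {
    hashlib.sha256(b"example-malware-module-sample").hexdigest(),
}

def scan(file_bytes: bytes) -> bool:
    """Return True if the file's digest matches a known signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD

print(scan(b"example-malware-module-sample"))  # True
print(scan(b"harmless document"))              # False
```

Real products use byte-pattern and behavioral signatures rather than whole-file hashes, but the asymmetry is the same: adding a signature quietly protects customers, while publicly describing the malware is a separate editorial decision — which is exactly the gap the Regin story exposed.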

Entirely separately and seemingly independently, all of these companies decided not to publicly discuss Regin’s existence until after Symantec and the Intercept did so. Reasons given vary. Mikko Hypponen of F-Secure said that specific customers asked him not to discuss the malware that had been found on their networks. Fox IT, which was hired to remove Regin from the Belgian phone company Belgacom’s networks, didn’t say anything about what it discovered because it “didn’t want to interfere with NSA/GCHQ operations.”

My guess is that none of the companies wanted to go public with an incomplete picture. Unlike criminal malware, government-grade malware can be hard to figure out. It’s much more elusive and complicated. It is constantly updated. Regin is made up of multiple modules—Fox IT called it “a full framework of a lot of species of malware”—making it even harder to figure out what’s going on. Regin has also been used sparingly, against only a select few targets, making it hard to get samples. When you make a press splash by identifying a piece of malware, you want to have the whole story. Apparently, no one felt they had that with Regin.

That is not a good enough excuse, though. As nation-state malware becomes more common, we will often lack the whole story. And as long as countries are battling it out in cyberspace, some of us will be targets and the rest of us might be unlucky enough to be sitting in the blast radius. Military-grade malware will continue to be elusive.

Right now, antivirus companies are probably sitting on incomplete stories about a dozen more varieties of government-grade malware. But they shouldn’t. We want, and need, our antivirus companies to tell us everything they can about these threats as soon as they know them, and not wait until the release of a political story makes it impossible for them to remain silent.

This essay previously appeared in the MIT Technology Review.

Posted on December 8, 2014 at 7:19 AM

US National Guard is Getting Into Cyberwar

The Maryland Air National Guard needs a new facility for its cyberwar operations:

The purpose of this facility is to house a Network Warfare Group and ISR Squadron. The Cyber mission includes a set of capabilities, expertise to enable the cyber operational need for an always-on, net-speed awareness and integrated operational response with global reach. It enables operators to drive upstream in pursuit of cyber adversaries, and is informed 24/7 by intelligence and all-source information.

Is this something we want the Maryland Air National Guard to get involved in?

Posted on July 17, 2014 at 3:16 PM

Computer Network Exploitation vs. Computer Network Attack

Back when we first started getting reports of the Chinese breaking into U.S. computer networks for espionage purposes, we described it in some very strong language. We called the Chinese actions cyberattacks. We sometimes even invoked the word cyberwar, and declared that a cyberattack was an act of war.

When Edward Snowden revealed that the NSA has been doing exactly the same thing as the Chinese to computer networks around the world, we used much more moderate language to describe U.S. actions: words like espionage, or intelligence gathering, or spying. We stressed that it’s a peacetime activity, and that everyone does it.

The reality is somewhere in the middle, and the problem is that our intuitions are based on history.

Electronic espionage is different today than it was in the pre-Internet days of the Cold War. Eavesdropping isn’t passive anymore. It’s not the electronic equivalent of sitting close to someone and overhearing a conversation. It’s not passively monitoring a communications circuit. It’s more likely to involve actively breaking into an adversary’s computer network—be it Chinese, Brazilian, or Belgian—and installing malicious software designed to take over that network.

In other words, it’s hacking. Cyber-espionage is a form of cyber-attack. It’s an offensive action. It violates the sovereignty of another country, and we’re doing it with far too little consideration of its diplomatic and geopolitical costs.

The abbreviation-happy U.S. military has two related terms for what it does in cyberspace. CNE stands for “computer network exploitation.” That’s spying. CNA stands for “computer network attack.” That includes actions designed to destroy or otherwise incapacitate enemy networks. That’s—among other things—sabotage.

CNE and CNA are not solely in the purview of the U.S.; everyone does it. We know that other countries are building their offensive cyberwar capabilities. We have discovered sophisticated surveillance networks from other countries with names like GhostNet, Red October, The Mask. We don’t know who was behind them—these networks are very difficult to trace back to their source—but we suspect China, Russia, and Spain, respectively. We recently learned of a hacking tool called RCS that’s used by 21 governments: Azerbaijan, Colombia, Egypt, Ethiopia, Hungary, Italy, Kazakhstan, Korea, Malaysia, Mexico, Morocco, Nigeria, Oman, Panama, Poland, Saudi Arabia, Sudan, Thailand, Turkey, UAE, and Uzbekistan.

When the Chinese company Huawei tried to sell networking equipment to the U.S., the government considered that equipment a “national security threat,” rightly fearing that those switches were backdoored to allow the Chinese government both to eavesdrop on and to attack US networks. Now we know that the NSA is doing the exact same thing to American-made equipment sold in China, as well as to those very same Huawei switches.

The problem is that, from the point of view of the object of an attack, CNE and CNA look the same as each other, except for the end result. Today’s surveillance systems involve breaking into the computers and installing malware, just as cybercriminals do when they want your money. And just like Stuxnet, the U.S./Israeli cyberweapon that disabled the Natanz nuclear facility in Iran in 2010.

This is what Microsoft’s General Counsel Brad Smith meant when he said: “Indeed, government snooping potentially now constitutes an ‘advanced persistent threat,’ alongside sophisticated malware and cyber attacks.”

When the Chinese penetrate U.S. computer networks, which they do with alarming regularity, we don’t really know what they’re doing. Are they modifying our hardware and software to just eavesdrop, or are they leaving “logic bombs” that could be triggered to do real damage at some future time? It can be impossible to tell. As a 2011 EU cybersecurity policy document stated (page 7):

…technically speaking, CNA requires CNE to be effective. In other words, what may be preparations for cyberwarfare can well be cyberespionage initially or simply be disguised as such.

We can’t tell the intentions of the Chinese, and they can’t tell ours, either.

Much of the current debate in the U.S. is over what the NSA should be allowed to do, and whether limiting the NSA somehow empowers other governments. That’s the wrong debate. We don’t get to choose between a world where the NSA spies and one where the Chinese spy. Our choice is between a world where our information infrastructure is vulnerable to all attackers or secure for all users.

As long as cyber-espionage equals cyber-attack, we would be much safer if we focused the NSA’s efforts on securing the Internet from these attacks. True, we wouldn’t get the same level of access to information flows around the world. But we would be protecting the world’s information flows—including our own—from both eavesdropping and more damaging attacks. We would be protecting our information flows from governments, nonstate actors, and criminals. We would be making the world safer.

Offensive military operations in cyberspace, be they CNE or CNA, should be the purview of the military. In the U.S., that’s Cyber Command. Such operations should be recognized as offensive military actions, and should be approved at the highest levels of the executive branch, and be subject to the same international law standards that govern acts of war in the offline world.

If we’re going to attack another country’s electronic infrastructure, we should treat it like any other attack on a foreign country. It’s no longer just espionage, it’s a cyber-attack.

This essay previously appeared on TheAtlantic.com.

Posted on March 10, 2014 at 6:46 AM

"Military Style" Raid on California Power Station

I don’t know what to think about this:

Around 1:00 AM on April 16, at least one individual (possibly two) entered two different manholes at the PG&E Metcalf power substation, southeast of San Jose, and cut fiber cables in the area around the substation. That knocked out some local 911 services, landline service to the substation, and cell phone service in the area, a senior U.S. intelligence official told Foreign Policy. The intruder(s) then fired more than 100 rounds from what two officials described as a high-powered rifle at several transformers in the facility. Ten transformers were damaged in one area of the facility, and three transformer banks—or groups of transformers—were hit in another, according to a PG&E spokesman.

The article worries that this might be a dry run for some cyberwar-like attack, but that doesn’t make sense. Yet it’s too complicated and weird to be a prank.

Anyone have any ideas?

Posted on January 2, 2014 at 6:40 AM

The Battle for Power on the Internet

We’re in the middle of an epic battle for power in cyberspace. On one side are the traditional, organized, institutional powers such as governments and large multinational corporations. On the other are the distributed and nimble: grassroots movements, dissident groups, hackers, and criminals. Initially, the Internet empowered the second side. It gave them a place to coordinate and communicate efficiently, and made them seem unbeatable. But now, the more traditional institutional powers are winning, and winning big. How these two sides fare in the long term, and the fate of the rest of us who don’t fall into either group, is an open question—and one vitally important to the future of the Internet.

In the Internet’s early days, there was a lot of talk about its “natural laws”—how it would upend traditional power blocs, empower the masses, and spread freedom throughout the world. The international nature of the Internet circumvented national laws. Anonymity was easy. Censorship was impossible. Police were clueless about cybercrime. And bigger changes seemed inevitable. Digital cash would undermine national sovereignty. Citizen journalism would topple traditional media, corporate PR, and political parties. Easy digital copying would destroy the traditional movie and music industries. Web marketing would allow even the smallest companies to compete against corporate giants. It really would be a new world order.

This was a utopian vision, but some of it did come to pass. Internet marketing has transformed commerce. The entertainment industries have been transformed by things like MySpace and YouTube, and are now more open to outsiders. Mass media has changed dramatically, and some of the most influential people in the media have come from the blogging world. There are new ways to organize politically and run elections. Crowdfunding has made tens of thousands of projects possible to finance, and crowdsourcing made more types of projects possible. Facebook and Twitter really did help topple governments.

But that is just one side of the Internet’s disruptive character. The Internet has emboldened traditional power as well.

On the corporate side, power is consolidating, a result of two current trends in computing. First, the rise of cloud computing means that we no longer have control of our data. Our e-mail, photos, calendars, address books, messages, and documents are on servers belonging to Google, Apple, Microsoft, Facebook, and so on. And second, we are increasingly accessing our data using devices that we have much less control over: iPhones, iPads, Android phones, Kindles, ChromeBooks, and so on. Unlike general-purpose computers running traditional operating systems, those devices are controlled much more tightly by their vendors, who limit what software can run, what the devices can do, how they’re updated, and so on. Even Windows 8 and Apple’s Mountain Lion operating system are heading in the direction of more vendor control.

I have previously characterized this model of computing as “feudal.” Users pledge their allegiance to more powerful companies who, in turn, promise to protect them from both sysadmin duties and security threats. It’s a metaphor that’s rich in history and in fiction, and a model that’s increasingly permeating computing today.

Medieval feudalism was a hierarchical political system, with obligations in both directions. Lords offered protection, and vassals offered service. The lord-peasant relationship was similar, with a much greater power differential. It was a response to a dangerous world.

Feudal security consolidates power in the hands of the few. Internet companies, like lords before them, act in their own self-interest. They use their relationship with us to increase their profits, sometimes at our expense. They act arbitrarily. They make mistakes. They’re deliberately—and incidentally—changing social norms. Medieval feudalism gave the lords vast powers over the landless peasants; we’re seeing the same thing on the Internet.

It’s not all bad, of course. We, especially those of us who are not technical, like the convenience, redundancy, portability, automation, and shareability of vendor-managed devices. We like cloud backup. We like automatic updates. We like not having to deal with security ourselves. We like that Facebook just works—from any device, anywhere.

Government power is also increasing on the Internet. There is more government surveillance than ever before. There is more government censorship than ever before. There is more government propaganda, and an increasing number of governments are controlling what their citizens can and cannot do on the Internet. Totalitarian governments are embracing a growing “cyber sovereignty” movement to further consolidate their power. And the cyberwar arms race is on, pumping an enormous amount of money into cyber-weapons and consolidated cyber-defenses, further increasing government power.

In many cases, the interests of corporate and government powers are aligning. Both corporations and governments benefit from ubiquitous surveillance, and the NSA is using Google, Facebook, Verizon, and others to get access to data it couldn’t otherwise. The entertainment industry is looking to governments to enforce its antiquated business models. Commercial security equipment from companies like BlueCoat and Sophos is being used by oppressive governments to surveil and censor their citizens. The same facial recognition technology that Disney uses in its theme parks can also identify protesters in China and Occupy Wall Street activists in New York. Think of it as a public/private surveillance partnership.

What happened? How, in those early Internet years, did we get the future so wrong?

The truth is that technology magnifies power in general, but rates of adoption are different. The unorganized, the distributed, the marginal, the dissidents, the powerless, the criminal: they can make use of new technologies very quickly. And when those groups discovered the Internet, suddenly they had power. But later, when the already-powerful big institutions finally figured out how to harness the Internet, they had more power to magnify. That’s the difference: the distributed were more nimble and were faster to make use of their new power, while the institutional were slower but were able to use their power more effectively.

So while the Syrian dissidents used Facebook to organize, the Syrian government used Facebook to identify dissidents to arrest.

All isn’t lost for distributed power, though. For institutional power, the Internet is a change in degree, but for distributed power, it’s a qualitative one. The Internet gives decentralized groups—for the first time—the ability to coordinate. This can have incredible ramifications, as we saw in the SOPA/PIPA debate, Gezi, Brazil, and the rising use of crowdfunding. It can invert power dynamics, even in the presence of surveillance, censorship, and use control. But aside from political coordination, the Internet allows for social coordination as well—to unite, for example, ethnic diasporas, gender minorities, sufferers of rare diseases, and people with obscure interests.

This isn’t static: Technological advances continue to provide advantage to the nimble. I discussed this trend in my book Liars and Outliers. If you think of security as an arms race between attackers and defenders, any technological advance gives one side or the other a temporary advantage. But most of the time, a new technology benefits the nimble first. They are not hindered by bureaucracy—and sometimes not by laws or ethics, either. They can evolve faster.

We saw it with the Internet. As soon as the Internet started being used for commerce, a new breed of cybercriminal emerged, immediately able to take advantage of the new technology. It took police a decade to catch up. And we saw it on social media, as political dissidents made use of its organizational powers before totalitarian regimes did.

This delay is what I call a “security gap.” It’s greater when there’s more technology, and in times of rapid technological change. Basically, if there are more innovations to exploit, there will be more damage resulting from society’s inability to keep up with exploiters of all of them. And since our world is one in which there’s more technology than ever before, and a faster rate of technological change than ever before, we should expect to see a greater security gap than ever before. In other words, there will be an increasing time period during which nimble distributed powers can make use of new technologies before slow institutional powers can make better use of those technologies.

This is the battle: quick vs. strong. To return to medieval metaphors, you can think of a nimble distributed power—whether marginal, dissident, or criminal—as Robin Hood; and ponderous institutional powers—both government and corporate—as the feudal lords.

So who wins? Which type of power dominates in the coming decades?

Right now, it looks like traditional power. Ubiquitous surveillance means that it’s easier for the government to identify dissidents than it is for the dissidents to remain anonymous. Data monitoring means it’s easier for the Great Firewall of China to block data than it is for people to circumvent it. The way we all use the Internet makes it much easier for the NSA to spy on everyone than it is for anyone to maintain privacy. And even though it is easy to circumvent digital copy protection, most users still can’t do it.

The problem is that leveraging Internet power requires technical expertise. Those with sufficient ability will be able to stay ahead of institutional powers. Whether it’s setting up your own e-mail server, effectively using encryption and anonymity tools, or breaking copy protection, there will always be technologies that can evade institutional powers. This is why cybercrime is still pervasive, even as police savvy increases; why technically capable whistleblowers can do so much damage; and why organizations like Anonymous are still a viable social and political force. Assuming technology continues to advance—and there’s no reason to believe it won’t—there will always be a security gap in which technically advanced Robin Hoods can operate.

Most people, though, are stuck in the middle. These are people who don’t have the technical ability to evade large governments and corporations, avoid the criminal and hacker groups who prey on them, or join any resistance or dissident movements. These are the people who accept default configuration options, arbitrary terms of service, NSA-installed back doors, and the occasional complete loss of their data. These are the people who get increasingly isolated as government and corporate power align. In the feudal world, these are the hapless peasants. And it’s even worse when the feudal lords—or any powers—fight each other. As anyone watching Game of Thrones knows, peasants get trampled when powers fight: when Facebook, Google, Apple, and Amazon fight it out in the market; when the US, EU, China, and Russia fight it out in geopolitics; or when it’s the US vs. “the terrorists” or China vs. its dissidents.

The abuse will only get worse as technology continues to advance. In the battle between institutional power and distributed power, more technology means more damage. We’ve already seen this: Cybercriminals can rob more people more quickly than criminals who have to physically visit everyone they rob. Digital pirates can make more copies of more things much more quickly than their analog forebears. And we’ll see it in the future: 3D printers mean that the computer restriction debate will soon involve guns, not movies. Big data will mean that more companies will be able to identify and track you more easily. It’s the same problem as the “weapons of mass destruction” fear: terrorists with nuclear or biological weapons can do a lot more damage than terrorists with conventional explosives. And by the same token, terrorists with large-scale cyberweapons can potentially do more damage than terrorists with those same bombs.

It’s a numbers game. Very broadly, because of the way humans behave as a species and as a society, every society is going to have a certain amount of crime. And there’s a particular crime rate society is willing to tolerate. With historically inefficient criminals, we were willing to live with some percentage of criminals in our society. As technology makes each individual criminal more powerful, the percentage we can tolerate decreases. Again, remember the “weapons of mass destruction” debate: As the amount of damage each individual terrorist can do increases, we need to do increasingly more to prevent even a single terrorist from succeeding.

The more destabilizing the technologies, the greater the rhetoric of fear, and the stronger institutional powers will get. This means increasingly repressive security measures, even if the security gap means that such measures become increasingly ineffective. And it will squeeze the peasants in the middle even more.

Without the protection of his own feudal lord, the medieval peasant was subject to abuse by both criminals and other feudal lords. Today, both corporations and the government—and often the two in cahoots—are using their power to their own advantage, trampling on our rights in the process. And without the technical savvy to become Robin Hoods ourselves, we have no recourse but to submit to whatever the ruling institutional power wants.

So what happens as technology increases? Is a police state the only effective way to control distributed power and keep our society safe? Or do the fringe elements inevitably destroy society as technology increases their power? Probably neither doomsday scenario will come to pass, but figuring out a stable middle ground is hard. These questions are complicated, and dependent on future technological advances that we cannot predict. But they are primarily political questions, and any solutions will be political.

In the short term, we need more transparency and oversight. The more we know of what institutional powers are doing, the more we can trust that they are not abusing their authority. We have long known this to be true in government, but we have increasingly ignored it in our fear of terrorism and other modern threats. This is also true for corporate power. Unfortunately, market dynamics will not necessarily force corporations to be transparent; we need laws to do that. The same is true for decentralized power; transparency is how we’ll differentiate political dissidents from criminal organizations.

Oversight is also critically important, and is another long-understood mechanism for checking power. This can be a combination of things: courts that act as third-party advocates for the rule of law rather than rubber-stamp organizations, legislatures that understand the technologies and how they affect power balances, and vibrant public-sector press and watchdog groups that analyze and debate the actions of those wielding power.

Transparency and oversight give us the confidence to trust institutional powers to fight the bad side of distributed power, while still allowing the good side to flourish. For if we’re going to entrust our security to institutional powers, we need to know they will act in our interests and not abuse that power. Otherwise, democracy fails.

In the longer term, we need to work to reduce power differences. The key to all of this is access to data. On the Internet, data is power. To the extent the powerless have access to it, they gain in power. To the extent that the already powerful have access to it, they further consolidate their power. As we look to reducing power imbalances, we have to look at data: data privacy for individuals, mandatory disclosure laws for corporations, and open government laws.

Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad-hoc and one-sided. Those in power have a lot of rights, but increasingly few responsibilities or limits. We need to rebalance this relationship. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people. In addition to reining in government power, we need similar restrictions on corporate power: a new Magna Carta focused on the institutions that abuse power in the 21st century.

Today’s Internet is a fortuitous accident: a combination of an initial lack of commercial interests, government benign neglect, military requirements for survivability and resilience, and computer engineers building open systems that worked simply and easily.

We’re at the beginning of some critical debates about the future of the Internet: the proper role of law enforcement, the character of ubiquitous surveillance, the collection and retention of our entire life’s history, how automatic algorithms should judge us, government control over the Internet, cyberwar rules of engagement, national sovereignty on the Internet, limitations on the power of corporations over our data, the ramifications of information consumerism, and so on.

Data is the pollution problem of the information age. All computer processes produce it. It stays around. How we deal with it—how we reuse and recycle it, who has access to it, how we dispose of it, and what laws regulate it—is central to how the information age functions. And I believe that just as we look back at the early decades of the industrial age and wonder how society could ignore pollution in its rush to build an industrial world, our grandchildren will look back at us during these early decades of the information age and judge us on how we dealt with the rebalancing of power resulting from all this new data.

This won’t be an easy period for us as we try to work these issues out. Historically, no shift in power has ever been easy. Corporations have turned our personal data into an enormous revenue generator, and they’re not going to back down. Neither will governments, which have harnessed that same data for their own purposes. But we have a duty to tackle this problem.

I can’t tell you what the result will be. These are all complicated issues, and require meaningful debate, international cooperation, and innovative solutions. We need to decide on the proper balance between institutional and decentralized power, and how to build tools that amplify what is good in each while suppressing the bad.

This essay previously appeared in the Atlantic.

EDITED TO ADD (11/5): This essay has been translated into Danish.

Posted on October 30, 2013 at 6:50 AM

Understanding the Threats in Cyberspace

The primary difficulty of cyber security isn’t technology—it’s policy. The Internet mirrors real-world society, which makes security policy online as complicated as it is in the real world. Protecting critical infrastructure against cyber-attack is just one of cyberspace’s many security challenges, so it’s important to understand them all before any one of them can be solved.

The list of bad actors in cyberspace is long, and spans a wide range of motives and capabilities. At the extreme end there’s cyberwar: destructive actions by governments during a war. When government policymakers like David Omand think of cyber-attacks, that’s what comes to mind. Cyberwar is conducted by capable and well-funded groups and involves military operations against both military and civilian targets. Along much the same lines are non-nation state actors who conduct terrorist operations. Although less capable and well-funded, they are often talked about in the same breath as true cyberwar.

Much more common are the domestic and international criminals who run the gamut from lone individuals to organized crime. They can be very capable and well-funded and will continue to inflict significant economic damage.

Threats from peacetime governments have been seen increasingly in the news. The US worries about Chinese espionage against Western targets, and we’re also seeing US surveillance of pretty much everyone in the world, including Americans inside the US. The National Security Agency (NSA) is probably the most capable and well-funded espionage organization in the world, and we’re still learning about the full extent of its sometimes illegal operations.

Hacktivists are a different threat. Their actions range from Internet-age acts of civil disobedience to the inflicting of actual damage. This is hard to generalize about because the individuals and groups in this category vary so much in skill, funding and motivation. Hackers falling under the Anonymous aegis—it really isn’t correct to call them a group—come under this category, as does WikiLeaks. Most of these attackers are outside the organization, although whistleblowing—the civil disobedience of the information age—generally involves insiders like Edward Snowden.

This list of potential network attackers isn’t exhaustive. Depending on who you are and what your organization does, you might also be concerned with espionage cyber-attacks by the media, rival corporations, or even the corporations we entrust with our data.

The issue here, and why it affects policy, is that protecting against these various threats can lead to contradictory requirements. In the US, the NSA’s post-9/11 mission to protect the country from terrorists has transformed it into a domestic surveillance organization. The NSA’s need to protect its own information systems from outside attack opened it up to attacks from within. Do the corporate security products we buy to protect ourselves against cybercrime contain backdoors that allow for government spying? European countries may condemn the US for spying on its own citizens, but do they do the same thing?

All these questions are especially difficult because military and security organizations along with corporations tend to hype particular threats. For example, cyberwar and cyberterrorism are greatly overblown as threats—because they result in massive government programs with huge budgets and power—while cybercrime is largely downplayed.

We need greater transparency, oversight and accountability on both the government and corporate sides before we can move forward. With the secrecy that surrounds cyber-attack and cyberdefense it’s hard to be optimistic.

This essay previously appeared in Europe’s World.

Posted on October 28, 2013 at 6:39 AM
