Entries Tagged "cyberwar"


Security in Ten Years

This is a conversation between Marcus Ranum and me. It will appear in Information Security Magazine this month.


Bruce Schneier: Predictions are easy and difficult. Roy Amara of the Institute for the Future once said: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Moore’s Law is easy: In 10 years, computers will be 100 times more powerful. My desktop will fit into my cell phone, we’ll have gigabit wireless connectivity everywhere, and personal networks will connect our computing devices and the remote services we subscribe to. Other aspects of the future are much more difficult to predict. I don’t think anyone can predict what the emergent properties of 100x computing power will bring: new uses for computing, new paradigms of communication. A 100x world will be different, in ways that will be surprising.
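
As a quick sanity check on that 100x figure, here is a back-of-the-envelope calculation. The doubling period is my own assumption (the conventional 18-month reading of Moore's Law); it is not stated above.

```python
# Back-of-the-envelope check of the "100x in 10 years" figure.
# Assumption (not from the text above): computing power doubles roughly
# every 18 months, per the common statement of Moore's Law.

def growth_factor(years: float, doubling_period_years: float = 1.5) -> float:
    """Multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

print(f"~{growth_factor(10):.0f}x after 10 years")  # prints "~102x after 10 years"
```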

But throughout history and into the future, the one constant is human nature. There hasn’t been a new crime invented in millennia. Fraud, theft, impersonation and counterfeiting are perennial problems that have been around since the beginning of society. During the last 10 years, these crimes have migrated into cyberspace, and over the next 10, they will migrate into whatever computing, communications and commerce platforms we’re using.

The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.

I don’t see anything by 2017 that will fundamentally alter this. Do you?


Marcus Ranum: I think you’re right; at a meta-level, the problems are going to stay the same. What’s shocking and disappointing to me is that our responses to those problems also remain the same, in spite of the obvious fact that they aren’t effective. It’s 2007, and we still haven’t accepted that:

  • You can’t turn shovelware into reliable software by patching it a whole lot.
  • You shouldn’t mix production systems with non-production systems.
  • You actually have to know what’s going on in your networks.
  • If you run your computers with an open execution runtime model you’ll always get viruses, spyware and Trojan horses.
  • You can pass laws about locking barn doors after horses have left, but it won’t put the horses back in the barn.
  • Security has to be designed in, as part of a system plan for reliability, rather than bolted on afterward.

The list could go on for several pages, but it would be too depressing. It would be “Marcus’ list of obvious stuff that everybody knows but nobody accepts.”

You missed one important aspect of the problem: By 2017, computers will be even more important to our lives, economies and infrastructure.

If you’re right that crime remains a constant, and I’m right that our responses to computer security remain ineffective, 2017 is going to be a lot less fun than 2007 was.

I’ve been pretty dismissive of the concepts of cyberwar and cyberterror. That dismissal was mostly motivated by my observation that the patchworked and kludgy nature of most computer systems acts as a form of defense in its own right, and that real-world attacks remain more cost-effective and practical for terror purposes.

I’d like to officially modify my position somewhat: I believe it’s increasingly likely that we’ll suffer catastrophic failures in critical infrastructure systems by 2017. It probably won’t be terrorists that do it, though. More likely, we’ll suffer some kind of horrible outage because a critical system was connected to a non-critical system that was connected to the Internet so someone could get to MySpace—and that ancillary system gets a piece of malware. Or it’ll be some incomprehensibly complex software, layered with Band-Aids and patches, that topples over when some “merely curious” hacker pushes the wrong e-button. We’ve got some bad-looking trend lines; all the indicators point toward a system that is more complex, less well-understood and more interdependent. With infrastructure like that, who needs enemies?

You’re worried criminals will continue to penetrate into cyberspace, and I’m worried complexity, poor design and mismanagement will be there to meet them.


Bruce Schneier: I think we’ve already suffered that kind of critical systems failure. The August 2003 blackout that covered much of the northeastern United States and Canada—50 million people—was caused by a software bug.

I don’t disagree that things will continue to get worse. Complexity is the worst enemy of security, and the Internet—and the computers and processes connected to it—is getting more complex all the time. So things are getting worse, even though security technology is improving. One could say those critical insecurities are another emergent property of the 100x world of 2017.

Yes, IT systems will continue to become more critical to our infrastructure—banking, communications, utilities, defense, everything.

By 2017, the interconnections will be so critical that it will probably be cost-effective—and low-risk—for a terrorist organization to attack over the Internet. I also deride talk of cyberterror today, but I don’t think I will in another 10 years.

While the trends of increased complexity and poor management don’t look good, there is another trend that points to more security—but neither of us is going to like it. That trend is IT as a service.

By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.


Marcus Ranum: You’re right about the shift toward services—it’s the ultimate way to lock in customers.

If you can make it difficult for the customer to get his data back after you’ve held it for a while, you can effectively prevent the customer from ever leaving. And of course, customers will be told “trust us, your data is secure,” and they’ll take that for an answer. The back-end systems that will power the future of utility computing are going to be just as full of flaws as our current systems. Utility computing will also completely fail to address the problem of transitive trust unless people start shifting to a more reliable endpoint computing platform.

That’s the problem with where we’re heading: the endpoints are not going to get any better. People are attracted to appliances because they get around the headache of system administration (which, in today’s security environment, equates to “endless patching hell”), but underneath the slick surface of the appliance we’ll have the same insecure nonsense we’ve got with general-purpose desktops. In fact, the development of appliances running general-purpose operating systems really does raise the possibility of a software monoculture. By 2017, do you think system engineering will progress to the point where we won’t see a vendor release a new product and instantly create an installed base of 1 million-plus users with root privileges? I don’t, and that scares me.

So if you’re saying the trend is to continue putting all our eggs in one basket and blithely trusting that basket, I agree.

Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and Web surf. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.


Bruce Schneier: You’re right about the endpoints not getting any better. I’ve written again and again about how measures like two-factor authentication aren’t going to make electronic banking any more secure. The problem is that if someone has stuck a Trojan on your computer, it doesn’t matter how many ways you authenticate to the banking server; the Trojan is going to perform illicit transactions after you authenticate.

It’s the same with a lot of our secure protocols. SSL, SSH, PGP and so on all assume the endpoints are secure, and the threat is in the communications system. But we know the real risks are the endpoints.
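
To make that endpoint argument concrete, here is a minimal toy sketch in Python. Every name and the pretend protocol are hypothetical, invented only for illustration; this is not a real banking API or any actual system’s design.

```python
# A deliberately simplified toy, not a real banking protocol. It only
# illustrates that once the endpoint is compromised, the server cannot
# distinguish the user's requests from the malware's.
import secrets

class ToyBankServer:
    def __init__(self):
        self.sessions = {}  # session token -> account name

    def login(self, account, password, otp):
        """Two-factor login; returns a session token on success."""
        if password == "correct-password" and otp == "123456":
            token = secrets.token_hex(8)
            self.sessions[token] = account
            return token
        return None

    def transfer(self, token, destination, amount):
        """The server sees only an authenticated session, not who issued the request."""
        if token not in self.sessions:
            return "rejected: not authenticated"
        return f"sent {amount} from {self.sessions[token]} to {destination}"

bank = ToyBankServer()
token = bank.login("alice", "correct-password", "123456")

print(bank.transfer(token, "landlord", 800))   # the transaction the user intended
print(bank.transfer(token, "attacker", 800))   # what a Trojan on the same machine can add
```

Both transfers are accepted, because from the server’s point of view they are indistinguishable; that is why stronger authentication at login does not close the hole.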

And a misguided attempt to solve this is going to dominate computing by 2017. I mentioned software-as-a-service, which you point out is really a trick that allows businesses to lock up their customers for the long haul. I pointed to the iPhone, whose draconian rules about who can write software for that platform accomplish much the same thing. We could also point to Microsoft’s Trusted Computing, which is being sold as a security measure but is really another lock-in mechanism designed to keep users from switching to “unauthorized” software or OSes.

I’m reminded of the post-9/11 anti-terrorist hysteria—we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.

Computing is heading in the same direction, although this time it is industry that wants control over its users. They’re going to sell it to us as a security system—they may even have convinced themselves it will improve security—but it’s fundamentally a control system. And in the long run, it’s going to hurt security.

Imagine we’re living in a world of Trustworthy Computing, where no software can run on your Windows box unless Microsoft approves it. That brain drain you talk about won’t be a problem, because security won’t be in the hands of the user. Microsoft will tout this as the end of malware, until some hacker figures out how to get his software approved. That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.

By then, though, we’ll be ready to start building real security. As you pointed out, networks will be so embedded into our critical infrastructure—and there’ll probably have been at least one real disaster by then—that we’ll have no choice. The question is how much we’ll have to dismantle and build over to get it right.


Marcus Ranum: I agree with your gloomy view of the future. It’s ironic that the counterculture “hackers” have enabled (by providing an excuse) today’s run-patch-run-patch-reboot software environment and tomorrow’s software Stalinism.

I don’t think we’re going to start building real security. Because real security is not something you build—it’s something you get when you leave out all the other garbage as part of your design process. Purpose-designed and purpose-built software is more expensive to build, but cheaper to maintain. The prevailing wisdom about software return on investment doesn’t factor in patching and patch-related downtime, because if it did, the numbers would stink. Meanwhile, I’ve seen purpose-built Internet systems run for years without patching because they didn’t rely on bloated components. I doubt industry will catch on.
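
To show the kind of numbers-would-stink arithmetic that claim points at, here is a purely illustrative comparison. Every figure below is an invented placeholder chosen only to make the structure of the calculation visible; none of it is data from the text.

```python
# Purely illustrative: all figures below are invented placeholders.
# The only point is that lifetime cost depends heavily on patch frequency
# and patch-related downtime, which up-front ROI comparisons often omit.

def lifetime_cost(build_cost, years, patches_per_year, cost_per_patch):
    """Build cost plus ongoing patching (including downtime) over the system's life."""
    return build_cost + years * patches_per_year * cost_per_patch

# Hypothetical commodity stack: cheaper to build, patched constantly.
commodity = lifetime_cost(build_cost=100_000, years=5,
                          patches_per_year=24, cost_per_patch=5_000)

# Hypothetical purpose-built system: pricier to build, rarely patched.
purpose_built = lifetime_cost(build_cost=300_000, years=5,
                              patches_per_year=2, cost_per_patch=5_000)

print(f"Commodity stack, 5 years:     ${commodity:,}")      # $700,000
print(f"Purpose-built stack, 5 years: ${purpose_built:,}")  # $350,000
```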

The future will be captive data running on purpose-built back-end systems—and it won’t be a secure future, because turning your data over always decreases your security. Few possess the understanding of complexity and good design principles necessary to build reliable or secure systems. So, effectively, outsourcing—or other forms of making security someone else’s problem—will continue to seem attractive.

That doesn’t look like a very rosy future to me. It’s a shame, too, because getting this stuff correct is important. You’re right that there are going to be disasters in our future.

I think they’re more likely to be accidents where the system crumbles under the weight of its own complexity, rather than hostile action. Will we even be able to figure out what happened, when it happens?

Folks, the captains have illuminated the “Fasten your seat belts” sign. We predict bumpy conditions ahead.

EDITED TO ADD (12/4): Commentary on the point/counterpoint.

Posted on December 3, 2007 at 12:14 PM

Cybercrime vs Cyberterrorism

I’ve been saying this for a while now:

Since the outbreak of a cybercrime epidemic that has cost the American economy billions of dollars, the federal government has failed to respond with enough resources, attention and determination to combat the cyberthreat, a Mercury News investigation reveals.

“The U.S. government has not devoted the leadership and energy that this issue needs,” said Paul Kurtz, a former administration homeland and cybersecurity adviser. “It’s been neglected.”

Even as the White House asked last week for $154 million toward a new cybersecurity initiative expected to reach billions of dollars over the next several years, security experts complain the administration remains too focused on the risks of online espionage and information warfare, overlooking the international criminals who are stealing a fortune through the Internet.

This is Part III of a good series on cybercrime. Here are Parts I and II.

Posted on November 28, 2007 at 6:56 AM

Cyberwar: Myth or Reality?

The biggest problems in discussing cyberwar are the definitions. The things most often described as cyberwar are really cyberterrorism, and the things most often described as cyberterrorism are more like cybercrime, cybervandalism or cyberhooliganism—or maybe cyberespionage.

At first glance there’s nothing new about these terms except the “cyber” prefix. War, terrorism, crime and vandalism are old concepts. What’s new is the domain; it’s the same old stuff occurring in a new arena. But because cyberspace is different, there are differences worth considering.

Of course, the terms overlap. Although the goals are different, many tactics used by armies, terrorists and criminals are the same. Just as they use guns and bombs, they can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime or even—if done by some 14-year-old who doesn’t really understand what he’s doing—cyberhooliganism. Which it is depends on the attacker’s motivations and the surrounding circumstances—just as in the real world.

For it to be cyberwar, it must first be war. In the 21st century, war will inevitably include cyberwar. Just as war moved into the air with the development of kites, balloons and aircraft, and into space with satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics and defenses.

I have no doubt that smarter and better-funded militaries are planning for cyberwar. They have Internet attack tools: denial-of-service tools; exploits that would allow military intelligence to penetrate military systems; viruses and worms similar to what we see now, but perhaps country- or network-specific; and Trojans that eavesdrop on networks, disrupt operations, or allow an attacker to penetrate other networks. I believe militaries know of vulnerabilities in operating systems, generic or custom military applications, and code to exploit those vulnerabilities. It would be irresponsible for them not to.

The most obvious attack is the disabling of large parts of the Internet, although in the absence of global war, I doubt a military would do so; the Internet is too useful an asset and too large a part of the world economy. More interesting is whether militaries would disable national pieces of it. For a surgical approach, we can imagine a cyberattack against a military headquarters, or networks handling logistical information.

Destruction is the last thing a military wants to accomplish with a communications network. A military only wants to shut down an enemy’s network if it isn’t acquiring useful information. The best thing is to infiltrate enemy computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, perform traffic analysis: analyze the characteristics of communications. Only if a military can’t do any of this would it consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh the advantages of eavesdropping on it.

Cyberwar is certainly not a myth. But you haven’t seen it yet, despite the attacks on Estonia. Cyberwar is warfare in cyberspace. And warfare involves massive death and destruction. When you see it, you’ll know it.

This is the second half of a point/counterpoint with Marcus Ranum; it appeared in the November issue of Information Security Magazine. Marcus’s half is here.

I wrote a longer essay on cyberwar here.

Posted on November 12, 2007 at 7:38 AM

Staged Attack Causes Generator to Self-Destruct

I assume you’ve all seen the news:

A government video shows the potential destruction caused by hackers seizing control of a crucial part of the U.S. electrical grid: an industrial turbine spinning wildly out of control until it becomes a smoking hulk and power shuts down.

The video, produced for the Homeland Security Department and obtained by The Associated Press on Wednesday, was marked “Official Use Only.” It shows commands quietly triggered by simulated hackers having such a violent reaction that the enormous turbine shudders as pieces fly apart and it belches black-and-white smoke.

The video was produced for top U.S. policy makers by the Idaho National Laboratory, which has studied the little-understood risks to the specialized electronic equipment that operates power, water and chemical plants. Vice President Dick Cheney is among those who have watched the video, said one U.S. official, speaking on condition of anonymity because this official was not authorized to publicly discuss such high-level briefings.

More here. And the video is on CNN.com.

I haven’t written much about SCADA security, except to say that I think the risk is overblown today but is getting more serious all the time—and we need to deal with the security before it’s too late. I didn’t know quite what to make of the Idaho National Laboratory video; it seemed like hype, but I couldn’t find any details. (The CNN headline, “Mouse click could plunge city into darkness, experts say,” was definitely hype.)

Then, I received this anonymous e-mail:

I was one of the industry technical folks the DHS consulted in developing the “immediate and required” mitigation strategies for this problem.

They talked to several industry groups (mostly management not tech folks): electric, refining, chemical, and water. They ignored most of what we said but attached our names to the technical parts of the report to make it look credible. We softened or eliminated quite a few sections that may have had relevance 20 years ago, such as war dialing attacks against modems.

The end product is a work order document from DHS which requires such things as background checks on people who have access to modems and logging their visits to sites with datacom equipment or control systems.

By the way—they were unable to hurt the generator you see in the video but did destroy the shaft that drives it and the power unit. They triggered the event from 30 miles away! Then they extrapolated the theory that a malfunctioning generator can destroy not only generators at the power company but the power glitches on the grid would destroy motors many miles away on the electric grid that pump water or gasoline (through pipelines).

They kept everything very secret (all emails and reports encrypted, high security meetings in DC) until they produced a video and press release for CNN. There was huge concern by DHS that this vulnerability would become known to the bad guys—yet now they release it to the world for their own career reasons. Beyond shameful.

Oh, and they did use a contractor for all the heavy lifting that went into writing/revising the required mitigations document. Could not even produce this work product on their own.

By the way, the vulnerability they hypothesize is completely bogus but I won’t say more about the details. Gitmo is still too hot for me this time of year.

Posted on October 2, 2007 at 6:26 AM

Pentagon Hacked by Chinese Military

The story seems to have started yesterday in the Financial Times, and is now spreading.

Not enough details to know what’s really going on, though. From the FT:

The Chinese military hacked into a Pentagon computer network in June in the most successful cyber attack on the US defence department, say American officials.

The Pentagon acknowledged shutting down part of a computer system serving the office of Robert Gates, defence secretary, but declined to say who it believed was behind the attack.

Current and former officials have told the Financial Times an internal investigation has revealed that the incursion came from the People’s Liberation Army.

One senior US official said the Pentagon had pinpointed the exact origins of the attack. Another person familiar with the event said there was a “very high level of confidence…trending towards total certainty” that the PLA was responsible. The defence ministry in Beijing declined to comment on Monday.

EDITED TO ADD (9/13): Another good commentary.

Posted on September 4, 2007 at 10:44 AM

"Cyberwar" in Estonia

I had been thinking about writing about the massive distributed-denial-of-service attack against the Estonian government last April. It’s been called the first cyberwar, although it is unclear that the Russian government was behind the attacks. And while I’ve written about cyberwar in general, I haven’t really addressed the Estonian attacks.

Now I don’t have to. Kevin Poulsen has written an excellent article on both the reality and the hype surrounding the attacks on Estonia’s networks, commenting on a story in the magazine Wired:

Writer Joshua Davis was dispatched to the smoking ruins of Estonia to assess the damage wrought by last spring’s DDoS attacks against the country’s web, e-mail and DNS servers. Josh is a talented writer, and he returned with a story that offers some genuine insights—a few, though, are likely unintentional.

We see, for example, that Estonia’s computer emergency response team responded to the junk packets with technical aplomb and coolheaded professionalism, while Estonia’s leadership … well, didn’t. Faced with DDoS and nationalistic, cross-border hacktivism—nuisances that have plagued the rest of the wired world for the better part of a decade—Estonia’s leaders lost perspective.

Here’s the best quote, from the speaker of the Estonian parliament, Ene Ergma: “When I look at a nuclear explosion, and the explosion that happened in our country in May, I see the same thing.”

[…]

While cooler heads were combating the first wave of Estonia’s DDoS attacks with packet filters, we learn, the country’s defense minister was contemplating invoking NATO Article 5, which considers an “armed attack” against any NATO country to be an attack against all. That might have obliged the U.S. and other signatories to go to war with Russia, if anyone was silly enough to take it seriously.

Fortunately, nobody important really is that silly. The U.S. has known about DDoS attacks since our own Web War One in 2000, when some of our most trafficked sites—Yahoo, Amazon.com, E-Trade, eBay, and CNN.com—were attacked in rapid succession by Canada. (The culprit was a 15-year-old boy in Montreal.)

As in Estonia years later, the attack took America’s leaders by surprise. President Clinton summoned some of the United States’ most respected computer security experts to the White House to meet and discuss options for shoring up the internet. At a photo op afterwards, a reporter lobbed Clinton a cyberwar softball: was this the “electronic Pearl Harbor?”

Estonia’s leaders, among others, could learn from the restraint of Clinton’s response. “I think it was an alarm,” he said. “I don’t think it was Pearl Harbor.

“We lost our Pacific fleet at Pearl Harbor.”

Read the whole thing.

Posted on August 23, 2007 at 1:18 PM

Department of Homeland Security Research Solicitation

Interesting document.

Lots of good stuff. The nine research areas:

  • Botnets and Other Malware: Detection and Mitigation
  • Composable and Scalable Secure Systems
  • Cyber Security Metrics
  • Network Data Visualization for Information Assurance
  • Internet Tomography/Topography
  • Routing Security Management Tool
  • Process Control System Security
  • Data Anonymization Tools and Techniques
  • Insider Threat Detection and Mitigation

And this implies they’ve accepted the problem:

Cyber attacks are increasing in frequency and impact. Even though these attacks have not yet had a significant impact on our Nation’s critical infrastructures, they have demonstrated that extensive vulnerabilities exist in information systems and networks, with the potential for serious damage. The effects of a successful cyber attack might include: serious consequences for major economic and industrial sectors, threats to infrastructure elements such as electric power, and disruption of the response and communications capabilities of first responders.

It’s good to see research money going to this stuff.

Posted on June 6, 2007 at 6:07 AM

Cyberwar

I haven’t posted anything about the cyberwar between Russia and Estonia because, well, because I didn’t think there was anything new to say. We know that this kind of thing is possible. We don’t have any definitive proof that Russia was behind it. But it would be foolish to think that the world’s various militaries don’t have capabilities like this.

And anyway, I wrote about cyberwar back in January 2005.

But it seems that the essay never made it into the blog. So here it is again.


Cyberwar

The first problem with any discussion about cyberwar is definitional. I’ve been reading about cyberwar for years now, and there seem to be as many definitions of the term as there are people who write about the topic. Some people try to limit cyberwar to military actions taken during wartime, while others are so inclusive that they include the script kiddies who deface websites for fun.

I think the restrictive definition is more useful, and would like to define four different terms as follows:

Cyberwar—Warfare in cyberspace. This includes warfare attacks against a nation’s military—forcing critical communications channels to fail, for example—and attacks against the civilian population.

Cyberterrorism—The use of cyberspace to commit terrorist acts. An example might be hacking into a computer system to cause a nuclear power plant to melt down, a dam to open, or two airplanes to collide. In a previous Crypto-Gram essay, I discussed how realistic the cyberterrorism threat is.

Cybercrime—Crime in cyberspace. This includes much of what we’ve already experienced: theft of intellectual property, extortion based on the threat of DDOS attacks, fraud based on identity theft, and so on.

Cybervandalism—The script kiddies who deface websites for fun are technically criminals, but I think of them more as vandals or hooligans. They’re like the kids who spray paint buses: in it more for the thrill than anything else.

At first glance, there’s nothing new about these terms except the “cyber” prefix. War, terrorism, crime, even vandalism are old concepts. That’s correct: the only thing new is the domain; it’s the same old stuff occurring in a new arena. But because the arena of cyberspace is different from other arenas, there are differences worth considering.

One thing that hasn’t changed is that the terms overlap: although the goals are different, many of the tactics used by armies, terrorists, and criminals are the same. Just as all three groups use guns and bombs, all three groups can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime, or even—if it’s done by some fourteen-year-old who doesn’t really understand what he’s doing—cybervandalism. Which it is will depend on the motivations of the attacker and the circumstances surrounding the attack…just as in the real world.

For it to be cyberwar, it must first be war. And in the 21st century, war will inevitably include cyberwar. For just as war moved into the air with the development of kites and balloons and then aircraft, and war moved into space with the development of satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics, and defenses.

The Waging of Cyberwar

There should be no doubt that the smarter and better-funded militaries of the world are planning for cyberwar, both attack and defense. It would be foolish for a military to ignore the threat of a cyberattack and not invest in defensive capabilities, or to disregard the strategic or tactical possibility of launching an offensive cyberattack against an enemy during wartime. And while history has taught us that many militaries are indeed foolish and ignore the march of progress, cyberwar has been discussed too much in military circles to be ignored.

This implies that at least some of our world’s militaries have Internet attack tools that they’re saving in case of wartime. They could be denial-of-service tools. They could be exploits that would allow military intelligence to penetrate military systems. They could be viruses and worms similar to what we’re seeing now, but perhaps country- or network-specific. They could be Trojans that eavesdrop on networks, disrupt network operations, or allow an attacker to penetrate still other networks.

Script kiddies are attackers who run exploit code written by others, but don’t really understand the intricacies of what they’re doing. Conversely, professional attackers spend an enormous amount of time developing exploits: finding vulnerabilities, writing code to exploit them, figuring out how to cover their tracks. The real professionals don’t release their code to the script kiddies; the stuff is much more valuable if it remains secret until it is needed. I believe that militaries have collections of vulnerabilities in common operating systems, generic applications, or even custom military software that their potential enemies are using, and code to exploit those vulnerabilities. I believe that these militaries are keeping these vulnerabilities secret, and that they are saving them in case of wartime or other hostilities. It would be irresponsible for them not to.

The most obvious cyberattack is the disabling of large parts of the Internet, at least for a while. Certainly some militaries have the capability to do this, but in the absence of global war I doubt that they would do so; the Internet is far too useful an asset and far too large a part of the world economy. More interesting is whether they would try to disable national pieces of it. If Country A went to war with Country B, would Country A want to disable Country B’s portion of the Internet, or remove connections between Country B’s Internet and the rest of the world? Depending on the country, a low-tech solution might be the easiest: disable whatever undersea cables they’re using as access. Could Country A’s military turn its own Internet into a domestic-only network if it wanted to?

For a more surgical approach, we can also imagine cyberattacks designed to destroy particular organizations’ networks: for example, the denial-of-service attack against the Al Jazeera website during the recent Iraqi war, allegedly by pro-American hackers but possibly by the government. We can imagine a cyberattack against the computer networks at a nation’s military headquarters, or the computer networks that handle logistical information.

One important thing to remember is that destruction is the last thing a military wants to do with a communications network. A military only wants to shut an enemy’s network down if they aren’t getting useful information from it. The best thing to do is to infiltrate the enemy’s computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, the next best is to perform traffic analysis: analyze who is talking to whom and the characteristics of that communication. Only if a military can’t do any of that do they consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh all of the advantages.
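
That ordering is essentially a priority list. As a minimal sketch (purely illustrative; the capability names are hypothetical labels, not doctrine), the decision logic described above might look like this:

```python
# Illustrative only: the preference ordering described above, expressed
# as a priority list. The capability names are hypothetical labels.

PREFERENCES = [
    ("infiltrate", "infiltrate, spy, and selectively disrupt communications"),
    ("eavesdrop", "passively eavesdrop"),
    ("traffic_analysis", "analyze who is talking to whom"),
    ("shutdown", "shut the network down"),
]

def choose_action(capabilities, denial_outweighs_intel=False):
    """Pick the most valuable feasible option, in the order described above."""
    if denial_outweighs_intel:
        # The rare case: denying the channel is worth more than exploiting it.
        return "shut the network down"
    for capability, action in PREFERENCES:
        if capability in capabilities:
            return action
    return "no viable option"

# Example: a force that can only observe traffic patterns or pull the plug.
print(choose_action({"traffic_analysis", "shutdown"}))  # analyze who is talking to whom
```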

Properties of Cyberwar

Because attackers and defenders use the same network hardware and software, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the “equities issue,” and it can be summarized as follows. When a military discovers a vulnerability in a common product, they can either alert the manufacturer and fix the vulnerability, or not tell anyone. It’s not an easy decision. Fixing the vulnerability gives both the good guys and the bad guys a more secure system. Keeping the vulnerability secret means that the good guys can exploit the vulnerability to attack the bad guys, but it also means that the good guys are vulnerable. As long as everyone uses the same microprocessors, operating systems, network protocols, applications software, etc., the equities issue will always be a consideration when planning cyberwar.

Cyberwar can take on aspects of espionage, and does not necessarily involve open warfare. (In military talk, cyberwar is not necessarily “hot.”) Since much of cyberwar will be about seizing control of a network and eavesdropping on it, there may not be any obvious damage from cyberwar operations. This means that the same tactics might be used in peacetime by national intelligence agencies. There’s considerable risk here. Just as U.S. U-2 flights over the Soviet Union could have been viewed as an act of war, the deliberate penetration of a country’s computer networks might be as well.

Cyberattacks target infrastructure. In this way they are no different than conventional military attacks against other networks: power, transportation, communications, etc. All of these networks are used by both civilians and the military during wartime, and attacks against them inconvenience both groups of people. For example, when the Allies bombed German railroad bridges during World War II, that affected both civilian and military transport. And when the United States bombed Iraqi communications links in both the First and Second Iraqi Wars, that affected both civilian and military communications. Cyberattacks, even attacks targeted as precisely as today’s smart bombs, are likely to have collateral effects.

Cyberattacks can be used to wage information war. Information war is another topic that’s received considerable media attention of late, although it is not new. Dropping leaflets on enemy soldiers to persuade them to surrender is information war. Broadcasting radio programs to enemy troops is information war. As people get more and more of their information over cyberspace, cyberspace will increasingly become a theater for information war. It’s not hard to imagine cyberattacks designed to co-opt the enemy’s communications channels and use them as a vehicle for information war.

Because cyberwar targets information infrastructure, the waging of it can be more damaging to countries that have significant computer-network infrastructure. The idea is that a technologically poor country might decide that a cyberattack that affects the entire world would disproportionately affect its enemies, because rich nations rely on the Internet much more than poor ones. In some ways this is the dark side of the digital divide, and one of the reasons countries like the United States are so worried about cyberdefense.

Cyberwar is asymmetric, and can be a guerrilla attack. Unlike conventional military offensives involving divisions of men and supplies, cyberattacks are carried out by a few trained operatives. In this way, cyberattacks can be part of a guerrilla warfare campaign.

Cyberattacks also make effective surprise attacks. For years we’ve heard dire warnings of an “electronic Pearl Harbor.” These are largely hyperbole today. I discuss this more in that previous Crypto-Gram essay on cyberterrorism, but right now the infrastructure just isn’t sufficiently vulnerable in that way.

Cyberattacks do not necessarily have an obvious origin. Unlike in other forms of warfare, misdirection is more likely to be a feature of a cyberattack: it’s possible for damage to be done without knowing where it’s coming from. This is a significant difference; there’s something terrifying about not knowing your opponent—or knowing it, and then being wrong. Imagine if, after Pearl Harbor, we had not known who attacked us.

Cyberwar is a moving target. A few paragraphs back, I said that today the risks of an electronic Pearl Harbor are unfounded. That’s true; but this, like all other aspects of cyberspace, is continually changing. Technological improvements affect everyone, including cyberattack mechanisms. And the Internet is becoming critical to more of our infrastructure, making cyberattacks more attractive. There will be a time in the future, perhaps not too far into the future, when a surprise cyberattack becomes a realistic threat.

And finally, cyberwar is a multifaceted concept. It’s part of a larger military campaign, and attacks are likely to have both real-world and cyber components. A military might target the enemy’s communications infrastructure through both physical attack—bombings of selected communications facilities and transmission cables—and virtual attack. An information warfare campaign might include dropping of leaflets, usurpation of a television channel, and mass sending of e-mail. And many cyberattacks still have easier non-cyber equivalents: A country wanting to isolate another country’s Internet might find a low-tech solution, involving the acquiescence of backbone companies like Cable & Wireless, easier than a targeted worm or virus. Cyberwar doesn’t replace war; it’s just another arena in which the larger war is fought.

People overplay the risks of cyberwar and cyberterrorism. It’s sexy, and it gets media attention. And at the same time, people underplay the risks of cybercrime. Today crime is big business on the Internet, and it’s getting bigger all the time. But luckily, the defenses are the same. The countermeasures aimed at preventing both cyberwar and cyberterrorist attacks will also defend against cybercrime and cybervandalism. So even if organizations secure their networks for the wrong reasons, they’ll do the right thing.

Here’s my previous essay on cyberterrorism.

Posted on June 4, 2007 at 6:13 AM

Cyber-Attack

Last month Marine General James Cartwright told the House Armed Services Committee that the best cyber defense is a good offense.

As reported in Federal Computer Week, Cartwright said: “History teaches us that a purely defensive posture poses significant risks,” and that if “we apply the principle of warfare to the cyberdomain, as we do to sea, air and land, we realize the defense of the nation is better served by capabilities enabling us to take the fight to our adversaries, when necessary, to deter actions detrimental to our interests.”

The general isn’t alone. In 2003, the entertainment industry tried to get a law passed giving them the right to attack any computer suspected of distributing copyrighted material. And there probably isn’t a sys-admin in the world who doesn’t want to strike back at computers that are blindly and repeatedly attacking their networks.

Of course, the general is correct. But his reasoning illustrates perfectly why peacetime and wartime are different, and why generals don’t make good police chiefs.

A cyber-security policy that condones both active deterrence and retaliation—without any judicial determination of wrongdoing—is attractive, but it’s wrongheaded, not least because it ignores the line between war, where those involved are permitted to determine when counterattack is required, and crime, where only impartial third parties (judges and juries) can impose punishment.

In warfare, the notion of counterattack is extremely powerful. Going after the enemy—its positions, its supply lines, its factories, its infrastructure—is an age-old military tactic. But in peacetime, we call it revenge, and consider it dangerous. Anyone accused of a crime deserves a fair trial. The accused has the right to defend himself, to face his accuser, to an attorney, and to be presumed innocent until proven guilty.

Both vigilante counterattacks and pre-emptive attacks fly in the face of these rights. They punish people before they have been found guilty. It’s the same whether it’s an angry lynch mob stringing up a suspect, the MPAA disabling the computer of someone it believes made an illegal copy of a movie, or a corporate security officer launching a denial-of-service attack against someone he believes is targeting his company over the net.

In all of these cases, the attacker could be wrong. This has been true for lynch mobs, and on the internet it’s even harder to know who’s attacking you. Just because my computer looks like the source of an attack doesn’t mean that it is. And even if it is, it might be a zombie controlled by yet another computer; I might be a victim, too. The goal of a government’s legal system is justice; the goal of a vigilante is expediency.

I understand the frustrations of General Cartwright, just as I do the frustrations of the entertainment industry, and the world’s sys-admins. Justice in cyberspace can be difficult. It can be hard to figure out who is attacking you, and it can take a long time to make them stop. It can be even harder to prove anything in court. The international nature of many attacks exacerbates the problems; more and more cybercriminals are jurisdiction shopping: attacking from countries with ineffective computer crime laws, easily bribable police forces and no extradition treaties.

Revenge is appealingly straightforward, and treating the whole thing as a military problem is easier than working within the legal system.

But that doesn’t make it right. In 1789, the Declaration of the Rights of Man and of the Citizen declared: “No person shall be accused, arrested, or imprisoned except in the cases and according to the forms prescribed by law. Any one soliciting, transmitting, executing, or causing to be executed any arbitrary order shall be punished.”

I’m glad General Cartwright thinks about offensive cyberwar; it’s how generals are supposed to think. I even agree with Richard Clarke’s threat of military-style reaction in the event of a cyber-attack by a foreign country or a terrorist organization. But short of an act of war, we’re far safer with a legal system that respects our rights.

This essay originally appeared in Wired.

Posted on April 5, 2007 at 7:35 AM
