Entries Tagged "cyberterrorism"

Terrorist Risk of Cloud Computing

I don’t even know where to begin on this one:

As we have seen in the past with other technologies, while cloud resources will likely start out decentralized, as time goes by and economies of scale take hold, they will start to collect into mega-technology hubs. These hubs could, as the end of this cycle, number in the low single digits and carry most of the commerce and data for a nation like ours. Elsewhere, particularly in Europe, those hubs could handle several nations’ public and private data.

And therein lays the risk.

The Twin Towers, which were destroyed in the 9/11 attack, took down a major portion of the U.S. infrastructure at the same time. The capability and coverage of cloud-based mega-hubs would easily dwarf hundreds of Twin Tower-like operations. Although some redundancy would likely exist—hopefully located in places safe from disasters—should a hub be destroyed, it could likely take down a significant portion of the country it supported at the same time.

[…]

Each hub may represent a target more attractive to terrorists than today’s favored nuclear power plants.

It’s only been eight years, and this author thinks that the 9/11 attacks “took down a major portion of the U.S. infrastructure.” That’s just plain ridiculous. I was there (in the U.S., not in New York). The government, the banks, the power system, commerce everywhere except lower Manhattan, the Internet, the water supply, the food supply, and every other part of the U.S. infrastructure I can think of worked just fine during and after the attacks. The New York Stock Exchange was up and running in a few days. Even the piece of our infrastructure that was the most disrupted—the airplane network—was up and running in a week. I think the author of that piece needs to travel to somewhere on the planet where major portions of the infrastructure actually get disrupted, so he can see what it’s like.

No less ridiculous is the main point of the article, which seems to imply that terrorists will someday decide that disrupting people’s Lands’ End purchases will be more attractive than killing them. Okay, that was a caricature of the article, but not by much. Terrorism is an attack against our minds, using random death and destruction as a tactic to cause terror in everyone. To even suggest that data disruption would cause more terror than nuclear fallout completely misunderstands terrorism and terrorists.

And anyway, any e-commerce, banking, etc. site worth anything is backed up and dual-homed. There are lots of risks to our data networks, but physically blowing up a data center isn’t high on the list.
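
As a rough sketch of what “backed up and dual-homed” means in practice, here is a minimal client-side failover loop in Python; the hostnames are hypothetical. (Real deployments usually handle this at the DNS or routing layer rather than in application code, but the idea is the same: losing one site doesn’t lose the service.)

    import urllib.request
    import urllib.error

    # Hypothetical endpoints for the same service, hosted in two
    # geographically separate data centers.
    ENDPOINTS = [
        "https://us-east.example-bank.test/api/health",
        "https://us-west.example-bank.test/api/health",
    ]

    def fetch_with_failover(urls=ENDPOINTS, timeout=3):
        """Try each replica in turn; return the first successful response."""
        last_error = None
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                last_error = err  # this replica is unreachable; try the next
        raise RuntimeError(f"all replicas failed: {last_error}")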

Posted on July 6, 2009 at 6:12 AM

Dual-Use Technologies and the Equities Issue

On April 27, 2007, Estonia was attacked in cyberspace. Following a diplomatic incident with Russia about the relocation of a Soviet World War II memorial, the networks of many Estonian organizations, including the Estonian parliament, banks, ministries, newspapers and broadcasters, were attacked and—in many cases—shut down. Estonia was quick to blame Russia, which was equally quick to deny any involvement.

It was hyped as the first cyberwar: Russia attacking Estonia in cyberspace. But nearly a year later, evidence that the Russian government was involved in the denial-of-service attacks still hasn’t emerged. Though Russian hackers were indisputably the major instigators of the attack, the only individuals positively identified have been young ethnic Russians living inside Estonia, who were pissed off over the statue incident.

You know you’ve got a problem when you can’t tell a hostile attack by another nation from bored kids with an axe to grind.

Separating cyberwar, cyberterrorism and cybercrime isn’t easy; these days you need a scorecard to tell the difference. It’s not just that it’s hard to trace people in cyberspace, it’s that military and civilian attacks—and defenses—look the same.

The traditional term for technology the military shares with civilians is “dual use.” Unlike hand grenades and tanks and missile targeting systems, dual-use technologies have both military and civilian applications. Dual-use technologies used to be exceptions; even things you’d expect to be dual use, like radar systems and toilets, were designed differently for the military. But today, almost all information technology is dual use. We both use the same operating systems, the same networking protocols, the same applications, and even the same security software.

And attack technologies are the same. The recent spurt of targeted hacks against U.S. military networks, commonly attributed to China, exploit the same vulnerabilities and use the same techniques as criminal attacks against corporate networks. Internet worms make the jump to classified military networks in less than 24 hours, even if those networks are physically separate. The Navy Cyber Defense Operations Command uses the same tools against the same threats as any large corporation.

Because attackers and defenders use the same IT technology, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the “equities issue,” and it can be summarized as follows: When a military discovers a vulnerability in a dual-use technology, they can do one of two things. They can alert the manufacturer and fix the vulnerability, thereby protecting both the good guys and the bad guys. Or they can keep quiet about the vulnerability and not tell anyone, thereby leaving the good guys insecure but also leaving the bad guys insecure.

The equities issue has long been hotly debated inside the NSA. Basically, the NSA has two roles: eavesdrop on their stuff, and protect our stuff. When both sides use the same stuff, the agency has to decide whether to exploit vulnerabilities to eavesdrop on their stuff or close the same vulnerabilities to protect our stuff.
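
To make the trade-off concrete, here is a toy expected-value sketch in Python. The numbers and probabilities are invented purely for illustration; they are not drawn from any real policy or data.

    def equities_tradeoff(p_enemy_finds_it, value_of_exploiting, cost_if_used_against_us):
        """Toy model of the equities decision for a single vulnerability.

        Disclosing gets the hole fixed for everyone: no offensive gain, no
        defensive loss. Stockpiling keeps the offensive gain, but risks the
        same hole being used against us if an adversary finds it independently.
        All inputs are hypothetical illustration values.
        """
        disclose = 0.0  # everyone gets patched; neither side can exploit it
        stockpile = value_of_exploiting - p_enemy_finds_it * cost_if_used_against_us
        return "stockpile" if stockpile > disclose else "disclose"

    # The more widely the vulnerable product is used on "our" side, the larger
    # the potential cost, and the stronger the case for disclosure.
    print(equities_tradeoff(p_enemy_finds_it=0.3,
                            value_of_exploiting=10,
                            cost_if_used_against_us=100))  # -> disclose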

In the 1980s and before, the tendency of the NSA was to keep vulnerabilities to themselves. In the 1990s, the tide shifted, and the NSA was starting to open up and help us all improve our security defense. But after the attacks of 9/11, the NSA shifted back to the attack: vulnerabilities were to be hoarded in secret. Slowly, things in the U.S. are shifting back again.

So now we’re seeing the NSA helping secure Windows Vista and releasing their own version of Linux. The DHS, meanwhile, is funding a project to secure popular open source software packages, and across the Atlantic the UK’s GCHQ is finding bugs in PGPDisk and reporting them back to the company. (NSA is rumored to be doing the same thing with BitLocker.)

I’m in favor of this trend, because my security improves for free. Whenever the NSA finds a security problem and gets the vendor to fix it, our security gets better. It’s a side-benefit of dual-use technologies.

But I want governments to do more. I want them to use their buying power to improve my security. I want them to offer countrywide contracts for software, both security and non-security, that have explicit security requirements. If these contracts are big enough, companies will work to modify their products to meet those requirements. And again, we all benefit from the security improvements.

The only example of this model I know about is a U.S. government-wide procurement competition for full-disk encryption, but this can certainly be done with firewalls, intrusion detection systems, databases, networking hardware, even operating systems.

When it comes to IT technologies, the equities issue should be a no-brainer. The good uses of our common hardware, software, operating systems, network protocols, and everything else vastly outweigh the bad uses. It’s time that the government used its immense knowledge and experience, as well as its buying power, to improve cybersecurity for all of us.

This essay originally appeared on Wired.com.

Posted on May 6, 2008 at 5:17 AM

Hacking Power Networks

The CIA unleashed a big one at a SANS conference:

On Wednesday, in New Orleans, US Central Intelligence Agency senior analyst Tom Donahue told a gathering of 300 US, UK, Swedish, and Dutch government officials and engineers and security managers from electric, water, oil & gas and other critical industry asset owners from all across North America, that “We have information, from multiple regions outside the United States, of cyber intrusions into utilities, followed by extortion demands. We suspect, but cannot confirm, that some of these attackers had the benefit of inside knowledge. We have information that cyber attacks have been used to disrupt power equipment in several regions outside the United States. In at least one case, the disruption caused a power outage affecting multiple cities. We do not know who executed these attacks or why, but all involved intrusions through the Internet.”

According to Mr. Donahue, the CIA actively and thoroughly considered the benefits and risks of making this information public, and came down on the side of disclosure.

I’ll bet. There’s nothing like a vague, unsubstantiated rumor to forestall reasoned discussion. But, of course, everyone is writing about it anyway.

SANS’s Alan Paller is happy to add details:

In the past two years, hackers have in fact successfully penetrated and extorted multiple utility companies that use SCADA systems, says Alan Paller, director of the SANS Institute, an organization that hosts a crisis center for hacked companies. “Hundreds of millions of dollars have been extorted, and possibly more. It’s difficult to know, because they pay to keep it a secret,” Paller says. “This kind of extortion is the biggest untold story of the cybercrime industry.”

And to up the fear factor:

The prospect of cyberattacks crippling multicity regions appears to have prompted the government to make this information public. The issue “went from ‘we should be concerned about this’ to ‘this is something we should fix now,’” said Paller. “That’s why, I think, the government decided to disclose this.”

More rumor:

An attendee of the meeting said that the attack was not well-known through the industry and came as a surprise to many there. Said the person who asked to remain anonymous, “There were apparently a couple of incidents where extortionists cut off power to several cities using some sort of attack on the power grid, and it does not appear to be a physical attack.”

And more hyperbole from someone in the industry:

Over the past year to 18 months, there has been “a huge increase in focused attacks on our national infrastructure networks, . . . and they have been coming from outside the United States,” said Ralph Logan, principal of the Logan Group, a cybersecurity firm.

It is difficult to track the sources of such attacks, because they are usually made by people who have disguised themselves by worming into three or four other computer networks, Logan said. He said he thinks the attacks were launched from computers belonging to foreign governments or militaries, not terrorist groups.

I’m more than a bit skeptical here. To be sure—fake staged attacks aside—there are serious risks to SCADA systems (Ganesh Devarajan gave a talk at DefCon this year about some potential attack vectors), although at this point I think they’re more a future threat than a present danger. But this CIA tidbit tells us nothing about how the attacks happened. Were they against SCADA systems? Were they against general-purpose computers, maybe Windows machines? Insiders may have been involved, so was this a computer security vulnerability at all? We have no idea.

Cyber-extortion is certainly on the rise; we see it at Counterpane. Primarily it’s against fringe industries—online gambling, online gaming, online porn—operating offshore in countries like Bermuda and the Cayman Islands. It is going mainstream, but this is the first I’ve heard of it targeting power companies. Certainly possible, but is that part of the CIA rumor or was it tacked on afterwards?

And here’s a list of power outages. Which ones were hacker-caused? Some details would be nice.

I’d like a little bit more information before I start panicking.

EDITED TO ADD (1/23): Slashdot thread.

Posted on January 22, 2008 at 2:24 PM

Security in Ten Years

This is a conversation between Marcus Ranum and me. It will appear in Information Security Magazine this month.


Bruce Schneier: Predictions are easy and difficult. Roy Amara of the Institute for the Future once said: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Moore’s Law is easy: In 10 years, computers will be 100 times more powerful. My desktop will fit into my cell phone, we’ll have gigabit wireless connectivity everywhere, and personal networks will connect our computing devices and the remote services we subscribe to. Other aspects of the future are much more difficult to predict. I don’t think anyone can predict what the emergent properties of 100x computing power will bring: new uses for computing, new paradigms of communication. A 100x world will be different, in ways that will be surprising.
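
As a back-of-the-envelope check, the 100x figure follows from assuming performance doubles roughly every 18 months, one common reading of Moore’s Law:

    # Ten years of doubling every 18 months gives roughly a 100x improvement.
    doublings = 10 * 12 / 18      # about 6.7 doublings in ten years
    growth = 2 ** doublings
    print(round(growth))          # ~102, i.e. roughly 100x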

But throughout history and into the future, the one constant is human nature. There hasn’t been a new crime invented in millennia. Fraud, theft, impersonation and counterfeiting are perennial problems that have been around since the beginning of society. During the last 10 years, these crimes have migrated into cyberspace, and over the next 10, they will migrate into whatever computing, communications and commerce platforms we’re using.

The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.

I don’t see anything by 2017 that will fundamentally alter this. Do you?


Marcus Ranum: I think you’re right; at a meta-level, the problems are going to stay the same. What’s shocking and disappointing to me is that our responses to those problems also remain the same, in spite of the obvious fact that they aren’t effective. It’s 2007 and we haven’t seemed to accept that:

  • You can’t turn shovelware into reliable software by patching it a whole lot.
  • You shouldn’t mix production systems with non-production systems.
  • You actually have to know what’s going on in your networks.
  • If you run your computers with an open execution runtime model you’ll always get viruses, spyware and Trojan horses.
  • You can pass laws about locking barn doors after horses have left, but it won’t put the horses back in the barn.
  • Security has to be designed in, as part of a system plan for reliability, rather than bolted on afterward.

The list could go on for several pages, but it would be too depressing. It would be “Marcus’ list of obvious stuff that everybody knows but nobody accepts.”

You missed one important aspect of the problem: By 2017, computers will be even more important to our lives, economies and infrastructure.

If you’re right that crime remains a constant, and I’m right that our responses to computer security remain ineffective, 2017 is going to be a lot less fun than 2007 was.

I’ve been pretty dismissive of the concepts of cyberwar and cyberterror. That dismissal was mostly motivated by my observation that the patchworked and kludgy nature of most computer systems acts as a form of defense in its own right, and that real-world attacks remain more cost-effective and practical for terror purposes.

I’d like to officially modify my position somewhat: I believe it’s increasingly likely that we’ll suffer catastrophic failures in critical infrastructure systems by 2017. It probably won’t be terrorists that do it, though. More likely, we’ll suffer some kind of horrible outage because a critical system was connected to a non-critical system that was connected to the Internet so someone could get to MySpace—and that ancillary system gets a piece of malware. Or it’ll be some incomprehensibly complex software, layered with Band-Aids and patches, that topples over when some “merely curious” hacker pushes the wrong e-button. We’ve got some bad-looking trend lines; all the indicators point toward a system that is more complex, less well-understood and more interdependent. With infrastructure like that, who needs enemies?

You’re worried criminals will continue to penetrate into cyberspace, and I’m worried complexity, poor design and mismanagement will be there to meet them.


Bruce Schneier: I think we’ve already suffered that kind of critical systems failure. The August 2003 blackout that covered much of the northeastern United States and Canada—50 million people—was caused by a software bug.

I don’t disagree that things will continue to get worse. Complexity is the worst enemy of security, and the Internet—and the computers and processes connected to it—is getting more complex all the time. So things are getting worse, even though security technology is improving. One could say those critical insecurities are another emergent property of the 100x world of 2017.

Yes, IT systems will continue to become more critical to our infrastructure—banking, communications, utilities, defense, everything.

By 2017, the interconnections will be so critical that it will probably be cost-effective—and low-risk—for a terrorist organization to attack over the Internet. I also deride talk of cyberterror today, but I don’t think I will in another 10 years.

While the trends of increased complexity and poor management don’t look good, there is another trend that points to more security—but neither of us is going to like it. That trend is IT as a service.

By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.


Marcus Ranum: You’re right about the shift toward services—it’s the ultimate way to lock in customers.

If you can make it difficult for the customer to get his data back after you’ve held it for a while, you can effectively prevent the customer from ever leaving. And of course, customers will be told “trust us, your data is secure,” and they’ll take that for an answer. The back-end systems that will power the future of utility computing are going to be just as full of flaws as our current systems. Utility computing will also completely fail to address the problem of transitive trust unless people start shifting to a more reliable endpoint computing platform.

That’s the problem with where we’re heading: the endpoints are not going to get any better. People are attracted to appliances because they get around the headache of system administration (which, in today’s security environment, equates to “endless patching hell”), but underneath the slick surface of the appliance we’ll have the same insecure nonsense we’ve got with general-purpose desktops. In fact, the development of appliances running general-purpose operating systems really does raise the possibility of a software monoculture. By 2017, do you think system engineering will progress to the point where we won’t see a vendor release a new product and instantly create an installed base of 1 million-plus users with root privileges? I don’t, and that scares me.

So if you’re saying the trend is to continue putting all our eggs in one basket and blithely trusting that basket, I agree.

Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and Web surf. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.


Bruce Schneier: You’re right about the endpoints not getting any better. I’ve written again and again about how measures like two-factor authentication aren’t going to make electronic banking any more secure. The problem is that if someone has stuck a Trojan on your computer, it doesn’t matter how many ways you authenticate to the banking server; the Trojan is going to perform illicit transactions after you authenticate.

It’s the same with a lot of our secure protocols. SSL, SSH, PGP and so on all assume the endpoints are secure, and the threat is in the communications system. But we know the real risks are the endpoints.
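
A minimal conceptual sketch of that point, in Python: assume the channel itself is perfect, and watch what a hook running on the endpoint can do to a transaction before it ever reaches the channel. Everything here (function names, account names) is hypothetical and purely illustrative.

    def send_over_secure_channel(payload):
        # Stand-in for SSL/TLS: assume the channel itself is unbreakable.
        print("encrypted and sent:", payload)

    def make_transfer(dest_account, amount, hooks=()):
        """Build a transfer request on the user's (possibly compromised) endpoint."""
        transaction = {"dest": dest_account, "amount": amount}
        for hook in hooks:                    # anything running on the endpoint sees,
            transaction = hook(transaction)   # and can modify, the data before encryption
        send_over_secure_channel(transaction)

    # Conceptual stand-in for malware on the endpoint: the secure channel never
    # notices, because the tampering happens before anything is encrypted.
    def trojan_hook(transaction):
        transaction["dest"] = "attacker-account"
        return transaction

    make_transfer("my-landlord", 500)                       # honest endpoint
    make_transfer("my-landlord", 500, hooks=[trojan_hook])  # compromised endpoint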

And a misguided attempt to solve this is going to dominate computing by 2017. I mentioned software-as-a-service, which you point out is really a trick that allows businesses to lock up their customers for the long haul. I pointed to the iPhone, whose draconian rules about who can write software for that platform accomplish much the same thing. We could also point to Microsoft’s Trusted Computing, which is being sold as a security measure but is really another lock-in mechanism designed to keep users from switching to “unauthorized” software or OSes.

I’m reminded of the post-9/11 anti-terrorist hysteria—we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.

Computing is heading in the same direction, although this time it is industry that wants control over its users. They’re going to sell it to us as a security system—they may even have convinced themselves it will improve security—but it’s fundamentally a control system. And in the long run, it’s going to hurt security.

Imagine we’re living in a world of Trustworthy Computing, where no software can run on your Windows box unless Microsoft approves it. That brain drain you talk about won’t be a problem, because security won’t be in the hands of the user. Microsoft will tout this as the end of malware, until some hacker figures out how to get his software approved. That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.

By then, though, we’ll be ready to start building real security. As you pointed out, networks will be so embedded into our critical infrastructure—and there’ll probably have been at least one real disaster by then—that we’ll have no choice. The question is how much we’ll have to dismantle and build over to get it right.


Marcus Ranum: I agree regarding your gloomy view of the future. It’s ironic that the counterculture “hackers” have enabled (by providing an excuse) today’s run-patch-run-patch-reboot software environment and tomorrow’s software Stalinism.

I don’t think we’re going to start building real security. Because real security is not something you build—it’s something you get when you leave out all the other garbage as part of your design process. Purpose-designed and purpose-built software is more expensive to build, but cheaper to maintain. The prevailing wisdom about software return on investment doesn’t factor in patching and patch-related downtime, because if it did, the numbers would stink. Meanwhile, I’ve seen purpose-built Internet systems run for years without patching because they didn’t rely on bloated components. I doubt industry will catch on.

The future will be captive data running on purpose-built back-end systems—and it won’t be a secure future, because turning your data over always decreases your security. Few possess the understanding of complexity and good design principles necessary to build reliable or secure systems. So, effectively, outsourcing—or other forms of making security someone else’s problem—will continue to seem attractive.

That doesn’t look like a very rosy future to me. It’s a shame, too, because getting this stuff correct is important. You’re right that there are going to be disasters in our future.

I think they’re more likely to be accidents where the system crumbles under the weight of its own complexity, rather than hostile action. Will we even be able to figure out what happened, when it happens?

Folks, the captains have illuminated the “Fasten your seat belts” sign. We predict bumpy conditions ahead.

EDITED TO ADD (12/4): Commentary on the point/counterpoint.

Posted on December 3, 2007 at 12:14 PM

Cybercrime vs Cyberterrorism

I’ve been saying this for a while now:

Since the outbreak of a cybercrime epidemic that has cost the American economy billions of dollars, the federal government has failed to respond with enough resources, attention and determination to combat the cyberthreat, a Mercury News investigation reveals.

“The U.S. government has not devoted the leadership and energy that this issue needs,” said Paul Kurtz, a former administration homeland and cybersecurity adviser. “It’s been neglected.”

Even as the White House asked last week for $154 million toward a new cybersecurity initiative expected to reach billions of dollars over the next several years, security experts complain the administration remains too focused on the risks of online espionage and information warfare, overlooking the international criminals who are stealing a fortune through the Internet.

This is Part III of a good series on cybercrime. Here are Parts I and II.

Posted on November 28, 2007 at 6:56 AM

Cyberwar: Myth or Reality?

The biggest problems in discussing cyberwar are the definitions. The things most often described as cyberwar are really cyberterrorism, and the things most often described as cyberterrorism are more like cybercrime, cybervandalism or cyberhooliganism—or maybe cyberespionage.

At first glance there’s nothing new about these terms except the “cyber” prefix. War, terrorism, crime and vandalism are old concepts. What’s new is the domain; it’s the same old stuff occurring in a new arena. But because cyberspace is different, there are differences worth considering.

Of course, the terms overlap. Although the goals are different, many tactics used by armies, terrorists and criminals are the same. Just as they use guns and bombs, they can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime or even—if done by some 14-year-old who doesn’t really understand what he’s doing—cyberhooliganism. Which it is depends on the attacker’s motivations and the surrounding circumstances—just as in the real world.

For it to be cyberwar, it must first be war. In the 21st century, war will inevitably include cyberwar. Just as war moved into the air with the development of kites, balloons and aircraft, and into space with satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics and defenses.

I have no doubt that smarter and better-funded militaries are planning for cyberwar. They have Internet attack tools: denial-of-service tools; exploits that would allow military intelligence to penetrate military systems; viruses and worms similar to what we see now, but perhaps country- or network-specific; and Trojans that eavesdrop on networks, disrupt operations, or allow an attacker to penetrate other networks. I believe militaries know of vulnerabilities in operating systems, generic or custom military applications, and code to exploit those vulnerabilities. It would be irresponsible for them not to.

The most obvious attack is the disabling of large parts of the Internet, although in the absence of global war, I doubt a military would do so; the Internet is too useful an asset and too large a part of the world economy. More interesting is whether militaries would disable national pieces of it. For a surgical approach, we can imagine a cyberattack against a military headquarters, or networks handling logistical information.

Destruction is the last thing a military wants to accomplish with a communications network. A military only wants to shut down an enemy’s network if it isn’t acquiring useful information. The best thing is to infiltrate enemy computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, perform traffic analysis: analyze the characteristics of communications. Only if a military can’t do any of this would it consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh the advantages of eavesdropping on it.

Cyberwar is certainly not a myth. But you haven’t seen it yet, despite the attacks on Estonia. Cyberwar is warfare in cyberspace. And warfare involves massive death and destruction. When you see it, you’ll know it.

This is the second half of a point/counterpoint with Marcus Ranum; it appeared in the November issue of Information Security Magazine. Marcus’s half is here.

I wrote a longer essay on cyberwar here.

Posted on November 12, 2007 at 7:38 AM

Al Qaeda Hacker Attack to Begin Sunday

At least that’s what they said two weeks ago:

On Sunday, Nov. 11, al Qaeda’s electronic experts will start attacking Western, Jewish, Israeli, Muslim apostate and Shiite Web sites. On Day One, they will test their skills against 15 targeted sites and expand the operation from day to day thereafter until hundreds of thousands of Islamist hackers are in action against untold numbers of anti-Muslim sites.

I think this is nonsense. We’ll see who’s right next week.

Posted on November 9, 2007 at 6:44 AM

Staged Attack Causes Generator to Self-Destruct

I assume you’ve all seen the news:

A government video shows the potential destruction caused by hackers seizing control of a crucial part of the U.S. electrical grid: an industrial turbine spinning wildly out of control until it becomes a smoking hulk and power shuts down.

The video, produced for the Homeland Security Department and obtained by The Associated Press on Wednesday, was marked “Official Use Only.” It shows commands quietly triggered by simulated hackers having such a violent reaction that the enormous turbine shudders as pieces fly apart and it belches black-and-white smoke.

The video was produced for top U.S. policy makers by the Idaho National Laboratory, which has studied the little-understood risks to the specialized electronic equipment that operates power, water and chemical plants. Vice President Dick Cheney is among those who have watched the video, said one U.S. official, speaking on condition of anonymity because this official was not authorized to publicly discuss such high-level briefings.

More here. And the video is on CNN.com.

I haven’t written much about SCADA security, except to say that I think the risk is overblown today but is getting more serious all the time—and we need to deal with the security before it’s too late. I didn’t know quite what to make of the Idaho National Laboratory video; it seemed like hype, but I couldn’t find any details. (The CNN headline, “Mouse click could plunge city into darkness, experts say,” was definitely hype.)

Then, I received this anonymous e-mail:

I was one of the industry technical folks the DHS consulted in developing the “immediate and required” mitigation strategies for this problem.

They talked to several industry groups (mostly management not tech folks): electric, refining, chemical, and water. They ignored most of what we said but attached our names to the technical parts of the report to make it look credible. We softened or eliminated quite a few sections that may have had relevance 20 years ago, such as war dialing attacks against modems.

The end product is a work order document from DHS which requires such things as background checks on people who have access to modems and logging their visits to sites with datacom equipment or control systems.

By the way—they were unable to hurt the generator you see in the video but did destroy the shaft that drives it and the power unit. They triggered the event from 30 miles away! Then they extrapolated the theory that a malfunctioning generator can destroy not only generators at the power company but the power glitches on the grid would destroy motors many miles away on the electric grid that pump water or gasoline (through pipelines).

They kept everything very secret (all emails and reports encrypted, high security meetings in DC) until they produced a video and press release for CNN. There was huge concern by DHS that this vulnerability would become known to the bad guys—yet now they release it to the world for their own career reasons. Beyond shameful.

Oh, and they did use a contractor for all the heavy lifting that went into writing/revising the required mitigations document. Could not even produce this work product on their own.

By the way, the vulnerability they hypothesize is completely bogus but I won’t say more about the details. Gitmo is still too hot for me this time of year.

Posted on October 2, 2007 at 6:26 AM

Department of Homeland Security Research Solicitation

Interesting document.

Lots of good stuff. The nine research areas:

  • Botnets and Other Malware: Detection and Mitigation
  • Composable and Scalable Secure Systems
  • Cyber Security Metrics
  • Network Data Visualization for Information Assurance
  • Internet Tomography/Topography
  • Routing Security Management Tool
  • Process Control System Security
  • Data Anonymization Tools and Techniques
  • Insider Threat Detection and Mitigation

And this implies they’ve accepted the problem:

Cyber attacks are increasing in frequency and impact. Even though these attacks have not yet had a significant impact on our Nation’s critical infrastructures, they have demonstrated that extensive vulnerabilities exist in information systems and networks, with the potential for serious damage. The effects of a successful cyber attack might include: serious consequences for major economic and industrial sectors, threats to infrastructure elements such as electric power, and disruption of the response and communications capabilities of first responders.

It’s good to see research money going to this stuff.

Posted on June 6, 2007 at 6:07 AM

Cyberwar

I haven’t posted anything about the cyberwar between Russia and Estonia because, well, because I didn’t think there was anything new to say. We know that this kind of thing is possible. We don’t have any definitive proof that Russia was behind it. But it would be foolish to think that the world’s various militaries don’t have capabilities like this.

And anyway, I wrote about cyberwar back in January 2005.

But it seems that the essay never made it into the blog. So here it is again.


Cyberwar

The first problem with any discussion about cyberwar is definitional. I’ve been reading about cyberwar for years now, and there seem to be as many definitions of the term as there are people who write about the topic. Some people try to limit cyberwar to military actions taken during wartime, while others are so inclusive that they include the script kiddies who deface websites for fun.

I think the restrictive definition is more useful, and would like to define four different terms as follows:

Cyberwar—Warfare in cyberspace. This includes warfare attacks against a nation’s military—forcing critical communications channels to fail, for example—and attacks against the civilian population.

Cyberterrorism—The use of cyberspace to commit terrorist acts. An example might be hacking into a computer system to cause a nuclear power plant to melt down, a dam to open, or two airplanes to collide. In a previous Crypto-Gram essay, I discussed how realistic the cyberterrorism threat is.

Cybercrime—Crime in cyberspace. This includes much of what we’ve already experienced: theft of intellectual property, extortion based on the threat of DDOS attacks, fraud based on identity theft, and so on.

Cybervandalism—The script kiddies who deface websites for fun are technically criminals, but I think of them more as vandals or hooligans. They’re like the kids who spray paint buses: in it more for the thrill than anything else.

At first glance, there’s nothing new about these terms except the “cyber” prefix. War, terrorism, crime, even vandalism are old concepts. That’s correct: the only thing new is the domain; it’s the same old stuff occurring in a new arena. But because the arena of cyberspace is different from other arenas, there are differences worth considering.

One thing that hasn’t changed is that the terms overlap: although the goals are different, many of the tactics used by armies, terrorists, and criminals are the same. Just as all three groups use guns and bombs, all three groups can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime, or even—if it’s done by some fourteen-year-old who doesn’t really understand what he’s doing—cybervandalism. Which it is will depend on the motivations of the attacker and the circumstances surrounding the attack…just as in the real world.

For it to be cyberwar, it must first be war. And in the 21st century, war will inevitably include cyberwar. For just as war moved into the air with the development of kites and balloons and then aircraft, and war moved into space with the development of satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics, and defenses.

The Waging of Cyberwar

There should be no doubt that the smarter and better-funded militaries of the world are planning for cyberwar, both attack and defense. It would be foolish for a military to ignore the threat of a cyberattack and not invest in defensive capabilities, or to disregard the strategic or tactical possibility of launching an offensive cyberattack against an enemy during wartime. And while history has taught us that many militaries are indeed foolish and ignore the march of progress, cyberwar has been discussed too much in military circles to be ignored.

This implies that at least some of our world’s militaries have Internet attack tools that they’re saving in case of wartime. They could be denial-of-service tools. They could be exploits that would allow military intelligence to penetrate military systems. They could be viruses and worms similar to what we’re seeing now, but perhaps country- or network-specific. They could be Trojans that eavesdrop on networks, disrupt network operations, or allow an attacker to penetrate still other networks.

Script kiddies are attackers who run exploit code written by others, but don’t really understand the intricacies of what they’re doing. Conversely, professional attackers spend an enormous amount of time developing exploits: finding vulnerabilities, writing code to exploit them, figuring out how to cover their tracks. The real professionals don’t release their code to the script kiddies; the stuff is much more valuable if it remains secret until it is needed. I believe that militaries have collections of vulnerabilities in common operating systems, generic applications, or even custom military software that their potential enemies are using, and code to exploit those vulnerabilities. I believe that these militaries are keeping these vulnerabilities secret, and that they are saving them in case of wartime or other hostilities. It would be irresponsible for them not to.

The most obvious cyberattack is the disabling of large parts of the Internet, at least for a while. Certainly some militaries have the capability to do this, but in the absence of global war I doubt that they would do so; the Internet is far too useful an asset and far too large a part of the world economy. More interesting is whether they would try to disable national pieces of it. If Country A went to war with Country B, would Country A want to disable Country B’s portion of the Internet, or remove connections between Country B’s Internet and the rest of the world? Depending on the country, a low-tech solution might be the easiest: disable whatever undersea cables they’re using as access. Could Country A’s military turn its own Internet into a domestic-only network if they wanted?

For a more surgical approach, we can also imagine cyberattacks designed to destroy particular organizations’ networks; for example, the denial-of-service attack against the Al Jazeera website during the recent Iraqi war, allegedly by pro-American hackers but possibly by the government. We can imagine a cyberattack against the computer networks at a nation’s military headquarters, or the computer networks that handle logistical information.

One important thing to remember is that destruction is the last thing a military wants to do with a communications network. A military only wants to shut an enemy’s network down if they aren’t getting useful information from it. The best thing to do is to infiltrate the enemy’s computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, the next best is to perform traffic analysis: analyze who is talking to whom and the characteristics of that communication. Only if a military can’t do any of that do they consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh all of the advantages.
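
A minimal sketch of the traffic-analysis idea in Python, using made-up flow records: no message content is read at all, yet the volume pattern alone suggests who talks to whom and which links matter most.

    from collections import Counter

    # Hypothetical flow records: (source, destination, bytes). No payloads needed.
    flows = [
        ("hq.example", "unit-a.example", 120_000),
        ("hq.example", "unit-b.example", 4_000),
        ("hq.example", "unit-a.example", 95_000),
        ("depot.example", "unit-a.example", 60_000),
    ]

    volume = Counter()
    for src, dst, nbytes in flows:
        volume[(src, dst)] += nbytes

    # Even without reading a single message, the traffic pattern shows who is
    # talking to whom and which links carry the most volume (hq -> unit-a here).
    for (src, dst), nbytes in volume.most_common():
        print(f"{src} -> {dst}: {nbytes} bytes")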

Properties of Cyberwar

Because attackers and defenders use the same network hardware and software, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the “equities issue,” and it can be summarized as follows. When a military discovers a vulnerability in a common product, they can either alert the manufacturer and fix the vulnerability, or not tell anyone. It’s not an easy decision. Fixing the vulnerability gives both the good guys and the bad guys a more secure system. Keeping the vulnerability secret means that the good guys can exploit the vulnerability to attack the bad guys, but it also means that the good guys are vulnerable. As long as everyone uses the same microprocessors, operating systems, network protocols, applications software, etc., the equities issue will always be a consideration when planning cyberwar.

Cyberwar can take on aspects of espionage, and does not necessarily involve open warfare. (In military talk, cyberwar is not necessarily “hot.”) Since much of cyberwar will be about seizing control of a network and eavesdropping on it, there may not be any obvious damage from cyberwar operations. This means that the same tactics might be used in peacetime by national intelligence agencies. There’s considerable risk here. Just as U.S. U-2 flights over the Soviet Union could have been viewed as an act of war, the deliberate penetration of a country’s computer networks might be as well.

Cyberattacks target infrastructure. In this way they are no different than conventional military attacks against other networks: power, transportation, communications, etc. All of these networks are used by both civilians and the military during wartime, and attacks against them inconvenience both groups of people. For example, when the Allies bombed German railroad bridges during World War II, that affected both civilian and military transport. And when the United States bombed Iraqi communications links in both the First and Second Iraqi Wars, that affected both civilian and military communications. Cyberattacks, even attacks targeted as precisely as today’s smart bombs, are likely to have collateral effects.

Cyberattacks can be used to wage information war. Information war is another topic that’s received considerable media attention of late, although it is not new. Dropping leaflets on enemy soldiers to persuade them to surrender is information war. Broadcasting radio programs to enemy troops is information war. As people get more and more of their information over cyberspace, cyberspace will increasingly become a theater for information war. It’s not hard to imagine cyberattacks designed to co-opt the enemy’s communications channels and use them as a vehicle for information war.

Because cyberwar targets information infrastructure, the waging of it can be more damaging to countries that have significant computer-network infrastructure. The idea is that a technologically poor country might decide that a cyberattack that affects the entire world would disproportionately affect its enemies, because rich nations rely on the Internet much more than poor ones. In some ways this is the dark side of the digital divide, and one of the reasons countries like the United States are so worried about cyberdefense.

Cyberwar is asymmetric, and can be a guerrilla attack. Unlike conventional military offensives involving divisions of men and supplies, cyberattacks are carried out by a few trained operatives. In this way, cyberattacks can be part of a guerrilla warfare campaign.

Cyberattacks also make effective surprise attacks. For years we’ve heard dire warnings of an “electronic Pearl Harbor.” These are largely hyperbole today. I discuss this more in that previous Crypto-Gram essay on cyberterrorism, but right now the infrastructure just isn’t sufficiently vulnerable in that way.

Cyberattacks do not necessarily have an obvious origin. Unlike other forms of warfare, misdirection is more likely to be a feature of a cyberattack. It’s possible to have damage being done, but not know where it’s coming from. This is a significant difference; there’s something terrifying about not knowing your opponent—or knowing it, and then being wrong. Imagine if, after Pearl Harbor, we had not known who attacked us.

Cyberwar is a moving target. In the previous paragraph, I said that today the risks of an electronic Pearl Harbor are unfounded. That’s true; but this, like all other aspects of cyberspace, is continually changing. Technological improvements affect everyone, including cyberattack mechanisms. And the Internet is becoming critical to more of our infrastructure, making cyberattacks more attractive. There will be a time in the future, perhaps not too far into the future, when a surprise cyberattack becomes a realistic threat.

And finally, cyberwar is a multifaceted concept. It’s part of a larger military campaign, and attacks are likely to have both real-world and cyber components. A military might target the enemy’s communications infrastructure through both physical attack—bombings of selected communications facilities and transmission cables—and virtual attack. An information warfare campaign might include dropping of leaflets, usurpation of a television channel, and mass sending of e-mail. And many cyberattacks still have easier non-cyber equivalents: A country wanting to isolate another country’s Internet might find a low-tech solution, involving the acquiescence of backbone companies like Cable & Wireless, easier than a targeted worm or virus. Cyberwar doesn’t replace war; it’s just another arena in which the larger war is fought.

People overplay the risks of cyberwar and cyberterrorism. It’s sexy, and it gets media attention. And at the same time, people underplay the risks of cybercrime. Today crime is big business on the Internet, and it’s getting bigger all the time. But luckily, the defenses are the same. The countermeasures aimed at preventing both cyberwar and cyberterrorist attacks will also defend against cybercrime and cybervandalism. So even if organizations secure their networks for the wrong reasons, they’ll do the right thing.

Here’s my previous essay on cyberterrorism.

Posted on June 4, 2007 at 6:13 AM
