Entries Tagged "infrastructure"

Security in Ten Years

This is a conversation between Marcus Ranum and me. It will appear in Information Security Magazine this month.


Bruce Schneier: Predictions are easy and difficult. Roy Amara of the Institute for the Future once said: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Moore’s Law is easy: In 10 years, computers will be 100 times more powerful. My desktop will fit into my cell phone, we’ll have gigabit wireless connectivity everywhere, and personal networks will connect our computing devices and the remote services we subscribe to. Other aspects of the future are much more difficult to predict. I don’t think anyone can predict what the emergent properties of 100x computing power will bring: new uses for computing, new paradigms of communication. A 100x world will be different, in ways that will be surprising.
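That 100x estimate is easy to sanity-check. A minimal back-of-the-envelope sketch, assuming the common 18-month doubling period (the doubling interval is a rule-of-thumb assumption, not a precise figure):

```python
# Rough Moore's Law projection: assumes computing power doubles
# every 18 months (a common rule-of-thumb reading of Moore's Law).
def growth_factor(years, doubling_months=18):
    """Return the projected growth multiple over the given span."""
    return 2 ** (years * 12 / doubling_months)

print(f"{growth_factor(10):.0f}x over 10 years")  # prints "102x" -- roughly 100x
```

With a 24-month doubling period the same sketch gives about 32x, so "100 times more powerful" corresponds to the aggressive end of the rule of thumb.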

But throughout history and into the future, the one constant is human nature. There hasn’t been a new crime invented in millennia. Fraud, theft, impersonation and counterfeiting are perennial problems that have been around since the beginning of society. During the last 10 years, these crimes have migrated into cyberspace, and over the next 10, they will migrate into whatever computing, communications and commerce platforms we’re using.

The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.

I don’t see anything by 2017 that will fundamentally alter this. Do you?


Marcus Ranum: I think you’re right; at a meta-level, the problems are going to stay the same. What’s shocking and disappointing to me is that our responses to those problems also remain the same, in spite of the obvious fact that they aren’t effective. It’s 2007 and we haven’t seemed to accept that:

  • You can’t turn shovelware into reliable software by patching it a whole lot.
  • You shouldn’t mix production systems with non-production systems.
  • You actually have to know what’s going on in your networks.
  • If you run your computers with an open execution runtime model you’ll always get viruses, spyware and Trojan horses.
  • You can pass laws about locking barn doors after horses have left, but it won’t put the horses back in the barn.
  • Security has to be designed in, as part of a system plan for reliability, rather than bolted on afterward.

The list could go on for several pages, but it would be too depressing. It would be “Marcus’ list of obvious stuff that everybody knows but nobody accepts.”

You missed one important aspect of the problem: By 2017, computers will be even more important to our lives, economies and infrastructure.

If you’re right that crime remains a constant, and I’m right that our responses to computer security remain ineffective, 2017 is going to be a lot less fun than 2007 was.

I’ve been pretty dismissive of the concepts of cyberwar and cyberterror. That dismissal was mostly motivated by my observation that the patchworked and kludgy nature of most computer systems acts as a form of defense in its own right, and that real-world attacks remain more cost-effective and practical for terror purposes.

I’d like to officially modify my position somewhat: I believe it’s increasingly likely that we’ll suffer catastrophic failures in critical infrastructure systems by 2017. It probably won’t be terrorists that do it, though. More likely, we’ll suffer some kind of horrible outage because a critical system was connected to a non-critical system that was connected to the Internet so someone could get to MySpace—and that ancillary system gets a piece of malware. Or it’ll be some incomprehensibly complex software, layered with Band-Aids and patches, that topples over when some “merely curious” hacker pushes the wrong e-button. We’ve got some bad-looking trend lines; all the indicators point toward a system that is more complex, less well-understood and more interdependent. With infrastructure like that, who needs enemies?

You’re worried criminals will continue to penetrate into cyberspace, and I’m worried complexity, poor design and mismanagement will be there to meet them.


Bruce Schneier: I think we’ve already suffered that kind of critical systems failure. The August 2003 blackout that covered much of the northeastern United States and Canada—50 million people—was caused by a software bug.

I don’t disagree that things will continue to get worse. Complexity is the worst enemy of security, and the Internet—and the computers and processes connected to it—is getting more complex all the time. So things are getting worse, even though security technology is improving. One could say those critical insecurities are another emergent property of the 100x world of 2017.

Yes, IT systems will continue to become more critical to our infrastructure—banking, communications, utilities, defense, everything.

By 2017, the interconnections will be so critical that it will probably be cost-effective—and low-risk—for a terrorist organization to attack over the Internet. I also deride talk of cyberterror today, but I don’t think I will in another 10 years.

While the trends of increased complexity and poor management don’t look good, there is another trend that points to more security—but neither of us is going to like it. That trend is IT as a service.

By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.


Marcus Ranum: You’re right about the shift toward services—it’s the ultimate way to lock in customers.

If you can make it difficult for the customer to get his data back after you’ve held it for a while, you can effectively prevent the customer from ever leaving. And of course, customers will be told “trust us, your data is secure,” and they’ll take that for an answer. The back-end systems that will power the future of utility computing are going to be just as full of flaws as our current systems. Utility computing will also completely fail to address the problem of transitive trust unless people start shifting to a more reliable endpoint computing platform.

That’s the problem with where we’re heading: the endpoints are not going to get any better. People are attracted to appliances because they get around the headache of system administration (which, in today’s security environment, equates to “endless patching hell”), but underneath the slick surface of the appliance we’ll have the same insecure nonsense we’ve got with general-purpose desktops. In fact, the development of appliances running general-purpose operating systems really does raise the possibility of a software monoculture. By 2017, do you think system engineering will progress to the point where we won’t see a vendor release a new product and instantly create an installed base of 1 million-plus users with root privileges? I don’t, and that scares me.

So if you’re saying the trend is to continue putting all our eggs in one basket and blithely trusting that basket, I agree.

Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and surf the Web. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.


Bruce Schneier: You’re right about the endpoints not getting any better. I’ve written again and again how measures like two-factor authentication aren’t going to make electronic banking any more secure. The problem is if someone has stuck a Trojan on your computer, it doesn’t matter how many ways you authenticate to the banking server; the Trojan is going to perform illicit transactions after you authenticate.

It’s the same with a lot of our secure protocols. SSL, SSH, PGP and so on all assume the endpoints are secure, and the threat is in the communications system. But we know the real risks are the endpoints.

And a misguided attempt to solve this is going to dominate computing by 2017. I mentioned software-as-a-service, which you point out is really a trick that allows businesses to lock up their customers for the long haul. I pointed to the iPhone, whose draconian rules about who can write software for that platform accomplish much the same thing. We could also point to Microsoft’s Trusted Computing, which is being sold as a security measure but is really another lock-in mechanism designed to keep users from switching to “unauthorized” software or OSes.

I’m reminded of the post-9/11 anti-terrorist hysteria—we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.

Computing is heading in the same direction, although this time it is industry that wants control over its users. They’re going to sell it to us as a security system—they may even have convinced themselves it will improve security—but it’s fundamentally a control system. And in the long run, it’s going to hurt security.

Imagine we’re living in a world of Trustworthy Computing, where no software can run on your Windows box unless Microsoft approves it. That brain drain you talk about won’t be a problem, because security won’t be in the hands of the user. Microsoft will tout this as the end of malware, until some hacker figures out how to get his software approved. That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.

By then, though, we’ll be ready to start building real security. As you pointed out, networks will be so embedded into our critical infrastructure—and there’ll probably have been at least one real disaster by then—that we’ll have no choice. The question is how much we’ll have to dismantle and build over to get it right.


Marcus Ranum: I agree regarding your gloomy view of the future. It’s ironic that the counterculture “hackers” have enabled (by providing an excuse) today’s run-patch-run-patch-reboot software environment and tomorrow’s software Stalinism.

I don’t think we’re going to start building real security. Because real security is not something you build—it’s something you get when you leave out all the other garbage as part of your design process. Purpose-designed and purpose-built software is more expensive to build, but cheaper to maintain. The prevailing wisdom about software return on investment doesn’t factor in patching and patch-related downtime, because if it did, the numbers would stink. Meanwhile, I’ve seen purpose-built Internet systems run for years without patching because they didn’t rely on bloated components. I doubt industry will catch on.

The future will be captive data running on purpose-built back-end systems—and it won’t be a secure future, because turning your data over always decreases your security. Few possess the understanding of complexity and good design principles necessary to build reliable or secure systems. So, effectively, outsourcing—or other forms of making security someone else’s problem—will continue to seem attractive.
That doesn’t look like a very rosy future to me. It’s a shame, too, because getting this stuff correct is important. You’re right that there are going to be disasters in our future.

I think they’re more likely to be accidents where the system crumbles under the weight of its own complexity, rather than hostile action. Will we even be able to figure out what happened, when it happens?

Folks, the captains have illuminated the “Fasten your seat belts” sign. We predict bumpy conditions ahead.

EDITED TO ADD (12/4): Commentary on the point/counterpoint.

Posted on December 3, 2007 at 12:14 PM

Terrorist Insects

Yet another movie-plot threat to worry about:

One of the cheapest and most destructive weapons available to terrorists today is also one of the most widely ignored: insects. These biological warfare agents are easy to sneak across borders, reproduce quickly, spread disease, and devastate crops in an indefatigable march. Our stores of grain could be ravaged by the khapra beetle, cotton and soybean fields decimated by the Egyptian cottonworm, citrus and cotton crops stripped by the false codling moth, and vegetable fields pummeled by the cabbage moth. The costs could easily escalate into the billions of dollars, and the resulting disruption of our food supply – and our sense of well-being – could be devastating. Yet the government focuses on shoe bombs and anthrax while virtually ignoring insect insurgents.

[…]

Seeing the potential, military strategists have been keen to conscript insects during war. In World War II, the French and Germans pursued the mass production and dispersion of Colorado potato beetles to destroy enemy food supplies. The Japanese military, meanwhile, sprayed disease-carrying fleas from low-flying airplanes and dropped bombs packed with flies and a slurry of cholera bacteria. The Japanese killed at least 440,000 Chinese using plague-infected fleas and cholera-coated flies, according to a 2002 international symposium of historians.

During the Cold War, the US military planned a facility to produce 100 million yellow-fever-infected mosquitoes a month, produced an “Entomological Warfare Target Analysis” of vulnerable sites in the Soviet Union and among its allies, and tested the dispersal and biting capacity of (uninfected) mosquitoes by secretly dropping the insects over American cities.

Posted on October 24, 2007 at 6:14 AM

Staged Attack Causes Generator to Self-Destruct

I assume you’ve all seen the news:

A government video shows the potential destruction caused by hackers seizing control of a crucial part of the U.S. electrical grid: an industrial turbine spinning wildly out of control until it becomes a smoking hulk and power shuts down.

The video, produced for the Homeland Security Department and obtained by The Associated Press on Wednesday, was marked “Official Use Only.” It shows commands quietly triggered by simulated hackers having such a violent reaction that the enormous turbine shudders as pieces fly apart and it belches black-and-white smoke.

The video was produced for top U.S. policy makers by the Idaho National Laboratory, which has studied the little-understood risks to the specialized electronic equipment that operates power, water and chemical plants. Vice President Dick Cheney is among those who have watched the video, said one U.S. official, speaking on condition of anonymity because this official was not authorized to publicly discuss such high-level briefings.

More here. And the video is on CNN.com.

I haven’t written much about SCADA security, except to say that I think the risk is overblown today but is getting more serious all the time—and we need to deal with the security before it’s too late. I didn’t know quite what to make of the Idaho National Laboratory video; it seemed like hype, but I couldn’t find any details. (The CNN headline, “Mouse click could plunge city into darkness, experts say,” was definitely hype.)

Then, I received this anonymous e-mail:

I was one of the industry technical folks the DHS consulted in developing the “immediate and required” mitigation strategies for this problem.

They talked to several industry groups (mostly management not tech folks): electric, refining, chemical, and water. They ignored most of what we said but attached our names to the technical parts of the report to make it look credible. We softened or eliminated quite a few sections that may have had relevance 20 years ago, such as war dialing attacks against modems.

The end product is a work order document from DHS which requires such things as background checks on people who have access to modems and logging their visits to sites with datacom equipment or control systems.

By the way—they were unable to hurt the generator you see in the video but did destroy the shaft that drives it and the power unit. They triggered the event from 30 miles away! Then they extrapolated the theory that a malfunctioning generator can destroy not only generators at the power company but the power glitches on the grid would destroy motors many miles away on the electric grid that pump water or gasoline (through pipelines).

They kept everything very secret (all emails and reports encrypted, high security meetings in DC) until they produced a video and press release for CNN. There was huge concern by DHS that this vulnerability would become known to the bad guys—yet now they release it to the world for their own career reasons. Beyond shameful.

Oh, and they did use a contractor for all the heavy lifting that went into writing/revising the required mitigations document. Could not even produce this work product on their own.

By the way, the vulnerability they hypothesize is completely bogus but I won’t say more about the details. Gitmo is still too hot for me this time of year.

Posted on October 2, 2007 at 6:26 AM

Chlorine and Cholera in Iraq

Excellent blog post:

So cholera has now reached Baghdad. That’s not much of a surprise given the utter breakdown of infrastructure. But there’s a reason the cholera is picking up speed now. From the NYT:

“We are suffering from a shortage of chlorine, which is sometimes zero,” Dr. Ameer said in an interview on Al Hurra, an American-financed television network in the Middle East. “Chlorine is essential to disinfect the water.”

So why is there a shortage? Because insurgents have laced a few bombs with chlorine and the U.S. and Iraq have responded by making it darn hard to import the stuff. From the AP:

[A World Health Organization representative in Iraq] also said some 100,000 tons of chlorine were being held up at Iraq’s border with Jordan, apparently because of fears the chemical could be used in explosives. She urged authorities to release it for use in decontaminating water supplies.

I understand why Iraq would put restrictions on dangerous chemicals. And I’m sure nobody intended for the restrictions to be so burdensome that they’d effectively cut off Iraq’s clean water supply. But that’s what looks to have happened. What makes it all the more tragic is that chlorine—for all the hype and worry—is actually a very ineffective booster for bombs. Of the roughly dozen chlorine-laced bombings in Iraq, it appears the chlorine has killed exactly nobody.

In other words, the biggest damage from chlorine bombs—as with so many terrorist attacks—has come from overreaction to it. Fear operates as a “force multiplier” for terrorists, and in this case has helped them cut off Iraq’s clean water. Pretty impressive feat for some bombs that turned out to be close to duds.

I couldn’t have said it better. In this case, the security countermeasure is worse than the threat. Same thing could be said about a lot of the terrorism countermeasures in the U.S.

Another article on the topic.

Posted on September 25, 2007 at 12:23 PM

Drug Testing an Entire Community

You won’t identify individual users, but you can test for the prevalence of drug use in a community by testing the sewage water.

Presumably, if you sample far enough up the pipe, you can test groups of houses or even individual houses.

EDITED TO ADD (7/13): Here’s information on drug numbers in the Rhine. They estimated that, for a population of 38.5 million feeding wastewater into the Rhine down to Düsseldorf, cocaine use amounts to 11 metric tonnes per year. Street value: 1.64 billion euros.
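As a rough sanity check on those figures (the three inputs come from the estimate quoted above; the derived per-capita and per-gram numbers are my own back-of-the-envelope arithmetic):

```python
# Back-of-the-envelope check of the Rhine wastewater estimate:
# 11 metric tonnes of cocaine per year across 38.5 million people.
population = 38.5e6        # people feeding wastewater into the Rhine
tonnes_per_year = 11       # estimated cocaine consumption
street_value_eur = 1.64e9  # quoted street value

grams_per_year = tonnes_per_year * 1e6        # 1 tonne = 1,000,000 g
per_capita_g = grams_per_year / population    # grams per person per year
eur_per_gram = street_value_eur / grams_per_year

print(f"{per_capita_g:.2f} g per person per year")  # prints "0.29 g per person per year"
print(f"~{eur_per_gram:.0f} euros per gram implied")  # prints "~149 euros per gram implied"
```

So the quoted street value works out to about 149 euros per gram, which is the implicit price assumption behind the 1.64 billion figure.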

Posted on August 24, 2007 at 12:35 PM

First Responders

I live in Minneapolis, so the collapse of the Interstate 35W bridge over the Mississippi River earlier this month hit close to home, and was covered in both my local and national news.

Much of the initial coverage consisted of human interest stories, centered on the victims of the disaster and the incredible bravery shown by first responders: the policemen, firefighters, EMTs, divers, National Guard soldiers and even ordinary people, who all risked their lives to save others. (Just two weeks later, three rescue workers died in their almost-certainly futile attempt to save six miners in Utah.)

Perhaps the most amazing aspect of these stories is that there’s nothing particularly amazing about them. No matter what the disaster—hurricane, earthquake, terrorist attack—the nation’s first responders get to the scene soon after.

Which is why it’s such a crime when these people can’t communicate with each other.

Historically, police departments, fire departments and ambulance drivers have all had their own independent communications equipment, so when there’s a disaster that involves them all, they can’t communicate with each other. A 1996 government report said this about the first World Trade Center bombing in 1993: “Rescuing victims of the World Trade Center bombing, who were caught between floors, was hindered when police officers could not communicate with firefighters on the very next floor.”

And we all know that police and firefighters had the same problem on 9/11. You can read details in firefighter Dennis Smith’s book and 9/11 Commission testimony. The 9/11 Commission Report discusses this as well: Chapter 9 talks about the first responders’ communications problems, and commission recommendations for improving emergency-response communications are included in Chapter 12 (pp. 396-397).

In some cities, this communication gap is beginning to close. Homeland Security money has flowed into communities around the country. And while some wasted it on measures like cameras, armed robots and things having nothing to do with terrorism, others spent it on interoperable communications capabilities. Minnesota did that in 2004.

It worked. Hennepin County Sheriff Rich Stanek told the St. Paul Pioneer-Press that lives were saved by disaster planning that had been fine-tuned and improved with lessons learned from 9/11:

“We have a unified command system now where everyone—police, fire, the sheriff’s office, doctors, coroners, local and state and federal officials—operate under one voice,” said Stanek, who is in charge of water recovery efforts at the collapse site.

“We all operate now under the 800 (megahertz radio frequency system), which was the biggest criticism after 9/11,” Stanek said, “and to have 50 to 60 different agencies able to speak to each other was just fantastic.”

Others weren’t so lucky. Louisiana’s first responders had catastrophic communications problems in 2005, after Hurricane Katrina. According to National Defense Magazine:

Police could not talk to firefighters and emergency medical teams. Helicopter and boat rescuers had to wave signs and follow one another to survivors. Sometimes, police and other first responders were out of touch with comrades a few blocks away. National Guard relay runners scurried about with scribbled messages as they did during the Civil War.

A congressional report on preparedness and response to Katrina said much the same thing.

In 2004, the U.S. Conference of Mayors issued a report on communications interoperability. In 25 percent of the 192 cities surveyed, the police couldn’t communicate with the fire department. In 80 percent of cities, municipal authorities couldn’t communicate with the FBI, FEMA and other federal agencies.

The source of the problem is a basic economic one, called the collective action problem. A collective action is one that needs the coordinated effort of several entities in order to succeed. The problem arises when each individual entity’s needs diverge from the collective needs, and there is no mechanism to ensure that those individual needs are sacrificed in favor of the collective need.

Jerry Brito of George Mason University shows how this applies to first-responder communications. Each of the nation’s 50,000 or so emergency-response organizations—local police department, local fire department, etc.—buys its own communications equipment. As you’d expect, they buy equipment as closely suited to their needs as they can. Ensuring interoperability with other organizations’ equipment benefits the common good, but sacrificing their unique needs for that compatibility may not be in the best immediate interest of any of those organizations. There’s no central directive to ensure interoperability, so there ends up being none.

This is an area where the federal government can step in and do good. Too much of the money spent on terrorism defense has been overly specific: effective only if the terrorists attack a particular target or use a particular tactic. Money spent on emergency response is different: It’s effective regardless of what the terrorists plan, and it’s also effective in the wake of natural or infrastructure disasters.

No particular disaster, whether intentional or accidental, is common enough to justify spending a lot of money on preparedness for a specific emergency. But spending money on preparedness in general will pay off again and again.

This essay originally appeared on Wired.com.

EDITED TO ADD (7/13): More research.

Posted on August 23, 2007 at 3:23 AM

Department of Homeland Security Research Solicitation

Interesting document.

Lots of good stuff. The nine research areas:

  • Botnets and Other Malware: Detection and Mitigation
  • Composable and Scalable Secure Systems
  • Cyber Security Metrics
  • Network Data Visualization for Information Assurance
  • Internet Tomography/Topography
  • Routing Security Management Tool
  • Process Control System Security
  • Data Anonymization Tools and Techniques
  • Insider Threat Detection and Mitigation

And this implies they’ve accepted the problem:

Cyber attacks are increasing in frequency and impact. Even though these attacks have not yet had a significant impact on our Nation’s critical infrastructures, they have demonstrated that extensive vulnerabilities exist in information systems and networks, with the potential for serious damage. The effects of a successful cyber attack might include: serious consequences for major economic and industrial sectors, threats to infrastructure elements such as electric power, and disruption of the response and communications capabilities of first responders.

It’s good to see research money going to this stuff.

Posted on June 6, 2007 at 6:07 AM
