Entries Tagged "infrastructure"


First Responders

I live in Minneapolis, so the collapse of the Interstate 35W bridge over the Mississippi River earlier this month hit close to home, and was covered in both my local and national news.

Much of the initial coverage consisted of human interest stories, centered on the victims of the disaster and the incredible bravery shown by first responders: the policemen, firefighters, EMTs, divers, National Guard soldiers and even ordinary people, who all risked their lives to save others. (Just two weeks later, three rescue workers died in their almost-certainly futile attempt to save six miners in Utah.)

Perhaps the most amazing aspect of these stories is that there’s nothing particularly amazing about them. No matter what the disaster—hurricane, earthquake, terrorist attack—the nation’s first responders get to the scene soon after.

Which is why it’s such a crime when these people can’t communicate with each other.

Historically, police departments, fire departments and ambulance drivers have all had their own independent communications equipment, so when there’s a disaster that involves them all, they can’t communicate with each other. A 1996 government report said this about the first World Trade Center bombing in 1993: “Rescuing victims of the World Trade Center bombing, who were caught between floors, was hindered when police officers could not communicate with firefighters on the very next floor.”

And we all know that police and firefighters had the same problem on 9/11. You can read details in firefighter Dennis Smith’s book and 9/11 Commission testimony. The 9/11 Commission Report discusses this as well: Chapter 9 talks about the first responders’ communications problems, and commission recommendations for improving emergency-response communications are included in Chapter 12 (pp. 396-397).

In some cities, this communication gap is beginning to close. Homeland Security money has flowed into communities around the country. And while some wasted it on measures like cameras, armed robots and things having nothing to do with terrorism, others spent it on interoperable communications capabilities. Minnesota did that in 2004.

It worked. Hennepin County Sheriff Rich Stanek told the St. Paul Pioneer-Press that lives were saved by disaster planning that had been fine-tuned and improved with lessons learned from 9/11:

“We have a unified command system now where everyone—police, fire, the sheriff’s office, doctors, coroners, local and state and federal officials—operate under one voice,” said Stanek, who is in charge of water recovery efforts at the collapse site.

“We all operate now under the 800 (megahertz radio frequency system), which was the biggest criticism after 9/11,” Stanek said, “and to have 50 to 60 different agencies able to speak to each other was just fantastic.”

Others weren’t so lucky. Louisiana’s first responders had catastrophic communications problems in 2005, after Hurricane Katrina. According to National Defense Magazine:

Police could not talk to firefighters and emergency medical teams. Helicopter and boat rescuers had to wave signs and follow one another to survivors. Sometimes, police and other first responders were out of touch with comrades a few blocks away. National Guard relay runners scurried about with scribbled messages as they did during the Civil War.

A congressional report on preparedness and response to Katrina said much the same thing.

In 2004, the U.S. Conference of Mayors issued a report on communications interoperability. In 25 percent of the 192 cities surveyed, the police couldn’t communicate with the fire department. In 80 percent of cities, municipal authorities couldn’t communicate with the FBI, FEMA and other federal agencies.

The source of the problem is a basic economic one, called the collective action problem. A collective action is one that needs the coordinated effort of several entities in order to succeed. The problem arises when each individual entity’s needs diverge from the collective needs, and there is no mechanism to ensure that those individual needs are sacrificed in favor of the collective need.

Jerry Brito of George Mason University shows how this applies to first-responder communications. Each of the nation’s 50,000 or so emergency-response organizations—local police department, local fire department, etc.—buys its own communications equipment. As you’d expect, they buy equipment as closely suited to their needs as they can. Ensuring interoperability with other organizations’ equipment benefits the common good, but sacrificing their unique needs for that compatibility may not be in the best immediate interest of any of those organizations. There’s no central directive to ensure interoperability, so there ends up being none.
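To make that incentive structure concrete, here is a minimal sketch, in Python, of the kind of public-goods model economists use for collective action problems. Every number in it is hypothetical and purely illustrative; the only point is that a choice which is individually rational for each agency can leave all of them collectively worse off.

```python
# A deliberately stylized public-goods model of the collective action problem
# described above. All numbers are hypothetical, for illustration only.

N = 50_000            # roughly the number of emergency-response organizations cited above
COST = 5.0            # private cost to one agency of sacrificing tailored gear for interoperability
BENEFIT_EACH = 0.001  # benefit every agency receives per agency that becomes interoperable

def agency_payoff(i_am_interoperable: bool, n_other_interoperable: int) -> float:
    """Payoff to a single agency: the shared benefit minus its own private cost, if any."""
    n_interoperable = n_other_interoperable + (1 if i_am_interoperable else 0)
    cost = COST if i_am_interoperable else 0.0
    return BENEFIT_EACH * n_interoperable - cost

# Individually, opting out always pays more: the private cost outweighs the marginal benefit...
for others in (0, 1_000, N - 1):
    assert agency_payoff(False, others) > agency_payoff(True, others)

# ...but if every agency opts in, each one is far better off than if none do.
print(agency_payoff(True, N - 1))   # 45.0  (all interoperable)
print(agency_payoff(False, 0))      # 0.0   (none interoperable)
```

With no mechanism to coordinate, each agency’s dominant strategy is to opt out, which is exactly the outcome the mayors’ survey documents.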

This is an area where the federal government can step in and do good. Too much of the money spent on terrorism defense has been overly specific: effective only if the terrorists attack a particular target or use a particular tactic. Money spent on emergency response is different: It’s effective regardless of what the terrorists plan, and it’s also effective in the wake of natural or infrastructure disasters.

No particular disaster, whether intentional or accidental, is common enough to justify spending a lot of money on preparedness for a specific emergency. But spending money on preparedness in general will pay off again and again.

This essay originally appeared on Wired.com.

EDITED TO ADD (7/13): More research.

Posted on August 23, 2007 at 3:23 AM

Department of Homeland Security Research Solicitation

Interesting document.

Lots of good stuff. The nine research areas:

  • Botnets and Other Malware: Detection and Mitigation
  • Composable and Scalable Secure Systems
  • Cyber Security Metrics
  • Network Data Visualization for Information Assurance
  • Internet Tomography/Topography
  • Routing Security Management Tool
  • Process Control System Security
  • Data Anonymization Tools and Techniques
  • Insider Threat Detection and Mitigation

And this implies they’ve accepted the problem:

Cyber attacks are increasing in frequency and impact. Even though these attacks have not yet had a significant impact on our Nation’s critical infrastructures, they have demonstrated that extensive vulnerabilities exist in information systems and networks, with the potential for serious damage. The effects of a successful cyber attack might include: serious consequences for major economic and industrial sectors, threats to infrastructure elements such as electric power, and disruption of the response and communications capabilities of first responders.

It’s good to see research money going to this stuff.

Posted on June 6, 2007 at 6:07 AM

Is There Strategic Software?

If you define “critical infrastructure” as “things essential for the functioning of a society and economy,” then software is critical infrastructure. For many companies and individuals, if their computers stop working, they stop working.

It’s a situation that snuck up on us. Everyone knew that the software that flies 747s or targets cruise missiles was critical, but who thought of the airlines’ weight and balance computers, or the operating system running the databases and spreadsheets that determine which cruise missiles get shipped where?

And over the years, common, off-the-shelf, personal- and business-grade software has been used for more and more critical applications. Today we find ourselves in a situation where a well-positioned flaw in Windows, Cisco routers or Apache could seriously affect the economy.

It’s perfectly rational to assume that some programmers—a tiny minority I’m sure—are deliberately adding vulnerabilities and back doors into the code they write. I’m actually kind of amazed that back doors secretly added by the CIA/NSA, MI5, the Chinese, Mossad and others don’t conflict with each other. Even if these groups aren’t infiltrating software companies with back doors, you can be sure they’re scouring products for vulnerabilities they can exploit, if necessary. On the other hand, we’re already living in a world where dozens of new flaws are discovered in common software products weekly, and the economy is humming along. But we’re not talking about this month’s worm from Asia or new phishing software from the Russian mafia—we’re talking national intelligence organizations. “Infowar” is an overhyped term, but the next war will have a cyberspace component, and these organizations wouldn’t be doing their jobs if they weren’t preparing for it.

Marcus is 100 percent correct when he says it’s simply too late to do anything about it. The software industry is international, and no country can start demanding domestic-only software and expect to get anywhere. Nor would that actually solve the problem, which is more about the allegiance of millions of individual programmers than which country they happen to inhabit.

So, what to do? The key here is to remember the real problem: current commercial software practices are not secure enough to reliably detect and delete deliberately inserted malicious code. Once you understand this, you’ll drop the red herring arguments that led to CheckPoint not being able to buy Sourcefire and concentrate on the real solution: defense in depth.

In theory, security software is an after-the-fact kludge because the underlying OS and apps are riddled with vulnerabilities. If your software were written properly, you wouldn’t need a firewall—right?

If we were to get serious about critical infrastructure, we’d recognize it’s all critical and start building security software to protect it. We’d build our security based on the principles of safe failure; we’d assume security would fail and make sure it’s OK when it does. We’d use defense in depth and compartmentalization to minimize the effects of failure. Basically, we’d do everything we’re supposed to do now to secure our networks.
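As a toy illustration of what “defense in depth” and “safe failure” mean in code, consider the sketch below. The layer names and checks are hypothetical and much too crude for real use; the point is the structure: every layer must independently approve a request, and an error anywhere fails closed rather than open.

```python
# Hypothetical sketch of defense in depth with safe failure.
from typing import Callable, Iterable

def network_acl_allows(request: dict) -> bool:
    # Hypothetical layer: only accept traffic from the internal address range.
    return request.get("source_ip", "").startswith("10.")

def user_is_authenticated(request: dict) -> bool:
    # Hypothetical layer: require some authentication token.
    return bool(request.get("auth_token"))

def payload_passes_inspection(request: dict) -> bool:
    # Hypothetical layer: crude content inspection.
    return b"<script>" not in request.get("body", b"")

def allow(request: dict, layers: Iterable[Callable[[dict], bool]]) -> bool:
    """Defense in depth: every layer must approve. Safe failure: a broken layer denies."""
    for check in layers:
        try:
            if not check(request):
                return False
        except Exception:
            return False  # fail closed, not open
    return True

request = {"source_ip": "10.0.0.7", "auth_token": "abc", "body": b"hello"}
print(allow(request, [network_acl_allows, user_is_authenticated, payload_passes_inspection]))  # True
```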

It’d be expensive, probably prohibitively so. Maybe it would be easier to continue to ignore the problem, or at least manage geopolitics so that no national military wants to take us down.

This is the second half of a point/counterpoint I did with Marcus Ranum (here’s his half) for the September 2006 issue of Information Security Magazine.

Posted on September 12, 2006 at 10:38 AM

A Minor Security Lesson from Mumbai Terrorist Bombings

Two quotes:

Authorities had also severely limited the cellular network for fear it could be used to trigger more attacks.

And:

Some of the injured were seen frantically dialing their cell phones. The mobile phone network collapsed adding to the sense of panic.

(Note: The story was changed online, and the second quote was deleted.)

Cell phones are useful to terrorists, but they’re more useful to the rest of us.

Posted on July 13, 2006 at 1:20 PM

Cold War Software Bugs

Here’s a report that the CIA slipped software bugs to the Soviets in the 1980s:

In January 1982, President Ronald Reagan approved a CIA plan to sabotage the economy of the Soviet Union through covert transfers of technology that contained hidden malfunctions, including software that later triggered a huge explosion in a Siberian natural gas pipeline, according to a new memoir by a Reagan White House official.

A CIA article from 1996 also describes this.

EDITED TO ADD (11/14): Marcus Ranum wrote about this.

Posted on November 14, 2005 at 8:04 AM

Melbourne Water-Supply Security Risk

Here’s a scary hacking target: the remote-control system for Melbourne’s water supply. According to The Age:

Remote access to the Brooklyn pumping station and the rest of the infrastructure means the entire network can be controlled from any of seven main Melbourne Water sites, or by key staff such as Mr Woodland from home via a secure internet connection using Citrix’s Metaframe or a standard web browser.

SCADA systems are hard to hack, but SSL connections—at least, that’s what I presume they mean by “secure internet connection”—are much easier.

(Seen on Benambra.)

Posted on March 11, 2005 at 9:17 AM

Keeping Network Outages Secret

There’s considerable confusion between the concept of secrecy and the concept of security, and it is causing a lot of bad security and some surprising political arguments. Secrecy is not the same as security, and most of the time secrecy contributes to a false feeling of security instead of to real security.

In June, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission already requires telephone companies to report large disruptions of telephone service, and wants to extend that requirement to high-speed data lines and wireless networks. But the DHS fears that such information would give cyberterrorists a “virtual road map” to target critical infrastructures.

This sounds like the “full disclosure” debate all over again. Is publishing computer and network vulnerability information a good idea, or does it just help the hackers? It arises again and again, as malware takes advantage of software vulnerabilities after they’ve been made public.

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they’re lost they’re lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there’s no way to recover security. Trying to base security on secrecy is just plain bad design.

Cryptography is based on secrets—keys—but look at all the work that goes into making them effective. Keys are short and easy to transfer. They’re easy to update and change. And the key is the only secret component of a cryptographic system. Cryptographic algorithms make terrible secrets, which is why one of cryptography’s most basic principles is to assume that the algorithm is public.
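A quick sketch of that principle, using Python and the third-party cryptography package (my choice of library, not anything in the original argument): the cipher is completely public, the short random key is the only secret, and rotating that secret is a one-line operation.

```python
# Kerckhoffs's principle in miniature: the cipher (AES, in Fernet's construction)
# is public; the short random key is the only secret component.
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                     # the only secret: short, easy to store and transfer
token = Fernet(key).encrypt(b"network outage report")

# Anyone can read the algorithm's source; without the key, the token is useless.
assert Fernet(key).decrypt(token) == b"network outage report"

# Updating the secret is trivial: generate a new key and re-encrypt.
new_key = Fernet.generate_key()
token = Fernet(new_key).encrypt(b"network outage report")
```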

That’s the other fallacy with the secrecy argument: the assumption that secrecy works. Do we really think that the physical weak points of networks are such a mystery to the bad guys? Do we really think that the hacker underground never discovers vulnerabilities?

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn’t bother fixing them, believing in the security of secrecy. And because customers didn’t know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping software bugs secret, we’ll have vulnerabilities known to a few in the security community and to much of the hacker underground.

Secrecy prevents people from assessing their own risks.

Public reporting of network outages forces telephone companies to improve their service. It allows consumers to compare the reliability of different companies, and to choose one that best serves their needs. Without public disclosure, companies could hide their reliability performance from the public.

Just look at who supports secrecy. Software vendors such as Microsoft want very much to keep vulnerability information secret. The Department of Homeland Security’s recommendations were loudly echoed by the phone companies. It’s the interests of these companies that are served by secrecy, not the interests of consumers, citizens, or society.

In the post-9/11 world, we’re seeing this clash of secrecy versus openness everywhere. The U.S. government is trying to keep details of many anti-terrorism countermeasures—and even routine government operations—secret. Information about the infrastructure of plants and government buildings is secret. Profiling information used to flag certain airline passengers is secret. The standards for the Department of Homeland Security’s color-coded terrorism threat levels are secret. Even information about government operations without any terrorism connections is being kept secret.

This keeps terrorists in the dark, especially “dumb” terrorists who might not be able to figure out these vulnerabilities on their own. But at the same time, the citizenry—to whom the government is ultimately accountable—is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can’t improve because there’s no public debate or public education.

Recent studies have shown that most water, power, gas, telephone, data, transportation, and distribution systems are scale-free networks. This means they always have highly connected hubs. Attackers know this intuitively and go after the hubs. Defenders are beginning to learn how to harden the hubs and provide redundancy among them. Trying to keep it a secret that a network has hubs is futile. Better to identify and protect them.
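For readers who want to see what “scale-free” implies, here is a small sketch using the networkx library and a synthetic Barabási-Albert graph, a standard scale-free model. This is generated data, not real infrastructure topology; the point is only that a handful of hubs end up carrying a disproportionate share of the connections, which is why they are the natural things to harden.

```python
# Sketch of the scale-free point above: in a synthetic Barabasi-Albert network,
# a few hubs account for an outsized share of all connections.
# Assumes the networkx package; the graph is synthetic, not real infrastructure data.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

degrees = sorted((deg for _, deg in G.degree()), reverse=True)
top_hubs = degrees[:10]
total_edges = G.number_of_edges()

# The ten best-connected nodes out of a thousand account for a sizable
# fraction of all edge endpoints -- these are the hubs worth protecting.
print(top_hubs)
print(sum(top_hubs) / (2 * total_edges))
```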

We’re all safer when we have the information we need to exert market pressure on vendors to improve security. We would all be less secure if software vendors didn’t make their security vulnerabilities public, and if telephone companies didn’t have to report network outages. And when government operates without accountability, that serves the security interests of the government, not of the people.

Security Focus article
CNN article

Another version of this essay appeared in the October issue of Communications of the ACM.

Posted on October 1, 2004 at 9:36 PM
