Entries Tagged "mitigation"

Security Theater in the Theater

This is a bit surreal:

Additional steps are needed to prepare Broadway theaters in New York City for a potential WMD attack or other crisis, a New York state legislature subcommittee said yesterday.

[…]

Broadway district personnel did not know “what to do in case of an emergency as well as the unique problems that a theater workplace poses in the event of a fire or evacuation,” according to the report, which drew on interviews with theater employees following the attempted bombing.

“Taking the May 1, 2010, car bomb as an example, theater employees expressed how unprepared they were in dealing with the situation,” the report reads. “They were given misinformation, and they were directed to exit through portals they did not even know existed, indicating their lack of knowledge of the building they work in and exit routes. In the event of another attack, the same issues would arise.”

Posted on January 26, 2011 at 1:42 PM

Internet Quarantines

Last month, Scott Charney of Microsoft proposed that infected computers be quarantined from the Internet. Using a public health model for Internet security, the idea is that infected computers spreading worms and viruses are a risk to the greater community and thus need to be isolated. Internet service providers would administer the quarantine, and would also clean up and update users’ computers so they could rejoin the greater Internet.

This isn’t a new idea. Already there are products that test computers trying to join private networks, and only allow them access if their security patches are up-to-date and their antivirus software certifies them as clean. Computers denied access are sometimes shunned to a limited-capability sub-network where all they can do is download and install the updates they need to regain access. This sort of system has been used with great success at universities and end-user-device-friendly corporate networks. They’re happy to let you log in with any device you want—this is the consumerization trend in action—as long as your security is up to snuff.
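The admission decision these products make can be sketched in a few lines. This is a minimal illustration only; the health checks, function names, and network-segment labels are hypothetical, not any particular product's interface:

```python
# Sketch of a NAC-style admission decision: a device joining the network
# is checked for patch level and antivirus status, and is either granted
# full access or shunted to a limited-capability remediation subnet where
# all it can do is download the updates it needs.

from dataclasses import dataclass

@dataclass
class DeviceReport:
    patches_current: bool   # OS security patches up to date?
    av_clean: bool          # antivirus scan reports no infection?

FULL_ACCESS = "production-network"      # hypothetical segment names
REMEDIATION = "update-only-subnet"

def admit(report: DeviceReport) -> str:
    """Return the network segment a joining device should be placed on."""
    if report.patches_current and report.av_clean:
        return FULL_ACCESS
    return REMEDIATION   # just enough access to fetch updates and retry

print(admit(DeviceReport(patches_current=True, av_clean=True)))   # production-network
print(admit(DeviceReport(patches_current=False, av_clean=True)))  # update-only-subnet
```

The point of the sketch is that the policy itself is trivial; everything hard about scaling it up lies in who runs the check and what happens to those who fail it.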

Charney’s idea is to do that on a larger scale. To implement it we have to deal with two problems. There’s the technical problem—making the quarantine work in the face of malware designed to evade it, and the social problem—ensuring that people don’t have their computers unduly quarantined. Understanding the problems requires us to understand quarantines in general.

Quarantines have been used to contain disease for millennia. In general, several things need to be true for them to work. One, the thing being quarantined needs to be easily recognized. It’s easier to quarantine a disease if it has obvious physical characteristics: fever, boils, etc. If there aren’t any obvious physical effects, or if those effects don’t show up while the disease is contagious, a quarantine is much less effective.

Similarly, it’s easier to quarantine an infected computer if that infection is detectable. As Charney points out, his plan is only effective against worms and viruses that our security products recognize, not against those that are new and still undetectable.
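That limitation is inherent to signature-based detection, which can only flag what it already knows about. A toy illustration (the hash database and the payloads are invented for the example):

```python
# Toy signature scanner: a file is flagged only if its hash appears in a
# database of known malware. A brand-new worm, by definition, has no
# entry yet, so it passes unnoticed until signatures are updated.

import hashlib

KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"old-worm-payload").hexdigest(),   # a known sample
}

def is_flagged(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_MALWARE_HASHES

print(is_flagged(b"old-worm-payload"))  # True: already in the database
print(is_flagged(b"new-worm-payload"))  # False: novel, so invisible
```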

Two, the separation has to be effective. The leper colonies on Molokai and Spinalonga both worked because it was hard for the quarantined to leave. Quarantined medieval cities worked less well because it was too easy to leave, or—when the diseases spread via rats or mosquitoes—because the quarantine was targeted at the wrong thing.

Computer quarantines have been generally effective because the users whose computers are being quarantined aren’t sophisticated enough to break out of the quarantine, and find it easier to update their software and rejoin the network legitimately.

Three, only a small fraction of the population can need to be quarantined. Quarantining works only if a minority of the population is affected, whether by physical diseases or computer diseases. If most people are infected, quarantining won’t slow overall infection rates much. Similarly, a quarantine that tries to isolate most of the Internet simply won’t work.

Four, the benefits must outweigh the costs. Medical quarantines are expensive to maintain, especially if people are being quarantined against their will. Determining who to quarantine is either expensive (if it’s done correctly) or arbitrary, authoritarian and abuse-prone (if it’s done badly). It could even be both. The value to society must be worth it.

It’s the last point that Charney and others emphasize. If Internet worms were only damaging to the infected, we wouldn’t need a societally imposed quarantine like this. But they’re damaging to everyone else on the Internet, spreading and infecting others. At the same time, we can implement systems that quarantine cheaply. The value to society far outweighs the cost.

That makes sense, but once you move quarantines from isolated private networks to the general Internet, the nature of the threat changes. Imagine an intelligent and malicious infectious disease: that’s what malware is. The current crop of malware ignores quarantines because they’re too few and far between to affect its spread.

If we tried to implement Internet-wide—or even countrywide—quarantining, worm-writers would start building in ways to break the quarantine. So instead of nontechnical users not bothering to break quarantines because they don’t know how, we’d have technically sophisticated virus-writers trying to break quarantines. Implementing the quarantine at the ISP level would help, and if the ISP monitored computer behavior, not just specific virus signatures, it would be somewhat effective even in the face of evasion tactics. But evasion would be possible, and we’d be stuck in another computer security arms race. This isn’t a reason to dismiss the proposal outright, but it is something we need to think about when weighing its potential effectiveness.
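A behavioral check of the sort described might, for instance, flag a host whose outbound connections suddenly look like worm scanning rather than normal use. The sketch below is simplified and the threshold purely illustrative; a real ISP system would use far richer features:

```python
# Simplified behavioral monitoring: instead of matching virus signatures,
# flag any host whose rate of connections to distinct destinations looks
# like worm scanning. The threshold is arbitrary and for illustration.

from collections import defaultdict

SCAN_THRESHOLD = 100  # distinct destinations per minute (illustrative)

def suspicious_hosts(flows):
    """flows: iterable of (source_ip, dest_ip) pairs seen in one minute."""
    dests = defaultdict(set)
    for src, dst in flows:
        dests[src].add(dst)
    return {src for src, d in dests.items() if len(d) > SCAN_THRESHOLD}

# A normal host talks to a handful of servers; a scanning host sprays
# connection attempts across the address space.
normal = [("10.0.0.5", f"site{i}.example") for i in range(8)]
worm = [("10.0.0.9", f"192.0.2.{i}") for i in range(1, 201)]
print(suspicious_hosts(normal + worm))  # {'10.0.0.9'}
```

Because it keys on behavior rather than known signatures, a check like this can catch novel malware, but it also invites exactly the evasion arms race described above: a worm can simply scan below the threshold.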

Additionally, there’s the problem of who gets to decide which computers to quarantine. It’s easy on a corporate or university network: the owners of the network get to decide. But the Internet doesn’t have that sort of hierarchical control, and denying people access without due process is fraught with danger. What are the appeal mechanisms? The audit mechanisms? Charney proposes that ISPs administer the quarantines, but there would have to be some central authority that decided what degree of infection would be sufficient to impose the quarantine. Although this is being presented as a wholly technical solution, it’s these social and political ramifications that are the most difficult to determine and the easiest to abuse.

Once we implement a mechanism for quarantining infected computers, we create the possibility of quarantining them in all sorts of other circumstances. Should we quarantine computers that don’t have their patches up to date, even if they’re uninfected? Might there be a legitimate reason for someone to avoid patching his computer? Should the government be able to quarantine someone for something he said in a chat room, or a series of search queries he made? I’m sure we don’t think it should, but what if that chat and those queries revolved around terrorism? Where’s the line?

Microsoft would certainly like to quarantine any computers it feels are not running legal copies of its operating system or application software. The music and movie industries will want to quarantine anyone they decide is downloading or sharing pirated media files—they’re already pushing similar proposals.

A security measure designed to keep malicious worms from spreading over the Internet can quickly become an enforcement tool for corporate business models. Charney addresses the need to limit this kind of function creep, but I don’t think it will be easy to prevent; it’s an enforcement mechanism just begging to be used.

Once you start thinking about implementation of quarantine, all sorts of other social issues emerge. What do we do about people who need the Internet? Maybe VoIP is their only phone service. Maybe they have an Internet-enabled medical device. Maybe their business requires the Internet to run. The effects of quarantining these people would be considerable, even potentially life-threatening. Again, where’s the line?

What do we do if people feel they are quarantined unjustly? Or if they are using nonstandard software unfamiliar to the ISP? Is there an appeals process? Who administers it? Surely not a for-profit company.

Public health is the right way to look at this problem. This conversation—between the rights of the individual and the rights of society—is a valid one to have, and this solution is a good possibility to consider.

There are some applicable parallels. We require drivers to be licensed and cars to be inspected not because we worry about the danger of unlicensed drivers and uninspected cars to themselves, but because we worry about their danger to other drivers and pedestrians. The small number of parents who don’t vaccinate their kids have already caused minor outbreaks of whooping cough and measles among the greater population. We all suffer when someone on the Internet allows his computer to get infected. How we balance that with individuals’ rights to maintain their own computers as they see fit is a discussion we need to start having.

This essay previously appeared on Forbes.com.

EDITED TO ADD (11/15): From an anonymous reader:

In your article you mention that for quarantines to work, you must be able to detect infected individuals. The infection must also be detectable quickly, before the individual has the opportunity to infect many others. Quarantining an individual after they’ve infected most of the people they regularly interact with is of little value. You must quarantine individuals when they have infected, on average, fewer than one other person.

Just as worm-writers would respond to the technical mechanisms to implement a quarantine by investing in ways to get around them, they would also likely invest in outpacing the quarantine. If a worm is designed to spread fast, even the best quarantine mechanisms may be unable to keep up.

Another concern with quarantining mechanisms is the damage that attackers could do if they were able to compromise the mechanism itself. This is of especially great concern if the mechanism were to include code within end-users’ TCBs to scan computers, essentially a built-in rootkit. Without a scanner in the end-user’s TCB, it’s hard to see how you could reliably detect infections.
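The reader's first point, that quarantine must act before each infected host has infected one other on average, can be made concrete with a toy spread model. All the numbers below are invented for illustration:

```python
# Toy model of worm spread under quarantine. Each infected host infects
# `rate` new hosts per hour and is quarantined after `delay` hours, so the
# effective reproduction number is R = rate * delay. The outbreak dies out
# when R < 1 and explodes when R > 1; a fast-spreading worm simply outpaces
# any fixed quarantine delay. All parameters are illustrative.

def infections_after(generations: int, rate: float, delay: float) -> float:
    """Expected size of the infected generation after `generations` rounds."""
    R = rate * delay
    total = 1.0  # patient zero
    for _ in range(generations):
        total *= R
    return total

slow_worm = infections_after(10, rate=0.5, delay=1.0)   # R = 0.5: dies out
fast_worm = infections_after(10, rate=10.0, delay=1.0)  # R = 10: explodes
print(f"{slow_worm:.4f}")  # ~0.0010
print(f"{fast_worm:.0f}")  # 10000000000
```

The asymmetry is the point: halving the quarantine delay only halves R, while a worm author can raise the infection rate by orders of magnitude.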

Posted on November 15, 2010 at 4:55 AM

Doomsday Shelters

Selling fear:

The Vivos network, which offers partial ownerships similar to a timeshare in underground shelter communities, is one of several ventures touting escape from a surface-level calamity.

Radius Engineering in Terrell, Texas, has built underground shelters for more than three decades, and business has never been better, says Walton McCarthy, company president.

The company sells fiberglass shelters that can accommodate 10 to 2,000 adults to live underground for one to five years with power, food, water and filtered air, McCarthy says.

The shelters range from $400,000 to a $41 million facility Radius built and installed underground that is suitable for 750 people, McCarthy says. He declined to disclose the client or location of the shelter.

“We’ve doubled sales every year for five years,” he says.

Other shelter manufacturers include Hardened Structures of Colorado and Utah Shelter Systems, which also report increased sales.

[…]

The Vivos website features a clock counting down to Dec. 21, 2012, the date when the ancient Mayan “Long Count” calendar marks the end of a 5,126-year era, at which time some people expect an unknown apocalypse.

Vicino, whose terravivos.com website lists 11 global catastrophes ranging from nuclear war to solar flares to comets, bristles at the notion he’s profiting from people’s fears.

“You don’t think of the person who sells you a fire extinguisher as taking advantage of your fear,” he says. “The fact that you may never use that fire extinguisher doesn’t make it a waste or bad.

“We’re not creating the fear; the fear is already out there. We’re creating a solution.”

Yip Harburg commented on the subject about half a century ago, and the Chad Mitchell Trio recited it. It’s at about 0:40 on the recording, though the rest is worth listening to as well.

    Hammacher Schlemmer is selling a shelter,
          worthy of Kubla Khan’s Xanadu dome;
    Plushy and swanky, with posh hanky panky
          that affluent Yankees can really call home.

    Hammacher Schlemmer is selling a shelter,
          a push-button palace, fluorescent repose;
    Electric devices for facing a crisis
          with frozen fruit ices and cinema shows.

    Hammacher Schlemmer is selling a shelter
          all chromium kitchens and rubber-tiled dorms;
    With waterproof portals to echo the chortles
          of weatherproof mortals in hydrogen storms.

    What a great come-to-glory emporium!
    To enjoy a deluxe moratorium,
    Where nuclear heat can beguile the elite
          in a creme-de-la-creme crematorium.

EDITED TO ADD (8/9): Slate on this as a bogus trend.

Posted on July 30, 2010 at 12:47 PM

Preventing Terrorist Attacks in Crowded Areas

On the New York Times Room for Debate Blog, I—along with several other people—was asked about how to prevent terrorist attacks in crowded areas. This is my response.

In the wake of Saturday’s failed Times Square car bombing, it’s natural to ask how we can prevent this sort of thing from happening again. The answer is to stop focusing on the specifics of what actually happened, and instead think about the threat in general.

Think about the security measures commonly proposed. Cameras won’t help. They don’t prevent terrorist attacks, and their forensic value after the fact is minimal. In the Times Square case, surely there’s enough other evidence—the car’s identification number, the auto body shop the stolen license plates came from, the name of the fertilizer store—to identify the guy. We will almost certainly not need the camera footage. The images released so far, like the images in so many other terrorist attacks, may make for exciting television, but their value to law enforcement officers is limited.

Checkpoints won’t help, either. You can’t check everybody and everything. There are too many people to check, and too many train stations, buses, theaters, department stores and other places where people congregate. Patrolling guards, bomb-sniffing dogs, chemical and biological weapons detectors: they all suffer from similar problems. In general, focusing on specific tactics or defending specific targets doesn’t make sense. They’re inflexible; possibly effective if you guess the plot correctly, but completely ineffective if you don’t. At best, the countermeasures just force the terrorists to make minor changes in their tactics and targets.

It’s much smarter to spend our limited counterterrorism resources on measures that don’t focus on the specific. It’s more efficient to spend money on investigating and stopping terrorist attacks before they happen, and responding effectively to any that occur. This approach works because it’s flexible and adaptive; it’s effective regardless of what the bad guys are planning for next time.

After the Christmas Day airplane bombing attempt, I was asked how we can better protect our airplanes from terrorist attacks. I pointed out that the event was a security success—the plane landed safely, nobody was hurt, a terrorist was in custody—and that the next attack would probably have nothing to do with explosive underwear. After the Moscow subway bombing, I wrote that overly specific security countermeasures like subway cameras and sensors were a waste of money.

Now we have a failed car bombing in Times Square. We can’t protect against the next imagined movie-plot threat. Isn’t it time to recognize that the bad guys are flexible and adaptive, and that we need the same quality in our countermeasures?

I know, nothing I haven’t said many times before.

Steven Simon likes cameras, although his arguments are more movie-plot than real. Michael Black, Noah Shachtman, Michael Tarr, and Jeffrey Rosen all write about the limitations of security cameras. Paul Ekman wants more people. And Richard Clarke has a nice essay about how we shouldn’t panic.

Posted on May 4, 2010 at 1:31 PM

Post-Underwear-Bomber Airport Security

In the headlong rush to “fix” security after the Underwear Bomber’s unsuccessful Christmas Day attack, there’s been far too little discussion about what worked and what didn’t, and what will and will not make us safer in the future.

The security checkpoints worked. Because we screen for obvious bombs, Umar Farouk Abdulmutallab—or, more precisely, whoever built the bomb—had to construct a far less reliable bomb than he would have otherwise. Instead of using a timer or a plunger or a reliable detonation mechanism, as would any commercial user of PETN, he had to resort to an ad hoc and much more inefficient homebrew mechanism: one involving a syringe and 20 minutes in the lavatory and we don’t know exactly what else. And it didn’t work.

Yes, the Amsterdam screeners allowed Abdulmutallab onto the plane with PETN sewn into his underwear, but that’s not a failure, either. There is no security checkpoint, run by any government anywhere in the world, designed to catch this. It isn’t a new threat; it’s more than a decade old. Nor is it unexpected; anyone who says otherwise simply isn’t paying attention. But PETN is hard to explode, as we saw on Christmas Day.

Additionally, the passengers on the airplane worked. For years, I’ve said that exactly two things have made us safer since 9/11: reinforcing the cockpit door and convincing passengers that they need to fight back. It was the second of these that, on Christmas Day, quickly subdued Abdulmutallab after he set his pants on fire.

To the extent security failed, it failed before Abdulmutallab even got to the airport. Why was he issued an American visa? Why didn’t anyone follow up on his father’s tip? While I’m sure there are things to be improved and fixed, remember that everything is obvious in hindsight. After the fact, it’s easy to point to the bits of evidence and claim that someone should have “connected the dots.” But before the fact, when there are millions of dots—some important but the vast majority unimportant—uncovering plots is a lot harder.

Despite this, the proposed fixes focus on the details of the plot rather than the broad threat. We’re going to install full-body scanners, even though there are lots of ways to hide PETN—stuff it in a body cavity, spread it thinly on a garment—from the machines. We’re going to profile people traveling from 14 countries, even though it’s easy for a terrorist to travel from a different country. Seating requirements for the last hour of flight were the most ridiculous example.

The problem with all these measures is that they’re only effective if we guess the plot correctly. Defending against a particular tactic or target makes sense if tactics and targets are few. But there are hundreds of tactics and millions of targets, so all these measures will do is force the terrorists to make a minor modification to their plot.

It’s magical thinking: If we defend against what the terrorists did last time, we’ll somehow defend against what they do next time. Of course this doesn’t work. We take away guns and bombs, so the terrorists use box cutters. We take away box cutters and corkscrews, and the terrorists hide explosives in their shoes. We screen shoes, they use liquids. We limit liquids, they sew PETN into their underwear. We implement full-body scanners, and they’re going to do something else. This is a stupid game; we should stop playing it.

But we can’t help it. As a species, we’re hardwired to fear specific stories—terrorists with PETN underwear, terrorists on subways, terrorists with crop dusters—and we want to feel secure against those stories. So we implement security theater against the stories, while ignoring the broad threats.

What we need is security that’s effective even if we can’t guess the next plot: intelligence, investigation, and emergency response. Our foiling of the liquid bombers demonstrates this. They were arrested in London, before they got to the airport. It didn’t matter if they were using liquids—which they chose precisely because we weren’t screening for them—or solids or powders. It didn’t matter if they were targeting airplanes or shopping malls or crowded movie theaters. They were arrested, and the plot was foiled. That’s effective security.

Finally, we need to be indomitable. The real security failure on Christmas Day was in our reaction. We’re reacting out of fear, wasting money on the story rather than securing ourselves against the threat. Abdulmutallab succeeded in causing terror even though his attack failed.

If we refuse to be terrorized, if we refuse to implement security theater and remember that we can never completely eliminate the risk of terrorism, then the terrorists fail even if their attacks succeed.

This essay previously appeared on Sphere, the AOL.com news site.

EDITED TO ADD (1/8): Similar sentiment.

Posted on January 7, 2010 at 1:18 PM

Breaching the Secure Area in Airports

An unidentified man breached airport security at Newark Airport on Sunday, walking into the secured area through the exit, prompting the evacuation of a terminal and flight delays that continued into the next day. This isn’t common, but it happens regularly. The result is always the same, and it’s not obvious that fixing the problem is the right solution.

This kind of security breach is inevitable, simply because human guards are not perfect. Sometimes it’s someone going in through the out door, unnoticed by a bored guard. Sometimes it’s someone running through the checkpoint and getting lost in the crowd. Sometimes it’s an open door that should be locked. Amazing as it seems to frequent fliers, the perpetrator often doesn’t even know he did anything wrong.

Basically, whenever there is—or could be—an unscreened person lost within the secure area of an airport, there are two things the TSA can do. They can say “this isn’t a big deal,” and ignore it. Or they can evacuate everyone inside the secure area, search every nook and cranny—inside the large boxes of napkins at the fast food restaurant, above the false ceilings in the bathrooms, everywhere—looking for anyone hiding or anything anyone hid, and then rescreen everybody: causing delays of six, eight, twelve, or more hours. That’s it; those are the options. And there’s no way someone in charge will choose to ignore the risk; even if the odds of a terrorist exploit are minuscule, it’ll cost him his career if he’s wrong.

Several European airports have their security screening organized differently. At Schiphol Airport in Amsterdam, for example, passengers are screened at the gates. This is more expensive and requires a substantially different airport design, but it does mean that if there is a security breach, only the gate has to be evacuated and searched, and the people rescreened.

American airports can do more to secure against this risk, but I’m reasonably sure it’s not worth it. We could double the guards to reduce the risk of inattentiveness, and redesign the airports to make this kind of thing less likely, but those are expensive solutions to an already rare problem. As much as I don’t like saying it, the smartest thing is probably to live with this occasional but major inconvenience.

This essay originally appeared on ThreatPost.com.

EDITED TO ADD (1/9): A first-person account of the chaos at Newark Airport, with observations and recommendations.

Posted on January 6, 2010 at 6:10 AM

Surviving a Suicide Bombing

Where you stand matters:

The two researchers have developed accurate physics-based models of a suicide bombing attack, including casualty levels and explosive composition. Their work also describes human shields available in the crowd with partial and full coverage in both two- and three-dimensional environments.

Their virtual simulation tool assesses the impact of crowd formation patterns and their densities on the magnitude of injury and number of casualties of a suicide bombing attack. For a typical attack, the writers suggest that they can reduce the number of fatalities by 12 percent and the number of injuries by 7 percent if their recommendations are followed.

Simulation results were compared and validated by real-life incidents in Iraq. Line-of-sight with the attacker, rushing toward the exit and stampede were found to be the victims’ most lethal choices both during and after the attack.

Presumably they also discovered where the attacker should stand to be as lethal as possible, but there’s no indication that they published those results.

Posted on March 26, 2009 at 8:08 AM

A Rational Response to Peanut Allergies and Children

Some parents of children with peanut allergies are not asking their school to ban peanuts. They consider it more important that teachers know which children are likely to have a reaction, and how to deal with it when it happens; i.e., how to use an EpiPen.

This is a much more resilient response to the threat. It works even when the peanut ban fails. It works whether the child has an anaphylactic reaction to nuts, fruit, dairy, gluten, or whatever.

It’s so rare to see rational risk management when it comes to children and safety; I just had to blog it.

Related blog post, including a very lively comments section.

Posted on January 27, 2009 at 2:10 PM
