Entries Tagged "movie-plot threats"


More Movie Plot Terrorist Threats

The Foreign Policy website has its own list of movie-plot threats: machine-gun wielding terrorists on paragliders, disease-laden insect swarms, a dirty bomb made from smoke detector parts, planning via online games, and botulinum in the food supply. The site fleshes these threats out a bit, but it’s nothing regular readers of this blog can’t imagine for themselves.

Maybe they should have their own movie-plot threat contest.

Posted on February 2, 2010 at 6:34 AM

Australia Restores Some Sanity to Airport Screening

Welcome news:

Carry-on baggage rules will be relaxed under a shake-up of aviation security announced by the Federal Government today.

The changes will see passengers again allowed to carry some sharp implements, such as nail files and clippers, umbrellas, crochet and knitting needles on board aircraft from July next year.

Metal cutlery will return to cabin meals and airport restaurants following Government recognition that security arrangements must be targeted at ‘real risks’.

I’m sure these rules won’t apply to flights to the U.S., where security arrangements must still be targeted at movie-plot threats.

Posted on December 17, 2009 at 12:54 PM

Beyond Security Theater

[I was asked to write this essay for the New Internationalist (n. 427, November 2009, pp. 10–13). It’s nothing I haven’t said before, but I’m pleased with how this essay came together.]

Terrorism is rare, far rarer than many people think. It’s rare because very few people want to commit acts of terrorism, and executing a terrorist plot is much harder than television makes it appear. The best defenses against terrorism are largely invisible: investigation, intelligence, and emergency response. But even these are less effective at keeping us safe than our social and political policies, both at home and abroad. However, our elected leaders don’t think this way: they are far more likely to implement security theater against movie-plot threats.

A movie-plot threat is an overly specific attack scenario. Whether it’s terrorists with crop dusters, terrorists contaminating the milk supply, or terrorists attacking the Olympics, specific stories affect our emotions more intensely than mere data does. Stories are what we fear. It’s not just hypothetical stories: terrorists flying planes into buildings, terrorists with bombs in their shoes or in their water bottles, and terrorists with guns and bombs waging a co-ordinated attack against a city are even scarier movie-plot threats because they actually happened.

Security theater refers to security measures that make people feel more secure without doing anything to actually improve their security. An example: the photo ID checks that have sprung up in office buildings. No-one has ever explained why verifying that someone has a photo ID provides any actual security, but it looks like security to have a uniformed guard-for-hire looking at ID cards. Airport-security examples include the National Guard troops stationed at US airports in the months after 9/11—their guns had no bullets. The US colour-coded system of threat levels, the pervasive harassment of photographers, and the metal detectors that are increasingly common in hotels and office buildings since the Mumbai terrorist attacks, are additional examples.

To be sure, reasonable arguments can be made that some terrorist targets are more attractive than others: aeroplanes because a small bomb can result in the death of everyone aboard, monuments because of their national significance, national events because of television coverage, and transportation because of the numbers of people who commute daily. But there are literally millions of potential targets in any large country (there are five million commercial buildings alone in the US), and hundreds of potential terrorist tactics; it’s impossible to defend every place against everything, and it’s impossible to predict which tactic and target terrorists will try next.

Feeling and Reality

Security is both a feeling and a reality. The propensity for security theater comes from the interplay between the public and its leaders. When people are scared, they need something done that will make them feel safe, even if it doesn’t truly make them safer. Politicians naturally want to do something in response to crisis, even if that something doesn’t make any sense.

Often, this “something” is directly related to the details of a recent event: we confiscate liquids, screen shoes, and ban box cutters on airplanes. But it’s not the target and tactics of the last attack that are important, but the next attack. These measures are only effective if we happen to guess what the next terrorists are planning. If we spend billions defending our rail systems, and the terrorists bomb a shopping mall instead, we’ve wasted our money. If we concentrate airport security on screening shoes and confiscating liquids, and the terrorists hide explosives in their brassieres and use solids, we’ve wasted our money. Terrorists don’t care what they blow up and it shouldn’t be our goal merely to force the terrorists to make a minor change in their tactics or targets.

Our penchant for movie plots blinds us to the broader threats. And security theater consumes resources that could better be spent elsewhere.

Any terrorist attack is a series of events: something like planning, recruiting, funding, practising, executing, aftermath. Our most effective defenses are at the beginning and end of that process—intelligence, investigation, and emergency response—and least effective when they require us to guess the plot correctly. By intelligence and investigation, I don’t mean the broad data-mining or eavesdropping systems that have been proposed and in some cases implemented—those are also movie-plot stories without much basis in actual effectiveness—but instead the traditional “follow the evidence” type of investigation that has worked for decades.

Unfortunately for politicians, the security measures that work are largely invisible. Such measures include enhancing the intelligence-gathering abilities of the secret services, hiring cultural experts and Arabic translators, building bridges with Islamic communities both nationally and internationally, funding police capabilities—both investigative arms to prevent terrorist attacks, and emergency communications systems for after attacks occur—and arresting terrorist plotters without media fanfare. They do not include expansive new police or spying laws. Our police don’t need any new laws to deal with terrorism; rather, they need apolitical funding. These security measures don’t make good television, and they don’t help, come re-election time. But they work, addressing the reality of security instead of the feeling.

The arrest of the “liquid bombers” in London is an example: they were caught through old-fashioned intelligence and police work. Their choice of target (airplanes) and tactic (liquid explosives) didn’t matter; they would have been arrested regardless.

But even as we do all of this we cannot neglect the feeling of security, because it’s how we collectively overcome the psychological damage that terrorism causes. It’s not security theater we need, it’s direct appeals to our feelings. The best way to help people feel secure is by acting secure around them. Instead of reacting to terrorism with fear, we—and our leaders—need to react with indomitability.

Refuse to Be Terrorized

By not overreacting, by not responding to movie-plot threats, and by not becoming defensive, we demonstrate the resilience of our society, in our laws, our culture, our freedoms. There is a difference between indomitability and arrogant “bring ’em on” rhetoric. There’s a difference between accepting the inherent risk that comes with a free and open society, and hyping the threats.

We should treat terrorists like common criminals and give them all the benefits of true and open justice—not merely because it demonstrates our indomitability, but because it makes us all safer. Once a society starts circumventing its own laws, the risks to its future stability are much greater than terrorism.

Supporting real security even though it’s invisible, and demonstrating indomitability even though fear is more politically expedient, requires real courage. Demagoguery is easy. What we need is leaders willing both to do what’s right and to speak the truth.

Despite fearful rhetoric to the contrary, terrorism is not a transcendent threat. A terrorist attack cannot possibly destroy a country’s way of life; it’s only our reaction to that attack that can do that kind of damage. The more we undermine our own laws, the more we convert our buildings into fortresses, the more we reduce the freedoms and liberties at the foundation of our societies, the more we’re doing the terrorists’ job for them.

We saw some of this in the Londoners’ reaction to the 2005 transport bombings. Among the political and media hype and fearmongering, there was a thread of firm resolve. People didn’t fall victim to fear. They rode the trains and buses the next day and continued their lives. Terrorism’s goal isn’t murder; terrorism attacks the mind, using victims as a prop. By refusing to be terrorized, we deny the terrorists their primary weapon: our own fear.

Today, we can project indomitability by rolling back all the fear-based post-9/11 security measures. Our leaders have lost credibility; getting it back requires a decrease in hyperbole. Ditch the invasive mass surveillance systems and new police state-like powers. Return airport security to pre-9/11 levels. Remove swagger from our foreign policies. Show the world that our legal system is up to the challenge of terrorism. Stop telling people to report all suspicious activity; it does little but make us suspicious of each other, increasing both fear and helplessness.

Terrorism has always been rare, and for all we’ve heard about 9/11 changing the world, it’s still rare. Even 9/11 failed to kill as many people as automobiles do in the US every single month. But there’s a pervasive myth that terrorism is easy. It’s easy to imagine terrorist plots, both large-scale “poison the food supply” and small-scale “10 guys with guns and cars.” Movies and television bolster this myth, so many people are surprised that there have been so few attacks in Western cities since 9/11. Certainly intelligence and investigation successes have made it harder, but mostly it’s because terrorist attacks are actually hard. It’s hard to find willing recruits, to co-ordinate plans, and to execute those plans—and it’s easy to make mistakes.

Counterterrorism is also hard, especially when we’re psychologically prone to muck it up. Since 9/11, we’ve embarked on strategies of defending specific targets against specific tactics, overreacting to every terrorist video, stoking fear, demonizing ethnic groups, and treating the terrorists as if they were legitimate military opponents who could actually destroy a country or a way of life—all of this plays into the hands of terrorists. We’d do much better by leveraging the inherent strengths of our modern democracies and the natural advantages we have over the terrorists: our adaptability and survivability, our international network of laws and law enforcement, and the freedoms and liberties that make our society so enviable. The way we live is open enough to make terrorists rare; we are observant enough to prevent most of the terrorist plots that exist, and indomitable enough to survive the even fewer terrorist plots that actually succeed. We don’t need to pretend otherwise.

EDITED TO ADD (11/14): Commentary from Kevin Drum, James Fallows, and The Economist.

Posted on November 13, 2009 at 6:52 AM

The Futility of Defending the Targets

This is just silly:

Beaver Stadium is a terrorist target. It is most likely the No. 1 target in the region. As such, it deserves security measures commensurate with such a designation, but is the stadium getting such security?

[…]

When the stadium is not in use it does not mean it is not a target. It must be watched constantly. An easy solution is to assign police officers there 24 hours a day, seven days a week. This is how a plot to destroy the Brooklyn Bridge was thwarted—police presence. Although there are significant costs to this, the costs pale in comparison if the stadium is destroyed or damaged.

The idea is to create omnipresence, which is a belief in everyone’s minds (terrorists and pranksters included) that the stadium is constantly being watched so that any attempt would be futile.

Actually, the Brooklyn Bridge plot failed because the plotters were idiots and the plot—cutting through cables with blowtorches—was dumb. That, and the all-too-common police informant who egged the plotters on.

But never mind that. Beaver Stadium is Pennsylvania State University’s football stadium, and this article argues that it’s a potential terrorist target that needs 24/7 police protection.

The problem with that kind of reasoning is that it makes no sense. As I said in an article that will appear in New Internationalist:

To be sure, reasonable arguments can be made that some terrorist targets are more attractive than others: aeroplanes because a small bomb can result in the death of everyone aboard, monuments because of their national significance, national events because of television coverage, and transportation because of the numbers of people who commute daily. But there are literally millions of potential targets in any large country (there are five million commercial buildings alone in the US), and hundreds of potential terrorist tactics; it’s impossible to defend every place against everything, and it’s impossible to predict which tactic and target terrorists will try next.

Defending individual targets only makes sense if the number of potential targets is few. If there are seven terrorist targets and you defend five of them, you seriously reduce the terrorists’ ability to do damage. But if there are a million terrorist targets and you defend five of them, the terrorists won’t even notice. I tend to dislike security measures that merely cause the bad guys to make a minor change in their plans.

And the expense would be enormous. Add up these secondary terrorist targets—stadiums, theaters, churches, schools, malls, office buildings, anyplace where a lot of people are packed together—and the number is probably around 200,000, including Beaver Stadium. Full-time police protection requires people: round-the-clock coverage takes roughly five officers per post, so that’s 1,000,000 policemen. At an encumbered cost of $100,000 per policeman per year, probably a low estimate, that’s a total annual cost of $100B. (That’s about what we’re spending each year in Iraq.) On the other hand, hiring one out of every 300 Americans to guard our nation’s infrastructure would solve our unemployment problem. And since policemen get health care, our health care problem as well. Just make sure you don’t accidentally hire a terrorist to guard against terrorists—that would be embarrassing.
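The back-of-envelope arithmetic is easy to check. Here is a minimal sketch in Python; the target count, the five-officers-per-post staffing ratio, and the per-officer cost are the rough assumptions from the paragraph above, not real data:

```python
# Rough cost of guarding every "secondary" target around the clock.
# All numbers are back-of-envelope assumptions from the text, not real data.
targets = 200_000            # stadiums, theaters, churches, schools, malls...
officers_per_post = 5        # roughly what 24/7 shift coverage requires
cost_per_officer = 100_000   # encumbered annual cost per officer, in dollars

officers_needed = targets * officers_per_post       # 1,000,000 officers
annual_cost = officers_needed * cost_per_officer    # $100,000,000,000

print(f"Officers needed: {officers_needed:,}")
print(f"Annual cost: ${annual_cost:,} (about $100 billion)")
```

Change any of the inputs by a factor of two and the conclusion doesn’t change: the bill is still measured in the tens or hundreds of billions of dollars a year.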

The whole idea is nonsense. As I’ve been saying for years, what works is investigation, intelligence, and emergency response:

We need to defend against the broad threat of terrorism, not against specific movie plots. Security is most effective when it doesn’t make arbitrary assumptions about the next terrorist act. We need to spend more money on intelligence and investigation: identifying the terrorists themselves, cutting off their funding, and stopping them regardless of what their plans are. We need to spend more money on emergency response: lessening the impact of a terrorist attack, regardless of what it is. And we need to face the geopolitical consequences of our foreign policy and how it helps or hinders terrorism.

Posted on October 9, 2009 at 6:37 AM

Modeling Zombie Outbreaks

The math doesn’t look good: “When Zombies Attack!: Mathematical Modelling of an Outbreak of Zombie Infection.”

An outbreak of zombies infecting humans is likely to be disastrous, unless extremely aggressive tactics are employed against the undead. While aggressive quarantine may eradicate the infection, this is unlikely to happen in practice. A cure would only result in some humans surviving the outbreak, although they will still coexist with zombies. Only sufficiently frequent attacks, with increasing force, will result in eradication, assuming the available resources can be mustered in time.

Furthermore, these results assumed that the timescale of the outbreak was short, so that the natural birth and death rates could be ignored. If the timescale of the outbreak increases, then the result is the doomsday scenario: an outbreak of zombies will result in the collapse of civilisation, with every human infected, or dead. This is because human births and deaths will provide the undead with a limitless supply of new bodies to infect, resurrect and convert. Thus, if zombies arrive, we must act quickly and decisively to eradicate them before they eradicate us.

The key difference between the models presented here and other models of infectious disease is that the dead can come back to life. Clearly, this is an unlikely scenario if taken literally, but possible real-life applications may include allegiance to political parties, or diseases with a dormant infection.

This is, perhaps unsurprisingly, the first mathematical analysis of an outbreak of zombie infection. While the scenarios considered are obviously not realistic, it is nevertheless instructive to develop mathematical models for an unusual outbreak. This demonstrates the flexibility of mathematical modelling and shows how modelling can respond to a wide variety of challenges in ‘biology’.

In summary, a zombie outbreak is likely to lead to the collapse of civilisation, unless it is dealt with quickly. While aggressive quarantine may contain the epidemic, or a cure may lead to coexistence of humans and zombies, the most effective way to contain the rise of the undead is to hit hard and hit often. As seen in the movies, it is imperative that zombies are dealt with quickly, or else we are all in a great deal of trouble.
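The key feature the authors call out—that the dead can come back to life—is what makes this different from an ordinary epidemic model. Here is a minimal sketch of that kind of SZR (susceptible–zombie–removed) system in Python; the parameter values and time horizon are illustrative assumptions, not numbers taken from the paper:

```python
# Toy SZR (susceptible-zombie-removed) model: susceptibles can become
# zombies, humans can destroy zombies (moving them to "removed"), and
# the removed can be resurrected as zombies. Parameters are illustrative
# assumptions only, not values from the paper.

def simulate(S=500.0, Z=1.0, R=0.0,
             beta=0.0095,   # transmission: susceptibles bitten per S-Z contact
             alpha=0.005,   # defeat rate: zombies destroyed per S-Z contact
             zeta=0.0001,   # resurrection rate of the removed
             dt=0.01, days=30.0):
    for _ in range(int(days / dt)):
        infections = beta * S * Z      # S -> Z
        defeats = alpha * S * Z        # Z -> R
        resurrections = zeta * R       # R -> Z
        S += dt * (-infections)
        Z += dt * (infections + resurrections - defeats)
        R += dt * (defeats - resurrections)
    return S, Z, R

if __name__ == "__main__":
    S, Z, R = simulate()
    print(f"Day 30: {S:.0f} humans, {Z:.0f} zombies, {R:.0f} removed")
```

With parameters anywhere in this neighborhood the zombies win, which echoes the excerpt’s conclusion: without quick, aggressive, repeated intervention, the susceptible population collapses.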

Posted on August 24, 2009 at 5:57 AM

Movie-Plot Threat Alert: Robot Suicide Bombers

Let’s all be afraid:

But it adds: “Robots that effectively mimic human appearance and movements may be used as human proxies.”

It raised the prospects of terrorists using robots to plant and detonate bombs or even replacing human suicide bombers.

A Home Office spokeswoman said: “This strategy looks at how technology might develop in future.

“Clearly it is important that we understand how those wishing us harm might use such technology in future so we can stay one step ahead.”

The document also warns that nanotechnology will help accelerate development of materials for future explosives while advances in fabrics will “significantly” improve camouflage and protection.

I’m sure I’ve seen this stuff in movies.

Posted on August 18, 2009 at 6:16 AM

Nuclear Self-Terrorization

More fearmongering. The headline is “Terrorists could use internet to launch nuclear attack: report.” The subhead: “The risk of cyber-terrorism escalating to a nuclear strike is growing daily, according to a study.” In the article:

The claims come in a study commissioned by the International Commission on Nuclear Non-proliferation and Disarmament (ICNND), which suggests that under the right circumstances, terrorists could break into computer systems and launch an attack on a nuclear state, triggering a catastrophic chain of events that would have a global impact.

Without better protection of computer and information systems, the paper suggests, governments around the world are leaving open the possibility that a well-coordinated cyberwar could quickly elevate to nuclear levels.

In fact, says the study, “this may be an easier alternative for terrorist groups than building or acquiring a nuclear weapon or dirty bomb themselves”.

Though the paper admits that the media and entertainment industries often confuse and exaggerate the risk of cyberterrorism, it also outlines a number of potential threats and situations in which dedicated hackers could use information warfare techniques to make a nuclear attack more likely.

Note the weasel words: the study “suggests that under the right circumstances.” We’re “leaving open the possibility.” The report “outlines a number of potential threats and situations” where the bad guys could “make a nuclear attack more likely.”

Gadzooks. I’m tired of this idiocy. Stop overreacting to rare risks. Refuse to be terrorized, people.

Posted on July 31, 2009 at 6:00 AM

Spanish Police Foil Remote-Controlled Zeppelin Jailbreak

Sometimes movie plots actually happen:

…three people have been arrested after police discovered their plan to free a drug trafficker from an island prison using a 13-foot airship carrying night goggles, climbing gear and camouflage paint.

[…]

The arrested men had set up an elaborate surveillance operation of the prison that involved a camouflaged tent, powerful binoculars, telephoto lenses, and motion detection sensors. But authorities caught wind of the plan when they intercepted the inflatable zeppelin as it arrived from the Italian town of Bergamo.

EDITED TO ADD (7/14): Another story, with more detail.

Posted on July 8, 2009 at 1:54 PM

This Week's Movie-Plot Threat: Fungus

I had been wondering whether to post this, since it’s not really a security threat—there’s no intelligence by the attacker:

Crop scientists fear the Ug99 fungus could wipe out more than 80% of worldwide wheat crops as it spreads from eastern Africa. It has already jumped the Red Sea and traveled as far as Iran. Experts say it is poised to enter the breadbasket of northern India and Pakistan, and the wind will inevitably carry it to Russia, China and even North America—if it doesn’t hitch a ride with people first.

“It’s a time bomb,” said Jim Peterson, a professor of wheat breeding and genetics at Oregon State University in Corvallis. “It moves in the air, it can move in clothing on an airplane. We know it’s going to be here. It’s a matter of how long it’s going to take.”

Posted on June 19, 2009 at 2:03 PM

Imagining Threats

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it “embarrassing.” I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats—which contributes to overall fear and overestimation of the risks. And that doesn’t help keep us safe at all.

Recently, I read a paper by Magne Jørgensen that provides some insight into why this is so. Titled More Risk Analysis Can Lead to Increased Over-Optimism and Over-Confidence, the paper isn’t about terrorism at all. It’s about software projects.

Most software development project plans are overly optimistic, and most planners are overconfident about their overoptimistic plans. Jørgensen studied how risk analysis affected this. He conducted four separate experiments on software engineers, and concluded (though there are lots of caveats in the paper, and more research needs to be done) that performing more risk analysis can make engineers more overoptimistic instead of more realistic.

Potential explanations all come from behavioral economics: cognitive biases that affect how we think and make decisions. (I’ve written about some of these biases and how they affect security decisions, and there’s a great book on the topic as well.)

First, there’s a control bias. We tend to underestimate risks in situations where we are in control, and overestimate risks in situations when we are not in control. Driving versus flying is a common example. This bias becomes stronger with familiarity, involvement and a desire to experience control, all of which increase with increased risk analysis. So the more risk analysis, the greater the control bias, and the greater the underestimation of risk.

The second explanation is the availability heuristic. Basically, we judge the importance or likelihood of something happening by the ease of bringing instances of that thing to mind. So we tend to overestimate the probability of a rare risk that is seen in a news headline, because it is so easy to imagine. Likewise, we underestimate the probability of things occurring that don’t happen to be in the news.

A corollary of this phenomenon is that, if we’re asked to think about a series of things, we overestimate the probability of the last thing thought about because it’s more easily remembered.

According to Jørgensen’s reasoning, people tend to do software risk analysis by thinking of the severe risks first, and then the more manageable risks. So the more risk analysis that’s done, the less severe the last risk imagined, and thus the greater the underestimation of the total risk.

The third explanation is similar: the peak end rule. When thinking about a total experience, people tend to place too much weight on the last part of the experience. In one experiment, people had to hold their hands under cold water for one minute. Then, they had to hold their hands under cold water for one minute again, then keep their hands in the water for an additional 30 seconds while the temperature was gradually raised. When asked about it afterwards, most people preferred the second option to the first, even though the second had more total discomfort. (An intrusive medical device was redesigned along these lines, resulting in a longer period of discomfort but a relatively comfortable final few seconds. People liked it a lot better.) This means, like the second explanation, that the least severe last risk imagined gets greater weight than it deserves.

Fascinating stuff. But the biases produce the reverse effect when it comes to movie-plot threats. The more you think about far-fetched terrorism possibilities, the more outlandish and scary they become, and the less control you think you have. This causes us to overestimate the risks.

Think about this in the context of terrorism. If you’re asked to come up with threats, you’ll think of the significant ones first. If you’re pushed to find more, if you hire science-fiction writers to dream them up, you’ll quickly get into the low-probability movie plot threats. But since they’re the last ones generated, they’re more available. (They’re also more vivid—science fiction writers are good at that—which also leads us to overestimate their probability.) They also suggest we’re even less in control of the situation than we believed. Spending too much time imagining disaster scenarios leads people to overestimate the risks of disaster.

I’m sure there’s also an anchoring effect in operation. This is another cognitive bias, where people’s numerical estimates of things are affected by numbers they’ve most recently thought about, even random ones. People who are given a list of three risks will think the total number of risks is lower than people who are given a list of 12 risks. So if the science fiction writers come up with 137 risks, people will believe that the number of risks is higher than they otherwise would—even if they recognize the 137 number is absurd.

Jørgensen does not believe risk analysis is useless in software projects, and I don’t believe scenario brainstorming is useless in counterterrorism. Both can lead to new insights and, as a result, a more intelligent analysis of both specific risks and general risk. But an over-reliance on either can be detrimental.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.

This essay originally appeared on Wired.com.

Posted on June 19, 2009 at 6:49 AM

