Entries Tagged "fear"

The Myth of Panic

This New York Times op-ed argues that panic is largely a myth: people under stress feel frightened, but they behave rationally, and their behavior gets labeled “panic” only because of the stress.

If our leaders are really planning for panic, in the technical sense, then they are at best wasting resources on a future that is unlikely to happen. At worst, they may be doing our enemies’ work for them – while people are amazing under pressure, it cannot help to have predictions of panic drummed into them by supposed experts.

It can set up long-term foreboding, causing people to question whether they have the mettle to handle terrorists’ challenges. Studies have found that people interpreting an ambiguous situation, like seeing a South Asian-looking man with a backpack get on a bus, look to one another for cues; panicky warnings can color those cues.

Nor can it help if policy makers talk about possible draconian measures (like martial law and rigidly policed quarantines) to control the public and deny its right to manage its own affairs. The very planning for such measures can alienate citizens and the authorities from each other.

Whatever its source, the myth of panic is a threat to our welfare. Given the difficulty of using the term precisely and the rarity of actual panic situations, the cleanest solution is for the politicians and the press to avoid the term altogether. It’s time to end chatter about “panic” and focus on ways to support public resilience in an emergency.

Posted on August 9, 2005 at 7:25 AM


Profiling

There is a great discussion about profiling going on in the comments to the previous post. To help, here is what I wrote on the subject in Beyond Fear (pp. 133-7):

Good security has people in charge. People are resilient. People can improvise. People can be creative. People can develop on-the-spot solutions. People can detect attackers who cheat, and can attempt to maintain security despite the cheating. People can detect passive failures and attempt to recover. People are the strongest point in a security process. When a security system succeeds in the face of a new or coordinated or devastating attack, it’s usually due to the efforts of people.

On 14 December 1999, Ahmed Ressam tried to enter the U.S. by ferry from Victoria, British Columbia. In the trunk of his car, he had a suitcase bomb. His plan was to drive to Los Angeles International Airport, put his suitcase on a luggage cart in the terminal, set the timer, and then leave. The plan would have worked had someone not been vigilant.

Ressam had to clear customs before boarding the ferry. He had fake ID, in the name of Benni Antoine Noris, and the computer cleared him based on this ID. He was allowed to go through after a routine check of his car’s trunk, even though he was wanted by the Canadian police. On the other side of the Strait of Juan de Fuca, at Port Angeles, Washington, Ressam was approached by U.S. customs agent Diana Dean, who asked some routine questions and then decided that he looked suspicious. He was fidgeting, sweaty, and jittery. He avoided eye contact. In Dean’s own words, he was acting “hinky.” More questioning — there was no one else crossing the border, so two other agents got involved — and more hinky behavior. Ressam’s car was eventually searched, and he was finally discovered and captured. It wasn’t any one thing that tipped Dean off; it was everything encompassed in the slang term “hinky.” But the system worked. The reason there wasn’t a bombing at LAX around Christmas in 1999 was because a knowledgeable person was in charge of security and paying attention.

There’s a dirty word for what Dean did that chilly afternoon in December, and it’s profiling. Everyone does it all the time. When you see someone lurking in a dark alley and change your direction to avoid him, you’re profiling. When a storeowner sees someone furtively looking around as she fiddles inside her jacket, that storeowner is profiling. People profile based on someone’s dress, mannerisms, tone of voice … and yes, also on their race and ethnicity. When you see someone running toward you on the street with a bloody ax, you don’t know for sure that he’s a crazed ax murderer. Perhaps he’s a butcher who’s actually running after the person next to you to give her the change she forgot. But you’re going to make a guess one way or another. That guess is an example of profiling.

To profile is to generalize. It’s taking characteristics of a population and applying them to an individual. People naturally have an intuition about other people based on different characteristics. Sometimes that intuition is right and sometimes it’s wrong, but it’s still a person’s first reaction. How good this intuition is as a countermeasure depends on two things: how accurate the intuition is and how effective it is when it becomes institutionalized or when the profile characteristics become commonplace.

One of the ways profiling becomes institutionalized is through computerization. Instead of Diana Dean looking someone over, a computer looks the profile over and gives it some sort of rating. Generally profiles with high ratings are further evaluated by people, although sometimes countermeasures kick in based on the computerized profile alone. This is, of course, more brittle. The computer can profile based only on simple, easy-to-assign characteristics: age, race, credit history, job history, et cetera. Computers don’t get hinky feelings. Computers also can’t adapt the way people can.
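A toy sketch can make this brittleness concrete. Everything here is invented for illustration; the attributes, weights, and example travelers are not from the text:

```python
# Toy computerized profile: a rigid checklist that assigns a rating.
# All attributes and weights are invented for illustration.
def profile_score(traveler: dict) -> int:
    score = 0
    if traveler.get("age", 99) < 35:
        score += 1
    if traveler.get("one_way_ticket"):
        score += 2
    if traveler.get("paid_cash"):
        score += 2
    return score

# Someone matching the checklist gets a high rating...
flagged = profile_score({"age": 25, "one_way_ticket": True, "paid_cash": True})
# ...while anyone who avoids the listed attributes scores zero,
# no matter how suspiciously they behave in person.
missed = profile_score({"age": 40, "one_way_ticket": False, "paid_cash": False})
print(flagged, missed)
```

An attacker who learns the checklist can change any listed attribute and sail through with a score of zero; a human screener, unlike this function, can still notice that something is “hinky.”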

Profiling works better if the characteristics profiled are accurate. If erratic driving is a good indication that the driver is intoxicated, then that’s a good characteristic for a police officer to use to determine who he’s going to pull over. If furtively looking around a store or wearing a coat on a hot day is a good indication that the person is a shoplifter, then those are good characteristics for a store owner to pay attention to. But if wearing baggy trousers isn’t a good indication that the person is a shoplifter, then the store owner is going to spend a lot of time paying undue attention to honest people with lousy fashion sense.

In common parlance, the term “profiling” doesn’t refer to these characteristics. It refers to profiling based on characteristics like race and ethnicity, and institutionalized profiling based on those characteristics alone. During World War II, the U.S. rounded up over 100,000 people of Japanese origin who lived on the West Coast and locked them in camps (prisons, really). That was an example of profiling. Israeli border guards spend a lot more time scrutinizing Arab men than Israeli women; that’s another example of profiling. In many U.S. communities, police have been known to stop and question people of color driving around in wealthy white neighborhoods (commonly referred to as “DWB” — Driving While Black). In all of these cases you might possibly be able to argue some security benefit, but the trade-offs are enormous: Honest people who fit the profile can get annoyed, or harassed, or arrested, when they’re assumed to be attackers.

For democratic governments, this is a major problem. It’s just wrong to segregate people into “more likely to be attackers” and “less likely to be attackers” based on race or ethnicity. It’s wrong for the police to pull a car over just because its black occupants are driving in a rich white neighborhood. It’s discrimination.

But people make bad security trade-offs when they’re scared, which is why we saw Japanese internment camps during World War II, and why there is so much discrimination against Arabs in the U.S. going on today. That doesn’t make it right, and it doesn’t make it effective security. Writing about the Japanese internment, for example, a 1983 commission reported that the causes of the incarceration were rooted in “race prejudice, war hysteria, and a failure of political leadership.” But just because something is wrong doesn’t mean that people won’t continue to do it.

Ethics aside, institutionalized profiling fails because real attackers are so rare: Active failures will be much more common than passive failures. The great majority of people who fit the profile will be innocent. At the same time, some real attackers are going to deliberately try to sneak past the profile. During World War II, a Japanese American saboteur could try to evade imprisonment by pretending to be Chinese. Similarly, an Arab terrorist could dye his hair blond, practice an American accent, and so on.
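The arithmetic behind this rarity argument is worth working through once. The numbers below are hypothetical, chosen only to show the base-rate effect; none of them come from the text:

```python
# Hypothetical figures for illustration only -- not from the text.
population = 300_000_000   # people screened
attackers = 100            # actual attackers among them
hit_rate = 0.99            # P(fits profile | attacker)
false_alarm = 0.01         # P(fits profile | innocent): a very accurate profile

true_hits = attackers * hit_rate
false_hits = (population - attackers) * false_alarm

# Of everyone the profile flags, what fraction are actual attackers?
posterior = true_hits / (true_hits + false_hits)
print(f"people flagged: {true_hits + false_hits:,.0f}")
print(f"chance a flagged person is an attacker: {posterior:.5%}")
```

Even with a profile that is right 99% of the time in both directions, millions of innocents get flagged and the odds that any flagged person is an attacker are a few in a hundred thousand: active failures swamp the real detections.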

Profiling can also blind you to threats outside the profile. If U.S. border guards stop and search everyone who’s young, Arab, and male, they’re not going to have the time to stop and search all sorts of other people, no matter how hinky they might be acting. On the other hand, if the attackers are of a single race or ethnicity, profiling is more likely to work (although the ethics are still questionable). It makes real security sense for El Al to spend more time investigating young Arab males than it does for them to investigate Israeli families. In Vietnam, American soldiers never knew which local civilians were really combatants; sometimes killing all of them was the security solution they chose.

If a lot of this discussion is abhorrent, as it probably should be, it’s the trade-offs in your head talking. It’s perfectly reasonable to decide not to implement a countermeasure not because it doesn’t work, but because the trade-offs are too great. Locking up every Arab-looking person will reduce the potential for Muslim terrorism, but no reasonable person would suggest it. (It’s an example of “winning the battle but losing the war.”) In the U.S., there are laws that prohibit police profiling by characteristics like ethnicity, because we believe that such security measures are wrong (and not simply because we believe them to be ineffective).

Still, no matter how much a government makes it illegal, profiling does occur. It occurs at an individual level, at the level of Diana Dean deciding which cars to wave through and which ones to investigate further. She profiled Ressam based on his mannerisms and his answers to her questions. He was Algerian, and she certainly noticed that. However, this was before 9/11, and the reports of the incident clearly indicate that she thought he was a drug smuggler; ethnicity probably wasn’t a key profiling factor in this case. In fact, this is one of the most interesting aspects of the story. That intuitive sense that something was amiss worked beautifully, even though everybody made a wrong assumption about what was wrong. Human intuition detected a completely unexpected kind of attack. Humans will beat computers at hinkiness-detection for many decades to come.

And done correctly, this intuition-based sort of profiling can be an excellent security countermeasure. Dean needed to have the training and the experience to profile accurately and properly, without stepping over the line and profiling illegally. The trick here is to make sure perceptions of risk match the actual risks. If those responsible for security profile based on superstition and wrong-headed intuition, or by blindly following a computerized profiling system, profiling won’t work at all. And even worse, it actually can reduce security by blinding people to the real threats. Institutionalized profiling can ossify a mind, and a person’s mind is the most important security countermeasure we have.

A couple of other points (not from the book):

  • Whenever you design a security system with two ways through — an easy way and a hard way — you invite the attacker to take the easy way. Profile for young Arab males, and you’ll get terrorists that are old non-Arab females. This paper looks at the security effectiveness of profiling versus random searching.
  • If we are going to increase security against terrorism, the young Arab males living in our country are precisely the people we want on our side. Discriminating against them in the name of security is not going to make them more likely to help.
  • Despite what many people think, terrorism is not confined to young Arab males. Shoe-bomber Richard Reid was British. Germaine Lindsay, one of the 7/7 London bombers, was Afro-Caribbean. Here are some more examples:

    In 1986, a 32-year-old Irish woman, pregnant at the time, was about to board an El Al flight from London to Tel Aviv when El Al security agents discovered an explosive device hidden in the false bottom of her bag. The woman’s boyfriend — the father of her unborn child — had hidden the bomb.

    In 1987, a 70-year-old man and a 25-year-old woman — neither of whom were Middle Eastern — posed as father and daughter and brought a bomb aboard a Korean Air flight from Baghdad to Thailand. En route to Bangkok, the bomb exploded, killing all on board.

    In 1999, men dressed as businessmen (and one dressed as a Catholic priest) turned out to be terrorist hijackers, who forced an Avianca flight to divert to an airstrip in Colombia, where some passengers were held as hostages for more than a year and a half.

    The 2002 Bali terrorists were Indonesian. The Chechen terrorists who downed the Russian planes were women. Timothy McVeigh and the Unabomber were Americans. The Basque terrorists are Basque, and Irish terrorists are Irish. The Tamil Tigers are Sri Lankan.

    And many Muslims are not Arabs, and many people who look Arab are not even Muslims. More to the point, almost everyone who is Arab is not a terrorist. So not only is there a large number of false negatives (terrorists who don’t meet the profile), but there is an enormous number of false positives: innocents who do meet the profile.

Posted on July 22, 2005 at 3:12 PM

New York Times on Identity Theft

I got some really good quotes in this New York Times article on identity theft:

Which is why I wish William Proxmire were still on the case. What we need right now is someone in power who can put the burden for this problem right where it belongs: on the financial and other institutions who collect this data. Let’s face it: by the time even the most vigilant consumer discovers his information has been used fraudulently, it’s already too late. “When people ask me what can the average person do to stop identity theft, I say, ‘nothing,'” said Bruce Schneier, the chief technology officer of Counterpane Internet Security. “This data is held by third parties and they have no impetus to fix it.”

Mr. Schneier, though, has a solution that is positively Proxmirian in its elegance and simplicity. Most of the bills that have been filed in Congress to deal with identity fraud are filled with specific requirements for banks and other institutions: encrypt this; safeguard that; strengthen this firewall.

Mr. Schneier says forget about all that. Instead, do what Congress did in the 1970’s — just put the burden on the financial industry. “If we’re ever going to manage the risks and effects of electronic impersonation,” he wrote recently on CNET (and also in his blog), “we must concentrate on preventing and detecting fraudulent transactions.” And the only way to do that, he added, is by making the financial institutions liable for fraudulent transactions.

“I think business ingenuity is top notch,” Mr. Schneier said in an interview. “And I think if you make it their problem, they will solve it.”

Yes, he acknowledged, letting consumers off the hook might cause them to be less vigilant. But that is exactly what Senator Proxmire did and to great effect. Forcing the financial institutions to bear the entire burden will cause them to tighten up their procedures until the fraud is under control. Maybe they will invest in complex software. But maybe they’ll take simpler measures as well, like making it a little less easy than it is today to obtain a credit card. Best of all, once people see these measures take effect — and realize that someone else is responsible for fixing the problems — their fear will abate.

As Senator Proxmire understood a long time ago, fear is the great enemy of commerce. Maybe this time, the banks will finally understand that as well.

Posted on July 12, 2005 at 5:14 PM

Fearmongering About Bot Networks

Bot networks are a serious security problem, but this is ridiculous. From the Independent:

The PC in your home could be part of a complex international terrorist network. Without you realising it, your computer could be helping to launder millions of pounds, attacking companies’ websites or cracking confidential government codes.

This is not the stuff of science fiction or a conspiracy theory from a paranoid mind, but a warning from one of the world’s most-respected experts on computer crime. Dr Peter Tippett is chief technology officer at Cybertrust, a US computer security company, and a senior adviser on the issue to President George Bush. His warning is stark: criminals and terrorists are hijacking home PCs over the internet, creating “bot” computers to carry out illegal activities.

Yes, bot networks are bad. They’re used to send spam (both commercial and phishing), launch denial-of-service attacks (sometimes involving extortion), and stage attacks on other systems. Most bot networks are controlled by kids, but more and more criminals are getting into the act.

But your computer a part of an international terrorist network? Get real.

Once a criminal has gathered together what is known as a “herd” of bots, the combined computing power can be dangerous. “If you want to break the nuclear launch code then set a million computers to work on it. There is now a danger of nation state attacks,” says Dr Tippett. “The vast majority of terrorist organisations will use bots.”

I keep reading that last sentence, and wonder if “bots” is just a typo for “bombs.” And the line about bot networks being used to crack nuclear launch codes is nothing more than fearmongering.

Clearly I need to write an essay on bot networks.

Posted on May 17, 2005 at 3:33 PM

Should Terrorism be Reported in the News?

In the New York Times (read it here without registering), columnist John Tierney argues that the media is performing a public disservice by writing about all the suicide bombings in Iraq. This only serves to scare people, he claims, and serves the terrorists’ ends.

Some liberal bloggers have jumped on this op-ed as furthering the administration’s attempts to hide the horrors of the Iraqi war from the American people, but I think the argument is more subtle than that. Before you can figure out why Tierney is wrong, you need to understand that he has a point.

Terrorism is a crime against the mind. The real target of a terrorist is morale, and press coverage helps him achieve his goal. I wrote in Beyond Fear (pages 242-3):

Morale is the most significant terrorist target. By refusing to be scared, by refusing to overreact, and by refusing to publicize terrorist attacks endlessly in the media, we limit the effectiveness of terrorist attacks. Through the long spate of IRA bombings in England and Northern Ireland in the 1970s and 1980s, the press understood that the terrorists wanted the British government to overreact, and praised their restraint. The U.S. press demonstrated no such understanding in the months after 9/11 and made it easier for the U.S. government to overreact.

Consider this thought experiment. If the press did not report the 9/11 attacks, if most people in the U.S. didn’t know about them, then the attacks wouldn’t have been such a defining moment in our national politics. If we lived 100 years ago, and people only read newspaper articles and saw still photographs of the attacks, then people wouldn’t have had such an emotional reaction. If we lived 200 years ago and all we had to go on was the written word and oral accounts, the emotional reaction would be even less. Modern news coverage amplifies the terrorists’ actions by endlessly replaying them, with real video and sound, burning them into the psyche of every viewer.

Just as the media’s attention to 9/11 scared people into accepting government overreactions like the PATRIOT Act, the media’s attention to the suicide bombings in Iraq is convincing people that Iraq is more dangerous than it actually is.

Tierney writes:

I’m not advocating official censorship, but there’s no reason the news media can’t reconsider their own fondness for covering suicide bombings. A little restraint would give the public a more realistic view of the world’s dangers.

Just as New Yorkers came to be guided by crime statistics instead of the mayhem on the evening news, people might begin to believe the statistics showing that their odds of being killed by a terrorist are minuscule in Iraq or anywhere else.

I pretty much said the same thing, albeit more generally, in Beyond Fear (page 29):

Modern mass media, specifically movies and TV news, has degraded our sense of natural risk. We learn about risks, or we think we are learning, not by directly experiencing the world around us and by seeing what happens to others, but increasingly by getting our view of things through the distorted lens of the media. Our experience is distilled for us, and it’s a skewed sample that plays havoc with our perceptions. Kids try stunts they’ve seen performed by professional stuntmen on TV, never recognizing the precautions the pros take. The five o’clock news doesn’t truly reflect the world we live in — only a very few small and special parts of it.

Slices of life with immediate visual impact get magnified; those with no visual component, or that can’t be immediately and viscerally comprehended, get downplayed. Rarities and anomalies, like terrorism, are endlessly discussed and debated, while common risks like heart disease, lung cancer, diabetes, and suicide are minimized.

The global reach of today’s news further exacerbates this problem. If a child is kidnapped in Salt Lake City during the summer, mothers all over the country suddenly worry about the risk to their children. If there are a few shark attacks in Florida — and a graphic movie — suddenly every swimmer is worried. (More people are killed every year by pigs than by sharks, which shows you how good we are at evaluating risk.)

One of the things I routinely tell people is that if it’s in the news, don’t worry about it. By definition, “news” means that it hardly ever happens. If a risk is in the news, then it’s probably not worth worrying about. When something is no longer reported — automobile deaths, domestic violence — when it’s so common that it’s not news, then you should start worrying.

Tierney is arguing his position as someone who thinks that the Bush administration is doing a good job fighting terrorism, and that the media’s reporting of suicide bombings in Iraq is sapping Americans’ will to fight. I am looking at the same issue from the other side, as someone who thinks the media’s reporting of terrorist attacks and threats has increased public support for the Bush administration’s draconian counterterrorism laws and dangerous and damaging foreign and domestic policies. If the media didn’t report all of the administration’s alerts and warnings and arrests, we would have a much more sensible counterterrorism policy in America and we would all be much safer.

So why is the argument wrong? It’s wrong because the danger of not reporting terrorist attacks is greater than the risk of continuing to report them. Freedom of the press is a security measure. The only tool we have to keep government honest is public disclosure. Once we start hiding pieces of reality from the public — either through legal censorship or self-imposed “restraint” — we end up with a government that acts based on secrets. We end up with some sort of system that decides what the public should or should not know.

Here’s one example. Last year I argued that the constant stream of terrorist alerts was a mechanism to keep Americans scared. This week, the media reported that the Bush administration repeatedly raised the terror threat level on flimsy evidence, against the recommendation of former DHS secretary Tom Ridge. If the media follows this story, we will learn — too late for the 2004 election, but not too late for the future — more about the Bush administration’s terrorist propaganda machine.

Freedom of the press — the unfettered publishing of all the bad news — isn’t without dangers. But anything else is even more dangerous. That’s why Tierney is wrong.

And honestly, if anyone thinks they can get an accurate picture of anyplace on the planet by reading news reports, they’re sadly mistaken.

Posted on May 12, 2005 at 9:49 AM

Anonymity and the Internet

From Slate:

Anonymice on Anonymity Wendy.Seltzer.org (“Musings of a techie lawyer”) deflates the New York Times’ breathless Saturday (March 19) piece about the menace posed by anonymous access to Wi-Fi networks (“Growth of Wireless Internet Opens New Path for Thieves” by Seth Schiesel). Wi-Fi pirates around the nation are using unsecured hotspots to issue anonymous death threats, download child pornography, and commit credit card fraud, Schiesel writes. Then he plays the terrorist card.

But unsecured wireless networks are nonetheless being looked at by the authorities as a potential tool for furtive activities of many sorts, including terrorism. Two federal law enforcement officials said on condition of anonymity that while they were not aware of specific cases, they believed that sophisticated terrorists might also be starting to exploit unsecured Wi-Fi connections.

Never mind the pod of qualifiers swimming through those two sentences — “being looked at”; “potential tool”; “not aware of specific cases”; “might” — look at the sourcing. “Two federal law enforcement officials said on condition of anonymity. …” Seltzer points out the deep-dish irony of the Times citing anonymous sources about the imagined threats posed by anonymous Wi-Fi networks. Anonymous sources of unsubstantiated information, good. Anonymous Wi-Fi networks, bad.

This is the post from wendy.seltzer.org:

The New York Times runs an article in which law enforcement officials lament, somewhat breathlessly, that open wifi connections can be used, anonymously, by wrongdoers. The piece omits any mention of the benefits of these open wireless connections — no-hassle connectivity anywhere the “default” community network is operating, and anonymous browsing and publication for those doing good, too.

Without a hint of irony, however:

Two federal law enforcement officials said on condition of anonymity that while they were not aware of specific cases, they believed that sophisticated terrorists might also be starting to exploit unsecured Wi-Fi connections.

Yes, even law enforcement needs anonymity sometimes.

Open WiFi networks are a good thing. Yes, they allow bad guys to do bad things. But so do automobiles, telephones, and just about everything else you can think of. I like it when I find an open wireless network that I can use. I like it when my friends keep their home wireless network open so I can use it.

Scare stories like the New York Times one don’t help any.

Posted on March 25, 2005 at 12:49 PM

Altimeter Watches Now a Terrorism Threat

This story is so idiotic that I have trouble believing it’s true. According to MSNBC:

An advisory issued Monday by the Department of Homeland Security and the FBI urges the Transportation Security Administration to have airport screeners keep an eye out for wristwatches containing cigarette lighters or altimeters.

The notice says “recent intelligence suggests al-Qaida has expressed interest in obtaining wristwatches with a hidden butane-lighter function and Casio watches with an altimeter function. Casio watches have been extensively used by al-Qaida and associated organizations as timers for improvised explosive devices. The Casio brand is likely chosen due to its worldwide availability and inexpensive price.”

Clocks and watches definitely make good timers for remotely triggered bombs. In this scenario, the person carrying the watch is an innocent. (Otherwise he wouldn’t need a remote triggering device; he could set the bomb off himself.) This implies that the bomb is stuffed inside the functional watch. But if you assume that a bomb small enough to fit in the unused space of a working wristwatch can blow up an airplane, you’ve got problems far bigger than one particular brand of wristwatch. This story simply makes no sense.

And, like most of the random “alerts” from the DHS, it’s not based on any real facts:

The advisory notes that there is no specific information indicating any terrorist plans to use the devices, but it urges screeners to watch for them.

I wish the DHS were half as good at keeping people safe as they are at scaring people. (I’ve written more about that here.)

Posted on January 5, 2005 at 12:34 PM

Bad Quote

In a story on a computer glitch that forced Comair to cancel 1,100 flights on Christmas Day, I was quoted in an AP story as saying:

“If this kind of thing could happen by accident, what would happen if the bad guys did this on purpose?” he said.

I’m sure I said that, but I wish the reporter hadn’t used it. It’s just the sort of fear-mongering that I object to when others do it.

Posted on December 28, 2004 at 8:58 AM

The Doghouse: Internet Security Foundation

This organization wants to sell their tool to view passwords in textboxes “hidden” by asterisks on Windows. They claim it’s “a glaring security hole in Microsoft Windows” and a “grave security risk.” Their webpage is thick with FUD, and warns that criminals and terrorists can easily clean out your bank accounts because of this problem.

Of course the problem isn’t that users type passwords into their computers. The problem is that programs don’t store passwords securely. The problem is that programs pass passwords around in plaintext. The problem is that users choose lousy passwords, and then store them insecurely. The problem is that financial applications are still relying on passwords for security, rather than two-factor authentication.
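On the “store passwords securely” point: the standard fix is to store a salted, deliberately slow hash rather than the password itself. Here's a minimal sketch using only Python’s standard library; the example password is made up, and a real system would likely use a dedicated scheme such as bcrypt or scrypt with carefully chosen parameters:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash; store (salt, digest), never the password."""
    salt = os.urandom(16)  # random per-password salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

# Example with an invented password:
salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

A tool that reveals what’s behind the asterisks on your own screen doesn’t touch any of this; the passwords worth stealing are the ones sitting in plaintext in programs and files.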

But the “Internet Security Foundation” is trying to make as much noise as possible. They even have this nasty letter to Bill Gates that you can sign (36 people had signed, the last time I looked). I’m not sure what their angle is, but I don’t like it.

Posted on December 13, 2004 at 1:32 PM

Do Terror Alerts Work?

As I read the litany of terror threat warnings that the government has issued in the past three years, the thing that jumps out at me is how vague they are. The careful wording implies everything without actually saying anything. We hear “terrorists might try to bomb buses and rail lines in major U.S. cities this summer,” and there’s “increasing concern about the possibility of a major terrorist attack.” “At least one of these attacks could be executed by the end of the summer 2003.” Warnings are based on “uncorroborated intelligence,” and issued even though “there is no credible, specific information about targets or method of attack.” And, of course, “weapons of mass destruction, including those containing chemical, biological, or radiological agents or materials, cannot be discounted.”

Terrorists might carry out their attacks using cropdusters, helicopters, scuba divers, even prescription drugs from Canada. They might be carrying almanacs. They might strike during the Christmas season, disrupt the “democratic process,” or target financial buildings in New York and Washington.

It’s been more than two years since the government instituted a color-coded terror alert system, and the Department of Homeland Security has issued about a dozen terror alerts in that time. How effective have they been in preventing terrorism? Have they made us any safer, or are they causing harm? Are they, as critics claim, just a political ploy?

When Attorney General John Ashcroft came to Minnesota recently, he said the fact that there had been no terrorist attacks in America in the three years since September 11th was proof that the Bush administration’s anti-terrorist policies were working. I thought: There were no terrorist attacks in America in the three years before September 11th, and we didn’t have any terror alerts. What does that prove?

In theory, the warnings are supposed to cultivate an atmosphere of preparedness. If Americans are vigilant against the terrorist threat, then maybe the terrorists will be caught and their plots foiled. And repeated warnings brace Americans for the aftermath of another attack.

The problem is that the warnings don’t do any of this. Because they are so vague and so frequent, and because they don’t recommend any useful actions that people can take, terror threat warnings don’t prevent terrorist attacks. They might force a terrorist to delay his plan temporarily, or change his target. But in general, professional security experts like me are not particularly impressed by systems that merely force the bad guys to make minor modifications in their tactics.

And the alerts don’t result in a more vigilant America. It’s one thing to issue a hurricane warning, and advise people to board up their windows and remain in the basement. Hurricanes are short-term events, and it’s obvious when the danger is imminent and when it’s over. People can do useful things in response to a hurricane warning; then there is a discrete period when their lives are markedly different, and they feel there was utility in the higher alert mode, even if nothing came of it.

It’s quite another thing to tell people to be on alert, but not to alter their plans–as Americans were instructed last Christmas. A terrorist alert that instills a vague feeling of dread or panic, without giving people anything to do in response, is ineffective. Indeed, it inspires terror itself. Compare people’s reactions to hurricane threats with their reactions to earthquake threats. According to scientists, California is expecting a huge earthquake sometime in the next two hundred years. Even though the magnitude of the disaster will be enormous, people just can’t stay alert for two centuries. The news seems to have generated the same levels of short-term fear and long-term apathy in Californians that the terrorist warnings do. It’s human nature; people simply can’t be vigilant indefinitely.

It’s true too that people want to make their own decisions. Regardless of what the government suggests, people are going to independently assess the situation. They’re going to decide for themselves whether or not changing their behavior seems like a good idea. If there’s no rational information to base their independent assessment on, they’re going to come to conclusions based on fear, prejudice, or ignorance.

We’re already seeing this in the U.S. We see it when Muslim men are assaulted on the street. We see it when a woman on an airplane panics because a Syrian pop group is flying with her. We see it again and again, as people react to rumors about terrorist threats from Al Qaeda and its allies endlessly repeated by the news media.

This all implies that if the government is going to issue a threat warning at all, it should provide as many details as possible. But this is a catch-22: there's an absolute limit to how much information the government can reveal. The classified nature of the intelligence that goes into these threat alerts precludes the government from giving the public all the information it would need to be meaningfully prepared. And maddeningly, the current administration occasionally compromises the intelligence assets it does have, in the interest of politics. It recently released the name of a Pakistani agent working undercover in Al Qaeda, blowing ongoing counterterrorist operations both in Pakistan and the U.K.

Still, ironically, most of the time the administration projects a “just trust me” attitude. And there are those in the U.S. who trust it, and there are those who do not. Unfortunately, there are good reasons not to trust it. There are two reasons government likes terror alerts. Both are self-serving, and neither has anything to do with security.

The first is such a common impulse of bureaucratic self-protection that it has achieved a popular acronym in government circles: CYA. If the worst happens and another attack occurs, the American public isn’t going to be as sympathetic to the current administration as it was last time. After the September 11th attacks, the public reaction was primarily shock and disbelief. In response, the government vowed to fight the terrorists. They passed the draconian USA PATRIOT Act, invaded two countries, and spent hundreds of billions of dollars. Next time, the public reaction will quickly turn into anger, and those in charge will need to explain why they failed. The public is going to demand to know what the government knew and why it didn’t warn people, and they’re not going to look kindly on someone who says: “We didn’t think the threat was serious enough to warn people.” Issuing threat warnings is a way to cover themselves. “What did you expect?” they’ll say. “We told you it was Code Orange.”

The second purpose is even more self-serving: Terror threat warnings are a publicity tool. They’re a method of keeping terrorism in people’s minds. Terrorist attacks on American soil are rare, and unless the topic stays in the news, people will move on to other concerns. There is, of course, a hierarchy to these things. Threats against U.S. soil are most important, threats against Americans abroad are next, and terrorist threats–even actual terrorist attacks–against foreigners in foreign countries are largely ignored.

Since the September 11th attacks, Republicans have made “tough on terror” the centerpiece of their reelection strategies. Study after study has shown that Americans who are worried about terrorism are more likely to vote Republican. In 2002, Karl Rove specifically told Republican legislators to run on that platform, and strength in the face of the terrorist threat is the basis of Bush’s reelection campaign. For that strategy to work, people need to be reminded constantly about the terrorist threat and how the current government is keeping them safe.

It has to be the right terrorist threat, though. Last month someone exploded a pipe bomb in a stem-cell research center near Boston, but the administration didn’t denounce this as a terrorist attack. In April 2003, the FBI disrupted a major terrorist plot in the U.S., arresting William Krar and seizing automatic weapons, pipe bombs, bombs disguised as briefcases, and at least one cyanide bomb–an actual chemical weapon. But because Krar was a member of a white supremacist group and not Muslim, Ashcroft didn’t hold a press conference, Tom Ridge didn’t announce how secure the homeland was, and Bush never mentioned it.

Threat warnings can be a potent tool in the fight against terrorism–when there is a specific threat at a specific moment. There are times when people need to act, and act quickly, in order to increase security. But this is a tool that can easily be abused, and when it’s abused it loses its effectiveness.

It’s instructive to look at the European countries that have been dealing with terrorism for decades, like the United Kingdom, Ireland, France, Italy, and Spain. None of these has a color-coded terror-alert system. None calls a press conference on the strength of “chatter.” Even Israel, which has seen more terrorism than any other nation in the world, issues terror alerts only when there is a specific imminent attack and they need people to be vigilant. And these alerts include specific times and places, with details people can use immediately. They’re not dissimilar from hurricane warnings.

A terror alert that instills a vague feeling of dread or panic echoes the very tactics of the terrorists. There are essentially two ways to terrorize people. The first is to do something spectacularly horrible, like flying airplanes into skyscrapers and killing thousands of people. The second is to keep people living in fear with the threat of doing something horrible. Decades ago, that was one of the IRA’s major aims. Inadvertently, the DHS is achieving the same thing.

There’s another downside to incessant threat warnings, one that happens when everyone realizes that they have been abused for political purposes. Call it the “Boy Who Cried Wolf” problem. After too many false alarms, the public will become inured to them. Already this has happened. Many Americans ignore terrorist threat warnings; many even ridicule them. The Bush administration lost considerable respect when it was revealed that August’s New York/Washington warning was based on three-year-old information. And the more recent warning that terrorists might target cheap prescription drugs from Canada was assumed universally to be politics-as-usual.

Repeated warnings do more harm than good, by needlessly creating fear and confusion among those who still trust the government, and anesthetizing everyone else to any future alerts that might be important. And every false alarm makes the next terror alert less effective.

Fighting global terrorism is difficult, and it’s not something that should be played for political gain. Countries that have been dealing with terrorism for decades have realized that much of the real work happens outside of public view, and that often the most important victories are the most secret. The elected officials of these countries take the time to explain this to their citizens, who in return have a realistic view of what the government can and can’t do to keep them safe.

By making terrorism the centerpiece of his reelection campaign, President Bush and the Republicans play a very dangerous game. They’re making many people needlessly fearful. They’re attracting the ridicule of others, both domestically and abroad. And they’re distracting themselves from the serious business of actually keeping Americans safe.

This article was originally published in the October 2004 edition of The Rake

Posted on October 4, 2004 at 7:08 PM
Sidebar photo of Bruce Schneier by Joe MacInnis.