Entries Tagged "psychology of security"


Conspiracy Theories

Fascinating New Scientist article (for subscribers only, but there’s a copy here) on conspiracy theories, and why we believe them:

So what kind of thought processes contribute to belief in conspiracy theories? A study I carried out in 2002 explored a way of thinking sometimes called “major event – major cause” reasoning. Essentially, people often assume that an event with substantial, significant or wide-ranging consequences is likely to have been caused by something substantial, significant or wide-ranging.

I gave volunteers variations of a newspaper story describing an assassination attempt on a fictitious president. Those who were given the version where the president died were significantly more likely to attribute the event to a conspiracy than those who read the one where the president survived, even though all other aspects of the story were equivalent.

To appreciate why this form of reasoning is seductive, consider the alternative: major events having minor or mundane causes—for example, the assassination of a president by a single, possibly mentally unstable, gunman, or the death of a princess because of a drunk driver. This presents us with a rather chaotic and unpredictable relationship between cause and effect. Instability makes most of us uncomfortable; we prefer to imagine we live in a predictable, safe world, so in a strange way, some conspiracy theories offer us accounts of events that allow us to retain a sense of safety and predictability.

Other research has examined how the way we search for and evaluate evidence affects our belief systems. Numerous studies have shown that in general, people give greater attention to information that fits with their existing beliefs, a tendency called “confirmation bias.” Reasoning about conspiracy theories follows this pattern, as shown by research I carried out with Marco Cinnirella at Royal Holloway, University of London, which we presented at the British Psychological Society conference in 2005.

The study, which again involved giving volunteers fictional accounts of an assassination attempt, showed that conspiracy believers found new information to be more plausible if it was consistent with their beliefs. Moreover, believers considered that ambiguous or neutral information fitted better with the conspiracy explanation, while non-believers felt it fitted better with the non-conspiracy account. The same piece of evidence can be used by different people to support very different accounts of events.

This fits with the observation that conspiracy theories often mutate over time in light of new or contradicting evidence. So, for instance, if some new information appears to undermine a conspiracy theory, either the plot is changed to make it consistent with the new information, or the theorists question the legitimacy of the new information. Theorists often argue that those who present such information are themselves embroiled in the conspiracy. In fact, because of my research, I have been accused of being secretly in the pay of various western intelligence services (I promise, I haven’t seen a penny).

Lots of good stuff in the article, including instructions on how to create your own conspiracy theory.

Posted on August 14, 2007 at 6:17 AM

MRI Lie Detectors

Long and interesting article on fMRI lie detectors.

I was particularly struck by this paragraph, about why people are bad at detecting lies:

Maureen O’Sullivan, a deception researcher at the University of San Francisco, studies why humans are so bad at recognizing lies. Many people, she says, base assessments of truthfulness on irrelevant factors, such as personality or appearance. “Baby-faced, non-weird, and extroverted people are more likely to be judged truthful,” she says. (Maybe this explains my trust in Steve Glass.) People are also blinkered by the “truthfulness bias”: the vast majority of questions we ask of other people—the time, the price of the breakfast special—are answered honestly, and truth is therefore our default expectation. Then, there’s the “learning-curve problem.” We don’t have a refined idea of what a successful lie looks and sounds like, since we almost never receive feedback on the fibs that we’ve been told; the co-worker who, at the corporate retreat, assured you that she loved your presentation doesn’t usually reveal later that she hated it. As O’Sullivan puts it, “By definition, the most convincing lies go undetected.”

EDITED TO ADD (8/28): The New York Times has an article on the topic.

Posted on July 25, 2007 at 6:26 AM

Correspondent Inference Theory

Two people are sitting in a room together: an experimenter and a subject. The experimenter gets up and closes the door, and the room becomes quieter. The subject is likely to believe that the experimenter’s purpose in closing the door was to make the room quieter.

This is an example of correspondent inference theory. People tend to infer the motives—and also the disposition—of someone who performs an action based on the effects of his actions, and not on external or situational factors. If you see someone violently hitting someone else, you assume it’s because he wanted to—and is a violent person—and not because he’s play-acting. If you read about someone getting into a car accident, you assume it’s because he’s a bad driver and not because he was simply unlucky. And—more importantly for this column—if you read about a terrorist, you assume that terrorism is his ultimate goal.

It’s not always this easy, of course. If someone chooses to move to Seattle instead of New York, is it because of the climate, the culture or his career? Edward Jones and Keith Davis, who advanced this theory in the 1960s and 1970s, proposed a theory of “correspondence” to describe the extent to which this effect predominates. When an action has a high correspondence, people tend to infer the motives of the person directly from the action: e.g., hitting someone violently. When the action has a low correspondence, people tend not to make the assumption: e.g., moving to Seattle.

Like most cognitive biases, correspondent inference theory makes evolutionary sense. In a world of simple actions and base motivations, it’s a good rule of thumb that allows a creature to rapidly infer the motivations of another creature. (He’s attacking me because he wants to kill me.) Even in sentient and social creatures like humans, it makes a lot of sense most of the time. If you see someone violently hitting someone else, it’s reasonable to assume that he’s a violent person. Cognitive biases aren’t bad; they’re sensible rules of thumb.

But like all cognitive biases, correspondent inference theory fails sometimes. And one place it fails pretty spectacularly is in our response to terrorism. Because terrorism often results in the horrific deaths of innocents, we mistakenly infer that the horrific deaths of innocents is the primary motivation of the terrorist, and not the means to a different end.

I found this interesting analysis in a paper by Max Abrahms in International Security. “Why Terrorism Does Not Work” (.PDF) analyzes the political motivations of 28 terrorist groups: the complete list of “foreign terrorist organizations” designated by the U.S. Department of State since 2001. He lists 42 policy objectives of those groups, and finds that they achieved them only 7 percent of the time.
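As a sanity check on that headline number (assuming, as the 7 percent figure implies, that roughly 3 of the 42 objectives were achieved — the paper's own tally, not a figure stated here), the arithmetic is simple:

```python
# Back-of-the-envelope check of Abrahms' reported success rate.
# 42 policy objectives across 28 groups; a 7 percent success rate
# implies roughly 3 achieved objectives (assumed, not stated here).
objectives = 42
achieved = 3

success_rate = achieved / objectives
print(f"{success_rate:.1%}")  # 7.1%, consistent with the reported 7 percent
```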

According to the data, terrorism is more likely to work if 1) the terrorists attack military targets more often than civilian ones, and 2) if they have minimalist goals like evicting a foreign power from their country or winning control of a piece of territory, rather than maximalist objectives like establishing a new political system in the country or annihilating another nation. But even so, terrorism is a pretty ineffective means of influencing policy.

There’s a lot to quibble about in Abrahms’ methodology, but he seems to be erring on the side of crediting terrorist groups with success. (Hezbollah’s objectives of expelling both the peacekeepers and Israel from Lebanon count as successes, but so does the Tamil Tigers’ “limited success” in establishing a Tamil state.) Still, he provides good data to support what was until recently common knowledge: Terrorism doesn’t work.

This is all interesting stuff, and I recommend that you read the paper for yourself. But to me, the most insightful part is when Abrahms uses correspondent inference theory to explain why terrorist groups that primarily attack civilians do not achieve their policy goals, even if they are minimalist. Abrahms writes:

The theory posited here is that terrorist groups that target civilians are unable to coerce policy change because terrorism has an extremely high correspondence. Countries believe that their civilian populations are attacked not because the terrorist group is protesting unfavorable external conditions such as territorial occupation or poverty. Rather, target countries infer from the short-term consequences of terrorism—the deaths of innocent civilians, mass fear, loss of confidence in the government to offer protection, economic contraction, and the inevitable erosion of civil liberties—the objectives of the terrorist group. In short, target countries view the negative consequences of terrorist attacks on their societies and political systems as evidence that the terrorists want them destroyed. Target countries are understandably skeptical that making concessions will placate terrorist groups believed to be motivated by these maximalist objectives.

In other words, terrorism doesn’t work, because it makes people less likely to acquiesce to the terrorists’ demands, no matter how limited they might be. The reaction to terrorism has an effect completely opposite to what the terrorists want; people simply don’t believe those limited demands are the actual demands.

This theory explains, with a clarity I have never seen before, why so many people make the bizarre claim that al Qaeda terrorism—or Islamic terrorism in general—is “different”: that while other terrorist groups might have policy objectives, al Qaeda’s primary motivation is to kill us all. This is something we have heard from President Bush again and again—Abrahms has a page of examples in the paper—and is a rhetorical staple in the debate. (You can see a lot of it in the comments to this previous essay.)

In fact, Bin Laden’s policy objectives have been surprisingly consistent. Abrahms lists four; here are six from former CIA analyst Michael Scheuer’s book Imperial Hubris:

  1. End U.S. support of Israel
  2. Force American troops out of the Middle East, particularly Saudi Arabia
  3. End the U.S. occupation of Afghanistan and (subsequently) Iraq
  4. End U.S. support of other countries’ anti-Muslim policies
  5. End U.S. pressure on Arab oil companies to keep prices low
  6. End U.S. support for “illegitimate” (i.e. moderate) Arab governments, like Pakistan

Although Bin Laden has complained that Americans have completely misunderstood the reason behind the 9/11 attacks, correspondent inference theory postulates that he’s not going to convince people. Terrorism, and 9/11 in particular, has such a high correspondence that people use the effects of the attacks to infer the terrorists’ motives. In other words, since Bin Laden caused the death of a couple of thousand people in the 9/11 attacks, people assume that must have been his actual goal, and he’s just giving lip service to what he claims are his goals. Even Bin Laden’s actual objectives are ignored as people focus on the deaths, the destruction and the economic impact.

Perversely, Bush’s misinterpretation of terrorists’ motives actually helps prevent them from achieving their goals.

None of this is meant to either excuse or justify terrorism. In fact, it does the exact opposite, by demonstrating why terrorism doesn’t work as a tool of persuasion and policy change. But we’re more effective at fighting terrorism if we understand that it is a means to an end and not an end in itself; it requires us to understand the true motivations of the terrorists and not just their particular tactics. And the more our own cognitive biases cloud that understanding, the more we mischaracterize the threat and make bad security trade-offs.

This is my 46th essay for Wired.com, based on a paper I blogged about last week (there are a lot of good comments to that blog post).

Posted on July 12, 2007 at 12:59 PM

Why Terrorism Doesn't Work

This is an interesting paper on the efficacy of terrorism:

This study analyzes the political plights of twenty-eight terrorist groups—the complete list of foreign terrorist organizations (FTOs) as designated by the U.S. Department of State since 2001. The data yield two unexpected findings. First, the groups accomplished their forty-two policy objectives only 7 percent of the time. Second, although the groups achieved certain types of policy objectives more than others, the key variable for terrorist success was a tactical one: target selection. Groups whose attacks on civilian targets outnumbered attacks on military targets systematically failed to achieve their policy objectives, regardless of their nature.

The author believes that correspondent inference theory explains this. Basically, the theory says that people infer the motives of an actor based on the consequences of the action. So people assume that the motives of a terrorist are wanton death and destruction, and not the stated aims of the terrorist group:

The theory posited here is that terrorist groups that target civilians are unable to coerce policy change because terrorism has an extremely high correspondence. Countries believe that their civilian populations are attacked not because the terrorist group is protesting unfavorable external conditions such as territorial occupation or poverty. Rather, target countries infer from the short-term consequences of terrorism—the deaths of innocent citizens, mass fear, loss of confidence in the government to offer protection, economic contraction, and the inevitable erosion of civil liberties—the objectives of the terrorist group. In short, target countries view the negative consequences of terrorist attacks on their societies and political systems as evidence that the terrorists want them destroyed. Target countries are understandably skeptical that making concessions will placate terrorist groups believed to be motivated by these maximalist objectives.

This certainly explains a great deal about the U.S.’s reaction to the 9/11 attacks. Many people—along with our politicians and press—believe that al Qaeda terrorism is different, and they’re just out to kill us all. (In fact, I’m sure I’ll get blog comments along those lines.) The paper examines this belief: where it came from, how it manifested itself, and why it is wrong.

Posted on July 3, 2007 at 6:21 AM

Essay on Fear

“The only thing we have to fear is the ‘culture of fear’ itself,” by Frank Furedi:

Fear plays a key role in twenty-first century consciousness. Increasingly, we seem to engage with various issues through a narrative of fear. You could see this trend emerging and taking hold in the last century, which was frequently described as an ‘Age of Anxiety’. But in recent decades, it has become more and better defined, as specific fears have been cultivated.

Posted on June 29, 2007 at 6:38 AM

Rare Risk and Overreactions

Everyone had a reaction to the horrific events of the Virginia Tech shootings. Some of those reactions were rational. Others were not.

A high school student was suspended for customizing a first-person shooter game with a map of his school. A contractor was fired from his government job for talking about a gun, and then visited by the police when he created a comic about the incident. A dean at Yale banned realistic stage weapons from the university theaters—a policy that was reversed within a day. And some teachers terrorized a sixth-grade class by staging a fake gunman attack, without telling them that it was a drill.

These things all happened, even though shootings like this are incredibly rare; even though—for all the press—less than one percent (.pdf) of homicides and suicides of children ages 5 to 19 occur in schools. In fact, these overreactions occurred, not despite these facts, but because of them.

The Virginia Tech massacre is precisely the sort of event we humans tend to overreact to. Our brains aren’t very good at probability and risk analysis, especially when it comes to rare occurrences. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. There’s a lot of research in the psychological community about how the brain responds to risk—some of it I have already written about—but the gist is this: Our brains are much better at processing the simple risks we’ve had to deal with throughout most of our species’ existence, and much poorer at evaluating the complex risks society forces us to face today.

Novelty plus dread equals overreaction.

We can see the effects of this all the time. We fear being murdered, kidnapped, raped and assaulted by strangers, when it’s far more likely that the perpetrator of such offenses is a relative or a friend. We worry about airplane crashes and rampaging shooters instead of automobile crashes and domestic violence—both far more common.

In the United States, dogs, snakes, bees and pigs each kill more people per year (.pdf) than sharks. In fact, dogs kill more humans than any animal except for other humans. Sharks are more dangerous than dogs, yes, but we’re far more likely to encounter dogs than sharks.
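The point generalizes: your annual risk from something is its per-encounter risk multiplied by how often you encounter it. A minimal sketch, with purely invented numbers chosen only to illustrate the shape of the argument:

```python
# Hypothetical illustration: expected incidents per year is
# per-encounter risk multiplied by encounter frequency.
# Every number below is invented for illustration only.

def expected_incidents(risk_per_encounter: float, encounters_per_year: float) -> float:
    return risk_per_encounter * encounters_per_year

dog = expected_incidents(risk_per_encounter=1e-7, encounters_per_year=500)
shark = expected_incidents(risk_per_encounter=1e-4, encounters_per_year=0.01)

# The (hypothetical) dog is a thousand times less dangerous per
# encounter, yet dominates because encounters are so much more frequent.
print(dog > shark)  # True
```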

Our greatest recent overreaction to a rare event was our response to the terrorist attacks of 9/11. I remember then-Attorney General John Ashcroft giving a speech in Minnesota—where I live—in 2003, and claiming that the fact there were no new terrorist attacks since 9/11 was proof that his policies were working. I thought: “There were no terrorist attacks in the two years preceding 9/11, and you didn’t have any policies. What does that prove?”

What it proves is that terrorist attacks are very rare, and maybe our reaction wasn’t worth the enormous expense, loss of liberty, attacks on our Constitution and damage to our credibility on the world stage. Still, overreacting was the natural thing for us to do. Yes, it’s security theater, but it makes us feel safer.

People tend to base risk analysis more on personal story than on data, despite the old joke that “the plural of anecdote is not data.” If a friend gets mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than abstract crime statistics.

We give storytellers we have a relationship with more credibility than strangers, and stories that are close to us more weight than stories from foreign lands. In other words, proximity of relationship affects our risk assessment. And who is everyone’s major storyteller these days? Television. (Nassim Nicholas Taleb’s great book, The Black Swan: The Impact of the Highly Improbable, discusses this.)

Consider the reaction to another event from last month: professional baseball player Josh Hancock got drunk and died in a car crash. As a result, several baseball teams are banning alcohol in their clubhouses after games. Aside from this being a ridiculous reaction to an incredibly rare event (2,430 baseball games per season, 35 people per clubhouse, two clubhouses per game. And how often has this happened?), it makes no sense as a solution. Hancock didn’t get drunk in the clubhouse; he got drunk at a bar. But Major League Baseball needs to be seen as doing something, even if that something doesn’t make sense—even if that something actually increases risk by forcing players to drink at bars instead of at the clubhouse, where there’s more control over the practice.
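To see just how rare, multiply out the exposure figures from that parenthetical (a rough sketch that ignores roster churn and off-days):

```python
# Rough exposure count using the figures in the text:
# 2,430 games per season, 2 clubhouses per game, 35 people per clubhouse.
games_per_season = 2430
clubhouses_per_game = 2
people_per_clubhouse = 35

person_exposures = games_per_season * clubhouses_per_game * people_per_clubhouse
print(person_exposures)  # 170100 person-clubhouse exposures per season

# Measured against a single fatal incident, the per-exposure
# rate is vanishingly small.
print(f"{1 / person_exposures:.6f}")  # 0.000006
```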

I tell people that if it’s in the news, don’t worry about it. The very definition of “news” is “something that hardly ever happens.” It’s when something isn’t in the news, when it’s so common that it’s no longer news—car crashes, domestic violence—that you should start worrying.

But that’s not the way we think. Psychologist Scott Plous said it well in The Psychology of Judgment and Decision Making: “In very general terms: (1) The more available an event is, the more frequent or probable it will seem; (2) the more vivid a piece of information is, the more easily recalled and convincing it will be; and (3) the more salient something is, the more likely it will be to appear causal.”

So, when faced with a very available and highly vivid event like 9/11 or the Virginia Tech shootings, we overreact. And when faced with all the salient related events, we assume causality. We pass the Patriot Act. We think if we give guns out to students, or maybe make it harder for students to get guns, we’ll have solved the problem. We don’t let our children go to playgrounds unsupervised. We stay out of the ocean because we read about a shark attack somewhere.

It’s our brains again. We need to “do something,” even if that something doesn’t make sense; even if it is ineffective. And we need to do something directly related to the details of the actual event. So instead of implementing effective, but more general, security measures to reduce the risk of terrorism, we ban box cutters on airplanes. And we look back on the Virginia Tech massacre with 20-20 hindsight and recriminate ourselves about the things we should have done.

Lastly, our brains need to find someone or something to blame. (Jon Stewart has an excellent bit on the Virginia Tech scapegoat search, and media coverage in general.) But sometimes there is no scapegoat to be found; sometimes we did everything right, but just got unlucky. We simply can’t prevent a lone nutcase from shooting people at random; there’s no security measure that would work.

As circular as it sounds, rare events are rare primarily because they don’t occur very often, and not because of any preventive security measures. And implementing security measures to make these rare events even rarer is like the joke about the guy who stomps around his house to keep the elephants away.

“Elephants? There are no elephants in this neighborhood,” says a neighbor.

“See how well it works!”

If you want to do something that makes security sense, figure out what’s common among a bunch of rare events, and concentrate your countermeasures there. Focus on the general risk of terrorism, and not the specific threat of airplane bombings using liquid explosives. Focus on the general risk of troubled young adults, and not the specific threat of a lone gunman wandering around a college campus. Ignore the movie-plot threats, and concentrate on the real risks.

This essay originally appeared on Wired.com, my 42nd essay on that site.

EDITED TO ADD (6/5): Archiloque has translated this essay into French.

EDITED TO ADD (6/14): The British academic risk researcher Prof. John Adams wrote an insightful essay on this topic called “What Kills You Matters—Not Numbers.”

Posted on May 17, 2007 at 2:16 PM

Teenagers and Risk Assessment

In an article on auto-asphyxiation, there’s commentary on teens and risk:

But the new debate also coincides with a reassessment of how teenagers think about risk. Conventional wisdom said adolescents often flirted with the edges of danger because they felt invulnerable.

Newer studies have dismissed that notion. They say that most teenagers are quite cool-headed in assessing risk and reward—and that is what sometimes gets them in trouble. Adults, by contrast, are more likely to rely on experience or gut feelings than rational calculation.

Asked whether it would ever make sense to play Russian roulette for a million dollars, for example, most adults immediately say no, said Valerie F. Reyna, a professor of human development and psychology at Cornell University.

But when Professor Reyna asks teenagers the same question in intervention sessions to teach smarter risk-taking behavior, they often stop to calculate or debate, she said—what exactly would the odds be of getting the chamber with the bullet?

“I use the example to try to get them to see that thinking rationally like that doesn’t always lead to rational choices,” she said.

Of course, reality is always more complicated. We can invent fictional scenarios where it makes sense to play that game of Russian roulette. Imagine you have terminal cancer, and that million dollars would make a huge difference to your survivors. You might very well take the risk.
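The teenagers’ instinct to “run the numbers” can even be made explicit, and doing so shows why the calculation is so treacherous: the answer hinges entirely on a number nobody can defend — the value you assign to dying. A minimal expected-value sketch, with hypothetical dollar values for illustration only:

```python
# Expected-value sketch of the Russian-roulette question.
# The dollar values assigned to death below are hypothetical;
# the point is that the "rational" answer flips depending on
# an essentially arbitrary input.
P_DEATH = 1 / 6
PRIZE = 1_000_000

def expected_value(value_of_death: float) -> float:
    return (1 - P_DEATH) * PRIZE + P_DEATH * value_of_death

# Price death at -$6M and the gamble looks bad; price it at
# -$1M and it looks (misleadingly) like a good bet.
print(expected_value(-6_000_000) < 0)  # True
print(expected_value(-1_000_000) > 0)  # True
```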

Posted on March 29, 2007 at 6:48 AM
