Entries Tagged "brain"

The Psychology of Security

I just posted a long essay (pdf available here) on my website, exploring how psychology can help explain the difference between the feeling of security and the reality of security.

We make security trade-offs, large and small, every day. We make them when we decide to lock our doors in the morning, when we choose our driving route, and when we decide whether we’re going to pay for something via check, credit card, or cash. They’re often not the only factor in a decision, but they’re a contributing factor. And most of the time, we don’t even realize it. We make security trade-offs intuitively. Most decisions are default decisions, and there have been many popular books that explore reaction, intuition, choice, and decision-making.

These intuitive choices are central to life on this planet. Every living thing makes security trade-offs, mostly as a species—evolving this way instead of that way—but also as individuals. Imagine a rabbit sitting in a field, eating clover. Suddenly, he spies a fox. He’s going to make a security trade-off: should I stay or should I flee? The rabbits that are good at making these trade-offs are going to live to reproduce, while the rabbits that are bad at it are going to get eaten or starve. This means that, as a successful species on the planet, humans should be really good at making security trade-offs.

And yet at the same time we seem hopelessly bad at it. We get it wrong all the time. We exaggerate some risks while minimizing others. We exaggerate some costs while minimizing others. Even simple trade-offs we get wrong, wrong, wrong—again and again. A Vulcan studying human security behavior would shake his head in amazement.

The truth is that we’re not hopelessly bad at making security trade-offs. We are very well adapted to dealing with the security environment endemic to hominids living in small family groups on the highland plains of East Africa. It’s just that the environment in New York in 2006 is different from Kenya circa 100,000 BC. And so our feeling of security diverges from the reality of security, and we get things wrong.

The essay examines particular brain heuristics, how they work and how they fail, in an attempt to explain why our feeling of security so often diverges from reality. I’m giving a talk on the topic at the RSA Conference today at 3:00 PM. Dark Reading posted an article on this, also discussed on Slashdot. CSO Online also has a podcast interview with me on the topic. I expect there’ll be more press coverage this week.

The essay is really still in draft, and I would very much appreciate any and all comments, criticisms, additions, corrections, suggestions for further research, and so on. I think security technology has a lot to learn from psychology, and that I’ve only scratched the surface of the interesting and relevant research—and what it means.

EDITED TO ADD (2/7): Two more articles on the topic.

Posted on February 6, 2007 at 1:44 PM

Perceived Risk vs. Actual Risk

I’ve written repeatedly about the difference between perceived and actual risk, and how it explains many seemingly perverse security trade-offs. Here’s a Los Angeles Times op-ed that does the same. The author is Daniel Gilbert, a psychology professor at Harvard. (I recently finished his book Stumbling on Happiness, which is not a self-help book but a book about how the brain works. Strongly recommended.)

The op-ed is about the public’s reaction to the risks of global warming and terrorism, but the points he makes are much more general. He gives four reasons why some risks are perceived to be more or less serious than they actually are:

  1. We over-react to intentional actions, and under-react to accidents, abstract events, and natural phenomena.

    That’s why we worry more about anthrax (with an annual death toll of roughly zero) than influenza (with an annual death toll of a quarter-million to a half-million people). Influenza is a natural accident, anthrax is an intentional action, and the smallest action captures our attention in a way that the largest accident doesn’t. If two airplanes had been hit by lightning and crashed into a New York skyscraper, few of us would be able to name the date on which it happened.

  2. We over-react to things that offend our morals.

    When people feel insulted or disgusted, they generally do something about it, such as whacking each other over the head, or voting. Moral emotions are the brain’s call to action.

    He doesn’t say it, but it’s reasonable to assume that we under-react to things that don’t.

  3. We over-react to immediate threats and under-react to long-term threats.

    The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That’s what brains did for several hundred million years—and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.

    Our ability to duck that which is not yet coming is one of the brain’s most stunning innovations, and we wouldn’t have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.

  4. We under-react to changes that occur slowly and over time.

    The human brain is exquisitely sensitive to changes in light, sound, temperature, pressure, size, weight and just about everything else. But if the rate of change is slow enough, the change will go undetected. If the low hum of a refrigerator were to increase in pitch over the course of several weeks, the appliance could be singing soprano by the end of the month and no one would be the wiser.

It’s interesting to compare this to what I wrote in Beyond Fear (pages 26-27) about perceived vs. actual risk:

  • People exaggerate spectacular but rare risks and downplay common risks. They worry more about earthquakes than they do about slipping on the bathroom floor, even though the latter kills far more people than the former. Similarly, terrorism causes far more anxiety than common street crime, even though the latter claims many more lives. Many people believe that their children are at risk of being given poisoned candy by strangers at Halloween, even though there has been no documented case of this ever happening.
  • People have trouble estimating risks for anything not exactly like their normal situation. Americans worry more about the risk of mugging in a foreign city, no matter how much safer it might be than where they live back home. Europeans routinely perceive the U.S. as being full of guns. Men regularly underestimate how risky a situation might be for an unaccompanied woman. The risks of computer crime are generally believed to be greater than they are, because computers are relatively new and the risks are unfamiliar. Middle-class Americans can be particularly naïve and complacent; their lives are incredibly secure most of the time, so their instincts about the risks of many situations have been dulled.
  • Personified risks are perceived to be greater than anonymous risks. Joseph Stalin said, “A single death is a tragedy, a million deaths is a statistic.” He was right; large numbers have a way of blending into each other. The final death toll from 9/11 was less than half of the initial estimates, but that didn’t make people feel less at risk. People gloss over statistics of automobile deaths, but when the press writes page after page about nine people trapped in a mine—complete with human-interest stories about their lives and families—suddenly everyone starts paying attention to the dangers with which miners have contended for centuries. Osama bin Laden represents the face of Al Qaeda, and has served as the personification of the terrorist threat. Even if he were dead, it would serve the interests of some politicians to keep him “alive” for his effect on public opinion.
  • People underestimate risks they willingly take and overestimate risks in situations they can’t control. When people voluntarily take a risk, they tend to underestimate it. When they have no choice but to take the risk, they tend to overestimate it. Terrorists are scary because they attack arbitrarily, and from nowhere. Commercial airplanes are perceived as riskier than automobiles, because the controls are in someone else’s hands—even though they’re much safer per passenger mile. Similarly, people overestimate even more those risks that they can’t control but think they, or someone, should. People worry about airplane crashes not because we can’t stop them, but because we think as a society we should be capable of stopping them (even if that is not really the case). While we can’t really prevent criminals like the two snipers who terrorized the Washington, DC, area in the fall of 2002 from killing, most people think we should be able to.
  • Last, people overestimate risks that are being talked about and remain an object of public scrutiny. News, by definition, is about anomalies. Endless numbers of automobile crashes hardly make news like one airplane crash does. The West Nile virus outbreak in 2002 killed very few people, but it worried many more because it was in the news day after day. AIDS kills about 3 million people per year worldwide—about three times as many people each day as died in the terrorist attacks of 9/11. If a lunatic goes back to the office after being fired and kills his boss and two coworkers, it’s national news for days. If the same lunatic shoots his ex-wife and two kids instead, it’s local news…maybe not even the lead story.

Posted on November 3, 2006 at 7:18 AM
