Entries Tagged "psychology of security"


Eyewitness Identification Reform

According to this article, “Mistaken eyewitness identification is the leading cause of wrongful convictions.” Given what I’ve been reading recently about memory and the brain, this does not surprise me at all.

New Mexico is currently debating a bill reforming eyewitness identification procedures:

Under the proposed regulations, an eyewitness must provide a written description before a lineup takes place; there must be at least six individuals in a live lineup and 10 photos in a photographic line-up; and the members of the lineup must be shown sequentially rather than simultaneously.

The bill would also restrict the amount of time in which law enforcement could bring a suspect by for a physical identification by a victim or witness to within one hour after the crime was reported. Anything beyond one hour would require a lineup with multiple photos or people.

I don’t have access to any of the psychological or criminology studies that back these reforms up, but the bill is being supported by the right sorts of people.

Posted on February 7, 2007 at 6:38 AM

The Psychology of Security

I just posted a long essay (pdf available here) on my website, exploring how psychology can help explain the difference between the feeling of security and the reality of security.

We make security trade-offs, large and small, every day. We make them when we decide to lock our doors in the morning, when we choose our driving route, and when we decide whether we’re going to pay for something via check, credit card, or cash. They’re often not the only factor in a decision, but they’re a contributing factor. And most of the time, we don’t even realize it. We make security trade-offs intuitively. Most decisions are default decisions, and there have been many popular books that explore reaction, intuition, choice, and decision.

These intuitive choices are central to life on this planet. Every living thing makes security trade-offs, mostly as a species—evolving this way instead of that way—but also as individuals. Imagine a rabbit sitting in a field, eating clover. Suddenly, he spies a fox. He’s going to make a security trade-off: should I stay or should I flee? The rabbits that are good at making these trade-offs are going to live to reproduce, while the rabbits that are bad at it are going to get eaten or starve. This means that, as a successful species on the planet, humans should be really good at making security trade-offs.

And yet at the same time we seem hopelessly bad at it. We get it wrong all the time. We exaggerate some risks while minimizing others. We exaggerate some costs while minimizing others. Even simple trade-offs we get wrong, wrong, wrong—again and again. A Vulcan studying human security behavior would shake his head in amazement.

The truth is that we’re not hopelessly bad at making security trade-offs. We are very well adapted to dealing with the security environment endemic to hominids living in small family groups on the highland plains of East Africa. It’s just that the environment in New York in 2006 is different from Kenya circa 100,000 BC. And so our feeling of security diverges from the reality of security, and we get things wrong.

The essay examines particular brain heuristics, how they work and how they fail, in an attempt to explain why our feeling of security so often diverges from reality. I’m giving a talk on the topic at the RSA Conference today at 3:00 PM. Dark Reading posted an article on this, also discussed on Slashdot. CSO Online also has a podcast interview with me on the topic. I expect there’ll be more press coverage this week.

The essay is really still in draft, and I would very much appreciate any and all comments, criticisms, additions, corrections, suggestions for further research, and so on. I think security technology has a lot to learn from psychology, and that I’ve only scratched the surface of the interesting and relevant research—and what it means.

EDITED TO ADD (2/7): Two more articles on topic.

Posted on February 6, 2007 at 1:44 PM

In Praise of Security Theater

While visiting some friends and their new baby in the hospital last week, I noticed an interesting bit of security. To prevent infant abduction, all babies had RFID tags attached to their ankles by a bracelet. There are sensors on the doors to the maternity ward, and if a baby passes through, an alarm goes off.

Infant abduction is rare, but still a risk. In the last 22 years, about 233 such abductions have occurred in the United States. About 4 million babies are born each year, which means that a baby has a 1-in-375,000 chance of being abducted. Compare this with the infant mortality rate in the U.S.—one in 145—and it becomes clear where the real risks are.

And the 1-in-375,000 chance is not today’s risk. Infant abduction rates have plummeted in recent years, mostly due to education programs at hospitals.
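The odds quoted above are easy to check with back-of-the-envelope arithmetic. This sketch uses only the figures from the post itself (233 abductions over 22 years, 4 million births per year, 1-in-145 infant mortality):

```python
# Figures taken directly from the post above.
abductions = 233              # U.S. infant abductions over the period
years = 22
births_per_year = 4_000_000

abductions_per_year = abductions / years           # ~10.6 per year
odds = births_per_year / abductions_per_year       # ~1 in 378,000 (post rounds to 375,000)

# Compare against the 1-in-145 infant mortality rate.
mortality_ratio = odds / 145                       # how many times likelier death is than abduction

print(f"abduction odds: 1 in {odds:,.0f}")
print(f"infant mortality is ~{mortality_ratio:,.0f}x more likely than abduction")
```

The ratio makes the post's point concrete: an infant is a couple of thousand times more likely to die of other causes than to be abducted.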

So why are hospitals bothering with RFID bracelets? I think they’re primarily to reassure the mothers. Many times during my friends’ stay at the hospital the doctors had to take the baby away for this or that test. Millions of years of evolution have forged a strong bond between new parents and new baby; the RFID bracelets are a low-cost way to ensure that the parents are more relaxed when their baby is out of their sight.

Security is both a reality and a feeling. The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures. We know the infant abduction rates and how well the bracelets reduce those rates. We also know the cost of the bracelets, and can thus calculate whether they’re a cost-effective security measure or not. But security is also a feeling, based on individual psychological reactions to both the risks and the countermeasures. And the two things are different: You can be secure even though you don’t feel secure, and you can feel secure even though you’re not really secure.
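The cost-effectiveness calculation described above can be sketched with expected values. Every number below except the 1-in-375,000 baseline is a made-up assumption for illustration; the point is the shape of the calculation, not the figures:

```python
# All inputs except baseline_risk are hypothetical, for illustration only.
births_per_year = 2_000           # babies delivered at one hospital (assumed)
baseline_risk = 1 / 375_000       # per-birth abduction probability (from the post)
effectiveness = 0.9               # assumed fraction of abductions the bracelets prevent
cost_per_bracelet = 10.0          # assumed dollars per RFID tag and monitoring

expected_prevented = births_per_year * baseline_risk * effectiveness
annual_cost = births_per_year * cost_per_bracelet
cost_per_abduction_prevented = annual_cost / expected_prevented

print(f"expected abductions prevented per year: {expected_prevented:.4f}")
print(f"cost per abduction prevented: ${cost_per_abduction_prevented:,.0f}")
```

Under these assumptions the hospital spends millions of dollars per abduction prevented, which is why the bracelets are hard to justify on the reality of security alone.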

The RFID bracelets are what I’ve come to call security theater: security primarily designed to make you feel more secure. I’ve regularly maligned security theater as a waste, but it’s not always, and not entirely, so.

It’s only a waste if you consider the reality of security exclusively. There are times when people feel less secure than they actually are. In those cases—like with mothers and the threat of baby abduction—a palliative countermeasure that primarily increases the feeling of security is just what the doctor ordered.

Tamper-resistant packaging for over-the-counter drugs started to appear in the 1980s, in response to some highly publicized poisonings. As a countermeasure, it’s largely security theater. It’s easy to poison many foods and over-the-counter medicines right through the seal—with a syringe, for example—or to open and replace the seal well enough that an unwary consumer won’t detect it. But in the 1980s, there was a widespread fear of random poisonings in over-the-counter medicines, and tamper-resistant packaging brought people’s perceptions of the risk more in line with the actual risk: minimal.

Much of the post-9/11 security can be explained by this as well. I’ve often talked about the National Guard troops in airports right after the terrorist attacks, and the fact that they had no bullets in their guns. As a security countermeasure, it made little sense for them to be there. They didn’t have the training necessary to improve security at the checkpoints, or even to be another useful pair of eyes. But to reassure a jittery public that it’s OK to fly, it was probably the right thing to do.

Security theater also addresses the ancillary risk of lawsuits. Lawsuits are ultimately decided by juries, or settled because of the threat of jury trial, and juries are going to decide cases based on their feelings as well as the facts. It’s not enough for a hospital to point to infant abduction rates and rightly claim that RFID bracelets aren’t worth it; the other side is going to put a weeping mother on the stand and make an emotional argument. In these cases, security theater provides real security against the legal threat.

Like real security, security theater has a cost. It can cost money, time, concentration, freedoms and so on. It can come at the cost of reducing the things we can do. Most of the time security theater is a bad trade-off, because the costs far outweigh the benefits. But there are instances when a little bit of security theater makes sense.

We make smart security trade-offs—and by this I mean trade-offs for genuine security—when our feeling of security closely matches the reality. When the two are out of alignment, we get security wrong. Security theater is no substitute for security reality, but, used correctly, security theater can be a way of raising our feeling of security so that it more closely matches the reality of security. It makes us feel more secure handing our babies off to doctors and nurses, buying over-the-counter medicines, and flying on airplanes—closer to how secure we should feel if we had all the facts and did the math correctly.

Of course, too much security theater and our feeling of security becomes greater than the reality, which is also bad. And others—politicians, corporations and so on—can use security theater to make us feel more secure without doing the hard work of actually making us secure. That’s the usual way security theater is used, and why I so often malign it.

But to write off security theater completely is to ignore the feeling of security. And as long as people are involved with security trade-offs, that’s never going to work.

This essay appeared on Wired.com, and is dedicated to my new godson, Nicholas Quillen Perry.

EDITED TO ADD: This essay has been translated into Portuguese.

Posted on January 25, 2007 at 5:50 AM

Surveillance Cameras Catch a Cold-Blooded Killer

I’m in the middle of writing a long essay on the psychology of security. One of the things I’m writing about is the “availability heuristic,” which basically says that the human brain tends to assess the frequency of a class of events based on how easy it is to bring an instance of that class to mind. It explains why people tend to be afraid of the risks that are discussed in the media, or why people are afraid to fly but not afraid to drive.

One of the effects of this heuristic is that people are more persuaded by a vivid example than they are by statistics. The latter might be more useful, but the former is easier to remember.

That’s the context in which I want you to think about this very gripping story about a cold-blooded killer caught by city-wide surveillance cameras.

Federal agents showed Peterman the recordings from that morning. One camera captured McDermott, 48, getting off the bus. A man wearing a light jacket and dark pants got off the same bus, and followed a few steps behind her.

Another camera caught them as they rounded the corner. McDermott didn’t seem to notice the man following her. Halfway down the block, the man suddenly raised his arm and shot her once in the back of the head.

“I’ve seen shooting incidents on video before,” Peterman said, “but the suddenness, and that he did it for no reason at all, was really scary.”

I can write essay after essay about the inefficacy of security cameras. I can talk about trade-offs, and the better ways to spend the money. I can cite statistics and experts and whatever I want. But—used correctly—stories like this one will do more to move public opinion than anything I can do.

Posted on January 10, 2007 at 11:36 AM

Perceived Risk vs. Actual Risk

I’ve written repeatedly about the difference between perceived and actual risk, and how it explains many seemingly perverse security trade-offs. Here’s a Los Angeles Times op-ed that does the same. The author is Daniel Gilbert, psychology professor at Harvard. (I just recently finished his book Stumbling on Happiness, which is not a self-help book but instead about how the brain works. Strongly recommended.)

The op-ed is about the public’s reaction to the risks of global warming and terrorism, but the points he makes are much more general. He gives four reasons why some risks are perceived to be more or less serious than they actually are:

  1. We over-react to intentional actions, and under-react to accidents, abstract events, and natural phenomena.

    That’s why we worry more about anthrax (with an annual death toll of roughly zero) than influenza (with an annual death toll of a quarter-million to a half-million people). Influenza is a natural accident, anthrax is an intentional action, and the smallest action captures our attention in a way that the largest accident doesn’t. If two airplanes had been hit by lightning and crashed into a New York skyscraper, few of us would be able to name the date on which it happened.

  2. We over-react to things that offend our morals.

    When people feel insulted or disgusted, they generally do something about it, such as whacking each other over the head, or voting. Moral emotions are the brain’s call to action.

    He doesn’t say it, but it’s reasonable to assume that we under-react to things that don’t.

  3. We over-react to immediate threats and under-react to long-term threats.

    The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That’s what brains did for several hundred million years—and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.

    Our ability to duck that which is not yet coming is one of the brain’s most stunning innovations, and we wouldn’t have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.

  4. We under-react to changes that occur slowly and over time.

    The human brain is exquisitely sensitive to changes in light, sound, temperature, pressure, size, weight and just about everything else. But if the rate of change is slow enough, the change will go undetected. If the low hum of a refrigerator were to increase in pitch over the course of several weeks, the appliance could be singing soprano by the end of the month and no one would be the wiser.

It’s interesting to compare this to what I wrote in Beyond Fear (pages 26-27) about perceived vs. actual risk:

  • People exaggerate spectacular but rare risks and downplay common risks. They worry more about earthquakes than they do about slipping on the bathroom floor, even though the latter kills far more people than the former. Similarly, terrorism causes far more anxiety than common street crime, even though the latter claims many more lives. Many people believe that their children are at risk of being given poisoned candy by strangers at Halloween, even though there has been no documented case of this ever happening.
  • People have trouble estimating risks for anything not exactly like their normal situation. Americans worry more about the risk of mugging in a foreign city, no matter how much safer it might be than where they live back home. Europeans routinely perceive the U.S. as being full of guns. Men regularly underestimate how risky a situation might be for an unaccompanied woman. The risks of computer crime are generally believed to be greater than they are, because computers are relatively new and the risks are unfamiliar. Middle-class Americans can be particularly naïve and complacent; their lives are incredibly secure most of the time, so their instincts about the risks of many situations have been dulled.
  • Personified risks are perceived to be greater than anonymous risks. Joseph Stalin said, “A single death is a tragedy, a million deaths is a statistic.” He was right; large numbers have a way of blending into each other. The final death toll from 9/11 was less than half of the initial estimates, but that didn’t make people feel less at risk. People gloss over statistics of automobile deaths, but when the press writes page after page about nine people trapped in a mine—complete with human-interest stories about their lives and families—suddenly everyone starts paying attention to the dangers with which miners have contended for centuries. Osama bin Laden represents the face of Al Qaeda, and has served as the personification of the terrorist threat. Even if he were dead, it would serve the interests of some politicians to keep him “alive” for his effect on public opinion.
  • People underestimate risks they willingly take and overestimate risks in situations they can’t control. When people voluntarily take a risk, they tend to underestimate it. When they have no choice but to take the risk, they tend to overestimate it. Terrorists are scary because they attack arbitrarily, and from nowhere. Commercial airplanes are perceived as riskier than automobiles, because the controls are in someone else’s hands—even though they’re much safer per passenger mile. Similarly, people overestimate even more those risks that they can’t control but think they, or someone, should. People worry about airplane crashes not because we can’t stop them, but because we think as a society we should be capable of stopping them (even if that is not really the case). While we can’t really prevent criminals like the two snipers who terrorized the Washington, DC, area in the fall of 2002 from killing, most people think we should be able to.
  • Last, people overestimate risks that are being talked about and remain an object of public scrutiny. News, by definition, is about anomalies. Endless numbers of automobile crashes hardly make news like one airplane crash does. The West Nile virus outbreak in 2002 killed very few people, but it worried many more because it was in the news day after day. AIDS kills about 3 million people per year worldwide—about three times as many people each day as died in the terrorist attacks of 9/11. If a lunatic goes back to the office after being fired and kills his boss and two coworkers, it’s national news for days. If the same lunatic shoots his ex-wife and two kids instead, it’s local news…maybe not even the lead story.

Posted on November 3, 2006 at 7:18 AM

Perceived Risk vs. Actual Risk

Good essay on perceived vs. actual risk. The hook is Mayor Daley of Chicago demanding a no-fly-zone over Chicago in the wake of the New York City airplane crash.

Other politicians (with the spectacular and notable exception of New York City Mayor Michael Bloomberg) and self-appointed “experts” are jumping on the tragic accident—repeat, accident—in New York to sound off again about the “danger” of light aircraft, and how they must be regulated, restricted, banned.

OK, for all of those ranting about “threats” from GA aircraft, we’ll believe that you’re really serious about controlling “threats” when you call for:

  • Banning all vans within cities. A small panel van was used in the first World Trade Center attack. The bomb, which weighed 1,500 pounds, killed six and injured 1,042.
  • Banning all box trucks from cities. Timothy McVeigh’s rented Ryder truck carried a 5,000-pound bomb that killed 168 in Oklahoma City.
  • Banning all semi-trailer trucks. They can carry bombs weighing more than 50,000 pounds.
  • Banning newspapers on subways. That’s how the terrorists hid packages of sarin nerve gas in the Tokyo subway system. They killed 12.
  • Banning backpacks on all buses and subways. That’s how the terrorists got the bombs into the London subway system. They killed 52.
  • Banning all cell phones on trains. That’s how they detonated the bombs in backpacks placed on commuter trains in Madrid. They killed 191.
  • Banning all small pleasure boats on public waterways. That’s how terrorists attacked the USS Cole, killing 17.
  • Banning all heavy or bulky clothing in all public places. That’s how suicide bombers hide their murderous charges. Thousands killed.

Number of people killed by a terrorist attack using a GA aircraft? Zero.

Number of people injured by a terrorist attack using a GA aircraft? Zero.

Property damage from a terrorist attack using a GA aircraft? None.

So Mr. Mayor (and Mr. Governor, Ms. Senator, Mr. Congressman, and Mr. “Expert”), if you’re truly serious about “protecting” the public, advocate all of the bans I’ve listed above. Using the “logic” you apply to general aviation aircraft, you’re forced to conclude that newspapers, winter coats, cell phones, backpacks, trucks, and boats all pose much greater risks to the public.

So be consistent in your logic. If you are dead set on restricting a personal transportation system that carries more passengers than any single airline, reaches more American cities than all the airlines combined, provides employment for 1.3 million American citizens and $160 billion in business “to protect the public,” then restrict or control every other transportation system that the terrorists have demonstrated they can use to kill.

And, on the same topic, why it doesn’t make sense to ban small aircraft from cities as a terrorism defense.

Posted on October 23, 2006 at 10:01 AM

Memoirs of an Airport Security Screener

This person worked as an airport security screener years before 9/11, before the TSA, so hopefully things are different now. It’s a pretty fascinating read, though.

Two things pop out at me. One, as I wrote, it’s a mind-numbingly boring task. And two, the screeners were trained not to find weapons, but to find the particular example weapons that the FAA would test them on.

“How do you know it’s a gun?” he asked me.

“It looks like one,” I said, and was immediately pounded on the back.

“Goddamn right it does. You get over here,” yelled Mike to Will.

“How do you know it’s a gun?”

“I look for the outline of the cartridge and the…” Will started.

“What?”

“The barrel you can see right here,” Will continued, oblivious to his pending doom.

“What the hell are you talking about? That’s not how you find this gun.”

“No sir. It’s how you find any gun, sir,” said Will. I knew right then that this was a disaster.

“Any gun? Any gun? I don’t give a fuck about any gun, dipshit. I care about this gun. The FAA will not test you with another gun. The FAA will never put any gun but this one in the machine. I don’t care if you are a fucking gun nut who can tell the caliber by sniffing the barrel, you look for this gun. THIS ONE.” Mike strode to the test bag and dumped it out at the feet of the metal detector, sending the machine into a frenzy.

“THIS bomb. This knife. I don’t care if you miss a goddamn bazooka and some son of a bitch cuts your throat with a knife you let through as long as you find THIS GUN.”

“But we’re supposed to find,” Will insisted.

“You find what I trained you to find. The other shit doesn’t get taken out of my paycheck when you miss it,” said Mike.

Not exactly the result we’re looking for, but one that makes sense given the economic incentives that were at work.

I sure hope things are different today.

Posted on July 28, 2006 at 6:22 AM

Movie Plot Threat Contest: Status Report

On the first of this month, I announced my (possibly First) Movie-Plot Threat Contest.

Entrants are invited to submit the most unlikely, yet still plausible, terrorist attack scenarios they can come up with.

Your goal: cause terror. Make the American people notice. Inflict lasting damage on the U.S. economy. Change the political landscape, or the culture. The more grandiose the goal, the better.

Assume an attacker profile on the order of 9/11: 20 to 30 unskilled people, and about $500,000 with which to buy skills, equipment, etc.

As of this morning, the blog post has 580 comments. I expected a lot of submissions, but the response has blown me away.

Looking over the different terrorist plots, they seem to fall into several broad categories. The first category consists of attacks against our infrastructure: the food supply, the water supply, the power infrastructure, the telephone system, etc. The idea is to cripple the country by targeting one of the basic systems that make it work.

The second category consists of big-ticket plots. Either they have very public targets—blowing up the Super Bowl, the Oscars, etc.—or they have high-tech components: nuclear waste, anthrax, chlorine gas, a full oil tanker, etc. And they are often complex and hard to pull off. This is the 9/11 idea: a single huge event that affects the entire nation.

The third category consists of low-tech attacks that go on and on. Several people imagined a version of the DC sniper scenario, but with multiple teams. The teams would slowly move around the country, perhaps each team starting up after the previous one was captured or killed. Other people suggested a variant of this with small bombs in random public locations around the country.

(There’s a fourth category: actual movie plots. Some entries are comical, unrealistic, have science fiction premises, etc. I’m not even considering those.)

The better ideas tap directly into public fears. In my book, Beyond Fear, I discuss five different tendencies people have to exaggerate risks: to believe that something is more risky than it actually is.

  1. People exaggerate spectacular but rare risks and downplay common risks.
  2. People have trouble estimating risks for anything not exactly like their normal situation.
  3. Personified risks are perceived to be greater than anonymous risks.
  4. People underestimate risks they willingly take and overestimate risks in situations they can’t control.
  5. People overestimate risks that are being talked about and remain an object of public scrutiny.

The best plot ideas leverage one or more of those tendencies. Big-ticket attacks leverage the first. Infrastructure and low-tech attacks leverage the fourth. And every attack tries to leverage the fifth, especially those attacks that go on and on. I’m willing to bet that when I find a winner, it will be the plot that leverages the greatest number of those tendencies to the best possible advantage.

I also got a bunch of e-mails from people with ideas they thought too terrifying to post publicly. Some of them wouldn’t even tell them to me. I also received e-mails from people accusing me of helping the terrorists by giving them ideas.

But if there’s one thing this contest demonstrates, it’s that good terrorist ideas are a dime a dozen. Anyone can figure out how to cause terror. The hard part is execution.

Some of the submitted plots require minimal skill and equipment. Twenty guys with cars and guns—that sort of thing. Reading through them, you have to wonder why there have been no terrorist attacks in the U.S. since 9/11. I don’t believe the “flypaper theory,” that the terrorists are all in Iraq instead of in the U.S. And despite all the ineffectual security we’ve put in place since 9/11, I’m sure we have had some successes in intelligence and investigation—and have made it harder for terrorists to operate both in the U.S. and abroad.

But mostly, I think terrorist attacks are much harder than most of us think. It’s harder to find willing recruits than we think. It’s harder to coordinate plans. It’s harder to execute those plans. Terrorism is rare, and for all we’ve heard about 9/11 changing the world, it’s still rare.

The submission deadline is the end of this month, so there’s still time to submit your entry. And please read through some of the others and comment on them; I’m curious as to what other people think are the most interesting, compelling, realistic, or effective scenarios.

EDITED TO ADD (4/23): The contest made The New York Times.

Posted on April 22, 2006 at 10:14 AM

Why Phishing Works

Interesting paper.

Abstract:

To build systems shielding users from fraudulent (or phishing) websites, designers need to know which attack strategies work and why. This paper provides the first empirical evidence about which malicious strategies are successful at deceiving general users. We first analyzed a large set of captured phishing attacks and developed a set of hypotheses about why these strategies might work. We then assessed these hypotheses with a usability study in which 22 participants were shown 20 web sites and asked to determine which ones were fraudulent. We found that 23% of the participants did not look at browser-based cues such as the address bar, status bar and the security indicators, leading to incorrect choices 40% of the time. We also found that some visual deception attacks can fool even the most sophisticated users. These results illustrate that standard security indicators are not effective for a substantial fraction of users, and suggest that alternative approaches are needed.

Here’s an article on the paper.

Posted on April 4, 2006 at 2:18 PM
