Entries Tagged "cost-benefit analysis"
DHS Data Privacy and Integrity Advisory Committee's Report on REAL ID
The Data Privacy and Integrity Advisory Committee of the Department of Homeland Security has issued an excellent report on REAL ID:
The REAL ID Act is one of the largest identity management undertakings in history. It would bring more than 200 million people from a large, diverse, and mobile country within a uniformly defined identity system, jointly operated by state governments. This has never been done before in the USA, and it raises numerous policy, privacy, and data security issues that have had only brief scrutiny, particularly given the scope and scale of the undertaking.
It is critical that specific issues be carefully considered before developing and deploying a uniform identity management system in the 21st century. These include, but are not limited to, the implementation costs, the privacy consequences, the security of stored identity documents and personal information, redress and fairness, “mission creep”, and, perhaps most importantly, provisions for national security protections.
The Department of Homeland Security’s Notice of Proposed Rulemaking touched on some of these issues, though it did not explore them in the depth necessary for a system of such magnitude and such consequence. Given that these issues have not received adequate consideration, the Committee feels it is important to note that the following comments do not constitute an endorsement of REAL ID or the regulations as workable or appropriate.
I’ve written about REAL ID here.
Tactics, Targets, and Objectives
If you encounter an aggressive lion, stare him down. But not a leopard; avoid his gaze at all costs. In both cases, back away slowly; don’t run. If you stumble on a pack of hyenas, run and climb a tree; hyenas can’t climb trees. But don’t do that if you’re being chased by an elephant; he’ll just knock the tree down. Stand still until he forgets about you.
I spent the last few days on safari in a South African game park, and this was just some of the security advice we were all given. What’s interesting about this advice is how well-defined it is. The defenses might not be terribly effective—you still might get eaten, gored or trampled—but they’re your best hope. Doing something else isn’t advised, because animals do the same things over and over again. These are security countermeasures against specific tactics.
Lions and leopards learn tactics that work for them, and I was taught tactics to defend myself. Humans are intelligent, and that means we are more adaptable than animals. But we’re also, generally speaking, lazy and stupid; and, like a lion or hyena, we will repeat tactics that work. Pickpockets use the same tricks over and over again. So do phishers, and school shooters. If improvised explosive devices didn’t work often enough, Iraqi insurgents would do something else.
So security against people generally focuses on tactics as well.
A friend of mine recently asked me where she should hide her jewelry in her apartment, so that burglars wouldn’t find it. Burglars tend to look in the same places all the time—dresser tops, night tables, dresser drawers, bathroom counters—so hiding valuables somewhere else is more likely to be effective, especially against a burglar who is pressed for time. Leave decoy cash and jewelry in an obvious place so a burglar will think he’s found your stash and then leave. Again, there’s no guarantee of success, but it’s your best hope.
The key to these countermeasures is to find the pattern: the common attack tactic that is worth defending against. That takes data. A single instance of an attack that didn’t work—liquid bombs, shoe bombs—or one instance that did—9/11—is not a pattern. Implementing defensive tactics against them is the same as my safari guide saying: “We’ve only ever heard of one tourist encountering a lion. He stared it down and survived. Another tourist tried the same thing with a leopard, and he got eaten. So when you see a lion….” The advice I was given was based on thousands of years of collective wisdom from people encountering African animals again and again.
Compare this with the Transportation Security Administration’s approach. With every unique threat, TSA implements a countermeasure with no basis to say that it helps, or that the threat will ever recur.
Furthermore, human attackers can adapt more quickly than lions. A lion won’t learn that he should ignore people who stare him down, and eat them anyway. But people will learn. Burglars now know the common “secret” places people hide their valuables—the toilet, cereal boxes, the refrigerator and freezer, the medicine cabinet, under the bed—and look there. I told my friend to find a different secret place, and to put decoy valuables in a more obvious place.
This is the arms race of security. Common attack tactics result in common countermeasures. Eventually, those countermeasures will be evaded and new attack tactics developed. These, in turn, require new countermeasures. You can easily see this in the constant arms race that is credit card fraud, ATM fraud or automobile theft.
The result of these tactic-specific security countermeasures is to make the attacker go elsewhere. For the most part, the attacker doesn’t particularly care about the target. Lions don’t care who or what they eat; to a lion, you’re just a conveniently packaged bag of protein. Burglars don’t care which house they rob, and terrorists don’t care who they kill. If your countermeasure makes the lion attack an impala instead of you, or if your burglar alarm makes the burglar rob the house next door instead of yours, that’s a win for you.
Tactics matter less if the attacker is after you personally. If, for example, you have a priceless painting hanging in your living room and the burglar knows it, he’s not going to rob the house next door instead—even if you have a burglar alarm. He’s going to figure out how to defeat your system. Or he’ll stop you at gunpoint and force you to open the door. Or he’ll pose as an air-conditioner repairman. What matters is the target, and a good attacker will consider a variety of tactics to reach his target.
This approach requires a different kind of countermeasure, but it’s still well-understood in the security world. For people, it’s what alarm companies, insurance companies and bodyguards specialize in. President Bush needs a different level of protection against targeted attacks than Bill Gates does, and I need a different level of protection than either of them. It would be foolish of me to hire bodyguards in case someone was targeting me for robbery or kidnapping. Yes, I would be more secure, but it’s not a good security trade-off.
Al-Qaida terrorism is different yet again. The goal is to terrorize. It doesn’t care about the target, and it doesn’t follow any consistent pattern of tactics, either. Given that, the best way to spend our counterterrorism dollar is on intelligence, investigation and emergency response. And to refuse to be terrorized.
These measures are effective because they don’t assume any particular tactic, and they don’t assume any particular target. We should only apply specific countermeasures when the cost-benefit ratio makes sense (reinforcing airplane cockpit doors) or when a specific tactic is repeatedly observed (lions attacking people who don’t stare them down). Otherwise, general countermeasures are far more effective a defense.
This essay originally appeared on Wired.com.
EDITED TO ADD (6/14): Learning behavior in tigers.
Rare Risk and Overreactions
Everyone had a reaction to the horrific events of the Virginia Tech shootings. Some of those reactions were rational. Others were not.
A high school student was suspended for customizing a first-person shooter game with a map of his school. A contractor was fired from his government job for talking about a gun, and then visited by the police when he created a comic about the incident. A dean at Yale banned realistic stage weapons from the university theaters—a policy that was reversed within a day. And some teachers terrorized a sixth-grade class by staging a fake gunman attack, without telling them that it was a drill.
These things all happened, even though shootings like this are incredibly rare; even though—for all the press—less than one percent of homicides and suicides of children ages 5 to 19 occur in schools. In fact, these overreactions occurred, not despite these facts, but because of them.
The Virginia Tech massacre is precisely the sort of event we humans tend to overreact to. Our brains aren’t very good at probability and risk analysis, especially when it comes to rare occurrences. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. There’s a lot of research in the psychological community about how the brain responds to risk—some of it I have already written about—but the gist is this: Our brains are much better at processing the simple risks we’ve had to deal with throughout most of our species’ existence, and much poorer at evaluating the complex risks society forces us to face today.
Novelty plus dread equals overreaction.
We can see the effects of this all the time. We fear being murdered, kidnapped, raped and assaulted by strangers, when it’s far more likely that the perpetrator of such offenses is a relative or a friend. We worry about airplane crashes and rampaging shooters instead of automobile crashes and domestic violence—both far more common.
In the United States, dogs, snakes, bees and pigs each kill more people per year than sharks. In fact, dogs kill more humans than any animal except for other humans. Sharks are more dangerous than dogs, yes, but we’re far more likely to encounter dogs than sharks.
Our greatest recent overreaction to a rare event was our response to the terrorist attacks of 9/11. I remember then-Attorney General John Ashcroft giving a speech in Minnesota—where I live—in 2003, claiming that the fact that there were no new terrorist attacks since 9/11 was proof that his policies were working. I thought: “There were no terrorist attacks in the two years preceding 9/11, and you didn’t have any policies. What does that prove?”
What it proves is that terrorist attacks are very rare, and maybe our reaction wasn’t worth the enormous expense, loss of liberty, attacks on our Constitution and damage to our credibility on the world stage. Still, overreacting was the natural thing for us to do. Yes, it’s security theater, but it makes us feel safer.
People tend to base risk analysis more on personal story than on data, despite the old joke that “the plural of anecdote is not data.” If a friend gets mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than abstract crime statistics.
We give storytellers we have a relationship with more credibility than strangers, and stories that are close to us more weight than stories from foreign lands. In other words, proximity of relationship affects our risk assessment. And who is everyone’s major storyteller these days? Television. (Nassim Nicholas Taleb’s great book, The Black Swan: The Impact of the Highly Improbable, discusses this.)
Consider the reaction to another event from last month: professional baseball player Josh Hancock got drunk and died in a car crash. As a result, several baseball teams are banning alcohol in their clubhouses after games. Aside from being a ridiculous reaction to an incredibly rare event (2,430 baseball games per season, two clubhouses per game, roughly 35 people per clubhouse; and how often has this happened?), it makes no sense as a solution. Hancock didn’t get drunk in the clubhouse; he got drunk at a bar. But Major League Baseball needs to be seen as doing something, even if that something doesn’t make sense—even if that something actually increases risk by forcing players to drink at bars instead of in the clubhouse, where there’s more control over the practice.
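The parenthetical figures above imply an enormous number of exposures per fatal incident. A rough back-of-the-envelope sketch (the figures come from the text; the multiplication and its interpretation are mine):

```python
# Rough count of player-clubhouse-game exposures per MLB season,
# using the figures cited in the essay.
games_per_season = 2430
clubhouses_per_game = 2
people_per_clubhouse = 35  # approximate

# Each exposure is one person, in one clubhouse, after one game --
# an opportunity for the kind of incident the ban is meant to prevent.
exposures = games_per_season * clubhouses_per_game * people_per_clubhouse
print(exposures)  # 170100
```

Even one incident per season against roughly 170,000 exposures is the kind of base rate the essay argues does not justify a blanket countermeasure.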
I tell people that if it’s in the news, don’t worry about it. The very definition of “news” is “something that hardly ever happens.” It’s when something isn’t in the news, when it’s so common that it’s no longer news—car crashes, domestic violence—that you should start worrying.
But that’s not the way we think. Psychologist Scott Plous said it well in The Psychology of Judgment and Decision Making: “In very general terms: (1) The more available an event is, the more frequent or probable it will seem; (2) the more vivid a piece of information is, the more easily recalled and convincing it will be; and (3) the more salient something is, the more likely it will be to appear causal.”
So, when faced with a very available and highly vivid event like 9/11 or the Virginia Tech shootings, we overreact. And when faced with all the salient related events, we assume causality. We pass the Patriot Act. We think if we give guns out to students, or maybe make it harder for students to get guns, we’ll have solved the problem. We don’t let our children go to playgrounds unsupervised. We stay out of the ocean because we read about a shark attack somewhere.
It’s our brains again. We need to “do something,” even if that something doesn’t make sense; even if it is ineffective. And we need to do something directly related to the details of the actual event. So instead of implementing effective, but more general, security measures to reduce the risk of terrorism, we ban box cutters on airplanes. And we look back on the Virginia Tech massacre with 20-20 hindsight and recriminate ourselves about the things we should have done.
Lastly, our brains need to find someone or something to blame. (Jon Stewart has an excellent bit on the Virginia Tech scapegoat search, and media coverage in general.) But sometimes there is no scapegoat to be found; sometimes we did everything right, but just got unlucky. We simply can’t prevent a lone nutcase from shooting people at random; there’s no security measure that would work.
As circular as it sounds, rare events are rare primarily because they don’t occur very often, and not because of any preventive security measures. And implementing security measures to make these rare events even rarer is like the joke about the guy who stomps around his house to keep the elephants away.
“Elephants? There are no elephants in this neighborhood,” says a neighbor.
“See how well it works!”
If you want to do something that makes security sense, figure out what’s common among a bunch of rare events, and concentrate your countermeasures there. Focus on the general risk of terrorism, and not the specific threat of airplane bombings using liquid explosives. Focus on the general risk of troubled young adults, and not the specific threat of a lone gunman wandering around a college campus. Ignore the movie-plot threats, and concentrate on the real risks.
This essay originally appeared on Wired.com, my 42nd essay on that site.
EDITED TO ADD (6/5): Archiloque has translated this essay into French.
EDITED TO ADD (6/14): The British academic risk researcher Prof. John Adams wrote an insightful essay on this topic called “What Kills You Matters—Not Numbers.”
Dan Geer on Trade-Offs and Monoculture
In the April 2007 issue of Queue, Dan Geer writes about security trade-offs, monoculture, and genetic diversity in honeybees:
Security people are never in charge unless an acute embarrassment has occurred. Otherwise, their advice is tempered by “economic reality,” which is to say that security is means, not an end. This is as it should be. Since means are about tradeoffs, security is about tradeoffs, but you already knew that.
Attackers Exploiting Security Procedures
In East Belfast, burglars called in a bomb threat. Residents evacuated their homes, and then the burglars proceeded to rob eight empty houses on the block.
I’ve written about this sort of thing before: sometimes security procedures themselves can be exploited by attackers. It was Step 4 of my “five-step process” from Beyond Fear (pages 14-15). A national ID card makes identity theft more lucrative; forcing people to remove their laptops at airport security checkpoints makes laptop theft more common.
Moral: you can’t just focus on one threat. You need to look at the broad spectrum of threats, and pay attention to how security against one affects the others.
Childhood Safety vs. Childhood Health
Another example of how we get the risks wrong:
Although statistics show that rates of child abduction and sexual abuse have marched steadily downward since the early 1990s, fear of these crimes is at an all-time high. Even the panic-inducing Megan’s Law Web site says stranger abduction is rare and that 90 percent of child sexual-abuse cases are committed by someone known to the child. Yet we still suffer a crucial disconnect between perception of crime and its statistical reality. A child is almost as likely to be struck by lightning as kidnapped by a stranger, but it’s not fear of lightning strikes that parents cite as the reason for keeping children indoors watching television instead of out on the sidewalk skipping rope.
And when a child is parked on the living room floor, he or she may be safe, but is safety the sole objective of parenting? The ultimate goal is independence, and independence is best fostered by handing it out a little at a time, not by withholding it in a trembling fist that remains clenched until it’s time to move into the dorms.
Meanwhile, as rates of child abduction and abuse move down, rates of Type II diabetes, hypertension and other obesity-related ailments in children move up. That means not all the candy is coming from strangers. Which scenario should provoke more panic: the possibility that your child might become one of the approximately 100 children who are kidnapped by strangers each year, or one of the country’s 58 million overweight adults?
Sky Marshals in Australia
Their cost-effectiveness is being debated:
They’ve cost the taxpayer $106 million so far, they travel in business class, and over the past four years Australia’s armed air marshals have had to act only once—subduing a 68-year-old man who produced a small knife on a flight from Sydney to Cairns in 2003.
I have not seen any similar cost analysis from the United States.
"Stop and Frisks" in New York
Interesting data from New York. The number of people stopped and searched has gone up fivefold since 2002, but the number of arrests due to these stops has only doubled. (The number of “summonses” has also gone up fivefold.)
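Put another way: if stops rose fivefold while arrests only doubled, the arrest rate per stop fell to two-fifths of its former value. A minimal sketch (the 2002 baseline numbers are hypothetical placeholders; only the fivefold and twofold multipliers come from the text):

```python
# Hypothetical 2002 baseline; only the 5x and 2x multipliers are from the data.
stops_2002 = 100_000
arrests_2002 = 10_000

stops_now = stops_2002 * 5      # stops went up fivefold
arrests_now = arrests_2002 * 2  # arrests only doubled

rate_2002 = arrests_2002 / stops_2002  # arrests per stop, baseline
rate_now = arrests_now / stops_now     # arrests per stop, now
print(rate_now / rate_2002)  # 0.4: the hit rate fell to 40% of its old value
```

Whatever the absolute numbers, the declining hit rate is what makes this useful data for the cost-benefit question.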
Good data for the “Is it worth it?” question.
The Psychology of Security
I just posted a long essay (pdf available here) on my website, exploring how psychology can help explain the difference between the feeling of security and the reality of security.
We make security trade-offs, large and small, every day. We make them when we decide to lock our doors in the morning, when we choose our driving route, and when we decide whether we’re going to pay for something via check, credit card, or cash. They’re often not the only factor in a decision, but they’re a contributing factor. And most of the time, we don’t even realize it. We make security trade-offs intuitively. Most decisions are default decisions, and there have been many popular books that explore reaction, intuition, choice, and decision.
These intuitive choices are central to life on this planet. Every living thing makes security trade-offs, mostly as a species—evolving this way instead of that way—but also as individuals. Imagine a rabbit sitting in a field, eating clover. Suddenly, he spies a fox. He’s going to make a security trade-off: should I stay or should I flee? The rabbits that are good at making these trade-offs are going to live to reproduce, while the rabbits that are bad at it are going to get eaten or starve. This means that, as a successful species on the planet, humans should be really good at making security trade-offs.
And yet at the same time we seem hopelessly bad at it. We get it wrong all the time. We exaggerate some risks while minimizing others. We exaggerate some costs while minimizing others. Even simple trade-offs we get wrong, wrong, wrong—again and again. A Vulcan studying human security behavior would shake his head in amazement.
The truth is that we’re not hopelessly bad at making security trade-offs. We are very well adapted to dealing with the security environment endemic to hominids living in small family groups on the highland plains of East Africa. It’s just that the environment in New York in 2006 is different from Kenya circa 100,000 BC. And so our feeling of security diverges from the reality of security, and we get things wrong.
The essay examines particular brain heuristics, how they work and how they fail, in an attempt to explain why our feeling of security so often diverges from reality. I’m giving a talk on the topic at the RSA Conference today at 3:00 PM. Dark Reading posted an article on this, also discussed on Slashdot. CSO Online also has a podcast interview with me on the topic. I expect there’ll be more press coverage this week.
The essay is really still in draft, and I would very much appreciate any and all comments, criticisms, additions, corrections, suggestions for further research, and so on. I think security technology has a lot to learn from psychology, and that I’ve only scratched the surface of the interesting and relevant research—and what it means.