Entries Tagged "risk assessment"

Rare Risk and Overreactions

Everyone had a reaction to the horrific events of the Virginia Tech shootings. Some of those reactions were rational. Others were not.

A high school student was suspended for customizing a first-person shooter game with a map of his school. A contractor was fired from his government job for talking about a gun, and then visited by the police when he created a comic about the incident. A dean at Yale banned realistic stage weapons from the university theaters—a policy that was reversed within a day. And some teachers terrorized a sixth-grade class by staging a fake gunman attack, without telling them that it was a drill.

These things all happened, even though shootings like this are incredibly rare; even though—for all the press—less than one percent of homicides and suicides of children ages 5 to 19 occur in schools. In fact, these overreactions occurred, not despite these facts, but because of them.

The Virginia Tech massacre is precisely the sort of event we humans tend to overreact to. Our brains aren’t very good at probability and risk analysis, especially when it comes to rare occurrences. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. There’s a lot of research in the psychological community about how the brain responds to risk—some of it I have already written about—but the gist is this: Our brains are much better at processing the simple risks we’ve had to deal with throughout most of our species’ existence, and much poorer at evaluating the complex risks society forces us to face today.

Novelty plus dread equals overreaction.

We can see the effects of this all the time. We fear being murdered, kidnapped, raped and assaulted by strangers, when it’s far more likely that the perpetrator of such offenses is a relative or a friend. We worry about airplane crashes and rampaging shooters instead of automobile crashes and domestic violence—both far more common.

In the United States, dogs, snakes, bees and pigs each kill more people per year than sharks. In fact, dogs kill more humans than any animal except for other humans. Sharks are more dangerous than dogs, yes, but we’re far more likely to encounter dogs than sharks.

Our greatest recent overreaction to a rare event was our response to the terrorist attacks of 9/11. I remember then-Attorney General John Ashcroft giving a speech in Minnesota—where I live—in 2003, and claiming that the fact there were no new terrorist attacks since 9/11 was proof that his policies were working. I thought: “There were no terrorist attacks in the two years preceding 9/11, and you didn’t have any policies. What does that prove?”

What it proves is that terrorist attacks are very rare, and maybe our reaction wasn’t worth the enormous expense, loss of liberty, attacks on our Constitution and damage to our credibility on the world stage. Still, overreacting was the natural thing for us to do. Yes, it’s security theater, but it makes us feel safer.

People tend to base risk analysis more on personal story than on data, despite the old joke that “the plural of anecdote is not data.” If a friend gets mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than abstract crime statistics.

We give storytellers we have a relationship with more credibility than strangers, and stories that are close to us more weight than stories from foreign lands. In other words, proximity of relationship affects our risk assessment. And who is everyone’s major storyteller these days? Television. (Nassim Nicholas Taleb’s great book, The Black Swan: The Impact of the Highly Improbable, discusses this.)

Consider the reaction to another event from last month: professional baseball player Josh Hancock got drunk and died in a car crash. As a result, several baseball teams are banning alcohol in their clubhouses after games. Aside from this being a ridiculous reaction to an incredibly rare event (2,430 baseball games per season, 35 people per clubhouse, two clubhouses per game. And how often has this happened?), it makes no sense as a solution. Hancock didn’t get drunk in the clubhouse; he got drunk at a bar. But Major League Baseball needs to be seen as doing something, even if that something doesn’t make sense—even if that something actually increases risk by forcing players to drink at bars instead of at the clubhouse, where there’s more control over the practice.
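
To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python using the figures above; the incident count of one is an assumption for illustration, not a statistic.

```python
# Back-of-envelope arithmetic for the clubhouse-drinking risk, using the figures above.
# The incident count is an assumption for illustration only.

games_per_season = 2430       # MLB regular-season games
clubhouses_per_game = 2       # home and visiting teams
people_per_clubhouse = 35     # players and staff

exposures = games_per_season * clubhouses_per_game * people_per_clubhouse
incidents = 1                 # hypothetical: one alcohol-related fatality

print(f"{exposures:,} person-clubhouse-games per season")
print(f"Implied rate: roughly 1 in {exposures // incidents:,} exposures")
```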

I tell people that if it’s in the news, don’t worry about it. The very definition of “news” is “something that hardly ever happens.” It’s when something isn’t in the news, when it’s so common that it’s no longer news—car crashes, domestic violence—that you should start worrying.

But that’s not the way we think. Psychologist Scott Plous said it well in The Psychology of Judgment and Decision Making: “In very general terms: (1) The more available an event is, the more frequent or probable it will seem; (2) the more vivid a piece of information is, the more easily recalled and convincing it will be; and (3) the more salient something is, the more likely it will be to appear causal.”

So, when faced with a very available and highly vivid event like 9/11 or the Virginia Tech shootings, we overreact. And when faced with all the salient related events, we assume causality. We pass the Patriot Act. We think if we give guns out to students, or maybe make it harder for students to get guns, we’ll have solved the problem. We don’t let our children go to playgrounds unsupervised. We stay out of the ocean because we read about a shark attack somewhere.

It’s our brains again. We need to “do something,” even if that something doesn’t make sense; even if it is ineffective. And we need to do something directly related to the details of the actual event. So instead of implementing effective, but more general, security measures to reduce the risk of terrorism, we ban box cutters on airplanes. And we look back on the Virginia Tech massacre with 20-20 hindsight and recriminate ourselves about the things we should have done.

Lastly, our brains need to find someone or something to blame. (Jon Stewart has an excellent bit on the Virginia Tech scapegoat search, and media coverage in general.) But sometimes there is no scapegoat to be found; sometimes we did everything right, but just got unlucky. We simply can’t prevent a lone nutcase from shooting people at random; there’s no security measure that would work.

As circular as it sounds, rare events are rare primarily because they don’t occur very often, and not because of any preventive security measures. And implementing security measures to make these rare events even rarer is like the joke about the guy who stomps around his house to keep the elephants away.

“Elephants? There are no elephants in this neighborhood,” says a neighbor.

“See how well it works!”

If you want to do something that makes security sense, figure out what’s common among a bunch of rare events, and concentrate your countermeasures there. Focus on the general risk of terrorism, and not the specific threat of airplane bombings using liquid explosives. Focus on the general risk of troubled young adults, and not the specific threat of a lone gunman wandering around a college campus. Ignore the movie-plot threats, and concentrate on the real risks.

This essay originally appeared on Wired.com, my 42nd essay on that site.

EDITED TO ADD (6/5): Archiloque has translated this essay into French.

EDITED TO ADD (6/14): The British academic risk researcher Prof. John Adams wrote an insightful essay on this topic called “What Kills You Matters—Not Numbers.”

Posted on May 17, 2007 at 2:16 PM

Is Penetration Testing Worth It?

There are security experts who insist penetration testing is essential for network security, and you have no hope of being secure unless you do it regularly. And there are contrarian security experts who tell you penetration testing is a waste of time; you might as well throw your money away. Both of these views are wrong. The reality of penetration testing is more complicated and nuanced.

Penetration testing is a broad term. It might mean breaking into a network to demonstrate you can. It might mean trying to break into a network to document vulnerabilities. It might involve a remote attack, physical penetration of a data center or social engineering attacks. It might use commercial or proprietary vulnerability scanning tools, or rely on skilled white-hat hackers. It might just evaluate software version numbers and patch levels, and make inferences about vulnerabilities.
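
As a rough illustration of that last approach, inferring vulnerabilities from version numbers and patch levels, here is a minimal sketch; the package names, version numbers, and the known-patched table are hypothetical, not any real tool's data.

```python
# A minimal sketch of the simplest kind of "penetration test" described above:
# comparing reported software versions against known-patched minimums.
# Package names and version numbers are hypothetical, for illustration only.

KNOWN_PATCHED = {
    "openssh": (7, 4),
    "apache-httpd": (2, 4, 41),
}

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def flag_outdated(inventory: dict) -> list:
    """Return (name, installed, minimum) for anything older than the patched version."""
    findings = []
    for name, installed in inventory.items():
        minimum = KNOWN_PATCHED.get(name)
        if minimum and parse(installed) < minimum:
            findings.append((name, installed, minimum))
    return findings

# Hypothetical host inventory gathered from service banners or package listings.
print(flag_outdated({"openssh": "6.6", "apache-httpd": "2.4.52"}))
```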

It’s going to be expensive, and you’ll get a thick report when the testing is done.

And that’s the real problem. You really don’t want a thick report documenting all the ways your network is insecure. You don’t have the budget to fix them all, so the document will sit around waiting to make someone look bad. Or, even worse, it’ll be discovered in a breach lawsuit. Do you really want an opposing attorney to ask you to explain why you paid to document the security holes in your network, and then didn’t fix them? Probably the safest thing you can do with the report, after you read it, is shred it.

Given enough time and money, a pen test will find vulnerabilities; there’s no point in proving it. And if you’re not going to fix all the uncovered vulnerabilities, there’s no point uncovering them. But there is a way to do penetration testing usefully. For years I’ve been saying security consists of protection, detection and response—and you need all three to have good security. Before you can do a good job with any of these, you have to assess your security. And done right, penetration testing is a key component of a security assessment.

I like to restrict penetration testing to the most commonly exploited critical vulnerabilities, like those found on the SANS Top 20 list. If you have any of those vulnerabilities, you really need to fix them.

If you think about it, penetration testing is an odd business. Is there an analogue to it anywhere else in security? Sure, militaries run these exercises all the time, but how about in business? Do we hire burglars to try to break into our warehouses? Do we attempt to commit fraud against ourselves? No, we don’t.

Penetration testing has become big business because systems are so complicated and poorly understood. We know about burglars and kidnapping and fraud, but we don’t know about computer criminals. We don’t know what’s dangerous today, and what will be dangerous tomorrow. So we hire penetration testers in the belief they can explain it.

There are two reasons why you might want to conduct a penetration test. One, you want to know whether a certain vulnerability is present because you’re going to fix it if it is. And two, you need a big, scary report to persuade your boss to spend more money. If neither is true, I’m going to save you a lot of money by giving you this free penetration test: You’re vulnerable.

Now, go do something useful about it.

This essay appeared in the March issue of Information Security, as the first half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 15, 2007 at 7:05 AM

Attackers Exploiting Security Procedures

In East Belfast, burglars called in a bomb threat. Residents evacuated their homes, and then the burglars proceeded to rob eight empty houses on the block.

I’ve written about this sort of thing before: sometimes security procedures themselves can be exploited by attackers. It was Step 4 of my “five-step process” from Beyond Fear (pages 14-15). A national ID card makes identity theft more lucrative; forcing people to remove their laptops at airport security checkpoints makes laptop theft more common.

Moral: you can’t just focus on one threat. You need to look at the broad spectrum of threats, and pay attention to how security against one affects the others.

Posted on April 30, 2007 at 12:27 PM

Childhood Safety vs. Childhood Health

Another example of how we get the risks wrong:

Although statistics show that rates of child abduction and sexual abuse have marched steadily downward since the early 1990s, fear of these crimes is at an all-time high. Even the panic-inducing Megan’s Law Web site says stranger abduction is rare and that 90 percent of child sexual-abuse cases are committed by someone known to the child. Yet we still suffer a crucial disconnect between perception of crime and its statistical reality. A child is almost as likely to be struck by lightning as kidnapped by a stranger, but it’s not fear of lightning strikes that parents cite as the reason for keeping children indoors watching television instead of out on the sidewalk skipping rope.

And when a child is parked on the living room floor, he or she may be safe, but is safety the sole objective of parenting? The ultimate goal is independence, and independence is best fostered by handing it out a little at a time, not by withholding it in a trembling fist that remains clenched until it’s time to move into the dorms.

Meanwhile, as rates of child abduction and abuse move down, rates of Type II diabetes, hypertension and other obesity-related ailments in children move up. That means not all the candy is coming from strangers. Which scenario should provoke more panic: the possibility that your child might become one of the approximately 100 children who are kidnapped by strangers each year, or one of the country’s 58 million overweight adults?

Posted on April 12, 2007 at 6:05 AM

Teenagers and Risk Assessment

In an article on auto-asphyxiation, there’s commentary on teens and risk:

But the new debate also coincides with a reassessment of how teenagers think about risk. Conventional wisdom said adolescents often flirted with the edges of danger because they felt invulnerable.

Newer studies have dismissed that notion. They say that most teenagers are quite cool-headed in assessing risk and reward—and that is what sometimes gets them in trouble. Adults, by contrast, are more likely to rely on experience or gut feelings than rational calculation.

Asked whether it would ever make sense to play Russian roulette for a million dollars, for example, most adults immediately say no, said Valerie F. Reyna, a professor of human development and psychology at Cornell University.

But when Professor Reyna asks teenagers the same question in intervention sessions to teach smarter risk-taking behavior, they often stop to calculate or debate, she said—what exactly would the odds be of getting the chamber with the bullet?

“I use the example to try to get them to see that thinking rationally like that doesn’t always lead to rational choices,” she said.

Of course, reality is always more complicated. We can invent fictional scenarios where it makes sense to play that game of Russian roulette. Imagine you have terminal cancer, and that million dollars would make a huge difference to your survivors. You might very well take the risk.
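
A minimal expected-value sketch makes the point; the dollar values placed on one's own life are arbitrary assumptions, chosen only to show how the "rational" answer flips.

```python
# Expected-value sketch of the Russian-roulette question.
# The value placed on one's own life is an arbitrary assumption for illustration.

P_DEATH = 1 / 6          # one loaded chamber out of six
PAYOUT = 1_000_000       # offered prize in dollars

def expected_value(value_of_life: float) -> float:
    """Expected dollar outcome of playing once, treating death as a monetary loss."""
    return (1 - P_DEATH) * PAYOUT - P_DEATH * value_of_life

for life_value in (100_000_000, 10_000_000, 1_000_000):
    ev = expected_value(life_value)
    print(f"life valued at ${life_value:>12,}: expected value ${ev:>14,.0f}")
```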

Posted on March 29, 2007 at 6:48 AM

The Ultimate Movie Plot Threat: Killer Asteroids

There’s not enough money to track them:

NASA officials say the space agency is capable of finding nearly all the asteroids that might pose a devastating hit to Earth, but there isn’t enough money to pay for the task so it won’t get done.

The cost to find at least 90 percent of the 20,000 potentially hazardous asteroids and comets by 2020 would be about $1 billion, according to a report NASA will release later this week. The report was previewed Monday at a Planetary Defense Conference in Washington.

Congress in 2005 asked NASA to come up with a plan to track most killer asteroids and propose how to deflect the potentially catastrophic ones.

“We know what to do, we just don’t have the money,” said Simon “Pete” Worden, director of NASA’s Ames Research Center.

The hardest risks to evaluate are the ones with very low probability of occurring and a very high cost if they do. Large-scale terrorist attacks are like that; so are asteroid collisions.
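
A small sketch shows part of the difficulty: risks with wildly different profiles can carry the same expected annual loss, so the raw numbers alone don't tell you how to react. The probabilities and costs below are made-up figures for illustration only.

```python
# Why low-probability, high-cost risks are hard to reason about:
# two very different risks can have the same expected annual loss.
# All probabilities and costs are made-up numbers for illustration only.

risks = {
    "asteroid-scale catastrophe": (1e-6, 1e12),   # (annual probability, cost in dollars)
    "routine equipment failure":  (0.10, 1e7),
}

for name, (probability, cost) in risks.items():
    expected_loss = probability * cost
    print(f"{name:28s} expected annual loss: ${expected_loss:,.0f}")
```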

Posted on March 22, 2007 at 6:03 AM

U.S. Terrorism Arrests/Convictions Significantly Overstated

Interesting report (long, but at least read the Executive Summary) from the U.S. Department of Justice’s Inspector General that says, basically, that all the U.S. terrorism statistics since 9/11—arrests, convictions, and so on—have been grossly inflated.

As summarized in the following table, we determined that the FBI, EOUSA, and the Criminal Division did not accurately report 24 of the 26 statistics we reviewed.

“EOUSA” is the Executive Office for United States Attorneys, part of the U.S. Department of Justice.

The report gives a series of reasons why the statistics were so bad. Here’s one:

The number of terrorism-related convictions was overstated because the FBI initially coded the investigative cases as terrorism-related when the cases were opened, but did not recode cases when no link to terrorism was established.

And here’s an example of a problem:

For example, Operation Tarmac was a worksite enforcement operation launched in November 2001 at the nation’s airports. During this operation, Department and other federal agents went into regional airports and checked the immigration papers of airport workers. The agents then arrested any individuals who used falsified documents, such as social security numbers, drivers’ licenses, and other identification documents, to gain employment. EOUSA officials told us they believe these defendants are properly coded under the anti-terrorism program activity. We do not agree that law enforcement efforts such as these should be counted as “anti-terrorism” unless the subject or target is reasonably linked to terrorist activity.

There’s an enormous amount of detail in the report, if you want to wade through the 80ish pages of report and another 80ish of appendices.

Posted on February 23, 2007 at 7:13 AM
