Entries Tagged "risks"

Mathematical Illiteracy

This may be the stupidest example of risk assessment I’ve ever seen. It’s a video clip from a recent Daily Show about the dangers of the Large Hadron Collider. The segment starts off slow, but then there’s an exchange with high school science teacher Walter L. Wagner, who insists the device has a 50-50 chance of destroying the world:

“If you have something that can happen, and something that won’t necessarily happen, it’s going to either happen or it’s going to not happen, and so the best guess is 1 in 2.”

“I’m not sure that’s how probability works, Walter.”

This is followed by clips of news shows taking the guy seriously.
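
To spell out the fallacy: counting outcomes tells you nothing about their probabilities unless you have some reason to believe the outcomes are equally likely. A lottery ticket also either wins or it doesn’t, but the odds of winning a typical big-jackpot lottery are roughly 1 in 175,000,000, not 1 in 2.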

In related news, almost four-fifths of Americans don’t know that a trillion is a million million, and most think it’s less than that. Is it any wonder we’re having so much trouble with national budget debates?
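
A quick way to feel the difference in scale (a back-of-the-envelope sketch, not from the survey):

    million = 10**6
    trillion = 10**12
    print(trillion // million)        # 1,000,000: a trillion is a million millions
    print(million / 86400)            # ~11.6: a million seconds is under two weeks
    print(trillion / 86400 / 365.25)  # ~31,688: a trillion seconds is about 31,700 years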

Posted on May 4, 2009 at 6:19 AM

Conficker

Conficker’s April Fool’s joke—the huge, menacing build-up and then nothing—is a good case study on how we think about risks, one whose lessons are applicable far outside computer security. Generally, our brains aren’t very good at probability and risk analysis. We tend to use cognitive shortcuts instead of thoughtful analysis. This worked fine for the simple risks we encountered for most of our species’ existence, but it’s less effective against the complex risks society forces us to face today.

We tend to judge the probability of something happening by how easily we can bring examples to mind. It’s why people tend to buy earthquake insurance after an earthquake, when the risk is lowest. It’s why those of us who have been the victims of a crime tend to fear crime more than those who haven’t. And it’s why we fear a repeat of 9/11 more than other types of terrorism.

We fear being murdered, kidnapped, raped and assaulted by strangers, when friends and relatives are far more likely to do those things to us. We worry about plane crashes instead of car crashes, which are far more common. We tend to exaggerate spectacular, strange, and rare events, and downplay more ordinary, familiar, and common ones.

We also respond more to stories than to data. If I show you statistics on crime in New York, you’ll probably shrug and continue your vacation planning. But if a close friend gets mugged there, you’re more likely to cancel your trip.

And specific stories are more convincing than general ones. That is why we buy more insurance against plane accidents than against travel accidents, or accidents in general. Or why, when surveyed, we are willing to pay more for air travel insurance covering “terrorist acts” than “all possible causes”. That is why, in experiments, people judge specific scenarios more likely than more general ones, even when the general ones include the specific ones.

Conficker’s 1 April deadline was precisely the sort of event humans tend to overreact to. It’s a specific threat, which convinces us that it’s credible. It’s a specific date, which focuses our fear. Our natural tendency to exaggerate makes it more spectacular, which further increases our fear. Its repetition by the media makes it even easier to bring to mind. As the story becomes more vivid, it becomes more convincing.

The New York Times called it an “unthinkable disaster”, the television news show 60 Minutes said it could “disrupt the entire internet”, and we at the Guardian warned that it might be a “deadly threat”. Naysayers were few, and drowned out.

The first of April passed without incident, but Conficker is no less dangerous today. About 2.2m computers worldwide are still infected with Conficker.A and B, and about 1.3m more are infected with the nastier Conficker.C. It’s true that on 1 April Conficker.C tried a new trick to update itself, but its authors could have updated the worm using another mechanism any day. In fact, they updated it on 8 April, and can do so again.

And Conficker is just one of many, many dangerous worms being run by criminal organisations. It came with a date and got a lot of press—that 1 April date was more hype than reality—but it’s not particularly special. In short, there are many criminal organisations on the internet using worms and other forms of malware to infect computers. They then use those computers to send spam, commit fraud, and infect more computers. The risks are real and serious. Luckily, keeping your anti-virus software up-to-date and not clicking on strange attachments can keep you pretty secure. Conficker spreads through a Windows vulnerability that was patched in October. You do have automatic update turned on, right?

But people being people, it takes a specific story for us to protect ourselves.

This essay previously appeared in The Guardian.

Posted on April 23, 2009 at 5:50 AM

What to Fear

Nice rundown of the statistics.

The single greatest killer of Americans is the so-called “lifestyle disease.” Somewhere between half a million and a million of us get a short ride in a long hearse every year because of smoking, lousy diets, parking our bodies in front of the TV instead of operating them, and downing yet another six-pack and/or tequila popper.

According to the US Department of Health and Human Services, between 310,000 and 580,000 of us will commit suicide by cigarette this year. Another 260,000 to 470,000 will go in the ground due to poor diet and sedentary lifestyle. And some 85,000 of us will drink to our own departure.

After the person in the mirror, the next most dangerous individual we’re ever likely to encounter is one in a white coat. Something like 200,000 of us will experience “cessation of life” due to medical errors—botched procedures, mis-prescribed drugs and “nosocomial infections.” (The really nasty ones you get from treatment in a hospital or healthcare service unit.)

The next most dangerous encounter the average American is likely to have is with a co-worker with an infection. Or a doorknob, stair railing or restaurant utensil touched by someone with the crud. “Microbial Agents” (read: bugs like flu and pneumonia) will send 75,000 of us to meet the Reaper this year.

If we live through those social encounters, the next greatest danger is “Toxic Agents”—asbestos in our ceiling, lead in our pipes, the stuff we spray on our lawns or pour down our clogged drains. Annual body count from these handy consumer products is around 55,000.

After that, the most dangerous person in our lives is the one behind the wheel. About 42,000 of us will cash in our chips in our rides this year. More than half will do so because we didn’t wear a seat belt. (Lest it wrinkle our suit.)

Some 31,000 of us will commit suicide by intention this year. (As opposed to not fastening our seat belts or smoking, by which we didn’t really mean to kill ourselves.)

About 30,000 of us will die due to our sexual behaviors, through which we’ll contract AIDS or Hepatitis C. Another 20,000 of us will pop off due to illicit drug use.

The next scariest person in our lives is someone we know who’s having a really bad day. Over 16,000 Americans will be murdered this year, most often by a relative or friend.
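
For what it’s worth, here is the same rundown as a quick script, sorted by body count (midpoints used where the text gives a range; these are the post’s rough figures, not authoritative statistics):

    # Rough annual U.S. death tolls, taken from the figures quoted above.
    tolls = {
        "smoking": (310_000 + 580_000) // 2,
        "poor diet and sedentary lifestyle": (260_000 + 470_000) // 2,
        "medical errors": 200_000,
        "alcohol": 85_000,
        "microbial agents (flu, pneumonia)": 75_000,
        "toxic agents": 55_000,
        "motor vehicles": 42_000,
        "suicide": 31_000,
        "sexual behavior (AIDS, hepatitis C)": 30_000,
        "illicit drugs": 20_000,
        "homicide": 16_000,
    }
    for cause, deaths in sorted(tolls.items(), key=lambda kv: -kv[1]):
        print(f"{cause:<36} {deaths:>8,}")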

Posted on April 7, 2009 at 6:14 AM

The Zone of Essential Risk

Bob Blakley makes an interesting point. It’s in the context of eBay fraud, but it’s more general than that.

If you conduct infrequent transactions which are also small, you’ll never lose much money and it’s not worth it to try to protect yourself – you’ll sometimes get scammed, but you’ll have no trouble affording the losses.

If you conduct large transactions, regardless of frequency, each transaction is big enough that it makes sense to insure the transactions or pay an escrow agent. You’ll have occasional experiences of fraud, but you’ll be reimbursed by the insurer or the transactions will be reversed by the escrow agent and you don’t lose anything.

If you conduct small or medium-sized transactions frequently, you can amortize fraud losses using the gains from your other transactions. This is how casinos work; they sometimes lose a hand, but they make it up in the volume.

But if you conduct medium-sized transactions rarely, you’re in trouble. The transactions are big enough so that you care about losses, you don’t have enough transaction volume to amortize those losses, and the cost of insurance or escrow is high enough compared to the value of your transactions that it doesn’t make economic sense to protect yourself.
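
Blakley’s four cases are really a little decision table on transaction size and frequency. A minimal sketch of the logic in code (the thresholds are invented placeholders; what counts as “small”, “large”, or “frequent” depends entirely on your own finances):

    def risk_strategy(size_usd: float, per_year: int) -> str:
        """Map transaction size and frequency onto Blakley's four zones."""
        LARGE = 10_000   # hypothetical cutoff for a "large" transaction
        SMALL = 100      # hypothetical cutoff for a "small" one
        FREQUENT = 50    # hypothetical cutoff for "frequent"

        if size_usd >= LARGE:
            return "insure the transaction or pay an escrow agent"
        if per_year >= FREQUENT:
            return "self-insure: amortize losses across your volume"  # the casino strategy
        if size_usd <= SMALL:
            return "accept the occasional loss; protection costs more than it saves"
        return "zone of essential risk: losses hurt and protection is uneconomical"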

Posted on March 30, 2009 at 6:50 AM

Fear and the Availability Heuristic

Psychology Today on fear and the availability heuristic:

We use the availability heuristic to estimate the frequency of specific events. For example, how often are people killed by mass murderers? Because higher frequency events are more likely to occur at any given moment, we also use the availability heuristic to estimate the probability that events will occur. For example, what is the probability that I will be killed by a mass murderer tomorrow?

We are especially reliant upon the availability heuristic when we do not have solid evidence on which to base our estimates. For example, what is the probability that the next plane you fly on will crash? The true probability of any particular plane crashing depends on a huge number of factors, most of which you’re not aware of and/or don’t have reliable data on. What type of plane is it? What time of day is the flight? What is the weather like? What is the safety history of this particular plane? When was the last time the plane was examined for problems? Who did the examination and how thorough was it? Who is flying the plane? How much sleep did they get last night? How old are they? Are they taking any medications? You get the idea.

The chances are excellent that you do not have access to all or even most of the information needed to make accurate estimates for just about anything. Indeed, you probably have little or no data on which to base your estimate. Well, that’s not exactly true. In fact, there is one piece of evidence that you always have access to: your memory. Specifically, how easily can you recall previous incidents of the event in question? The easier time we have recalling prior incidents, the greater probability the event has of occurring—at least as far as our minds are concerned. In a nutshell, this is the availability heuristic.

[…]

Although there are many problems associated with the availability heuristic, perhaps the most concerning one is that it often leads people to lose sight of life’s real dangers. Psychologist Gerd Gigerenzer, for example, conducted a fascinating study that showed that in the months following September 11, 2001, Americans were less likely to travel by air and more likely to instead travel by car. While it is understandable why Americans would have been fearful of air travel following the incredibly high profile attacks on New York and Washington, the unfortunate result is that Americans died on the highways at alarming rates following 9/11. This is because highway travel is far more dangerous than air travel. More than 40,000 Americans are killed every year on America’s roads. Fewer than 1,000 people die in airplane accidents, and even fewer people are killed aboard commercial airlines.

[…]

Consider, for example, that the 2009 budget for homeland security (the folks that protect us from terrorists) will likely be about $50 billion. Don’t get us wrong, we like the fact that people are trying to prevent terrorism, but even at its absolute worst, terrorists killed about 3,000 Americans in a single year. And fewer than 100 Americans are killed by terrorists in most years. By contrast, the budget for the National Highway Traffic Safety Administration (the folks who protect us on the road) is about $1 billion, even though more than 40,000 people will die this year on the nation’s roads. In terms of dollars spent per fatality, we fund terrorism prevention at about $17,000,000/fatality (i.e., $50 billion/3,000 fatalities) and accident prevention at about $25,000/fatality (i.e., $1 billion/40,000 fatalities).
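
The arithmetic in that last paragraph checks out (figures as quoted; both budgets are approximations):

    homeland_security_budget = 50e9  # 2009 estimate, from the excerpt above
    terrorism_deaths = 3_000         # worst single year
    nhtsa_budget = 1e9               # approximate annual budget
    road_deaths = 40_000             # per year, approximate

    print(homeland_security_budget / terrorism_deaths)  # ~$16,700,000 per fatality
    print(nhtsa_budget / road_deaths)                   # $25,000 per fatality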

I’ve written about this sort of thing here.

Posted on March 23, 2009 at 12:31 PM

Evaluating Risks of Low-Probability High-Cost Events

“Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes,” by Toby Ord, Rafaela Hillerbrand, and Anders Sandberg.

Abstract:

Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.
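
The core move is the law of total probability: if A is the event that the expert’s argument is sound and X the catastrophe, then P(X) = P(X|A)P(A) + P(X|not-A)P(not-A), so a tiny published estimate P(X|A) can be swamped by the chance that the argument itself is flawed. A minimal numeric sketch (the numbers are illustrative, not the paper’s):

    p_flawed = 1e-3        # chance the expert's argument is flawed somewhere
    p_x_if_sound = 1e-9    # the published estimate, valid only if the argument holds
    p_x_if_flawed = 1e-4   # a (very uncertain) fallback estimate if it doesn't

    p_x = (1 - p_flawed) * p_x_if_sound + p_flawed * p_x_if_flawed
    print(p_x)  # ~1e-7, about 100x the published figure: the flaw term dominates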

Posted on February 2, 2009 at 1:26 PM

Does Risk Management Make Sense?

We engage in risk management all the time, but it only makes sense if we do it right.

“Risk management” is just a fancy term for the cost-benefit tradeoff associated with any security decision. It’s what we do when we react to fear, or try to make ourselves feel secure. It’s the fight-or-flight reflex that evolved in primitive fish and remains in all vertebrates. It’s instinctual, intuitive and fundamental to life, and one of the brain’s primary functions.

Some have hypothesized that humans have a “risk thermostat” that tries to maintain some optimal risk level. It explains why we drive our motorcycles faster when we wear a helmet, or are more likely to take up smoking during wartime. It’s our natural risk management in action.

The problem is our brains are intuitively suited to the sorts of risk management decisions endemic to living in small family groups in the East African highlands in 100,000 BC, and not to living in the New York City of 2008. We make systematic risk management mistakes—miscalculating the probability of rare events, reacting more to stories than data, responding to the feeling of security rather than reality, and making decisions based on irrelevant context. And that risk thermostat of ours? It’s not nearly as finely tuned as we might like it to be.

Like a rabbit that responds to an oncoming car with its default predator avoidance behavior—dart left, dart right, dart left, and at the last moment jump—instead of just getting out of the way, our Stone Age intuition doesn’t serve us well in a modern technological society. So when we in the security industry use the term “risk management,” we don’t want you to do it by trusting your gut. We want you to do risk management consciously and intelligently, to analyze the tradeoff and make the best decision.

This means balancing the costs and benefits of any security decision—buying and installing a new technology, implementing a new procedure or forgoing a common precaution. It means allocating a security budget to mitigate different risks by different amounts. It means buying insurance to transfer some risks to others. It’s what businesses do, all the time, about everything. IT security has its own risk management decisions, based on the threats and the technologies.
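
To make that concrete: one common formalization (not from this essay) is annualized loss expectancy, where the expected yearly loss from a risk is its annual probability times its impact, and a countermeasure is worth buying when the loss it prevents exceeds its cost. A minimal sketch with made-up numbers:

    def ale(annual_probability: float, loss_per_incident: float) -> float:
        """Annualized loss expectancy: expected loss per year from one risk."""
        return annual_probability * loss_per_incident

    # Hypothetical breach: 5% annual chance, $400,000 per incident.
    before = ale(0.05, 400_000)   # $20,000/year expected loss
    after = ale(0.01, 400_000)    # a control cuts the probability to 1%
    control_cost = 12_000         # annual cost of that control

    print(before - after - control_cost)  # $4,000/year net benefit: worth buying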

There’s never just one risk, of course, and bad risk management decisions often carry an underlying tradeoff. Terrorism policy in the U.S. is based more on politics than actual security risk, but the politicians who make these decisions are concerned about the risks of not being re-elected.

Many corporate security decisions are made to mitigate the risk of lawsuits rather than address the risk of any actual security breach. And individuals make risk management decisions that consider not only the risks to the corporation, but the risks to their departments’ budgets, and to their careers.

You can’t completely remove emotion from risk management decisions, but the best way to keep risk management focused on the data is to formalize the methodology. That’s what companies that manage risk for a living—insurance companies, financial trading firms and arbitrageurs—try to do. They try to replace intuition with models, and hunches with mathematics.

The problem in the security world is that we often lack the data to do risk management well. Technological risks are complicated and subtle. We don’t know how well our network security will keep the bad guys out, and we don’t know the cost to the company if we don’t keep them out. And the risks change all the time, making the calculations even harder. But this doesn’t mean we shouldn’t try.

You can’t avoid risk management; it’s fundamental to business just as it is to life. The question is whether you’re going to try to use data or whether you’re going to just react based on emotions, hunches and anecdotes.

This essay appeared as the first half of a point-counterpoint with Marcus Ranum in Information Security magazine.

Posted on October 14, 2008 at 1:25 PM

Taleb on the Limitations of Risk Management

Nice paragraph on the limitations of risk management in this occasionally interesting interview with Nassim Nicholas Taleb:

Because then you get a Maginot Line problem. [After World War I, the French erected concrete fortifications to prevent Germany from invading again—a response to the previous war, which proved ineffective for the next one.] You know, they make sure they solve that particular problem, the Germans will not invade from here. The thing you have to be aware of most obviously is scenario planning, because typically if you talk about scenarios, you’ll overestimate the probability of these scenarios. If you examine them at the expense of those you don’t examine, sometimes it has left a lot of people worse off, so scenario planning can be bad. I’ll just take my track record. Those who did scenario planning have not fared better than those who did not do scenario planning. A lot of people have done some kind of “make-sense” type measures, and that has made them more vulnerable because they give the illusion of having done your job. This is the problem with risk management. I always come back to a classical question. Don’t give a fool the illusion of risk management. Don’t ask someone to guess the number of dentists in Manhattan after asking him the last four digits of his Social Security number. The numbers will always be correlated. I actually did some work on risk management, to show how stupid we are when it comes to risk.

Posted on October 3, 2008 at 7:48 AM
