Entries Tagged "risk assessment"

Page 9 of 21

Hot Dog Security

A nice dose of risk reality:

Last week, the American Academy of Pediatrics issued a statement calling for large-type warning labels on the foods that kids most commonly choke on—grapes, nuts, carrots, candy and public enemy No. 1: the frank. Then the lead author of the report, pediatric emergency room doctor Gary Smith, went one step further.

He called for a redesign of the hot dog.

The reason, he said, is that hot dogs are “high-risk.” But are they? I mean, I certainly diced my share of Oscar Mayers when my kids were younger, but if once in a while we stopped for a hot dog and I gave it to ’em whole, was I really taking a crazy risk?

Here are the facts: About 61 children each year choke to death on food, or one in a million. Of them, 17 percent—or about 10—choke on franks. So now we are talking 1 in 6 million. This is still tragic; the death of any child is. But to call it “high-risk” means we would have to call pretty much all of life “high-risk.” Especially getting in a car! About 1,300 kids younger than 14 die each year as car passengers, compared with 10 a year from hot dogs.
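The arithmetic behind those figures is worth making explicit. A minimal sketch — the at-risk population of roughly 61 million U.S. children is an assumption inferred from the essay's "one in a million" claim, not stated in the original:

```python
# Back-of-the-envelope risk rates from the essay's figures.
# The ~61 million U.S. children figure is an assumption inferred
# from "61 deaths a year, or one in a million".
children = 61_000_000
food_choking_deaths = 61          # annual child deaths from choking on food
frank_share = 0.17                # fraction of those involving hot dogs

frank_deaths = round(food_choking_deaths * frank_share)

print(f"hot dog choking deaths/year: {frank_deaths}")
print(f"annual food-choking risk: 1 in {children // food_choking_deaths:,}")
print(f"annual hot dog risk: 1 in {round(children / frank_deaths):,}")
```

Run it and the "1 in 6 million" figure falls out: 17 percent of 61 deaths is about 10, and 10 deaths in a population of 61 million is roughly one in 6 million per year.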

What’s happening is that the concept of “risk” is broadening to encompass almost everything a kid ever does, from running to sitting to sleeping. Literally!

There’s a lot of good stuff on this website about how to raise children without being crazy paranoid. She comments on my worst-case thinking essay, too.

Posted on June 17, 2010 at 2:28 PM

Mainstream Cost-Benefit Security Analysis

This essay in The New York Times is refreshingly cogent:

You’ve seen it over and over. At a certain intersection in a certain town, there’ll be an unfortunate accident. A child is hit by a car.

So the public cries out, the town politicians band together, and the next thing you know, they’ve spent $60,000 to install speed bumps, guardrails and a stoplight at that intersection—even if it was clearly an accident—say, a drunk driver—that had nothing to do with the design of the intersection.

I understand the concept; people want to DO something to channel their grief. But rationally, turning that single intersection into a teeming jungle of safety features, while doing nothing for all the other intersections in town, in the state, across the country, doesn’t make a lot of sense.

Another essay from the BBC website:

That poses a difficult ethical dilemma: should government decisions about risk reflect the often irrational foibles of the populace or the rational calculations of sober risk assessment? Should our politicians opt for informed paternalism or respect for irrational preferences?

The volcanic ash cloud is a classic case study. Were the government to allow flights to go ahead when the risks were equal to those of road travel, it is almost certain that, over the course of the year, hundreds of people would die in resulting air accidents, since around 2,500 die on the roads each year.

This is politically unimaginable, not for good, rational reasons, but because people are much more risk averse when it comes to plane travel than they are to driving their own cars.

So, in practice, governments do not make fully rational risk assessments. Their calculations are based partly on cost-benefit analyses, and partly on what the public will tolerate.

Posted on June 11, 2010 at 12:08 PM

Worst-Case Thinking

At a security conference recently, the moderator asked the panel of distinguished cybersecurity leaders what their nightmare scenario was. The answers were the predictable array of large-scale attacks: against our communications infrastructure, against the power grid, against the financial system, in combination with a physical attack.

I didn’t get to give my answer until the afternoon, which was: “My nightmare scenario is that people keep talking about their nightmare scenarios.”

There’s a certain blindness that comes from worst-case thinking. An extension of the precautionary principle, it involves imagining the worst possible outcome and then acting as if it were a certainty. It substitutes imagination for thinking, speculation for risk analysis, and fear for reason. It fosters powerlessness and vulnerability and magnifies social paralysis. And it makes us more vulnerable to the effects of terrorism.

Worst-case thinking means generally bad decision making for several reasons. First, it’s only half of the cost-benefit equation. Every decision has costs and benefits, risks and rewards. By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes.

Second, it’s based on flawed logic. It begs the question by assuming that a proponent of an action must prove that the nightmare scenario is impossible.

Third, it can be used to support any position or its opposite. If we build a nuclear power plant, it could melt down. If we don’t build it, we will run short of power and society will collapse into anarchy. If we allow flights near Iceland’s volcanic ash, planes will crash and people will die. If we don’t, organs won’t arrive in time for transplant operations and people will die. If we don’t invade Iraq, Saddam Hussein might use the nuclear weapons he might have. If we do, we might destabilize the Middle East, leading to widespread violence and death.

Of course, not all fears are equal. Those that we tend to exaggerate are more easily justified by worst-case thinking. So terrorism fears trump privacy fears, and almost everything else; technology is hard to understand and therefore scary; nuclear weapons are worse than conventional weapons; our children need to be protected at all costs; and annihilating the planet is bad. Basically, any fear that would make a good movie plot is amenable to worst-case thinking.

Fourth and finally, worst-case thinking validates ignorance. Instead of focusing on what we know, it focuses on what we don’t know—and what we can imagine.

Remember Defense Secretary Rumsfeld’s quote? “Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.” And this: “the absence of evidence is not evidence of absence.” Ignorance isn’t a cause for doubt; when you can fill that ignorance with imagination, it can be a call to action.

Even worse, it can lead to hasty and dangerous acts. You can’t wait for a smoking gun, so you act as if the gun is about to go off. Rather than making us safer, worst-case thinking has the potential to cause dangerous escalation.

The new undercurrent in this is that our society no longer has the ability to calculate probabilities. Risk assessment is devalued. Probabilistic thinking is repudiated in favor of “possibilistic thinking”: Since we can’t know what’s likely to go wrong, let’s speculate about what can possibly go wrong.

Worst-case thinking leads to bad decisions, bad systems design, and bad security. And we all have direct experience with its effects: airline security and the TSA, which we make fun of when we’re not appalled that they’re harassing 93-year-old women or keeping first graders off airplanes. You can’t be too careful!

Actually, you can. You can refuse to fly because of the possibility of plane crashes. You can lock your children in the house because of the possibility of child predators. You can eschew all contact with people because of the possibility of hurt. Stephen Hawking wants to avoid trying to communicate with aliens because they might be hostile; does he want to turn off all the planet’s television broadcasts because they’re radiating into space? It isn’t hard to parody worst-case thinking, and at its extreme it’s a psychological condition.

Frank Furedi, a sociology professor at the University of Kent, writes: “Worst-case thinking encourages society to adopt fear as one of the dominant principles around which the public, the government and institutions should organize their life. It institutionalizes insecurity and fosters a mood of confusion and powerlessness. Through popularizing the belief that worst cases are normal, it incites people to feel defenseless and vulnerable to a wide range of future threats.”

Even worse, it plays directly into the hands of terrorists, creating a population that is easily terrorized—even by failed terrorist attacks like the Christmas Day underwear bomber and the Times Square SUV bomber.

When someone is proposing a change, the onus should be on them to justify it over the status quo. But worst-case thinking is a way of looking at the world that exaggerates the rare and unusual and gives the rare much more credence than it deserves.

It isn’t really a principle; it’s a cheap trick to justify what you already believe. It lets lazy or biased people make what seem to be cogent arguments without understanding the whole issue. And when people don’t need to refute counterarguments, there’s no point in listening to them.

This essay was originally published on CNN.com, although they stripped out all the links.

Posted on May 13, 2010 at 6:53 AM

9/11 Made Us Safer?

There’s an essay on the Computerworld website that claims I implied, and believe, that it did:

OK, so strictly-speaking, he doesn’t use those exact words, but the implication is certainly clear. In a discussion about why there aren’t more terrorist attacks, he argues that ‘minor’ terrorist plots like the Times Square car bomb are counter-productive for terrorist groups, because “9/11 upped the stakes.”

This comes from an essay of mine that discusses why there have been so few terrorist attacks since 9/11. There’s the primary reason—there aren’t very many terrorists out there—and the secondary reason: terrorist attacks are harder to pull off than popular culture leads people to believe. What he’s talking about above is the tertiary reason: terrorist attacks have a secondary purpose of impressing supporters back home, and 9/11 has upped the stakes in what a flashy terrorist attack is supposed to look like.

From there to 9/11 making us safer is quite a leap, and not one that I expected anyone to make. Certainly a series of events, before, during, and after 9/11, contributed to an environment in which a particular group of terrorists found low-budget terrorist attacks less useful—and I suppose by extension we might be safer because of it. But you’d also have to factor in the risks associated with increased police powers, the NSA spying on all of us without warrants, and the increased disregard for the law we’ve seen out of the U.S. government since 9/11. And even so, that’s a far cry from claiming causality that 9/11 made us safer.

Not that any of this really matters. Compared to the real risks in the world, the risk of terrorism is so small that it’s not worth a lot of worry. As John Mueller pointed out, the risks of terrorism “are similar to the risks of using home appliances (200 deaths per year in the United States) or of commercial aviation (103 deaths per year).”

EDITED TO ADD (5/10): A response from Computerworld.

Posted on May 10, 2010 at 6:15 AM

Why Aren't There More Terrorist Attacks?

As the details of the Times Square car bomb attempt emerge in the wake of Faisal Shahzad’s arrest Monday night, one thing has already been made clear: Terrorism is fairly easy. All you need is a gun or a bomb, and a crowded target. Guns are easy to buy. Bombs are easy to make. Crowded targets—not only in New York, but all over the country—are easy to come by. If you’re willing to die in the aftermath of your attack, you could launch a pretty effective terrorist attack with a few days of planning, maybe less.

But if it’s so easy, why aren’t there more terrorist attacks like the failed car bomb in New York’s Times Square? Or the terrorist shootings in Mumbai? Or the Moscow subway bombings? After the enormous horror and tragedy of 9/11, why have the past eight years been so safe in the U.S.?

There are actually several answers to this question. One, terrorist attacks are harder to pull off than popular imagination—and the movies—lead everyone to believe. Two, there are far fewer terrorists than the political rhetoric of the past eight years leads everyone to believe. And three, random minor terrorist attacks don’t serve Islamic terrorists’ interests right now.

Hard to Pull Off

Terrorism sounds easy, but the actual attack is the easiest part.

Putting together the people, the plot and the materials is hard. It’s hard to sneak terrorists into the U.S. It’s hard to grow your own inside the U.S. It’s hard to operate; the general population, even the Muslim population, is against you.

Movies and television make terrorist plots look easier than they are. It’s hard to hold conspiracies together. It’s easy to make a mistake. Even 9/11, which was planned before the climate of fear that event engendered, just barely succeeded. Today, it’s much harder to pull something like that off without slipping up and getting arrested.

Few Terrorists

But even more important than the difficulty of executing a terrorist attack, there aren’t a lot of terrorists out there. Al-Qaida isn’t a well-organized global organization with movie-plot-villain capabilities; it’s a loose collection of people using the same name. Despite the post-9/11 rhetoric, there isn’t a terrorist cell in every major city. If you think about the major terrorist plots we’ve foiled in the U.S.—the JFK bombers, the Fort Dix plotters—they were mostly amateur terrorist wannabes with no connection to any sort of al-Qaida central command, and mostly no ability to effectively carry out the attacks they planned.

The successful terrorist attacks—the Fort Hood shooter, the guy who flew his plane into the Austin IRS office, the anthrax mailer—were largely nut cases operating alone. Even the unsuccessful shoe bomber, and the equally unsuccessful Christmas Day underwear bomber, had minimal organized help—and that help originated outside the U.S.

Terrorism doesn’t occur without terrorists, and they are far rarer than popular opinion would have it.

Small Attacks Aren’t Enough

Lastly, and perhaps most subtly, there’s not a lot of value in unspectacular terrorism anymore.

If you think about it, terrorism is essentially a PR stunt. The death of innocents and the destruction of property isn’t the goal of terrorism; it’s just the tactic used. And acts of terrorism are intended for two audiences: for the victims, who are supposed to be terrorized as a result, and for the allies and potential allies of the terrorists, who are supposed to give them more funding and generally support their efforts.

An act of terrorism that doesn’t instill terror in the target population is a failure, even if people die. And an act of terrorism that doesn’t impress the terrorists’ allies is not very effective, either.

Fortunately for us and unfortunately for the terrorists, 9/11 upped the stakes. It’s no longer enough to blow up something like the Oklahoma City Federal Building. Terrorists need to blow up airplanes or the Brooklyn Bridge or the Sears Tower or JFK airport—something big to impress the folks back home. Small no-name targets just don’t cut it anymore.

Note that this is very different than terrorism by an occupied population: the IRA in Northern Ireland, Iraqis in Iraq, Palestinians in Israel. Setting aside the actual politics, all of these terrorists believe they are repelling foreign invaders. That’s not the situation here in the U.S.

So, to sum up: If you’re just a loner wannabe who wants to go out with a bang, terrorism is easy. You’re more likely to get caught if you take a long time to plan or involve a bunch of people, but you might succeed. If you’re a representative of al-Qaida trying to make a statement in the U.S., it’s much harder. You just don’t have the people, and you’re probably going to slip up and get caught.

This essay originally appeared on AOL News.

EDITED TO ADD (5/5): A similar sentiment about the economic motivations of terrorists.

Posted on May 5, 2010 at 7:09 AM

Frank Furedi on Worst-Case Thinking

Nice essay by sociologist Frank Furedi on worst-case thinking, exemplified by our reaction to the Icelandic volcano:

I am not a natural scientist, and I claim no authority to say anything of value about the risks posed by volcanic ash clouds to flying aircraft. However, as a sociologist interested in the process of decision-making, it is evident to me that the reluctance to lift the ban on air traffic in Europe is motivated by worst-case thinking rather than rigorous risk assessment. Risk assessment is based on an attempt to calculate the probability of different outcomes. Worst-case thinking—these days known as ‘precautionary thinking’—is based on an act of imagination. It imagines the worst-case scenario and then takes action on that basis. In the case of the Icelandic volcano, fears that particles in the ash cloud could cause aeroplane engines to shut down automatically mutated into a conclusion that this would happen. So it seems to me to be the fantasy of the worst-case scenario rather than risk assessment that underpins the current official ban on air traffic.

[…]

Worst-case thinking encourages society to adopt fear as one of the key principles around which the public, the government and various institutions should organise their lives. It institutionalises insecurity and fosters a mood of confusion and powerlessness. Through popularising the belief that worst cases are normal, it also encourages people to feel defenceless and vulnerable to a wide range of future threats. In all but name, it is an invitation to social paralysis. The eruption of a volcano in Iceland poses technical problems, for which responsible decision-makers should swiftly come up with sensible solutions. But instead, Europe has decided to turn a problem into a drama. In 50 years’ time, historians will be writing about our society’s reluctance to act when practical problems arose. It is no doubt difficult to face up to a natural disaster—but in this case it is the all-too-apparent manmade disaster brought on by indecision and a reluctance to engage with uncertainty that represents the real threat to our future.

Posted on April 29, 2010 at 6:40 AM

Terrorist Attacks and Comparable Risks, Part 2

John Adams argues that our irrationality about comparative risks depends on the type of risk:

With “pure” voluntary risks, the risk itself, with its associated challenge and rush of adrenaline, is the reward. Most climbers on Mount Everest know that it is dangerous and willingly take the risk. With a voluntary, self-controlled, applied risk, such as driving, the reward is getting expeditiously from A to B. But the sense of control that drivers have over their fates appears to encourage a high level of tolerance of the risks involved.

Cycling from A to B (I write as a London cyclist) is done with a diminished sense of control over one’s fate. This sense is supported by statistics that show that per kilometre travelled a cyclist is 14 times more likely to die than someone in a car. This is a good example of the importance of distinguishing between relative and absolute risk. Although 14 times greater, the absolute risk of cycling is still small—1 fatality in 25 million kilometres cycled; not even Lance Armstrong can begin to cover that distance in a lifetime of cycling. And numerous studies have demonstrated that the extra relative risk is more than offset by the health benefits of regular cycling; regular cyclists live longer.
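Adams's relative-versus-absolute distinction can be made concrete with the excerpt's own numbers. A minimal sketch — the annual distance and cycling career length are illustrative assumptions, not figures from the original:

```python
# Absolute vs. relative risk, using the excerpt's cycling figures.
cycling_fatality_per_km = 1 / 25_000_000   # 1 fatality per 25 million km cycled
relative_risk = 14                          # cyclist vs. car occupant, per km

# Derived, not quoted: the implied per-km fatality rate for car travel.
car_fatality_per_km = cycling_fatality_per_km / relative_risk

# A very heavy lifetime of cycling: 10,000 km/year for 50 years (assumption).
lifetime_km = 10_000 * 50
lifetime_risk = lifetime_km * cycling_fatality_per_km

print(f"lifetime cycling fatality risk: {lifetime_risk:.0%}")
```

Even at half a million kilometres — far more than almost anyone cycles — the cumulative fatality risk comes out around 2 percent, which is the point: a 14-fold relative increase on a tiny absolute base is still a tiny absolute risk.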

While people may voluntarily board planes, buses and trains, the popular reaction to crashes in which passengers are passive victims suggests that the public demand a higher standard of safety in circumstances in which people voluntarily hand over control of their safety to pilots, or to bus or train drivers.

Risks imposed by nature—such as those endured by those living on the San Andreas Fault or the slopes of Mount Etna—or impersonal economic forces—such as the vicissitudes of the global economy—are placed in the middle of the scale. Reactions vary widely. They are usually seen as motiveless and are responded to fatalistically—unless or until the threat appears imminent.

Imposed risks are less tolerated. Consider mobile phones. The risk associated with the handsets is either non-existent or very small. The risk associated with the base stations, measured by radiation dose, unless one is up the mast with an ear to the transmitter, is orders of magnitude less. Yet all round the world billions are queuing up to take the voluntary risk, and almost all the opposition is focussed on the base stations, which are seen by objectors as impositions. Because the radiation dose received from the handset increases with distance from the base station, to the extent that campaigns against the base stations are successful, they will increase the distance from the base station to the average handset, and thus the radiation dose. The base station risk, if it exists, might be labelled a benignly imposed risk; no one supposes that the phone company wishes to murder all those in the neighbourhood.

Less tolerated are risks whose imposers are perceived as motivated by profit or greed. In Europe, big biotech companies such as Monsanto are routinely denounced by environmentalist opponents for being more concerned with profits than the welfare of the environment or the consumers of its products.

Less tolerated still are malignly imposed risks—crimes ranging from mugging to rape and murder. In most countries in the world the number of deaths on the road far exceeds the number of murders, but far more people are sent to jail for murder than for causing death by dangerous driving. In the United States in 2002, 16,000 people were murdered—a statistic that evoked far more popular concern than the 42,000 killed on the road—but far less than the 25 killed by terrorists.

This isn’t a new result, but it’s vital to understand how people react to different risks.

Posted on April 13, 2010 at 1:18 PM

Terrorist Attacks and Comparable Risks, Part 1

Nice analysis by John Mueller and Mark G. Stewart:

There is a general agreement about risk, then, in the established regulatory practices of several developed countries: risks are deemed unacceptable if the annual fatality risk is higher than 1 in 10,000 or perhaps higher than 1 in 100,000 and acceptable if the figure is lower than 1 in 1 million or 1 in 2 million. Between these two ranges is an area in which risk might be considered “tolerable.”

These established considerations are designed to provide a viable, if somewhat rough, guideline for public policy. In all cases, measures and regulations intended to reduce risk must satisfy essential cost-benefit considerations. Clearly, hazards that fall in the unacceptable range should command the most attention and resources. Those in the tolerable range may also warrant consideration—but since they are less urgent, they should be combated with relatively inexpensive measures. Those hazards in the acceptable range are of little, or even negligible, concern, so precautions to reduce their risks even further would scarcely be worth pursuing unless they are remarkably inexpensive.

[…]

As can be seen, annual terrorism fatality risks, particularly for areas outside of war zones, are less than one in one million and therefore generally lie within the range regulators deem safe or acceptable, requiring no further regulations, particularly those likely to be expensive. They are similar to the risks of using home appliances (200 deaths per year in the United States) or of commercial aviation (103 deaths per year). Compared with dying at the hands of a terrorist, Americans are twice as likely to perish in a natural disaster and nearly a thousand times more likely to be killed in some type of accident. The same general conclusion holds when the full damage inflicted by terrorists—not only the loss of life but direct and indirect economic costs—is aggregated. As a hazard, terrorism, at least outside of war zones, does not inflict enough damage to justify substantially increasing expenditures to deal with it.

[…]

To border on becoming unacceptable by established risk conventions—that is, to reach an annual fatality risk of 1 in 100,000—the number of fatalities from terrorist attacks in the United States and Canada would have to increase 35-fold; in Great Britain (excluding Northern Ireland), more than 50-fold; and in Australia, more than 70-fold. For the United States, this would mean experiencing attacks on the scale of 9/11 at least once a year, or 18 Oklahoma City bombings every year.
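The 35-fold figure follows directly from the quoted thresholds. A minimal sketch — the current U.S. annual terrorism fatality risk of roughly 1 in 3.5 million is inferred here from the 35-fold multiplier, not stated in the excerpt:

```python
# How much terrorism would have to increase to cross the
# "unacceptable" threshold of a 1-in-100,000 annual fatality risk.
unacceptable_risk = 1 / 100_000

# Inferred from the quoted 35-fold multiplier for the U.S. (assumption):
current_us_risk = 1 / 3_500_000

fold_increase = unacceptable_risk / current_us_risk
print(f"required increase: {fold_increase:.0f}-fold")

# In concrete terms, per the text: roughly 3,000 deaths a year (one 9/11),
# or 18 Oklahoma City bombings (~168 deaths each) every year.
print(f"18 x 168 = {18 * 168} deaths/year")
```

The same calculation with the British and Australian multipliers (50-fold and 70-fold) implies correspondingly lower baseline risks in those countries.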

Posted on April 13, 2010 at 6:07 AM

