The Real Risk: Traffic Deaths
The New York Times Room for Debate blog took up the topic: “Do We Tolerate Too Many Traffic Deaths?”
This essay in The New York Times is refreshingly cogent:
You’ve seen it over and over. At a certain intersection in a certain town, there’ll be an unfortunate accident. A child is hit by a car.
So the public cries out, the town politicians band together, and the next thing you know, they’ve spent $60,000 to install speed bumps, guardrails and a stoplight at that intersection—even if it was clearly an accident, say, a drunk driver, that had nothing to do with the design of the intersection.
I understand the concept; people want to DO something to channel their grief. But rationally, turning that single intersection into a teeming jungle of safety features, while doing nothing for all the other intersections in town, in the state, across the country, doesn’t make a lot of sense.
Another essay from the BBC website:
That poses a difficult ethical dilemma: should government decisions about risk reflect the often irrational foibles of the populace or the rational calculations of sober risk assessment? Should our politicians opt for informed paternalism or respect for irrational preferences?
The volcanic ash cloud is a classic case study. Were the government to allow flights to go ahead when the risks were equal to those of road travel, it is almost certain that, over the course of the year, hundreds of people would die in resulting air accidents, since around 2,500 die on the roads each year.
This is politically unimaginable, not for good, rational reasons, but because people are much more risk averse when it comes to plane travel than they are to driving their own cars.
So, in practice, governments do not make fully rational risk assessments. Their calculations are based partly on cost-benefit analyses, and partly on what the public will tolerate.
At a security conference recently, the moderator asked the panel of distinguished cybersecurity leaders what their nightmare scenario was. The answers were the predictable array of large-scale attacks: against our communications infrastructure, against the power grid, against the financial system, in combination with a physical attack.
I didn’t get to give my answer until the afternoon, which was: “My nightmare scenario is that people keep talking about their nightmare scenarios.”
There’s a certain blindness that comes from worst-case thinking. An extension of the precautionary principle, it involves imagining the worst possible outcome and then acting as if it were a certainty. It substitutes imagination for thinking, speculation for risk analysis, and fear for reason. It fosters powerlessness and vulnerability and magnifies social paralysis. And it makes us more vulnerable to the effects of terrorism.
Worst-case thinking means generally bad decision making for several reasons. First, it’s only half of the cost-benefit equation. Every decision has costs and benefits, risks and rewards. By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes.
Second, it’s based on flawed logic. It begs the question by assuming that a proponent of an action must prove that the nightmare scenario is impossible.
Third, it can be used to support any position or its opposite. If we build a nuclear power plant, it could melt down. If we don’t build it, we will run short of power and society will collapse into anarchy. If we allow flights near Iceland’s volcanic ash, planes will crash and people will die. If we don’t, organs won’t arrive in time for transplant operations and people will die. If we don’t invade Iraq, Saddam Hussein might use the nuclear weapons he might have. If we do, we might destabilize the Middle East, leading to widespread violence and death.
Of course, not all fears are equal. Those that we tend to exaggerate are more easily justified by worst-case thinking. So terrorism fears trump privacy fears, and almost everything else; technology is hard to understand and therefore scary; nuclear weapons are worse than conventional weapons; our children need to be protected at all costs; and annihilating the planet is bad. Basically, any fear that would make a good movie plot is amenable to worst-case thinking.
Fourth and finally, worst-case thinking validates ignorance. Instead of focusing on what we know, it focuses on what we don’t know—and what we can imagine.
Remember Defense Secretary Rumsfeld’s quote? “Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.” And this: “the absence of evidence is not evidence of absence.” Ignorance isn’t a cause for doubt; when you can fill that ignorance with imagination, it can be a call to action.
Even worse, it can lead to hasty and dangerous acts. You can’t wait for a smoking gun, so you act as if the gun is about to go off. Rather than making us safer, worst-case thinking has the potential to cause dangerous escalation.
The new undercurrent in this is that our society no longer has the ability to calculate probabilities. Risk assessment is devalued. Probabilistic thinking is repudiated in favor of “possibilistic thinking”: Since we can’t know what’s likely to go wrong, let’s speculate about what can possibly go wrong.
Worst-case thinking leads to bad decisions, bad systems design, and bad security. And we all have direct experience with its effects: airline security and the TSA, which we make fun of when we’re not appalled that they’re harassing 93-year-old women or keeping first graders off airplanes. You can’t be too careful!
Actually, you can. You can refuse to fly because of the possibility of plane crashes. You can lock your children in the house because of the possibility of child predators. You can eschew all contact with people because of the possibility of hurt. Stephen Hawking wants to avoid trying to communicate with aliens because they might be hostile; does he want to turn off all the planet’s television broadcasts because they’re radiating into space? It isn’t hard to parody worst-case thinking, and at its extreme it’s a psychological condition.
Frank Furedi, a sociology professor at the University of Kent, writes: “Worst-case thinking encourages society to adopt fear as one of the dominant principles around which the public, the government and institutions should organize their life. It institutionalizes insecurity and fosters a mood of confusion and powerlessness. Through popularizing the belief that worst cases are normal, it incites people to feel defenseless and vulnerable to a wide range of future threats.”
Even worse, it plays directly into the hands of terrorists, creating a population that is easily terrorized—even by failed terrorist attacks like the Christmas Day underwear bomber and the Times Square SUV bomber.
When someone is proposing a change, the onus should be on them to justify it over the status quo. But worst-case thinking is a way of looking at the world that exaggerates the rare and unusual, giving it much more credence than it deserves.
It isn’t really a principle; it’s a cheap trick to justify what you already believe. It lets lazy or biased people make what seem to be cogent arguments without understanding the whole issue. And when people don’t need to refute counterarguments, there’s no point in listening to them.
This essay was originally published on CNN.com, although they stripped out all the links.
Nice essay by sociologist Frank Furedi on worst-case thinking, exemplified by our reaction to the Icelandic volcano:
I am not a natural scientist, and I claim no authority to say anything of value about the risks posed by volcanic ash clouds to flying aircraft. However, as a sociologist interested in the process of decision-making, it is evident to me that the reluctance to lift the ban on air traffic in Europe is motivated by worst-case thinking rather than rigorous risk assessment. Risk assessment is based on an attempt to calculate the probability of different outcomes. Worst-case thinking—these days known as “precautionary thinking”—is based on an act of imagination. It imagines the worst-case scenario and then takes action on that basis. In the case of the Icelandic volcano, fears that particles in the ash cloud could cause aeroplane engines to shut down automatically mutated into a conclusion that this would happen. So it seems to me to be the fantasy of the worst-case scenario rather than risk assessment that underpins the current official ban on air traffic.
[…]
Worst-case thinking encourages society to adopt fear as one of the key principles around which the public, the government and various institutions should organise their lives. It institutionalises insecurity and fosters a mood of confusion and powerlessness. Through popularising the belief that worst cases are normal, it also encourages people to feel defenceless and vulnerable to a wide range of future threats. In all but name, it is an invitation to social paralysis. The eruption of a volcano in Iceland poses technical problems, for which responsible decision-makers should swiftly come up with sensible solutions. But instead, Europe has decided to turn a problem into a drama. In 50 years’ time, historians will be writing about our society’s reluctance to act when practical problems arose. It is no doubt difficult to face up to a natural disaster—but in this case it is the all-too-apparent manmade disaster brought on by indecision and a reluctance to engage with uncertainty that represents the real threat to our future.
John Adams argues that our irrationality about comparative risks depends on the type of risk:
With “pure” voluntary risks, the risk itself, with its associated challenge and rush of adrenaline, is the reward. Most climbers on Mount Everest know that it is dangerous and willingly take the risk. With a voluntary, self-controlled, applied risk, such as driving, the reward is getting expeditiously from A to B. But the sense of control that drivers have over their fates appears to encourage a high level of tolerance of the risks involved.
Cycling from A to B (I write as a London cyclist) is done with a diminished sense of control over one’s fate. This sense is supported by statistics that show that per kilometre travelled a cyclist is 14 times more likely to die than someone in a car. This is a good example of the importance of distinguishing between relative and absolute risk. Although 14 times greater, the absolute risk of cycling is still small—1 fatality in 25 million kilometres cycled; not even Lance Armstrong can begin to cover that distance in a lifetime of cycling. And numerous studies have demonstrated that the extra relative risk is more than offset by the health benefits of regular cycling; regular cyclists live longer.
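The relative-versus-absolute distinction above is easy to make concrete. A minimal sketch in Python, using the essay's figures plus an assumed annual mileage (the 5,000 km/year rider is my illustration, not from the essay):

```python
# Relative vs. absolute risk, using the figures quoted from Adams.
cyclist_fatalities_per_km = 1 / 25_000_000   # roughly 1 death per 25 million km cycled
relative_risk = 14                            # cyclist vs. car occupant, per km travelled
car_fatalities_per_km = cyclist_fatalities_per_km / relative_risk

# Assume an enthusiastic cyclist rides 5,000 km a year (an assumption, not from the essay).
annual_km = 5_000
annual_risk = annual_km * cyclist_fatalities_per_km   # still a small absolute risk

print(f"Annual fatality risk at 5,000 km/year: {annual_risk:.6f} (1 in {1 / annual_risk:,.0f})")
```

The point the code makes is Adams's: a 14-fold relative increase still leaves a small absolute number when the base rate is tiny.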
While people may voluntarily board planes, buses and trains, the popular reaction to crashes in which passengers are passive victims suggests that the public demand a higher standard of safety in circumstances in which people voluntarily hand over control of their safety to pilots, or to bus or train drivers.
Risks imposed by nature—such as those endured by those living on the San Andreas Fault or the slopes of Mount Etna—or impersonal economic forces—such as the vicissitudes of the global economy—are placed in the middle of the scale. Reactions vary widely. They are usually seen as motiveless and are responded to fatalistically – unless or until the threat appears imminent.
Imposed risks are less tolerated. Consider mobile phones. The risk associated with the handsets is either non-existent or very small. The risk associated with the base stations, measured by radiation dose, unless one is up the mast with an ear to the transmitter, is orders of magnitude less. Yet all round the world billions are queuing up to take the voluntary risk, and almost all the opposition is focussed on the base stations, which are seen by objectors as impositions. Because the radiation dose received from the handset increases with distance from the base station, to the extent that campaigns against the base stations are successful, they will increase the distance from the base station to the average handset, and thus the radiation dose. The base station risk, if it exists, might be labelled a benignly imposed risk; no one supposes that the phone company wishes to murder all those in the neighbourhood.
Less tolerated are risks whose imposers are perceived as motivated by profit or greed. In Europe, big biotech companies such as Monsanto are routinely denounced by environmentalist opponents for being more concerned with profits than the welfare of the environment or the consumers of its products.
Less tolerated still are malignly imposed risks—crimes ranging from mugging to rape and murder. In most countries in the world the number of deaths on the road far exceeds the number of murders, but far more people are sent to jail for murder than for causing death by dangerous driving. In the United States in 2002, 16,000 people were murdered—a statistic that evoked far more popular concern than the 42,000 killed on the road—but far less than the 25 killed by terrorists.
This isn’t a new result, but it’s vital to understand how people react to different risks.
Nice analysis by John Mueller and Mark G. Stewart:
There is a general agreement about risk, then, in the established regulatory practices of several developed countries: risks are deemed unacceptable if the annual fatality risk is higher than 1 in 10,000 or perhaps higher than 1 in 100,000 and acceptable if the figure is lower than 1 in 1 million or 1 in 2 million. Between these two ranges is an area in which risk might be considered “tolerable.”
These established considerations are designed to provide a viable, if somewhat rough, guideline for public policy. In all cases, measures and regulations intended to reduce risk must satisfy essential cost-benefit considerations. Clearly, hazards that fall in the unacceptable range should command the most attention and resources. Those in the tolerable range may also warrant consideration—but since they are less urgent, they should be combated with relatively inexpensive measures. Those hazards in the acceptable range are of little, or even negligible, concern, so precautions to reduce their risks even further would scarcely be worth pursuing unless they are remarkably inexpensive.
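The three bands described above can be sketched as a tiny classifier. This uses the simpler 1-in-10,000 and 1-in-1,000,000 cutoffs from the quote; the "perhaps" alternatives (1 in 100,000, 1 in 2 million) are set aside for clarity:

```python
# Minimal sketch of the regulatory risk bands described in the quoted passage.
def risk_band(annual_fatality_risk: float) -> str:
    """Classify an annual fatality risk using the quoted regulatory conventions."""
    if annual_fatality_risk > 1 / 10_000:
        return "unacceptable"
    if annual_fatality_risk < 1 / 1_000_000:
        return "acceptable"
    return "tolerable"

print(risk_band(1 / 5_000))       # unacceptable
print(risk_band(1 / 100_000))     # tolerable
print(risk_band(1 / 2_000_000))   # acceptable
```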
[…]
As can be seen, annual terrorism fatality risks, particularly for areas outside of war zones, are less than one in one million and therefore generally lie within the range regulators deem safe or acceptable, requiring no further regulations, particularly those likely to be expensive. They are similar to the risks of using home appliances (200 deaths per year in the United States) or of commercial aviation (103 deaths per year). Compared with dying at the hands of a terrorist, Americans are twice as likely to perish in a natural disaster and nearly a thousand times more likely to be killed in some type of accident. The same general conclusion holds when the full damage inflicted by terrorists—not only the loss of life but direct and indirect economic costs—is aggregated. As a hazard, terrorism, at least outside of war zones, does not inflict enough damage to justify substantially increasing expenditures to deal with it.
[…]
To border on becoming unacceptable by established risk conventions—that is, to reach an annual fatality risk of 1 in 100,000—the number of fatalities from terrorist attacks in the United States and Canada would have to increase 35-fold; in Great Britain (excluding Northern Ireland), more than 50-fold; and in Australia, more than 70-fold. For the United States, this would mean experiencing attacks on the scale of 9/11 at least once a year, or 18 Oklahoma City bombings every year.
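The fold calculation is straightforward to reproduce. The population and baseline fatality figures below are rough illustrative assumptions of mine, not Mueller and Stewart's exact inputs, but they land close to the quoted 35-fold figure:

```python
# Sketch of the "fold increase" arithmetic from Mueller and Stewart's analysis.
unacceptable_threshold = 1 / 100_000          # annual fatality risk bordering on unacceptable

us_canada_population = 340_000_000            # rough combined population (assumption)
required_fatalities = us_canada_population * unacceptable_threshold  # deaths/year at threshold

observed_annual_fatalities = 100              # rough annual terrorism deaths, US + Canada (assumption)
fold_increase = required_fatalities / observed_annual_fatalities

print(f"Fatalities would need to rise roughly {fold_increase:.0f}-fold")  # ~34, near the quoted 35
```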
Air marshals are being arrested faster than air marshals are making arrests.
Actually, there have been many more arrests of Federal air marshals than that story reported, quite a few for felony offenses. In fact, more air marshals have been arrested than the number of people arrested by air marshals.
We now have approximately 4,000 air marshals in the Federal Air Marshal Service, yet they have made an average of just 4.2 arrests a year since 2001. This comes out to an average of about one arrest a year per 1,000 employees.
Now, let me make that clear. Their thousands of employees are not making one arrest per year each. They are averaging slightly over four arrests each year by the entire agency. In other words, we are spending approximately $200 million per arrest. Let me repeat that: we are spending approximately $200 million per arrest.
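The quoted arithmetic checks out. A back-of-the-envelope sketch, where the annual budget is my assumption chosen to be consistent with the quote rather than an official figure:

```python
# Back-of-the-envelope check of the "$200 million per arrest" figure.
annual_budget = 840_000_000     # assumed annual budget in dollars, not an official number
arrests_per_year = 4.2          # average arrests per year, from the quote
employees = 4_000               # approximate number of air marshals, from the quote

cost_per_arrest = annual_budget / arrests_per_year
arrests_per_employee = arrests_per_year / employees

print(f"Cost per arrest: ${cost_per_arrest:,.0f}")             # $200,000,000
print(f"Arrests per employee per year: {arrests_per_employee:.5f}")
```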
Interesting research:
Psychologist Jeremy Ginges and his colleagues identified this backfire effect in studies of the Israeli-Palestinian conflict in 2007. They interviewed both Israelis and Palestinians who possessed sacred values toward key issues such as ownership over disputed territories like the West Bank or the right of Palestinian refugees to return to villages they were forced to leave—these people viewed compromise on these issues as completely unacceptable. Ginges and colleagues found that individuals offered a monetary payout to compromise their values expressed more moral outrage and were more supportive of violent opposition toward the other side. Opposition decreased, however, when the other side offered to compromise on a sacred value of its own, such as Israelis formally renouncing their right to the West Bank or Palestinians formally recognizing Israel as a state. Ginges and Scott Atran found similar evidence of this backfire effect with Indonesian madrassah students, who expressed less willingness to compromise their belief in sharia, strict Islamic law, when offered a material incentive.
[…]
After giving their opinions on Iran’s nuclear program, all participants were asked to consider one of two deals for Iranian disarmament. Half of the participants read about a deal in which the United States would reduce military aid to Israel in exchange for Iran giving up its military program. The other half of the participants read about a deal in which the United States would reduce aid to Israel and would pay Iran $40 billion. After considering the deal, all participants predicted how much the Iranian people would support the deal and how much anger they would feel toward the deal. In line with the Palestinian-Israeli and Indonesian studies, those who considered the nuclear program a sacred value expressed less support, and more anger, when the deal included money.
This paper, by Cormac Herley at Microsoft Research, sounds like me:
Abstract: It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.
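Herley's order-of-magnitude claim for the URL-reading example can be reproduced with rough numbers. Everything below (user count, value of an hour of time, annual phishing losses) is an illustrative assumption of mine, not taken from the paper:

```python
# Rough reconstruction of the cost-benefit arithmetic behind the URL-reading example.
online_users = 180_000_000          # assumed number of US online users
minutes_per_day = 1                 # time spent inspecting URLs, per user
hourly_value = 15.0                 # assumed dollar value of an hour of user time

annual_hours = online_users * minutes_per_day / 60 * 365
annual_cost = annual_hours * hourly_value        # aggregate user-time cost, dollars/year

annual_phishing_losses = 60_000_000              # assumed annual direct phishing losses, dollars
ratio = annual_cost / annual_phishing_losses

print(f"User-time cost exceeds phishing losses by a factor of about {ratio:,.0f}")
```

With these assumptions the aggregate user-time cost comes to roughly $16 billion a year, a few hundred times the assumed losses, which is the "two orders of magnitude" shape of Herley's argument.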
EDITED TO ADD (12/12): Related article on usable security.