Entries Tagged "risks"


Excess Automobile Deaths as a Result of 9/11

People have commented on a point I made in a recent essay:

In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.

Yes, that’s wrong. Where I said “months,” I should have said “years.”

I got the sound bite from John Mueller and Mark G. Stewart’s book, Terror, Security, and Money. This is footnote 19 from Chapter 1:

The inconvenience of extra passenger screening and added costs at airports after 9/11 caused many short-haul passengers to drive to their destination instead, and, since airline travel is far safer than car travel, this has led to an increase of 500 U.S. traffic fatalities per year. Using the DHS-mandated value of statistical life at $6.5 million, this equates to a loss of $3.2 billion per year, or $32 billion over the period 2002 to 2011 (Blalock et al. 2007).
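
The dollar figures follow directly from the numbers in the footnote; here is a minimal sketch of the arithmetic, using only the 500-fatality estimate and the $6.5 million value of a statistical life quoted above (the footnote rounds the results to $3.2 billion and $32 billion):

    # Back-of-the-envelope check of the Mueller/Stewart footnote figures.
    fatalities_per_year = 500          # extra road deaths attributed to the shift to driving
    value_of_statistical_life = 6.5e6  # DHS-mandated value, in dollars
    years = 2011 - 2002 + 1            # the 2002-2011 period, inclusive

    annual_loss = fatalities_per_year * value_of_statistical_life
    total_loss = annual_loss * years

    print(f"Annual loss: ${annual_loss / 1e9:.2f} billion")    # ~$3.25 billion
    print(f"Ten-year loss: ${total_loss / 1e9:.1f} billion")   # ~$32.5 billion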

The authors make the same point in this earlier (and shorter) essay:

Increased delays and added costs at U.S. airports due to new security procedures provide incentive for many short-haul passengers to drive to their destination rather than flying, and, since driving is far riskier than air travel, the extra automobile traffic generated has been estimated in one study to result in 500 or more extra road fatalities per year.

The references are:

  • Garrick Blalock, Vrinda Kadiyali, and Daniel H. Simon. 2007. “The Impact of Post-9/11 Airport Security Measures on the Demand for Air Travel.” Journal of Law and Economics 50(4) November: 731–755.
  • Garrick Blalock, Vrinda Kadiyali, and Daniel H. Simon. 2009. “Driving Fatalities after 9/11: A Hidden Cost of Terrorism.” Applied Economics 41(14): 1717–1729.

Business Week makes the same point here.

There’s also this reference:

  • Michael Sivak and Michael J. Flannagan. 2004. “Consequences for Road Traffic Fatalities of the Reduction in Flying Following September 11, 2001.” Transportation Research Part F: Traffic Psychology and Behaviour 7(4).

Abstract: Gigerenzer (Gigerenzer, G. (2004). Dread risk, September 11, and fatal traffic accidents. Psychological Science, 15, 286–287) argued that the increased fear of flying in the U.S. after September 11 resulted in a partial shift from flying to driving on rural interstate highways, with a consequent increase of 353 road traffic fatalities for October through December 2001. We reevaluated the consequences of September 11 by utilizing the trends in road traffic fatalities from 2000 to 2001 for January through August. We also examined which road types and traffic participants contributed most to the increased road fatalities. We conclude that (1) the partial modal shift after September 11 resulted in 1018 additional road fatalities for the three months in question, which is substantially more than estimated by Gigerenzer, (2) the major part of the increased toll occurred on local roads, arguing against a simple modal shift from flying to driving to the same destinations, (3) driver fatalities did not increase more than in proportion to passenger fatalities, and (4) pedestrians and bicyclists bore a disproportionate share of the increased fatalities.

This is another analysis.

Posted on September 9, 2013 at 6:20 AM

Our Newfound Fear of Risk

We’re afraid of risk. It’s a normal part of life, but we’re increasingly unwilling to accept it at any level. So we turn to technology to protect us. The problem is that technological security measures aren’t free. They cost money, of course, but they cost other things as well. They often don’t provide the security they advertise, and — paradoxically — they often increase risk somewhere else. This problem is particularly stark when the risk involves another person: crime, terrorism, and so on. While technology has made us much safer against natural risks like accidents and disease, it works less well against man-made risks.

Three examples:

  1. We have allowed the police to turn themselves into a paramilitary organization. They deploy SWAT teams multiple times a day, almost always in nondangerous situations. They tase people at minimal provocation, often when it’s not warranted. Unprovoked shootings are on the rise. One result of these measures is that honest mistakes — a wrong address on a warrant, a misunderstanding — result in the terrorizing of innocent people, and more death in what were once nonviolent confrontations with police.
  2. We accept zero-tolerance policies in schools. This results in ridiculous situations, where young children are suspended for pointing gun-shaped fingers at other students or drawing pictures of guns with crayons, and high-school students are disciplined for giving each other over-the-counter pain relievers. The cost of these policies is enormous, both in the dollars spent to implement them and in their long-lasting effects on students.
  3. We have spent over one trillion dollars and thousands of lives fighting terrorism in the past decade — including the wars in Iraq and Afghanistan — money that could have been better used in all sorts of ways. We now know that the NSA has turned into a massive domestic surveillance organization, and that its data is also used by other government organizations, which then lie about it. Our foreign policy has changed for the worse: we spy on everyone, we trample human rights abroad, our drones kill indiscriminately, and our diplomatic outposts have either closed down or become fortresses. In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.

There are lots more examples, but the general point is that we tend to fixate on a particular risk and then do everything we can to mitigate it, including giving up our freedoms and liberties.

There’s a subtle psychological explanation. Risk tolerance is both cultural and dependent on the environment around us. As we have advanced technologically as a society, we have reduced many of the risks that have been with us for millennia. Fatal childhood diseases are things of the past, many adult diseases are curable, accidents are rarer and more survivable, buildings collapse less often, death by violence has declined considerably, and so on. All over the world — among the wealthier of us who live in peaceful Western countries — our lives have become safer.

Our notions of risk are not absolute; they’re based more on how far they are from whatever we think of as “normal.” So as our perception of what is normal gets safer, the remaining risks stand out more. When your population is dying of the plague, protecting yourself from the occasional thief or murderer is a luxury. When everyone is healthy, it becomes a necessity.

Some of this fear results from imperfect risk perception. We’re bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are — and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.

Some of this fear stems from the fact that we put people in charge of just one aspect of the risk equation. No one wants to be the senior officer who didn’t approve the SWAT team for the one subpoena delivery that resulted in an officer being shot. No one wants to be the school principal who didn’t discipline — no matter how benign the infraction — the one student who became a shooter. No one wants to be the president who rolled back counterterrorism measures, just in time to have a plot succeed. Those in charge will be naturally risk averse, since they personally shoulder so much of the burden.

We also expect that science and technology should be able to mitigate these risks, as they mitigate so many others. There’s a fundamental problem at the intersection of these security measures with science and technology; it has to do with the types of risk they’re arrayed against. Most of the risks we face in life are against nature: disease, accident, weather, random chance. As our science has improved — medicine is the big one, but other sciences as well — we become better at mitigating and recovering from those sorts of risks.

Security measures combat a very different sort of risk: a risk stemming from another person. People are intelligent, and they can adapt to new security measures in ways nature cannot. An earthquake isn’t able to figure out how to topple structures constructed under some new and safer building code, and an automobile won’t invent a new form of accident that undermines medical advances that have made existing accidents more survivable. But a terrorist will change his tactics and targets in response to new security measures. An otherwise innocent person will change his behavior in response to a police force that compels compliance at the threat of a Taser. We will all change, living in a surveillance state.

When you implement measures to mitigate the effects of the random risks of the world, you’re safer as a result. When you implement measures to reduce the risks from your fellow human beings, the human beings adapt and you get less risk reduction than you’d expect — and you also get more side effects, because we all adapt.

We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society. The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.

This essay previously appeared on Forbes.com.

EDITED TO ADD (8/5): Slashdot thread.

Posted on September 3, 2013 at 6:41 AM

Nassim Nicholas Taleb on Risk Perception

From his Facebook page:

An illustration of how the news are largely created, bloated and magnified by journalists. I have been in Lebanon for the past 24h, and there were shells falling on a suburb of Beirut. Yet the news did not pass the local *social filter* and did [not] reach me from social sources…. The shelling is the kind of thing that is only discussed in the media because journalists can use it self-servingly to weave a web-worthy attention-grabbing narrative.

It is only through people away from the place discovering it through Google News or something even more stupid, the NYT, that I got the information; these people seemed impelled to inquire about my safety.

What kills people in Lebanon: cigarettes, sugar, coca cola and other chemical monstrosities, iatrogenics, hypochondria, overtreatment (Lipitor etc.), refined wheat pita bread, fast cars, lack of exercise, angry husbands (or wives), etc., things that are not interesting enough to make it to Google News.

A Roman citizen 2000 years ago was more calibrated in his risk assessment than an internet user today….

Posted on May 28, 2013 at 12:52 PM

Bluetooth-Controlled Door Lock

Here is a new lock that you can control via Bluetooth and an iPhone app.

That’s pretty cool, and I can imagine all sorts of reasons to get one of those. But I’m sure there are all sorts of unforeseen security vulnerabilities in this system. And even worse, a single vulnerability can affect all the locks. Remember that vulnerability found last year in hotel electronic locks?

Anyone care to guess how long before some researcher finds a way to hack this one? And how well the maker anticipated the need to update the firmware to fix the vulnerability once someone finds it?

I’m not saying that you shouldn’t use this lock, only that you should understand that new technology brings new security risks, and electronic technology brings new kinds of security risks. Security is a trade-off, and the trade-off is particularly stark in this case.

Posted on May 16, 2013 at 8:45 AM

Jared Diamond on Common Risks

Jared Diamond has an op-ed in the New York Times where he talks about how we overestimate rare risks and underestimate common ones. Nothing new here — I and others have written about this sort of thing extensively — but he says that this is a bias found more in developed countries than in primitive cultures.

I first became aware of the New Guineans’ attitude toward risk on a trip into a forest when I proposed pitching our tents under a tall and beautiful tree. To my surprise, my New Guinea friends absolutely refused. They explained that the tree was dead and might fall on us.

Yes, I had to agree, it was indeed dead. But I objected that it was so solid that it would be standing for many years. The New Guineans were unswayed, opting instead to sleep in the open without a tent.

I thought that their fears were greatly exaggerated, verging on paranoia. In the following years, though, I came to realize that every night that I camped in a New Guinea forest, I heard a tree falling. And when I did a frequency/risk calculation, I understood their point of view.

Consider: If you’re a New Guinean living in the forest, and if you adopt the bad habit of sleeping under dead trees whose odds of falling on you that particular night are only 1 in 1,000, you’ll be dead within a few years. In fact, my wife was nearly killed by a falling tree last year, and I’ve survived numerous nearly fatal situations in New Guinea.
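
Diamond's arithmetic holds up. A quick back-of-the-envelope check, assuming the 1-in-1,000 nightly odds from the quote and treating each night as independent, shows how fast the habit catches up with you:

    # How quickly a 1-in-1,000 nightly risk accumulates (independence assumed).
    p_per_night = 1 / 1000

    for years in (1, 3, 5, 10):
        nights = 365 * years
        p_dead = 1 - (1 - p_per_night) ** nights
        print(f"{years:2d} year(s): {p_dead:.0%} chance the habit has killed you")
    # Roughly 31% after one year, 67% after three, 84% after five, 97% after ten.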

Diamond has a point. While it’s universally true that humans exaggerate rare and spectacular risks and downplay mundane and common risks, we in developed countries do it more. The reason, I think, is how fears propagate. If someone in New Guinea gets eaten by a tiger — do they even have tigers in New Guinea? — then those who know the victim or hear about it learn to fear tiger attacks. If it happens in the U.S., it’s the lead story on every news program, and the entire country fears tigers. Technology magnifies rare risks. Think of plane crashes versus car crashes. Think of school shooters versus home accidents. Think of 9/11 versus everything else.

On the other side of the coin, we in the developed world have largely made the pedestrian risks invisible. Diamond makes the point that, for an older man, falling is a huge risk, and showering is especially dangerous. How many people do you know who have fallen in the shower and seriously hurt themselves? I can’t think of anyone. We tend to compartmentalize our old, our poor, our different — and their accidents don’t make the news. Unless it’s someone we know personally, we don’t hear about it.

EDITED TO ADD (2/21): George Burns fatally fell in the shower at age 98.

Posted on February 1, 2013 at 6:08 AM

Micromorts

Here’s a great concept: a micromort:

Shopping for coffee, you would not ask for 0.00025 tons (unless you were naturally irritating); you would ask for 250 grams. In the same way, talking about a 1/125,000 or 0.000008 risk of death associated with a hang-gliding flight is rather awkward. With that in mind, Howard coined the term “microprobability” (μp) to refer to an event with a chance of 1 in 1 million, and a 1 in 1 million chance of death he calls a “micromort” (μmt). We can now describe the risk of hang-gliding as 8 micromorts, and you would have to drive around 3,000 km in a car before accumulating a risk of 8 μmt, which helps compare these two remote risks.

There’s a related term, microlife, for things that reduce your lifespan. A microlife is 30 minutes off your life expectancy. So smoking two cigarettes has a cost of one microlife.
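
These units make risk comparisons a matter of simple conversion. A small sketch using only the figures quoted above (8 micromorts per hang-gliding flight, roughly 3,000 km of driving per 8 micromorts, and two cigarettes per 30-minute microlife):

    # Conversions implied by the figures quoted above.
    MICROMORT = 1e-6             # a 1-in-a-million chance of death
    KM_PER_MICROMORT = 3000 / 8  # ~375 km of driving per micromort
    MINUTES_PER_MICROLIFE = 30   # a microlife is 30 minutes of life expectancy

    def driving_micromorts(km):
        """Micromorts accumulated by driving the given distance."""
        return km / KM_PER_MICROMORT

    print(f"Hang-gliding flight: {8 * MICROMORT:.6f} probability of death")              # 0.000008
    print(f"1,000 km drive: {driving_micromorts(1000):.1f} micromorts")                  # ~2.7
    print(f"One cigarette: {MINUTES_PER_MICROLIFE / 2:.0f} minutes of life expectancy")  # 15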

Posted on November 8, 2012 at 6:57 AM

The NSA and the Risk of Off-the-Shelf Devices

Interesting article on how the NSA is approaching risk in the era of cool consumer devices. There’s a discussion of the president’s network-disabled iPad, and the classified cell phone that flopped because it took so long to develop and was so clunky. Turns out that everyone wants to use iPhones.

Levine concluded, “Using commercial devices to process classified phone calls, using commercial tablets to talk over wifi — that’s a major game-changer for NSA to put classified information over wifi networks, but that’s what we’re going to do.” One way that would be done, he said, was by buying capability from cell carriers that have networks of cell towers in much the way small cell providers and companies like Onstar do.

Interestingly, Levine described an agency that is being forced to adopt a more realistic and practical attitude toward risk. “It used to be that the NSA squeezed all risk out of everything,” he said. Even lower levels of sensitivity were covered by Top Secret-level crypto. “We don’t do that now — it’s levels of risk. We say we can give you this, but can ensure only this level of risk.” Partly this came about, he suggested, because the military has an inherent understanding that nothing is without risk, and is used to seeing things in terms of tradeoffs: “With the military, everything is a risk decision. If this is the communications capability I need, I’ll have to take that risk.”

Posted on September 20, 2012 at 6:02 AM

Estimating the Probability of Another 9/11

This statistical research says once per decade:

Abstract: Quantities with right-skewed distributions are ubiquitous in complex social systems, including political conflict, economics and social networks, and these systems sometimes produce extremely large events. For instance, the 9/11 terrorist events produced nearly 3000 fatalities, nearly six times more than the next largest event. But, was this enormous loss of life statistically unlikely given modern terrorism’s historical record? Accurately estimating the probability of such an event is complicated by the large fluctuations in the empirical distribution’s upper tail. We present a generic statistical algorithm for making such estimates, which combines semi-parametric models of tail behavior and a non-parametric bootstrap. Applied to a global database of terrorist events, we estimate the worldwide historical probability of observing at least one 9/11-sized or larger event since 1968 to be 11-35%. These results are robust to conditioning on global variations in economic development, domestic versus international events, the type of weapon used and a truncated history that stops at 1998. We then use this procedure to make a data-driven statistical forecast of at least one similar event over the next decade.
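
The method in the abstract combines a semi-parametric tail model with a non-parametric bootstrap. The toy sketch below illustrates that recipe on synthetic data; the real study uses a global terrorism database that isn't reproduced here, so the numbers it prints are purely illustrative, and the threshold, sample size, and generalized-Pareto tail are my assumptions:

    # Toy version of the abstract's recipe: fit a heavy-tailed model to event
    # sizes above a threshold, then bootstrap to get an interval on the chance
    # of at least one 9/11-sized event somewhere in the record.
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(0)
    threshold = 10       # only model events with at least 10 fatalities
    n_events = 3000      # number of severe events in the synthetic record
    target = 2977        # approximate 9/11 death toll

    # Synthetic heavy-tailed "historical record" standing in for the real data.
    event_sizes = threshold + genpareto.rvs(0.5, scale=20, size=n_events, random_state=rng)

    def p_at_least_one(sample):
        """P(at least one event >= target) under a generalized Pareto tail fit."""
        c, loc, scale = genpareto.fit(sample - threshold, floc=0)
        p_single = genpareto.sf(target - threshold, c, loc=loc, scale=scale)
        return 1 - (1 - p_single) ** len(sample)

    # Non-parametric bootstrap over the event record.
    boot = [p_at_least_one(rng.choice(event_sizes, size=n_events, replace=True))
            for _ in range(200)]
    lo, hi = np.percentile(boot, [5, 95])
    print(f"Estimated chance of at least one 9/11-sized event: {lo:.0%}-{hi:.0%}")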

Article about the research.

Posted on September 13, 2012 at 1:20 PM

The Importance of Security Engineering

In May, neuroscientist and popular author Sam Harris and I debated the issue of profiling Muslims at airport security. We each wrote essays, then went back and forth on the issue. I don’t recommend reading the entire discussion; we spent 14,000 words talking past each other. But what’s interesting is how our debate illustrates the differences between a security engineer and an intelligent layman. Harris was uninterested in the detailed analysis required to understand a security system and unwilling to accept that security engineering is a specialized discipline with a body of knowledge and relevant expertise. He trusted his intuition.

Many people have researched how intuition fails us in security: Paul Slovic and Bill Burns on risk perception, Daniel Kahneman on cognitive biases in general, Rick Wash on folk computer-security models. I’ve written about the psychology of security, and Daniel Gardner has written more. Basically, our intuitions are based on things like antiquated fight-or-flight models, and these increasingly fail in our technological world.

This problem isn’t unique to computer security, or even security in general. But this misperception about security matters now more than it ever has. We’re no longer asking people to make security choices only for themselves and their businesses; we need them to make security choices as a matter of public policy. And getting it wrong has increasingly bad consequences.

Computers and the Internet have collided with public policy. The entertainment industry wants to enforce copyright. Internet companies want to continue freely spying on users. Law enforcement wants its own laws imposed on the Internet: laws that make surveillance easier, prohibit anonymity, mandate the removal of objectionable images and texts, and require ISPs to retain data about their customers’ Internet activities. Militaries want laws regarding cyber weapons, laws enabling wholesale surveillance, and laws mandating an Internet kill switch. “Security” is now a catch-all excuse for all sorts of authoritarianism, as well as for boondoggles and corporate profiteering.

Cory Doctorow recently spoke about the coming war on general-purpose computing. I talked about it in terms of the entertainment industry and Jonathan Zittrain discussed it more generally, but Doctorow sees it as a much broader issue. Preventing people from copying digital files is only the first skirmish; just wait until the DEA wants to prevent chemical printers from making certain drugs, or the FBI wants to prevent 3D printers from making guns.

I’m not here to debate the merits of any of these policies, but instead to point out that people will debate them. Elected officials will be expected to understand security implications, both good and bad, and will make laws based on that understanding. And if they aren’t able to understand security engineering, or even accept that there is such a thing, the result will be ineffective and harmful policies.

So what do we do? We need to establish security engineering as a valid profession in the minds of the public and policy makers. This is less about certifications and (heaven forbid) licensing, and more about perception — and cultivating a security mindset. Amateurs produce amateur security, which costs more in dollars, time, liberty, and dignity while giving us less — or even no — security. We need everyone to know that.

We also need to engage with real-world security problems, and apply our expertise to the variety of technical and socio-technical systems that affect broader society. Everything involves computers, and almost everything involves the Internet. More and more, computer security is security.

Finally, and perhaps most importantly, we need to learn how to talk about security engineering to a non-technical audience. We need to convince policy makers to follow a logical approach instead of an emotional one — an approach that includes threat modeling, failure analysis, searching for unintended consequences, and everything else in an engineer’s approach to design. Powerful lobbying forces are attempting to force security policies on society, largely for non-security reasons, and sometimes in secret. We need to stand up for security.

A shorter version of this essay appeared in the September/October 2012 issue of IEEE Security & Privacy.

Posted on August 28, 2012 at 10:38 AM

Five "Neglects" in Risk Management

Good list, summarized here (a small worked example follows the list):

1. Probability neglect – people sometimes don’t consider the probability of the occurrence of an outcome, but focus on the consequences only.

2. Consequence neglect – just like probability neglect, sometimes individuals neglect the magnitude of outcomes.

3. Statistical neglect – instead of subjectively assessing small probabilities and continuously updating them, people choose to use rules-of-thumb (if any heuristics), which can introduce systematic biases in their decisions.

4. Solution neglect – choosing an optimal solution is not possible when one fails to consider all of the solutions.

5. External risk neglect – in making decisions, individuals or groups often consider the cost/benefits of decisions only for themselves, without including externalities, sometimes leading to significant negative outcomes for others.
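
To make the first two neglects concrete, here is a toy expected-loss comparison; the probabilities and dollar amounts are invented purely for illustration:

    # Judging by consequence alone, or by probability alone, picks the wrong
    # risk to worry about; expected loss weighs both.
    risks = {
        "spectacular, rare event": {"probability": 1e-6, "loss": 1_000_000_000},
        "mundane, common event":   {"probability": 1e-2, "loss": 500_000},
    }

    for name, r in risks.items():
        expected_loss = r["probability"] * r["loss"]
        print(f"{name}: expected loss = ${expected_loss:,.0f}")
    # The mundane event dominates: $5,000 expected loss vs. $1,000, even though
    # its worst-case consequence is far smaller.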

Posted on August 22, 2012 at 12:34 PM

