Evaluating Risks of Low-Probability High-Cost Events
“Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes,” by Toby Ord, Rafaela Hillerbrand, Anders Sandberg.
Abstract:
Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.
Randall • February 2, 2009 2:31 PM
There’s an important sublety here. The authors say there’s a 1 in 10,000 risk that our explanation of why the LHC is safe is wrong.
But if there’s some flaw in our reasoning, that only means we haven’t proven the LHC is safe, not that the LHC is certainly unsafe.
Analogy: If I publish a paper saying that I won’t be impaled by a pink unicorn, and 1 out of 10,000 papers is wrong, that doesn’t mean there’s a 0.01% chance I’ll be impaled by a pink unicorn. It only means that, heuristically, you could argue there’s a 0.01% chance my explanation is flawed enough to merit withdrawal, if my paper is as likely as the average paper to be withdrawn.
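The distinction can be made concrete with the law of total probability. This is a minimal sketch with made-up numbers (the 1-in-10,000 flaw rate from the paper, plus an assumed chance of disaster *given* a flawed argument), not anything from the authors’ actual calculation:

```python
# Sketch: a flaw in the safety argument does not by itself make
# disaster likely. All numbers here are illustrative assumptions.
p_flaw = 1e-4             # assumed chance the safety argument is flawed
p_bad_given_flaw = 0.01   # assumed chance of disaster IF the argument fails
p_bad_given_sound = 0.0   # a sound argument rules disaster out entirely

# Law of total probability:
# P(bad) = P(flaw)*P(bad|flaw) + P(no flaw)*P(bad|no flaw)
p_bad = p_flaw * p_bad_given_flaw + (1 - p_flaw) * p_bad_given_sound
print(p_bad)  # on the order of 1e-6, well below the naive 1e-4
```

The point is that equating “the argument might be flawed” with “disaster has that probability” silently sets `p_bad_given_flaw` to 1, which is almost never justified.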
Another key qualification is that not all arguments are equally likely to be flawed. There are lots of reasons I’m unlikely to be impaled by a pink unicorn, and many of them don’t require especially tricky logic or measurements: we’ve never seen a unicorn; there’s no special reason to anticipate that changing; and most people manage to avoid impalement on the horned animals that do exist. (I bet you never expected that sentence to appear in your blog comments.) When there are several alternative arguments that each prove that something won’t happen, and when they’re based on old and reliable principles instead of tricky calculations or measurements, we can be more confident.
There may be several independent arguments that the LHC is safe and some may be based on simple principles — no physical model predicts black holes, small or big; tiny black holes would disappear quickly; similar stuff happens in the upper atmosphere all the time. Then we can be much more confident that the LHC is safe than we are in the average physics paper’s conclusion, which might rely on shaky measurements or complicated, fragile reasoning. Perhaps the paper was written because there was popular worry, not because there was a serious case to be made that the LHC would destroy the world. There was a similar worry before we set off the first atomic bomb test — of course, that turned out not to ignite the atmosphere as people feared. Turned out we had more to worry about from a social and political angle than a physical one!
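The value of several independent arguments can also be sketched numerically. Assuming (hypothetically) that each argument’s chance of being flawed is independent of the others, disaster is only even possible if *all* of them fail, so the flaw probabilities multiply:

```python
# Hypothetical illustration: independent safety arguments must ALL be
# flawed before the "unsafe" possibility is even open. The per-argument
# flaw probabilities below are invented for the example.
p_flaw_each = [1e-3, 1e-2, 1e-2]

p_all_fail = 1.0
for p in p_flaw_each:
    p_all_fail *= p  # independence: probabilities multiply

print(p_all_fail)  # roughly 1e-7, far smaller than any single argument's flaw rate
```

In practice the arguments are rarely fully independent (they may share a theory or a measurement), so this is an optimistic bound; but it shows why diverse lines of reasoning based on old, reliable principles buy more confidence than one intricate calculation.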