## Evaluating Risks of Low-Probability High-Cost Events

“Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes,” by Toby Ord, Rafaela Hillerbrand, and Anders Sandberg.

Abstract:

Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.

Randall February 2, 2009 2:31 PM

There’s an important subtlety here. The authors say there’s a 1 in 10,000 risk that our explanation of why the LHC is safe is wrong.

But if there’s some flaw in our reasoning, that only means that we haven’t proven the LHC is safe, not that it’s certain that it’s unsafe.

Analogy: If I publish a paper saying that I won’t be impaled by a pink unicorn, and 1 in 10,000 papers is wrong, that doesn’t mean there’s a 0.01% chance I’ll be impaled by a pink unicorn. It only means that, heuristically, you could argue there’s a 0.01% chance my explanation is flawed enough to merit withdrawing, if my paper is as likely as the average paper to be withdrawn.
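Randall’s distinction falls out of the decomposition the paper builds on: P(event) = P(event | argument sound)·P(sound) + P(event | argument flawed)·P(flawed). A minimal sketch, with every number invented for illustration (none come from the paper):

```python
# Toy decomposition of an event probability over whether the safety
# argument behind it is sound. All numbers are invented for illustration.
p_flawed = 1e-4           # chance the published argument has a fatal flaw
p_event_if_sound = 1e-9   # probability the argument itself assigns
p_event_if_flawed = 0.01  # chance of the event given the argument fails;
                          # a flawed proof is NOT proof of disaster,
                          # so this stays well below 1

p_event = (1 - p_flawed) * p_event_if_sound + p_flawed * p_event_if_flawed
# The flawed-argument term (~1e-6) dwarfs the argument's own estimate
# (1e-9), which is the paper's point; but p_event is still far below the
# bare 1e-4 flaw rate, which is the point above.
```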

Another key qualification is that not all arguments are equally likely to be flawed. There are lots of reasons I’m unlikely to be impaled by a pink unicorn, and many of them don’t require especially tricky logic or measurements: we’ve never seen a unicorn; there’s no special reason to anticipate that changing; and most people manage to avoid impalement on the horned animals that do exist. (I bet you never expected that sentence to appear in your blog comments.) When there are several alternative arguments that each prove that something won’t happen, and when they’re based on old and reliable principles instead of tricky calculations or measurements, we can be more confident.

There may be several independent arguments that the LHC is safe and some may be based on simple principles — no physical model predicts black holes, small or big; tiny black holes would disappear quickly; similar stuff happens in the upper atmosphere all the time. Then we can be much more confident that the LHC is safe than we are in the average physics paper’s conclusion, which might rely on shaky measurements or complicated, fragile reasoning. Perhaps the paper was written because there was popular worry, not because there was a serious case to be made that the LHC would destroy the world. There was a similar worry before we set off the first atomic bomb test — of course, that turned out not to ignite the atmosphere as people feared. Turned out we had more to worry about from a social and political angle than a physical one!

@ Randall

excellent points. reminds me of the flaw in google search algorithms. initially modeled on peer review, but then automated peers became the problem — exponentially untrustworthy…and so we’re back to using a combination of the old methods we started with to find pages, as well as judge and mitigate risk.

Rob Adams February 2, 2009 2:44 PM

The authors blithely dismiss the dangers of dropping pencils, but I can easily apply their argument.

I hereby present my theory of the pencilet, which states that the ground state of all matter is actually a pencil, and that if you drop a pencil on the ground fast enough, all matter in the universe will be converted into pencil.

Sure, all your so-called physicists will claim that this is a pile of horseshit, but if we arbitrarily assign a probability of 10^-4 that their arguments against pencilets are wrong, and then arbitrarily assign a probability of .1 that if they’re wrong about pencilets, then pencilets are real, it looks like a chance of 10^-5.

But the consequences of this being wrong are 6 billion human deaths! The expected number of human deaths is in the tens of thousands!
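Running Rob’s parody numbers through the expected-value arithmetic (a sketch; the 10^-4 and 0.1 are his own arbitrary assignments):

```python
# Rob's parody numbers, taken at face value.
p_rebuttal_flawed = 1e-4    # arbitrary chance the physicists are wrong
p_pencilet_if_flawed = 0.1  # arbitrary chance pencilets are then real
deaths_if_pencilet = 6e9    # everyone on Earth

p_doom = p_rebuttal_flawed * p_pencilet_if_flawed  # 10^-5
expected_deaths = p_doom * deaths_if_pencilet      # tens of thousands
```

Tens of thousands of expected deaths from an argument everyone agrees is nonsense, which is exactly the reductio: a methodology that lets an arbitrarily chosen prior drive the answer proves too much.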

John Cairns February 2, 2009 3:07 PM

Give yourself as much latitude as you would like to prove something, if you still can’t prove it, you haven’t got anything.

The problem with catastrophic risk is that the probability estimate should include an ‘insurance’ offset for the worst-case scenario. In other words, some events are so heinous that we may wish to avoid them even if they are highly improbable. This is where statistics break down.

John

Randall February 2, 2009 3:16 PM

• Whoever makes the real decision to switch on the LHC had better know physicists’ opinions better than I do; all I know is what I read in the newspapers. We can prove the LHC safe as a matter of method, but there’s the small matter of checking that we in fact have.

• Since the investment is trivial even if the risks are trivial as well, it might be worth tasking smart physicists with taking on a paranoid mindset and looking for credible arguments that the LHC could be unsafe. Teller, a physicist who was literally a bit paranoid, did exactly this before the first A-bomb test. (Unfortunately, he went on to father the H-bomb, which probably created more risk to civilization than he mitigated with those A-bomb test safety checks.)

• Rob Adams said it better than I did.

• Speaking of probabilities, one in six objects is a scone: http://xkcd.com/452/ (alt text)

• And Bruce would love today’s xkcd: http://xkcd.com/538/

When I first read the authors list I misread the last author as Awesometown and SNL star Andy Samberg, a comedian. Whoops.

Rob: please read the paper before you criticize. It’s not a fearmongering paper — at least, not as I see it. It’s just asking people to think critically when they read papers that estimate low-probability events. There’s no real shame in that.

Even more endearing about the article is that it highlights anthropic conditionals, which are probably one of the most delightful and bizarre mathematical notions to come from modern cosmology. More on them can be read in Scott Aaronson’s wonderful lectures on the matter:

http://www.scottaaronson.com/democritus/lec17.html

While some of the points are certainly worth considering, I don’t think it’s particularly good as a paper. The obvious problem is that the probability of an argument being wrong and the probability of disaster if that argument is wrong are nearly impossible to determine.

If you want to be pedantic about statistics, there are other significant priors they’ve ignored. The LHC safety papers, for example, have been subject to significant scrutiny. The (estimated) figure for argument error rate is for arguments where no scrutiny has yet been applied.

It reminds me of an often-repeated statement about why, in the search for extraterrestrial life, we’re so heavily biased toward Earth-like life: as we have no way of determining what alien life would be like, there’s no way to search for it. Likewise, since there’s no way to actually determine the likelihood of an event in the absence of an argument, there’s no real analysis that can be done here.

Or, to cut to the core of the argument: systematic uncertainties are hard to estimate and may well be large compared to the very small thing you’re trying to estimate. This is true, and it’s something scientists should be reminded of every now and then (especially if they leave the field and go to work for a hedge fund), but it’s not a new insight — especially to particle physicists.

Carlo Graziani February 2, 2009 4:35 PM

Having read the paper now, I can’t see that it states anything useful, in the sense of actually furnishing tools that might assist risk analysis. In fact, in a sense, the argument in the paper makes things worse.

A great deal of the analysis has a “Drake Equation” flavor: an arbitrary hierarchy of essentially unquantifiable conditionals is assembled, and their unknowable probabilities are chained together through Bayes’ rule to produce illustrative but utterly uninformative answers.

So far so bad, but the part that makes things worse, in my opinion, is the insistence on conflating all catastrophic risks into one discussion, so that meteor strikes are presented on the same continuum as events from speculative quasi-physics.

There is a difference between assessing risks from events for which evidence (AKA “data”) exists, and risks from events for which there is no evidence at all. When even small amounts of data (even single events) are available, the world actually pushes back upon our imaginations, and real bounding of the possibility space can occur. When there is no data (or at least no data directly relevant to the alleged risk), then the uncertainty at the end of the calculation is coterminous with the uncertainty at the outset. We’ve learned nothing, except the extent of our own imagination, which is considerable.

There are many valid reasons to argue about the risk from earthquakes, hurricanes, bombs, meteors, etc. There are many reasons why anyone’s assessment of such risks might be wrong. But we may at least hope to adopt some methodology that could converge to a reasonable answer, because these things actually happen in the world, with some regularity, and as such their regular properties may be statistically inferred.

But there is no conceivable reason to worry about “events” dreamed up by string theorists. They’ve never made a single observational prediction (except that the world is 10-dimensional — a “prediction” that was swiftly hustled off the main stage, after a quick count), let alone one that was observationally confirmed. Accepting risk candidates from people this contemptuous of data makes no sense.

To believe that an event someone dreamed up but nobody has any evidence for is worth discussing because its consequences might be horrible, is tantamount to accepting Pascal’s wager — might as well believe in God, because if you don’t and he exists, you’ll burn in hell for eternity. It was a fallacy then, and Pascal was just inventing probability theory. What’s our excuse now?

Clive Robinson February 2, 2009 6:50 PM

Having read the paper, I’m reminded of something about physics that I was told many, many years ago when I was a student.

“Physics is based on a series of approximations, each one a little closer to reality.”

What a lot of people forget is that maths is not reality; it is used to model reality, and the model is refined by the results of experimentation.

There is only so far you can build a model without experimental confirmation before its lack of contact with reality is sufficient reason to doubt its validity.

Domain experts usually have theoretical models that are in advance of experimental reality; sometimes they are valid, sometimes they are not.

The expert has no way, other than self-belief, of knowing whether their model is correct or not.

Using such models to make predictions is neither right nor wrong; they are just the end product of a chain of reasoning whose accuracy is unknown.

Oh, and one of maths’ little failings is that it is by and large based on the assumption of true and false; that is, it is bivalent in origin. The assumption is that a coin will land either heads or tails, but what do you do when the coin you are experimenting with lands on its edge?

Filias Cupio February 2, 2009 10:42 PM

Off topic: the latest (Feb 2) entry at http://www.cringely.com/ is on countermeasures against insider sabotage by people with root access. It looks fairly sensible to me (except maybe the monthly-changing passwords), but I’m only an amateur at this.

Here is my quick summary:
You don’t share one ‘root’ account: each admin has their own privileged account. Everything done in the privileged accounts is logged. Scripts scan the accounts to make sure no new ones are created in violation of the system. HR have the ability to cancel/suspend accounts, which they do before firing someone.

As an aside, back when I had root access in a medium sized IT company, I considered (as an intellectual challenge) how to abuse it, and came up with this:
1) Change the backup system so that it transparently encrypts backups when writing and decrypts when reading: no visible change, unless someone tries to read a backup on a computer which I haven’t hacked.
2) Wait a year
3) Wipe all the computers (including the hacked backup-decryption software)
4) Not only have they lost all their current data, the last year of backups are unreadable. Offer to sell them the decryption key for lots of money.

Randall February 2, 2009 11:51 PM

Failing to use the LHC could possibly cause human extinction. How? Physics learned at the LHC could enable new technologies or bring them about sooner, and those technologies could end up critical to human survival.

Obviously physics advancements were crucial to developing things like nuclear power and information technology. Now imagine we someday need interstellar travel or a new energy source or just continuing economic growth to stave off war. It’s entirely possible that having that in time vs. not having it depends on whether we fired up the LHC back in the 21st century.

Chances of that? I don’t know. But they’re nonzero, and that’s enough to affect the calculations.

Pity that in all probability, they’ll fire up the collider and the world won’t end, and that won’t get us any closer to an answer about which ethical/methodological/philosophical argument was right. 🙂 We need some kind of philosophy-of-science collider to sort it out.

Randall February 3, 2009 12:00 AM

Rereading that, one key clarification. As a resident of the world, and someone who rather likes the place, I must say it’s great that the world won’t end.

What’s a pity is that the LHC won’t tell us anything about how we should have estimated the risk of firing it up, or what the correct response to that risk was.

Randall says:
Since the investment is trivial even if the risks are trivial as well, it might be worth tasking smart physicists with taking on a paranoid mindset and looking for credible arguments that the LHC could be unsafe.

You might not consider this credible, but if the LHC doesn’t find the Higgs or new physics, then it might be:

http://www.lns.cornell.edu/spr/2006-02/msg0073320.html

The bottom line here is that tension between the vacuum and ordinary matter increases as particle creation drives vacuum expansion, so any little simulation of the big bang could at some point in the history of the universe produce a needle/balloon effect.

Or not. I want nothing more than to see the LHC do its thing and come up empty.

Anyway, the physics is explained in more depth here:

http://dorigo.wordpress.com/2007/10/18/

corvi42 February 3, 2009 7:14 AM

@Randall et al.

Clearly the base rate of paper retractions is a naively simplistic estimate of the probability of a bad theory. However, you must take the whole paper in the context the authors propose: dealing with incredibly unlikely events. In that case, the probability of getting the theory wrong is far larger than the probability estimated by the theory; that’s the key point. You are quite right that this does not make the event itself more likely. However, that conclusion is contingent on the idea that all reasonable alternative theories predict a probability of the event of a similarly tiny order of magnitude.
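corvi42’s contingency can be sketched as a mixture over rival theories. In this toy example (all credences and event probabilities invented), the answer is dominated not by the flaw rate of the leading theory but by whether any plausible rival predicts a non-tiny probability:

```python
# (credence in theory, probability of the event under that theory);
# every value is invented for illustration only
theories = [
    (0.980, 1e-12),  # mainstream theory
    (0.019, 1e-10),  # minor variant, also predicts "essentially never"
    (0.001, 1e-3),   # fringe rival that predicts a non-tiny probability
]

p_event = sum(credence * p for credence, p in theories)
# The 0.1% credence in the fringe rival contributes ~1e-6,
# swamping the mainstream prediction of 1e-12
```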

I think what the others are trying to say is that without evaluating other reasonable theories, there is no way of knowing that.

Physics dude February 3, 2009 7:34 AM

If there could be any life-threatening issue with the LHC, then the universe would not be here. It is that simple. We don’t need to understand the physics to know this.

The cosmic rays that hit our atmosphere, and every other body in the galactic plane, can often have very high energies, thousands of times higher than what the LHC can produce. This has happened every day (many times a day) for 4.5 billion years in Earth’s case, and longer in the case of the observable universe.

The only assumption we have made is that the laws of nature are the same in the upper atmosphere as they are in the LHC; or, extending that, that the laws are the same across the observable universe.

Fred P February 3, 2009 9:52 AM

@island-

You’re overestimating the force that the LHC can produce by many orders of magnitude. Here’s my test for whether the LHC will cause the world to end:

Look at the moon.

Do you see it? Is it still there? Then you don’t have to worry about the LHC causing the world to end. The moon is hit by higher energies (via cosmic rays) frequently enough that it would be gone already if collisions like these were world-destroying.

Here’s a more detailed analysis:

http://particle-physics.suite101.com/article.cfm/can_the_cern_lhc_destroy_earth

If you don’t like to click links, here are the basic facts:

1) The power dissipated in LHC collisions should be around 1000 watts. My company uses more power in lighting alone (as, very likely, will the LHC).
2) The energy density in the LHC would need to be over 20,000,000,000,000,000 times higher to even think about generating a black hole (never mind a “big bang”). Actually it would need to be quite a bit higher (since the base calculation assumes a stellar mass), but when the number is that far off to start with, why worry?
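For scale, a standalone unit conversion (this comparison is mine, not from Fred P’s link): even the full 14 TeV of an LHC collision is a macroscopically tiny amount of energy.

```python
EV_TO_JOULES = 1.602e-19  # joules per electronvolt (elementary charge)
lhc_collision_ev = 14e12  # nominal LHC center-of-mass energy, 14 TeV

lhc_collision_joules = lhc_collision_ev * EV_TO_JOULES
# ~2.2 microjoules per collision: famously, roughly the kinetic
# energy of a flying mosquito
```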

Filias Cupio February 3, 2009 11:46 PM

Physics dude and Fred P: It isn’t so simple.

(1) An incoming cosmic-ray proton with energy 10^18 eV (say) colliding with a stationary proton doesn’t make 10^18 eV of energy available for creating exotic particles or mini black holes, because whatever is produced has to travel at the speed of the center of mass of the two-proton system. From distant memory, the energy available (the center-of-mass energy) in beam-on-stationary-target experiments like this grows only as the square root of the beam energy. Hence that 10^18 eV proton makes only a few tens of TeV available, the same order of magnitude as the LHC’s 14 TeV rather than the million-fold margin the raw number suggests.
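The square-root scaling in point (1) is the standard fixed-target formula E_cm ≈ √(2·E_beam·m_p·c²), valid when the beam energy dwarfs the proton rest energy. A quick check of the numbers (the formula is standard; the comparison against the LHC’s 14 TeV is mine):

```python
import math

PROTON_REST_EV = 0.938e9  # proton rest energy m_p * c^2, about 938 MeV

def cm_energy_fixed_target(e_beam_ev):
    """Center-of-mass energy for a beam proton striking a proton at rest,
    in the ultra-relativistic limit E_beam >> m_p c^2."""
    return math.sqrt(2.0 * e_beam_ev * PROTON_REST_EV)

e_cm = cm_energy_fixed_target(1e18)  # a 10^18 eV cosmic-ray proton
# ~4.3e13 eV, i.e. ~43 TeV: the same order as the LHC's 14 TeV,
# nowhere near the naive 10^18 eV
```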

(2) If a cosmic ray collision created a potential world-killer mini-black hole, it would be traveling too fast to be captured by the earth’s gravity. In a collider, it could be created stationary.

(I am a former Physics Dude. I haven’t read the LHC safety analysis, but whoever wrote it must have been aware of these facts.)

MysticKnightoftheSea February 4, 2009 12:15 AM

I would have said “dingo’s kidneys”, but that’s me.

MKotS

another bruce February 4, 2009 1:14 AM

black holes are fascinating things, i’ve followed their evolution in the literature from theory to fact.

we’re all gonna die one way or another. some of us will have heart failures, others will be struck by vehicles.

if i had a choice, i’d be hard-pressed to pick a better way to go than to be sucked into a black hole. the satisfaction of long-standing scientific curiosity at the moment of death.

so i say, turn that large hadron thingie on and rev her up. apres moi, le hole noir. maybe we’ll come out the other side of something.

Guiro February 4, 2009 7:19 AM

While I agree with the authors’ main point (don’t take a probability at face value), the paper is of questionable quality.

They should have just stuck to a 1-2 page short article, or something giving an intuition for why taking a probability at face value is flawed. The rest of the paper is just filler and leaves the reader in the dark on how to “probe the improbable”.

