Comments

Steve Loughran November 14, 2008 7:06 AM

We actually have proposed a 0-10 scale for “systemic failure”; 0 being “a server reboots but you don’t notice” and 10 being “accidental end of humanity”.

What’s interesting in this scale is where you put the interim things. Loss of a car and 4-5 lives due to a system failure is heartbreaking for friends and family, but usually minor for humanity as a whole, and a risk that society is prepared to accept in exchange for being able to drive everywhere. But what about the space shuttle? Why was the loss of that more traumatic for the US? Similarly, where does the collapse of the global financial system get rated? Above or below “loss of small country”?

From a software engineering perspective, it’s a reminder of what the consequences of failures can be. And it emphasises that projects with high risks on the scale need to be treated differently from others, because we can’t afford to pay the price when they fail.

Steve Loughran November 14, 2008 7:09 AM

I should add that the current title is the “Loughran-Guijarro Scale” after the two authors, and it is listed at the link above. I really think the unit should be the Parnas, after David Lorge Parnas, whose mid-1980s treatise on the implementation risk of Star Wars should be required reading for everyone who works on big projects.

A nonny bunny November 14, 2008 7:34 AM

I think human-civilization extinction is much more probable than human-species extinction, which can also serve as an answer to the Fermi paradox (although it’s not as if there is a shortage of answers to it).

I can’t say I’ve noticed a lack of publication on the subject though. But perhaps I hang out with the “wrong” crowds.

ax0n November 14, 2008 8:03 AM

Thanks for the laugh. Short of a sudden earth-becomes-inhabitable catastrophe, I think the human race will probably be just fine. I don’t know how to say this without coming off like an asshole, but the world could use a few more large-scale disasters. There are far too many people on this planet. Yes, I realize that I could lose my own life, or the lives of those I love, as a result of said disasters.

ax0n November 14, 2008 8:04 AM

and I meant “earth-becomes-uninhabitable” – now where the hell is my coffee? Lack of coffee is at least a 2 on the 0-10 scale Steve talked about in the first comment.

Grahame November 14, 2008 8:13 AM

“In this century a number of events could extinguish humanity”

Oh? Is it just me who thought that this has evidently already happened long ago in history, and been repeated ad nauseam?

Carlo Graziani November 14, 2008 8:54 AM

What a vacuous paper. A bunch of uninformative prior probability distributions are compounded together. Uncontaminated by any actual data, they are in no danger of actually issuing forth an informative posterior probability distribution. To press!

“High-Energy physics experiments” as a risk factor to humanity. This is risk analysis? So long as we’re mining science fiction instead of actual science, how about invasions by homicidal, all-powerful alien fleets? Shouldn’t we worry about those?

It’s hard to believe this thing got past peer review. I guess the standards of intellectual rigor at “Risk Analysis” are very forgiving.

Mailman November 14, 2008 9:53 AM

“Reducing the Risk of Human Extinction. Not a threat people think a lot about.”

People think about risks that affect them personally, in order to avoid dangers that have negative consequences for them. Each individual has their own survival instinct; survival of the human species is coded only at the global scale.

The evaluation of risk and the necessity for security controls is very relative. A lot of risk-mitigating controls are calibrated to a benchmark of comparables: I don’t need to have the best home security system in the world; I just need to have a security system that is more efficient than my neighbors’. Or if you prefer, “I don’t need to outrun the bear, I just need to outrun you.”

The risk of human extinction is perceived the same way. “Sure, I will die, but so will everyone else. My exposure to this risk is identical to everyone else’s.” In today’s competitive world, this quickly becomes, “This risk is nonexistent.”

Sunshine November 14, 2008 9:54 AM

There is an end of physical life for all humans. If the endpoint for all currently-existing humans falls within a narrow enough time frame — probably about ten years — then humans will become extinct. The window would have to be small enough that the species could not survive through offspring born within that period.

One could imagine quite a number of things that could cause that, and few things that we could actually do about it. So, trying to produce formulae to model associated risks is somewhat pointless.

Earth will eventually become uninhabitable for humans.

Have a nice day.

kangaroo November 14, 2008 10:05 AM

How can you apply risk analysis to an infinite loss? This is the kind of stupidity that leads to our screwed-up economic system/academia — people apply statistics that are perfectly good and fair in some limited domain (say, finite losses), and then go outside of where the mathematics has any meaning!

You can’t apply risk analysis to everything. Because you have a hammer, not everything is a damn nail.

Milan November 14, 2008 10:06 AM

At least some people working on climate change policy think about the possibility of human extinction fairly often.

Canadian climatologist Andrew Weaver has said that: “[U]nless we reach a point where we stop emitting greenhouse gases entirely, 80 per cent of the world’s species will become extinct, and human civilization as we know it will be destroyed, by the end of this century.”

Milan November 14, 2008 10:10 AM

From the Stern Review:

“The evidence shows that ignoring climate change will eventually damage economic growth. Our actions over the coming few decades could create risks of major disruption to economic and social activity, later in this century and in the next, on a scale similar to those associated with the great wars and the economic depression of the first half of the 20th century. And it will be difficult or impossible to reverse these changes…

Under a BAU scenario, the stock of greenhouse gases could more than treble by the end of the century, giving at least a 50% risk of exceeding 5°C global average temperature change during the following decades. This would take humans into unknown territory. An illustration of the scale of such an increase is that we are now only around 5°C warmer than in the last ice age.

Such changes would transform the physical geography of the world. A radical change in the physical geography of the world must have powerful implications for the human geography – where people live, and how they live their lives.”

peri November 14, 2008 10:19 AM

I am still reading it, but my thoughts on “2. Humanity’s Life Expectancy” are that the human genome project represents the start of an “explosion” of biotech advances over the next 50 years. I hope soon enough to install a backup heart:

Growing living-tissue heart valves a reality in 3-5 years: study
http://findarticles.com/p/articles/mi_kmafp/is_/ai_n19496374

The only people who will be strictly Human 50 years from now will probably be technophobes who choose not to take control of their physiology.

denis bider November 14, 2008 10:31 AM

The reason these risks are not being looked at is that they are of negligible interest to any individual. To an individual, an outcome of planetary annihilation is not much worse than one’s own death. Yet the chances of one’s own death are way higher than the chances of planetary annihilation. Hence, it makes more sense for every individual to focus on prolonging one’s own life – and many don’t even care for that.

Plus, 50% of the population live in a fantasy world where the Earth was created a few thousand years ago, and where, if anything happens on a large scale, it is unavoidable because God makes it happen.

Roxanne November 14, 2008 10:42 AM

What do you mean, “Not a threat people think a lot about” – there’s at least one new movie per month that deals with the imminent demise of the human race. Add in shows like “Heroes” and “Supernatural” and you’re up to multiple plots every week. Add in the scripts that never do get produced … I’d say folks are thinking about it quite a lot!

It’s our personal extinction that’s more critical. Sorry, but “all those folks over there” going extinct doesn’t bother me nearly as much as “total extinction of this one individual” does, for varying values of this one.

Frankly, I want to be standing on top of Yellowstone when it goes ballistic. Get it over with fast, you know?

denis bider November 14, 2008 10:43 AM

@peri (10:19): Good luck with that. Medicine is still in the dark ages, and the human genome is the ultimate in obfuscated code. Our best computers cannot even figure out the shape of proteins in a reasonable time. First we need a simulator that can simulate the execution of the genome, then we need to rewrite the genome to de-obfuscate it, and then we can improve it. Before the results are ready for use in practice, chances are that we’ll be dead and gone, and I’m only 28.

peri November 14, 2008 10:46 AM

“5. Discounting”

I wanted to comment that there is a version of Moore’s law for technological advance in general. If it were a doubling of capability and a halving of cost every three years, then shouldn’t we be discounting based on the idea that whatever we do now to protect ourselves, the future population will be able to do twice as well and at half the cost? (A toy calculation along these lines is sketched below.)
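A quick toy model of that argument (my own illustrative numbers; the three-year doubling period is this comment’s hypothetical, not anything from the paper) shows how fast the deferral bonus compounds:

```python
# Toy model of "technological discounting": assume protective capability
# doubles and its cost halves every `doubling_period` years. A dollar
# deferred by t years then buys 2**(t/p) times the capability at 2**(-t/p)
# times the cost, i.e. 4**(t/p) times as much protection per dollar.

def future_dollar_multiplier(years: float, doubling_period: float = 3.0) -> float:
    """How much more protection a deferred dollar buys, under the toy assumptions."""
    n = years / doubling_period
    return (2.0 ** n) * (2.0 ** n)  # capability gain times cost reduction

for t in (3, 9, 30):
    print(f"deferred {t:2d} years: {future_dollar_multiplier(t):,.0f}x protection per dollar")
```

On these assumptions, a dollar deferred thirty years buys roughly a million times the protection, which is the intuition behind steeply discounting present-day mitigation spending.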

Mikey November 14, 2008 11:40 AM

Using expected value as a metric in these computations is silly. After all, if a humanity-ending event does take place, no one would be around to care. The actual end cost of such an event would be zero.

If the realized cost of the event is zero, then we should spend zero attempting to prevent it.

peri November 14, 2008 11:42 AM

“7. Example: The Cost Effectiveness of Reducing Extinction Risks from Asteroids”

“Suppose humanity would, in the absence of preventable extinction events during the century, survive as long as our closest relative, homo erectus, and could thus survive another 1.6 million years (Avise et al., 1998).”

For better or worse, humanity effectively controls its own environment collectively, so the results of the cited study are pretty useless for estimating how long humanity will survive.

Seriously, how much bearing could the median duration of previous mammalian species have on a species with the potential to colonize space?

The author even argued in an earlier section “if we survive the next century, we are likely to build self-sufficient colonies in space.”

peri November 14, 2008 11:53 AM

@denis

I agree the time for reaping the full fruits of the genome project is fairly distant in the future. In the meantime science is confronting the problem of growing tissues and organs artificially right now and the article I linked to was supposed to suggest that medicine should be able to help us live much longer than you might expect.

Bernie November 14, 2008 12:08 PM

Nick said, “On a long enough timeline, the survival rate for everyone drops to zero.”

What about Schrödinger’s Cat, if we keep the box closed? 🙂

PS: I stopped reading the article when I got to “a high energy physics experiment could go awry.”

peri November 14, 2008 12:19 PM

@Bernie

That line wasn’t so bad by itself. If you had kept reading you would have encountered the even more damning:

“There is currently no independent body assessing the risks of high-energy physics experiments. Posner (2004) has recommended withdrawing federal support for such experiments because the benefits do not seem to be worth the risks.”

Andrew Garland November 14, 2008 12:19 PM

The science lecturer finished up his talk about the life and death of the Sun by asking for questions. A man in the back waved excitedly.

Man: Professor, did I hear you correctly? You said that the sun was going to eventually burn through its nuclear fuel and explode, burning the Earth to a cinder. How long will that be?

Prof: About 4 billion years.

Man: What a relief. I thought you said 4 million years.

David Harmon November 14, 2008 12:25 PM

The thing is, there isn’t a single “chance of human extinction”, there’s a bunch of them, corresponding to different scenarios, and they have different likelihoods and trade-offs.

At one end, vacuum decay from a particle accelerator is implausible on general principles — there are plenty of ultra-high-energy events within the visible universe, and there’s simply no indication that the universe as a whole is that “brittle”.

Asteroid impacts are certainly a possibility, but watching for large objects on earth-impact orbits is not only practical, but comes “almost free” with our existing astronomical programs. Diverting such an object would be difficult, but straightforward.

One can similarly go through biotech disasters (nasty potential, but it’s harder than it sounds to make something that’s universally lethal but won’t “burn itself out” within a small (nukeable) area), global nuclear war (not so likely as it used to be), and so on.

But along the way, we hit one that’s different — global climate change isn’t a certain doom, but it does have significant potential to wipe out our civilization, and perhaps our species. More to the point, it’s not just imminent, but already in progress! Unfortunately, it’s also one of the toughest scenarios to avert….

Reuben Hemmings November 14, 2008 12:31 PM

@Mikey

“no one would be around to care. The actual end cost of such an event would be zero … we should spend zero attempting to prevent it.”

If the extinction event is instantaneous, I wholeheartedly agree. I’ll even go further – it’s not clear to me that the presence of humans makes the Universe (or Earth) a better place.

But if our extinction were a drawn-out process, I’d hate for my kids to be alive during humanity’s tortuous demise.

Mailman November 14, 2008 12:32 PM

@ kangaroo
“How can you apply risk analysis to an infinite loss?”

My thoughts exactly, only beautifully summed up in one simple question.
Bravo, Sir, Madam or Marsupial.

Skorj November 14, 2008 3:12 PM

Wow, hard to judge whether the science was worse in that article or from the “scientists” Milan was quoting. (Off topic, but: CO2 levels have been 6-10 times as high in the past, and the planet was perfectly habitable – also, we’re currently in an ice age, we just happen to be in a warm spell, and temperatures were significantly more than 5°C cooler in the cool periods.)

Carlo Graziani November 14, 2008 3:42 PM

I just remembered what this is: it’s a version of Pascal’s Wager — the 17th-century argument from expectation that since the gain from eternal salvation is infinite, one should wager that God exists no matter how small the probability we ascribe to God’s existence (so long as it’s non-zero).

Here the infinity has been transferred to the loss instead of to the gain, but it’s the same degenerate argument otherwise. It doesn’t matter how small the probability: if there’s even a tiny chance that something will wipe out humanity, we should drop everything and deal with it.
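A minimal sketch of that degeneracy (my own formalization, not Matheny’s or anything from the paper):

```python
# With an unbounded loss, expected-value reasoning collapses: any nonzero
# probability times an infinite loss is infinite, so it outweighs every
# finite mitigation cost and the "cost-benefit analysis" always says yes.

p = 1e-30                # an arbitrarily tiny probability of catastrophe
loss = float("inf")      # "loss of humanity" treated as an infinite loss
mitigation_cost = 1e12   # any finite cost, however large

expected_loss_averted = p * loss                # inf, for every p > 0
print(expected_loss_averted > mitigation_cost)  # True: the wager never says no
```

Flip the sign on the payoff and it is Pascal’s argument exactly.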

So if some nutcase thinks that CERN could destroy the world, well, he’s probably wrong, but maybe there’s an infinitesimal chance he’s right, so close down the LHC. That’s what the cost-benefit analysis says, see?

Pascal’s wager is silly, but at least Pascal was writing when probability and statistics and risk analysis were new, and their use and meaning were not yet clear. But that was over 300 years ago. What’s Matheny’s excuse?

Peter E Retep November 14, 2008 8:09 PM

re Steve Loughran

Would you be good enough to post the 0-10 scale you refer to, from reboot to “systemic failure”?

Steve Loughran November 15, 2008 2:48 AM

The current draft is
1. Remote server reboots; user has to hit reload.
2. Loss of an evening’s work.
3. Destruction of a machine and the rebuildable state stored on it; rebuild effort and recovery, but no permanent loss other than time.
4. Loss of irreplaceable data (e.g. complete family photos).
5. Car crash (ABS failure, engine control accelerates the car into a wall, &c) or other loss of life.
6. Airplane falls out of the sky.
7. Loss of a small city few people care about. This is why CERN is in Geneva.
8. Loss of a city.
9. Loss of a country. The Chernobyl test could fall into this category, at least for a small country: “let’s test that our reactor doesn’t melt down when we turn off the cooling, by turning off the cooling”.
10. Accidental nuclear exchange; end of humanity.

I’d put complete collapse of the financial system between 8 and 9, because without it working the western world starves. Which implies that all those people working in banking were on higher-risk projects than they thought.
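For what it’s worth, the draft transcribes naturally into code; a hypothetical sketch (the enum and function names are mine, not the scale authors’):

```python
# Hypothetical encoding of the draft Loughran-Guijarro scale quoted above.
from enum import IntEnum

class SystemicFailure(IntEnum):
    SERVER_REBOOT = 1            # user has to hit reload
    EVENINGS_WORK_LOST = 2
    MACHINE_DESTROYED = 3        # rebuildable state; the cost is time
    IRREPLACEABLE_DATA_LOST = 4  # e.g. complete family photos
    FATAL_CRASH = 5              # car crash or comparable loss of life
    AIRPLANE_DOWN = 6
    SMALL_CITY_LOST = 7
    CITY_LOST = 8
    COUNTRY_LOST = 9
    END_OF_HUMANITY = 10

def needs_special_treatment(worst_case: SystemicFailure) -> bool:
    """High-risk projects (worst case at or above loss of life) need to be
    treated differently, per the first comment in this thread."""
    return worst_case >= SystemicFailure.FATAL_CRASH
```

Then “a failure of this system would be a level 5 event” at least points at a fixed rung on the ladder rather than at a vague dollar figure.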

Steve Loughran November 15, 2008 7:05 AM

Note that on our scale, High Energy Physics experiments come in at a 7 if all CERN does is make Lake Geneva slightly bigger; we can handle that, as long as Zurich, where the banks live, isn’t affected. Unlike the authors of the original paper, I have spent time at CERN and have more confidence that when The Beam goes live there isn’t that much risk, but we still used to look out the window to make sure Geneva was still there, just in case. The joke being, of course, that if the physicists had screwed up, we wouldn’t be around to look out the window.

Normal companies measure risk in terms of financial liability, but that doesn’t work, because the maximum exposure of a limited company is its net worth. Screw up beyond that and you go bankrupt. But society can be damaged by far more than your net worth: look at Lehman Brothers. Similarly, governments can manipulate the value of money, and have to deal with the consequences of other people’s mistakes. You could just use lives lost as a metric, but beyond the raw numbers (and the minefield of counting future lives lost), there’s quality of life too. So a vague, handwaving “a failure of this system would be a level 5 event on the Loughran-Guijarro scale” is a better way of putting things in perspective.

Which is presumably why the venture capitalists are backing off from my nuclear-powered fly-by-wire airborne hospital project.

neighborcat November 15, 2008 8:37 AM

I’m disturbed that this supposedly intellectual essay is predicated on the assumption of the intrinsic value of human existence.

Some may find it logically inconsistent, but even as a member, I don’t like humanity. We are noisy, invasive, and arrogant, eventually destroying everything we touch, throwing fast food containers out the window as we do it. If we judge ourselves as a species against the values we profess, we are miserable, hypocritical failures.

Mind you, I don’t spend a great deal of time pondering this on a daily basis, as a pissy, misanthropic attitude is a hindrance to my genetic directive to reproduce as often as possible.

Sure, I value plenty of individual humans, and my genes have pre-programmed my so-called intellect to protect their own vehicle and assist others (but only in direct proportion to the probability that they carry copies of my genes). But if I step back from the paradigm of genetic preservation, I know I have no intrinsic value or purpose. All assessments of value are relative. Only individual humans value humanity; as far as we know there is no one/nothing else around to care.

If I found out humanity would cease to exist tomorrow, it wouldn’t bother me in the least, as long as I have time to hug my daughters, time for one last pizza, a good f**k, and a cigarette.

NC

a nonny bunny November 15, 2008 4:27 PM

If human civilization were to collapse, it would pose a bit of a problem to ever recover. We’ve nearly used up all the easily accessible fuel reserves that have driven our civilization for the last two centuries. It’s not something future generations could do over.
And we have to consider how (supposedly) the Mayans and Easter Islanders came to their end, deforesting themselves out of civilization/existence.
And it’s a bit more likely to happen than extinction.

Jurjen November 17, 2008 6:26 AM

If you think about those kinds of risks, it is a bit shortsighted to look at them from the human side only. After all, we won’t be there after the occasion.
For the lifeforms that do survive the “accidental loss of humanity”, it might actually be a relief.
It is a bit bitter, but on the positive side, the loss is no longer infinite, so we can now calculate without infinities.
🙂

John November 17, 2008 3:20 PM

I once had a (young) auditor ask me, also a (veteran) auditor, quite an interesting question regarding our BCP (business continuity plan). We have very good data-loss mitigation strategies, including frequent backups housed well over 200 miles apart (both in major cities). His question was: since both are major cities and potentially enemy targets, what would happen if both cities were destroyed at nearly the same time? I looked at him for a moment, and I said, “If that happened, no one would much care about our data anymore.” (Anyone who would care would likely be dead, and anyone else who might care would have bigger problems.)

Perfect is the enemy of the good. Some disasters are too catastrophic to recover from. And unless humanity depends on it, it’s a waste to try.

moo November 17, 2008 9:24 PM

@Milan: I’m not worried about anything that will happen to the earth more than 40 years from now.

I fully expect us to have functioning MNT by then ( http://en.wikipedia.org/wiki/Molecular_nanotechnology ), possibly a lot sooner. Not only will our pollution and waste levels go down quite a bit, but we will have the technology to repair most of the damage we’ve already done.

Now, on the off chance that we are not able to get MNT working by then, or that we are too short-sighted or downright stupid to apply it intelligently to solve the world’s problems at that time, we will richly deserve the extinction that awaits us. So I’m not going to get too worried about it.

RH November 18, 2008 5:51 PM

Looks like I’m late to the party, but I’d just like to share my two cents.

Solving the problem of human extinction is a superorganism problem. Humanity doesn’t care about humans – just itself. Avoiding extinction is something which concerns Humanity.

For the most part, our brains have developed in a way such that caring about humans is nearly optimal for caring about humanity, but when we start dealing with things like this, that is no longer true. There is simply no way to assign dollar values to the survival of humanity… there’s no one for humanity to give the dollars to in order to do the work!

The only argument that I read in the article that made rational sense was the investment theory – invest in the humans now, so that they can invest in humanity later. One can then use probabilities to determine how much humanity should invest in the current humans for the maximum payback to humanity later.

peri November 19, 2008 7:25 PM

Not that anyone is going to notice, but I just realized that extinction doesn’t necessarily entail a failure to survive:

http://arstechnica.com/news.ars/post/20081119-new-genome-data-raises-prospect-of-resurrecting-the-mammoth.html

To be fair, mammophants will not be mammoths, but only because of what is technically feasible right now. Still, this really underscores the point I made about comparing humanity’s life expectancy with other mammals’.

Unlike other mammals, humanity is considering which species we’d like to bring back.

Steve Martin November 20, 2008 2:55 PM

“earth-becomes-uninhabitable”

Whew! You know why people can get away with stuff like that? I’ll tell you exactly why people get away with that. Because the public has a short memory. That’s why all these big stars do these crazy, terrible things and two years later they’re back in the biz, you know. ‘Cause the public has a short memory. Let me give you a little test, okay? This is my thesis — the public has a short memory and, like– How many people remember, a couple of years ago, when the Earth blew up? How many people? See? So few people remember. And you would think that something like that, people would remember. But NOOO! You don’t remember that? The Earth blew up and was completely destroyed? And we escaped to this planet on the giant Space Ark? Where have you people been? And the government decided not to tell the stupider people ’cause they thought that it might affect– [dawning realization, looks around] Ohhhh! Okay! Uh, let’s move on!
