Schneier on Security
A blog covering security and security technology.
May 13, 2010
At a security conference recently, the moderator asked the panel of distinguished cybersecurity leaders what their nightmare scenario was. The answers were the predictable array of large-scale attacks: against our communications infrastructure, against the power grid, against the financial system, in combination with a physical attack.
I didn't get to give my answer until the afternoon, which was: "My nightmare scenario is that people keep talking about their nightmare scenarios."
There's a certain blindness that comes from worst-case thinking. An extension of the precautionary principle, it involves imagining the worst possible outcome and then acting as if it were a certainty. It substitutes imagination for thinking, speculation for risk analysis, and fear for reason. It fosters powerlessness and vulnerability and magnifies social paralysis. And it makes us more vulnerable to the effects of terrorism.
Worst-case thinking means generally bad decision making for several reasons. First, it's only half of the cost-benefit equation. Every decision has costs and benefits, risks and rewards. By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes.
Second, it's based on flawed logic. It begs the question by assuming that a proponent of an action must prove that the nightmare scenario is impossible.
Third, it can be used to support any position or its opposite. If we build a nuclear power plant, it could melt down. If we don't build it, we will run short of power and society will collapse into anarchy. If we allow flights near Iceland's volcanic ash, planes will crash and people will die. If we don't, organs won’t arrive in time for transplant operations and people will die. If we don't invade Iraq, Saddam Hussein might use the nuclear weapons he might have. If we do, we might destabilize the Middle East, leading to widespread violence and death.
Of course, not all fears are equal. Those that we tend to exaggerate are more easily justified by worst-case thinking. So terrorism fears trump privacy fears, and almost everything else; technology is hard to understand and therefore scary; nuclear weapons are worse than conventional weapons; our children need to be protected at all costs; and annihilating the planet is bad. Basically, any fear that would make a good movie plot is amenable to worst-case thinking.
Fourth and finally, worst-case thinking validates ignorance. Instead of focusing on what we know, it focuses on what we don't know -- and what we can imagine.
Remember Defense Secretary Rumsfeld's quote? "Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know." And this: "the absence of evidence is not evidence of absence." Ignorance isn't a cause for doubt; when you can fill that ignorance with imagination, it can be a call to action.
Even worse, it can lead to hasty and dangerous acts. You can't wait for a smoking gun, so you act as if the gun is about to go off. Rather than making us safer, worst-case thinking has the potential to cause dangerous escalation.
The new undercurrent in this is that our society no longer has the ability to calculate probabilities. Risk assessment is devalued. Probabilistic thinking is repudiated in favor of "possibilistic thinking": Since we can't know what's likely to go wrong, let's speculate about what can possibly go wrong.
Worst-case thinking leads to bad decisions, bad systems design, and bad security. And we all have direct experience with its effects: airline security and the TSA, which we make fun of when we're not appalled that they're harassing 93-year-old women or keeping first graders off airplanes. You can't be too careful!
Actually, you can. You can refuse to fly because of the possibility of plane crashes. You can lock your children in the house because of the possibility of child predators. You can eschew all contact with people because of the possibility of hurt. Stephen Hawking wants to avoid trying to communicate with aliens because they might be hostile; does he want to turn off all the planet's television broadcasts because they're radiating into space? It isn't hard to parody worst-case thinking, and at its extreme it's a psychological condition.
Frank Furedi, a sociology professor at the University of Kent, writes: "Worst-case thinking encourages society to adopt fear as one of the dominant principles around which the public, the government and institutions should organize their life. It institutionalizes insecurity and fosters a mood of confusion and powerlessness. Through popularizing the belief that worst cases are normal, it incites people to feel defenseless and vulnerable to a wide range of future threats."
Even worse, it plays directly into the hands of terrorists, creating a population that is easily terrorized -- even by failed terrorist attacks like the Christmas Day underwear bomber and the Times Square SUV bomber.
When someone is proposing a change, the onus should be on them to justify it over the status quo. But worst-case thinking is a way of looking at the world that exaggerates the rare and unusual and gives the rare much more credence than it deserves.
It isn't really a principle; it's a cheap trick to justify what you already believe. It lets lazy or biased people make what seem to be cogent arguments without understanding the whole issue. And when people don't need to refute counterarguments, there's no point in listening to them.
This essay was originally published on CNN.com, although they stripped out all the links.
Posted on May 13, 2010 at 6:53 AM
I bet you were popular at that conference. There is a lot of money to be made in fearmongering and then you come in talking sense and all.
If it's worth saying, it's worth repeating! ;-)
Not all worst case thinking is bad.
It is said "Nothing exists in a vacuum" and this is true of all events in three ways.
Firstly you have to have a place for an event to occur (the stage); secondly it must be appropriately set out (the stage set) for the event to be possible; thirdly you need the people (actors) who both cause and respond to the event as it happens.
That is, something can only be damaged by an earthquake if it is put in an earthquake zone.
At some point in your "disaster scenario" the probability of an event coming to fruition changes, either increasing or decreasing from the norm.
Interestingly, many "disaster scenarios" share basic commonality in the stage set and actors, which gives you "classes of events".
Now some "disaster scenarios" are highly improbable (lightning strikes) but share commonality with other more probable events (fire). Enough improbable events may make the occurrence of one event in a class probable, and thus it is worth mitigating the class of events but not any individual event.
Also the mitigation of a highly improbable event might almost come for free. That is if you have to get equipment for a specific probable event it is worth making the training to use it slightly more general to cover all the events in that class.
However, any improbable event that does not fall into any other class is generally not worth including unless you can show the probability curve is not the normal bell curve but fat-tailed or even square.
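The bell-curve-versus-fat-tail distinction above can be sketched numerically. This is only an illustration: the Pareto shape parameter and the thresholds are hypothetical choices, picked to show how differently "improbable" behaves in the two regimes.

```python
import math

def normal_tail(k):
    """P(X > k) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(k, alpha=2.0):
    """P(X > k) for a fat-tailed Pareto distribution with x_min = 1 (hypothetical alpha)."""
    return k ** -alpha

# Ten "standard deviations" out, the normal tail is astronomically small,
# while the fat tail still leaves a 1% chance of an extreme event.
print(normal_tail(10))   # on the order of 1e-24
print(pareto_tail(10))   # 0.01
```

If the true curve is fat-tailed, an event dismissed as a once-in-the-age-of-the-universe outlier under the bell-curve assumption may in fact be routine, which is exactly why the commenter's caveat matters.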
Our estimation of risk is almost always based on unrelated random events that are constrained by the laws of physics and are dealt with statistically on those assumptions.
However, in the intangible world of information the laws of physics don't apply, and thus our implicit assumptions about events and their probabilities may not apply...
For instance, "force multipliers" in the tangible world have quite specific restraints which limit what they can do and over what period of time. When it comes to virus code with a time-based payload, you can have what is effectively a worldwide simultaneous event...
For such a possibility the cost of mitigation in some forms may well be justified (that is, having a different manner of keeping time, or strict segregation, etc.).
Reason 1 ("Half the cost-benefit analysis") and reason 3 ("Can be used to justify any position or its opposite") are the same reason.
Nice essay, otherwise.
"does he want to turn off all the planet's television broadcasts "
Uh, no. That would be stupid. The chances of aliens picking up any of our current SETI, or otherwise, transmissions are very low. Hawking thinks we should be more careful when planning more targeted attempts at contact.
I would have liked more stress on the fact that worst-case thinking leads to misallocation of funds and in a rational analysis actually makes society a place less safe - even disregarding the effect it has on the collective psyche.
I would be interested if this applies to the financial markets as well. There is reason to speculate the collapse of the dollar.
The precautionary principle is essentially poor quality risk analysis.
Rather than going through a proper risk analysis exercise, which may well take a lot of time and money and involve a lot of research and input from many people they simply label every scenario 'high risk'.
Quite often this is linked to people's jobs. In other words, they're told "you're doing the risk analysis but remember: if it isn't perfect, you lose your job". Understandably, with that kind of threat, someone is going to be so excessively cautious that they'll recommend gold-plated solutions with money thrown at them.
The worst case scenario with the Iceland volcanic dust isn't that one plane will crash, it's that the entire fleet will be damaged and transportation capacity will be dramatically cut for months if not years. What will that cost in terms of lost medical care, etc.?
There's precedent for this: in the 80s or 90s a transoceanic flight over the Indian Ocean unknowingly flew through a volcanic ash cloud. The crew reported a number of weird phenomena (e.g., St. Elmo's fire causing a glow around the wings and nose of the plane) and the engines cut out. Twice. They finally landed in Australia and discovered that the plane had been sandblasted and would require extensive repairs. Instrumentation was damaged, the windshields were heavily scoured, the engines were toast, etc. Dateline NBC, if I recall correctly, has aired an hour-long show on this incident.
You can repair the damage to one aircraft quickly. But what do you do if you have thousands of planes requiring repairs? The wait lists will be horrendous.
While I certainly agree that "worst case planning" is dangerously inefficient for a large category of problems, I think that there certainly are problems whose worst-case scenario should be studied.
For example, a "trading error" appears to have caused nearly a 1000pt drop in the Dow in less than an hour. That appears to have been caused by a glitch in the system (whether the glitch was really trader-based, software-based, or policy-based remains a point of investigation). To me, this is a clear demonstration that a determined adversary could inflict great financial harm to millions of people by infiltrating the computer systems that regulate our financial markets. I would consider it irresponsible if our leaders didn't consider this possible "worst case scenario" and plan against it.
Similarly, the 2003 power blackout in the NE United States demonstrated how systemic under-maintenance and a "that couldn't realistically happen" planning attitude allows a single fault to create a cascading power failure that took down power service for millions of people.
The fact that these "worst case scenarios" do occasionally occur means that it is important for our policy makers to plan for them. Yes, it is critical that they balance the cost of mitigation against the rarity of the events, but to dismiss them out of hand as mere "movie plot threats" ignores a fundamental law of nature: over long time periods or large populations, even improbable events will eventually occur.
There is a difference between "worst-case" thinking and risk assessment based on data. Worst-case thinking is based on the absence of data, and therefore far too open to abuse either deliberate or fear-driven.
To use the trading error example, the evidence shows that a hostile entity could use glitches to cause significant drops in the Dow for a short period of time. Risk assessment would look at ways to prevent that or keep the damage from spreading.
"Worst case" thinking is that the trading error shows that an entity could inflict "great financial harm" to millions of people. That is not helpful thinking. It doesn't lead to efficient use of resources because it doesn't specify the harm precisely enough and it doesn't specify the victims precisely enough.
To use an earthquake example, risk assessment looks at data from earthquakes and suggests building codes to make sure structures can withstand certain types of forces. As we learn more we revise (or should revise) the codes.
"Worst case" thinking about earthquakes would demand that every structure everywhere in the US be able to withstand the greatest ground accelerations we could imagine. This would make construction prohibitively expensive in most of the US.
One might argue that "worst case" thinking is useful for some types of disaster. I don't think that works, because "worst case" thinking is like a virus that seems to spread from topic to topic.
Essentially, "worst case" thinking is not thinking. It's a fear-based brain process that mimics thinking but has opposite effects.
The problem with worst-case thinking is that when you're seriously considering the possibility of worst-case, the action for preventing it tends to be the second-worst case. Because in a situation where that would come up, those would be the choices: bad and worse.
That's why we're eroding human rights -- we're "preventing" the death of millions or even billions at the hands of terrorists.
Now, that *would* be a valid short-term choice *if* there really were a credible threat of that. But there isn't. And the creation of that situation would take active encouragement.
("Luckily", the governments are on the case.)
That said, I'd really like to see some of that mentality, or at least some sense of urgency, transferred to needed copyright, financial, climate, social, etc. reforms. But since none of those make big bucks for the rich and powerful (or that's what they think), there are people who are being paid to actively thwart and confuse the issues.
I think it depends on who is being asked, in what capacity, and what the response is. A worst-case scenario for me as a homeowner is that my house will burn down. Is it possible? Yes. Likely? No. Should I abolish all open flame, matches, candles, fireplaces, furnaces, etc. from a 50' perimeter of my home to prevent that? Umm, no. Does it mean that I should get homeowner's insurance and make sure my family knows how to escape the house if it is on fire? Sounds reasonable to me. Worst-case scenarios aren't invalid thinking; it's the responses to those scenarios that require skepticism...
Bruce: "The new undercurrent in this is that our society no longer has the ability to calculate probabilities."
Except that, in many cases, we simply can't. We don't have a probability distribution for fairly unique events.
In that case, doing a "probabilistic" risk assessment where the distribution is just a bullshit, made-up distribution is, in fact, worse than recognizing your ignorance.
Look up the Gaussian copulas used in pricing the mortgage derivatives that led to the banking collapse. Beautiful mathematics built on bullshit, pulled-out-of-their-ass distributions which had no connection with reality.
Almost nothing outside of the lab falls into the easy distributions we all know and love. The math is nasty, sometimes close to impossible, and pretending it's otherwise is just delusional.
Obviously, failures of oil rigs don't fit the distributions they've used to do risk assessments, now do they?
phone home ET, phone home.
"("Luckily", the governments are on the case.)"
Do I need to clarify that?
Nah, I'll do it anyway:
We all know the other name for terrorists.
If they are "threatening" millions or billions, then it's more of a case of extra-governmental democracy than a doomsday, though it may become one if escalated.
Like everything else, any "worst case" event (or just a bad one) has to be judged rationally. Most resources I read cite probability vs. impact. While important, I don't think that is enough. The impact of a comet landing on North America would be near-total destruction, which would cause it to be rated almost infinitely high regardless of the probability (as long as the probability is greater than zero).
What also needs to be included in the equation is the effectiveness of possible mitigating controls and the cost of such controls. The cost also has to consider opportunity cost -- it's not of much value to make a building that can withstand an enormous earthquake and perhaps a tsunami if the more likely event of a fire can kill everyone above a certain floor and isn't addressed because resources were wasted on the unlikely.
Using the ridiculous comet risk, factoring only probability and impact, it would mathematically be worth addressing (since a near-infinite impact times any nonzero probability is still effectively infinite). However, if we include effectiveness of mitigation (approaching zero) and cost of mitigation (approaching infinity), then the equation is more realistic and it isn't worth addressing.
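The extended equation above can be sketched in a few lines. Every figure here is hypothetical, chosen only to show how adding mitigation effectiveness and cost flips the comet verdict relative to naive probability-times-impact scoring.

```python
def mitigation_value(probability, impact, effectiveness, cost):
    """Expected loss avoided by a mitigation, minus what the mitigation costs.
    All inputs are illustrative, made-up figures."""
    return probability * impact * effectiveness - cost

# Comet strike: tiny probability, near-total impact, mitigation that
# barely works and costs a fortune.
comet = mitigation_value(probability=1e-8, impact=1e15,
                         effectiveness=1e-3, cost=1e10)

# Building fire: far more likely, modest impact, cheap and effective
# mitigation (sprinklers, evacuation drills).
fire = mitigation_value(probability=1e-3, impact=1e8,
                        effectiveness=0.8, cost=1e4)

print(comet)  # deeply negative: not worth addressing
print(fire)   # positive: worth addressing
```

Under naive probability-times-impact scoring both items would look worth funding; the two extra terms are what restore sane priorities.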
This is something that I see at work at a smaller-scale pretty continuously as well.
Just because a security flaw is theoretically exploitable, you have to spend time mitigating it (while ignoring all the input validation vulnerabilities and weak passwords that might actually be used to hack into the company).
Just because a software developer might do something wrong on a production system, every software developer is treated as if they will do something wrong on a production system (no sense of trust whatsoever, resulting in inane Chinese walls).
Just because a change affects a production system, it requires 10 days of advance notification to partners, even though there's a track record of similar changes being successfully implemented (obliterating the ability of the company to get work done, and to proactively fix problems).
As long as you've thought of the possible worst case scenario and mitigated it, you've done your job and won't get fired. The fact that you're racking up the costs to the company in wasted effort, inability to complete work, infrastructure that can't be proactively fixed, etc isn't showing up as any kind of incentive.
And generally the people who are thinking up these worst-case scenarios and mitigating them clearly feel that they're being smart. In some cases it feels like there's one-upmanship to be as worst-case as possible. It's like we've become so clever at determining possible threats that we've collectively given ourselves a case of paralyzing agoraphobia.
Any policy-side of an IT organization that is responsible for security or change management or something like that needs to first be held accountable for the company being able to do work.
> Uh, no. That would be stupid. The chances of aliens picking up any of our current SETI, or otherwise, transmissions are very low. Hawking thinks we should be more careful when planning more targeted attempts at contact.
The point was to illustrate how stupid that would be.
Betting on the future is a loser's game, but predicting the future is a good racket, if you know what people want to hear.
Your best essay in a while Bruce. Great work.
This is so brilliant and so coherently argues my often tongue-tied beliefs, especially when I am trying to argue that we are overprotecting (and stunting) our children out of a wildly out-of-whack fear of predators. Thank you for this. I hope I meet you some day. Lenore Skenazy, founder, Free-Range Kids (freerangekids.com)
I don't think the decision to ground planes due to volcanic dust was entirely safety related. It was almost certainly largely economic as well. From what I understand, a plane flying through volcanic dust actually wasn't likely to crash, but it would incur damage that would cost far more to repair than the profits from the flight.
Safety is a good excuse; one could argue that the airlines have an implied contract to transport people with tickets whether they're making or losing money on the transaction, but they don't have to do so if it will endanger lives.
Likewise, as people have said, there's a lot of money to be had from fearmongering. There are a heck of a lot of people and companies selling stuff to combat worst-case scenarios.
I would add, how refreshing (and rare) it is to see the phrase "begs the question" used correctly.
The counter example plays out before our eyes in the Gulf of Mexico. An industry dismisses as unthinkable a scenario that was perhaps inevitable.
The only panelist who didn't give a "predictable" answer was the Canadian representative. He is worried about capability and capacity to respond and a lack of redundancy: a reflection of Canada's resilience and "all-hazard" approach to national security and public safety.
If the bomb planted in a green 1993 Nissan Pathfinder SUV on the evening of May 1 had exploded, here’s what would have happened, according to retired New York police department bomb-squad detective Kevin Barry. The car would have turned into a “boiling liquid explosive.” The propane tanks that the bomb comprised would have overheated and ignited into “huge blowtorches” that could have been ejected from the vehicle. The explosion, lasting only a few seconds, would have created a thermal ball wide enough to swallow up most of the intersection. A blast wave would have rocketed out in all directions at speeds of 12,000 to 14,000 ft. per sec. (3,700 to 4,300 m per sec.); hitting the surrounding buildings, the wave would have bounced off and kept going, as much as nine times faster than before. Anyone standing within 1,400 ft. (430 m) — about five city blocks — of the explosion would have been at risk of being hit by shrapnel and millions of shards of flying glass.
You're very correct, Bruce. This "ideology" has all sorts of problems. Most "worst-case" risk analyses presume randomness is the worst-case scenario -- this is not always true.
It reminds me of high school algebra ballistics problems and of commercial cost optimization. In the former, you get two solutions: the real one and the one you ignore with hand-waving, where you never fire the cannon -- that there are two solutions is an artifact of the model being used. Similarly, you can "maximize" profit by minimizing cost, which in a naive model means never spending any money at all. Just like your examples.
There are other issues as well.
There's the "false effectiveness" shown by the Bayesian statistics you've highlighted before. It's primarily and numerically innocents that get trapped in the security tar-baby not the terrorists.
It should be pointed out that asymmetric warfare suffers from exactly the same limitation: weapons/operations targeting of insurgents in a civilian population has the same Bayesian mathematics. This is why being an insurgent is a relatively safe endeavor and why simply avoiding Afghan wedding parties is an excellent insurgent protection strategy.
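The Bayesian "false effectiveness" point in the two paragraphs above can be made concrete with a base-rate calculation. The screening numbers are hypothetical, but the structure is the standard Bayes'-rule argument: when the thing you screen for is rare, even an accurate screen flags almost exclusively innocents.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(actual threat | flagged by the screen)."""
    p_flag = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_flag

# Hypothetical figures: one attacker per million travelers, a screen
# that catches 99% of attackers and wrongly flags 0.1% of everyone else.
p = posterior(prior=1e-6, sensitivity=0.99, false_positive_rate=1e-3)
print(p)  # roughly 0.001 -- nearly everyone flagged is innocent
```

The same arithmetic applies to targeting insurgents hidden in a civilian population: the overwhelming majority of "hits" land on the innocent majority, which is the tar-baby effect the commenter describes.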
Even worse: such a Lanchester model is properly a three-body problem, with all the implications, and to add insult to injury, some forms are identical to a chaotic Lorenz equation/attractor. There's an explicit mathematical reason for the military intuition to avoid asymmetric conflict.
There's also a control systems issue with any regulation that seeks "zero tolerance": by definition such a regulatory system is seeking primarily to have zero error signal. The problem is that if this is ever or even nearly achieved (due to noise and detection resolution limits) such a system is by definition an open loop system that is utterly uncontrolled and uncontrolling rather than actually the control system imagined or sought.
You have to have some bad things happen (and detectable) to ever control/regulate the bad thing incidence. Nature is a bitch about this but there's nothing you can do to get around it. It is akin to and closely related to your thesis that risk is never absolute. Traffic and terror deaths are a requirement for controlling either. Otherwise, ignore the problem entirely - that's a better, cheaper and as-effective strategy.
And lastly, there is an implicit flaw in most "Maximum Entropy" derived risk models -- these presume randomness is the worst-case scenario, when often scale-free or even nearly scale-free causality is far more dangerous: the Black Swan. Scale-free Black Swans are a case of "risk by possibility" rather than "risk by impossibility".
Most every financial pricing and risk/insurance model presumes the critical requirement for maximum entropy: event independence.
Yet it is pretty clear that such a condition is rarely met in a globalized, interconnected world, especially when you have organizations that are too big to fail. The only way to deal with this is not to focus on the effect but rather on the structural flaws that enable scale-free cascades.
Things like Glass-Steagall are exactly the kind of "firewall" that helps to reinforce a system that actually tends toward event independence. Event independence is like linearity: everything is actually manageable when you have it, rather than mathematically impossible to manage.
This is the problem with ideologies of absolutes vs. magnitude scaled probabilities: eventually absolute assumptions run up against the Fallacy of Composition and you can no longer assume event independence.
Most probabilistic, maximum entropy risk models are only truly valid when applied to small subset cases within a larger population. Applied to the larger population the act of exploiting the risk management itself changes the system risk of the whole. We're seeing that in spades in the financial system of the world today. The assumption of event independence is destroyed.
These are all reasons why nearly everything done by government and the financial industry since 9/11 and the financial crash has been 100% wrong, and even counterproductive in the worst possible way.
In a vacuum (again, a presumption of event independence), people make money, but it's a misallocation of critical and limited resources which have a path dependence. The path back will generally be as long as the path in, and there are no shortcuts on the path.
I had expected that you were going to say something about disaster recovery. It seems like many times the best counter-argument to worst-case thinking is simply to point at the real disasters that do happen, and make sure we have the capability to respond to those. And as it turns out, if we can respond to disasters, we can probably respond better to worst-case scenarios.
Applying that to cybersecurity, we can look at things that we know do happen, such as worms/viruses (or the scanners for the same) hobbling entire companies for a day. It is easy to observe some of the things we can do to make systems more robust in the face of failures that we see. Apply those same notions to cybersecurity for a power plant (or a set of power plants), and you not only make it more difficult for the theoretical worst-case terrorist, but also make it easier for the organization to recover when something does go wrong, as we know it will.
When people advocate for defense against worst-case scenarios, I respond: "You can't protect against the worst case: effectively omnipotent aliens throw the Earth into the Sun, then chuck the Sun into a black hole for good measure."
It's important to focus on "reasonably possible high-impact" scenarios. Prove your worst-case scenario is more likely than mine if you want consideration. As Bruce suggested, change the discussion from "prove it can't happen" to "prove it's likely enough to matter".
I'll presumptuously indulge in a small exegetic effort to try to connect and organize a few strands of Bruce's thought.
The core organizing principle that informs Bruce's writings on security --- including this one --- is the cost-benefit analysis. This is the central tool for assessing risk and allocating risk-abatement resources. When he writes about worst-case thinking, or about externalities, he's writing about pathologies that distort the cost-benefit analyses that are essential for a reasoned approach to risk management.
Externalities, for example, are a pathology wherein costs and benefits become bifurcated, with some parties bearing costs while others receive benefits from each option line in the analysis. As a consequence, the analysis carried out by any party is not global, and is distorted by a partial reading of the global risk.
Similarly, in the worst-case thinking pathology, the probability distributions required to calculate the cost and benefit values to be entered in the table become highly distorted by emotional factors, leading to analyses wherein one option line, selected by fearful reasoning, improperly outweighs all the other options. This is the reason that it can be used to justify "any position or its opposite": If we can treat the probabilities as being utterly subjective and subordinated to our fears, different phobias will lead to different risk assessments, irrespective of any dispassionate quantitative reasoning or data.
In my view, this is the larger answer to claims such as "the counterexample to the critique of worst-case thinking is the unthinkable scenario of a blowout-preventer failure in the Gulf of Mexico". This is tantamount to singling out the wrong pathology. The right pathology to single out is the externality effect -- the diffused-responsibility coalition of BP and platform owners saw only benefits in poor disaster preparedness and only costs in meticulous preparation. Given the sadly deficient legal and regulatory environment in which they operate, it's hard to say that their choice wasn't rational, in the sociopathic, profit-maximizing sense in which this term applies to corporations. It's the _global_ cost-benefit analysis that got short shrift, because it was nobody's responsibility to compile it or to act upon it. It should have been the government's, but well...
As far as this essay is concerned, the point is that yes, there are worst cases, and yes, they bear thinking about, but only as part of a larger analysis, and only attended by honest probability assessments. Then they can be an entry in a table of risks that is an informative guide to policy. Absent those conditions, discussions of worst cases can only produce irrational policy distortions. This is the case that people who care about honesty in risk assessment should keep banging away at.
> There is a lot of money to be made in fearmongering
cf Al Gore
"A blast wave would have rocketed out in all directions at speeds of 12000 to 14000 ft. per sec"
I think "Kevin Barry" is over-egging the pudding.
More correctly it is a "boiling liquid expanding vapor explosion" (BLEVE), and BLEVEs very rarely propagate at anything close to the speed of sound, let alone thousands of feet a second.
One reason for this is a lack of oxidizer in the fuel mix. Another is a lack of energy capacity in the uncombusted fuel. Even with conventional high explosives, outside of a very small radius the blast wave propagates at close to the speed of sound, as can be seen on films of explosions. The reason for this is that air undergoes a state change when you try to compress it at the speed of sound, and all the blast energy gets trapped in the very, very narrow thickness of the wave front.
To get the propagation speeds he is alluding to requires complex and very specialised fuel/oxidizer mixes, as seen in high-end fuel-air explosives and some rocket engine designs.
Carlo: "In my view, this is the larger answer to claims such as "the counterexample to the critique of worst-case thinking is the unthinkable scenario of a blowout-preventer failure in the Gulf of Mexico". This is tantamount to singling out the wrong pathology. The right pathology to single out is the externality effect -- the diffused-responsibility coalition of BP and platform owners saw only benefits in poor disaster preparedness and only costs in meticulous preparation."
That's part of it -- but not all of it. It's not JUST that you have a problem with externalities and limitation of liabilities. Just as important is the problem that the probabilities themselves are almost impossible to calculate.
What is the distribution of blow-outs? How do they compound and depend on non-white noise external events? How do we actually do a risk assessment for a complex system that just won't sit still for the common distributions that we use?
Is a blowout inevitable over 20 years? How are the sizes of the blowouts distributed -- we know that the distribution is pretty damn funny since there's a wall of zero at one side?
There's a lot more to this than simple statements about externalities. That's a cheap shot, in that it avoids the deeper issues of risk assessments in the face of systems that don't 'fess up to simple probabilities.
You can't apply statistics lazily. A lot of risk assessment is pretty damn lazy (even if clever). Like I said, look at the application of Gaussian copulas to the mortgage derivatives market.
They did risk assessments. The problem wasn't externalities -- the problem was that the statistical assumptions were crap, and without those assumptions the math was intractable.
Math is hard, even for mathematicians. Plugging numbers into your handy-dandy statistical software and pretending to do a risk assessment is bullshit.
kangaroo, the fact that the probabilities are hard to assess does not absolve anyone from having to estimate them anyway. There is literally no alternative, if risk assessment is to be driven by anything other than the "bullshit" that you rightly decry.
If necessary, the relevant probabilities can be issued as bounds, rather than as point estimates, possibly even over-conservative bounds, representing as honestly as possible the extent of our ignorance. But to give up on this part because it's "too hard" is worse than laziness. It's simply dishonest. It's the royal road to bullshit.
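The suggestion of issuing probabilities as honest bounds rather than point estimates can be sketched in a few lines. This is an illustrative toy, not anything from the thread: the events, the independence assumption, and every number in it are invented for the example.

```python
# A tiny sketch of the idea above: carry each probability as an honest
# interval [lo, hi] rather than a point estimate, and propagate the
# bounds. All events and numbers here are invented for illustration;
# none are real failure rates.

def and_bounds(a, b):
    """Bounds on P(A and B), assuming independence, from interval inputs."""
    return (a[0] * b[0], a[1] * b[1])

# Annual probability of a blowout: honestly, somewhere between
# 1 in 10,000 and 1 in 100.
blowout = (1e-4, 1e-2)

# Probability that the blowout preventer fails when called upon:
bop_fails = (1e-3, 0.2)

lo, hi = and_bounds(blowout, bop_fails)
print(f"P(uncontrolled blowout per year) lies in [{lo:.1e}, {hi:.1e}]")
```

The resulting interval spans more than four orders of magnitude, and that width is itself the honest statement of ignorance that a point estimate would hide.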
I think some of the problems with this discussion might be simple semantics.
If the CEO of my company would ask me what my “nightmare scenario” was, I would happily talk about logic bombs, firewall bypass attacks, evil insiders, DNS poisoning, SSL compromises, targeted state-sponsored hacking attacks, EMP blasts, and ninjas. Then I would explain how unlikely some of this was, and why we shouldn’t spend more time and money on protecting ourselves from much of this.
I wouldn’t take him to task for asking the wrong question. There’s a difference between discussing worst-case scenarios and actually using them as a guideline to plan your defense.
As an auditor, my nightmare scenario is everything that should be examined that doesn't get examined because decision makers focus on both the petty (low impact) and the spectacular (low probability).
Unpredictable events happen. The modifier is correct: you cannot predict them. Trying to predict specific events is stupid, but you can predict that something unexpected will happen. So be ready: pay for rapid response teams with supplies useful in a wide variety of cases.
And somehow, put qualified people in charge. Among other things, they must be able to pass exams requiring knowledge of and the ability to apply the facts of chemistry, physics, and non-parametric and Bayesian statistics.
Unfortunately, such people rarely win elections, and most voters do not value those attributes and never even took the courses much less passed the exams.
You radical! I'm a former Internet security market PR consultant, now spending more time as a singer-songwriter. My sister calls me a hippie for this kind of thinking... but my version is the flip side. Love Can Change the World...
Excellent piece, Bruce. However, you simply must stop inserting logic into the equation. Everyone knows logic can play no part in the pretend reality that the fearmongers foment. If it weren't for worst-case thinking, the ignorant charlatans who predict our world is going to come to an end, somehow, some way, some day, by some form of terrorism wouldn't be able to justify their existence and the continuing waste of billions of dollars.
Of course, if BP, Transocean, and Halliburton had engaged in a bit of worst-case thinking, we might not be in the middle of an environmental disaster which may very well be close to a worst-case scenario. If our government had engaged in a bit of worst-case scenario spinning, it might have imposed some actual regulation on the oil-drilling industry instead of letting the industry "police" itself.
It's only a worst case scenario until it happens.
Then it's reality.
Carlo: "kangaroo, the fact that the probabilities are hard to assess does not absolve anyone from having to estimate them anyway. There is literally no alternative, if risk assessment is to be driven by anything other than the "bullshit" that you rightly decry.
If necessary, the relevant probabilities can be issued as bounds, rather than as point estimates, possibly even over-conservative bounds, representing as honestly as possible the extent of our ignorance. But to give up on this part because it's "too hard" is worse than laziness. It's simply dishonest. It's the royal road to bullshit."
No -- that's the problem. Claiming knowledge ("point estimates" "bounds") when, in fact, they aren't estimates or bounds at all because your assumptions are bad is the ultimate in bullshit.
This is like the uncertainty principle -- you have to be very, very careful when "estimating" the position of a particle as a fixed momentum/position pair. It's simply not that way -- and that "approximation" is a very, very bad mistake in cases where tunnelling is possible.
Some ignorance is inherent. Some systems aren't amenable to our current statistical methods. The distribution of hurricanes aren't independent -- treating it as so except on the longest time frames is a very big mistake, and claiming that it's okay "as a try" is scientifically dishonest.
We see a lot of that in scientific fields. Folks half-assing statistics, then claiming knowledge when there is none. That's much worse than facing ignorance directly -- you forget your ignorance when you use incorrect estimates.
A CORRECT estimate requires knowledge of the error in the estimate. But if you can't do the statistics because it's not tractable, you can't know your error, in which case any estimate is simply bullshitting -- it's in essence a LIE.
So in many cases you have no bounds, you don't have anything you can call an "estimate" and being conservative gains you nothing -- if your "conservative" estimate is say assuming independence and your system isn't independent, then "conservative" is just as WRONG as "radical".
A Gaussian distribution isn't an "approximation" of a Cauchy. Claiming so is just self-delusion -- and I'd rather be honestly ignorant than self and other deluded.
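The Gaussian-versus-Cauchy point is easy to demonstrate numerically. The following sketch is mine, not the commenter's, and the trimming procedure is a deliberately lazy strawman: it fits a Gaussian to heavy-tailed Cauchy data and compares tail predictions.

```python
import math
import random

random.seed(42)

# Draw heavy-tailed data: standard Cauchy samples via inverse transform.
samples = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(100_000)]

# A deliberately lazy analyst discards the extreme 10% as "outliers"
# and fits a Gaussian to what remains.
trimmed = sorted(samples)[5_000:-5_000]
mean = sum(trimmed) / len(trimmed)
std = math.sqrt(sum((x - mean) ** 2 for x in trimmed) / len(trimmed))

# How often does the full data set exceed 20 fitted standard deviations?
threshold = mean + 20 * std
empirical = sum(1 for x in samples if x > threshold) / len(samples)

# The fitted Gaussian says a 20-sigma event is essentially impossible.
gaussian_pred = 0.5 * math.erfc(20 / math.sqrt(2))

print(f"empirical tail probability: {empirical:.4f}")
print(f"Gaussian-model prediction:  {gaussian_pred:.2e}")
```

The "fitted" model calls a roughly one-in-a-hundred event essentially impossible, which is exactly the self-delusion the comment describes.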
In decades past, psychology studies were undertaken concerning persons who had lost their time perceptions:
Condition, and what I may recall of the observations:
- Loss of memory of past events: they tended to act as innocent optimists.
- Inability to project future events: they tended not to calibrate their actions, but acted in extreme fashions.
- Inability to process the present time arrow, or the flow of present time: they tended to act with confusion and fear, and to lose track of the existence, continuity, and connection of objects and relationships.
Each of these produced certain pathological mis-perception biases.
My point is that the voting public, dazzled as it is by news fragments and media multi-sources, is in the situation of something like an ADD patient, often beset by the above experiential conditions with respect to differing news and security issues, rendering them often dependent on apparent experts. The voice of continuity and connection must be presented to counter the pathological effects for a majority of people. [Berkeley Apparent Majority Decision Study]
I have 3 issues with people talking about "worst cases".
First, if you look at real world examples of "worst cases" involving use of technology, almost invariably you find that the final disaster is at the end of cascading series of small errors and problems. Relatively minor preventative measures correctly applied would stop the cascade and the final "worst case". But these "worst casers" only talk about mitigating the final disaster, ignoring the earlier, simpler, cheaper options. Granted, this cascade of errors doesn't apply to natural disasters like hurricanes, and earthquakes. That is, beyond the initial decision to live in certain disaster prone locations and not having and enforcing adequate building standards to mitigate the specific threat.
Second, too many intelligent people are publicizing relatively easy to implement scenarios for the "terrorists" to pick up on and modify. "Blow ****** up, and NY has to close down", "Put this type of undetectable poison in that place".
Third, the "worst cases" are rarely as bad as the real world disasters that have happened:
- the Titanic was "unsinkable".
- ferry boats allowed to be so badly overloaded they would capsize.
- 4 passenger aircraft highjacked on the same day and crashed into buildings.
- emergency response so badly managed that people had to live in a US stadium for a week before relief reached them
- 1/4 million people dying in a Christmas tidal wave
The corollary to this is that often persons only think with remembered real or pseudo-virtual experiences, instead of integrating present technological facts and information, because the current facts and information relationships are so new and changing so fast.
[I offer my e-spectrum in this regard for consideration, as is useful]:
a - those who think with only past memory - - - - twipme
b - think with present observation - - - - twipro
c - think with calculated future probabilities - - - - twicafupro
d - think with uncalculated future probabilities - - - - twiuncafpro
e - think with calculated future possibilities - - - - twicafuposs
f - think with uncalculated future possibilities - - - - twicuncafposs
g - think with all inputs to perception - - - - twiapercepts
It is as helpful as
1) source's proximity to material,
2) agreement with other sources,
3) record of reliability of source,
4) degree of contact or separation from experience reported,
5) content evidence for past or current referenced generations of focus.
And then imagining a "worst-case scenario" leads people to the impression that they can imagine worse scenarios than their enemy (be that Mother Nature, a terrorist, or pure bad luck), which, as everybody should know, is absolutely wrong. Things will always go worse than anyone can imagine. So what's the point of so-called "worst-case scenarios" then?
Bruce, you are one hell of a man.
[Obviously, failures of oil rigs don't fit the distributions they've used to do risk assessments, now do they?]
They do when corners are cut (as we are beginning to see). When the MMS relies 100% on the industry's self-assessment of maintenance and equipment standards, that 3-5% quoted as the probability of a catastrophic failure suddenly gets much larger.
The practical problem behind this discussion is that we have yet to find an effective method for developing risk scenarios, scenarios that people feel invested in and want to find a solution for - scenarios that management understand and see where resources need to be focused. The Attack Trees that Bruce and others have suggested are a logical approach, but too difficult for the average person to understand. Without an intuitive process to follow, people just jump to conclusions - the world will come to an end tomorrow - how do we prevent it.
What we need is a scenario process that encourages people to look at the little things that could go wrong and, when combined together, could culminate in a major disaster. Take the aborted Times Square terrorist attack, last week's stock crash, or the BP oil platform disaster. There were a host of small missteps in these examples, any one of which, had it been avoided, would have prevented the event. There needs to be a scenario process that leads people through all the small vulnerabilities and mistakes that can culminate in a significant risk, and shows them the reasonable steps they could take to prevent a future disaster.
The point is not to focus on preventing the end of life as we know it. The unknown unknowns will always catch up with us. The point is that the small things count and, if we don't pay attention to them, we only increase the risk of a major calamity.
Can anyone suggest a way that we can develop a risk scenario process that management will understand and will lead to a better allocation of resources to address risk?
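One answer to the question above is to make the scenario structure explicit and small enough to read. Here is a minimal sketch of the attack-tree idea mentioned earlier; the node names and probabilities are invented for illustration, and a real assessment would need honest (ideally bounded) inputs rather than these made-up point values.

```python
from dataclasses import dataclass, field
from typing import List

# A minimal attack-tree sketch. Node names and probabilities are
# invented for illustration. AND nodes require every child step to
# occur; OR nodes require at least one.

@dataclass
class Node:
    name: str
    kind: str = "leaf"            # "leaf", "and", or "or"
    p: float = 0.0                # leaf probability that the step occurs
    children: List["Node"] = field(default_factory=list)

def success_probability(node: Node) -> float:
    if node.kind == "leaf":
        return node.p
    child_ps = [success_probability(c) for c in node.children]
    if node.kind == "and":        # all steps occur (assumed independent)
        result = 1.0
        for cp in child_ps:
            result *= cp
        return result
    result = 1.0                  # "or": at least one path occurs
    for cp in child_ps:
        result *= 1.0 - cp
    return 1.0 - result

# Hypothetical disaster requiring a cascade of small missteps:
disaster = Node("major disaster", "and", children=[
    Node("maintenance skipped", p=0.3),
    Node("alarm ignored", p=0.2),
    Node("backup system offline", p=0.1),
])

p = success_probability(disaster)
print(f"P(disaster) = {p:.3f}")   # 0.3 * 0.2 * 0.1 = 0.006
```

An AND node makes the "cascade of small missteps" visible to management: preventing any single leaf drives the whole product toward zero, which is exactly the resource-allocation story the comment asks for.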
WRT the oil spill:
What I find most interesting is the complex legal relationship between the parties. Who all effectively work for BP on a BP project.
I wonder if this "shared responsibility" relationship is not in fact the legal realization of BP's "worst case" planning.
It'll be interesting to see just how legally impregnable this little charade actually is. If nothing else it appears to delay any immediate liability proceedings against BP, so that in itself has an NPV to BP.
Remember, Exxon only recently settled the claims for the Exxon Valdez, so the liability proceedings for this little disaster could still be working their way through the courts in 2030.
Let's put your point on worst-case thinking into action. Let's say we go about our ordinary lives and decide to invest in a smart electric grid and pervasive computing. First, let's consider a worst-case scenario: the Sun emits a huge burst of radiation that fries all electronics in North America (and potentially the people too). Holy shit! That would totally wreck all these plans. Let's just shelve electronics altogether and do a purely mechanical society. Yep, worst case thinking certainly gets us farther, huh? ;)
I wonder how to reconcile the fact that Chinese society expects its leaders to keep it safe, yet its people aren't exposed to all of the sensationalist news reporting to which Western society is exposed. Both censored news and a "free" news system cause the populace to expect mama government to protect us from the bad guys.
Even worse, it plays directly into the hands of terrorists, creating a population that is easily terrorized -- even by failed terrorist attacks like the Christmas Day underwear bomber and the Times Square SUV bomber.
Is this a bad thing? Assuming terrorists want to terrorize us (which by definition they do), if we are terrorized even by non-deadly "attacks," then terrorists have little motivation to actually take the time to make sure their attacks are likely to work.
Perhaps our chicken-little attitude to terrorism actually does keep us safe!
That's not entirely true. Chinese media has its own sensationalism. Don't you recall the HIV/AIDS needle terrorism scare around the time of the Xinjiang riots? Or the recent fear of school stabbings in the news right now?
"Both censoring news and a "free" news system causes the populace to expect mama government to protect us from the bad guys."
Well, discounting the hyperbole, it is not an unfair expectation, since we pay for the legal system and law enforcement.
The Rumsfeld quote may have been a re-hash of something Confucius said in the Analects. H.D. Thoreau quotes something very similar in Walden. But which was Secretary Rumsfeld reading - the arch-conservative Confucius or the ultra-independent Thoreau?
I skipped to the end so my apologies if this has been stated already:
Isn't this an ironic example of "worst-case thinking about worst-case thinking"?
Consider Pascal's wager (high-cost, low-probability decisions) as it relates to things other than belief in the supernatural.
If you can do something to prevent a nuclear meltdown in downtown New York City, don't you think you should? Rather than just cite how improbable it is?
Saw a presentation by Peter Evans of GE Energy Global Strategy and Planning at MIT. He talked about a scenario planning exercise they are using and two possibilities, Slow Brown and Fast Green. Slow Brown is the status quo and Fast Green ain't that green and ain't that fast. Within his thinking was no possibility that things can get worse.
As Evans is formerly of Cambridge Energy Research Associates, I asked an acquaintance who still works there about Evans and his work. She said that CERA never includes a worst-case scenario, which is why their clients hire them.
That should please you, but it worries the hell out of me, especially since neither GE Energy nor CERA seems to believe either that the world energy situation can deteriorate or that climate change can continue to exceed our predictions.
We can criticize people for bad judgment but issues as a whole are now approaching us too quickly for us to compute them.
We're becoming more aware of the fragility of our systems as a species, and we also seem to be facing more and more big threats more often (for some reason).
There's an old joke about 'things ducks are afraid of'. Ducks are afraid of very small fast things ( bullets ) and very large slow things ( people ). They're not afraid of very small very slow things, and there's no point in being afraid of very large very fast things...
The potential scope of all the issues on the table and their associated velocity is exceeding our capacity to think rationally. Like ducks we are literally too stupid to be able to see through this kind of 'fog of war'.
For example some people are talking about the BP disaster as an extinction level event... it's fear mongering in the most extreme case... http://www.truthout.org/things-fall-apart59152 ... and it can be persuasive to some... fear mongering puts dinner on the table for others.
To discard these threats we need something new - we need to combat each fear as fast as it arises - and to deal with this "calculus of suffering" where people are taking the astronomically huge populations now on earth and the magnifying of risk and making scary noises with this data.
We actually probably need some kind of crude digital model to simulate holistic outcomes of whole systems: simulating forward in time the effect of an oil spill on a watershed, or an economic policy shift, or any of a number of other large-scale possibilities. There's some new work started up here that might be relevant: http://www.technologyreview.com/blog/arxiv/... .
Maybe this is a matter of the semantics of "worst case", but I can't help thinking you're wrong. From the standpoint of manufacturing/QC/reliability, if you don't check for the worst case, it'll bite you in the ass.
I've worked on products where mechanical engineers didn't calculate worst case tolerance build up on parts and have ended up with products that couldn't be assembled because of it. When electronics engineers specify a 5% or 10% resistor, or look at circuit timing, the good ones stop and figure out if the circuitry will still work at the extremes. When a product is spec'd to operate from -40C to 85C, you test it at both temperatures (and some in between). You find things that fail that way and fix them before your product or your customer dies.
I don't think "real life" is that different - coal mines explode or cave-in, O-rings don't seal, and blowout preventers fail. Some things - like Hawking's concern about aliens - just seem silly, and not everything can be hazard free. Shit happens. But I don't know that there's a hard and fast rule to apply. BP probably did cost-benefit analysis. If Ford knew Pintos could explode, they probably let it happen because of the way cost-benefit analysis worked out.
Murphy's law is one of the few irrefutable principles of the universe. The complete law is "if anything can go wrong, it will, and in the worst possible way at the worst possible time". And as has been pointed out, Murphy was an optimist.
Interesting that we still await comments on the dialectics of
the Best of All Possible Outcomes, often used to shape public perceptions as powerfully as the Worst Possible case.
Then there is the Strangest of All Possible Outcomes, the Most Vivid,
the Most Harmful, and McNamara's favorite, the Most Likely, etc.
Your blog prompted me to look up the definitions of terror and terrorism on Wikipedia, which I consider authoritative on conventional wisdom. Terror is fear of a future event. Terrorism is "a policy or act intended to intimidate or cause terror, usually for the furtherance of ideological goals." A terrorist is a person who engages in terrorism. It follows, in my thinking, that the security industry is populated by terrorists, and that there is a great deal of terrorism happening as we write on the issue. It is not at all rare. But the perps are mostly not Muslim. If we focus on the Muslim terrorists, we miss at least 99% of the problem.
A lot of this must be contextual. I cannot BE terrorized, so 'terrorists' and 'terrorism' hold little meaning for me.
My 'worst case' scenario is that the economy will crash, completely and without warning. It may do so naturally; or a 1.3 megaton nuke burst 400 miles above Kansas would take out every power grid and vehicle in America and there would be no way to recover.
A dozen years ago I moved to a place where I could be (and am) completely self-sufficient..not in anticipation of such an event, but I wanted to see if I could become a subsistence farmer. As a secondary benefit, I am completely unaffected by...well, by almost anything.
So I live in rural Hawaii on a 1-acre farm. I am on the grid and Internet, but I don't have to be. Other than for the sake of convenience, I can use solar and a windmill for electricity. I love this life.
When you put the cops and the military on a pedestal because you are afraid... you end up with a police state... where the aforementioned mostly call the shots. The US has yet to add two plus two and admit its part in the "cause" of 9/11. We are basically duplicating the situation of Israel vs. Islam... and going halfway around the world to do it... when we don't need to... and how smart is that? Of course there's always that oil... and again... how smart is that?
The worst-case scenario that I see is that the US never grows up... just like Peter Pan.
The worst case scenario has already occurred - Adam ate the apple - so why bother considering any other case other than at an appropriate level of risk mitigation that is cost (in all its meaning) effective?
...but... hey, this essay is "worst-case thinking ON worst-case thinking"!:-)
In other words: the essay is good, but a little unbalanced. I agree on comment by stijn (+-3rd from top), that the essay does not state WHEN wct is bad.
Excellent article. As it covers much of what we see every day in the media (about all kinds of 'bad things' ) it should be 'boiled down' and taught in schools.
The person who postulated that was no expert. There are cases from Iraq where our soldiers entering a room got into firefights and bullets hit those five-gallon propane cooking-gas bottles and exploded them; the container flies away and a plume of fire comes out, but soldiers in the room were not severely injured and continued the firefight.
The correct name for this logical fallacy is "catastrophising", and it is a logical fallacy because you act on what you don't know, not on what you cannot know.
Locally, the police on a noise call invaded a disturbed woman's house and gunned her down because she had a knife in her house and they postulated that she might be a danger to someone else there. She retreated to the bedroom, where she received the ad hoc death penalty; she lived alone, and there was no one else there. The catastrophising was just the cover-up story, and the public bought it, with some of the usual demonizing of the victim. Google "Barbara Schneider MPD".
The plane/volcano incident was north of Australia, and the engines restarted when the plane descended beneath the level of the ash cloud.
Locally, the new public library has a four-story glass wall leaning outward over the main entrance. I hope you won't see this architecture in earthquake fault zones; that glass would be deadly to people running out of the building.
Thrivent Financial has a leaning glass wall, but it leans toward the roof, not away from it. If there were a temblor that could break it, the result would probably only be one floor of glass falling inside on each floor, away from entrances. And it makes interesting noises when ice comes sliding down the glass wall. Minnesota does get minor temblors, but the chance of this glass falling onto people exiting is like the chance of me winning the Powerball.
The Times Square "fougasse" would not have worked unless someone who knew what they were doing had removed the flash powder from all the firecrackers and combined it into a larger firecracker (each M80 has the equivalent of an aspirin's worth of flash powder), then placed the bigger cracker under the gas cans, if the cans were plastic; it would be less likely to initiate if the cans were metal jerrycans.
A real fougasse is a block of C4 under a 55-gallon drum, sandbagged in place, such as were built into the berm around the Di An base where Nixon once visited the troops in '69. But those troops were junior officers in enlisted men's uniforms, because the Army wouldn't trust the real expendables.
"If we allow flights near Iceland's volcanic ash, planes will crash and people will die. If we don't, organs won’t arrive in time for transplant operations and people will die."
We can use remote-controlled airplanes... without humans inside, just the things they have to transport... so nobody will die even when the airplane crashes because of the volcanic ash.
Thanks for this great post and your thoughts about worst-case thinking. This seems to me the way that both journalism and religion (my two fields of work) have worked: go to the extremes, avoid middle grounds. My question in both has always been how one counters a message of fear (filling ignorance with imagination, what a great line!) when the fear is so dang attractive. Anyway, I'm linking to this from our site, so thanks! I'm hoping it'll stir some good conversations for us.
I have a comment on your essay on worst-case thinking. There is a body of thought that recommends to people who have a free-floating anxiety that fixes on a specific worry, to handle the situation by considering a worst-case scenario about that worry. The aim is to "put a fence" around the worry and not let it exist and grow to a point where the worry is the worry, and that concern knows no boundaries.
The procedure is to say, "You're worried about this? Think of a worst-case scenario. What is the cost to you if the worst were to take place? OK, now you know what the worst is; begin to think of how you can avoid or mitigate the problem."
Does your analysis apply to the calculation of existential risk?
While I agree with your logic, I have seen this reasoning mis-used as a way to avoid taking prudent steps to be prepared for highly probable events. As an example, I have wondered if this type of thinking hindered vigilant maintenance or contingency plans for the BP oil spill in the Gulf of Mexico. I am sure BP execs felt this scenario would never happen or had an extremely low probability which in turn may have constrained resources and budget needed to be adequately prepared.
Excellent post, Bruce, but most of all, thank you for using "begs the question" correctly!
Now all you have to do is teach the BBC how to use it correctly.
@ James at May 14, 2010 12:40 PM
"If you can do something to prevent the nuclear meltdown in downtown New York City, don't you think you should? Rather then just cite how improbable it is?"
Well yes, but the question becomes where do you draw the line? "Something" is meaningless - for example having no nuclear plants in downtown NYC is doing something to prevent a meltdown. Why should anything else be done?
Fearmongering is a great way to extract endless budgets and keep people in line, but that's about it.
I have a plan that will reduce the risk of a nuclear meltdown in NYC by 99.9%, but it will cost every single citizen of the USA $5,000 per year for the next 50 years. Should we do it? If not, why not?
(Knowing the specifics of the plan is irrelevant, as it *will* reduce the risk; you don't need to know how this will happen.)
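Taking the rhetorical plan at face value, the cost-benefit arithmetic fits in a few lines. The $5,000-per-citizen-per-year figure is from the comment; every other number (population, baseline meltdown probability, damage estimate) is an assumption invented purely for illustration.

```python
# Back-of-the-envelope cost-benefit check for the hypothetical plan
# above. The $5,000-per-citizen-per-year cost is from the comment;
# every other number is an assumption invented for illustration.

population = 300_000_000              # rough US population
annual_cost = 5_000 * population      # $1.5 trillion per year

p_meltdown_per_year = 1e-5            # assumed baseline annual probability
risk_reduction = 0.999                # the plan's claimed 99.9% reduction
damage = 1e12                         # assumed loss if a meltdown occurs

expected_loss_avoided = p_meltdown_per_year * risk_reduction * damage

print(f"annual cost:           ${annual_cost:,.0f}")
print(f"expected loss avoided: ${expected_loss_avoided:,.0f}")
print("worth it:", expected_loss_avoided > annual_cost)
```

Under these invented assumptions the plan costs about $1.5 trillion a year to avoid roughly $10 million a year of expected loss, which is why "it *will* reduce the risk" cannot by itself justify a measure.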
Am I the only one who kept thinking about Dr. Strangelove as I read this essay?
[after learning of the Doomsday Machine]
President Merkin Muffley: But this is absolute madness, Ambassador! Why should you *build* such a thing?
Ambassador de Sadesky: There were those of us who fought against it, but in the end we could not keep up with the expense involved in the arms race, the space race, and the peace race. At the same time our people grumbled for more nylons and washing machines. Our doomsday scheme cost us just a small fraction of what we had been spending on defense in a single year. The deciding factor was when we learned that your country was working along similar lines, and we were afraid of a doomsday gap.
President Merkin Muffley: This is preposterous. I've never approved of anything like that.
Ambassador de Sadesky: Our source was the New York Times.
Perhaps part of what this article should point out to us is that planning and preparation are not the same thing. It certainly makes sense to plan for worst-case scenarios (who is in charge of whom? who is responsible for what?). Nothing is lost but man-hours and paper.
However, when it comes to preparation, I think the comet example is apt. Given how high the impact would be, the risk would be extremely high no matter what the probability. Does it really make sense to spend billions of dollars producing shelters and hoarding resources to prepare for life after impact? Or building kill vehicles or solar sails or a really really really big bomb in the attempt to deflect said comet?
This does not make disaster planning any simpler. We must still weigh benefits and costs of actions in the face of probability and impact. The lesson to be learned is that there are many tools available to us and we must be just as meticulous in weighing our solutions as we are in weighing the problem.
Nuclear weapons ARE worse than conventional weapons. They have long-term consequences that ordinary explosives or something like a vacuum bomb don't even approach. They practically guarantee worst-case scenarios, and because of the fear attached to them they guarantee massive retaliation. They render land unusable for decades, and they kill not just people in the blast radius, but everybody downwind of the target. They have the capacity for rapid distribution of the nastiest chemical weapons, except it lasts hundreds of times longer. And to make it worse, we've got military R&D developing bombs that have less explosive power than conventional nuclear weapons but MORE long-term radiation hazard.
Otherwise, good article.
The excitement of considering extreme but improbable risks can be intoxicating and even addictive to some. But it can lead us on a never-ending spiral that brings us no closer to the answer of how best to allocate resources to reasonably mitigate risk / expected loss. Sadly, what usually follows are poor outcomes and bad decisions.
I propose that we eliminate the phrase "worst case scenario" from our vocabularies (OK, just decommission it - it's cheaper) and substitute "bad enough situations". What I mean is that improbable, unexpected events will happen. Unforeseen consequences will materialize out of nowhere, and dynamic impacts will cascade like dominos. And they will be "bad enough" to be called or perceived as disasters. But the key is to use this information in a positive way.
In the context of disaster planning and preparedness, "bad enough" thinking helps us shift our focus from particular disaster scenarios and their likelihood to the dependencies that support our activities or operations. "Bad enough" tells us how long we can be without these dependencies, and then moves us to figuring out and testing work-arounds. In my opinion this should result in a "good enough" plan, the Holy Grail of disaster planning. I've heard various EMA and health officials refer to taking an "all hazards approach" when preparing for disasters. Seems quite reasonable to me.
"Bad enough" thinking is a much less depressing and exhausting exercise than flexing our worst-case imaginations. Besides, the only real benefit of worst-case thinking is that it can be used as empathy - "You think it's bad now? Well, it could be worse!"
Coming in very late on this...but a lot of worst case scenarios have already been looked at, in science fiction. Laugh if you will, but one I recall was making all the traffic lights in the country turn green at the same time. The story was written back in the 60s, when such a thing would not have been possible, because the lights were (I presume) all on analog timers. I'm not sure that it's impossible (at least for big cities) now.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.