Imagining Threats

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it “embarrassing.” I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats—which contributes to overall fear and overestimation of the risks. And that doesn’t help keep us safe at all.

Recently, I read a paper by Magne Jørgensen that provides some insight into why this is so. Titled “More Risk Analysis Can Lead to Increased Over-Optimism and Over-Confidence,” the paper isn’t about terrorism at all. It’s about software projects.

Most software development project plans are overly optimistic, and most planners are overconfident about their overoptimistic plans. Jørgensen studied how risk analysis affected this. He conducted four separate experiments on software engineers, and concluded (though there are lots of caveats in the paper, and more research needs to be done) that performing more risk analysis can make engineers more overoptimistic instead of more realistic.

Potential explanations all come from behavioral economics: cognitive biases that affect how we think and make decisions. (I’ve written about some of these biases and how they affect security decisions, and there’s a great book on the topic as well.)

First, there’s a control bias. We tend to underestimate risks in situations where we are in control, and overestimate risks in situations where we are not in control. Driving versus flying is a common example. This bias becomes stronger with familiarity, involvement, and a desire to experience control, all of which increase with increased risk analysis. So the more risk analysis, the greater the control bias, and the greater the underestimation of risk.

The second explanation is the availability heuristic. Basically, we judge the importance or likelihood of something happening by the ease of bringing instances of that thing to mind. So we tend to overestimate the probability of a rare risk that is seen in a news headline, because it is so easy to imagine. Likewise, we underestimate the probability of things occurring that don’t happen to be in the news.

A corollary of this phenomenon is that, if we’re asked to think about a series of things, we overestimate the probability of the last thing thought about because it’s more easily remembered.

According to Jørgensen’s reasoning, people tend to do software risk analysis by thinking of the severe risks first, and then the more manageable risks. So the more risk analysis that’s done, the less severe the last risk imagined, and thus the greater the underestimation of the total risk.

The third explanation is similar: the peak-end rule. When thinking about a total experience, people tend to place too much weight on the last part of the experience. In one experiment, people had to hold their hands under cold water for one minute. Then, they had to hold their hands under cold water for one minute again, then keep their hands in the water for an additional 30 seconds while the temperature was gradually raised. When asked about it afterwards, most people preferred the second option to the first, even though the second had more total discomfort. (An intrusive medical device was redesigned along these lines, resulting in a longer period of discomfort but a relatively comfortable final few seconds. People liked it a lot better.) This means, like the second explanation, that the least severe last risk imagined gets greater weight than it deserves.
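To make the peak-end idea concrete, here is a minimal numeric sketch in Python. The discomfort scores are invented for illustration; they are not data from the actual experiment.

    # Invented discomfort scores (0 = comfortable, 10 = very cold);
    # one value per second, purely illustrative.
    trial_a = [8] * 60                                         # 60 s of cold water
    trial_b = [8] * 60 + [8 - 0.2 * i for i in range(1, 31)]   # plus 30 s while warming

    def total_discomfort(scores):
        # the "objective" total: sum of moment-by-moment discomfort
        return sum(scores)

    def peak_end(scores):
        # retrospective judgment approximated as the average of the
        # worst moment and the final moment (the peak-end rule)
        return (max(scores) + scores[-1]) / 2

    for name, trial in [("one minute", trial_a), ("minute + 30 s warming", trial_b)]:
        print(name, "total:", round(total_discomfort(trial), 1),
              "peak-end:", round(peak_end(trial), 1))
    # The longer trial has more total discomfort but a milder peak-end
    # score, so it is remembered as the better experience.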

Fascinating stuff. But these same biases produce the reverse effect when it comes to movie-plot threats. The more you think about far-fetched terrorism possibilities, the more outlandish and scary they become, and the less control you think you have. This causes us to overestimate the risks.

Think about this in the context of terrorism. If you’re asked to come up with threats, you’ll think of the significant ones first. If you’re pushed to find more, if you hire science-fiction writers to dream them up, you’ll quickly get into the low-probability movie-plot threats. But since they’re the last ones generated, they’re more available. (They’re also more vivid—science fiction writers are good at that—which also leads us to overestimate their probability.) They also suggest we’re even less in control of the situation than we believed. Spending too much time imagining disaster scenarios leads people to overestimate the risks of disaster.

I’m sure there’s also an anchoring effect in operation. This is another cognitive bias, where people’s numerical estimates of things are affected by numbers they’ve most recently thought about, even random ones. People who are given a list of three risks will think the total number of risks is lower than people who are given a list of 12 risks. So if the science fiction writers come up with 137 risks, people will believe that the number of risks is higher than they otherwise would—even if they recognize the 137 number is absurd.

Jørgensen does not believe risk analysis is useless in software projects, and I don’t believe scenario brainstorming is useless in counterterrorism. Both can lead to new insights and, as a result, a more intelligent analysis of both specific risks and general risk. But an over-reliance on either can be detrimental.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.

This essay originally appeared on Wired.com.

Posted on June 19, 2009 at 6:49 AM

Comments

JRR June 19, 2009 9:02 AM

I’ve said it before; if nobody in the FBI/etc imagined a 9/11 scenario before 2001, they must be the most unimaginative bunch I’ve ever not met. I’d been in many discussions prior to that where we were just casually chatting about hijacking, and suicide bombing, and many people then came up with the idea of “well, once you’re in control of a large airplane, use it as a terror weapon. You’ve got a guaranteed couple hundred casualties on board, which is many times more than most suicide missions get, plus, crash the SOB into a crowd at a football game, or an amusement park, or an office building.”

The only thing that changed on 9/11 was our perception of invulnerability; things aren’t any more dangerous now than they were in 2000, we’re just not as naive about the risks (though I think we’ve gone too far the other way; as a group, Americans seem to be far too risk-averse, willing to trade almost anything to avoid a minuscule chance of harm, and we worry about stupid stuff and not about likely stuff).

bob June 19, 2009 9:04 AM

I think it’s a good idea – they should have these brainstorming sessions, generate 1,000s of possible attacks –>BUT<– then implement ONLY those responses (programs, processes, tools, tactics) which would apply to -ALL- of them.
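Taken literally, bob’s filter is a set intersection over the countermeasures each scenario would call for. A minimal sketch, with hypothetical scenarios and responses:

    # Hypothetical scenarios mapped to the responses that would help against
    # each; the names are illustrative, not a real threat catalogue.
    scenarios = {
        "hijacked airliner":   {"hardened cockpit doors", "intelligence sharing", "emergency drills"},
        "subway bombing":      {"intelligence sharing", "emergency drills", "first-responder training"},
        "port container bomb": {"intelligence sharing", "cargo screening", "emergency drills"},
    }

    # bob's rule: keep only the responses that apply to ALL of the scenarios
    common = set.intersection(*scenarios.values())
    print(sorted(common))   # ['emergency drills', 'intelligence sharing']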

BF Skinner June 19, 2009 9:32 AM

Science of Fear IS a good read. Odd that everything that scares us is really inside our brain.

@bob “apply to -ALL- of them”
like what?
Be careful out there.
Look both ways before crossing the street.
Always wear clean underwear.

po134 June 19, 2009 9:39 AM

Why does it say that experiment #2 (1 min cold + 30 sec with the temperature rising) is worse than #1, which lasted for 1 min?

Great post though, I love learning how people think; just this little “we remember the last thing more than the rest” goes a long way in everything in life when you have to do something boring. That’s what, I think, differentiates good and bad teachers in high school.

Keep these kinds of posts coming (talking about how we think ^^), I love them 🙂

BF Skinner June 19, 2009 9:39 AM

@ JRR “unimaginative”

Well you can stand relieved JRR. There was enough imagination. That’s why field agents got REAL worried about people learning to fly and not to land. And why they wanted to search Moussaoui’s laptop.

The Feebies’ breakdown was in command, control, communication, and information. (They still haven’t licked the last one.)

casey June 19, 2009 9:44 AM

If risk analysis can lead to over-optimistic overconfidence, then a risk assessment of risk assessment needs to be done. The paper concludes that a sort of meta-risk analysis should include the risk analysis itself and other effort-based factors to determine the outcome. It seems to me that what is being asked for is better risk analysis.

There is a format of article that I notice where something is described as having a list of bad attributes and then the article concludes with the thing in question being let off the hook. My takeaway from reading this is that risk analysis can lead the unwary to make bad decisions. Since I deal with the unwary regularly, I am not sure what to do.

kashmarek June 19, 2009 9:45 AM

Since crash-landing that plane in the Hudson River produced no deaths, and the crew handled the situation magnificently, it seems to follow that flyers don’t fear that scenario nearly as much as before. In other words, the risk of a goose hit isn’t all that bad (as long as you have the same pilot and crew plus a clear daylight landing in the Hudson).

kangaroo June 19, 2009 10:12 AM

Well, there’s a simple rule in engineering:
a project will always take 3 times as long as you predict.

If you include that factor in your prediction, the project will take 3 times as long as that latter prediction.

Recur until bankrupt.
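A tongue-in-cheek sketch of kangaroo’s rule with a made-up starting estimate; it just shows the 3^n blow-up you get by folding the correction back into the prediction.

    estimate_weeks = 4.0              # hypothetical initial estimate
    for round_no in range(1, 6):      # each application of the rule
        estimate_weeks *= 3           # the project takes 3x the latest prediction
        print(f"round {round_no}: {estimate_weeks:.0f} weeks")
    # grows as 3^n times the original guess -- "recur until bankrupt"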

stevelaudig June 19, 2009 10:24 AM

So, chairs are the most dangerous ladders in the world? But the apes just love their chair/ladders.

Karl Lembke June 19, 2009 11:08 AM

Another problem with risk analysis is that no matter how well you quantify the risk factors you uncover, you can’t quantify the ones you overlook. If you find ten failure modes, and engineer them down to one in a million, your risk of failure may be one in ten, thanks to the failure mode that got overlooked.
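A quick back-of-the-envelope sketch of this point, using invented probabilities: driving ten known failure modes down to one in a million barely matters if an eleventh, overlooked mode has a 10% chance of firing.

    # Invented numbers, for illustration only.
    known_modes = [1e-6] * 10    # ten analysed failure modes, one-in-a-million each
    overlooked = 0.1             # one failure mode that never made the list

    def failure_probability(modes):
        # probability that at least one independent failure mode fires
        p_all_ok = 1.0
        for p in modes:
            p_all_ok *= (1 - p)
        return 1 - p_all_ok

    print(failure_probability(known_modes))                 # ~1e-5: looks great
    print(failure_probability(known_modes + [overlooked]))  # ~0.1: dominated by the overlooked mode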

And I must say, I find it interesting that the number of risks suggested for the science fiction writers to find is the reciprocal of the fine structure constant. Was that on purpose?

Harry June 19, 2009 11:30 AM

Another superb book on the subject of cognitive biases is “The Psychology of Intelligence Analysis” by Richards Heuer, available for free at http://www.cia.gov (no kidding), copyright 1999. It should be titled “The Psychology of Analysis” because what Mr. Heuer writes is broadly applicable.

The most important item in the book to me was that you can’t unring the bell. You cannot overcome hindsight. Here’s a direct quote from the book:

“Overseers of intelligence production who conduct postmortem analyses of an intelligence failure normally judge that events were more readily foreseeable than was in fact the case.”

https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/index.html

Davi Ottenheimer June 19, 2009 12:06 PM

Very nicely written! It’s an important message…up to the point about “over-reliance on either can be detrimental”.

This sounds a lot like “too much of anything is bad for you”. Also a good message, but can you also make these statements actionable?

The obstacle (or solution, if we want a positive spin) that could be addressed in your message is how to recognize the point of over-reliance. When is too much? What’s a stop sign? Perhaps this is like asking how to recognize a political extreme, or even an addiction, but it’s the next logical step to your assertions here.

Clive Robinson June 19, 2009 12:30 PM

The big problem I have seen with people doing risk analysis is not “wild imagination” or “inability to assign realistic numbers”; it’s a lot more basic than that.

Put simply, you should first ask “why am I doing this?” If you can’t answer that realistically, then it’s going to be a GIGO effort.

The second and more subtle question is “Am I being defensive or offensive?” This is actually a very big issue.

When you are being offensive you are on the attack and you are examining “specific threats”. However, when you are being defensive you are not dealing with specific attacks against you but with broad classes of attack.

This really should affect the way you go about your analysis and subsequent planning.

Therefore, in defensive planning, think up all the risks you can, no matter how far-fetched they are; then, importantly, split them into classes of attack, and design your defence for the most probable attack in each class. Then see what the cost is of extending that defence to cover the less likely attacks. Often you will find you have them effectively covered for free.

As an example, a business is required to take steps to reduce the effects of fire on the personnel in the building. With only very minor changes you get your drills for bomb threats as well.

As we (should) know, on average the likelihood of a building being bombed is considerably less than that of bits of debris from space dropping on the roof.

Therefore you could argue that it is a risk that does not need to be dealt with. However, tucking it under the much greater risk of fire costs next to nothing, so it can be cost-justified and is worth doing.
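A rough sketch of this classing approach in Python; the threats, likelihoods, and marginal costs are all hypothetical placeholders.

    from collections import defaultdict

    # Hypothetical data: each threat has an attack class, a rough likelihood,
    # and the marginal cost of covering it beyond the class's main defence.
    threats = [
        {"name": "fire",          "cls": "evacuation", "likelihood": 0.9,   "extra_cost": 0},
        {"name": "bomb threat",   "cls": "evacuation", "likelihood": 0.01,  "extra_cost": 500},
        {"name": "space debris",  "cls": "evacuation", "likelihood": 1e-6,  "extra_cost": 0},
        {"name": "burglary",      "cls": "intrusion",  "likelihood": 0.2,   "extra_cost": 0},
        {"name": "insider theft", "cls": "intrusion",  "likelihood": 0.05,  "extra_cost": 2000},
    ]

    classes = defaultdict(list)
    for t in threats:
        classes[t["cls"]].append(t)

    for cls, members in classes.items():
        members.sort(key=lambda t: t["likelihood"], reverse=True)
        primary, rest = members[0], members[1:]
        print(f"class '{cls}': design the defence around '{primary['name']}'")
        for t in rest:
            note = "covered for free" if t["extra_cost"] == 0 else f"marginal cost {t['extra_cost']}"
            print(f"  also covers '{t['name']}' ({note})")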

bob June 19, 2009 2:19 PM

@BF Skinner: Exactly (although #3 would be more of a personal choice thing). You have just issued doctrine which, if obeyed by the population of the US, would result in a return on investment (i.e., safety gained vs. dollars invested) of ~10,000,000,000,000x as much safety as the US government has provided since 9/11.

pfogg June 19, 2009 3:12 PM

@casey: “If risk analysis can lead to over-optimistic overconfidence, then a risk assessment of risk assessment needs to be done.”

The article lists four possible problems that may crop up during risk assessment, giving ‘the anchor effect’ as the last (which honestly doesn’t sound like that difficult a thing to overcome). My conclusion, therefore, is that there aren’t very many problems with risk assessment, and the problems are not, in general, very severe.

I’m now highly optimistic about the applicability and accuracy of standard risk assessment methods.

Lawrence D'Oliveiro June 20, 2009 4:49 AM

“When thinking about a total experience, people tend to place too much weight on the last part of the experience.”

On a (slightly) unrelated note, could this principle apply to criminals being put in jail as well? That, even though they may suffer a long sentence, their memories will tend to be weighted towards the latter part, namely the anticipation of release, and the release itself, thereby dulling the effect of the total incarceration? Should punishment methods be redesigned to take this into account?

Paul Renault June 20, 2009 6:41 AM

Thanks Bruce and Harry!

Now, this is EXACTLY why I keep coming here!

(Richards? There’s gotta be a story behind that name…)

bf skinner June 20, 2009 7:42 AM

@bob “You have just issued doctrine ”

Nah. It’s Mama Skinner’s doctrine; I just follow it.

moo June 20, 2009 10:32 AM

@Lawrence D’Oliveiro: What would you suggest, that we torture inmates for the last little bit of their sentence?

…or get rid of the parole system?

Wesley Parish June 22, 2009 3:01 AM

As an aspiring (or perspiring) SF writer myself, I would say the assessment of their contribution is spot-on. Writers are creatures of their biases – just read the Pournelle-Niven team any time you’ve got a moment, and it punches you right in the gut. (Neither Jerry nor Larry was able to conceive of the breakdown of the Soviet system in Eastern Europe: it was outside their reference system. And the recycled British-Roman Empire in the Moties cycle is exquisitely painful to read, particularly when it comes to recycling the failed 1980s US military adventure in the Lebanon as part of its forestory.)

I personally would prefer to see SF writers concentrate on social systems: how people react to unusual situations – read anything by JG Ballard and you’ll see what I mean.

Clive Robinson June 22, 2009 3:19 AM

@ Wesley Parish,

“As an aspiring (or perspiring) SF writer myself, I would say the assessment of their contribution is spot-on.”

Got one published yet?

Sadly the old gentle way into SF writing (via the mags) is not really available these days.

It’s like the old truism about “only go to a good doctor”: it neglects the fact that people have to “learn on the job” to become good. If people cannot do that, then as the “old’uns” retire we are left with a lack of experienced talent…

Steven Hoober June 22, 2009 10:26 AM

I think something got missed here. The point of a red team exercise is to pick people who are (if available) good at carrying out such attacks. Ambushes and so on are taught in the military, so set a group of intelligent, well-trained army guys loose on an exercise and they are good at coming up with good, cheap ways to carry out terror attacks. Bad guys follow our lead on training.

In practice, from the few I have seen (the best are classified) they tend to be a bit more clever than what gets carried out. Hence, presumably, paranoia about chemical plants and pipelines and bridges, with essentially zero attacks on such things.

Janis June 24, 2009 2:49 PM

Not convinced entirely about the cold-hot water thing. The “experience” isn’t a closed thing like the experiment seems to think; you don’t stick your hand in the ice water, pull it out, and then wink out of existence.

In reality, you are left standing there with two freezing cold paws and quite likely no way to warm them up. At the end of the full experiment, you can be one of two people:

1) the person who kept their hands in the water while the temperature was raised and hence has two warm hands now, or
2) the person who kept their hands in the water for only the first part and is now standing next to the first person with two still-cold hands.

This is a common error made when people talk about psychological “experiments.” The subjects don’t just vanish when the experiment is over. They continue to exist, really.

folbec June 24, 2009 7:36 PM

Quoting: “Driving versus flying is a common example.”

The first question to ask is: by what non-marketing-contaminated metric?

If you compute in “deaths per mile traveled”, flying wins; if you compute in “deaths per trip” or “deaths per hour in the vehicle”, driving wins.

This is because plane trips are typically few and very long, while car trips are typically a few miles long, two or four times a day…
So if the question is “what is the probability that I will die on this trip?”, the answer is not obviously lower for the plane than for the car (honestly, I don’t care whether I die on takeoff (0 miles) or landing (800 miles further)).
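A toy comparison of these three metrics with made-up rates (real accident statistics differ); the point is that the choice of denominator, not the data, flips the answer.

    # Made-up illustrative rates, not real accident statistics.
    car = {"deaths_per_mile": 1.5e-8, "miles_per_trip": 10, "hours_per_trip": 0.3}
    plane = {"deaths_per_mile": 2.0e-9, "miles_per_trip": 2000, "hours_per_trip": 4.0}

    for name, v in [("car", car), ("plane", plane)]:
        per_trip = v["deaths_per_mile"] * v["miles_per_trip"]
        per_hour = per_trip / v["hours_per_trip"]
        print(f"{name}: per mile {v['deaths_per_mile']:.1e}, "
              f"per trip {per_trip:.1e}, per hour {per_hour:.1e}")
    # With these numbers, flying looks far safer per mile, while driving
    # comes out ahead per trip and per hour, which is the point being made.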

Vickie Galante July 16, 2009 9:58 AM

I always enjoy Bruce’s ruminations on security. Here is an alternative perspective on his “Imagining Threats” piece.

First, I agree that we do tend to underestimate probability of events not in the news.

Skip to “If you’re asked to come up with threats, you’ll think of the significant ones first. If you’re pushed to find more, if you hire science-fiction writers to dream them up, you’ll quickly get into the low-probability movie plot threats.”

I offer another consideration. So far, we seem focused on probability. But there is another, equally important consideration. Risk assessment is shallow if we examine only probability and ignore consequence. For example, consider the following.

Given the choice between a “fair game” and one that offers an 83% likelihood of winning, the latter seems the better bet until we learn that the choice is between a dime coin-toss and Russian roulette. Then, consequence tints our assessment to a darker hue.

Finally, I think DHS engaging science fiction writers is totally okay, as long as it doesn’t exhaust the entire solution space. After all, 9-11 was fantastic in the primitive sense of the word. Osama, cleverly lurking outside the box, chose a low-probability but high-consequence scenario that sadly worked.

Get a bunch of like-minded, well-educated people in a room to brainstorm a set of risk scenarios. After they have exhausted the potential risks, bring in an eight-year-old boy or two. The kids may suggest situations of which the pros never dreamt. Why? They are unencumbered by adult fears… nor do they spend their days coloring inside policy lines. Policy, schmallicy; they’re kids.

Oh, and was: “Clear, the company that sped people through airport security, has ceased operations. It is unclear what will happen…” an intentional pun? 🙂

Vickie

Jon July 18, 2009 6:47 PM

@Janis “Not convinced entirely about the cold-hot water thing.”

I’m not either… The heat conductivity of (warm) water is much higher than that of (warm) air…. the #2 ppl probably returned to comfort much sooner than #1 ppl did…. I think this is an inaccurate measurement of ‘total discomfort.’

Clive Robinson July 19, 2009 9:14 AM

@ Jon,

“The heat conductivity of (warm) water is much higher than that of (warm) air”

As a working average it’s about 25 times.

However, it depends on a number of factors including humidity and air movement.

The odd thing is that the body can easily get confused.

I used to do a lot of sailing etc when I was younger.

On a cold day the air temperature can easily be quite a bit less than the water temperature.

So your body says “it’s more comfortable in the water” even though it might be losing heat four or five times faster.

Also the evaporation of a thin layer of water on the skin will always make it feel cooler than it really is.

Oh, and there is clear indication that Australia’s native population has evolved a very different technique for dealing with cold nights and hot days, which also affects their perception of hot and cold differently from other native populations in other countries.

MS July 20, 2009 8:45 AM

One challenge I continually face in my organization is that people make critical business (security) decisions based on everything but rational thinking. Overinflated egos, greed and self-preservation are among the top drivers that shape the IT solutions we deliver. How do you make a rational argument with non-rational upper management?

At the ISF conference in Barcelona last year, Prof. John Maule — Director of the Centre for Decision Research at the Leeds University Business School — enthralled the attendees with an enlightening presentation about the misperception of security risks. Reduced, simplified thinking based on intuition, emotions or gut reactions was labeled “System 1 Thinking”, whereas systematic, analytical, rule-based thinking was called “System 2 Thinking”. Prof. Maule suggested that the role of the Information Security Analyst is to help an organization with System 1-prone Thinking to develop a capacity for System 2 Thinking by revealing how people make faulty, overconfident judgments and asking “disconfirming questions” that challenge their gut assumptions. Fascinating research…

Janis July 21, 2009 8:33 PM

Jon and Clive,

Yup. I can either let you freeze my hands and then warm them up for me, or I can let you freeze my hands.

No wonder the first group was less grumpy.

Janis July 21, 2009 8:37 PM

I should specify that I always have cold hands. 😛 They NEVER warm up in air. Ever. Warm water will warm them up fairly handily, but left to their own devices in air? Not a chance. I’d be miserable for the rest of the day if I had my hands in freezing water and couldn’t put them in warm water afterwards.

Clive Robinson July 22, 2009 3:51 PM

@ Janis,

“I should specify that I always have cold hands. 😛 They NEVER warm up in air. Ever.”

Just promise me that you will not go outside without gloves if the temperature is below 10°C/50°F.

I do not wish to worry you but one significant cause of heart attacks in winter is people not wearing gloves if their hands feel cold.
