Schneier on Security
A blog covering security and security technology.
February 6, 2007
The Psychology of Security
I just posted a long essay (pdf available here) on my website, exploring how psychology can help explain the difference between the feeling of security and the reality of security.
We make security trade-offs, large and small, every day. We make them when we decide to lock our doors in the morning, when we choose our driving route, and when we decide whether we're going to pay for something via check, credit card, or cash. They're often not the only factor in a decision, but they're a contributing factor. And most of the time, we don't even realize it. We make security trade-offs intuitively. Most decisions are default decisions, and there have been many popular books that explore reaction, intuition, choice, and decision.
These intuitive choices are central to life on this planet. Every living thing makes security trade-offs, mostly as a species -- evolving this way instead of that way -- but also as individuals. Imagine a rabbit sitting in a field, eating clover. Suddenly, he spies a fox. He's going to make a security trade-off: should I stay or should I flee? The rabbits that are good at making these trade-offs are going to live to reproduce, while the rabbits that are bad at it are going to get eaten or starve. This means that, as a successful species on the planet, humans should be really good at making security trade-offs.
And yet at the same time we seem hopelessly bad at it. We get it wrong all the time. We exaggerate some risks while minimizing others. We exaggerate some costs while minimizing others. Even simple trade-offs we get wrong, wrong, wrong -- again and again. A Vulcan studying human security behavior would shake his head in amazement.
The truth is that we're not hopelessly bad at making security trade-offs. We are very well adapted to dealing with the security environment endemic to hominids living in small family groups on the highland plains of East Africa. It's just that the environment in New York in 2006 is different from Kenya circa 100,000 BC. And so our feeling of security diverges from the reality of security, and we get things wrong.
The essay examines particular brain heuristics, how they work and how they fail, in an attempt to explain why our feeling of security so often diverges from reality. I'm giving a talk on the topic at the RSA Conference today at 3:00 PM. Dark Reading posted an article on this, also discussed on Slashdot. CSO Online also has a podcast interview with me on the topic. I expect there'll be more press coverage this week.
The essay is really still in draft, and I would very much appreciate any and all comments, criticisms, additions, corrections, suggestions for further research, and so on. I think security technology has a lot to learn from psychology, and that I've only scratched the surface of the interesting and relevant research -- and what it means.
EDITED TO ADD (2/7): Two more articles on topic.
Posted on February 6, 2007 at 1:44 PM
• 211 Comments
Good idea for an essay.
> New York in 2006 is different from
> Kenya circa 100,000 BC.
Your dates need corrections - 2006 sounds so last year, and 100,000 BC was well before God created Adam.
Page 8:"But when faced with a loss, most people (70%) chose Alternative C (the risky loss) over Alternative D (the sure loss)."
Wouldn't Alternative C be considered the sure loss?
How good are humans at deciding "Should I eat this fruit of knowledge or not?"
The essay's discussion of the endowment effect was very interesting. How big an impact does this effect have in something like the stock market? If the sellers' estimate of something's value tends to be higher than the buyers' estimate, which one does the actual market value end up closer to? My guess is that the market value leans towards the buyers' estimate if there are more buyers, and the sellers' estimate if there are more sellers. But I know next to nothing about economics, and eagerly await someone knowledgeable in the field to tell me what really happens.
"Every living thing makes security trade-offs, mostly as a species -- evolving this way instead of that way -- but also as individuals. Imagine a rabbit sitting in a field..."
sounds like security trade-offs or evolution "for the good of the species" the popular yet inaccurate meme from TV nature programs. Correct me if I'm wrong but doesn't practically all (natural) selection happen at the level of the individual, as your example illustrates?
Suggestion: consider a bit of anthropology. The neuroscience angle is easy to push too far, given the state of the field and the flexibility of the human mind. Jared Diamond's "Collapse" has some good examples of historical ecological trade-offs, even if his limited anthropological background doesn't give him the analytical tools necessary.
For example, you have the Norse Greenlanders starving a few miles from successful Inuits, because of a cultural revulsion to eating fish. I'd expect that a number of our security illusions aren't hard-wired, but trained from millennia of living under agricultural serf/lord conditions -- not good training for information-age conditions.
Prospect Theory: You mixed up C and D:
" * Alternative C: A sure loss of $500.
* Alternative D: A 50% chance of losing $1,000. "
"...most people (70%) chose Alternative C (the risky loss) over Alternative D (the sure loss)."
It is interesting to compare perceived risk to actual risk. It might be even more interesting to compare the differences in perceptions of risk among all the people, relative to actual risk.
Actual risk might be a mythical number (or at least an unattainable one). When we talk about real risk vs perceived risk, we are probably actually talking about two perceived risks, both of which vary substantially from the unknown actual risk.
Public policy needs to manage the varying levels of perceived risk across the population. When those perceptions are too widely divergent, policy makers will find their jobs much more difficult and maybe impossible. Rather than implementing solutions they may find themselves in a stalemate. Until people feel safe, public policy has not finished its job.
Public policy also needs to be concerned with actual risk. Maybe the most important consideration would be to make sure that the consensus perceived risk (whatever that is...) is at least as high as the actual risk. Addressing the perceived risk then becomes the focus for public policy.
Anyway, this seems like an important part of the discussion of perceived versus actual risk.
Natural selection happens at the level of the individual, but species improve rather than individuals. (Yes, this is an oversimplification.)
The rabbit doesn't change its mind. If it gets away from the predator, it doesn't need to. If not, it can't.
However, the rabbit population as a whole will get better at evading predators, because the less effective rabbits will be removed more than the more effective rabbits. This doesn't happen because of some mysterious force, but it improves the species just the same.
It's pretty widely recognised that the utility function for money is not linear, and so some of your comments on the utility trade-offs for sure and probable gains and losses could do with some tidying up.
There are a couple of points I think need more attention. The first is intentionality, which you address in your table but not really in the text. People are much more worried by risks from people who intend them harm (e.g. terrorists) than from those who don't (e.g. careless drivers).
Second, and related, is that our brains are hardwired to detect cheating, and to get upset about it. If you rephrase a problem in such a way that a correct answer will involve detecting social cheating (e.g. under-age drinking), then people are much more likely to get it right. There's a reference here: . I'd be pretty certain that any risk involving someone cheating so as to get an advantage they're not "entitled" to will be perceived as more serious.
Natural selection occurs primarily at the level of the gene. The individual is ephemeral; the information in genes is eternal. Otherwise, you wouldn't see kin selection, and therefore no super-organisms like ants, bees and naked mole rats.
You would be correct to say that the evidence for group selection is still rather weak, and that the primary bottleneck for information propagation is the individual; that is, however, quite different from saying that the unit of selection is the individual.
"Prospect Theory: You mixed up C and D."
Thanks. I will fix.
Typo: "Stated" should be "Started", page 12, third paragraph.
BTW: In my humble opinion, it's cheating to consider words that start with "Unk", like unknown and unkempt, to be words with K in the third position. Not that that has anything to do with the statistic. :)
Your Innumeracy section is underdeveloped; I'd either develop it more, or strike it.
As this is a long essay, I'd tend to re-cap the conclusions you make about security at the end of a number of your selections near the end of the essay, perhaps in its own section prior to "Making Sense of the Perception of Security" (or at the beginning of that section).
My personal issue (which may not be common in your audience) was that since I have a significant statistics and Psychology background, I found a lot of the groundwork material repetitive, and had difficulty separating out the individual points you were trying to make about security from that groundwork. If you think that your audience is unlikely to have such background, you can safely ignore this point.
>How good are humans at deciding "Should I eat this fruit of knowledge or not?"
They were fine until a serpent got through the firewall with a phishing attack.
At least on my computer, there's something strange about the O in the word "two-man" on page 14, paragraph 4 (it looks like a different font).
I feel like a secretary :)
Page 14, paragraph 1 (first indent): "Linda is a 31 years old, single, outspoken, and very bright."
Should either be
"Linda is 31 years old, single, outspoken, and very bright."
"Linda is a 31 year old female, single, outspoken, and very bright."
And you forgot to mention if she is looking for a boyfriend.
@Alan: there is no "Actual" risk. Things either happen or they don't. We don't know which.
A probability is a useful number generated from a model built from the knowledge we do have. If we integrate all our knowledge into the model the number is as good as it can be.
That depends on what knowledge you have (or can afford the effort to integrate): I have just tossed a coin. What is the probability that it is heads?
For you, 50%. For me it is not. Our knowledge is different, therefore so are our probabilities.
(Frequent events have a frequency, which has an important relationship to the probability of the event in a short unit time. The large number of events have given us a lot of knowledge to build the model on. This might be confused as "actual" risk).
Very interesting discussion, with some good comments already.
One of the things that horribly complicates this discussion in the modern world is the use of media to give everything a spin. You can now convince very large groups of people that the real threat is not what it is (it is something different from reality), and can do the same thing for perceived threats. This makes both of them that much harder for the "common person" to measure, as this now has the ability to alter all of the standard heuristics when taken outside of the lab. While there have been good materials and studies on the use of media and media manipulation, I suspect this field (the use of media to influence security decisions, etc.) is very much in its infancy. You may be too far ahead of the curve on this one, but perhaps the discussion alone is what's needed to get the ball rolling.
This use of media to alter perception specifically in regards to security issues is a good part of the basis for the "Knowing Your Enemy" essay. It's also kind of the basis, I think, for "Wag The Dog", though perhaps in a different way.
As a side note, it's a shame Turner Broadcasting caved. They are telling everyone in the US that stupidity is acceptable, and the costs for said stupidity will be covered. A very bad precedent. "If I give you a sandwich" indeed!
I agree with your points.
My post had more to do with bruce's contrast of security trade-offs made "mostly as a species..." vs. "but also as individuals...."
If the rabbit example is an example of an individual security trade-off, which eventually becomes prevalent in the species as a whole, then what would be an example of those tradeoffs (presumably the majority) "made ... as species"?
I'd just get rid of the whole "made mostly as a species" and the initial reference to evolution.
Suggest: (as long as we're suggesting.)
"Every living thing makes security trade-offs. Imagine a ..." Then I think the discussion of consequences for human behavior follow.
The psychometric and economic models of understanding risk are undoubtedly critical, but I think they do not fully represent the "whole story". Studying risk has a long history in sociology and geography (under hazards research), and helps us to understand the way society contributes to actual risks and risk perceptions.
I suggest looking for "social amplification of risk framework" if you decide to look into it further. For a current overview of the topic, I would also suggest Taylor-Gooby, P. (2006) "Current directions in risk research: new developments in psychology and sociology" Risk Analysis 26(2):397-411.
keep up the good work.
Typo: delete forth, insert fourth, bottom of page 1.
@Mark Smith, or at least write the article backwards.
oh, I feel like such an old programmer....
Good point, did you also notice that the American media didn't initially show pictures of the cartoon LEDs? A photo was hard to find when the story initially broke.
The foreign press displayed it almost immediately. Only later did *some* of the American media websites show a photo of the cartoon character.
You say 1/250th:
Subjects tended to prefer ideographs they saw after the happy face, even though the face was flashed for 1/250th of a second and they had no conscious memory of seeing it. That's the affect heuristic in action.
But the source says 1/100th:
The shutter speed for the subliminal prime was set at 4msec which, after adding open and shut delay, results in a 10msec flash.
Figure 1 in the source also talks of "Subliminal 10 milliseconds"
Really enjoyed reading the article. A couple word edits:
evolutionary -> evolutionarily, before footnote 18
tw0 -> two, near footnote 36
'pari-mutuel' near footnote 46 is correct, but (IMO) clearer without the hyphen.
Page 13 bullet point 2: "One his way out the door," should be "On his way out the door,"
BTW: great read so far. Thanks for sharing it.
It would be interesting to see the theory applied to modern advertising. And to the marketing of War on Terr'sm.
From the conclusion of Bruce's essay:
In the past I've criticized palliative security measures that only make people feel more secure as "security theater." But used correctly, they can be a way of raising our feeling of security to more closely match the reality of security.
But used in conjunction with real security, a bit of well-placed security theater might be exactly what we need to both be and feel more secure.
That is an intriguing conclusion, and one that I think politicians intuitively get. It's sort of a marketing approach -- the branding of a product needs to set a user's expectations for that product. "Theater" can therefore send the message "we are aware of your concern and we are taking care of it."
There is a (small) percentage of us who are allergic to manipulation of this kind. The advertising age has taught us to be mistrustful of the marketing messages that we receive. We read Consumer Reports instead of Motor Week. We want the real facts, not the glossy commercials.
But the issues described in the essay are not only for use in explaining why John Q. Public does a poor job of risk assessment, or how to make him feel comforted at the airport. This is also a checklist for how to make objective cost-benefit decisions in public policy -- i.e. "real security".
So if a risk's SPCET (severity/probability/cost/effectiveness/trade-offs) have been objectively considered in the making of public policy, thereby producing real security, then it may indeed also be appropriate to add some "theater" dressing on the policy, to comfort human nature.
On the other hand, if the public doesn't "get" the SPCET of an issue at the outset, their elected representatives are less likely to actually produce rationally-based policy.
Here's to hoping that more objectivity goes into the decision making process, and that its professional advocates educate the public at large as well as the politicians.
PS - Bruce, I spotted a couple of typos in the draft. In one place, "stated" instead of "started" was used. In another, you wrote "...that I mean" instead of "...by that I mean" (or something similar). HTH.
Bruce, maybe you should consider vetting your articles on a wiki for easy editorial :)
(1) The science in this bit is pretty inaccurate: "It's what pumps adrenaline and other hormones into your bloodstream, triggering the fight-or-flight response, causing increased heart rate and beat force, increased muscle tension, and sweaty palms."
(2) The two clauses here don't fit together grammatically: "subjects didn't care whether they received $15 today or $60 in twelve months, while at the same time indifferent to receiving $250 today or $350 in twelve months, and $3,000 today or $4,000 in twelve months. "
But interesting essay.
tw0 instead of two: "Groups of six observers watched a tw0-man conversation from different vantage points"
There are several examples where I find that my own instincts run strongly with the experimental results and against the "rational" choice. I can't fault the argument for the rational choice but nonetheless would find it very hard to adopt it as my standard behaviour. Interesting.
Bruce, a suggestion. There is something I have observed that I think you could give some valuable insight into.
Unfortunately I am not a psychologist, so I do not know the correct terminology for what I am describing; however, I shall try to be as clear as possible, and I'm sure someone will know the right jargon.
People have a tendency to assume something is better because it costs more. Most of the time the more expensive thing is better, but not always, as any smart shopper can tell you.
I think there is a similar effect at work when it comes to evaluating security at the "gut" level. People often think that the more restrictive (costly in terms of freedom and convenience) security measures are the more effective ones. Yes, everyone complains about draconian security, but at some level there is an intuitive notion that they are getting more security by paying more.
Also, it is easy for people to compare the cost they pay in time to go through different security protocols. It is much harder for them to compare the benefits of two different security protocols, because the chance of a real attack affecting a particular person is so small.
Of course an overly-slow screening line is seen as inefficient, implying incompetent security. But at the same time, a screening line that moves "too fast" leads people to believe that the guards are just waving them through without scrutiny.
I think this may explain why so much of the US has embraced particularly costly forms of security theater.
Mike Scott, "People are much more worried by risks from people who intend them harm (e.g. terrorists) than from those who don't (e.g. careless drivers).", may or may not be correct in general, but is certainly wrong in these cases.
A constant barrage of propaganda ensures fear of "terrorists". A more subtle but just as pervasive snow of reassurance gently presses us to discount traffic risks.
Evolution can be deceptive. We, 21st century Anglos, tend strongly to think about individuals, while evolution deals only with large numbers. Furthermore, without unscientific variation, evolution does not work at all.. the large numbers become predictable, and the predictable quickly become lunches.
Security experts, at least those who hire them, have no interest in increasing the likelihood that the target population will survive. On the contrary, they would cheerfully eliminate the entire rest of the population to improve their own individual chances. In the wild, that individual would leave zero offspring.
Julius Caesar can choose to kill 200 or 600 Gauls, indeed to kill exactly 200. A (huge) pride of lions can make no such choice.. they will kill approximately 200 antelopes, maybe 201, maybe 199, rarely 202 or 198. The Gauls can attempt to influence Caesar. The antelopes cannot influence the lions, they can only run and jump randomly.
Indeed, they must jump somewhat randomly, even if a hotshot security expert could prove lions are incapable of dodging left. Should any significant fraction breed to become exclusive left-jumpers, the ability to right-jump (even sometimes) would be quickly eliminated from their genetic repertoire, along with the right-dodging lions. Then, either the antelopes would over-populate and starve, or some left-dodging predators would invade and eat them.
For evolution to have a happy outcome, security experts must be randomly disobeyed. A suitable number of predators must be fed, every day; and a larger number of prey must escape, every day.
Note, too, there are 200? lions and 200,000? antelopes. The security expert thinks there is only 1 Julius Caesar and 600 Gauls. Of course, not. However, the kinds of decision making suitable for evolution are nonsensical in the scenarios presented by security experts. Evolution preserves (the better) representative samples of populations. Security preserves non-representative individuals.
It occurs to me, having just written the previous sentence, that I ought to complete it with, "at the expense of the population".
Furthermore, to avoid the inevitable fate of losers in the game of evolution, the population ought to kill all the security experts.. first interrogating them aggressively to find out who hired them, so we can kill their bosses too.
1. This is hugely important work - but just scratching the surface. Therefore I would be careful about any conclusions at this time.
2. Who is the target audience?
3. I add my vote to the "nurture" side (learned social norms / sheep-like tendency / fear of rejection); your focus might be too strong on the "nature" side.
For example: fear of exclusion from social acceptance is one immensely strong motivator, amongst many. The existence of intellectual understanding or rational thought does not guarantee its application.
4. I think we must never forget that people are not just statistics or animals. A suicide bomber motivated by a sense of injustice is hard to fit into a Darwinian or statistical social model.
I didn't find the Linda example convincing. When presented with options like:
A. Linda is a bank teller.
B. Linda is a bank teller and is active in the feminist movement.
then a mathematician might think "B is a subset of A, therefore A is at least as likely as B". But most people, trained in language rather than mathematics, will assume that the question is shorthand for:
A. Linda is a bank teller and is not active in the feminist movement.
B. Linda is a bank teller and is active in the feminist movement.
With those choices, B seems more likely. The assumption that a set of presented choices is intended to be mutually exclusive is pretty strong.
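The mathematician's reading can be checked mechanically: however the probabilities are assigned, the conjunction can never be more likely than either conjunct. A toy sketch (the population size and probabilities are invented purely for illustration):

```python
import random

random.seed(1)

# Toy population: each person is (is_bank_teller, is_feminist),
# drawn with made-up, independent probabilities.
people = [(random.random() < 0.05, random.random() < 0.4)
          for _ in range(100_000)]

teller = sum(1 for t, f in people if t)
teller_and_feminist = sum(1 for t, f in people if t and f)

# The conjunction "teller AND feminist" can never be more
# frequent than "teller" alone.
print(teller_and_feminist <= teller)  # True
```

The same inequality holds for any joint distribution, which is exactly why option B, read literally, can never be more probable than option A.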
"If the sellers' estimate of something's value tends to be higher than the buyers' estimate, which one does the actual market value end up closer to? "
If the sellers' estimate is higher than the buyers' - then they would do well not to be sellers. It seems contradictory to want to be a seller if you estimate the price to go up.
Estimates don't really matter - only actual trading counts in determining stock price.
"My guess is that the market value leans towards the buyers' estimate if there are more buyers, and the sellers' estimate if there are more sellers."
On the stock market, every buyer must be matched with a seller. There can not be a purchase without a corresponding sale, and therefore there can not be more buyers than sellers or vice versa.
What matters is the quantity of money being offered. If buyers anticipate that the stock will go up, but too few holders think it will go down (i.e. few are willing to sell), then buyers must induce the others to part with their stock. To do so the buyers must increment their bid price upward until the market is cleared - until those who initially wanted to keep the stock change their minds and become sellers.
a) buyers pay a premium to convert holders into sellers.
b) sellers lose a premium to attract buyers.
c) sellers RENT out stock from holders and quickly sell it (b) with the intention of later re-buying it (a) and returning it to the holders [short selling].
The price of the stock goes up if the quantity of money performing (a) outmatches that of (b) and (c).
The price of the stock goes down if the quantity of money performing (b) and (c) outmatches that of (a).
I hope that helps.
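The matching described above can be sketched as a toy limit-order book (all prices invented; real exchanges use far more elaborate matching rules), showing that every buy is paired with a sell and that the price only moves when one side bids across the spread:

```python
# Buyers' and sellers' limit prices, best (most aggressive) first.
bids = [101, 100, 99]   # prices buyers are willing to pay
asks = [100, 102, 103]  # prices sellers are willing to accept

trades = []
while bids and asks and bids[0] >= asks[0]:
    # A trade clears whenever the best bid meets the best ask;
    # every buy is matched with exactly one sell.
    trades.append(asks[0])
    bids.pop(0)
    asks.pop(0)

print(trades)  # [100] -- one trade clears; the rest of the book doesn't cross
```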
"Humans have evolved a pair of heuristics that they apply in these trade-offs. The first is that a sure gain is better than a chance at a greater gain. ("A bird in the hand is better than two in the bush.") And the second is that a sure loss is worse than a chance at a greater loss. Of course, these are not rigid rules—given a choice between a sure $100 and a 50% chance at $1,000,000, only a fool would take the $100—but, all things being equal, they do affect how we make trade-offs."
All so true until the example - the presented options are very unequal also from a mathematical point of view.
How about taking an example from lottery - gaining $1 surely instead of getting $million with the odds of 1/1000000 or less?
@Marko: "All so true until the example - the presented options are very unequal also from a mathematical point of view.
How about taking an example from lottery - gaining $1 surely instead of getting $million with the odds of 1/1000000 or less?"
This is why one would use the expected (average) value: $1 for sure = $1 expected -- $1,000 with a chance of 1/1,000 = $1 expected
Fact is that people will tend towards the sure alternative if the expected value is equal. They will even tend towards the sure alternative if the expected value of the risky alternative is higher than the sure value.
The exact point where they would swing towards the risky alternative differs from individual to individual. The term is "risk aversion", and economists use so-called risk-aversion functions to modify expected values accordingly.
The problem is that it is very hard to find a risk-aversion function for a given individual.
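Paeniteo's point can be illustrated numerically. A minimal sketch, assuming log utility as the risk-aversion function (a classic textbook choice, not anything specified in the essay): two gambles with identical expected value, where the risk-averse agent still prefers the sure one.

```python
import math

def expected_utility(outcomes, utility):
    # outcomes: list of (probability, payoff) pairs
    return sum(p * utility(x) for p, x in outcomes)

u = math.log  # log utility: concave, hence risk-averse

# Start from $1,000 of wealth; compare a sure $1 gain with a
# 1-in-1,000 shot at $1,000 (both have expected value $1).
wealth = 1000.0
sure = expected_utility([(1.0, wealth + 1)], u)
gamble = expected_utility([(0.001, wealth + 1000), (0.999, wealth)], u)

print(sure > gamble)  # the log-utility agent takes the sure dollar
```

The point at which the agent flips to the gamble depends on how concave the utility function is, which is exactly the individually-varying quantity that is so hard to measure.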
correction: ' and therefore there can not be more buyers than sellers or vice versa.'
That should be buys/sells not buyers/sellers. That mistake is so common - even professional investors make it a habit.
There can be, for example, 2 buyers and 100 sellers, yet the total amount of buy/sell contracts would be 100.
@ Bruce, @ Benny
The 'endowment effect' experiment is highly flawed.
From the draft, page 9:
'It's called the "endowment effect," and has been directly demonstrated in many experiments. In one, half of a group of subjects were given a mug. Then, those who got a mug were asked the price at which they were willing to sell it, and those who didn't get a mug were asked what price they were willing to offer for one. Economic utility theory predicts that both prices will be about the same, but in fact the median selling price was over twice the median offer.'
This experiment is flawed because there are NO economic transactions occurring at all. There is no buying/selling but merely ASKING prices. There is also absolutely no mention of the money that they would use in exchange. Such a parallel-world scenario yields absolutely no insights.
Asking prices are useful in real life to initially figure out how many buyers there would be - but as every business knows you don't always get what you ask for, and therefore you have to lower it until enough are willing to buy it - even if it means you have to sell it at a loss.
Economic utility theory makes no prediction whatsoever that they would be about equal. All it states is that if buying and selling were to occur between the mug holders and non-holders, a market price would quickly emerge as each would quickly discover who is offering the best deals. We cannot know in advance how many would want to buy or sell in that experiment. However, if the experiment required that all must buy and sell - it should be noted that the price would be ZERO. Why it would be zero is left to the reader to figure out.
@Bruce: "Steven Johnson relates ... If you're a higher-order primate living in the jungle and you're attacked by a lion, it makes sense that you develop a lifelong fear of lions,"
Steven describes what many may recognise as a mini case of Post Traumatic Stress Disorder. Current thinking on the mechanisms of this differs slightly from your analysis. In lower animals the violent stressor plants a memory, but it needs *reinforcing* to become a memory that triggers a fight or flight response. It's not one lion attack but two, spaced apart. If the second lion attack doesn't take place soon enough the memory doesn't entrench itself in the same way.
In humans we think situations like this over again, restimulating the memory. The remembering can entrench the traumatic memory and turn it into a fight or flight trigger. That is what, in some people, can turn a one-time violently stressful event into a lifetime of autonomic irrational response to it - a situation now known as PTSD. It's also why counselling immediately after such an event may do more harm than good.
@quincunx: Spot on, except your last paragraph. I see no reason the price should be zero, at least for the assumptions as given.
@Paeniteo: The lottery is not an investment -- it is entertainment, a fee for the pleasure of sitting around discussing what you will do if you win. People pay a dollar into their workplace pool so they can then chat about it, not because they think it is a good investment.
Would it be more precise to describe Alternatives B and D as a 50/50 chance of gaining (B) or losing (D) $1,000 or $0? That seems to be implied, but stating it might make the point more clearly to someone considering this for the first time. I agree that for those of us in the choir it is already understood.
Fantastic article, I don't really have much more to add other than if you're going to start looking into psychology more you should definitely check out some of the magic/mentalist material out there. It's absolutely fascinating learning about how and why we are fooled.
@ Ben Liddicott
"Spot on, except your last paragraph. I see no reason the price should be zero, at least for the assumptions as given."
I should have been more accurate and said the price would tend to zero, not necessarily zero.
The reason for this is that if everyone is required to buy/sell, then it is no longer a market - it is an exogenous edict, that to be effective would have to be enforced at the point of a gun (if not physically then theoretically).
So imagine we have an equal number of mug holders and non-holders, at the end of the experiment their roles will be completely reversed, or else we will have a bunch of dead bodies.
The first mug holder steps up and tries to get the best offer. Since every non-holder knows that the holder will be shot, they will offer a zero price - since no matter what the holder will be compelled to give it up.
The only reason they would not offer zero is out of benevolence, not out of economic self interest.
The same process will be true for the rest. The point is that once you make a transaction an a priori requirement, you are no longer dealing with free agents - the holders will be more concerned with not getting shot than with getting a good price.
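The claim above - that a compulsory exchange drives the rational bid to zero - can be illustrated with a toy auction. A minimal sketch with invented bid values; the `best_price` helper is hypothetical, not part of any experiment described in the essay:

```python
def best_price(bids, must_sell):
    """Return the price a mug holder can get: the highest bid.

    If the holder must sell (the forced-exchange stipulation),
    rational buyers know the mug changes hands regardless of
    price, so the economically self-interested bid is zero.
    """
    if must_sell:
        # No buyer needs to outbid anyone; the sale is compelled.
        bids = [0 for _ in bids]
    return max(bids, default=0)

# Voluntary market: buyers bid their subjective valuations.
print(best_price([3.50, 5.00, 2.25], must_sell=False))  # 5.0
# Forced market: every rational bid collapses to zero.
print(best_price([3.50, 5.00, 2.25], must_sell=True))   # 0
```

The sketch only encodes the commenter's argument; whether real subjects would actually bid zero is an empirical question.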
Thank you. That's an interesting observation you made, and one that could probably be reflected in many "news" stories. I don't actually watch American news, not living down there, so I got my first information about this from places like BoingBoing which, if I remember correctly, had a mock-up of one from the start.
Based on some other blog comments I've seen, others were not so fortunate, pre-empting their ability to make a valid (read: useful) evaluation of the unfolding events. By the time word got out that these things had been around for two weeks (and what they really looked like), these people were firmly in the "punish the offenders" camp.
So, "informed" media let me make a more realistic assessment of the situation immediately. This gave my entire attitude toward the unfolding events a very different spin than that of others watching only mainstream media. By knowingly or unknowingly limiting the information given, mainstream media helped bolster the "valid threat" scenario.
"Similarly, it seems to be evolutionary better to risk a larger loss than to accept a smaller loss. There may be some benefit to this bias, but it may simply be adverse selection based on individual appetite for risk."
One plausible explanation is that this is because for an animal living on the razor's edge between starvation and reproduction (not uncommon since populations often expand until they are limited by food scarcity), a small or a large loss of food may actually be equally bad: both result in death. Therefore the best option is to risk everything for the chance of no loss at all.
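This threshold argument can be made concrete with a toy survival model. A minimal sketch under invented numbers: the sure 2-unit loss and the 50/50 gamble on a 4-unit loss have the same expected loss, so only the starvation threshold distinguishes them:

```python
import random

random.seed(1)

def survival_rate(stock, threshold, strategy, trials=100_000):
    """Fraction of trials in which the animal stays at or above
    the starvation threshold after one foraging outcome.

    'sure'  : a guaranteed small loss of 2 units of food.
    'risky' : a 50/50 chance of losing 4 units or losing nothing.
    Expected loss is identical (2 units) for both strategies.
    """
    survived = 0
    for _ in range(trials):
        if strategy == "sure":
            remaining = stock - 2
        else:  # risky
            remaining = stock - (4 if random.random() < 0.5 else 0)
        if remaining >= threshold:
            survived += 1
    return survived / trials

# An animal one unit above starvation: the sure small loss always
# kills it, while the gamble saves it half the time.
print(survival_rate(stock=10, threshold=9, strategy="sure"))   # 0.0
print(survival_rate(stock=10, threshold=9, strategy="risky"))  # ~0.5
```

Under these assumptions, risking the larger loss strictly dominates accepting the smaller one, which is the evolutionary logic the comment describes.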
Tiny minor edit: You write "Groups of six observers watched a tw0-man conversation". Replace tee-doubleyou-zero with tee-doubleyou-oh "tw0" -> "two".
I have just read "Stumbling on Happiness" by Daniel Gilbert, which is a funny book on how humans make decisions and the faults that are built into the decision-making process. I would recommend this as background reading for your essay.
My thoughts on Risk after reading this:
If I drive at 90 mph and don't die, then I perceive it to be safe. Every consecutive day I achieve this without dying makes driving fast safer in my perception.
However if the gap between instances is great enough my risk measuring system resets and each instance is as dangerous as the first.
From a highway emergency services point of view, they spend most of their working day dealing with the aftermath of accidents and so form a much worse view of the risk involved (but will associate the risk with drivers who are "worse" than they are, and so continue to drive fast while condemning the public for doing so).
I bet that after reading more about brain heuristics, cognitive theory, general psychology, tacit-explicit knowledge conversion, Bruce winds up being a proponent of complex adaptive systems theory.
Really, this is a knowledge management problem in a specific domain (security). Getting people to absorb explicit information properly into their knowledge base is a topic of a staggeringly large number of research papers.
Bruce, you need to post your recent reading list on your blog.
Automobiles kill 40,000 people a year, while commercial airplanes kill only a few hundred people a year.
That statistic is a bit of an apples-and-oranges comparison.
The airplane figure is a 100% external risk: when you take a seat on an aircraft, there is absolutely nothing that you can do to affect the probability of dying.
The 40,000 automobile deaths figure is composed of both external and self-imposed risk, as it includes speeding drivers who wrap themselves around a tree (self-imposed) as well as the kid who gets run over by a drunk driver (external).
For comparison, I think it would be interesting to learn the figures for self-imposed vs. external traffic deaths.
That doesn't change the fact that drivers accept the much larger self-imposed risk of driving more easily than the small external risk of flying, but it will allow us to specifically compare the external risk of driving with the external risk of flying.
Your hypothetical situation verges on the bizarre.
I think the problem may be that you've introduced a cost into your model without acknowledging that it's a cost. One presumably values one's own life over just about any amount of money. Effectively you've turned a commodity into a hot potato that sellers will readily get rid of at any price.
If you redefine the system so that sellers who fail to sell or buyers who fail to buy are, say, fined $5, then I believe you'd find that prices stabilize right around the amount of the fine. Of course, that continues to ignore real-life complications -- the commodity in this case is provided free to the seller and has equal worth to buyer and seller. In fact, the commodity is worthless except in terms of avoiding the fine. What I'm getting at is that by imposing a condition like this you've actually overridden the primary point of the experiment, which is to allow the subjects to hypothesize the value of a commodity *in a vacuum*. And the sole insight is that, again, humans naturally value what they already have higher than what they might some day acquire. In other words, a mug in the hand is intuitively valued at approximately 200% of one in the bush. This actually has very little to do with market economics, it's purely a psychological insight.
"Your hypothetical situation verges on the bizarre."
Poetic license and dramatic effect. Compelling someone to pay a fine is still compulsion. What happens when they refuse to pay? (OK, granted I don't suggest the experiment should turn into a hostile environment).
"If you redefine the system so that sellers who fail to sell or buyers who fail to buy are, say, fined $5, then I believe you'd find that prices stabilize right around the amount of the fine."
A sale price of zero is still an exchange.
Your setup doesn't work out. If the first seller comes up to sell, everyone would still bid $0, and that seller would still prefer to get nothing as opposed to paying a $5 fine - and the same holds true for everyone.
The game is still a forced market in favor of buyers no matter which way you slice it: theft or death.
Would you have us believe that the actors would voluntarily pay $5 to avoid a $5 fine - if they can avoid the fine altogether and exchange for free?
"Of course, that continues to ignore real-life complications -- the commodity in this case is provided free to the seller and has equal worth to buyer and seller."
No, in real life exchanges occur precisely because the two parties to an exchange have a reverse subjective preference for a given item. There is no 'equal worth'.
You buy a good because you prefer a good to your money - and the seller prefers the money to the good. If there is a perceived equal worth of your money and the good - you would not take action, since there is no benefit and an actual time/effort cost in making the exchange.
"In fact, the commodity is worthless except in terms of avoiding the fine."
Well yes, still assuming my stipulation that an exchange must occur - you have just proven yourself why the price must then be zero, 'the commodity is worthless' if it must be exchanged.
If my stipulation does not hold - then the commodity is not worthless, it's a MUG! It has a use and it can be traded for something else.
"What I'm getting at is that by imposing a condition like this you've actually overridden the primary point of the experiment, which is to allow the subjects to hypothesize the value of a commodity *in a vacuum*"
Yes, but then it is entirely pointless and cannot be used to gauge values at all. Again, no insight. I can shout all the prices I want, and still there would be no external value for anyone to observe, because it's all in my head. Furthermore since it's all in my head, there is no reason why I couldn't lie - how would you REALLY know how much I valued something?
"And the sole insight is that, again, humans naturally value what they already have higher than what they might some day acquire."
Uhm, no. You see, if that were the universal case, there would be no trade at all. Everyone would value what they have higher than what they could get, and you couldn't get anything since no one is willing to part with theirs.
Theorizing prices in a vacuum eliminates the price system entirely.
To suggest that sometimes people value what they own over what they could get is more accurate, but nonetheless obvious. No insight here at all.
"This actually has very little to do with market economics, it's purely a psychological insight. "
Market economics incorporates many psychological insights - after all the economy consists of humans.
@Sanctimonymous: Nice one. I'm lol.
@Bruce: Great essay; it seems to be a nice expansion on your last essay on risk. I'm trying to get some of the people in my life to read this one. For some, the points you make are real eye-openers.
For the past month or so, I've been working on a paper (for a class--no current ambition to publish it) on a similar topic: how groups of people make decisions. One of the biggest challenges, I think, is figuring out how to improve the collective decision-making process. You, and many of your readers, are passionate about security issues, but most of the rest of the world sees security, or the lack of it, as just one more thing in life they have to deal with. Yet through democratic and market processes, their preferences strongly affect how their societies choose to deal with security issues. Regardless of whether you believe legislators vote with their constituencies or with their contributors, those legislators face strong pressures to satisfy the desires of people who are making their security decisions based on heuristics. So, I'd suggest expanding your treatment of how to change things to improve the quality of the security decisions individuals make despite the distortions arising from the heuristics you catalog. Either that, or propose a mechanism by which society as a whole can make good security decisions even though many of the people in it might want less-than-optimal tradeoffs.
"Second, when considering security gains, they're more likely to accept an incremental gain than a chance at a larger gain"
In practice, this is the preference for patching up leaky systems, rather than replacing them with systems that work.
It explains much about the security market :)
"Either that, or propose a mechanism by which society as a whole can make good security decisions even though many of the people in it might want less-than-optimal tradeoffs."
To be read: Use force if necessary.
Good reads on unreasonable beliefs:
"Why People Believe Weird Things" by Michael Shermer
Addresses many of Bruce's points on fallacies in reasoning in greater depth.
"The Lucifer Principle" by Howard K. Bloom
Addresses people choosing actions on basis of group beliefs as opposed to personal safety (among many other things).
"To be read: Use force if necessary."
Or a representative democracy. Or a security czar. Or a standing advisory panel to committees in Congress/whatever other legislature you have. Or converting Homeland Security from a department to an appointed or elected board. :-)
There are probably a bunch of other approaches the creative minds on this forum could devise.
@ False Data
"Or a representative democracy. Or a security czar. Or a standing advisory panel to committees in Congress/whatever other legislature you have. Or converting Homeland Security from a department to an appointed or elected board."
Force. Force. Force. Force.
"There are probably a bunch of others approaches the creative minds on this forum could devise."
Perhaps they can, but it will inevitably come down to force (the state) or not (the market). Most will pick the first, because they incorrectly perceive security deficiencies to be part of the market, whereas few will pick the market because they correctly perceive that security deficiencies are the result of the accumulation of bad legislation, unsound monetary systems, and unnecessary government institutions in general.
@ Fraud Guy
""The Lucifer Principle" by Howard K. Bloom
Addresses people choosing actions on basis of group beliefs as opposed to personal safety (among many other things)."
Interesting. Never heard of that name for the principle, always known it as 'groupthink'.
Great piece. One tiny typo:
>The first group was told to imagine
>that they has spent $50 earlier in
>the week on tickets to a basketball game,
Should be "have spent" :)
Bruce, my compliment for a great paper on a complex topic.
Of the many possible comments on this highly interesting field, which will raise mysteries for a long time:
a) You write that tests are often done on students. The problem is that students are perhaps bad test subjects because they have not yet positioned themselves with debts, kids, careers, and all the other sensitivities that make people feel vulnerable and thereby likely increasingly risk-averse.
b) You should in my view pay a bit more attention to the learning capability of man. A child once burned learns to fear fire. It is a real threat even though the risk may be overestimated.
c) You clearly do not include technologies that can handle trade-offs such as Privacy Enhancing Technologies.
d) I would REALLY like to see studies on why DEVELOPERS and IT buyers tend to underestimate the threat they are to others while they perhaps overestimate the threat of others to themselves. That seems to be the root security problem today.
e) The concepts of CHOICE and AUTONOMY would require the availability of services that also incorporate individual security and control (both real and perceived). We learn with spouses that if you VOLUNTARILY give up control for no other reason than to show trust, you build much stronger trust relationships. But the difference is that you want your spouse to trust you. Why care if some service provider trusts you? He trusts his security that he will get his money; trust in me is not relevant.
"Bruce, maybe you should consider vetting your articles on a wiki for easy editorial"
I'd love to. Is there anyone willing to set it up?
I can't afford the bandwidth on my colo box but I'll talk to the security group on campus and see if I can swing it.
"it will inevitably come down to force (the state) or not (the market)"
So what's the distinction between being outvoted by your fellow citizens and being outvoted by your fellow shareholder(s)? ;-)
Thanks for interesting commentary but I was a bit confused by this bit:
"You, and many of your readers, are passionate about security issues, but most of the rest of the world sees security, or the lack of it, as just one more thing in life they have to deal with."
It is true that many of the blog readers have strong opinions about security, but I am sure they all deal with normal day-to-day life like anybody else. You can become habitually inquisitive and questioning about the perceived security of your daily normal life by practice and study, but that is not the same thing as passion. I would argue that passion tends to distort good (security) judgement.
"... propose a mechanism by which society as a whole can make good security decisions even though many of the people in it might want less-than-optimal tradeoffs."
IMO, the only possible answer to a question like this is education. Educating everybody in society to the level that they can deal with complex security problems does not seem practical or useful to me; that's why we have security specialists.
@ Grey Man
Thanks for the compliment. My guess is we're just disagreeing over terms. Here's the difference I'm talking about. I chose to label it as a passion about a subject, but I'm happy to use "interest" or some other word:
You and I read this blog regularly. I'd guess you have the same tendency I do to be willing to analyze security in terms of tradeoffs and security models, to follow security-related stories, and to be interested in the latest attacks.
Contrast that with most people I know who are much more interested in the latest episode of CSI, who's in the Superbowl, whether they got the tee time, and what time the kids' soccer practice starts. Their interest in security as a topic comes up when they're at the airport or getting their bag searched on the way into Sea World, or when they read the latest news about a suicide bomber in Iraq. They have absolutely zero interest in developing a security model or calculating the probability of an attack. As Bruce's essay says, as long as they feel pretty safe and don't have to spend too long at the airport security line, they're good to go.
I think there are a bunch more people in group 2 than in group 1, so I'm hoping Bruce will add something to his essay about how to deal with that fact. As you said, education might be a tough job, but if you go with experts instead, then you need to figure out how to get from that expertise to a quality security policy that all the non-experts are at least willing to tolerate.
8th paragraph of the article:
"There is also direct research into the psychology of risk. Psychologists have studied risk perception, trying to figure out when we exaggerate risks and when they downplay them."
You should make your pronouns the same across the conjunction:
"...when we exaggerate risks and when we downplay them."
"...when they exaggerate risks and when they downplay them"
Upon reflection, it should definitely be:
"...when we exaggerate risks and when we downplay them."
Using "they" would imply that the psychologists are only investigating their own behaviors.
@ False Data
"So what's the distinction between being outvoted by your fellow citizens and being outvoted by your fellow shareholder(s)?"
Mu. The very framing of your question shows a great deal of misunderstanding of the market, which explains the initial call to mass democracy and state planning.
The most important difference is that you are not bound to the edicts of the charlatans voted in by your fellow voters; you can sell your stock and never buy again.
Now try to declare your property a sovereign territory not bound to any authority other than international law, and see what happens.
The ability to opt out is the ultimate check and balance.
" They have absolutely zero interest in developing a security model or calculating the probability of an attack."
Thank goodness. This way they can concentrate on their own comparative advantage to society.
@ Stephan Engberg
"d) I would REALLY like to see studies on why DEVELOPERS and IT buyers tend to underestimate the threat they are to others while they perhaps overestimate the threat of others to themselves."
I believe the analysis in the paper explains this under Prospect Theory.
They underestimate the threat to others because it is considered a loss (an externality), and therefore they will take their chances.
They perhaps overestimate the threat to themselves because it is a gain in knowledge and competitive advantage (internalization), and therefore they will keep it up.
Control is also an issue raised in the draft. An IT/Dev firm obviously has more control over itself than its clients.
"...or trading some security against a particular kind of explosive terrorism on airplanes against the expense and time to search every passenger..."
The string of "against" used with slightly different meanings is initially confusing. I would suggest something along the lines of:
"...or trading some security against a particular kind of explosive terrorism on airplanes versus the expense and time to search every passenger..."
"The more your perception diverges with reality in any of these five aspects..."
should probably be:
"The more your perception diverges from reality in any of these five aspects..."
"Why is it that, when food poisoning kills 5,000 people per year and 9/11 terrorists killed 2,973 people in only one year, are we spending tens of billions per year on terrorism defense and almost never think about food poisoning?"
It might help to drive home this point by using some actual numbers on the food-safety side:
"The Food and Drug Administration (FDA) performance budget request for FY 2007 is $1,947,282,000"
"Like a squirrel whose predator-evasion techniques fail when confronted with a car..."
Good analogy. Another one that may be useful is armadillos: their instinct is to jump when approached by a car (which would otherwise probably pass harmlessly over them, if missed by the tires).
"Dealing with risk is one of the most important things a living creature has to deal with..."
The "Dealing...deal with" juxtaposition is a little awkward. Perhaps something like:
"Assessing and reacting to risk is one of the most important things a living creature has to deal with..."
"...the amygdala is what reacts immediately. It's what pumps adrenaline and other hormones..."
should probably be reworded...perhaps:
"...the amygdala is what reacts immediately. It's what causes adrenaline and other hormones to be pumped..."
Under "Risk Heuristics":
"How we get the risk wrong, and when we overestimate and when we estimate..."
is a bit confusing. Did you mean one of the following, instead?
"How we get the risk wrong, and when we overestimate and when we underestimate..."
"How we get the risk wrong, and when we overestimate and when we correctly estimate..."
"For half the subjects, the deck consisted of 70% happy faces and 30% frowning faces. Subjects faced with this deck were very accurate in guessing the face type: 68% of the trials. The other half was tested with a deck consisting of 30% happy faces and 70% frowning faces. These subjects were much less accurate with their guesses, only predicting a frown on 58% of the trials. The type of face affected accuracy."
This doesn't appear to be a congruent set of comparisons. Perhaps, for the second group,
"...predicting a frown on 58%..."
should instead read
"...predicting the face type correctly in 58%..."
"Groups of six observers watched a tw0-man conversation..."
typo in "tw0" should be "two"
"Linda is a 31 years old, single, outspoken, and very bright. She majored in philosophy As a student..."
The "As" should be "as".
"...and salient sensory input—but the issue really broader than that."
This should probably read:
"...and salient sensory input—but the issue is really broader than that."
"—and that I mean trade-offs that give us genuine security for a reasonable cost—"
probably should read:
"—and by that I mean trade-offs that give us genuine security for a reasonable cost—"
"Bruce, maybe you should consider vetting your articles on a wiki for easy editorial"
"I'd love to. Is there anyone willing to set it up?"
Hi Bruce, I'm a student at Polytechnic University's Information Security lab and we're all really big fans of your work. We discussed your need for a wiki and quickly agreed that we would be willing to set one up and maintain it for you. Send me an e-mail if you'd like to talk about this! :-)
dguido at gmail dot com
You might want to change.... "If you misevaluate the trade-off, you won't accurately balance the costs and benefits."
Is misevaluate a word? Perhaps... "If you incorrectly evaluate the trade-off,...."
Another comment. You claim that there is an objective, quantifiable risk, i.e., the likelihood of something occurring. But since the propensity of crime has a behavioristic component, you easily end up in a circular argument. This likelihood is historic, but may not tell you anything about the future.
Example: today people naively assume that biometric identification adds to security. But since biometrics is inherently spoofable and you cannot get new keys, there is in present models no fallback, no graceful degradation. So as soon as criminals and other forces (undercover agents, witness relocation, and VIPs needing new identities) get the tools and competences for systemic spoofing for credential-based identity theft, the assumed "objective" risk changes dramatically.
Security is dynamic unless you have context isolation in the system, i.e., damage control, so that you know, as far as possible, the worst consequence of (inevitable) security failure.
Without the dynamic component, you ignore that criminals learn - faster than new protections can be deployed.
Absolutely - but the fact is that that is what is happening, and it is a major factor explaining why we have such bad security. Almost all security is deployed to take control AWAY from you, and thus reduce your security, because someone else wants to have control.
Well, Trust (including Privacy Enhancing) technologies, and for the above case a shift to on-card-only matching of biometrics, are examples showing that we can DESIGN risk down or away even in multi-stakeholder systems.
Risk reduction is the main design issue for dealing with these dynamics - assume the server fails, assume the attackers succeed, and then what?
That is the main reason I advocate a shift to National Id 2.0, where you incorporate Citizen Control through MANY identities or keys for logical separation of context. The present National Id model only makes risk and crime escalate.
I would suggest, in relation to the ordering of options part (linked to footnote 54), that at a certain point in list length, the emphasis will shift from the end of the list to the start of the list for a written question, but remain at the end of the list for a spoken question.
That would certainly appear to be the case in elections with long ballot papers for multiple candidates (standard Irish ballot paper would have 10 to 15 candidates, with at least 6 having a chance of getting elected, and there's a huge advantage to being at the top of the ballot paper - people get bored before they've read the whole list).
Forgot to say, though - great essay! Just kept getting curiouser and curiouser.
Hi. I also seem to remember some research showing that men's long-term vs. short-term evaluations of risk are affected by the recent sight of a beautiful woman - if you see a beautiful woman, your brain automatically decreases how much you worry about future ramifications. The test offered subjects an amount now or a larger amount in a month's time.
Also, I'm interested in how all this feeds into gambling behaviour, since it is the most obvious form of people having an incorrect tradeoff strategy.
I don't think even 10,000 years ago our responses were perfectly honed to our environment - superstitions have been around a long time, and are an example of feeling safer when you've done something with low to zero effectiveness, and feeling less safe than you really are when someone has cursed you. I think, as long as we've been what we'd call human, we've had faulty risk assessment.
Page 11, 2nd paragraph: I think you meant "higher risk perception" rather than "higher risk perfection"
@ Food Poisoning comparison.
First, the question is not the size of the problem vs. cost, it is the size of the *mitigation* vs. cost. The achievable mitigation might be larger for food poisoning than terror attacks. Or it might not.
Second, security is like law enforcement and ticket punching. If done well, it seems superfluous.
So the question is not: Why are we spending more on security than attacks cost, but how much worse might the problem be if we reduced the investment?
(None of this is meant to undermine Bruce's points about where money is best spent, only the comparison with food poisoning, which does not respond to enforcement activity in the same way).
I think that was the point of the evolutionary comparison. Our risk perceptions are not honed to protect us as individuals. They are honed to protect our genes, which may or may not overlap. The problem today is that there is a bit of a lag between conditions and genome shift, in addition to our individualist assessment of value.
Additionally, I think that our culture gets short shrift. Our culture is going to train us in ways that maximize the culture's survival and propagation, not our own. For example, many people today believe that they are immortal: that death is an illusion, which will be dispelled in the afterlife. Christians, Muslims, Hindus, New-Agers....
I doubt that this belief maximizes realistic threat assessment -- if I believe I'm immortal, instead of paying attention to the bear about to eat me, I'm going to be worried about how to properly pray. However, the culture of immortality is attractive. It can be easily evangelized. And so this belief spreads world-wide, in the face of all evidence.
Another example of the importance of culture can be seen in "Pigs for the Ancestors," Rappaport. In the New Guinea highlands, there has existed a cultural group that believed that pigs must be sacrificed to the ancestors to break a truce, and that that can only be done with adequate numbers of pigs, and the proper ceremony. This leads to truces enforced by the limited carrying capacity of the land.
When tribe A is preparing to attack their neighbors, it would be in the interest of their neighbors to mount a pre-emptive attack. However, if neighbor B has insufficient pigs, they'll hold off, believing that the attack will fail if they don't pacify the ancestors. Clearly a misguided assessment, in terms of the individuals of tribe B. But, in terms of the entire system, these biases in risk assessment limit war to the maximum supportable by the local carrying capacity, keeping the entire community of communities from collapsing, and so the belief lives on, century after century.
How many of our beliefs are similar? And could we identify them, if we wanted to?
Spot on. You're bringing up poli-sci as much as anthro.
Page 20, discussing the responses to the questions about divorce. This paragraph essentially says:
"In response to the first question, 23% said X; 36% said Y; 41% said Z.
"In response to the second question, 26% said X; 29% said Z; 46% said Y."
(Sorting the responses by percentage in each case.) I found it harder to compare the two cases this way than it would have been if the responses were in the same order in each case. That is, I feel it would be clearer as:
"First question: 23% said X; 36% said Y; 41% said Z.
"Second question: 26% said X; 46% said Y; 29% said Z."
It makes it easier to compare the two questions.
Bruce, I just got home from RSA. Caught your talk on Weds. My general comment about your talk--and admittedly I didn't get a chance to read your whole essay till now, so I am only basing this on what I heard live at RSA--is that you cited some very interesting things about human perception and our psychological foibles; however, I felt you failed to tie these in adequately to our thinking on security--specifically IT security, which is really what the conference was about.
My guess is that you ran out of time on the talk, right? The written essay seems to make better conclusions than what I heard in your presentation.
I really appreciate the thoroughness with which you research something when you decide to "go after it." That is a great quality. However, a professor of mine said of one of my essays that I was "mining nuggets" here and there, without adequately tying them into an overall framework of thinking.
The psychological studies you cite, while interesting, do not tell me why, for example:
1) Gen. Colin Powell, a week after he retired as Secretary of State, was strip-searched before boarding a plane for NY. Powell cited this in his closing RSA keynote address.
2) Furthermore, why Gen. Powell said he didn't mind the strip search in the name of security, even though admittedly, it added no security to the system.
3) Why, despite hard quantitative statistics, it is still difficult to justify ROI on a security budget.
Well, those were just some things I thought might be useful to address.
"I really appreciate the thoroughness with which you research something when you decide to 'go after it.' That is a great quality. However, a professor of mine said of one of my essays that I was 'mining nuggets' here and there, without adequately tying them into an overall framework of thinking."
Certainly my talk was guilty of that. I don't yet have an overall framework of thinking about this. Right now I have a lot of isolated facts and explanations, and I'm not really sure how they all tie together. I made a stab at it at the end of the talk -- and at the end of the essay -- but it's not really enough.
I'm still working on it, though. I don't know if this is a book-length idea, but it certainly is an idea that needs further research and analysis.
BTW, the way I think of it is this: any piece of writing needs both a "so" and a "so what?" Right now, the essay has a lot of "so," and not very much "so what?"
I'm working on the "so what?" part.
Small correction: on page 14, in the seventh paragraph (starting with "Here's one...") in line two, in the word "two-man", the letter 'o' seems to be the wrong size or font. Please correct this in your OOo file, although it might only be noticeable to very attentive readers ;-)
btw, *very* good essay so far. Can't wait until the whole thing is complete and published.
I am not a security person, nor do I play one on TV. I never finished my Anthropology degree. But I think a lot. My husband suggested I read this essay and contribute what I can. So here are a few thoughts:
1. The heart of your topic is philosophical, not psychological, in nature: why do people react with fear to the unknown, and how do we differentiate between the new and the dangerous? Psychology and neurology will offer insights into how the brain works, but they won't tell you everything.
2. Western political thought has operated under an *incorrect* interpretation of Machiavelli; he did not say it's better to be feared than loved, he said that the paranoid, double-crossing, unethical Prince will probably live longer if the people are so terrified that they will not rise against him. He (Machiavelli) says nothing about the health of the community or the people in it; the concept of "the people" as entities with rights was still a few hundred years away. When we rethink Machiavelli, we'll come up with some more useful political philosophies.
3. Fight-or-flight is *not* the only response to stress. It is the typical MALE response. New research as reported by Psychology Review shows that there is a very different response typically seen in WOMEN: "tend and befriend." (also seen in "rally the troops", "circle the wagons," "Communicate and collaborate" "work as a team", etc. ) Read this review at http://www.apa.org/monitor/jan04/habit.html - And you will recognize this pattern once you think about it. A man who's had a crappy day at work goes home, pops a beer, and kicks the dog. A woman calls her girlfriends. Our current response to risk is the fight-or-flight method (hit back harder) and not the tend-and-befriend method (circle the wagons and get the attackers some therapy.)
4. Instead of looking for bigger, harder ways to hit back, maybe we (meaning YOU, because I know jack about security technology) should consider tend-and-befriend approaches to risk assessment and management. One current method that comes to mind is certificates and certifying authorities: I trust the authority, the authority trusts you, so I can trust you. I'm sure you smart people can think of more ways to extend the "tend-and-befriend" metaphor to security.
5. Hackers in China and Russia wouldn't have to attack us if the societies in China and Russia provided sufficient opportunities for personal and professional growth. People who can earn an honest living are less motivated to steal. What if, instead of spending all our attention and money on stopping foreign bad guys, we just billed their government for the loss? Just a thought, like I said, I know jack about security.
Your article is very similar to one written in Time (I think around Nov/Dec last year) about how bad humans are at evaluating risks.
What was very interesting in that article was a pyramid chart of what the various Americans who died last year actually died of (i.e., what did them in).
I don't remember the numbers, but it basically pointed out that more people died falling out of bed last year than from shark attacks, and similar parallels.
I mean, let's face it: Americans aren't afraid of eating hamburgers, but more of them have been killed by those than by terrorists.
Sir, take off your shoes before you eat that Big Mac; we want to do a cholesterol check :)
I really like where this is going but would like to see a clearer relationship between the front matter and the conclusion (as previously noted).
Also, is this sentence correct?
"Second, when considering security gains, people are more likely to accept an incremental gain than a chance at a larger gain; but when considering security losses, they're more likely to risk a larger loss than accept a larger gain."
Should it not say "... accept a smaller loss"?
A few comments:
1. Ethical consideration:
In my opinion, raising the perception of security to the real level of the risk is ethical, but raising it higher than the real level is not.
2. Reality/perception of the countermeasure
The reality/perception of a security risk is distinct from the reality/perception of an available countermeasure.
For example, there is a reality/perception of the security risk of getting a disease or attacked by a missile.
However, equally important there is a reality/perception people assign to the available countermeasures such as acupuncture or SDI.
3. Cost of a countermeasure?
Sometimes it is in negative dollars. Countering obesity by eating less, or lowering the risk of cancer by giving up smoking cigarettes, saves money. Sometimes a countermeasure has no dollar cost: locking a door, or putting on a seat belt.
Two questions and thoughts on this very interesting and encompassing essay. My questions and thoughts center around the question whether some of the apparent errors of risk heuristics might not be errors at all when seen from a different point of view.
1. Risk from flying vs. risk from driving: Certainly more people die from car driving, but people also spend much more time driving than flying. So something like deaths per km driven/flown, or deaths per hour driven/flown, might give a different view of the comparative size of the risks.
2. Natural vs. man-made risk: There is a significant difference between a deer ramming my car and a mugger stealing my money. The first is basically a random process, and the probability of another deer ramming another car is not (too much) affected by previous similar events. This is different for mugging. A mugger who successfully acquired gains from his illicit trade will continue it, and will even be an example for some other members of society, so the frequency of mugging might well increase. This might then lead to further risks becoming more frequent (see some suburbs for this). Therefore, if risk probability times damage is similar in both instances, it is very reasonable to fight the non-Markovian risk over the entirely random one. The same goes, to a certain degree, for comparing deaths from terrorism to deaths from heart disease. The rate of heart disease will not increase dramatically if we do not fight it; the rate of terrorism might, if terrorists find that they can kill at leisure (to exaggerate).
Just these two ideas which might make some of the unreasonableness more reasonable.
Under the heading Prospect Theory, please state that the subjects were asked to invest $1,000. This can be inferred from the results, but stating it will allow the reader to make the connection faster.
You might find the sensemaking literature helpful. It focuses less on a single decision, and more on the dynamics of making sense of a specific context. I won't try to list my favorite articles/research, but here's a few starting points:
1. anything by Karl Weick - "The Social Psychology of Organizing" is my favorite, though most of his recent work focuses on high-reliability organizations
2. Gary Klein's Data/Frame model - from an area usually termed "naturalistic decision making"
3. Sensemaking research publicized by the Office of Force Transformation (www.oft.osd.mil) and the DoD Command and Control Research Program (dodccrp.org) - this research highlights the cognitive & social aspects of translating information into action (google for "sensemaking", "leedom", or "ntuen")
4. Although it's focused more on ontology than epistemology, I find Snowden & Kurtz's Cynefin framework useful (http://www.research.ibm.com/journal/sj/423/kurtz.html).
5. Kevin Burns at Mitre has some interesting interactive sensemaking games/tools (mentalmodels.mitre.org) that allow people to more clearly see their biases.
Finally, it's telling that over 1/3 of the pages in the Spring 2006 MIT Press "Cognition, Brain, and Behavior" catalog were devoted to the category "Philosophy of Mind": not a good indication that our knowledge in this area is anywhere near mature.
Great piece. I've only read half so far. My one comment is that I don't think it's fair to suggest that people are making an irrational decision in their fear of terrorism compared to food poisoning. Food poisoning kills more people every year, yes. But the food doesn't plot against the country. It is quite a different kind of threat. The risks involved in a terrorist attack, no matter how improbable, when taken to their extreme could throw our country into turmoil for decades and threaten the fabric of our society. While it could be argued that things like the recent spinach contamination impacted the economy, that was mostly due to the overblown media coverage and doesn't compare to an aggressive act by other human beings. E. coli and botulism don't have cultures that clash with our own.
The latest essay is very compelling. I'm curious to know if you've considered speaking with Jeff Hawkins? In his book On Intelligence, his theory of intelligence posits that prediction is what the brain does. He describes the act of coming home and entering the house through the front door. You've done it so many times that, without consciously thinking about it, your brain is predicting the experience: the sound the door makes, the weight of it as you push inward, the feel of the door handle, the tension of the lock as you turn the key. Then he suggests how the brain's predictions would be knocked off course if someone had arrived minutes before you and significantly changed the door's weight, a change that you wouldn't detect upon initial approach to the door. Your brain just detects business as usual.
I think this is related to your notions of the security feeling and how we assess risk. Steve Johnson's brain must have picked up on something in the sound of the wind or the creaking wood or glass to trigger his need to move: a prediction. From a security perspective it would be great if we could know how this works and seek to emulate it with software. I think Jeff's company Numenta is on the right path compared to the failed AI attempts of the past.
I suspect the two of you could have a very good conversation.
And lastly,...you should write another book.
I'm not an economist, but I'm pretty sure you're misrepresenting microeconomic theory when you quote the study about a sure gain/loss of $500 compared to a 50% gain/loss of $1000 in the "Prospect Theory" section. The fact that people chose differently when it's a gain, compared to when it's a loss, is not "unexplained" by economics.
When you said alternatives A and B have the same expected utility, you were making an assumption about the person's utility curve; namely, you were assuming it is linear. If you give me $20 instead of $10, I will be twice as happy.
Again, I am no expert, but I expect economists would assume a log curve, or some other strictly concave function, is a better approximation of a utility curve. It is certainly unlikely to be linear, and there's a simple example.
Suppose my personal savings are $100,000 and you give me a bet where I have 50% odds of losing, in which case I owe you $100,000, and 50% odds of winning, in which case you give me $200,000. According to your version of utility theory, I would accept the bet because the expected outcome is in my favor. But very few people would accept this bet, because losing the bet and having $0 is much, much worse than winning the bet and having $300,000. To appropriately calculate the expected utility, you need to weigh the outcomes with the utility of that outcome.
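The commenter's point can be sketched numerically. This is an illustrative log-utility calculation only, using the figures from the example above, not a claim about any particular economic model:

```python
import math

wealth = 100_000  # current savings, from the example above

# Linear utility: the expected *dollar* outcome of the 50/50 bet.
# Lose: wealth drops to 0; win: wealth rises to 300,000.
expected_dollars = 0.5 * 0 + 0.5 * 300_000  # 150,000, which beats 100,000

# Logarithmic utility: u(w) = ln(w). Since ln(0) is -infinity, any
# bet carrying a chance of total ruin has expected utility -infinity,
# no matter how large the upside.
def log_utility(w):
    return math.log(w) if w > 0 else float("-inf")

eu_take_bet = 0.5 * log_utility(0) + 0.5 * log_utility(300_000)
eu_decline = log_utility(wealth)

print(expected_dollars > wealth)  # True: the bet wins on expected dollars
print(eu_decline > eu_take_bet)   # True: but log utility says decline it
```

The same concave curve reproduces the insurance behavior discussed next: a small certain loss (the premium) costs less utility than a small chance of a ruinous one.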
It's the same reason we pay money for insurance. Insurance companies make a profit, so we know it's a bad bet. But paying out-of-pocket for a new car after a wreck decreases your utility much more than making smaller monthly payments.
Hence, we don't expect a gain and a loss of the same amount to increase and decrease utility by the same amount. Of course, economics doesn't know exactly what utility curve people have; that is impossible to know. People have different curves, just as people have individual reactions to risk (which is also included in conventional micro theory).
The "prospect theory" you mention seems out of place. The conclusions are logical, but my point is that I don't think they are anything new. Before mentioning it in a book, I would consult with a trusted microeconomist to clarify whether it really is a new theory that explains otherwise irrational behavior, or whether it can be explained by conventional theory.
"The feeling and reality of security are certainly related to each other, but they're just as certainly not the same as each other. We'd probably be better off if we had two different words for them."
I would say that we already do: "Security", which relates to the mathematical/statistical side, and "Insecurity" (in its noun form), which relates to the emotional side.
I agree with Richard Braakman's analysis of the Linda example. I think it tells us more about the subjects' ability to comprehend lists of statements than their ability to estimate probabilities. Most of the statements listed do appear mutually exclusive, so it would be very easy to infer that "Linda is a bank teller" was also supposed to be a mutually exclusive option. The reader would thus interpret it as "Linda is a bank teller and not an active feminist."
This seems to be a moderately common fault in psychological tests: the psychologist assumes that the subject's understanding of the question is the same as the psychologist's own.
Great idea for a book, Bruce. A chapter of interest to security folks would be a discussion of how the perception of risk (often due to litigation) impacts business decisions. Businesses will invest significant dollars because a jury would perceive a risk that, in reality, was minimal. Because the jury is convinced the risk was significant, they'll award large judgments in cases where statistically it was not a "foreseeable risk" to the business. Just a thought.
Very good data about heuristics and trade-offs, although as a theist, I don't buy the evolutionary paradigm. Anyway, I would appreciate your adding how the research examples illustrate typical security scenarios.
I think class distinction plays into many people's attitude toward risks. In a suburban neighborhood with good schools where people argue about the neighbor's leaves blowing onto their lawns, many residents think they are ``above'' petty car vandalism, so when it happens, it's just that one guy in that one house that looks different from the other houses, must be him that's doing it. If only we could catch him in the act, it'd stop. Anyway, it's not the sort of thing that happens in _our_ neighborhood, so it's like it's not really happening. It's just an anomaly, and doesn't change the risk analysis. It's still so much safer here. It's just that one guy.
In NYC a lot of people learn to drive as adults, so they are quite bad at it and have strange ideas about what's appropriate. They are constantly breaking off mirrors, banging into bumpers while parallel parking, tailgating and stopping suddenly and causing 4-car pileups at 30mph. Trucks too wide for the street smash into parked cars. And this seems to cost more to fix and happen many times more often than the guy who smashed my friend's window to grab his CD's. It's my problem to worry about it, so I worry about the hit-and-run trucks and the horrible drivers. but ask an outsider from the other side of the pseudo-class divide, ``OMG you took your suburban car to The City? Wow, I hope you parked in a garage. It's not _safe_ there---those people, they're not like us. They'll just take a baseball bat and smash every car window on the block, because they're angry at you for having a car, that's why they do it!'' The wildly-inflated risk analysis is really just disguised class arrogance. It has much more to do with pride than fear.
It works the other way, too. In Manila, bombs go off in the mall or on the elevated train every year or two, and people die, and people get angry, maybe even call the bombers ``animals,'' whatever, but since most don't consider themselves ``too good'' to put up with this risk, it's analyzed next to murder/rape/kidnapping/traffic-accident, and the mall and the train remain crowded, relatively happy places. But if a bomb goes off in the U.S. (as long as it wasn't set by white supremacists), the fucking sky is falling and ``I thought stuff like that only happened in third-world countries, not here in the Greatest Nation [blah blah]. Suddenly we're not invulnerable!'' and officeladies in sneakers are weeping loudly in the streets. I think their pride was injured as much as their feeling of safety.
"The lottery is not an investment -- it is entertainment, a fee for the pleasure of sitting around discussing what you will do if you win. People pay a dollar into their workplace pool so they can then chat about it, not because they think it is a good investment."
Here's another way of looking at paying into your workplace's lottery pool: insurance. If all your colleagues became millionaires and you didn't, not only would it be emotionally painful, you'd end up having to do all their work after they quit as well.
Of course the smarter approach would probably be to get a real insurance company to insure you against that risk. The UK national lottery pays out something like 50% in prizes. Your insurance company might only be making around a 10% margin on premiums though, meaning they pay out 90% in "prizes" - so the insurance premium ought to be cheaper than buying the lottery ticket.
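Using the commenter's hypothetical payout rates (roughly 50% for the lottery, 90% for an insurer running a 10% margin), the cost of a unit of expected payout through each channel can be compared directly:

```python
# Hypothetical figures from the comment above: the UK lottery returns
# about 50% of ticket money as prizes; an insurer making a 10% margin
# returns about 90% of premiums as payouts.
lottery_payout_rate = 0.50
insurer_payout_rate = 0.90

# Cost of buying 1 unit of *expected* payout through each channel.
lottery_cost = 1.0 / lottery_payout_rate    # 2.00 per unit of coverage
insurance_cost = 1.0 / insurer_payout_rate  # about 1.11 per unit

print(lottery_cost, round(insurance_cost, 2))  # 2.0 1.11
```

Under these assumed rates, the insurance premium would indeed be close to half the price of the lottery ticket for the same expected coverage.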
When you say "9/11 terrorists killed 2,973 people in one non-repeated incident", I think the reason we spend billions on it is because we believe it *would* be repeated otherwise. Some of us think something like it will anyway, even with the "security" that's been put in place. The cost of security is based on the perception of *future* risk. For ongoing threats, statistics are a valid approach, but since the 9/11 event changed that particular game calling it "one non-repeated incident" underplays the threat.
In our case (the US), you ignored half of the security equation: how authorities act and react.
Much of Bush's security has been for show (as widely shown in your columns): not so much to make people feel secure as to show that THEY are doing something.
Psychologically, the degree of over-reaction almost always mirrors the degree of culpability. How much of this administration's unwise, overblown, and wildly cost-ineffective security policy has been driven by this?
What the Bush administration is doing on my behalf far outweighs any of my personal security decisions and, on balance, should be addressed more, if not equally.
One curious and very common human trait that I didn't see considered in your paper but that would fit in very nicely is the tendency people have to attribute foresight and good judgment--even brilliance--to someone whose decision resulted in a positive outcome, even if it did so by running a risk that would be considered reckless had the outcome been different.
Great essay, love it. All the examples and the way that it was written.
There is one more phenomenon that I have observed around security that I did not see represented in the essay.
When the risk is perceived as big enough (and the media has a lot to do with this), people want more security, but they want to delegate it rather than make themselves responsible for it. Part of it is that the society we live in is a consumer society, and security becomes a product rather than a duty or a responsibility. As such, it tends to be delegated to a provider (read: government, IT, third-party vendor, etc.).
It is as if people are scared of opening the can of worms and understanding the worms, and so they become subject to manipulation. This has become very visible with insurance companies (for example, the "Safe Sanctuary" program for churches).
And I agree with the posting that said that some of us have a natural allergy to being manipulated...
I bet you enjoy the TV show
"Deal or No Deal"
In the experiment where people were asked if they would go to a play after either attending a $50 basketball game or paying a $50 parking ticket, I would like to point out that time spent on the previous activity may have as much to do with the fact that fewer basketball attendees went to the play as which mental monetary budget paid for the prior event. The parking ticket will cost a little time to write the check and to mail it, but the basketball game cost a couple of hours at least, not to mention time driving to and from the event, looking for and paying for parking, etc. In this regard, the play is similar to the basketball game in the amount of time that would need to be spent and aggravation suffered. If the person's entertainment time budget was mostly spent on the game, that person may not feel he has enough time for other entertainments that week.
I think what is missing here is (as in most discussions of 'market value') is the question of effect over time. The stock market is not divorced from the rest of the economy and over time tends to adjust to that (slowly or dramatically). I think the same could be of interest regarding how humans place value on security.
"The essay's discussion of the endowment effect was very interesting. How big an impact does this effect have in something like the stock market? If the sellers' estimate of something's value tends to be higher than the buyers' estimate, which one does the actual market value end up closer to? My guess is that the market value leans towards the buyers' estimate if there are more buyers, and the sellers' estimate if there are more sellers. But I know next to nothing about economics, and eagerly await someone knowledgeable in the field to tell me what really happens.
Posted by: Benny at February 6, 2007 02:46 PM"
But what is the individual, and what affects its behaviour? In fact, one could say that it happens at the genetic level, as it is this that is passed on. Thus sexual behaviour, a very social activity, becomes determinate. (Richard Dawkins is good on this.)
"sounds like security trade-offs or evolution "for the good of the species" the popular yet inaccurate meme from TV nature programs. Correct me if I'm wrong but doesn't practically all (natural) selection happen at the level of the individual, as your example illustrates?
Posted by: k at February 6, 2007 02:47 PM"
Just a bit of clarity on evolution, which doesn't really apply equally to humans anymore.
The rabbit that makes the wrong choice and gets eaten only gets factored out if he hasn't already reproduced. If young rabbits make the same mistake, the stupid decisions fall out of the species over time. But if rabbits had more of a society, and smart rabbits told stupid rabbits what to do, you would still keep stupid rabbits in the mix.
I suppose the point I'm heading towards is that humans do many things to ensure people keep living and reproducing. I'm not so sure traditional evolution works on the same level for us as it does for our rabbit friends.
"It’s just that the environment of New York in 2007 is different from Kenya circa 100,000 BC."
I think it would be instructive to highlight the dangers that Kenya man faced, how his brain became wired to survive, and how those adaptations affect our current perceptions.
Lions and Tigers and Crocs...Oh, My!
/// I remember in the weeks after 9/11, a reporter asked me: "How can we prevent this from ever happening again?" "That's easy," I said, "simply ground all the aircraft." ///
I wish you could keep such references out of your essay. 9/11 was clearly an inside job.
Thus, the psychology and security in the 9/11 case is no different than with those who collect protection money (and burn down businesses periodically to illustrate the threat).
Important work. Dismissing something as "security theater" is possibly to underestimate the importance of theater.
But the current version of the essay is too boring. Maybe because I have read and thought about much of this before.
I am more interested in the consequences of understanding how humans handle risk. Reading your essay I found myself skimming and then not reading. I went back, read more, then started skimming again. I think this essay is important homework behind something more readable. (Note, I have read and enjoyed several of your books, and I like most of Cryptogram, so I know I am not always agin you.)
Side note on rabbits and running vs. eating: I suspect their communal strategy is to alternate. They mostly eat, and sitting still are pretty well camouflaged, but occasionally they run, flashing their white butts up and down, pretty much inviting chase, then they stop again, nearly disappearing. And maybe another runs a bit. A predator could easily chase one, then another, then a third, never catching any but getting pretty darn tired. Being in motion attracts a lot of attention. Adding a white flash apparently benefits the group more than it risks the individual.
You should not use "exact same," as you are repeating yourself. Use one or the other instead of both.
Referring to the comparison of spending for a non-repeated 9/11 vs food poisoning -- It seems to me that you are saying that the terrorism spending is not contributing to the lack of repetition. If only $1.9 billion were spent on terrorism, the numbers might reflect different ratios. My guess is that spending tens of billions on food poisoning would not reduce the incidence to zero. I am not convinced you are comparing apples to apples. Even so, enjoyed the essay.
I really enjoy your books and your articles. Glad you do what you do.
When I first read in your article that "feelings" of security (as against real security) might be useful in some contexts, I bristled. Still do. To me, it seems axiomatically, tautologically incorrect. I feel very ill toward the notion that illusions of security can be good for anyone.
I understand what you're saying when you say that an illusion of security might trick people into a more accurate estimate of their actual security in certain contexts. But something is still nagging at me, telling me that resorting to such manipulation of perceptions is fundamentally unsafe.
It seems to me that there's no acceptable alternative to aiming for authentic accuracy in one's own estimation of risk. If we get used to leading people toward accurate estimations of risk by resorting to trickery, by exploiting holes in their rationality, that means that those who control or influence the media and other sources of information will eventually have the power to lead the general public to any conclusion they might want, correct or incorrect.
I.e., trying to fool people into correct conclusions seems to me to be the same thing as encouraging people to believe things without knowing why.
In this way, the validity of a given risk estimate wouldn't be coming from thought and facts that can be looked at, pinned down, and analyzed -- it's being produced from inside of a person's own cognitive blind spot, where it can't be independently verified and relied on.
I don't like the idea of a public that is conditioned to permit themselves to be manipulated this way.
Most people in this world are already very much like sheep. Encouraging them to feel good about risk X or risk Y via the manipulation of their gullibility and ignorance, means that they'll be in a much worse situation -- bred to feel safe for reasons that they can't know on their own.
So, I like your term "security theatre" -- it names something that is important to notice and be aware of. But I don't like the thing that it stands for, and -- if connotations could be voted for -- I'd vote for the term to be considered pejorative permanently and in all contexts.
P.S. (in response to another comment): There's nothing wrong with saying "exact same." It is a form of emphasis which permits the author to counter a possible misinterpretation: sometimes, "same" is used loosely; saying "exact same" clarifies the author's thought by removing any possibility of misinterpretation in a case where it is important to get it right.
The risk thermostat has a well-supported example, better I think than seat belts: anti-lock brakes have been shown to NOT provide a better loss experience for insurance companies, and therefore are NOT a justification for a 'good driver' type discount.
That's an interesting essay.
I think you also need to look at what happens when people make (security and other) decisions for others as opposed to themselves. Intuition suggests that we are more risk-averse with the fortunes of groups than we are for ourselves, but I wonder if research backs this up.
I have noticed with IT security that the more knowledgeable a person is about the methods of attack, the more risk is perceived. Those who attend Black Hat conferences come back more convinced that more drastic security measures need to be deployed. This tendency is mentioned in the essay in light of media exposure, but could have an IT example instead.
IT risks from inside the organization are generally perceived as less threatening than those from outside although I believe that attacks from inside are more common.
It would be a huge task, but one may be able to use these concepts to analyze real-world security spending to determine a) is it rational (ha!), b) is it poorly done but not biased? or c) are the people creating these security ideas deliberately biasing security decisions in their favor? That is, are the ideas being implemented just stupid or is there actual malice in them?
"Of the first group of subjects, 85% responded that Linda more resembled a stereotypical feminist bank teller more than a bank teller. This makes sense. But of the second group of subjects, 89% of thought Linda was more likely to be a feminist bank teller than a bank teller. Mathematically, of course, this is ridiculous. It is impossible for the second alternative to be more likely than the first; the second is a subset of the first."
The last sentence certainly looks backwards. The reader has to go back to the previous paragraph (not copied here) to see what you mean.
Keep the order of the alternatives the same throughout (instead of switching them in the second paragraph).
I have a slight issue with the mentioned results of the experiment involving people's perception of words starting with 'k' or having 'k' as the third letter. I haven't read the cited paper, but in /usr/share/dict/words on my workstation, there are:
* 2785 words with 'k' as the third letter
* 46 words with 'K' as the third letter
* 3856 words with 'k' as the first letter
* 2633 words with 'K' as the first letter
This suggests that there are over twice as many words with 'k' as the first letter than the third, and that people's guess on the matter is, in fact, accurate.
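The counting logic behind the commenter's check can be sketched as below. The sample list is made up for illustration; the actual counts quoted above depend on which word list is installed on a given system:

```python
def count_k_positions(words):
    """Count words starting with 'k' vs. words with 'k' as third letter."""
    first = sum(1 for w in words if w.lower().startswith("k"))
    third = sum(1 for w in words if len(w) >= 3 and w[2].lower() == "k")
    return first, third

# Tiny made-up sample; to reproduce the counts quoted above, pass in
# the lines of /usr/share/dict/words instead (contents vary by system).
sample = ["kite", "king", "knee", "ask", "awkward", "lake", "oak", "book"]
print(count_k_positions(sample))  # (3, 4)
```

Lower-casing each word folds the separate 'k'/'K' tallies quoted above into one count per position.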
Many years ago there occurred to me a thought experiment which bears on the tradeoff between risk and reward. Suppose that I offer you the following bet: You will flip a coin three times, betting on the outcome. I will pay you 2:1 odds (instead of the fair 1:1 odds) on each toss for which you correctly predict the outcome, and you may bet any amount of money that you can actually come up with - your entire net worth if you wish. (You may not bet with borrowed money unless you have adequate collateral.) Clearly this is a great bet; and you would be ill-advised not to take me up on it. But how should you manage your money?
To maximize your mathematical expectation, the answer is simple: Bet all you've got on each flip. Unfortunately, this also leads to a 7/8 chance of your winding up broke. As a result, few rational folks would do it this way. At first this seemed rather paradoxical to me.
When I have posed the question to a number of (fairly perceptive) folks, the typical response is that they would wager about a third to half of their net worth on each toss. From this, I have inferred that, in the net worth context, the psychological "value" function for money is not linear, but more nearly logarithmic. With such a valuation, the possibility of winding up completely broke is then no longer acceptable. (Actually, when I calculate the fraction of one's net worth that would maximize the expected logarithm of the result on a single toss, I get that one should risk about 2/3 of his net worth.)
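For what it's worth, the log-utility calculation can be checked numerically. Under the usual reading of 2:1 odds (a win pays twice the stake), maximizing the expected logarithm of wealth gives a fraction of 1/4, the classic Kelly result; a different payout assumption would shift the answer. A minimal sketch:

```python
import math

def expected_log_growth(f, b=2.0, p=0.5):
    """Expected log of wealth after one toss, betting fraction f at b:1 payout."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

# Grid search over fractions in [0, 1); the maximum is the Kelly fraction
# f* = (b*p - q) / b = (2*0.5 - 0.5) / 2 = 0.25 for a fair coin at 2:1.
best_f = max((i / 10000 for i in range(10000)), key=expected_log_growth)
```

Note that betting everything (f close to 1) drives the expected log toward minus infinity, which matches the 7/8-chance-of-broke observation above.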
I personally would appreciate a book on this topic.
I did notice that all or most of the examples are about individual choice in a given situation. While it is true that in the end every decision is an individual one, group pressure should also be considered in cases like these. It might also be a good idea to consider the influence of the press (in all its forms, including TV, radio, the internet, etc.) in forming and influencing group opinion, and the influence that group opinion can have on individual decision-making.
This is a great essay. It's the most interesting thing I've read by
Schneier, by about an order of magnitude. My thoughts on it are a bit scattered.
Most of the essay applies to safety as much as security. If security
engineers are going to study human vulnerabilities, they might start by
asking how those differ from human factors. How do people tell Satan
from Murphy, what happens when we confuse the two, how do our responses
to them differ, and how can Satan fool us into thinking he's Murphy?
Intelligent malice is very new in evolutionary terms. If our risk
perception is in beta testing, our malice perception must be pre-alpha.
We kick doors that jam our fingers. That's probably harmless. We also
take elaborate precautions against crime and terrorism, which distract
us from deterring or catching criminals and terrorists. Is that because
we see "crime" and "terrorism" as hazards instead of threats?
"The feeling and reality of security are certainly related to each other,
but they're just as certainly not the same as each other. We'd probably be
better off if we had two different words for them."
We do have them: comfort and safety. What can't be said with words
such as safety, danger, benevolence, malice, comfort, fear, reckless,
sensible, bold, and timid? The shades of meaning are there if you listen.
Fear is a feeling that bold people ignore; danger a reality ignored only
by the reckless.
Before September 11, most Americans believed it couldn't happen.
It did happen. I believe psychologists call this situation cognitive
dissonance: the classic example is when members of a millennial cult wake
up the day after the world was supposed to end. Delusional thinking and
desperate attempts to make reality conform to the original belief are
the usual consequences. This could be a powerful concept in security,
where attacks could almost be defined as things the defenders think
can't happen, but the attackers know can.
I'm a rock climber, and it's interesting to read your ideas about the
psychology of safety from that perspective. Climbing, like skydiving, is
largely about responding to fear. Apes evolved very effective climbing
instincts, which humans retain, but our evolved feelings about cliffs
are very simple: pure terror! Our risk heuristics are about as poorly
adapted to the vertical environment as they are to New York City.
On the other hand, people have been climbing mountains for a couple
of centuries. We've developed a culture and folklore around "head",
which is what we call the quirks Schneier discusses. These aren't at
all scientific, but they've been Darwinised enough to work pretty well.
Note that all this is about responding to danger. To find folklore about
malice, you might want to look at combat and military culture.
The best exposition of this folklore is "The Rock Warrior's Way", by Arno
Ilgner. His description of how we perceive and respond to fear is very
similar to Schneier's. His explanations of it involve Freud and Castaneda (I
said this was folklore), and Schneier has done us a service by replacing
them with Darwin and Skinner. Ilgner goes beyond Schneier by offering
prescriptions for climbers to respond to risk more objectively.
Schneier tacitly assumes that the amygdala and neocortex respond to
risk independently, so that our thoughts and feelings follow the same
heuristics whatever our bodies and emotions are doing. It isn't so.
Our thoughts tend to be slow, rational and objective when we're
comfortable, and quick, heuristic and subjective when we're terrified.
Our emotions don't like being overridden, and present delusional thoughts
and fixations to make us do what they want.
This has an important consequence: you can't objectively analyse risk
while you're afraid. You have to foresee the risks and make your decisions
while you're comfortable, and stick with what you've decided when you're afraid.
I'm out of time now, but I'll follow this discussion with interest.
I found this fascinating, but there's a problem. It lacks a coherent thesis, because too much time and energy goes into providing supporting examples for a very simple point (that humans don't think well about risk).
Perhaps those supporting examples should be trimmed (which would be a pity), or perhaps the main argument should be reemphasised and brought to bear within each of the examples. (I assume that the main argument is that security theater has a certain amount of use.)
I like your point; it reminds me of the point made by some economists: although the market can correct most problems, it has to do so within the structure of human institutions; so shaping the institutions will shape the market.
Does the article describe the truth about human nature? If it does, how to prove it?
The best way is probably the classic method for evaluating an economic model; a large group of people observe the model in action and agree with it.
The ideas presented should be easier to evaluate if re-arranged into more of an input/output model format. OTOH, maybe we're all just in the early fact finding stage on this topic...
As a side note, when discussing choosing between a sure loss of $500 and a risky loss of $1000, you make the claim that "classical economists would have predicted" that both were the same... But this isn't true. Classical economics has a non-linear demand curve, so it's pretty certain that one would be preferable to the other. This experiment is very revelatory, but doesn't reveal a hole per se.
The strength of the results even suggests that people have a limited tendency to automatically recast questions: the people being given money seem to be thinking of the gain in two steps, one in which they're given $500 (no choice) and one in which they risk that $500, double or nothing. The loss doesn't seem to show the same psychological strength, because they lose $500 with no choice, and then have the option to possibly gain it back (or double their loss).
It's an interesting experiment, but the results can be read in many different ways.
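The non-linear value function this commenter gestures at can be made concrete. A sketch using curvature and loss-aversion parameters of the kind Tversky and Kahneman later estimated (the specific constants here are illustrative assumptions, not part of the essay):

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory style value function; concave for gains, convex and
    steeper for losses. Constants are illustrative (T&K 1992 estimates)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

# Gains: a sure $500 beats a 50% chance at $1000 (risk aversion for gains)
sure_gain   = value(500)
gamble_gain = 0.5 * value(1000)

# Losses: a 50% chance of losing $1000 beats a sure $500 loss (risk seeking)
sure_loss   = value(-500)
gamble_loss = 0.5 * value(-1000)
```

Under this valuation the preference flips between the gain and loss frames, reproducing the experimental pattern without assuming people are simply confused.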
You really should expand upon this and write a book. I agree with Mr. Tanksley above where he says that the essay lacks a coherent thesis; perhaps the book should be a guide for information security professionals on the psychology of security.
Humans suck at assessing risks. Information security professionals are hamstrung when advising decision makers as a consequence. I would treasure a book that helped me do that part of my job better.
I emailed this back to Bruce first, but will post it here for everyone else too:
> We can also calculate how much more secure a burglar alarm will make
> your home, or how well a credit freeze will protect you from
> identity theft. Again, given enough data, it's easy.
Is it worth pointing out here that all your data can become invalid immediately in the face of significant change? An obvious case being an efficient mathematical attack against SHA1 or RSA in the computing world, but also consider the widespread publication that previously "secure" bicycle locks could be opened with the end of a BIC pen. [I said I was on the train without web access to find a link. Here's a link:]
> Second, when
> considering security gains, people are more likely to accept an
> incremental gain than a chance at a larger gain; but when
> considering security losses, they're more likely to risk a larger
> loss than accept a larger gain.
ITYM "smaller loss" here.
I suspect this is an instance of "you can't reliably proof-read text/audit code that you wrote yourself because your brain will read what you remember writing" [instead of what's there]. Another brain shortcut failing!
I have just a minor comment about a statement early in the essay:
> Every living thing makes security trade-offs, mostly as a species --
> evolving this way instead of that way
This statement makes it look like evolution is directed towards goals, which is precisely the sort of misconception scientists are trying to counter in the creationist / intelligent design debate. I think your example is probably OK, just this introductory sentence needs to change.
How about something like:
"Every living thing makes security trade-offs, and the results of these trade-offs influence how species evolve."
I have enjoyed the essay and will have some further comments later. I do feel that the economic law of diminishing returns should be mentioned.
You state that it is unlikely that one can provide perfect (or absolute) security. It may be good to mention that expenditure is asymptotic in nature: you can spend to infinity and never achieve perfect security, and for each additional dollar spent, the value of the additional controls diminishes.
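The asymptotic-spending point can be illustrated with a toy model in which residual risk decays exponentially with expenditure (the constants here are purely hypothetical assumptions):

```python
import math

def residual_risk(spend, risk0=1.0, k=0.001):
    """Toy model: residual risk decays exponentially with spending.
    risk0 and k are hypothetical constants chosen for illustration."""
    return risk0 * math.exp(-k * spend)

# Each successive $1000 buys less risk reduction than the one before,
# and the risk never reaches zero no matter how much is spent.
first_1000  = residual_risk(0) - residual_risk(1000)
second_1000 = residual_risk(1000) - residual_risk(2000)
```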
Further, a comment on the nature of self-interest would increase its value. Learning to shape people's self-interest in a manner that promotes security is more effective than fighting it. This is true both psychologically and economically.
Now to be picky, it should be 100,000 BCE; BC is no longer held to be correct due to social constructs, and in academic circles it has long been replaced.
I might also look at the impact of confidence intervals and the reporting of data. It is common, for example, to report political surveys as answers to "who do you think will win the election?"
A common example: the reported results state that Party A has 55% of the vote and Party B has 48%.
When adding confidence intervals, the same result may be that Party A has a 95% confidence interval of between 40% and 70% of the vote, and Party B a 95% confidence interval of between 41% and 55%.
When reported in the true statistical manner, it is clear that we cannot be certain who will win; but when reported (incorrectly) as we do in the popular media, it seems clear which party will win.
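For readers who want the mechanics: a minimal sketch of the normal-approximation confidence interval for a polled proportion (the sample size of 100 is an assumption for illustration, not taken from the comment above):

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a polled proportion."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# A reported 55% share from a hypothetical poll of 100 respondents:
lo, hi = proportion_ci(0.55, 100)
# The interval spans 50%, so the poll alone cannot tell us who will win.
```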
Very interesting article.
There is inconsistency in the literature.
To clear the ground at the start, I feel you should begin with definitions of risk, threat, and vulnerability and their relationship, e.g. risk is a function of vulnerability and threat.
I think many of the psychological experiments are problematic. The questions need to elicit some sort of emotional engagement, so that the subject is likely to give some consideration to the outcome. But these emotive questions are then largely devoid of context, so the subject is left to provide his own. For example, in the "Asian disease" problem, I framed the question as "there are 600 people" - nothing about anyone else. I picked Program A over B - because some people will live and the tribe (some of these 600) will survive. Program C just says that 400 people will die - nothing about the rest. So I picked Program D, which gives the tribe a chance of survival. The experimenters probably assumed an entirely different context. So it comes down to how I interpret the question and how I interpret the information available to me at least as much as whether the lizard part of my brain swamps the higher functions.
Which brings me to a related point (insufficient information), and one you have alluded to, though largely avoided making in your essay. This is that we are capable of making better decisions collectively than we are as individuals. But this requires that individuals trust the framework in which those decisions are made and the information they are based upon. In the absence of these, individuals will still make decisions. But those decisions will be based upon incomplete or untrusted information about the actual problem, and known information about the framework in which they are made. This is the subject of politics and not psychology.
[I'm posting an emailed comment. It looks like some other posters have already touched on some of this, but maybe this point of view is still of interest to some.] I've read through most (but not quite all) of your most recent cryptogram. A couple things stood out:
 It looks like you've made a tacit assumption, when describing security risks, that information about those risks is complete. And, it looks like people in general make the tacit assumption that information is incomplete.
This is a scope issue, as much as anything else. Put differently, sometimes it's good to take a risk for non-obvious reasons. [But, not always.]
Given a tacit assumption that information is incomplete, any change in risk could be the beginning of a trend. In other words, it's reasonable (at least biologically) to overinflate one's reaction to changes in risk. Then again, even evolutionarily speaking, you only want to do this some of the time, because of .
That said, I don't know if either of these points are helpful to you.
This is a groundbreaking topic: psychology and security, especially as it boils over into economics, sociology, neuroscience, statistics, evolution theory, hermeneutics, polling, etc. It raises a lot of questions, which I think confirms that it is groundbreaking.
First, a concern: I think some assumptions get blurred when these diverse fields of study are harvested to yield unqualified statements, especially when those statements are applied across the fields. For example, evolution theorists may premise that human nature is changing over the course of 100,000 years. In contrast, psychologists and sociologists may premise a relatively static human nature. Psychology may premise a human nature that is the same across statistical samples and even across articles using diverse sample populations, written since psychologists began publishing. In another example, psychology admits to anxiety, obsession, acts of perception, feelings of love/hate/etc., quasi-rational thoughts/behaviors, human interaction/intimacy for which there is no limit and for which principles of supply and demand do not always apply. In contrast, economics focuses on economic exchanges and the principles relating to supply and demand. Some statisticians have all of the facts in view (for example, every trade on the NYSE may be known) whereas other statisticians measure only a small sampling of facts, and these facts may be measured only indirectly. I believe this tends towards different fields relying on different assumptions, and using different standards for the admission of "facts". Philosophers of science still argue over differing standards for the admission of facts, and the problems of factual admissibility related to "theory-laden observation".
An illustration of assumptions that differ across the fields cited in this essay may help. Consider this slogan from investing: "Past performance is no indicator of future results." Is this true? Is this true across economics? What if this were applied in psychology? Does this apply to evolution theory? Does this slogan apply to fear of flying, fear of terrorism, optimism about the lottery, voter psychology, etc.? Does it apply to risk management in general?
Wouldn't unqualified acceptance of this statement challenge many truths in the natural sciences, e.g. the effect of gravity, conservation of mass, observations made by Copernicus, etc.? Is there a categorical difference between the natural sciences and these: security, economics, psychology, etc.? Was David Hume right about induction? Was Edmund Husserl right about the "crisis"? In any case, care should be taken to harvest an observation made in one field and apply it to another field, especially with respect to the assumptions relating to the standard of admissibility for facts, the directness of observations, and the methods used to observe the subject matter.
Second, are the statistics of security the "reality" of security? What happens when there are numerous conflicting statistical predictions, as is the case in our culture when economists predict that a security or sector will "outperform"? Following the similarity of economics and security proposed by Ross Anderson, is the cacophony of any given week's economic predictions the "reality" of the economy? Restating the investment slogan above, is a statistical prediction based on historical data likely to predict accurately? Doesn't our predictive power depend a lot on our ability to interpret historical facts, to prioritize some and cast out others? Is a math model (or statistical model) just one chosen method of interpretation among many, or do some math/statistical models yield reality itself?
So I think I'm arguing more for the first statement about reality in this pair:
(1) "Security is both a feeling and a reality. And they're not the same."
(2) "The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures."
However, to qualify statement (1), it seems to me that both the mathematical probabilities and the feelings relating to security are the same in this way: they are both human interpretations. I believe that the methods used to arrive at these interpretations should be the topic of discussion, and that pronouncements of "reality" are not helpful.
A stimulating essay, but something I'd need to read through a couple more times before I get my head round the implications.
You write: "One, our perceptions of risk are deeply ingrained in our brains, the result of millions of years of evolution. And two, our perceptions of risk are generally pretty good, and are what have kept us alive and reproducing during those millions of years of evolution."
Or is the actual perception created in our very early childhood, when we're unable to apply logic? From that point, we'll always tend to react in the same way, even if we "know" we shouldn't, just like your example from Steven Johnson about wind and windows.
This reminds me of the Watson experiment on a young child, Albert B, that showed how early experiences stay with us. When Albert B was introduced to a white rat and a rabbit, he was not scared of them at all. Then, Albert B was given the white rat while Watson struck a metal bar with a hammer behind his back. Albert B then developed a fear not just of the white rat but also of the rabbit and other similar objects. This experiment can be referenced at http://psychclassics.yorku.ca/Watson/emotion.htm.
Just reading through, I think you may have made an error here, as the answers from both groups appear the same and therefore don't support your conclusion (or I may have misread it): "Of the first group of subjects, 85% responded that Linda resembled a stereotypical feminist bank teller more than a bank teller. This makes sense. But of the second group of subjects, 89% thought Linda was more likely to be a feminist bank teller than a bank teller.
Mathematically, of course, this is ridiculous. It is impossible for the
second alternative to be more likely than the first; the second is a subset of the first."
A minor instance of Security Theater:
The product I have in mind is Thayer's Slippery Elm Throat Lozenges, which probably date back to the late 1800s and are a godsend to singers. Before the Tylenol scare, they were packaged in a folded paper wrapper inside a traditional cardboard box, with the usual locking flaps. The box had flaps both top and bottom. After the scare, the company added a transparent, printed adhesive plastic sticker to the box, but on the top flap only!
Only recently did they change to a whole-box sleeve of clear, unprinted heatshrink tubing; it wouldn't be difficult for somebody familiar with locating industrial products to purchase a supply.
> In the past I've criticized palliative security
> measures that only make people feel more
> secure as "security theater." But used
> correctly, they can be a way of raising our
> feeling of security to more closely match the
> reality of security.
While I also think that "security theater" has the kinds of costs you mentioned attached to it, my main issues are slightly different:
First, by constantly reminding people of these highly unlikely risks, it makes them feel _more_ scared instead of less, contributing to the divergence between real and perceived risks. While I agree that security theater can help in the short term, in the long term it only keeps the irrational fear of the risk alive.
Second, these purely virtual risks tend to accumulate. If security theater is everywhere, the perceived total risk is the sum of all _perceived_ individual risks instead of the sum of all _real_ individual risks.
Last, it also doesn't help that these effects tend to be subconscious, making them hard to avoid and creating a fuzzy feeling of being threatened. In turn, this feeling makes it easier to justify further, probably more drastic, security theater with all its attached costs regarding civil liberties, money, time and lifestyle.
At least in the case when the risk is one of death, I have wondered if we don't also exaggerate risks that would come with a period of time knowing that we are about to die. My intuition is that one of the factors that adds to my instinctive (as opposed to rational) fear of, say, a plane hijacking is the fear of having to face minutes or hours where I know that my chances of survival are very slim. Maybe this is a legitimate assessment of the mental pain associated with this experience, but I feel like it's an exaggeration of the risk.
It certainly is true that there is a difference between the fact of security and the perception of security. In this essay, as well in other places where you have discussed this issue before, however, I feel that some things have been overlooked in the effort to emphasize this point. (Actually, this observation also applies to writings on this subject by others.)
To begin with, the perception of security can sometimes provide security. The classical example is: try to walk on a board which is held four inches above a sidewalk by a couple of bricks without falling off. If you can do that, does it automatically follow that you can do the same thing if the board is strung between the 25th floors of two adjacent buildings?
Fear is a great inducer of irrational behavior, and irrational behavior can cause injury, so reducing the level of fear in itself increases security.
Another factor that hasn't been addressed is one that will impinge on the next point I will discuss. Insurance rates aren't just based on actuarial forecasts for one's group, plus a percentage of profit for the insurance company. There is also something called "adverse selection": since people at an elevated risk for reasons the insurance company cannot measure are more likely to purchase insurance than those with lower risk, this, too, has to be reflected in premiums.
When making security decisions in situations that involve security against other people, instead of security against threats coming from forces of nature, one has to consider how one's choice of precautions will affect their behavior.
Thus, it isn't irrational to respond to the events of September 11, 2001 on the basis that those who were responsible for them would be willing to, and might have the opportunity to, mount an attack that would leave millions dead instead of thousands dead. Estimates of risk from terrorism can't simply be calculated from past experience in such a case.
One point in your essay that I have seen before from other authors is one that I wish to particularly single out for criticism.
We tend to overestimate the risk from PCBs dumped in our rivers from factories, and we tend to underestimate the risk of a skiing trip. Is that so?
I think you may be right that we are making a false estimate for psychological reasons, but the real issue isn't an estimate of risk.
Basically, if I take a skiing trip, I have subjected myself to a risk. That's my own business.
If someone puts carbon monoxide in the air I breathe without my permission, his act is essentially similar to putting arsenic in a cup of coffee I am about to drink. Releasing pollutants in the environment constitutes an act of assault against every individual who is subjected to a risk, however insignificant, as a result of that act.
And we would be quite rational to fear the consequences if murder were made legal, or if the police disappeared, and anarchy were to reign.
One way to keep people off the edge of panic about environmental problems would be to make it explicit that every factory that emits pollution does so by special permission, and it does so because that factory's existence serves a public good.
Because the real problem is not an overestimate of risk. Yes, the risk is estimated as being high because people suspect that Big Business has too much influence with government, so they fear existing pollution controls are a sham. If Big Business didn't have a lot of influence on the government, would we have a DMCA?
The real problem is an underestimate of cost. It is assumed that pollution problems are caused by a few greedy businessmen cutting corners to put a few extra dollars in their pockets, so putting a complete and absolute stop to this sort of nonsense would affect no one but these wicked and greedy men.
If all heavy industrial activity in the United States were transferred offshore, and if motor vehicles were banned, the costs would actually be severe, and affect everyone.
There are other aspects of your division between rational and irrational estimates of risk that I have no quarrel with. It's unfortunate that nuclear power is the victim of an exaggerated perception of its risk due to its novelty, particularly when it is badly needed to address both energy shortages and the potential danger of global warming.
But in general I think that your analysis lacks nuance: some of the psychological factors that are 'irrational' when compared with risk statistics are still rational for other reasons.
Security measures, or the lack of them, can influence the behavior of others.
It isn't just a matter of risk versus cost; it's a matter of who pays the cost, and whether there is a legitimate right to impose or tolerate the risk.
>Does the article describe the truth about human nature? If it does, how to prove it? The best way is probably the classic method for evaluating an economic model; a large group of people observe the model in action and agree with it.
I've never heard of that test before, but it seems dubious to call it "proof". It reminds me of the following line of reasoning, which I excerpt:
Exactly. So, logically...
If... she... weighs... the same as a duck,... she's made of wood.
Or in a less Pythonesque vein, consider the geocentric model of the Sun, Moon, planets, and stars orbiting the Earth. This had a large group of people observing the model in action and agreeing with it, for quite a long time.
My question is this: Did this extensive observation and agreement affect the correctness of the model or "prove" its "truth" in any way, or did it simply show what the beliefs were?
Fascinating read. Here are the comments I jotted down while reading it, in no particular order:
- What effect does religious faith have on risk assessment?
- How do professional gamblers fare at risk assessment? Compulsive ones?
- What are the differences in risk assessment between genders? Age groups?
- What can be done at the individual level to improve one's risk-assessment skills?
- How about a Security Theater "baloney detector", in the spirit of the late Carl Sagan's? (http://www.xenu.net/archive/baloney_detection.html)
Come to think of it, isn't detecting bad security theater a proper subset of detecting "baloney" in general?
I've just got around to reading this for the first time, and merely skimmed the comments, so this is a first impression and I apologise if I repeat what someone else has said.
Firstly, if you were one of my students I'd advise you that you have done a great report, but a poor essay! You don't seem to have a unifying theme, nor is there a clear structure drawing the reader through. It is strong on facts, but weak on analysis.
Secondly, I have no idea who this is aimed at - the assumed knowledge level of the reader varies wildly. You need to address this in your own mind.
Thirdly, I think this is way too big a topic for an essay - either you need to cut out some of the examples to present an overview, deal with just one element in depth, or (preferably, IMO), admit that this is an entire book project.
You have suggested some of your own criticisms in the essay, such as the number of studies done on college students. These cannot be representative of the wider population, and therefore have limited reliability. I wonder if you can get the bean-counters at BT to spring some cash for properly constructed research in a lot of the areas you've identified!!
I'll probably have more comments after a second read tomorrow.
Imagine the following scenario:
In 2001, Homeland Security analysis rightly concludes that the TSA is not cost-effective for any reasonable estimate of another airline hijacking with comparable casualties and physical damage, and does not implement it; yet another attack eventually occurs, which might (very, very low probability) have been prevented by the actually existing TSA methodology.
An email trail exists, warning about such a future attack and the possibility that a TSA would have prevented it.
As we contemplate the state of the airline industry under the ownership of the trial lawyers, we need to remember that "costs" is an ill-defined term and some forms of "security" may in reality be litigation insurance.
Thanks Bruce, I have a pitch to make on Wednesday and this information will be *very* useful ;-)
Two random comments:
1. One way to emphasize how ingrained some of these principles are in our minds and brains (and how unlikely they are to have arisen from the peculiarities of human cultures) is to note that they are also ingrained in the minds and brains of other animals. For example, even nonhuman primates show loss aversion in trading games. This work is new (and still mostly unpublished) and hasn't yet filtered into popular-press books, but see, for example:
Chen, M. K., Lakshminaryanan, V., & Santos, L. R. (2006). The evolution of our preferences: Evidence from capuchin monkey trading behavior. Journal of Political Economy, 114, 517-537.
2. One recent concrete illustration of how our evolutionarily ancient intuitions can actually mess us up now concerns the case mentioned here, about fear of planes vs. cars. In fact, following 9/11, such fears actually led to *more* deaths overall, since far more people were on the road. See:
Gigerenzer, G. (2004). Dread risk, September 11, and fatal traffic accidents. Psychological Science, 15, 286-287.
I think your dissertation on “security is a trade-off” is mostly on target, but I tend to disagree with some of your comments. I did not have the time to read the whole thing, as it was quite long, but here are my comments on what I read.
Security is not as much of a feeling as the absence of a feeling.
It can be due to ignorance, innocence, lack of conscience, or self confidence.
People that feel secure don’t even think about it, the paranoid do, those with security jobs do, and those in the business of insecurity do.
I’m not sure that one's perception of what is “irrational” holds much weight with the paranoid. Paranoia may be the rational method of coping with a situation. Who decides what is irrational?
If your neighbor is shot and killed by a prowler, you will buy the best alarm system with the fastest response, ask for police patrols, and be paranoid until the prowler is caught.
Is this irrational?
If the prowler was caught, and in jail, you would make a mental evaluation as to the likelihood of another prowler in your neighborhood. Depending on how good you felt about your neighborhood, you might put security signs in your window, and then go on with your life.
Is cost the only justification on what makes a decision irrational?
Look at all the families that decide to keep their loved ones alive in hospitals and at great expense, when they can no longer contribute to society. Is that irrational?
"The other comes from a chemical company. Most people would choose Oprah's, even though they have no facts at all about what's in either glass."
Very probably, neither does Oprah.
Thanks for many useful and enjoyable reads!
It's good, Bruce, but it needs aggressive editing. You're wandering around, reaching the point that was fermenting in your mind as you wrote the preceding stream of consciousness... then you score. Good: now go back and eliminate the stream, leaving only the score. Take a printed copy, a Sharpie, and a very strong beverage of your choice, and attack those sentences.
Thanks for the interesting essay. Most people are quite bad at statistics and its use in decision making. There is a ton of material in that area in the field of Operations Research. A classical example which baffles many is described in:
--see the example with blue and green taxis.
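For readers who haven't seen it, the taxi problem is a classic base-rate exercise; the figures below are the commonly quoted ones (an assumption here, since the reference above is not given), not necessarily those in the commenter's source. A city's cabs are 85% green and 15% blue; a witness identifies a hit-and-run cab as blue, and testing shows the witness is right 80% of the time. Bayes' theorem gives the posterior:

```python
# Base-rate (taxi) problem: how likely is it the cab really was blue,
# given a witness who says "blue" and is correct 80% of the time?
p_blue = 0.15               # prior: fraction of cabs that are blue
p_green = 0.85              # prior: fraction of cabs that are green
p_say_blue_if_blue = 0.80   # witness correctly identifies a blue cab
p_say_blue_if_green = 0.20  # witness mistakes a green cab for blue

# Bayes' theorem: P(blue | says blue)
posterior = (p_say_blue_if_blue * p_blue) / (
    p_say_blue_if_blue * p_blue + p_say_blue_if_green * p_green
)
print(round(posterior, 3))  # ~0.414: despite the "80% reliable" witness,
                            # the cab is still more likely to be green
```

The counterintuitive part is exactly what baffles people: most intuitions track the 80% witness reliability and ignore the 85/15 base rate.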
I can't be bothered to read all the responses so apologies in advance if this has been nitpicked before but...
Re your airplanes vs cars comment: is it possible that the relatively higher safety of airplane travel has been brought about by a pre-existing distorted public perception of airplane travel risk?
On the area of counting 1,2,3, many, there's a really interesting bit on that in The Velocity of Honey by Jay Ingram (ISBN 1560256540)
In particular, he talks about research covering animals and their ability to count (between 1,2,many and 1,2,3,4,5,6,7,many), and how long it takes us to perform mental number functions if the number is above about 4.
What jumped out at me in the paper is that when talking about the psychology of security we must take into account the environment the decision is being made in. The subjects in the studies cited were doing one thing during the studies, and that was following instructions. Most security decisions are made as one of a hundred decisions made during the day. The person making the security decision may rank the importance of the outcome as no more critical than what to have for lunch. I'm always amazed at how long it takes some people to make up their mind about what to eat. Hell, you get three chances a day.
Few people ever face a fight-or-flight decision these days. I believe many decisions are made based on irrelevant concerns, such as: what will others think of my decision? Will they think I'm scared or don't care enough? Will this cause me to get fired, or get a promotion? Also, in the last 60 years or so, here in the US, we've been led to believe that the folks in Washington have security under control. Look at all the laws we have for protecting ourselves from ourselves, from our highways to our drugs, the food we eat, monetary controls, etc. I really believe most people in the US don't feel threatened by much of anything.
One final observation: the average Joe is not going to read this paper or really care about security until he's woken from his zombie state. Most of the real cheerleaders for a cause are those who went through some life-changing event and decided to do something. As the paper mentions, everyone else is just taking up space, assuming the worst will happen to someone else.
I ran across this 1994 Ethology and Sociobiology journal article (http://www-personal.umich.edu/~nesse/Articles/Fear&Fitness-Ethol&Sociobiol-1994.PDF) following the twisted strands of the web, and it looks like a good reference for your paper. Here's a germane quote:
We adapt slowly to some evolutionarily recent dangers. Though we fear much that now carries little risk, we accept many new perils with equanimity. We have too much fear of spiders, but too little fear of driving fast, saturated fat, and very loud music.
Our brain’s flexibility does help us to (slowly) learn anxiety to totally new dangers, but this carries the cost of frequent misconnections of anxiety to cues that do not signal danger. We make inappropriate connections, thrust meaning on random sequences, and develop superstitious fears. We make false correlations between events (Mineka and Tomarken 1989) and misattribute them, particularly when anxious. People who are poor judges of probability report more experiences of illusory causality (Blackmore 1990a,b).
I expect there is a lot that bears on this to be found in sociology/anthropology research on taboo. Putting "too much" effort into security is definitely a social schism, resulting in the practitioner being called paranoid, or worse. In many instances the practitioner may draw extra attention because of the uniqueness of some actions, and that attention in turn affects future security decisions in an attempt to balance between attracting attention, offending taboo, and the like.
Also important and related is the notion of "being rude". I have worked at a number of companies that require badged access for doors, all with the policy of "Do not allow anybody to walk through with you, even co-workers; politely ask them to badge themselves in." Yeah... right... this doesn't work anyplace I have worked, because actually doing so is rude: it potentially communicates distrust, or perhaps blind obedience, and those who actually follow it are likely to be labelled paranoid, etc., as above.
I think Mike Kinney's comments are a good example of what you are talking about, Bruce. Statistically, the chance of another random prowler murder in the same area *ever again* is vanishingly small. So, yes, the response to armour-plate the house is irrational, and I know it. Yet, if a murder/serious attack happened near where I live, I would be unable to resist the demands from my wife that we spend a small fortune on security! All the rationality in the world will make no difference.
> This essay is a draft. It's something I'm working on -- possibly it
> will become a book, but probably not -- and something I'm interested in
> comments about.
I found it very interesting and well-written. I have come across most
of these concepts before, but as usual you have a very cogent way of
tying things together.
> This means that there's an evolutionary advantage to being
> able to hold off the reflexive fight-or-flight response while you work
> out a more sophisticated analysis of the situation and your options for
> dealing with it.
> So here's the first fundamental problem: we have two systems for
> reacting to risk -- a primitive intuitive system and a more advanced
> analytic system -- and they're operating in parallel. And it's hard for
> the neocortex to contradict the amygdala.
> In other words, how
> do we get people to recognize that they need to question their default
In most of your CRYPTO-GRAM issues, you seem to hammer on the idea that
people need to think things through and avoid knee-jerk reactions when
making decisions about security (or, by extension, about anything).
Getting people to think things through seems to be the difficult part,
as philosophers and religious sages seem to have discovered, and
indeed if you figure out how to do that, you'll be a hero!
> a bit of well-placed security theater might be exactly what we need to
> both be and feel more secure.
... is not a conclusion I would have expected from you. It sounds as
though you've given up. :-(
As you said, this is a work in progress, and as such it reads well. Personally I think some more explanation of some of the subjects may be required for the final draft, as you are now pulling interesting material from a lot of areas. It might be worth a re-draft now that you have reached your conclusions, so that the line of thought develops in a more coherent way.
I found the article expansive in its background, but lacking a thread tying the various factors together.
A "Case Study" that puts many of the factors together might be illuminating. Perhaps an example that's familiar from real life (e.g., airport security or voting) showing the influence of psychological factors; then revisit the scenario, this time in a 'rational' manner, and show how actions, responses, and the final outcome differ.
As always, thank you for your tireless work on security.
Very interesting reading, this is something I’ve been thinking a lot about myself.
A major problem is that media repeat, amplify, and distort any bad news that comes in. If it bleeds, it leads. Following the tsunami disaster in Asia recently, stories began appearing about a potential disaster if a certain volcano in the Canary Islands erupted, the resulting tsunami wiping out coastal cities in northern Europe and the Eastern Americas from Canada to Argentina; or the story about the supervolcano under Yellowstone National Park destroying global civilisation when it goes off.....not to mention asteroid strikes and suchlike. The odds of such events occurring are remote, but hysterical bleating by the media scares the daylights out of people.
An interesting example of security theatre, though, is the advice following 9/11 concerning duct tape. I’m in two minds about this: on the one hand, in practical terms, it’s ridiculous to tell people to take such precautions; it smacks of scaremongering. On the other hand, if you give people concrete steps they can take, however ineffective they may be, it may be useful in giving them something to concentrate on. Pure security theatre (I love that phrase, by the way!).
But is it useful?
On the face of it, I’m tempted to go with the first option when I read stuff like this:
Is this responsible? I find the following more reassuring:
No good if you’re at ground zero below a nuke or in front of the sarin canister when it goes off, but otherwise sound, rational advice, the recurring theme being DON’T PANIC!
Terrorism is about scaring people. Press releases like the above from the government succeed only in creating anxiety, and the press picks up on this in a feeding frenzy. Who’s afraid of the big bad wolf? Not you? Think again: he might be outside your door…then again probably not, but can you be sure?
I once read an article describing us as having Stone Age minds with nuclear weapons. The ten thousand years or so of human civilisation are but the blink of an eye from an evolutionary standpoint. As Carl Sagan put it, we still possess evolutionary baggage, such as innate hostility to strangers, territoriality, nationalism, deference to leaders. The fear response is linked intimately to this. Our brains are still in the Stone Age, our reactions to events pretty much instinctive. We rarely stop and question our feelings, especially fear. We trust our leaders to make it go away, and more often than not, they give us the illusion of safety, not safety itself, as in absolute it’s impossible. But when the illusion fails, we want someone’s head.
So here we have it then, the crux of the issue:
How do we close the gap between security and the illusion of security? In other words, can we provide a minimum of “show” security embedded in real, effective security procedures?
Whether we’re speaking of national security, IT security, home security, or personal protection, how do we effectively protect our assets, all the while reassuring those who need it that all reasonable steps have been taken? The key word here is “reasonable”. We shouldn’t spend outrageous sums on ineffective security theatre. This “reassurance” is of course dependent on context. An army base has different requirements to an airport, and these have different needs again to a corporate network or a VIP bodyguard detail (although in many instances, these seem to be the ultimate in security theatre).
Coming back to the trade off, I would suggest that good security reassures the client, in addition to, but not to the detriment of, its primary function.
Given your previous comments about people externalising costs etc, in your list
There are several specific aspects of the security trade-off that can go
wrong. For example...
I expected to see something about who pays the cost of the fix.
Around six years ago there was an interesting article on the evolution of consciousness. I have not been able to find the reference, but the core of the argument went along these lines:
If you are here and the food is over there, one way to get the food is to throw a heavy or sharp thing at it and slow it down. So an early cognitive skill was built around envisioning ballistic trajectory. Now suppose the food is running – you have to throw the projectile at where it will be, not where it is now. Even harder – you are riding a horse, and the food is running – you have to account for your motion and the food’s motion.
This fundamental survival skill became ingrained in our consciousness very early (the argument runs), and we still see it in artifacts of common thinking. For example, people tend to expect the future will be much like the recent past, rather than the distant past. Some of the examples cited examine this. Other work on stock market behavior suggests that people expect that if the market went up yesterday it will probably go up today, too.
The dramatic unities – unity of time, place, action, and character – conform to this innate desire for simple linear projection, or the expectation of ballistic trajectory. The Law of Causality is an expression of this same preconception. And the common fallacy of logic – post hoc ergo propter hoc – is a consequence of this bias in our thinking.
I don’t mean to advocate the position, but it is thought-provoking.
As an aside, one very big benefit from artificial intelligence (should it come to pass) would be not that computers can start thinking like people, but that they can think more rationally and aid us in liberating our thought processes from this evolutionary bias.
It is often the case that where two dominant and conflicting impulses collide, the result is a third behaviour rather than either of the two conflicting ones. These so-called displacement behaviours are often wholly inappropriate and pointless. Examples from the animal kingdom abound; the classic one is self-grooming behaviour in rodents caught between the classic fight-or-flight reactions. A more human example is the fearful individual who, whilst making the trade-off between real fear and perceived risk, engages in grooming, nail-biting, smoking, and other such nervous behaviours. We have all at some time put off dealing with some kind of conflict by finding something else to do. Such behaviour in a group context is often evidenced by governments and policymakers in the face of a fearful public. Powerful dominating factors often don't result in a straightforward choice on the part of either public or policymaker. Displacement activity is often more likely than a straightforward choice of behaviour, regardless of its efficacy. In our modern and horribly complex society, the scenario is not fight or flight, but fight, flight, or procrastinate!
The psychology of security is one that has not been properly addressed especially within the security community. Look forward to reading the book Bruce!
This is a fascinating essay and a really worthwhile area of study.
Whenever you consider human perception, it's worth looking at the context for communication. People perceive meaning by actively constructing it from the messages they receive. The language of security is particularly emotive and evocative, and of course the visual imagery of the terrorist threat is often very disturbing.
The choice of language, or the selection of images, has a very strong influence on the message, and in this particular context must be a major factor in how people judge risk. There is certainly some useful literature in the area of law and the use of juries - the way events are described can cause often unconscious judgements and biases.
And wherever there's communication, there's a cultural context. Communication relies implicitly on shared cultural referents such as how 'serious' messages should be conveyed (non-verbal cues, credibility of the message source, choice of delivery channels, use of media etc). The cultural context differs both within and between different parts of the world, so it follows that messages and understanding will also vary according to where they are perceived - and in what language.
This is just another factor to consider - the 'noise' between the message source and the recipient can radically alter *what* is actually perceived.
Comments for Bruce Schneier
« I would very much appreciate any and all comments, criticisms, additions, corrections, suggestions for further research, and so on. ». Here it goes!
You write: « And yet, at the same time we seem hopelessly bad at it. We get it wrong all the time. » But sometimes, while you might think that we got it wrong, at the other end some jackals are making millions and billions of dollars. So, in summary, some people WANT us to be bad at it. I don't know if you want to put that dimension into the debate or article.
You write : « Why is it that, when food poisoning kills 5,000 people every year and 9/11 terrorists killed 2,973 people in one non-repeated incident, we are spending tens of billions of dollars per year (not even counting the wars in Iraq and Afghanistan) on terrorism defense while the entire budget for the Food and Drug Administration in 2007 is only $1.9 billion? It's my contention that these irrational trade-offs can be explained by psychology. » Or can be explained with my argument up here.
The Prospect Theory is very interesting.
You write: « People make very different trade-offs if something is presented as a gain than if something is presented as a loss. » So that's why all business presentations put the focus on gains rather than losses.
About all those happy-face card tests: it makes me wonder why people actually watch the news or read the papers. Don't they tend to show us all the bad news? Interesting.
You wrote: « It explains why we're worried about risks that are in the news at the expense of risks that are not... » Another thing is that local newspapers won't put the blame on the people; they absolve the culprit. For example: « The road killed 10 persons this weekend. » In my lifetime, I have never seen a road attack a car.
In the Mental Accounting section you write: « Here are the illogical results of two experiments. » I can actually see why people tend to do what the results show; I can somehow feel the difference. In trade-off 1, you only lose $10, and you would have paid for the ticket anyway. In trade-off 2, you lose the ticket and have to buy another one. That's two losses.
The order of the alternatives is very interesting in reference # 57, the divorce question. I think it would be interesting to push some research in that direction. That question is about divorce, but what about something a little more difficult? Like freedom vs security?
In the end, our reactions and evaluations are based on past experiences, sometimes going back to when we were 0 to 6 years old. Are we, personally, able to fly? Not really. So the risk feels big. But does someone who flies a lot, on planes, get used to it? Yes. Is it less risky? Yes.
What about heat? Did your mother tell you a hundred times not to touch something that was hot? Yes. So fire is risky.
What about cars? No risk, unless you had some accident. And your mother told you very little about cars when you were young. Actually, you were playing with them!
I just found "The Psychology of Security" in my email, and am anxious to read it. I just finished "Secrets and Lies"; what an outrageous book! Just finishing up the Cisco CCSP program, and am researching this myself. Keep up the great work!
At 03:06 AM 3/1/2007 -0600, Bruce Schneier wrote:
>This essay is a draft. It's something I'm working on -- possibly it will become
>a book, but probably not -- and something I'm interested in comments about.
Have you read Blink (The Power of Thinking Without Thinking) by Malcolm Gladwell http://www.gladwell.com/blink/ ? It's not about security but about "rapid cognition, about the kind of thinking that happens in a blink of an eye." Like this essay, he uses a lot of research from the field of psychology.
>This essay is my initial attempt to explore the feeling of security: where it
>comes from, how it works, and why it diverges from the reality of security.
I think some of the stuff Gladwell writes about in Blink will help you figure out why we get security wrong and look back and say "how did we miss it."
I enjoyed the article on the psychology of security.
The software produced by the company that employs me is for the health care industry--specifically for behavioral health. One of the issues industry-wide that has slowed the acceptance of converting from paper to electronic chart records has been the perception among health care professionals that electronic records are not secure. Whether electronic records are secure or not, as Bruce accurately points out, is not the issue--the important thing is the feeling that they are not secure. Motivation to change is really an emotional issue, not a rational one. One of the key points, then, is to identify the strategies we can use to change how health care professionals (most of whom are not technically savvy) feel about computer security.
An example might be that you take a physician or nurse and instead of explaining or talking with them about the technical aspects of records security, you decide to put them on the system and let them have the experience of the security protections that are in place. What we discover is that those 20 second blurbs they hear in the media are more important to them than their own personal experience, and so they'll say "Well just because I can't hack into the records doesn’t mean that some evil genius hacker can't do it." Or they'll say "I get a virus on my home PC every other week." I'm sure you know what I mean.
Anyway, the issue of the perception of electronic records security is a significant one in the health care industry. Any attention given to this area is much appreciated.
You weaken your arguments by an over-emphasis on evolutionary "explanations" that are nothing more than just-so stories. If there is good evidence that some mental structure exists, we gain no explanatory power by adding the fable "this structure evolved because it warned primitive man that a leopard was hanging about in the bushes." Perhaps that is why it evolved, but how will we ever know that? And if we do somehow determine that it evolved for some other purpose, does that invalidate your argument in any way?
Almost no behavior fossilizes; and vanishingly little else does, either. Here's the rough math: there are about 250,000 known fossil species stretched out across about 500-million years of evolutionary time after the Cambrian, when non-microscopic life appeared. Current estimates are that, at present, there are between 5 and 10-million extant species, and that the typical species becomes extinct in about a million years. If we are very conservative and say that there may be 1-million species living at any given time that might be evolutionarily interesting (in relation to your issues) at any given moment, then there have been at least 500-million "interesting" species, of which we have fossil evidence of no more than 250,000 (fewer, since not every fossil species will be "interesting" in this sense). So we have physical evidence for just bare existence of less than .05% of the probably "interesting" species whose lives might bear on the issues you're discussing. That's just their existence as species -- no behavior, certainly no behavioral evolutionary explanations for "why" they developed some feature.
No serious paleontologist offers these little fables any more; in fact, they no longer even talk about one species being the "ancestor" of another, since the probability of actually finding fossil evidence for organisms in a true parent/child relationship to each other is also vanishingly small. Paleontology used to indulge in these how-the-elephant-got-its-trunk stories just as the "evolutionary psychologists" and their ilk do today. But do the math and you'll see why they outgrew it.
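The "rough math" in the comment above can be checked directly. This is just a restatement of the commenter's own figures (all of them order-of-magnitude assumptions from that comment, not independent data):

```python
# Sanity check of the fossil-record argument, using the figures given above.
known_fossil_species = 250_000        # described fossil species
years = 500_000_000                   # span since the Cambrian
species_alive_at_once = 1_000_000     # conservative count of "interesting" species
species_lifetime = 1_000_000          # typical species duration, in years

# With steady-state turnover, total species over the span is
# (standing diversity) * (span / lifetime):
total_species = species_alive_at_once * (years // species_lifetime)
fraction_fossilized = known_fossil_species / total_species

print(total_species)        # 500,000,000 "interesting" species in total
print(fraction_fossilized)  # 0.0005, i.e. 0.05%, matching the comment
```

So the stated "less than .05%" figure does follow from the commenter's assumptions.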
This is a very interesting article which I enjoyed reading. A lot of questions and problems have been posed. What is needed is how can we, as security experts, turn this to our advantage? For example, how do we get organisations to take the insider threat seriously, while at the same time they use multiple anti-virus products to protect against the virus threat? How do we sell patch management to those organisations which ignore it, while at the same time taking expensive measures to protect confidentiality to a high standard? As you state in your article, people do not make rational decisions.
I'm in the process of reading the latest draft you sent out in your last cryptogram (http://www.schneier.com/essay-155.html). First off, I'd like to say it's definitely looking good, and it's also the kind of paper that needs to be published in order to address the issues surrounding security (or lack thereof), in real life and in the mentality of everyone out there.
One part that caught my eye is where you mention that you're interested in analysing security from a neurological perspective. You already talk about some aspects of how fear is generated in the brain, etc. There is another topic that may be of interest to you, and could conceivably be applied to the research of security in the human mind.
Neuromarketing, you may have heard about, is a field that has come about in recent years. It is used to try and understand what makes a particular service/product or advertisement appealing to people, in order to improve the appeal to some extent, or, some believe, to find a 'buy button' that will lead people to actually act on a given stimulus.
My girlfriend (Linda Boewing) wrote a dissertation/thesis, entitled "Neuromarketing: Fact or Fiction" (PDF Available at: http://tinyurl.com/2tnzxg), on how Neuromarketing works (or not) and how it may compare to other more traditional methods of marketing. Now I know you're probably not interested in the marketing aspect, but in theory the methodology employed in Neuromarketing could be applied to human subjects in order to determine more precisely what it is, in the nature of threat, that generates fear in the human mind.
Her paper is a pretty good overview of the topic, compared to other papers we've seen out there. Should you have any interest in discussing Neurosecurity or something of the sort, you may be interested in reading her paper.
Just a thought ;)
I enjoyed your essay, thank you.
A slight quibble: you say that you consider control bias a manifestation of optimism bias. I have an indirect example of the distinction.
Imagine you have a choice between a flight and a train journey. The weather forecast is 50% chance of storms. Which should you take?
My sister is an optimist. She once took a flight from London to Edinburgh. It was several hours late owing to poor weather. On arrival she surprised the airline by asking to fly straight back. Having missed her meeting the original journey was pointless.
The pessimist would take the train. If the storm materialises she can get off at the next station and return home.
Or, in short, somebody with a control bias should choose a different mode of transport to somebody with an optimism bias.
I enjoyed reading this special issue, and a few things come to mind:
"Normal" humans are in fact quite good at forgetting bad things: most of them learn from past experiences and only "remember" the improvement in their behaviour.
If there is an "exit door" to the situation, any risk is downplayed; just look at the number of injuries from car accidents versus aviation accidents. Yet people in general do not understand that the most dangerous part of their journey is the road to and from the airport.
A similar situation occurs when someone stops to help people on a busy highway. The individual who stops does not generally think that their own car could be hit by someone else, but this happens quite often.
In the end, I also admire nature for the vast redundancy involved, as you partly point out as well.
Some of the experiments you describe actually apply to user interfaces and web navigation: giving too much or too little information makes usage difficult.
I absolutely loved this article, and I thought your analysis was fascinating. There is only one thing missing that I expected you to discuss. Throughout the article, I kept waiting for you to make a certain point, but you never seemed to get around to it. You have alluded to this point in other articles, and it seemed like it would be appropriate to include here.
It seems that the cost of Security Theatre could many times be justified if it increases the feeling of security for enough people to result in an increase in the amount of revenue that can be made. Even though security might not increase, people would be more apt to want to do business with a company if they feel more secure with that company. I think your example in another article concerned women in hospitals where there is more protection from having their babies kidnapped (if I recall correctly), and this made women more apt to use those hospitals. It seems this same concept could apply to many, many more things. If security cameras in a city, which don't provide much security, make a person feel more comfortable, they may be more willing to travel to that city, which would increase revenue from tourism. If an airline implements security theatre, it may give them a better advantage over other airlines, which would also increase revenue. It seems that implementing security theatre could often have a very good return on investment and increase profits in the long run. I'll leave it to you to come up with some brilliant examples to illustrate this...
I know this seems contrary to thinking that security theatre is not worth the money, as it does not increase security. However, if it can end up paying for itself, then it's not a waste at all, and the decision to implement the 'security measure' would not be a security risk decision at all, it would be an economic decision. It may even be easier to get funding approved for the 'security' project.
Once again, I love reading your work, and I would never have even thought of all this stuff alone. I look forward to reading future articles.
Michael T. Bourgoin, CEH, CISSP
Northrop Grumman Corporation
Information Security Specialist
Certainly, how we individually make these security decisions is relevant. However, many security decisions are made by groups of larger and larger dimensions -- individual, family, school/job, community, state, nation, etc. Group dynamics inject even more complexity into the overall scenario. Individuals may actually have very little control over these decisions, yet have to live with their outcomes.
Consequently, I am less concerned with why individuals make poor choices than with why larger groups (e.g., businesses, governments) do so.
Typically, my individual security decisions will affect primarily me. When our businesses, government(s), etc., make inappropriate security decisions, they have a much broader impact.
We were broken into just over a week ago. This arrived as I struggle
with the stress and thoughts around securing the house against this
type of thing.
I have not read your whole essay yet but wanted to share some
thoughts brought up in the first half while they are in the front of my mind.
First, we got home later in the evening and as I am a routine person,
I walked into the back yard and checked a few things I normally check
with our hot tub. It was then that I saw a broken window. My first
thoughts were if the people were still in the house as all the doors
seemed to be closed except this broken window. In that initial
stressed state, I asked my wife for her cell phone to call 911. This
is strange as I had my day pack with me and my cell phone was inside
it. Why I would ask for hers is beyond me as I was unfamiliar with
how to turn it on and unlock the keyboard as she does all the time.
Once we called 911, they said if we felt there was no one in the
house we could go in as they did not have someone to dispatch right
away. Being the 'man' of the house, I told my wife to stay outside
while I went in to search and make sure it was safe. The front door
was unlocked so I figured that is how they got in / out. I started
the rounds of the house starting with the obvious to the more
secluded portions of the house, ruling out some places as they were
still locked (like the door to the garage). I was wound so tight, I
don't know what I would have done if I had come across anyone in the
house. Luckily not. Though still feeling insecure, and stressed,
there was a method / instinct of how to search the house to ensure it
was safe to be in.
While waiting for police we checked out what was missing and where
they may have been in the house. This was short as they stayed in
one room. The feeling of having your personal space violated by
someone uninvited brought many thoughts of dangerous things to do to
'capture' the bad guys should they try again.
I was stressed that whole night waiting for police to show up. They
called a couple of times and eventually said they would not make it
that night so I boarded it up. That night, every noise, every creak
in the house woke me up. The thoughts of insecurity and of the
potential of them coming back while we are in the house and such kept
me from sleeping well up until 2 nights ago. I guess this shows that
security is also learned in a trusted environment. After a
week of the bad guy not coming back, there is a sense of security
coming back. We do things to mitigate the risks, like having the
windows boarded up, and I put a temporary alarm in (something humorous
around that I'll share in a minute).
How does this relate to security and our minds? Up until now, I felt
safe in the house. I felt secure with all the locks on the doors and
windows, all the open spaces around the house and motion sensing
lights around the house. Until now, I felt this was a safe house and
that I had done most things to make the house less desirable for
someone to break in. Being security-minded, I was stressed that we did
not have a covering over a window by the front door where you could see
right in to where the TV and stereo were, so my immediate blame was on
my wife for not putting one up when I asked her a few months
before. Small things like that make us feel more secure and probably
make the house less desirable.
Funny, now I am thinking more seriously about an alarm, but on the
first night and for a few days afterwards I had thoughts of how to rig
up the windows to hurt someone trying to break in. This is irrational and
makes it less safe for us as a family to live here. Thoughts of bars
on the windows to make us safe are an idea but we decided that we
want to live a life not be locked in a prison. A neighbour suggested
fancy painted rod iron to make it look fancy but to me it is still
being locked in and in a prison. So, we have traded off physical
security for mental sanity of not staying locked in. I put double
barreled deadbolts on the doors to prevent the doors from easily
being opened from the inside forcing them to break more if they want
to take things out or let others in. This may result in more damage
but in my mind may also make them realize they can't get out easily
and they might leave the way they came in or have to make more noise
which I assume they don't want to do.
Part of my stress is dealing with the insurance companies and the
contractors to fix the windows. I am sure that part will go away
once everything is back to normal.
The alarm story. I put a simple alarm in with two magnetic window
sensors and one motion sensor. I was coming home early on Friday to
meet a contractor. About 1/3 of the way there, I got a phone call
from the alarm at the house. I called a neighbour to check on the
house. When he was not home, it took all I had not to speed the
rest of the way home. The neighbour called about 20 minutes later, and
stressed as I was, it felt better when he said the house
looked secure. When I got home and went into the house, our vacuum
was just finishing its cleaning routine. I went from feeling
insecure and violated again when the alarm went off to humour when I
realized it was an automated robotic vacuum that set it off.
That is one reason I have not put in an alarm. I feel I would panic
every time it went off, even if it was a false alarm (which I imagine
it would be most of the time). A reminder of being
vulnerable and being broken into.
I will continue to make things more secure and less desirable for the
bad guy to want to try and break in to this house but I am sure over
time, it will happen again but I will end up feeling safe and secure
long before then. Most of that, I am sure, will be things that I read
about and implement, and I'll mentally feel more secure.
Have I said anything helpful here? I don't know. After reading the
one part of the essay, it does bring out in my mind that the feeling
of security is a mixed bag of things, especially with complacency and
being secure for a longer time. Right after an event, you feel less
secure and implement a bunch more stuff to make you feel secure then
over time, you become complacent again unless more people remind you
(much like the 9/11 terrorist stuff) over and over again. I can only
assume that if you did not do anything and did not remind yourself
about the break in (in my case) in time you would feel much more
secure again. The constant reminders and making changes to make you
"feel" more secure only keep it fresh in your mind and it takes
longer forget the event so you are stressed and worried when chances
are nothing is going to happen again weather you made any changes or not.
I am not saying we should not mitigate risks but I am saying we don't
need to constantly remind ourselves about it. Security seems to be a
mindset, and once we become more complacent and relaxed, we feel safe
and secure. We can only do so many things and need to be reasonable
about it all.
Don't know if this helps at all, but from the recent experience some
things come to mind. After re-reading, it is a bit disjointed, but if I
edited it I might lose some of the initial reactions and thoughts.
(These comments are on the 2/28 update of the essay)
Below are an idea, a suggestion, and a comment.
I must say that I do not have a background in psychology, so this point should be viewed as an unsubstantiated idea. That being said, I wonder if there is something like a "protective apathy" that also influences perceptions of security in order to let us go on with our everyday lives. I think particularly about the oft-cited cars vs. planes safety statistics. Since most people drive more than they fly, I wonder if they overlook the risk in order to be able to function. If people who regularly drive to work or to the store did so with the level of fear that many of them likely have about air travel, life would likely be intolerable for them. In contrast, people can tolerate anxiety about air travel because they need to endure it less frequently, and based on this hypothesis, I would expect that people who travel frequently on airplanes have a lower level of anxiety than infrequent air travelers. People need to deaden themselves to everyday risk (or change their lifestyles). Those who do not are likely heavily medicated.
The use of examples with outdated details may result in a perception that your overall argument is also outdated. I suggest changing references to expensive calculators and VCRs to more generic equivalents. For example, "subjects who expected to receive a piece of electronic equipment in twelve months…" The statement is still true without distracting the reader with the idea that they'd rather have a DVR than a VCR regardless of the timeframe.
Unfortunately, I worry that the example in the "mental accounting" section about the theater tickets may be viewed by some as being as anachronistic as the reference to a VCR or a $125 calculator. As a playwright (and sometimes actor and/or director and/or producer) it often feels like most people choose not to see the play even before any bills or tickets have been lost. Perhaps you could teach me how to use heuristics the evil way in order to convince people that seeing plays (particularly my plays) will protect them from terrorists?
I look forward to reading more on this topic.
A couple of things struck me as I read the article:
* 5000 dead from food poisoning vs. 2973 in the WTC
If terrorists killed 5000 people individually, spread throughout all states in the US over a full year, then maybe the total wouldn't be as visible. I'd contend that it's not the number killed, but the number killed in a single event. Particularly if the location is known to, and may have been visited by, people making their personal risk assessment.
* Fear of flying vs. fear of driving / being driven
A car is inherently a stable vehicle – it usually has 4 or more wheels and in many cases can roll safely to a stop if something goes wrong. An aircraft is stable only under certain circumstances which are barely understood by most members of the public. If motive power stops, a large passenger-carrying aircraft becomes unstable in a very short period, and unless something dramatic happens, everyone inside it dies. This relates directly to the perception of safety.
On the time discounting question, could the preference for $15 now versus $60 in twelve months stem from distrust that they would actually get $60 in twelve months? Also, my wife suggested to me that if college students were the population for this experiment, they might perceive themselves as having immediate needs for the money, and other populations might not respond the same way.
Feelings vs. reality. Reality based on statistics, data.
My thought is that worldview has a significant role to play in this. And worldview can be right or wrong. What is real/true? Look at the Muslim worldview, how do they perceive risk--or maybe that isn't the right question.
A Christian looks at any situation not through the lens of risk, but of calling, of faith (Paul said, "anything not done in faith is sin", meaning that every action should be done knowing it is what God wants one to do). If God says go, then I go and I have His protection. Like the attitude of Daniel facing the king's wrath. It was clear what the right choice was, even though it meant disobeying the king's clear orders under penalty of death. God could protect him or let him be martyred. But that wasn't a factor in the risk calculation. Daniel felt no insecurity. He was secure in God's clear direction.
So, there is an element of risk reality that is not statistics. Statistics say that if you walk in harm's way you are more likely to be harmed. Faith says, "if the boss tells me to go and he can control all risk according to his good plan" then risk isn't an issue. Reality isn't statistics. Faith is preeminent, and all else are side issues.
Whether you or I am a man of faith does not change the fact that there are people of faith with different worldviews who do not judge reality by statistics. You may judge them as fools, and doubt any different reality. I'm like you in having mild disdain for those who ignore statistics, who fear the wrong things. But I'm saying that there are also those who judge on a very different plane, based on a different reality that is not statistics. So, I judge based on statistics, until the boss says to ignore them. In neither case do I fear. There is another truth from the Bible--"there is no fear in love". That ties in here too.
The person of faith understands these things, whether Christian or Muslim, or something else, at least something similar. Clearly, some faith is foolish (since many faiths are contradictory, although some believe that contradictory things can both be true). But faith drives actions and perceptions and feelings.
Can we say the risks we willingly take are driven by worldview? Some people don't trust statistics (and there is something to be said for that). Some are ignorant. Some are superstitious. Some act in faith (which is not much different from superstition except that the object of faith is presumably more reliable).
Can we say that what we fear is determined by our upbringing/experience and our worldview? I guess so.
Thank you for the informative essay.
Of course, security theater has a cost, just like real
security. It can cost money, time, capabilities, freedoms, and so
on. But used in conjunction with real security, a bit of
well-placed security theater might be exactly what we need to both
be and feel more secure.
I disagree, for several reasons; most of them you have explained
before quite clearly. But there is one point I have not seen you make.
Our reason and our emotions constantly battle for control of
our perceptions of the world. When our emotions dominate, our reason
has no chance, and arguments from reason cannot prevail. The most
important effect of security theater is not to make people more
secure; it's to increase the salience of their fears. Every
pill-bottle seal is a reminder that without pill-bottle seals, a
criminal might murder your children with poisoned Tylenol, many
years after the incident would otherwise be forgotten. This does not
improve our overall ability to make sensible security tradeoffs as a
society; it contributes to a climate of unreasoning fear.
Of course, we can never hope to be immune to the reason-distorting
effects of our natural cognitive biases, but we can minimize and
compensate for them.
The other things you point out still stand as well: it makes it harder
to analyze security measures and harder to convince the public that
they are flawed, and it has the costs you list.
I think a lot of your examples are only examples of mis-estimation if you assume the subject of the experiment
a) Believes that the experimenter is fair
b) Prefers to be fair with the experimenter
I would suggest that the reality is that the subject
a) Assumes that they are being swindled
b) Prefers to swindle the experimenter
Or more generally, people are always looking for an angle, and assume everyone else is too as the default. With these assumptions, several (but not all) of your examples of irrational behavior are merely selfish and suspicious instead.
I reside in a developing country and would make the comment that many of the short-term heuristic behavior patterns you speak of seem to be more pronounced in a country that has many people subsisting at survival level. Long-term planning is next to non-existent amongst most of the population. People in 1st world countries have the 'time' to indulge in long-term planning. In other words, it isn't just the difference between 100,000 BC and today, but a difference between countries in the modern day depending on their development level. Just a comment that supports what I found in your very excellent essay. I very much look forward to the finished product.
I found this draft to be very enlightening. Too many times we who have the responsibility for cyber security become frustrated at the attitudes and positions of our peers and leadership on why we need to implement security controls. We rarely step outside of our shoes and look at the world from the view of those we are trying to convince that the world is bad. I have passed this draft to my team as required reading, and to my peers to help them understand why we get so many roadblocks when we want to do the right thing. I have also passed this to my leadership to help them understand the issue.
Please let all of us know when the final document is ready for review; I await it with bated eyes!
The article does a good job with behavioral psychology.
Two additional thoughts:
1. Page 1, 4th paragraph: "You might feel like you're at high risk of burglary...."
You blur a critical distinction between feeling and thinking. Feelings are subjective and can never be wrong, e.g. "I feel frightened" cannot be wrong. As soon as "feel" is followed by "like" or "that", one is no longer talking about a feeling, but a judgment. "I feel that a PC is more elegant than a Mac" is an opinion or a judgment wrapped up as a feeling. It can be — and often is — wrong. People often feel fear when there is little likelihood of danger. Thinking and feeling really are different.
2. Consider throwing the Prisoner's Dilemma into the stew. In trying to optimize individual security, we frequently sacrifice societal security, and vice versa. It's always a difficult trade-off. The same concept is found in "Tragedy of the Commons", Science, 1968. Happy to discuss further, if you wish.
I really enjoyed your paper, but it came to mind most vividly while I was trying to show my horse that a plastic bag won't eat her. For some reason, free-range plastic bags are perceived as dangerous by most equines.
Earlier this winter this same horse had an "event" by the water trough. I don't know what it was, but in her fear she, tied to the gate post, snapped the post and pulled it, gate and boards down into the pasture. After that event, she became so fearful of the water trough that she quit drinking from it. Although the other two horses were minding their own business in the pasture, nowhere near the gate or water trough, they too developed a phobia of the water trough although nothing had happened to them. FUD amongst horses is contagious. And it took over a month of tough love to get them drinking freely again. (How long before we can keep our shoes on?)
Humans have become the most successful predators on Earth because of the neocortex. But without the neocortex, humans would just be another puny prey animal. So it seems prey-psychology might shed light on human behavior.
Of all the prey animals humans have domesticated, horses and their behavior have been the most observed because influencing/controlling that behavior has real economic value (and getting it wrong can be very risky).
Modern horse-training, often identified as natural horsemanship, is based on prey-psychology and soothing irrational (from the human perspective) fears. One technique, approach and retreat, is designed to acclimate the equine to perceived danger. Just as I repeatedly showed the horse the plastic bag, it called to my mind how humans become accustomed to the dangers of the daily commute, living in a war zone, or other routine risks.
There are a number of books, videos, and practitioners available on this topic, some better than others. But not a lot of hard statistics, since on the whole the horse industry would rather be riding than writing.
While I'm a big fan of the horse, if I thought dog-psychology (predator) or cow-psychology (not studied) were a useful lead, I'd suggest that instead.
Re: "Why is it that.....he is more afraid of airplanes than automobiles?"
Not sure where this fits in the puzzle, but my own irrationality is this: Last week I had another bad dream about being in an airplane crash (commercial jet). I have these dreams regularly (every 1-2 months), and my conscious self knows that the dreams are simply a metaphor for whatever is really bothering me; but I am also unable to contain the effect it has on my conscious self, which is that I do not like to fly (only flown once since Sept 11).
Now if I could change my bad dreams to use a metaphor of car crashes, or falling elevators, or alien abductions, then I could have appropriately irrational fears about them, and ride on airplanes happily.
I just read your article, "Human Brain a Poor Judge of Risk," in Wired magazine. It's excellent, and I particularly appreciate your drawing attention to the work of some researchers I was not familiar with (in addition to Kahneman, whose work has fascinated me for years).
Ever since I went to grad school a little over a decade ago, and wound up working on a project to estimate extinction risk and mitigation prospects for an endangered species, I've been wondering if humans are any smarter than yeast. A neurologically correct answer is "yes", but it's provisional. In a vat of grape juice, yeast simply don't have the tools they need to avoid poisoning their own environment. Similarly, given the complexity of the environmental and social changes we're experiencing, we seem to lack the cognitive tools we need to invent good enough solutions fast enough to avoid making things considerably worse for ourselves.
Julian Simon might point out (had he not become, in the long run, already dead) that I'm pretty comfortable in my air-conditioned office, etc., but that just makes me think of Steve McQueen's lines in "The Magnificent Seven" about the guy falling off the top of a new six-story skyscraper: as he passed each floor, someone inside would ask how he was doing, and to each he replied, "So far, so good."
Optimism can be useful, but so can realism. Thanks for an illuminating and thought-provoking article.
This is overall an interesting survey of an area that requires a lot more study.
I have a few specific comments on the Cost Heuristics section (I apologize if these are duplicative, I have not reviewed all of the comments). Specifically, many of the trade-offs described in that section as "exactly the same" are not actually so. To take the first example, it is not the same to lose a $10 bill or a $10 ticket. Many people buy tickets for plays in advance, and then change their view to some extent on whether they really want to see the play (but if they already have the ticket they are more likely to go). In Trade-off 1 (where a $10 bill is lost), the person is at the theatre without having bought an advance ticket, meaning that she really wants to see the play. In Trade-off 2 (where a $10 ticket is lost), the person is at the theatre with an advance ticket, and may have changed her views on wanting to see the play since buying the ticket. That is, the trade-offs are not "exactly the same" in terms of opportunity cost weighted by preference, even if they are the same in pure economic terms.
The other comparisons in the Cost Heuristics section tend to have similar flaws.
- Maury Shenk
I'm responding to the newsletter version of this article, rather than the above which is an earlier draft (in case the distinction matters).
Regarding fear of risk vs. actual danger of same, I think there are two concepts that may have been omitted:
If we don't at least somewhat understand a particular mechanism of risk, we are less likely to think it will apply to us. A common descriptive phrase is "fat, dumb and happy". (This may also tie into the "availability heuristic" section of your paper.)
I believe that "slow" (or extended-build or -duration) risk, like risk of a heart attack, is less likely to be feared than "fast" risk, like that of attack by a shark while swimming. There is no definitive, observable threshold we can know we have crossed for "slow" risk, at least (as in the heart attack example) without extensive analysis.
Also, in one section, you wrote:
Subjects were shown cards, one after another, with either a cartoon happy face or a cartoon frowning face. [...] Subjects' preference for happy faces reduced their accuracy.
I disagree. I believe the reflection of normal reality, not preference, guided the results. (It is more likely to find happy than frowning faces in our surroundings, because frowning is somewhat of a social taboo in our society unless there are specific extenuating circumstances.)
Overall, I have to agree with a quote by Helen Keller, at least in the proper context:
"Security is mostly a superstition; it does not exist in nature. Life is either a daring adventure or nothing."
Great article, but who are you going to trust to do the security theatre honestly?
It is definitely a nice essay. However, the experiments and the results need to be better organised. It's mind-crunching reading the experiments one after the other.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.