Essays in the Category “Psychology of Security”
In 1989, handgun expert Jeff Cooper invented something called the Color Code to describe what he called the 'combat mind-set.' Here is his summary:
In White you are unprepared and unready to take lethal action. If you are attacked in White you will probably die unless your adversary is totally inept.
In Yellow you bring yourself to the understanding that your life may be in danger and that you may have to do something about it.
In Orange you have determined upon a specific adversary and are prepared to take action which may result in his death, but you are not in a lethal mode.
It is easy to feel scared and powerless in the wake of attacks like those at the Boston Marathon. But giving in to that fear plays into the perpetrators' hands.
As the details about the bombings in Boston unfold, it'd be easy to be scared. It'd be easy to feel powerless and demand that our elected leaders do something—anything—to keep us safe.
It'd be easy, but it'd be wrong.
The focus on training obscures the failures of security design
Should companies spend money on security awareness training for their employees? It's a contentious topic, with respected experts on both sides of the debate. I personally believe that training users in security is generally a waste of time, and that the money can be spent better elsewhere. Moreover, I believe that our industry's focus on training serves to obscure greater failings in security design.
Against Security: How We Go Wrong at Airports, Subways, and Other Sites of Ambiguous Danger, by Harvey Molotch, Princeton University Press, 278 pages, $35.
Security is both a feeling and a reality, and the two are different things. People can feel secure when they’re actually not, and they can be secure even when they believe otherwise.
This discord explains much of what passes for our national discourse on security policy.
In May, neuroscientist and popular author Sam Harris and I debated the issue of profiling Muslims at airport security. We each wrote essays, then went back and forth on the issue. I don't recommend reading the entire discussion; we spent 14,000 words talking past each other. But what's interesting is how our debate illustrates the differences between a security engineer and an intelligent layman.
Horrific events, such as the massacre in Aurora, can be catalysts for social and political change. Sometimes it seems that they're the only catalyst; recall how drastically our policies toward terrorism changed after 9/11 despite how moribund they were before.
The problem is that fear can cloud our reasoning, causing us to overreact and to focus too narrowly on the specifics. The key is to steer our desire for change wisely during that time of fear.
Our brains are specially designed to deal with cheating in social exchanges. The evolutionary psychology explanation is that we evolved brain heuristics for the social problems that our prehistoric ancestors had to deal with. Once humans became good at cheating, they then had to become good at detecting cheating -- otherwise, the social group would fall apart.
Perhaps the most vivid demonstration of this can be seen with variations on what's known as the Wason selection task, named after the psychologist who first studied it.
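The essay doesn't spell out the task itself, so here is a sketch of the classic abstract version (the specific cards and rule are assumptions, not from the text above): given the rule "if a card shows a vowel on one side, it shows an even number on the other," only cards whose hidden face could falsify the rule need to be flipped.

```python
# Wason selection task (classic abstract version, assumed for illustration):
# rule is "if a card shows a vowel on one side, it shows an even number
# on the other." Which visible cards must be flipped to test the rule?

def must_flip(visible_face):
    """A card needs flipping only if its hidden face could falsify the rule:
    a visible vowel might hide an odd number, and a visible odd number
    might hide a vowel. Consonants and even numbers can never falsify it."""
    if visible_face.isalpha():
        return visible_face.lower() in "aeiou"  # vowel: hidden side must be even
    return int(visible_face) % 2 == 1           # odd number: hidden side must not be a vowel

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_flip(c)])  # → ['E', '7']
```

In the abstract version most subjects get this wrong (commonly choosing "E" and "4"); performance improves dramatically when the same logical structure is framed as detecting cheaters in a social exchange, which is the point the paragraph above makes.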
At a security conference recently, the moderator asked the panel of distinguished cybersecurity leaders what their nightmare scenario was. The answers were the predictable array of large-scale attacks: against our communications infrastructure, against the power grid, against the financial system, in combination with a physical attack.
I didn't get to give my answer until the afternoon, which was: "My nightmare scenario is that people keep talking about their nightmare scenarios."
There's a certain blindness that comes from worst-case thinking. An extension of the precautionary principle, it involves imagining the worst possible outcome and then acting as if it were a certainty.
It's hard work being prey. Watch the birds at a feeder. They're constantly on alert, and will fly away from food -- from easy nutrition -- at the slightest movement or sound. Given that I've never, ever seen a bird plucked from a feeder by a predator, it seems like a whole lot of wasted effort against a small threat.
Natural human risk intuition deserves respect -- even when it doesn't help the security team
This essay also appeared in The Sydney Morning Herald and The Age.
People have a natural intuition about risk, and in many ways it's very good. It fails at times due to a variety of cognitive biases, but for normal risks that people regularly encounter, it works surprisingly well: often better than we give it credit for.
This struck me as I listened to yet another conference presenter complaining about security awareness training.
If the size of your company grows past 150 people, it's time to get name badges. It's not that larger groups are somehow less secure, it's just that 150 is the cognitive limit to the number of people a human brain can maintain a coherent social relationship with.
Primatologist Robin Dunbar derived this number by comparing the volume of the neocortex -- the "thinking" part of the mammalian brain -- with the size of primate social groups. By analyzing data from 38 primate genera and extrapolating to the size of the human neocortex, he predicted a human "mean group size" of roughly 150.
A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?
I discounted the exercise at the time, calling it "embarrassing." I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 resulted primarily from a confluence of three things: the failure of centralized coordination within the FBI, the failure of local control, and some lucky breaks on the part of the attackers.
When I was growing up, children were commonly taught: "don't talk to strangers." Strangers might be bad, we were told, so it's prudent to steer clear of them.
And yet most people are honest, kind, and generous, especially when someone asks them for help. If a small child is in trouble, the smartest thing he can do is find a nice-looking stranger and talk to him.
This essay appeared as the first half of a point-counterpoint with Marcus Ranum. Marcus's half is here.
We engage in risk management all the time, but it only makes sense if we do it right.
"Risk management" is just a fancy term for the cost-benefit tradeoff associated with any security decision. It's what we do when we react to fear, or try to make ourselves feel secure.
People tend to be risk-averse when it comes to gains, and risk-seeking when it comes to losses. If you give people a choice between a $500 sure gain and a coin-flip chance of a $1,000 gain, about 75 percent will pick the sure gain. But give people a choice between a $500 sure loss and a coin-flip chance of a $1,000 loss, about 75 percent will pick the coin flip.
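What makes this result striking is that, in each pair, the sure thing and the gamble have exactly the same expected value; the preference flip is psychological, not mathematical. A few lines of Python make the arithmetic explicit:

```python
# Both choices in each pair have the same expected value, yet most people
# prefer the sure thing for gains and the gamble for losses.

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

sure_gain   = [(1.0, 500)]
gamble_gain = [(0.5, 1000), (0.5, 0)]
sure_loss   = [(1.0, -500)]
gamble_loss = [(0.5, -1000), (0.5, 0)]

print(expected_value(sure_gain), expected_value(gamble_gain))  # 500.0 500.0
print(expected_value(sure_loss), expected_value(gamble_loss))  # -500.0 -500.0
```

Since the expected values are identical within each pair, a purely "rational" chooser would be indifferent -- which is exactly why the observed 75/25 splits demonstrate a bias rather than a calculation.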
People don't have a standard mathematical model of risk in their heads.
Security is both a feeling and a reality, and they're different. You can feel secure even though you're not, and you can be secure even though you don't feel it. There are two different concepts mapped onto the same word -- the English language isn't working very well for us here -- and it can be hard to know which one we're talking about when we use the word.
There is considerable value in separating out the two concepts: in explaining how the two are different, and understanding when we're referring to one and when the other.
Uncle Milton Industries has been selling ant farms to children since 1956. Some years ago, I remember opening one up with a friend. There were no actual ants included in the box. Instead, there was a card that you filled in with your address, and the company would mail you some ants.
The Availability Heuristic
The "availability heuristic" is very broad, and goes a long way toward explaining how people deal with risk and trade-offs. Basically, the availability heuristic means that people "assess the frequency of a class or the probability of an event by the ease with which instances or occurrences can be brought to mind." In other words, in any decision-making process, easily remembered (available) data are given greater weight than hard-to-remember data.
In general, the availability heuristic is a good mental shortcut. All things being equal, common events are easier to remember than uncommon ones.
Security is both a feeling and a reality. And they're not the same.
The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures. We can calculate how secure your home is from burglary, based on such factors as the crime rate in the neighborhood you live in and your door-locking habits.
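That calculation is, at its core, an expected-loss comparison: the probability of an incident times its cost, with a countermeasure worth buying only if it reduces expected loss by more than it costs. A minimal sketch, using made-up numbers purely for illustration (none of these figures come from the essay):

```python
# Expected-loss sketch of the burglary example. All numbers are assumed
# for illustration; the structure, not the figures, is the point.

def expected_annual_loss(p_incident, loss):
    """Expected yearly cost: probability of an incident times its cost."""
    return p_incident * loss

p_burglary   = 0.03     # assumed annual burglary probability in the neighborhood
loss         = 10_000   # assumed average loss per burglary
lock_cost    = 100      # assumed annual cost of better door locks
p_with_locks = 0.01     # assumed probability after installing the locks

savings = (expected_annual_loss(p_burglary, loss)
           - expected_annual_loss(p_with_locks, loss))
print(savings, savings > lock_cost)  # 200.0 True -> the locks pay for themselves
```

The same structure applies to any countermeasure: the feeling of security may or may not track this number, which is precisely the gap between feeling and reality the essay describes.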
Two people are sitting in a room together: an experimenter and a subject. The experimenter gets up and closes the door, and the room becomes quieter. The subject is likely to believe that the experimenter's purpose in closing the door was to make the room quieter.
This is an example of correspondent inference theory.
If you encounter an aggressive lion, stare him down. But not a leopard; avoid his gaze at all costs. In both cases, back away slowly; don't run. If you stumble on a pack of hyenas, run and climb a tree; hyenas can't climb trees.
Everyone had a reaction to the horrific events of the Virginia Tech shootings. Some of those reactions were rational. Others were not.
A high school student was suspended for customizing a first-person shooter game with a map of his school.
The security literature is filled with risk pathologies, heuristics that we use to help us evaluate risks. I've collected them from many different sources.

Risks of Risks

Exaggerated Risks | Downplayed Risks
Spectacular | Pedestrian
Rare | Common
Personified | Anonymous
Beyond one’s control | More under control
Externally imposed | Taken willingly
Talked about | Not discussed
Intentional or man-made | Natural
Immediate | Long-term or diffuse
Sudden | Evolving slowly over time
Affecting them personally | Affecting others
New and unfamiliar | Familiar
Uncertain | Well understood
Directed against their children | Directed toward themselves
Morally offensive | Morally desirable
Entirely without redeeming features | Associated with some ancillary benefit
Not like their current situation | Like their current situation
When you look over the list of exaggerated and downplayed risks in the table here, the most remarkable thing is how reasonable so many of them seem. This makes sense for two reasons.
The human brain is a fascinating organ, but it's an absolute mess. Because it has evolved over millions of years, there are all sorts of processes jumbled together rather than logically organized. Some of the processes are optimized for only certain kinds of situations, while others don't work as well as they could. There's some duplication of effort, and even some conflicting brain processes.
While visiting some friends and their new baby in the hospital last week, I noticed an interesting bit of security. To prevent infant abduction, all babies had RFID tags attached to their ankles by a bracelet. There are sensors on the doors to the maternity ward, and if a baby passes through, an alarm goes off.
Infant abduction is rare, but still a risk.