Why the Human Brain Is a Poor Judge of Risk
By Bruce Schneier
March 22, 2007
The human brain is a fascinating organ, but it's an absolute mess. Because it has evolved over millions of years, there are all sorts of processes jumbled together rather than logically organized. Some of the processes are optimized for only certain kinds of situations, while others don't work as well as they could. There's some duplication of effort, and even some conflicting brain processes.
Assessing and reacting to risk is one of the most important things a living creature has to deal with, and there's a very primitive part of the brain that has that job. It's the amygdala, and it sits right above the brainstem, in what's called the medial temporal lobe. The amygdala is responsible for processing base emotions that come from sensory inputs, like anger, avoidance, defensiveness and fear. It's an old part of the brain, and seems to have originated in early fishes.
When an animal -- lizard, bird, mammal, even you -- sees, hears or feels something that's a potential danger, the amygdala is what reacts immediately. It's what causes adrenaline and other hormones to be pumped into your bloodstream, triggering the fight-or-flight response, causing increased heart rate and beat force, increased muscle tension and sweaty palms.
This kind of thing works great if you're a lizard or a lion. Fast reaction is what you're looking for; the faster you can notice threats and either run away from them or fight back, the more likely you are to live to reproduce.
But the world is actually more complicated than that. Some scary things are not really as risky as they seem, and others are better handled by staying in the scary situation to set up a more advantageous future response. This means there's an evolutionary advantage to being able to hold off the reflexive fight-or-flight response while you work out a more sophisticated analysis of the situation and your options for handling it.
We humans have a completely different pathway to cope with analyzing risk. It's the neocortex, a more advanced part of the brain that developed very recently, evolutionarily speaking, and only appears in mammals. It's intelligent and analytic. It can reason. It can make more nuanced trade-offs. It's also much slower.
So here's the first fundamental problem: We have two systems for reacting to risk -- a primitive intuitive system and a more advanced analytic system -- and they're operating in parallel. It's hard for the neocortex to contradict the amygdala.
In his book Mind Wide Open, Steven Johnson relates an incident when he and his wife lived in an apartment where a large window blew in during a storm. He was standing right beside it at the time and heard the whistling of the wind just before the window blew. He was lucky -- a foot to the side and he would have been dead -- but the sound has never left him:
Ever since that June storm, a new fear has entered the mix for me: the sound of wind whistling through a window. I know now that our window blew in because it had been installed improperly.... I am entirely convinced that the window we have now is installed correctly, and I trust our superintendent when he says that it is designed to withstand hurricane-force winds. In the five years since that June, we have weathered dozens of storms that produced gusts comparable to the one that blew it in, and the window has performed flawlessly.
I know all these facts -- and yet when the wind kicks up, and I hear that whistling sound, I can feel my adrenaline levels rise.... Part of my brain -- the part that feels most me-like, the part that has opinions about the world and decides how to act on those opinions in a rational way -- knows that the windows are safe.... But another part of my brain wants to barricade myself in the bathroom all over again.
There's a good reason evolution has wired our brains this way. If you're a higher-order primate living in the jungle and you're attacked by a lion, it makes sense that you develop a lifelong fear of lions, or at least fear lions more than another animal you haven't personally been attacked by. From a risk/reward perspective, it's a good trade-off for the brain to make, and -- if you think about it -- it's really no different than your body developing antibodies against, say, chicken pox based on a single exposure.
In both cases, your body is saying: "This happened once, and therefore it's likely to happen again. And when it does, I'll be ready." In a world where the threats are limited -- where there are only a few diseases and predators that happen to affect the small patch of earth occupied by your particular tribe -- it works.
Unfortunately, the brain's fear system doesn't scale the same way the body's immune system does. While the body can develop antibodies for hundreds of diseases, and those antibodies can float around in the bloodstream waiting for a second attack by the same disease, it's harder for the brain to deal with a multitude of lifelong fears.
All this is about the amygdala. The second fundamental problem is that because the analytic system in the neocortex is so new, it still has a lot of rough edges, evolutionarily speaking. Psychologist Daniel Gilbert wrote a great comment that explains this:
The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That's what brains did for several hundred million years -- and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.
Our ability to duck that which is not yet coming is one of the brain's most stunning innovations, and we wouldn't have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.
Much of the current research into the psychology of risk consists of examples of these newer parts of the brain getting things wrong.
And it's not just risks. People are not computers. We don't evaluate security trade-offs mathematically, by examining the relative probabilities of different events. Instead, we have shortcuts, rules of thumb, stereotypes and biases -- generally known as "heuristics." These heuristics affect how we think about risks, how we evaluate the probability of future events, how we consider costs, and how we make trade-offs. We have ways of generating close-to-optimal answers quickly with limited cognitive capabilities. Don Norman's wonderful essay, Being Analog, provides a great background for all this.
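To make concrete what "evaluating security trade-offs mathematically" would look like, here is a minimal sketch of an expected-cost comparison between two options. All probabilities and dollar figures are hypothetical illustrations, not data from the essay; the point is only that an analytic system could reduce the decision to arithmetic, while our heuristics do nothing of the sort.

```python
def expected_cost(p_loss: float, loss: float, countermeasure_cost: float) -> float:
    """Expected total cost: the price of the countermeasure plus the
    probability-weighted cost of the loss it fails to prevent."""
    return countermeasure_cost + p_loss * loss

# Option A: no alarm system -- a higher chance of a $10,000 burglary loss.
no_alarm = expected_cost(p_loss=0.05, loss=10_000, countermeasure_cost=0)

# Option B: install an alarm -- $300 up front, and the burglary chance drops.
alarm = expected_cost(p_loss=0.01, loss=10_000, countermeasure_cost=300)

# A purely analytic evaluator picks whichever expected cost is smaller.
print(f"no alarm: ${no_alarm:.0f}, alarm: ${alarm:.0f}")  # no alarm: $500, alarm: $400
```

A heuristic-driven brain, by contrast, weighs how vivid or recent a burglary feels rather than multiplying probabilities by losses, which is exactly the gap between feeling secure and being secure that the essay describes.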
Daniel Kahneman, who won a Nobel Prize in Economics for some of this work, talks about humans having two separate cognitive systems, one that intuits and one that reasons:
The operations of System 1 are typically fast, automatic, effortless, associative, implicit (not available to introspection) and often emotionally charged; they are also governed by habit and therefore difficult to control or modify. The operations of System 2 are slower, serial, effortful, more likely to be consciously monitored and deliberately controlled; they are also relatively flexible and potentially rule governed.
When you examine the brain heuristics about risk, security and trade-offs, you can find evolutionary reasons for why they exist. And most of them are still very useful. The problem is that they can fail us, especially in the context of a modern society. Our social and technological evolution has vastly outpaced our evolution as a species, and our brains are stuck with heuristics that are better suited to living in primitive and small family groups.
And when those heuristics fail, our feeling of security diverges from the reality of security.