Schneier on Security
A blog covering security and security technology.
March 18, 2008
Risk and the Brain
New research on how the brain estimates risk:
Using functional imaging in a simple gambling task in which risk was constantly changed, the researchers discovered that an early activation of the anterior insula of the brain was associated with mistakes in predicting risk.
The time course of the activation also indicated a role in rapid updating, suggesting that this area is involved in how we learn to modify our risk predictions. The finding was particularly interesting, notes lead author and EPFL professor Peter Bossaerts, because the anterior insula is the locus of where we integrate and process emotions.
"This represents an important advance in our understanding of the neurological underpinnings of risk, in analogy with an earlier discovery of a signal for forecast error in the dopaminergic system," says Bossaerts, "and indicates that we need to update our understanding of the neural basis of reward anticipation in uncertain conditions to include risk assessment."
Posted on March 18, 2008 at 6:51 AM
Interesting read. I wonder whether research could build on these findings to show how to mitigate incorrect risk assessment (be it via simple conscious effort or chemical help).
The mitigation of "incorrect risk assessment" is via the accurate transmission of priors and evidence (using a Bayesian framework).
That is, if you lie to people, if you deny them information, then incorrect assessments will necessarily follow.
Bruce's (and many other people's) premises about the inability of lowly humans to make "correct" risk assessments completely ignore the GIGO principle. As a species, we would have died out long ago if our fundamental notions of what is or is not risky were in some way defective.
I have watched enough animals -- human and otherwise -- to know that P(A|B)=P(B|A)*P(A)/P(B) is built into the core neural algorithms.
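The GIGO point above can be made concrete with a small sketch. The numbers here are purely illustrative (not from the article): Bayes' rule itself is applied correctly in both cases, but feeding it a distorted prior yields a distorted posterior.

```python
# Illustration of GIGO in Bayesian risk assessment: the machinery is fine,
# but a lied-to (inflated) prior necessarily produces a wrong conclusion.
def posterior(p_b_given_a, p_a, p_b_given_not_a):
    """P(A|B) = P(B|A)P(A) / P(B), with P(B) expanded by total probability."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# Accurate prior: the threat is genuinely rare (hypothetical numbers).
honest = posterior(p_b_given_a=0.9, p_a=0.0001, p_b_given_not_a=0.01)

# Inflated prior (e.g., fear-driven reporting): same evidence, bad input.
inflated = posterior(p_b_given_a=0.9, p_a=0.05, p_b_given_not_a=0.01)

print(f"posterior with honest prior:   {honest:.4f}")   # ~0.0089
print(f"posterior with inflated prior: {inflated:.4f}")  # ~0.8257
```

Identical evidence, two wildly different risk estimates: the error enters through the prior, not through the conditional-probability arithmetic.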
So what they're saying is people make mistakes in risk assessment when they use their emotions.
I hope the government never learns this; can you imagine how they could misappropriate funds if they had scientifically driven fearmongering? It would make today's security theater look like an "Our Gang" show.
Which explains why fearmongering is the guiding principle of government.
@"GIGO principle" poster:
P(A|B)=P(B|A)*P(A)/P(B) is all well and good, and the GIGO concept (by itself) is certainly true enough, but seriously, come on. The problem with risk assessment ain't GIGO.
Case in point: Airport security.
We have very good statistics about the odds and methods of being hijacked, bombed, or otherwise killed on a plane. Despite this we still have security that is demonstrably disproportionate to, and ineffective against, the risk.
The ability to correctly respond to immediate physical risks may be effectively hardwired into the brain and seems to do a pretty good job (though the obvious flaw with your logic there would be almost every teen suicide case).
But being good at that kind of risk assessment does not mean that people are good at dealing with less immediate, less physical risk assessments such as those involving gambling, global warming, and, apparently, airport security.
"But being good at that kind of risk assessment does not mean that people are good at dealing with less immediate, less physical risk assessments such as those involving gambling, global warming, and, apparently, airport security."
I think the difference here is one of Concrete vs. Abstract.
The notion of being splatted by a bus is concrete, and I'm sure we have lots of "hard-wiring" that comes into play in avoiding it. Other types of risks require a lot more of our higher reasoning to assess.
"The notion of being splatted by a bus is concrete, and I'm sure we have lots of "hard-wiring" that comes into play in avoiding it. Other types of risks require a lot more of our higher reasoning to assess."
Actually, the abstracted risk of getting hit by a bus is no different from the abstracted risk of getting killed by a terrorist, with the exception that the bus might be viewed as accidental while terrorists are viewed as malicious, and therefore more threatening.
When faced with a moving bus, however, the hardwired responses come into play. I'd assume (not having faced this situation) that having a terrorist nearby with guns or bombs would trigger a very similar "base instinct" fight-or-flight response.
The abstraction isn't from the type of threat per se, but rather the proximity in space and time to you.
Now if I were faced with a terrorist on the left and a moving bus on the right blocking my escape, then I could probably do a comparative risk analysis :)
"Bruce's (and many other people's) premises about the inability of lowly humans to make "correct" risk assessments completely ignore the GIGO principle. As a species, we would have died out long ago if our fundamental notions of what is or is not risky were in some way defective.
I have watched enough animals -- human and otherwise -- to know that P(A|B)=P(B|A)*P(A)/P(B) is built into the core neural algorithms."
As a species we evolved in an environment much different from the modern technological society we have developed. We have yet to prove that we can survive in such an environment. (One of the solutions to the Fermi paradox is that we in fact can't.)
Part of risk assessment is determining P(A) and P(B) - it isn't all conditional probability. Humans tend to be bad at that when it involves modern technological threats like terrorism or, say, child predators on the internet. The use of modern technology, like television, to present threats out of context, also skews the evaluation (which gets back to our ability to survive the modern technology we've created).
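The point about determining P(A) badly can be sketched with invented numbers (these counts are hypothetical, chosen only for illustration): conditional reasoning is useless if the marginal probability is estimated from media coverage rather than from actual frequency.

```python
# Hypothetical counts illustrating how coverage-based estimates of P(A)
# diverge from frequency-based ones (availability bias, not bad Bayes).
yearly_deaths = {"car crash": 40000, "terrorism": 50}    # assumed frequencies
news_stories  = {"car crash": 200,   "terrorism": 5000}  # assumed coverage

def share(counts, key):
    """Fraction of the total attributable to one cause."""
    return counts[key] / sum(counts.values())

actual    = share(yearly_deaths, "terrorism")  # frequency-based P(A)
perceived = share(news_stories, "terrorism")   # coverage-based "P(A)"

print(f"actual share of deaths:    {actual:.4f}")     # ~0.0012
print(f"coverage-implied share:    {perceived:.4f}")  # ~0.9615
```

Even a brain running Bayes' rule perfectly will misjudge the risk by orders of magnitude if television supplies the prior.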
As far as the article, I've never found a way to read through the various misinterpretations journalists often add when writing about science, but:
"Contrary to what Descartes held dear, the finding that risk prediction and processing of emotions are related suggests that emotions may be intimately involved in rational decision making -- they may help us to correctly assess risk in an uncertain world."
Why would anyone, including Descartes, expect differently? Fear and happiness and all the other emotions exist for a reason. They were how animals evaluated their world and made decisions long before humans came along, and humans are still animals. More than that, emotions are about communication with other humans as well as evaluation of the rest of the world. Why wouldn't they be part of our "rational decision making"? Humans almost always make decisions based on emotions. Just look at politics. Reason is used to justify the emotional commitment, or sometimes to change the emotions. Emotions are the basis of the decision.
This is quite interesting, though not at all new. In Prof. Dörner's Psi-Theory, the modulation of risk assessment by emotions was described and demonstrated in simulations quite a while ago.
Emotions served us well in gauging risk when all forms of risk involved actual, intimate, immediate contact with "suspicious agents." The problem now is that all information is projected, creating a false sense of intimacy where there really isn't any.