Entries Tagged "risk assessment"


Risk Preferences in Chimpanzees and Bonobos

I’ve already written about prospect theory, which explains how people approach risk. People tend to be risk averse when it comes to gains, and risk seeking when it comes to losses:

Evolutionarily, presumably it is a better survival strategy to—all other things being equal, of course—accept small gains rather than risking them for larger ones, and risk larger losses rather than accepting smaller losses. Lions chase young or wounded wildebeest because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there’s a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow.

Similarly, it is evolutionarily better to risk a larger loss than to accept a smaller loss. Because animals tend to live on the razor’s edge between starvation and reproduction, any loss of food—whether small or large—can be equally bad. That is, both can result in death. If that’s true, the best option is to risk everything for the chance at no loss at all.

This behavior has been demonstrated in animals as well: “species of insects, birds and mammals range from risk neutral to risk averse when making decisions about amounts of food, but are risk seeking towards delays in receiving food.”
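To make the gain/loss asymmetry concrete, here is a minimal sketch of the Kahneman-Tversky prospect-theory value function, using their commonly cited 1992 parameter estimates. This is my illustration of the general theory, not anything from the study discussed below:

```python
# Kahneman-Tversky value function with their commonly cited 1992
# parameter estimates (alpha = beta = 0.88, lambda = 2.25).
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha            # concave for gains: risk averse
    return -lam * (-x) ** beta       # convex, steeper for losses: risk seeking

# Gains: a sure $50 is preferred to a 50% chance at $100.
print(value(50) > 0.5 * value(100) + 0.5 * value(0))    # True

# Losses: a 50% chance of losing $100 is preferred to a sure loss of $50.
print(0.5 * value(-100) + 0.5 * value(0) > value(-50))  # True
```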

A recent study examines relative risk preferences in two closely related species: chimpanzees and bonobos.

Abstract

Human and non-human animals tend to avoid risky prospects. If such patterns of economic choice are adaptive, risk preferences should reflect the typical decision-making environments faced by organisms. However, this approach has not been widely used to examine the risk sensitivity in closely related species with different ecologies. Here, we experimentally examined risk-sensitive behaviour in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus), closely related species whose distinct ecologies are thought to be the major selective force shaping their unique behavioural repertoires. Because chimpanzees exploit riskier food sources in the wild, we predicted that they would exhibit greater tolerance for risk in choices about food. Results confirmed this prediction: chimpanzees significantly preferred the risky option, whereas bonobos preferred the fixed option. These results provide a relatively rare example of risk-prone behaviour in the context of gains and show how ecological pressures can sculpt economic decision making.

The basic argument is that in the natural environment of the chimpanzee, if you don’t take risks you don’t get any of the high-value rewards (e.g., monkey meat). Bonobos “rely more heavily than chimpanzees on terrestrial herbaceous vegetation, a more temporally and spatially consistent food source.” So chimpanzees are less likely to avoid taking risks.

Fascinating stuff, but there are at least two problems with this study. The first, the researchers acknowledge in their paper: the animals studied—five of each species—were from the Wolfgang Koehler Primate Research Center at the Leipzig Zoo, and the experimenters were unable to rule out differences in the “experiences, cultures and conditions of the two specific groups tested here.”

The second problem is more general: we know very little about the life of bonobos in the wild. There are a lot of popular stereotypes about bonobos, but they’re sloppy at best.

Even so, I like seeing this kind of research. It’s fascinating.

EDITED TO ADD (5/13): Response to that last link.

Posted on April 17, 2008 at 6:20 AM

Seat Belt Usage and Compensating Behavior

There is a theory that people have an inherent risk thermostat that seeks out an optimal level of risk. When something becomes inherently safer—a law is passed requiring motorcycle riders to wear helmets, for example—people compensate by riding more recklessly. I first read this theory in a 1999 paper by John Adams at University College London, although it seems to have originated with Sam Peltzman.

In any case, this paper presents data that contradicts that thesis:

Abstract—This paper investigates the effects of mandatory seat belt laws on driver behavior and traffic fatalities. Using a unique panel data set on seat belt usage in all U.S. jurisdictions, we analyze how such laws, by influencing seat belt use, affect the incidence of traffic fatalities. Allowing for the endogeneity of seat belt usage, we find that such usage decreases overall traffic fatalities. The magnitude of this effect, however, is significantly smaller than the estimate used by the National Highway Traffic Safety Administration. In addition, we do not find significant support for the compensating-behavior theory, which suggests that seat belt use also has an indirect adverse effect on fatalities by encouraging careless driving. Finally, we identify factors, especially the type of enforcement used, that make seat belt laws more effective in increasing seat belt usage.
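As a rough illustration of the kind of estimate the abstract describes, here is a sketch of a two-stage least squares regression in which endogenous seat belt usage is instrumented by the seat belt laws themselves. This is not the paper's actual specification, and every file, variable, and control name here is hypothetical:

```python
# Hedged sketch: instrument endogenous seat belt usage with the
# presence and enforcement type of mandatory seat belt laws.
# Not the paper's specification; all names are hypothetical.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("state_year_panel.csv")   # hypothetical state-by-year panel
df["const"] = 1.0

model = IV2SLS(
    dependent=df["fatalities_per_vmt"],              # deaths per vehicle-mile
    exog=df[["const", "income", "speed_limit_65"]],  # controls (assumed)
    endog=df["seatbelt_usage"],                      # endogenous regressor
    instruments=df[["primary_law", "secondary_law"]],  # law indicators
)
print(model.fit(cov_type="clustered", clusters=df["state"]).summary)
```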

Posted on April 11, 2008 at 1:44 PM

Overestimating Threats Against Children

This is a great essay by a mom who let her 9-year-old son ride the New York City subway alone:

No, I did not give him a cell phone. Didn’t want to lose it. And no, I didn’t trail him, like a mommy private eye. I trusted him to figure out that he should take the Lexington Avenue subway down, and the 34th Street crosstown bus home. If he couldn’t do that, I trusted him to ask a stranger. And then I even trusted that stranger not to think, “Gee, I was about to catch my train home, but now I think I’ll abduct this adorable child instead.”

Long story short: My son got home, ecstatic with independence.

Long story longer, and analyzed, to boot: Half the people I’ve told this episode to now want to turn me in for child abuse. As if keeping kids under lock and key and helmet and cell phone and nanny and surveillance is the right way to rear kids. It’s not. It’s debilitating—for us and for them.

It’s amazing how our fears blind us. The mother and son appeared on The Today Show, where they both continued to explain why it wasn’t an unreasonable thing to do:

And that was Skenazy’s point in her column: The era is long past when Times Square was a fetid sump and taking a walk in Central Park after dark was tantamount to committing suicide. Recent federal statistics show New York to be one of the safest cities in the nation—right up there with Provo, Utah, in fact.

“Times are back to 1963,” Skenazy said. “It’s safe. It’s a great time to be a kid in the city.”

The problem is that people read about children who are abducted and murdered and fear takes over, she said. And she doesn’t think fear should rule our lives.

Of course, The Today Show interviewer didn’t get it:

Dr. Ruth Peters, a parenting expert and TODAY Show contributor, agreed that children should be allowed independent experiences, but felt there are better—and safer—ways to have them than the one Skenazy chose.

“I’m not so much concerned that he’s going to be abducted, but there’s a lot of people who would rough him up,” she said. “There’s some bullies and things like that. He could have gotten the same experience in a safer manner.”

“It’s safe to go on the subway,” Skenazy replied. “It’s safe to be a kid. It’s safe to ride your bike on the streets. We’re like brainwashed because of all the stories we hear that it isn’t safe. But those are the exceptions. That’s why they make it to the news. This is like, ‘Boy boils egg.’ He did something that any 9-year-old could do.”

Here’s an audio interview with Skenazy.

I am reminded of this great graphic depicting childhood independence diminishing over four generations.

Posted on April 10, 2008 at 1:00 PM

The Feeling and Reality of Security

Security is both a feeling and a reality, and they’re different. You can feel secure even though you’re not, and you can be secure even though you don’t feel it. There are two different concepts mapped onto the same word—the English language isn’t working very well for us here—and it can be hard to know which one we’re talking about when we use the word.

There is considerable value in separating out the two concepts: in explaining how the two are different, and understanding when we’re referring to one and when the other. There is value as well in recognizing when the two converge, understanding why they diverge, and knowing how they can be made to converge again.

Some fundamentals first. Viewed from the perspective of economics, security is a trade-off. There’s no such thing as absolute security, and any security you get has some cost: in money, in convenience, in capabilities, in insecurities somewhere else, whatever. Every time someone makes a decision about security—computer security, community security, national security—he makes a trade-off.

People make these trade-offs as individuals. We all get to decide, individually, if the expense and inconvenience of having a home burglar alarm is worth the security. We all get to decide if wearing a bulletproof vest is worth the cost and tacky appearance. We all get to decide if we’re getting our money’s worth from the billions of dollars we’re spending combating terrorism, and if invading Iraq was the best use of our counterterrorism resources. We might not have the power to implement our opinion, but we get to decide if we think it’s worth it.

Now we may or may not have the expertise to make those trade-offs intelligently, but we make them anyway. All of us. People have a natural intuition about security trade-offs, and we make them, large and small, dozens of times throughout the day. We can’t help it: It’s part of being alive.

Imagine a rabbit, sitting in a field eating grass. And he sees a fox. He’s going to make a security trade-off: Should he stay or should he flee? Over time, the rabbits that are good at making that trade-off will tend to reproduce, while the rabbits that are bad at it will tend to get eaten or starve.
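The rabbit's decision can be written as a toy expected-value calculation. The numbers here are mine, purely for illustration:

```python
# Toy stay-or-flee trade-off; all numbers are illustrative assumptions.
p_attack = 0.2            # chance the fox attacks if the rabbit stays
grazing_gain = 10         # value of continuing to eat
death_cost = 1000         # cost of being caught
flee_cost = 5             # energy wasted if the fox was no threat

ev_stay = grazing_gain - p_attack * death_cost   # 10 - 200 = -190
ev_flee = -flee_cost                             # -5
print("flee" if ev_flee > ev_stay else "stay")   # flee
```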

So, as a successful species on the planet, you’d expect that human beings would be really good at making security trade-offs. Yet, at the same time, we can be hopelessly bad at it. We spend more money on terrorism than the data warrants. We fear flying and choose to drive instead. Why?

The short answer is that people make most trade-offs based on the feeling of security and not the reality.

I’ve written a lot about how people get security trade-offs wrong, and the cognitive biases that cause us to make mistakes. Humans have developed these biases because they make evolutionary sense. And most of the time, they work.

Most of the time—and this is important—our feeling of security matches the reality of security. Certainly, this is true of prehistory. Modern times are harder. Blame technology, blame the media, blame whatever. Our brains are much better optimized for the security trade-offs endemic to living in small family groups in the East African highlands in 100,000 B.C. than to those endemic to living in 2008 New York.

If we make security trade-offs based on the feeling of security rather than the reality, we choose security that makes us feel more secure over security that actually makes us more secure. And that’s what governments, companies, family members and everyone else provide. Of course, there are two ways to make people feel more secure. The first is to make people actually more secure and hope they notice. The second is to make people feel more secure without making them actually more secure, and hope they don’t notice.

The key here is whether we notice. The feeling and reality of security tend to converge when we take notice, and diverge when we don’t. People notice when 1) there are enough positive and negative examples to draw a conclusion, and 2) there isn’t too much emotion clouding the issue.

Both elements are important. If someone tries to convince us to spend money on a new type of home burglar alarm, we as a society will know pretty quickly if he’s got a clever security device or if he’s a charlatan; we can monitor crime rates. But if that same person advocates a new national antiterrorism system, and there weren’t any terrorist attacks before it was implemented, and there weren’t any after it was implemented, how do we know if his system was effective?

People are more likely to realistically assess these incidents if they don’t contradict preconceived notions about how the world works. For example: It’s obvious that a wall keeps people out, so arguing against building a wall across America’s southern border to keep illegal immigrants out is harder to do.

The other thing that matters is agenda. There are lots of people, politicians, companies and so on who deliberately try to manipulate your feeling of security for their own gain. They try to cause fear. They invent threats. They take minor threats and make them major. And when they talk about rare risks with only a few incidents to base an assessment on—terrorism is the big example here—they are more likely to succeed.

Unfortunately, there’s no obvious antidote. Information is important. We can’t make sensible security trade-offs unless we understand the risks involved. But that’s not enough: Few of us really understand cancer, yet we regularly make security decisions based on its risk. What we do is accept that there are experts who understand the risks of cancer, and trust them to make the security trade-offs for us.

There are some complex feedback loops going on here: between emotion and reason, between reality and our knowledge of it, between feeling and familiarity, and between the understanding of how we reason and feel about security and our analyses and feelings. We’re never going to stop making security trade-offs based on the feeling of security, and we’re never going to completely prevent those with specific agendas from trying to manipulate us. But the more we know, the better trade-offs we’ll make.

This article originally appeared on Wired.com.

Posted on April 8, 2008 at 5:50 AM

Security Perception: Fear vs Anger

If you’re fearful, you think you’re more at risk than if you’re angry:

In the aftermath of September 11th, we realized that, tragically, we were presented with an opportunity to find out whether our lab research could predict how the country as a whole would react to the attacks and how U.S. citizens would perceive future risks of terrorism. We did a nationwide field experiment, the first of its kind. As opposed to the participants in our lab studies, the participants in our nationwide field study did have strong feelings about the issues at stake—September 11th and possible future attacks—and they also had a lot of information about these issues. We wondered whether the same emotional carryover that we found in our lab studies would occur—whether fear and anger would still have opposing effects.

In pilot tests, we identified some media coverage of the attacks (video clips) that triggered a sense of fear, and some coverage that triggered a sense of anger. We randomly assigned participants from around the country to be exposed to one of those two conditions—media reports that were known to trigger fear or reports that were known to trigger anger. Next, we asked participants to predict how much risk, if any, they perceived in a variety of different events. For example, they were asked to predict the likelihood of another terrorist attack on the United States within the following 12 months and whether they themselves expected to be victims of potential future attacks. They made many other risk judgments about themselves, the country, and the world as a whole. They also rated their policy preferences.

The results mirrored those of our lab studies. Specifically, people who saw the anger-inducing video clip were subsequently more optimistic on a whole series of judgments about the future—their own future, the country’s future, and the future of the world. In contrast, the people who saw the fear-inducing video clip were less optimistic about their own future, the country’s future, and the world’s future. Policy preferences also differed as a function of exposure to the different media/emotion conditions. Participants who saw the fear-inducing clip subsequently endorsed less aggressive and more conciliatory policies than did participants who saw the anger-inducing clip, even though the clip was only a few minutes long and participants had had weeks to form their own policy opinions regarding responses to terrorism.

So, to summarize: we should not be fearful of future terrorist attacks; we should be angry that our government has done such a poor job safeguarding our liberties. And if we take that second approach, we are more likely to respond effectively to any future attacks.

Posted on March 23, 2008 at 12:42 PM

Fraud Due to a Credit Card Breach

This sort of story is nothing new:

Hannaford said credit and debit card numbers were stolen during the card authorization process and about 4.2 million unique account numbers were exposed.

But it’s rare that we see statistics about the actual risk of fraud:

The company is aware of about 1,800 cases of fraud reported so far relating to the breach.
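Taken together, the two figures imply a very low fraud rate so far. A quick back-of-the-envelope check:

```python
# Rough fraud rate implied by the figures quoted above.
exposed_accounts = 4_200_000
reported_fraud = 1_800
print(f"{reported_fraud / exposed_accounts:.3%}")  # about 0.043%
```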

And this is interesting:

“Visa and MasterCard have stipulated in their contracts with retailers that they will not divulge who the source is when a data breach occurs,” Spitzer said. “We’ve been engaged in a dialogue for a couple years now about changing this rule…. Without knowing who the retailer is that caused the breach, it’s hard for banks to conduct a good investigation on behalf of their consumers. And it’s a problem for consumers as well, because if they know which retailer is responsible, they can rule themselves out for being at risk if they don’t shop at that retailer.”

Posted on March 21, 2008 at 6:39 AM

Risk and the Brain

New research on how the brain estimates risk:

Using functional imaging in a simple gambling task in which risk was constantly changed, the researchers discovered that an early activation of the anterior insula of the brain was associated with mistakes in predicting risk.

The time course of the activation also indicated a role in rapid updating, suggesting that this area is involved in how we learn to modify our risk predictions. The finding was particularly interesting, notes lead author and EPFL professor Peter Bossaerts, because the anterior insula is the locus where we integrate and process emotions.

“This represents an important advance in our understanding of the neurological underpinnings of risk, in analogy with an earlier discovery of a signal for forecast error in the dopaminergic system,” says Bossaerts, “and indicates that we need to update our understanding of the neural basis of reward anticipation in uncertain conditions to include risk assessment.”
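The "forecast error" analogy can be made concrete with a toy learning model (my sketch, not the researchers' code): one prediction error updates the expected reward, and a second, squared error updates the expected risk, i.e., the variance of the reward:

```python
# Toy reward- and risk-prediction-error learner; illustrative only.
import random

alpha = 0.1              # learning rate (assumed)
expected_reward = 0.0    # running estimate of mean reward
expected_risk = 0.0      # running estimate of reward variance

for _ in range(10_000):
    reward = random.gauss(5.0, 2.0)        # noisy outcome
    rpe = reward - expected_reward         # reward prediction error
    expected_reward += alpha * rpe
    risk_pe = rpe ** 2 - expected_risk     # risk prediction error
    expected_risk += alpha * risk_pe

print(expected_reward, expected_risk)      # roughly 5 and 4 (= 2**2)
```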

Posted on March 18, 2008 at 6:51 AM

Hacking Medical Devices

Okay, so this could be big news:

But a team of computer security researchers plans to report Wednesday that it had been able to gain wireless access to a combination heart defibrillator and pacemaker.

They were able to reprogram it to shut down and to deliver jolts of electricity that would potentially be fatal—if the device had been in a person. In this case, the researchers were hacking into a device in a laboratory.

The researchers said they had also been able to glean personal patient data by eavesdropping on signals from the tiny wireless radio that Medtronic, the device’s maker, had embedded in the implant as a way to let doctors monitor and adjust it without surgery.

There’s only a little bit of hyperbole in the New York Times article. The research is being conducted by the Medical Device Security Center, with researchers from Beth Israel Deaconess Medical Center, Harvard Medical School, the University of Massachusetts Amherst, and the University of Washington. They have two published papers:

This is from the FAQ for the second paper (an ICD is an implantable cardiac defibrillator):

As part of our research we evaluated the security and privacy properties of a common ICD. We investigated whether a malicious party could create his or her own equipment capable of wirelessly communicating with this ICD.

Using our own equipment (an antenna, radio hardware, and a PC), we found that someone could violate the privacy of patient information and medical telemetry. The ICD wirelessly transmits patient information and telemetry without observable encryption. The adversary’s computer could intercept wireless signals from the ICD and learn information including: the patient’s name, the patient’s medical history, the patient’s date of birth, and so on.

Using our own equipment (an antenna, radio hardware, and a PC), we found that someone could also turn off or modify therapy settings stored on the ICD. Such a person could render the ICD incapable of responding to dangerous cardiac events. A malicious person could also make the ICD deliver a shock that could induce ventricular fibrillation, a potentially lethal arrhythmia.

Of course, we all know how this happened. It’s a story we’ve seen a zillion times before: the designers didn’t think about security, so the design wasn’t secure.
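For contrast, here is a minimal sketch of what such a design could include: authenticated encryption on the telemetry link. This is a generic illustration using AES-GCM, not Medtronic's protocol or the researchers' code, and it ignores the hard parts (key provisioning, the implant's power budget, and emergency access by clinicians):

```python
# Generic authenticated encryption for device telemetry; illustrative
# only, not any vendor's actual protocol.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # shared at implant time (assumed)
aead = AESGCM(key)

def send_telemetry(payload: bytes, device_id: bytes) -> bytes:
    nonce = os.urandom(12)                 # must never repeat for this key
    # device_id is authenticated but sent in the clear (associated data)
    return nonce + aead.encrypt(nonce, payload, device_id)

def receive_telemetry(message: bytes, device_id: bytes) -> bytes:
    nonce, ciphertext = message[:12], message[12:]
    return aead.decrypt(nonce, ciphertext, device_id)  # raises if tampered

packet = send_telemetry(b"heart rate: 62", b"ICD-0001")
print(receive_telemetry(packet, b"ICD-0001"))
```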

The researchers are making it very clear that this doesn’t mean people shouldn’t get pacemakers and ICDs. Again, from the FAQ:

We strongly believe that nothing in our report should deter patients from receiving these devices if recommended by their physician. The implantable cardiac defibrillator is a proven, life-saving technology. We believe that the risk to patients is low and that patients should not be alarmed. We do not know of a single case where an IMD patient has ever been harmed by a malicious security attack. To carry out the attacks we discuss in our paper would require: malicious intent, technical sophistication, and the ability to place electronic equipment close to the patient. Our goal in performing this study is to improve the security, privacy, safety, and effectiveness of future IMDs.

For all our experiments our antenna, radio hardware, and PC were near the ICD. Our experiments were conducted in a computer laboratory and utilized simulated patient data. We did not experiment with extending the distance between the antenna and the ICD.

I agree with this answer. The risks are there, but the benefits of these devices are much greater. The point of this research isn’t to help people hack into pacemakers and commit murder, but to enable medical device companies to design better implantable equipment in the future. I think it’s great work.

Of course, that will only happen if the medical device companies don’t react like idiots:

Medtronic, the industry leader in cardiac regulating implants, said Tuesday that it welcomed the chance to look at security issues with doctors, regulators and researchers, adding that it had never encountered illegal or unauthorized hacking of its devices that have telemetry, or wireless control, capabilities.

“To our knowledge there has not been a single reported incident of such an event in more than 30 years of device telemetry use, which includes millions of implants worldwide,” a Medtronic spokesman, Robert Clark, said. Mr. Clark added that newer implants with longer transmission ranges than Maximo also had enhanced security.

[…]

St. Jude Medical, the third major defibrillator company, said it used “proprietary techniques” to protect the security of its implants and had not heard of any unauthorized or illegal manipulation of them.

Just because you have no knowledge of something happening does not mean it’s not a risk.

Another article.

The general moral here: more and more, computer technology is becoming intimately embedded into our lives. And with each new application comes new security risks. And we have to take those risks seriously.

Posted on March 12, 2008 at 10:39 AM

Risk of Knowing Too Much About Risk

Interesting:

Dread is a powerful force. The problem with dread is that it leads to terrible decision-making.

Slovic says all of this results from how our brains process risk, which happens in two ways. The first is intuitive, emotional and experience based. Not only do we fear more what we can’t control, but we also fear more what we can imagine or what we experience. This seems to be an evolutionary survival mechanism. In the presence of uncertainty, fear is a valuable defense. Our brains react emotionally, generate anxiety and tell us, “Remember the news report that showed what happened when those other kids took the bus? Don’t put your kids on the bus.”

The second way we process risk is analytical: we use probability and statistics to override, or at least prioritize, our dread. That is, our brain plays devil’s advocate with its initial intuitive reaction, and tries to say, “I know it seems scary, but eight times as many people die in cars as they do on buses. In fact, only one person dies on a bus for every 500 million miles buses travel. Buses are safer than cars.”

Unfortunately for us, that’s often not the voice that wins. Intuitive risk processors can easily overwhelm analytical ones, especially in the presence of those etched-in images, sounds and experiences. Intuition is so strong, in fact, that if you presented someone who had experienced a bus accident with factual risk analysis about the relative safety of buses over cars, it’s highly possible that they’d still choose to drive their kids to school, because their brain washes them in those dreadful images and reminds them that they control a car but don’t control a bus. A car just feels safer. “We have to work real hard in the presence of images to get the analytical part of risk response to work in our brains,” says Slovic. “It’s not easy at all.”

And we’re making it harder by disclosing more risks than ever to more people than ever. Not only does all of this disclosure make us feel helpless, but it also gives us ever more of those images and experiences that trigger the intuitive response without the analytical rigor to override the fear. Slovic points to several recent cases where reason has lost to fear: the sniper who terrorized Washington, D.C.; pathogenic threats like MRSA and brain-eating amoebas. Even the widely publicized drunk-driving death of a baseball player this year led to decisions that, from a risk perspective, were irrational.
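To see how lopsided the bus-versus-car comparison is per mile traveled, here is a back-of-the-envelope check using the bus figure quoted above and a rough U.S. car fatality rate of about 1.4 deaths per 100 million vehicle-miles (my assumption, not a figure from the article):

```python
# Back-of-the-envelope per-mile fatality comparison; the car figure
# is a rough assumption, not from the article.
bus_rate = 1 / 500_000_000        # deaths per bus-mile (quoted above)
car_rate = 1.4 / 100_000_000      # deaths per vehicle-mile (assumed)
print(f"cars ~{car_rate / bus_rate:.0f}x deadlier per mile")  # ~7x
```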

Posted on March 6, 2008 at 6:24 AM

