Entries Tagged "brain"


India Using Brain Scans to Prove Guilt in Court

This seems like a whole lot of pseudo-science:

The technologies, generally regarded as promising but unproved, have yet to be widely accepted as evidence—except in India, where in recent years judges have begun to admit brain scans. But it was only in June, in a murder case in Pune, in Maharashtra State, that a judge explicitly cited a scan as proof that the suspect’s brain held “experiential knowledge” about the crime that only the killer could possess, sentencing her to life in prison.

[…]

This latest Indian attempt at getting past criminals’ defenses begins with an electroencephalogram, or EEG, in which electrodes are placed on the head to measure electrical waves. The suspect sits in silence, eyes shut. An investigator reads aloud details of the crime—as prosecutors see it—and the resulting brain images are processed using software built in Bangalore.

The software tries to detect whether, when the crime’s details are recited, the brain lights up in specific regions—the areas that, according to the technology’s inventors, show measurable changes when experiences are relived, their smells and sounds summoned back to consciousness. The inventors of the technology claim the system can distinguish between people’s memories of events they witnessed and deeds they committed.

EDITED TO ADD (10/13): An expert committee found the technique unscientific, but its findings weren’t accepted.

Posted on September 22, 2008 at 6:10 AM

Nasal Spray Increases Trust for Strangers

Okay, this’ll be fun. What’s the most creative abuse for this that you can think of?

Previous studies have shown that participants in “trust games” took greater risks with their money after inhaling the hormone via a nasal spray.

In this latest experiment, published in the journal Neuron, the researchers asked volunteer subjects to take part in a similar game.

They were each asked to contribute money to a human trustee, with the understanding that the trustee would invest the money and decide whether to return the profits, or betray the subject’s trust by keeping the profit.

The subjects also received doses of oxytocin or a placebo via a nasal spray.

After investing, the participants were given feedback on the trustees. When their trust was abused, the placebo group became less willing to invest. But the players who had been given oxytocin continued to entrust their money to the brokers.

“We can see that oxytocin has a very powerful effect,” said Dr Baumgartner.

“The subjects who received oxytocin demonstrated no change in their trust behaviour, even though they were informed that their trust was not honoured in roughly 50% of cases.”

In a second game, where the human trustees were replaced by a computer which gave random returns, the hormone made no difference to the players’ investment behaviour.

“It appears that oxytocin affects social responses specifically related to trust,” Dr Baumgartner said.
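The experimental setup described above can be sketched as a toy simulation. This is not the researchers’ protocol: the investment amounts, the betrayal probability, and the rule that placebo players halve their stake after a betrayal are all invented here purely to illustrate the reported behavioral difference.

```python
import random

def simulate_player(oxytocin: bool, rounds: int = 12,
                    betray_prob: float = 0.5, seed: int = 0):
    """Toy model of the trust game: each round the player invests an
    amount; after a betrayal, placebo players cut their next investment
    in half, while oxytocin players keep investing the same amount."""
    rng = random.Random(seed)
    investment = 10.0
    history = []
    for _ in range(rounds):
        history.append(investment)
        betrayed = rng.random() < betray_prob  # roughly 50% of rounds
        if betrayed and not oxytocin:
            investment *= 0.5  # placebo group: trust erodes
        # oxytocin group: no change in trust behaviour, per the study
    return history

placebo = simulate_player(oxytocin=False)
dosed = simulate_player(oxytocin=True)
```

Under these assumed rules, the placebo trajectory decays after the first betrayal while the oxytocin trajectory stays flat, mirroring the “no change in their trust behaviour” finding quoted above.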

Posted on May 26, 2008 at 1:30 PM

Boring Jobs Dull the Mind

We already knew this, but it’s good to reinforce the lesson:

In the study, Dr Eichele and his colleagues asked participants to repeatedly perform a “flanker task”—an experiment in which individuals must quickly respond to visual clues.

As they did so, brain scans were performed using functional magnetic resonance imaging (fMRI).

They found the participants’ mistakes were “foreshadowed” by a particular pattern of brain activity.

“To our surprise, up to 30 seconds before the mistake we could detect a distinct shift in activity,” said Dr Stefan Debener, of Southampton University, UK.

“The brain begins to economise, by investing less effort to complete the same task.

“We see a reduction in activity in the prefrontal cortex. At the same time, we see an increase in activity in an area which is more active in states of rest, known as the Default Mode Network (DMN).”

This has security implications whenever you have people watching the same thing over and over again, looking for anomalies: airport screeners looking at X-ray scans, casino dealers looking for cheaters, building guards looking for bad guys. It’s hard to do it correctly, because the brain doesn’t work that way.

EDITED TO ADD (4/28): This video demonstrates the point nicely.

Posted on April 26, 2008 at 6:37 AM

Risk and the Brain

New research on how the brain estimates risk:

Using functional imaging in a simple gambling task in which risk was constantly changed, the researchers discovered that an early activation of the anterior insula of the brain was associated with mistakes in predicting risk.

The time course of the activation also indicated a role in rapid updating, suggesting that this area is involved in how we learn to modify our risk predictions. The finding was particularly interesting, notes lead author and EPFL professor Peter Bossaerts, because the anterior insula is the locus where we integrate and process emotions.

“This represents an important advance in our understanding of the neurological underpinnings of risk, in analogy with an earlier discovery of a signal for forecast error in the dopaminergic system,” says Bossaerts, “and indicates that we need to update our understanding of the neural basis of reward anticipation in uncertain conditions to include risk assessment.”

Posted on March 18, 2008 at 6:51 AM

Risk of Knowing Too Much About Risk

Interesting:

Dread is a powerful force. The problem with dread is that it leads to terrible decision-making.

Slovic says all of this results from how our brains process risk, which happens in two ways. The first is intuitive, emotional and experience-based. Not only do we fear more what we can’t control, but we also fear more what we can imagine or what we experience. This seems to be an evolutionary survival mechanism. In the presence of uncertainty, fear is a valuable defense. Our brains react emotionally, generate anxiety and tell us, “Remember the news report that showed what happened when those other kids took the bus? Don’t put your kids on the bus.”

The second way we process risk is analytical: we use probability and statistics to override, or at least prioritize, our dread. That is, our brain plays devil’s advocate with its initial intuitive reaction, and tries to say, “I know it seems scary, but eight times as many people die in cars as they do on buses. In fact, only one person dies on a bus for every 500 million miles buses travel. Buses are safer than cars.”

Unfortunately for us, that’s often not the voice that wins. Intuitive risk processors can easily overwhelm analytical ones, especially in the presence of those etched-in images, sounds and experiences. Intuition is so strong, in fact, that if you presented someone who had experienced a bus accident with factual risk analysis about the relative safety of buses over cars, it’s highly possible that they’d still choose to drive their kids to school, because their brain washes them in those dreadful images and reminds them that they control a car but don’t control a bus. A car just feels safer. “We have to work real hard in the presence of images to get the analytical part of risk response to work in our brains,” says Slovic. “It’s not easy at all.”

And we’re making it harder by disclosing more risks than ever to more people than ever. Not only does all of this disclosure make us feel helpless, but it also gives us ever more of those images and experiences that trigger the intuitive response without analytical rigor to override the fear. Slovic points to several recent cases where reason has lost to fear: The sniper who terrorized Washington D.C.; pathogenic threats like MRSA and brain-eating amoeba. Even the widely publicized drunk-driving death of a baseball player this year led to decisions that, from a risk perspective, were irrational.
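The analytical voice the excerpt describes is just a few lines of arithmetic. The bus figure below is the one quoted above (one death per 500 million bus miles); the car fatality rate is an assumed illustrative number, not a statistic from the article.

```python
# Per-mile fatality arithmetic, as the "analytical" risk processor would do it.
# Bus figure is from the quote; the car figure is assumed for illustration.
bus_deaths_per_billion_miles = 1 / 500e6 * 1e9   # = 2.0 deaths per billion miles
car_deaths_per_billion_miles = 7.0               # assumed, for comparison only
relative_risk = car_deaths_per_billion_miles / bus_deaths_per_billion_miles

print(f"Bus: {bus_deaths_per_billion_miles} deaths per billion miles")
print(f"Cars are ~{relative_risk:.1f}x riskier per mile under these assumptions")
```

The point of the excerpt is that even when this calculation is sitting right in front of us, the intuitive processor, fed on vivid images, usually wins anyway.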

Posted on March 6, 2008 at 6:24 AM

Your Brain on Fear

Interesting article from Newsweek:

The evolutionary primacy of the brain’s fear circuitry makes it more powerful than the brain’s reasoning faculties. The amygdala sprouts a profusion of connections to higher brain regions—neurons that carry one-way traffic from amygdala to neocortex. Few connections run from the cortex to the amygdala, however. That allows the amygdala to override the products of the logical, thoughtful cortex, but not vice versa. So although it is sometimes possible to think yourself out of fear (“I know that dark shape in the alley is just a trash can”), it takes great effort and persistence. Instead, fear tends to overrule reason, as the amygdala hobbles our logic and reasoning circuits. That makes fear “far, far more powerful than reason,” says neurobiologist Michael Fanselow of the University of California, Los Angeles. “It evolved as a mechanism to protect us from life-threatening situations, and from an evolutionary standpoint there’s nothing more important than that.”

I’ve already written about this sort of thing.

Posted on January 9, 2008 at 6:10 AM

Psychoecology and the DHS

Weird:

The Department of Homeland Security (DHS) has gone to many strange places in its search for ways to identify terrorists before they attack, but perhaps none stranger than this lab on the outskirts of Russia’s capital. The institute has for years served as the center of an obscure field of human behavior study—dubbed psychoecology—that traces its roots back to Soviet-era mind control research.

[…]

SSRM Tek is presented to a subject as an innocent computer game that flashes subliminal images across the screen—like pictures of Osama bin Laden or the World Trade Center. The “player”—a traveler at an airport screening line, for example—presses a button in response to the images, without consciously registering what he or she is looking at. The terrorist’s response to the scrambled image involuntarily differs from the innocent person’s, according to the theory.

Posted on September 24, 2007 at 7:34 AM

MRI Lie Detectors

Long and interesting article on fMRI lie detectors.

I was particularly struck by this paragraph, about why people are bad at detecting lies:

Maureen O’Sullivan, a deception researcher at the University of San Francisco, studies why humans are so bad at recognizing lies. Many people, she says, base assessments of truthfulness on irrelevant factors, such as personality or appearance. “Baby-faced, non-weird, and extroverted people are more likely to be judged truthful,” she says. (Maybe this explains my trust in Steve Glass.) People are also blinkered by the “truthfulness bias”: the vast majority of questions we ask of other people—the time, the price of the breakfast special—are answered honestly, and truth is therefore our default expectation. Then, there’s the “learning-curve problem.” We don’t have a refined idea of what a successful lie looks and sounds like, since we almost never receive feedback on the fibs that we’ve been told; the co-worker who, at the corporate retreat, assured you that she loved your presentation doesn’t usually reveal later that she hated it. As O’Sullivan puts it, “By definition, the most convincing lies go undetected.”

EDITED TO ADD (8/28): The New York Times has an article on the topic.

Posted on July 25, 2007 at 6:26 AM

Scanning People's Intentions

Here’s an article on a brain scanning technique that reads people’s intentions.

There’s not a lot of detail, but my guess is that it doesn’t work very well. But that’s not really the point. If it doesn’t work today, it will in five, ten, twenty years; it will work eventually.

What we need to do, today, is debate the legality and ethics of these sorts of interrogations:

“These techniques are emerging and we need an ethical debate about the implications, so that one day we’re not surprised and overwhelmed and caught on the wrong foot by what they can do. These things are going to come to us in the next few years and we should really be prepared,” Professor Haynes told the Guardian.

The use of brain scanners to judge whether people are likely to commit crimes is a contentious issue that society should tackle now, according to Prof Haynes. “We see the danger that this might become compulsory one day, but we have to be aware that if we prohibit it, we are also denying people who aren’t going to commit any crime the possibility of proving their innocence.”

More discussion along these lines is in the article. And I wrote about this sort of thing in 2005, in the context of Judge Roberts’ confirmation hearings.

Posted on February 15, 2007 at 6:32 AM

Eyewitness Identification Reform

According to this article, “Mistaken eyewitness identification is the leading cause of wrongful convictions.” Given what I’ve been reading recently about memory and the brain, this does not surprise me at all.

New Mexico is currently debating a bill reforming eyewitness identification procedures:

Under the proposed regulations, an eyewitness must provide a written description before a lineup takes place; there must be at least six individuals in a live lineup and 10 photos in a photographic line-up; and the members of the lineup must be shown sequentially rather than simultaneously.

The bill would also limit the time in which law enforcement could bring a suspect to a victim or witness for a physical identification to within one hour after the crime was reported. Anything beyond one hour would require a lineup with multiple photos or people.

I don’t have access to any of the psychological or criminology studies that back these reforms up, but the bill is being supported by the right sorts of people.
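For concreteness, the proposed regulations quoted above can be encoded as a small compliance check. The `Lineup` structure and field names here are invented for illustration; only the numeric thresholds and the sequential-presentation and written-description requirements come from the bill as summarized.

```python
from dataclasses import dataclass

@dataclass
class Lineup:
    kind: str                        # "live" or "photo"
    members: int                     # people in a live lineup, or photos shown
    sequential: bool                 # shown one at a time, not simultaneously
    written_description_first: bool  # eyewitness description taken beforehand

def meets_proposed_rules(lineup: Lineup) -> bool:
    """Check a lineup against the bill's proposed requirements: a prior
    written description, sequential presentation, and at least 6 people
    in a live lineup or 10 photos in a photographic one."""
    if not lineup.written_description_first or not lineup.sequential:
        return False
    minimum = 6 if lineup.kind == "live" else 10
    return lineup.members >= minimum
```

A sequential live lineup of five, for example, would fail the check even with a prior written description, because it falls below the six-person minimum.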

Posted on February 7, 2007 at 6:38 AM
