Entries Tagged "brain"


Yet Another New Biometric: Brainprints

New research:

In “Brainprint,” a newly published study in academic journal Neurocomputing, researchers from Binghamton University observed the brain signals of 45 volunteers as they read a list of 75 acronyms, such as FBI and DVD. They recorded the brain’s reaction to each group of letters, focusing on the part of the brain associated with reading and recognizing words, and found that participants’ brains reacted differently to each acronym, enough that a computer system was able to identify each volunteer with 94 percent accuracy. The results suggest that brainwaves could be used by security systems to verify a person’s identity.

I have no idea what the false negatives are, or how robust this biometric is over time, but the article makes the important point that, unlike most biometrics, this one can be updated.

“If someone’s fingerprint is stolen, that person can’t just grow a new finger to replace the compromised fingerprint—the fingerprint for that person is compromised forever. Fingerprints are ‘non-cancellable.’ Brainprints, on the other hand, are potentially cancellable. So, in the unlikely event that attackers were actually able to steal a brainprint from an authorized user, the authorized user could then ‘reset’ their brainprint,” Laszlo said.

Presumably the resetting involves a new set of acronyms.
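For intuition, here is a minimal sketch of how template-matching identification of this sort can work, with random vectors standing in for real EEG-derived features. The feature dimensions, the cosine-similarity measure, and the noise model are all my assumptions, not the paper’s; resetting a brainprint would then amount to re-enrolling everyone against a fresh stimulus set.

```python
import numpy as np

# Hypothetical sketch: identify a person by matching event-related
# potential (ERP) features against enrolled per-person templates.
# Random vectors stand in for real EEG-derived features.

rng = np.random.default_rng(0)
N_PEOPLE, N_ACRONYMS, N_FEATURES = 45, 75, 16

# Enrollment: one template per acronym per person (in practice, an
# average over many recorded trials).
enrolled = {p: rng.normal(size=(N_ACRONYMS, N_FEATURES)) for p in range(N_PEOPLE)}

def identify(sample):
    """Return the enrolled person whose templates best match `sample`,
    a (N_ACRONYMS, N_FEATURES) array of ERP features from one session."""
    def similarity(a, b):
        # Mean cosine similarity across all 75 acronyms.
        num = (a * b).sum(axis=1)
        den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
        return (num / den).mean()
    return max(enrolled, key=lambda p: similarity(enrolled[p], sample))

# A genuine user should look like their own template plus session noise.
probe = enrolled[7] + rng.normal(scale=0.3, size=(N_ACRONYMS, N_FEATURES))
print(identify(probe))  # 7, with high probability
```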

Author’s self-archived version of the paper (pdf).

Posted on June 4, 2015 at 10:36 AM

Hacking Brain-Computer Interfaces

This fascinating piece of research asks: can an attacker surreptitiously collect secret information from the brains of people using brain-computer interface devices? One article:

A team of security researchers from Oxford, UC Berkeley, and the University of Geneva say that they were able to deduce digits of PIN numbers, birth months, areas of residence and other personal information by presenting 30 headset-wearing subjects with images of ATM machines, debit cards, maps, people, and random numbers in a series of experiments. The paper, titled “On the Feasibility of Side-Channel Attacks with Brain Computer Interfaces,” represents the first major attempt to uncover potential security risks in the use of the headsets.

This is a new development in spyware.
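The core trick is ordinary signal averaging: present candidate stimuli (digits, bank logos, maps) and look for the one that evokes the strongest recognition response. A hypothetical sketch with simulated data; the signal and noise levels are invented, and a real attack would read raw EEG from the headset:

```python
import numpy as np

# Hypothetical sketch of one guessing step in a P300-style side-channel
# attack: flash each candidate digit many times and pick the digit whose
# averaged response amplitude is largest. The "EEG" here is simulated.

rng = np.random.default_rng(1)
SECRET_DIGIT = 4          # e.g., one digit of the victim's PIN
TRIALS_PER_DIGIT = 40

def simulated_response(digit):
    # Meaningful stimuli evoke a larger P300-like deflection.
    p300 = 2.0 if digit == SECRET_DIGIT else 0.0
    return p300 + rng.normal(scale=3.0)   # single-trial EEG is very noisy

# Averaging many trials per digit pulls the P300 out of the noise.
avg = {d: np.mean([simulated_response(d) for _ in range(TRIALS_PER_DIGIT)])
       for d in range(10)}
print("best guess:", max(avg, key=avg.get))   # usually 4
```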

EDITED TO ADD (9/6): More articles. And here’s a discussion of the pros and cons of this sort of technology.

Posted on September 5, 2012 at 6:06 AM

The Unreliability of Eyewitness Testimony

Interesting article:

The reliability of witness testimony is a vastly complex subject, but legal scholars and forensic psychologists say it’s possible to extract the truth from contradictory accounts and evolving memories. According to Barbara Tversky, professor emerita of psychology at Stanford University, the bottom line is this: “All other things equal, earlier recountings are more likely to be accurate than later ones. The longer the delay, the more likely that subsequent information will get confused with the target memory.”

[…]

Memory is a reconstructive process, says Richard Wise, a forensic psychologist at the University of North Dakota. “When an eyewitness recalls a crime, he or she must reconstruct his or her memory of the crime.” This, he says, is an unconscious process. To reconstruct a memory, the eyewitness draws upon several sources of information, only one being his or her actual recollection.

“To fill in gaps in memory, the eyewitness relies upon his or her expectation, attitudes, prejudices, bias, and prior knowledge. Furthermore, information supplied to an eyewitness after a crime (i.e., post-event information) by the police, prosecutor, other eyewitnesses, media, etc., can alter an eyewitness’s memory of the crime,” Wise said in an email.

That external input is what makes eyewitness testimony so unreliable. Eyewitnesses are generally unaware that their memory has been altered by post-event information, and feel convinced they’re recalling only the incident itself. “Once an eyewitness’s memory of the crime has been altered by post-event information, it is difficult or impossible to restore the eyewitness’s original memory of the crime,” Wise told Life’s Little Mysteries.

Posted on June 4, 2012 at 6:36 AM

More Brain Scans to Detect Future Terrorists

Worked well in a test:

For the first time, the Northwestern researchers used the P300 testing in a mock terrorism scenario in which the subjects are planning, rather than perpetrating, a crime. The P300 brain waves were measured by electrodes attached to the scalp of the make-believe “persons of interest” in the lab.

The most intriguing part of the study in terms of real-world implications, Rosenfeld said, is that even when the researchers had no advance details about mock terrorism plans, the technology was still accurate in identifying critical concealed information.

“Without any prior knowledge of the planned crime in our mock terrorism scenarios, we were able to identify 10 out of 12 terrorists and, among them, 20 out of 30 crime-related details,” Rosenfeld said. “The test was 83 percent accurate in predicting concealed knowledge, suggesting that our complex protocol could identify future terrorist activity.”

Rosenfeld is a leading scholar in the study of P300 testing to reveal concealed information. Basically, electrodes are attached to the scalp to record P300 brain activity—or brief electrical patterns in the cortex—that occur, according to the research, when meaningful information is presented to a person with “guilty knowledge.”

More news stories.

The base rate of terrorism makes this test useless, but the technology will only get better.
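To see why, run the arithmetic. Assume the test really is 83 percent accurate in both directions and, generously, that one person in a million screened is a terrorist. Those are my illustrative numbers, not the study’s:

```python
# Positive predictive value of an 83%-accurate terrorist test at a
# realistic base rate. All inputs are illustrative assumptions.
sensitivity = 0.83        # P(test positive | terrorist)
specificity = 0.83        # P(test negative | not a terrorist)
base_rate = 1e-6          # assume 1 terrorist per million people screened

p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
ppv = sensitivity * base_rate / p_positive
print(f"P(terrorist | positive test) = {ppv:.6%}")
# About 0.0005% -- roughly 200,000 false alarms for every real terrorist.
```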

Posted on August 6, 2010 at 5:36 AM

Security Trade-Offs in Crayfish

Interesting:

The experiments offered the crayfish stark decisions—a choice between finding their next meal and becoming a meal for an apparent predator. In deciding on a course of action, they carefully weighed the risk of attack against the expected reward, Herberholz says.

Using a non-invasive method that allowed the crustaceans to freely move, the researchers offered juvenile Louisiana Red Swamp crayfish a simultaneous threat and reward: ahead lay the scent of food, but also the apparent approach of a predator.

In some cases, the “predator” (actually a shadow) appeared to be moving swiftly, in others slowly. To up the ante, the researchers also varied the intensity of the odor of food.

How would the animals react? Did the risk of being eaten outweigh their desire to feed? Should they “freeze”—in effect, play dead, hoping the predator would pass by, while the crayfish remained close to its meal—or move away from both the predator and food?

To make a quick escape, the crayfish flip their tails and swim backwards, an action preceded by a strong, measurable electric neural impulse. The specially designed tanks could non-invasively pick up and record these electrical signals. This allowed the researchers to identify the activation patterns of specific neurons during the decision-making process.

Although tail-flipping is a very effective escape strategy against natural predators, it adds critical distance between a foraging animal and its next meal.

The crayfish took decisive action in a matter of milliseconds. When faced with very fast shadows, they were significantly more likely to freeze than tail-flip away.

The researchers conclude that there is little incentive for retreat when the predator appears to be moving too rapidly for escape, and the crayfish would lose its own opportunity to eat. This was also true when the food odor was the strongest, raising the benefit of staying close to the expected reward. A strong predator stimulus, however, was able to override an attractive food signal, and crayfish decided to flip away under these conditions.

It’s not that any of this surprises anyone; it’s that researchers can now try to figure out the exact brain processes that enable the crayfish to make these decisions.
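Abstractly, the crayfish is solving a small expected-value problem. Here is a toy model of the trade-off the researchers describe; every payoff and probability below is invented for illustration:

```python
# Toy expected-value model of the freeze-vs-flip decision. The paper
# measures the behavior; these parameters are made up to illustrate it.

def best_action(p_escape_if_flip, p_survive_if_freeze, food_value):
    survive = 10.0                                            # value of not being eaten
    ev_freeze = p_survive_if_freeze * (survive + food_value)  # stay near the food
    ev_flip = p_escape_if_flip * survive                      # flipping abandons the meal
    return "freeze" if ev_freeze >= ev_flip else "tail-flip"

# Very fast shadow: flipping probably won't help, so stay near the food.
print(best_action(p_escape_if_flip=0.3, p_survive_if_freeze=0.5, food_value=3.0))
# Slow-approaching predator: escape is near-certain, so flip away.
print(best_action(p_escape_if_flip=0.95, p_survive_if_freeze=0.5, food_value=3.0))
```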

Posted on June 25, 2010 at 6:53 AM

Lie Detector Charlatans

This is worth reading:

Five years ago I wrote a Language Log post entitled “BS conditional semantics and the Pinocchio effect” about the nonsense spouted by a lie detection company, Nemesysco. I was disturbed by the marketing literature of the company, which suggested a 98% success rate in detecting evil intent of airline passengers, and included crap like this:

The LVA uses a patented and unique technology to detect “Brain activity finger prints” using the voice as a “medium” to the brain and analyzes the complete emotional structure of your subject. Using wide range spectrum analysis and micro-changes in the speech waveform itself (not micro tremors!) we can learn about any anomaly in the brain activity, and furthermore, classify it accordingly. Stress (“fight or flight” paradigm) is only a small part of this emotional structure

The 98% figure, as I pointed out, and as Mark Liberman made even clearer in a follow-up post, is meaningless. There is no type of lie detector in existence whose performance can reasonably be compared to the performance of fingerprinting. It is meaningless to talk about someone’s “complete emotional structure”, and there is no interesting sense in which any current technology can analyze it. It is not the case that looking at speech will provide information about “any anomaly in the brain activity”: at most it will tell you about some anomalies. Oh, the delicious irony: a lie detector company that engages in wanton deception.

So, OK, Nemesysco, as I said in my earlier post, is clearly trying to pull the wool over people’s eyes. Disturbing, yes, but it doesn’t follow from the fact that its marketing is wildly misleading that the company’s technology is of no merit. However, we now know that the company’s technology is, in fact, of no merit. How do we know? Because two phoneticians, Anders Eriksson and Francisco Lacerda, studied the company’s technology, based largely on the original patent, and provided a thorough analysis in a 2007 article, “Charlatanry in forensic speech science: A problem to be taken seriously,” which appeared in the International Journal of Speech Language and the Law (IJSLL), vol. 14.2, 2007, 169–193, Equinox Publishing. Eriksson and Lacerda conclude, regarding the original technology on which Nemesysco’s products are based, Layered Voice Analysis (LVA), that:

Any qualified speech scientist with some computer background can see at a glance, by consulting the documents, that the methods on which the program is based have no scientific validity.

Most of the lie detector industry is based on, well, lies.

EDITED TO ADD (5/13): The paper is available here. More details here. Nemesysco’s systems are being used to bully people out of receiving government aid in the UK.

Posted on May 6, 2009 at 12:14 PM

Leaving Infants in the Car

It happens; sometimes they die.

“Death by hyperthermia” is the official designation. When it happens to young children, the facts are often the same: An otherwise loving and attentive parent one day gets busy, or distracted, or upset, or confused by a change in his or her daily routine, and just… forgets a child is in the car. It happens that way somewhere in the United States 15 to 25 times a year, parceled out through the spring, summer and early fall.

It’s a fascinating piece of reporting, with some interesting security aspects. We protect against a common risk, and increase the chances of a rare risk:

Two decades ago, this was relatively rare. But in the early 1990s, car-safety experts declared that passenger-side front airbags could kill children, and they recommended that child seats be moved to the back of the car; then, for even more safety for the very young, that the baby seats be pivoted to face the rear.

There is a theory of why we forget something so important. Dropping off the baby is routine:

The human brain, he says, is a magnificent but jury-rigged device in which newer and more sophisticated structures sit atop a junk heap of prototype brains still used by lower species. At the top of the device are the smartest and most nimble parts: the prefrontal cortex, which thinks and analyzes, and the hippocampus, which makes and holds on to our immediate memories. At the bottom is the basal ganglia, nearly identical to the brains of lizards, controlling voluntary but barely conscious actions.

Diamond says that in situations involving familiar, routine motor skills, the human animal presses the basal ganglia into service as a sort of auxiliary autopilot. When our prefrontal cortex and hippocampus are planning our day on the way to work, the ignorant but efficient basal ganglia is operating the car; that’s why you’ll sometimes find yourself having driven from point A to point B without a clear recollection of the route you took, the turns you made or the scenery you saw.

There are technical solutions:

In 2000, Chris Edwards, Terry Mack and Edward Modlin began to work on just such a product after one of their colleagues, Kevin Shelton, accidentally left his 9-month-old son to die in the parking lot of NASA Langley Research Center in Hampton, Va. The inventors patented a device with weight sensors and a keychain alarm. Based on aerospace technology, it was easy to use; it was relatively cheap, and it worked.
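The post doesn’t describe the patented design in detail, but the trigger logic for a device like this is presumably something like the following sketch; the sensor inputs and the distance threshold are hypothetical:

```python
# Hypothetical sketch of the alarm logic such a device might implement.

def should_alarm(seat_occupied: bool, ignition_on: bool,
                 keyfob_distance_m: float, threshold_m: float = 10.0) -> bool:
    """Sound the keychain alarm if weight remains on the child seat
    after the ignition is off and the driver (keyfob) walks away."""
    return seat_occupied and not ignition_on and keyfob_distance_m > threshold_m

print(should_alarm(seat_occupied=True, ignition_on=False, keyfob_distance_m=15.0))
# True -> alarm
```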

Janette Fennell had high hopes for this product: The dramatic narrative behind it, she felt, and the fact that it came from NASA, created a likelihood of widespread publicity and public acceptance.

That was five years ago. The device still isn’t on the shelves. The inventors could not find a commercial partner willing to manufacture it. One big problem was liability. If you made it, you could face enormous lawsuits if it malfunctioned and a child died. But another big problem was psychological: Marketing studies suggested it wouldn’t sell well.

The problem is this simple: People think this could never happen to them.

There’s talk of making this a mandatory safety feature, but nothing about the cost per life saved. (In general, a regulatory goal is between $1 million and $10 million per life saved.)
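A back-of-the-envelope calculation shows why that range matters. Every input below is my own rough assumption, not a figure from the article:

```python
# Rough steady-state cost per life saved for a mandatory seat sensor.
new_cars_per_year = 15e6    # assumed US new-vehicle sales
device_cost = 50.0          # assumed installed cost per car
deaths_per_year = 20        # midpoint of the article's 15-25 range
effectiveness = 0.9         # assume the device prevents 90% of deaths

annual_cost = new_cars_per_year * device_cost
lives_saved = deaths_per_year * effectiveness
print(f"${annual_cost / lives_saved:,.0f} per life saved")
# ~$42 million -- well above the $1M-$10M regulatory range.
```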

And there’s the question of whether someone who accidentally leaves a baby in the car, resulting in the baby’s death, should be prosecuted as a criminal.

EDITED TO ADD (4/14): Tips to prevent this kind of tragedy.

Posted on March 17, 2009 at 1:10 PM

The Neuroscience of Cons

Fascinating:

The key to a con is not that you trust the conman, but that he shows he trusts you. Conmen ply their trade by appearing fragile or needing help, by seeming vulnerable. Because of THOMAS [The Human Oxytocin Mediated Attachment System], the human brain makes us feel good when we help others—this is the basis for attachment to family and friends and cooperation with strangers. “I need your help” is a potent stimulus for action.

This is interesting. The conventional wisdom is that cons rely on the mark’s greed to work, but this short essay implies that greed is only a secondary factor.

Posted on November 18, 2008 at 6:32 AM

Does Risk Management Make Sense?

We engage in risk management all the time, but it only makes sense if we do it right.

“Risk management” is just a fancy term for the cost-benefit tradeoff associated with any security decision. It’s what we do when we react to fear, or try to make ourselves feel secure. It’s the fight-or-flight reflex that evolved in primitive fish and remains in all vertebrates. It’s instinctual, intuitive and fundamental to life, and one of the brain’s primary functions.

Some have hypothesized that humans have a “risk thermostat” that tries to maintain some optimal risk level. It explains why we drive our motorcycles faster when we wear a helmet, or are more likely to take up smoking during wartime. It’s our natural risk management in action.

The problem is that our brains are intuitively suited to the sorts of risk management decisions endemic to living in small family groups in the East African highlands in 100,000 BC, and not to living in the New York City of 2008. We make systematic risk management mistakes—miscalculating the probability of rare events, reacting more to stories than data, responding to the feeling of security rather than reality, and making decisions based on irrelevant context. And that risk thermostat of ours? It’s not nearly as finely tuned as we might like it to be.

Like a rabbit that responds to an oncoming car with its default predator avoidance behavior—dart left, dart right, dart left, and at the last moment jump—instead of just getting out of the way, our Stone Age intuition doesn’t serve us well in a modern technological society. So when we in the security industry use the term “risk management,” we don’t want you to do it by trusting your gut. We want you to do risk management consciously and intelligently, to analyze the tradeoff and make the best decision.

This means balancing the costs and benefits of any security decision—buying and installing a new technology, implementing a new procedure or forgoing a common precaution. It means allocating a security budget to mitigate different risks by different amounts. It means buying insurance to transfer some risks to others. It’s what businesses do, all the time, about everything. IT security has its own risk management decisions, based on the threats and the technologies.
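One common way to formalize that balancing, though certainly not the only one, is annualized loss expectancy: estimate how often each bad event occurs and what it costs, and buy a countermeasure only if it reduces the expected loss by more than it costs. A sketch with made-up numbers:

```python
# Annualized loss expectancy (ALE): a standard formalization of the
# security cost-benefit tradeoff. All figures below are invented.

def ale(annual_rate, loss_per_event):
    """Expected annual loss = how often it happens x what it costs."""
    return annual_rate * loss_per_event

before = ale(annual_rate=0.05, loss_per_event=2_000_000)  # $100,000/yr
after = ale(annual_rate=0.01, loss_per_event=2_000_000)   # $20,000/yr
countermeasure_cost = 60_000                              # $/yr

savings = before - after
print(f"expected savings ${savings:,.0f}/yr vs cost ${countermeasure_cost:,}/yr")
print("worth buying" if savings > countermeasure_cost else "not worth it")
# savings $80,000/yr > $60,000/yr -> worth buying
```

The hard part, as noted below, is that the rate and loss estimates are exactly the data we usually lack.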

There’s never just one risk, of course, and bad risk management decisions often carry an underlying tradeoff. Terrorism policy in the U.S. is based more on politics than actual security risk, but the politicians who make these decisions are concerned about the risks of not being re-elected.

Many corporate security decisions are made to mitigate the risk of lawsuits rather than address the risk of any actual security breach. And individuals make risk management decisions that consider not only the risks to the corporation, but the risks to their departments’ budgets, and to their careers.

You can’t completely remove emotion from risk management decisions, but the best way to keep risk management focused on the data is to formalize the methodology. That’s what companies that manage risk for a living—insurance companies, financial trading firms and arbitrageurs—try to do. They try to replace intuition with models, and hunches with mathematics.

The problem in the security world is that we often lack the data to do risk management well. Technological risks are complicated and subtle. We don’t know how well our network security will keep the bad guys out, and we don’t know the cost to the company if we don’t keep them out. And the risks change all the time, making the calculations even harder. But this doesn’t mean we shouldn’t try.

You can’t avoid risk management; it’s fundamental to business just as to life. The question is whether you’re going to try to use data or whether you’re going to just react based on emotions, hunches and anecdotes.

This essay appeared as the first half of a point-counterpoint with Marcus Ranum in Information Security magazine.

Posted on October 14, 2008 at 1:25 PM

