Schneier on Security
A blog covering security and security technology.
August 6, 2010
More Brain Scans to Detect Future Terrorists
Worked well in a test:
For the first time, the Northwestern researchers used the P300 testing in a mock terrorism scenario in which the subjects are planning, rather than perpetrating, a crime. The P300 brain waves were measured by electrodes attached to the scalp of the make-believe "persons of interest" in the lab.
The most intriguing part of the study in terms of real-world implications, Rosenfeld said, is that even when the researchers had no advance details about mock terrorism plans, the technology was still accurate in identifying critical concealed information.
"Without any prior knowledge of the planned crime in our mock terrorism scenarios, we were able to identify 10 out of 12 terrorists and, among them, 20 out of 30 crime-related details," Rosenfeld said. "The test was 83 percent accurate in predicting concealed knowledge, suggesting that our complex protocol could identify future terrorist activity."
Rosenfeld is a leading scholar in the study of P300 testing to reveal concealed information. Basically, electrodes are attached to the scalp to record P300 brain activity -- or brief electrical patterns in the cortex -- that occur, according to the research, when meaningful information is presented to a person with "guilty knowledge."
More news stories.
The base rate of terrorism makes this test useless, but the technology will only get better.
Posted on August 6, 2010 at 5:36 AM
I don't see the study considering whether the scanner would flag as a terrorist someone who was planning a visit to one of the tested cities merely for tourism.
I wonder if this will be indicative of the future regarding our current polygraph techniques and if it will evolve to that level, or use a combination of brain scanning and the polygraph.
This could be potentially useful for background investigations and CI/CT interviews.
You seem to think the research is valid?
Is this really possible according to you: finding terrorists thanks to brain scans (assuming you manage to get one in your random checks)?
Everything looks good in the testing phase. It's the real world application that's the killer.
As I posted above you, I don't think this would be feasible regarding finding terrorists, but perhaps in interviewing the suspected terrorist already in custody.
* I know I'm not Bruce ;)
No waterboarding? No violations of the Geneva Convention. No thinking about the limit of pain up to the failure of an organ? Only violating civil rights?
It'll never sell.
How to posture for more grant money. What next, a laser that can shoot the Russians from outer space?
@BF Skinner too true.
God help the false positives cause nobody else will.
@ BF Skinner (et al.)
Your sarcasm noted: do you think that this could possibly be used as a precursor to justify waterboarding and other unconventional means?
@Imperfect Citizen (et al.)
Couldn't agree more.
As with the polygraph, I would fear becoming a false positive, and I'd be so nervous about the test/scan that it would probably result in one.
They are able to detect concealed information.
What if the person they have nabbed is a homosexual serving honorably in the military? A guy afraid his wife will recognize him on TV when he is supposed to be on a business trip? A closet Republican in Chicago running for dogcatcher?
Can they tell the difference between harmless (to the security of the US) concealed information and nefarious intent? The article didn't seem to clarify that.
Would living in a country where the government has the power to grab someone at random and attach electrodes to them be better than just giving the country to the Taliban? Because there is no point raising the barriers at an airport this high unless you start implementing similar levels of "security" at national monuments, large buildings, sports stadia, shopping malls, etc.
As always, the problems with these new techniques are signal-to-noise ratio and interpretation of the data. SNR in this instance is a function of intent or willingness to conceal. This is problematic with innocent subjects in the real world, because, unlike these undergraduate volunteers, they don't have advance knowledge that a test will be conducted. Instead, they know that several masked guys with automatic weapons snatched them off the street, took them to a windowless building, glued electrodes to their heads; then they were parked in front of a monitor that's displaying words (and faces, schematics, maps, crime scenes?), any of which may trigger a stress response ("oh my God, do they know about _____ ?"), that's unrelated to the investigation. Sure, you might find out that they're wanted for extortion in Michigan, or abandoned a family in Idaho or have a string of unpaid parking tickets on a 1988 Lincoln Continental. That's not particularly useful if you're looking for people who attended a training camp in Idaho, were hidden in a Detroit safe house and are assembling materials for a car bomb. Over a broad enough population the individual P300 "hits" are irrelevant unless you have the capacity to measure their inter-relationships.
Compliance is another problem: it's easy to tell when a polygraph subject stops answering questions. It's a bit harder to determine if he's examining what's presented to him on-screen and not just staring at the wall behind the screen, crossing his eyes, etc.
The investigator's demographic assumptions are also potentially problematic: can all suspects read? Are they literate in the language that the test is presented in? Can all individuals register the presented items at the same rate as the healthy Western undergraduate subjects, or is some adjustment necessary (for older individuals, for non-alphabetic languages, for suspects seriously injured during apprehension)? If you must resort to images (processed by a separate cerebral pattern recognition system), are those data valid at all?
Interesting research, but clearly presents more questions than it answers.
The problem with P300, as I understand it, is that it's purely a recognition metric. If you've seen or thought about the thing in question, you'll get a positive response. So tops on the list for getting nailed as terrorists would be security people.
You have been convicted of thought crime.
I wonder if this would also spot undercover intelligence agents....
After all, if you've got nothing to hide, you've got nothing to fear, right?
/s shouldn't be necessary, but I've been on the net long enough to know better than to assume people get sarcasm.
Re: the technology will only get better.
Not always. Let me know where I can buy a modern violin better than a Stradivarius, a supersonic passenger flight, an online computer that I know won't get hacked and a credit card that I know won't get cloned.
I just can't wait until "they" make us all stick a dongle in our ear so they can stop us from thinking nasty bits, or arrest us when we do... sigh.
Maybe the only thing this is good for is screening actors.
Since they didn't have any real terrorists to test. Not even from those who have been caught or those the authorities don't care about/support.
Nor do they appear to have tried too hard to get a random set of test subjects.
I may have missed this, but is it through direct questioning about terrorism, or is it just a general scan? I wonder if it can filter out suspects who aren't doing anything wrong and are just generally terrified as they're put through this process, thinking about all sorts of things in no way related to terrorism.
Either way, they must somehow screen for people to have the scan done. Does this mean any person of the suspicious cultural background of the day gets pulled out of the line at an airport, cruise terminal, etc... to have their brain scanned?
Like said earlier, Big Brother may be getting one step closer.
What about actors and writers? I guess we could figure out which ones are really the best.
How stupid. And where did they find the terrorists for testing ?
This is flawed in so many ways it's barely above the level of scam.
What's next ? A urine test to detect tax evaders ?
I don't subscribe to _Psychophysiology_ and the most recent year is not available from my school's db so I haven't read the study. As with much science reporting these days the articles seem drawn from the press release rather than a detailed analysis of the paper, or even the abstract http://www3.interscience.wiley.com/journal/...
I'd like to know more about the testing. Were there control groups? If not, why not? Was it double blinded? Triple blinded? How might that be accomplished? Is scanning deep inside the brain better than measuring sweaty palms like the current "state of the art"? Probably. Can special operators be trained to "beat the box" like they do the polygraph? The abstract suggests that countermeasures are detectable, but who knows.
Will this technology get better? Probably, but our protectors will still have to be suspicious of someone, take him into custody, make him sit very still, make him look at a screen… [At least we don't need probable cause for that sort of thing anymore.] Forget references to Minority Report, 1984, and Brave New World, this is Clockwork Orange territory.
It's not clear from the articles, but presumably the 83% accuracy figure is referring to type II errors - in other words, it has false negatives 17% of the time. Much more important, given the rarity of terrorism, is the rate of type I errors - false positives. (I assume this is what Bruce is alluding to in his last sentence.)
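The base-rate worry can be made concrete with a quick Bayesian sketch. Only the 83% sensitivity comes from the study; the 1% false-positive rate and the one-in-100,000 prevalence below are illustrative assumptions, not measured values.

```python
# Base-rate sketch: why even a fairly accurate test fails when terrorists
# are rare. Sensitivity (0.83) is from the study; the false-positive rate
# and the prevalence are illustrative assumptions.

def p_terrorist_given_flag(sensitivity, false_pos_rate, prevalence):
    """P(terrorist | flagged by the test), via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = false_pos_rate * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Say 3,000 terrorists in a population of 300 million (prevalence 1e-5),
# and a generous 1% false-positive rate:
p = p_terrorist_given_flag(0.83, 0.01, 1e-5)
print(f"P(terrorist | flagged) = {p:.5f}")  # under 0.1%: the overwhelming
                                            # majority of flagged people are innocent
```

In other words, even with a very low false-positive rate, nearly every alarm would be a false one, which is presumably what Bruce means by the base rate making the test useless.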
This is the same group that was using brain scanners to detect if you were racist.
You were asked to pronounce the name "smith" and a name in Kiswahili.
If you used more of your brain for the african name you were obviously racist.
The "the technology will only get better" notion seems very common-sense, but they said that about speech recognition, too. Which has been stuck at a near-useless level for a decade. Interfacing computers with sloppy humans is really hard.
Which is fine with me. The idea that one's own thoughts aren't really private is creepy and abusive beyond anything Orwell ever imagined. And these days, I have no doubt that the US government would leap at the chance to use such tools (if they ever became practical) at every single opportunity.
It sounds like the technique is most nearly useful when a lot of details are already known. Which, as far as I know, generally does not happen.
The "targets, non-targets, probes and irrelevants" categories sound very much like a brain-wave version of a polygraph test. It could turn out that the results could be gamed drugs, meditation, concentration training, etc., just as polygraph tests can be.
@Daniel Wood " do you think that this could possibly be used as a precursor to justify the means of waterboarding and other unconventional means?"
Quite the opposite. I think that, if valid, it'll be used to justify much broader application ("cause it's not waterboarding... it's not intrusive... it's not like it's torture") over a much wider (and ever-changing) list of targets, as others here have pointed out.
(All y'all? We really ought to start generating numbered lists of why things are good and bad ideas. It'd shorten the writing time (esp. on mobile phones). Bruce could post "A" and we could respond "List C, 37." "Yep, that's C37 alright." "Are you nuts? B15 maybe, and that's a stretch, but never 37.")
People I've talked to who approve of but don't bother to justify the use of torture? They want to hurt our enemies. 'You hurt us and we're going to hurt you'. Slapping 'trodes to the head?...just won't pay the bill for them.
...but of course it'll protect the kiddies.
@Cary "actors and writers? I guess we could figure out which ones are really the best."
Oh, well if you're going to go and make good arguments...
Time to start writing encrypting enzymes.
the real issue with this is that the human mind responds to any input as if it's real input, even when that "input" is merely the person's own thoughts .. you've all had the experience of being so immersed in some daydream or fantasy that it was completely real .. well, this thought-reading capability we're developing isn't going to be able to tell the difference between _remembering_ something that really happened and _fantasizing_ about something that didn't .. instead of being water-boarded, you'll hear questions and be fed images and sounds for days until all you can think about, when a question is asked of you, is whatever images/sounds they made you observe .. then, and only then, will you get your day in court with this thought-reading cap on your head
I find it disturbing we may move to a world where people are routinely connected to electrodes to broadcast their thoughts over the whole world when people already do this with their mouths anyway.
Just a matter of interpreting correctly what they are saying...
The problem with counting on technology getting better is that this is an inherently limited test. It may well be that recognizing something causes a certain EEG pattern 0.3 seconds later. Like most experimental psychology, it appears to work on freshman psychology majors.
Assuming that this is a reliable indicator for the general population, and can be reliably gained by doing EEGs in some high-tech nonintrusive manner, we've got a way of telling if some person recognizes something. That's all. With refinement, we might pick up on what sort of emotion.
This is absolutely useless for general investigation. It might be of some use in specific interrogation. If you're sure you've caught a terrorist, go through pictures of suspected terrorists and look for recognition. If you've got a lineup of suspects for a burglary, see if any of them recognized the burglarized house, and investigate those who do further.
I don't care how specific the test or how few the false positives: I don't like it. This same technology that would make it easier to ferret out terrorists would also make it easier for a despot (knock on wood) to retain power. The 20th century being one where many more people were killed by their own government than were killed by criminals or foreign governments, being more afraid of terrorists than of government-gone-bad leads to poor risk management decisions.
When the technology gets better, the terrorists will actually have a reason to start wearing tin-foil hats. This will make them very easy to identify. ;-)
When they connect the electrodes just repeat to yourself:
I am invoking the Fifth Amendment, I am invoking the Fifth Amendment, ...
If you can make the sensors sensitive enough to work remotely, you install them in fast food places. Customer stands in line, scanning the menu board. The P300 waves tingle when he or she reads the item(s) of desire. (Guilty knowledge.) The order is prepared while the customer is still in line.
I'm lovin' it.
Scott beat me to it. This is one more step on the way to Minority Report-style precrime detection.
Very interesting write-up. Haven't read the links yet.
Once this makes it to the real world, there's going to be flaws (computer algorithms, hardware design, etc). Before you know it there will be online articles on how to fool these scanners.
Regarding accurate vs false results, it's scary either way. If this technology becomes large scale implemented, data brokerages would soon have information on my fetishes, dark thoughts, people I've wanted to beat up, etc.
@ Western Infidel,
"The "the technology will only get better" notion seems very common-sense, but they said that about speech recognition, too. Which has been stuck at a near-useless level for a decade."
Hmm, I'm not sure it has, it may be suffering from "follow the free money" problem.
For several years there appear to have been only two players in the game (IBM and Dragon Software). In both cases they had gone beyond simple word recognition.
Not coincidentally, 9/11 happened a little over ten years ago, and shortly thereafter lots of "easy research" money became available for things related to "The War on Terror". Many things, including speech recognition, had a "dark side" market stimulus.
Like any other market, what the customer has money for, the market tends to supply (if it can). And I remember being told around five years ago that "speech recognition" in many different languages for specific trigger words/phrases etc. appeared to have become the new market "poster boy".
I don't know if that is still the case, as I've not had reason to revisit the "speech recognition" market, since we found another way of addressing the issues we had.
After showing your boarding pass and 2 forms of government issued ID,
1. Place your bag and shoes and belt on the x-ray belt.
2. Place your laptops and gadgets aside.
3. Place all your liquid things in magically safe under-4-oz containers in the magical ziploc.
4. Take a picture with your camera to prove you don't have a coffee (or shampoo) in it.
5. Walk thru the naked scanner.
6. Have your brain scanned.
7. Collect your belongings.
8. Continue your journey.
Facebook is obsolete! New site: Brainbook. All of your thoughts posted in real time. Google's street-level scanners and Gov satellites remotely monitor brainwaves and post them on the Net.
1984 was kinda' late getting here, but...
And who will scan the brains of the scanners? Who will guard the guards themselves?
Women will be slapping an awful lot of men, in elevators, etc., who haven't said or done anything wrong, nor even stared at them.
Shoot, I've been following this guy and his work for several years, and now everyone knows as much about it as I do!
I'm curious how quickly the brain can look at a photograph and decide whether it recognizes the person or not. No longer will we have to rely on "which one of these men looks the most like the one that robbed you?", we can simply spam faces at the victim faster than their minds can cope. Faster than their innate prejudices can react, too.
83 percent is the same as your odds in Russian roulette with a six-cylinder revolver; you can better your odds with a nine-shot revolver.
Just tell them that your lawyer recommends that you not become one of their many mistakes.
That's the best analogy I've read in the last 6 months. "Shocking" would be an understatement.
Fund a story http://spot.us/stories/unfunded
"Many threats and vulnerabilities pose daily risks to our national security. CREATE develops predictive models that gauge how and where terrorist events might occur, estimate the economic consequences of such attacks and identify where the country's vulnerabilities reside"
Terrorist events are less likely where there are more people doing investigative journalism. The vulnerabilities don't reside in your brain.
Google: "The Mind Has No Firewall" for an eyebrow raising article.
Interesting, but does it work on well trained brains, like say for people who work essentially on covert operations? I'd imagine it's a lot easier to detect on civilians with terrorism plots, but with hardwired skills, would the device work as well?
No one has mentioned the fact that you have to actually get hold of a (confirmed? How can you trust the information otherwise?) "terrorist" whose part in the overall plan is not yet complete, assuming they maintained opsec. Then get them to listen to/watch the last year's worth of chatter/surveillance in a month and hope that they don't execute their plan before that.
@Radio Head "does it work on well trained brains"
Would need to run some tests with the equipment but it's past time to start brain training.
Steed and Mrs Peel recited nursery rhymes to create walls of impenetrable images that they could operate from behind.
One, two, three, four, five –
Once I caught a fish alive.
Six, seven, eight, nine, ten –
Then I let it go again.
Why did I let it go?
Because it bit my finger so.
Which finger did it bite?
The little finger on the right.
So now that the TSA lied about retaining images in the automated public strip-search machines, and then sent them back to the manufacturer with the images intact, are they now guilty of child pornography?
But the question of data permanence comes up. Is a brain scan of my brain's activity real PII, or can I claim copyright ownership of it?
So if they use western college students to develop the systems, will the system wind up being optimized for a higher success rate of detecting terrorists among non-middle-easterners?
@anna: You left out step 6.5, "Travel to your destination airport" (which you will note means flying BEFORE step 7: collect your clothes/belongings).
According to their article ( http://dx.doi.org/10.1111/... ) they didn't have any false positives among their twelve innocent control subjects.
Of course, if you have 6 billion suspects and only a few thousand terrorists, you need a much tighter bound on the false positives. They knew 50% of their subjects were guilty, so getting 10 terrorists correct and none wrong has a chance of about 1 in 30,000. So if that's at the limit of their abilities, their method might still produce 200,000 false positives worldwide; dozens of times the expected number of terrorists. (Disclaimer: I am not a statistician.)
And as has been pointed out, this experiment was done with students, which may not be representative. And in practice finding the right probes to test people will not be as easy, nor will they be able to exclude people that may for other reasons have reactions to the probes (3 test subjects were excluded for that reason; and two for not keeping their attention on the task).
Even aside from that, I agree that this is going too far in the direction of thought-crimes. And what if, for example, someone was working out a terrorist plot as part of a story, or thesis, she was working on? (Come to think of it, didn't somebody get into trouble for writing a thesis about terrorism? This would just make that worse.)
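The small-sample caveat above can be quantified with the statistical "rule of three": when an event occurs 0 times in n independent trials, its true rate can still be as high as roughly 3/n at the 95% confidence level. A minimal sketch, where the screened-population size is purely an illustrative assumption:

```python
# Sketch: what do 0 false positives out of 12 innocent controls really
# establish? By the "rule of three", a rate observed to be 0 in n trials
# still has an approximate 95% upper confidence bound of 3/n.
# The screened-population figure below is purely illustrative.

n_controls = 12
fp_rate_upper = 3.0 / n_controls      # 0.25: a 25% false-positive rate
                                      # is still consistent with the data

screened = 1_000_000                  # hypothetical number of travelers screened
worst_case_false_alarms = fp_rate_upper * screened

print(f"95% upper bound on false-positive rate: {fp_rate_upper:.2f}")
print(f"Worst-case false alarms per {screened:,} screened: {worst_case_false_alarms:,.0f}")
```

So twelve clean controls are encouraging, but far too few to say anything useful about deployment against a population where the true positives number in the thousands at most.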
@ A Nonny Bunny
"According to their article...they didn't have any false positives among their twelve innocent control subjects."
No surprises there, except that their samples were so small. Were the researchers blinded? Or the analysts of the data? Thanks.
What if I'm just feeling guilty about thinking my interrogator is hot and digging the uniform and boots? Did they think of that?
Well run studies be damned; I like Simon's question better.
I don't see any mention of blinding in the "methods" section. However, the labeling of guilty/innocent was done by an algorithm, not by experimenter/analyst judgment, so I don't think it would matter as much as in clinical trials of medicines (where judging effectiveness can be much more subjective, and where a doctor can influence a patient through expectations on his/her part).
As far as I understand it (and it doesn't seem to be discussed in the article), the P300 signal is a response to recognizing 'guilty information' (e.g. being presented with a photo of the cookie-jar you've stolen cookies out of.)
So it shouldn't react to just feeling guilty for other reasons, or to being nervous; or to the unconsciously transmitted expectations of non-blinded researchers.
I think the biggest problem with research like this is understanding its limitations. Even if their methodology and analysis is flawless, it still wouldn't translate one-to-one with practice, because that's not (sufficiently similar to) what they examined.
It would be interesting to see a realistic trial done, on a representative group of people with a realistic distribution of "guilty" test subjects. Heck, there are enough ex-convicts they could use as real guilty subjects. I suppose one major problem is getting funding for such a trial; "toy" trials on students are cheaper.
Polygraph is a fraud, a POS. Check antipolygraph.org
ALL spies (with no exception) who stole information from the CIA/FBI passed every polygraph exam, and for years, across dozens of polygraphs, kept sending classified information abroad... Anyone who talks about the polygraph, mentions it, or believes in it should be hit on the head with a bat, in the hope it will by some miracle improve their cognitive abilities (or rid humanity of that stupid error of evolution).
In my assessment, the P300 testing was a spectacular failure. All of that overwrought detection of subterfuge/concealed knowledge failed to detect the one most important thing... that the terrorists among the test subjects were only pretending to be terrorists.
If P300 testing were even half as slick as it is being made out to be, it needed to penetrate the fundamental subterfuge of the "terrorists" as poseurs.
A really well-conceived terrorist plot could be expected to field a great number of decoys, whose purpose would be to create defender-resource-exhausting, misdirecting cover for the real McCoy.
Someone mentioned "Minority Report."
I'm thinking P300 is much more like, "Inception."
Where so many reviewers/critics thought they saw genius, what I saw was something so completely full of itself and sloppily thought through (for which no amount of eye candy could compensate, let alone distract), that I was a) only moderately entertained and b) hugely disappointed by the "genius" of director/writer Nolan. Ten-plus years in the making and I never would've guessed.
What does it mean for P300 testing to eventually get better, when the thing being polished is not much more than a turd?
say 1 million basic groups = 0.000012 accurate rating
say every one unique different = 0.000000003 accurate rating
Can't wait till they use this for trials.
So when do we get to see a brain scan of Dzhokhar Tsarnaev, so that we can prove to the rest of the world and the terrorist world how psychotic and deluded they are? I expect poor left temporal lobe and frontal lobe functioning.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.