Schneier on Security
A blog covering security and security technology.
October 15, 2007
More Behavioral Profiling
I've seen several articles based on this press release:
Computer and behavioral scientists at the University at Buffalo are developing automated systems that track faces, voices, bodies and other biometrics against scientifically tested behavioral indicators to provide a numerical score of the likelihood that an individual may be about to commit a terrorist act.
I am generally in favor of funding all sorts of research, no matter how outlandish -- you never know when you'll discover something really good -- and I am generally in favor of this sort of behavioral assessment profiling.
But I wish reporters would approach these topics with something resembling skepticism. The false-positive rate matters far more than the false-negative rate, and I doubt something like this will be ready for fielding any time soon.
EDITED TO ADD (10/13): Another comment.
Posted on October 15, 2007 at 6:16 AM
If reporters would approach any topic with even a little skepticism, I'd probably fall on my knees and give my life to Jesus, because only direct intervention from on high could possibly bring about such a miracle.
If you know nothing about a topic, leave it to a bunch of statistical filters, a neural network, or the like. There is no indication that they are doing anything other than that, wrapping it up in marketing speak and fishing for funding.
What's more, how do you gather a bunch of liars and terrorists willing to help test the device? Well, you cannot.
This is polygraph nonsense again.
@Joshua: Why Jesus?
This is more Loki's arena, neh?
And don't forget the goat. Skadhi the huntress needs a good laugh.
There is a very similar software setup in public pools in the UK to detect drowning individuals. http://blog.softtechvc.com/2005/09/... . Although I would think detecting a lifeless body on the bottom of a pool would be a lot easier than detecting someone who is acting like a terrorist.
Not sure how you would go about profiling a terrorist. There are too many factors and motivations behind why people do what they do to provide an accurate profile, I would have thought.
Detecting whether someone is lying or hiding the truth, as the article claims, might be possible, but saying they are a terrorist just doesn't make sense.
"I have to admit that I was initially pretty skeptical of U. Buffalo's proposed automatic terrorist threat assessment tool but now that Cory Doctorow - my lodestar to the reflexive geek position - has rubbished it (he compares it to phrenology), I figured I'd take another look."
Glad I'm not the only one who scans BoingBoing's headlines and inverts the reaction!
The research might be valuable, but it looks like someone wanted to pump up their grant proposal. This makes me question their ethics (or at least their judgment). You may have the fantasy your research will Change The World As We Know It, but please, leave it out of the grant proposals.
Along with the criminal predictor scan, I'm also waiting for my flying car.
Concerning the false positives, how do I defend myself against a claim of criminal terrorist intent based on a computer analysis of behavioral indicators? It's like being accused of a thought-crime; there is no way to exonerate victims of the false positive.
Would this technology be able to detect government agents intent on flying to foreign countries to kidnap people there and sneak them out of the country to secret foreign prisons for torture?
No? Then no, this would be useless for finding terrorists.
The first article's statement
"No behavior always guarantees that someone is lying ..."
brought to mind a study done at Dalhousie University. It demonstrates what we all suspected
"people who were very motivated to catch liars tended also to be more confident"
A numeric result is totally useless.
A red/amber/green indicator, however, would be perfect.
Airlines could perhaps be convinced to offer lower fares on flights with one or more "amber" passengers.
"Ladies and gentlemen, we have good news for passengers of flight 403. We've just been advised that we have TWO amber passengers aboard, so we'll be giving all our green passengers a $40 rebate coupon!"
"The false-positive rate matters far more than the false-negative rate"
Technically, this is not accurate. Both factors enter the analysis. "Sensitivity curves" (usually called ROC curves) --- the true-positive rate (which is the complement of the false-negative rate) as a function of the false-positive rate --- are the crucial tool for investigating the effectiveness of this kind of screening device.
The point is that all such systems (polygraphs are the reference standard) have adjustable sensitivity settings. You can dial up or down the probability that the system will correctly identify whatever it's looking for (a terrorist, a spy, etc). The higher the sensitivity, the higher the false-positive rate, and vice-versa. The burden of proof on these systems (which none of them meet, so far as I am aware) is to show that there exists a "sweet spot" --- a sensitivity setting with a non-useless true-positive rate corresponding to an acceptable false-positive rate.
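The tradeoff described above, and why the "sweet spot" is so hard to reach when the target is rare, can be sketched with Bayes' theorem. All the numbers below (base rate, operating points) are assumptions for illustration, not figures from the article:

```python
# Sketch: how dialing up a screen's sensitivity trades true positives
# against false positives, and what that does to the precision of a
# flag when the target population is very rare.
# All numbers are assumed for the sake of the example.

def precision(tpr, fpr, base_rate):
    """P(actual positive | flagged), by Bayes' theorem."""
    flagged = tpr * base_rate + fpr * (1 - base_rate)
    return tpr * base_rate / flagged

base_rate = 1e-6  # assume 1 actual terrorist per million screened

# Hypothetical operating points along a sensitivity (ROC) curve:
# raising the sensitivity raises both TPR and FPR together.
for tpr, fpr in [(0.50, 0.01), (0.90, 0.10), (0.99, 0.30)]:
    p = precision(tpr, fpr, base_rate)
    print(f"TPR={tpr:.0%}  FPR={fpr:.0%}  ->  precision={p:.6%}")
```

At a one-in-a-million base rate, even the most sensitive setting leaves the overwhelming majority of flags as false alarms, which is exactly the "non-useless true-positive rate at an acceptable false-positive rate" burden the comment describes.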
I hate to sound like a broken record on this, but the NAS study on polygraphing is really an excellent and comprehensive treatment that supplies the intellectual framework for understanding how to judge _all_ these person-screening technologies --- not just polygraphs. It should be required reading for any government official considering procurement of such systems, and for those who offer comment on them.
Let's imagine for the moment that this stuff works. And that the reason we're not all being called downtown for an "interview" to see if we're terrorists is that there aren't enough security investigators. So how would the logistics proceed? Every year (or maybe every month, just to be on the safe side) would everyone in the country be called down to an "interview" to determine whether they had developed terrorist leanings? Would the interview just be added to important security checkpoints like airports, border crossings, bus stations and laundromats?
Even if something like this worked (which seems extraordinarily unlikely, both in general terms and because people passing through security have so much else to be deceptive about) you would need such a huge infrastructure of surveillance and isolation to get real results that you would be better off economically just letting the terrorists through...
"Concerning the false positives, how do I defend myself against a claim of criminal terrorist intent based on a computer analysis of behavioral indicators? It's like being accused of a thought-crime; there is no way to exhonerate victims of the false positive."
You could of course request a re-test while thinking "happy thoughts".
You'd either be cleared or be able to fly off to Neverland, I forget which ...
@JB007 - surely no behaviour would indicate that the person is deceased?
So, what, you're asking for responsible reporting?
Sorry, I haven't seen any lately.
Where did they get their data that they're using to make their predictions?
In the medical field you can screen thousands of people and then follow up on the ones who, say, had a heart attack within the next year to look for warning signs. How many millions of people would you have to screen to find even one person who is "about to commit a terrorist act"?
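The base-rate arithmetic behind this point is worth making concrete. The population size, number of would-be attackers, and error rates below are assumed for illustration:

```python
# Back-of-the-envelope base-rate calculation for a mass screen.
# Assumed numbers: 100 million people screened, 10 actual would-be
# attackers among them, and an optimistic 1% false-positive rate.

screened = 100_000_000
actual_positives = 10
false_positive_rate = 0.01
true_positive_rate = 0.99  # assume the screen almost never misses

false_alarms = (screened - actual_positives) * false_positive_rate
true_hits = actual_positives * true_positive_rate

print(f"false alarms: {false_alarms:,.0f}")  # roughly a million innocents flagged
print(f"true hits:    {true_hits:,.1f}")
print(f"flags that are real: 1 in {false_alarms / true_hits:,.0f}")
```

Even with error rates far better than any published behavioral screen, roughly a million innocent people get flagged for every handful of real hits.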
If these automated systems really worked, or even had a hope of working, wouldn't we have already seen the technology being used in Israel? After all, they've had more experience with terrorists than the US ever has, they have a solid track record in high-tech, and they are usually ahead of the rest of the world in anti-terrorist measures.
My problem with behavioral profiling is that they generally flag anything "out of the ordinary."
Over time, every individual behaving unusually will tire of being inconvenienced as a false positive and change their behavior.
Who gets to decide what "ordinary" is?
Trying to create a program that can catch "hinky" actions better than a human has me very skeptical. Although it probably wouldn't hurt to put all "positives" on the same airplane.
My first question, too, was: Where did they get the patterns of terrorist behaviour?
On the other hand: Why not evolve a system for general behavioural patterns?
Imagine queuing up at the airport, and a voice from the loudspeaker yelling: "Mr. Smith! The toilet is on floor 3, room 301a." "Mrs. Johnson, calm down, please! You'll get breakfast on board."
The worst thing about all of these automated screening systems is their susceptibility to "Garbage in, Gospel out" syndrome. People, especially the sort of GED-flunkouts who wind up in bottom-level security jobs, tend to trust anything that came out of a computer, even if it looks like nonsense. The people who program these systems are aware of their limitations, but any suggestion that The Computer might be wrong will convince the two goons who just stepped up behind you that you must be a terrorist.
Those guys just put the right sticker on their project.
These days, any research even remotely in the security sector can attract much more money if it promises to catch TERRORISTS.
Besides that, [paul] got to the center of the problem: what to do if such a thing works?
"Beside that [paul] got to the center of the problem: what to do if such thing works ?"
And if a frog had wings it wouldn't bump its arse hopping.
More people need to read the whole article.
The entire premise of the system is that you are being interviewed in the first place. And - that the interview will take long enough to build the profile during the interview.
The biggest problem I see deploying this for the U.S. is the lack of sufficient interview rooms and interviewers. I tried to dig up some stats, and found that in 2006, U.S. carriers enplaned 85,000,000 international passengers. Think about the logistics of handling all those interviews. That, and the interviewers likely need considerably more training than the searchers and x-ray watchers.
The media isn't doing a good job, because they're burying that key point and vastly underemphasizing it. I suspect that Cory either didn't get that far before deciding it was worth blogging, or missed the underwhelming reference.
I agree that Cory is way over-reacting, but this still strikes me as something that won't work for us because we won't pay the price of deploying it. If we were willing to pay that price, we could already have detailed interviews with well-trained interviewers.
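The logistics point above can be made concrete with some rough staffing math. The 85,000,000 enplanement figure comes from the comment; the interview length and working hours per year are assumptions:

```python
# Rough staffing estimate for interviewing every international passenger.
# The 85,000,000 enplanements figure is from the comment above; the
# 15-minute interview and 1,800-hour work year are assumptions.

passengers_per_year = 85_000_000
minutes_per_interview = 15
hours_per_interviewer_year = 1_800  # roughly 225 working days * 8 hours

total_hours = passengers_per_year * minutes_per_interview / 60
interviewers_needed = total_hours / hours_per_interviewer_year

print(f"interview-hours per year: {total_hours:,.0f}")
print(f"full-time interviewers:   {interviewers_needed:,.0f}")
```

Under these assumptions you would need on the order of ten thousand full-time trained interviewers just for international enplanements, before counting interview rooms, supervisors, and the domestic traffic that dwarfs the international numbers.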
"...automated systems that track faces, voices, bodies and other biometrics against scientifically tested behavioral indicators to provide a numerical score of the likelihood that an individual may be about to commit a terrorist act."
Given the rarity of actual terrorist acts, I'd be very curious to see exactly what the "scientifically tested behavioral indicators" are. Oh, and the false positive and negative rates.
@Colossal Squid: Currently you're right.
But I have seen too many things come true (or become computable by brute force).
So at least it's useful to think about the implications.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.