Detecting People Who Want to Do Harm

I’m dubious:

At a demonstration of the technology this week, project manager Robert P. Burns said the idea is to track a set of involuntary physiological reactions that might slip by a human observer. These occur when a person harbors malicious intent—but not when someone is late for a flight or annoyed by something else, he said, citing years of research into the psychology of deception.

The development team is investigating how effective its techniques are at flagging only people who intend to do harm. Even if it works, the technology raises a slew of questions – from privacy concerns, to the more fundamental issue of whether machines are up to a task now entrusted to humans.

I have a lot of respect for Paul Ekman’s opinion on the matter:

“I can understand why there’s an attempt being made to find a way to replace or improve on what human observers can do: the need is vast, for a country as large and porous as we are. However, I’m by no means convinced that any technology, any hardware will come close to doing what a highly trained human observer can do,” said Ekman, who directs a company that trains government workers, including for the Transportation Security Administration, to detect suspicious behavior.

Posted on October 7, 2009 at 12:54 PM • 44 Comments

Comments

BF Skinner October 7, 2009 1:26 PM

May I ask you a personal question?

Sure.

Have you ever retired a human by mistake?

No.

But in your position, that is a risk.

Matthew Carrick October 7, 2009 1:31 PM

Well, it sure beats sleep deprivation, waterboarding and beatings so forget the airports and introduce this to Gitmo.

Clive Robinson October 7, 2009 1:39 PM

I think that Mr Ekman has identified the essential reason why there is a perceived need for the technology,

“However, I’m by no means convinced that any technology, any hardware will come close to doing what a highly trained human observer can do”

The three weasel words being “highly trained human”.

A piece of technology can be bought, put in place, and turned on. It requires no time-intensive training or rest breaks, and does not get distracted by all those things humans do when performing a mundane task.

And this is the problem: people capable of being taught to the level required are not going to want to do what is essentially a very mundane task for comparatively low wages.

C October 7, 2009 1:47 PM

@BFSkinner

I just watched that movie the other day for the first time in a long time.

+1 for the reference

eddie October 7, 2009 1:58 PM

Can we also be dubious about Paul Ekman’s opinion that even a “highly trained human observer” can detect suspicious behavior?

Stephen Smoogen October 7, 2009 2:14 PM

It’s a double-edged sword here. On one side you have highly fallible humans whose brains are cued as much by how different a person looks from themselves as by nervous habits. And on the other side you have technology that could be quite fallible, but no one would be held accountable if it were.

The biggest problem is that people in general do not want to accept fallibility in either, but want 100% assurance.

moo October 7, 2009 2:17 PM

I’m dubious that more than 5% of the airport security workers I’ve come across are “highly trained human observers”. Most of them come across as high-school dropouts with no other prospects, who managed to make it through the two-week training course.

Not that this computer recognition crap is any better. Actually it’s worse, because a lot of people believe for some reason that computer systems are less fallible than human systems (even when humans supply all of the data to them in the first place).

Daniel Haran October 7, 2009 2:28 PM

Humans have too many cognitive biases. Sensors + machine learning might be radically effective.

Computers outperform humans on spam filtering. Most of the vendors were peddling bullshit tech, but that’s not to say that the field can’t produce interesting tech.
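For illustration, here is a minimal sketch of the kind of classifier the spam-filter analogy points at: a toy naive Bayes word model (the training messages and the 50/50 prior are invented for the example, not taken from any vendor).

```python
from collections import Counter
import math

# Toy training data, invented purely for illustration.
spam = ["cheap meds now", "win cash now", "cheap cash offer"]
ham = ["meeting moved to noon", "lunch now or later", "project offer review"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts, total):
    # Laplace-smoothed log P(words | class)
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in msg.split())

def is_spam(msg, prior_spam=0.5):
    spam_score = math.log(prior_spam) + log_likelihood(msg, spam_counts, spam_total)
    ham_score = math.log(1 - prior_spam) + log_likelihood(msg, ham_counts, ham_total)
    return spam_score > ham_score

print(is_spam("cheap cash now"))       # True on this toy data
print(is_spam("project meeting now"))  # False on this toy data
```

The catch, of course, is that spam filters learn from millions of labeled examples; labeled examples of “malicious intent” at a checkpoint are exactly what nobody has.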

BW October 7, 2009 2:36 PM

Yeah… no. Behavior is dependent on culture. Americans seem to forget that there are other cultures in the world, and that people act and react differently. The worst part is that culture changes, and different generations react differently, so your software would need to be adaptive or constantly updated. You could try to fit people into profiles, but the thing is… America is a melting pot, and families can end up being very multi-cultural (in 4 generations, my family hails from 5 different nations).

HJohn October 7, 2009 2:40 PM

Any kind of behavioral screening carries the risk of the screener being wrong, in which case they may be accused of and/or sued for all sorts of infringements, including profiling, homophobia, racism, sexism, fill-in-the-blank-ism.

I’m not saying it is not useful, I’m just saying it does not come without a cost as well.

Harry October 7, 2009 2:45 PM

Whoa, did you catch that whopper of an unexamined assumption? Who says a highly trained person can do this?

Plus, as has been said, there’s the little problem of cultural differences.

Richard October 7, 2009 3:00 PM

I doubt this assumption – what’s the measure of a “highly trained person”? Tools and humans have advantages in different areas.

Morgan October 7, 2009 3:30 PM

Wouldn’t the only way to test this technology be to actually harm people? i.e. you couldn’t test the detector with a mock attack. You would actually have to hire someone to do real harm, knowing they will get away with it, and not be stopped. Where do I apply?

Brandioch Conner October 7, 2009 3:46 PM

I’ll second Morgan’s comment.

Without a valid pool of characteristic X, how can you design a system to detect characteristic X?

In this case, you cannot even test for the absence of characteristic Y.

This looks like another boondoggle. I’d be looking for relationships between the people selling this snake oil and the politicians willing to spend public funds on it.

Stephanie October 7, 2009 6:39 PM

I’d hate to be the innocent person targeted and stuck on that watch list.
It looks to me like, even with all the technology, it’s been forgotten that people are attached to it. The points made above about racism, sexism, and lack of awareness of other cultures are well taken. How much of the watch lists are 21st-century witch hunts? There’s no judicial review of many of these lists.

So if you are, say, a peace activist, gay, have ADD, aren’t well dressed, maybe a little rumpled looking, and just got off your cell hearing that someone in your family died or your partner is cheating on you with your best friend, and you are already being watched or on the list… it’s just what has been done to people of color all these years. Anything you do will be misinterpreted and you’ll be in personal danger from the security people because you are on a list or fit a profile. Your hands will be shaking, maybe your face will be red, you might be swearing and stomping around, you might glare at some security yahoo who is bossing you around…

BF Skinner October 7, 2009 6:53 PM

@C

Did you ever read Jeter’s sequel? I was thinking about that really.

He spent more time than Dick or Scott on the diversity of human response and the probability that there were more replicants running around than anyone was aware of (Deckard, for one). And there were humans failing the VK.

Biophysical responses aren’t a single standard; they cluster around means. Some of us run hot, others cool, and some have no measurable empathy.

Vicki October 7, 2009 7:12 PM

Yes, the problem is “harbor malicious intent.” Malicious intent toward whom? Is this going to call someone a terrorist because they are traveling to where they are suing someone for fraud, or challenging a will, or testifying against an ex-partner in a custody lawsuit?

Or, as Stephanie says, the person who just found out that their partner is cheating on them, and plans to confront them in public? Would it flag someone like Karl Rove, if he was planning a smear campaign? Or the person who is going to tell the world that a “family values” politician is having a same-sex affair?

Sure, we can argue that at least some of my examples aren’t evil: but the emotional mindset “I’m going to destroy his candidacy” is probably closer to “I’m going to destroy that building” than to “I’m going to Fresno to propose marriage to my beloved.”

Roboticus October 7, 2009 7:25 PM

What is done if a person is flagged as suspicious? Are they delayed and/or detained, or simply watched more closely? There is a big difference. Sadly, however, I have a feeling that either 1) there will be so many false positives that the system will be annoying (to those using it) and ignored, or 2) there will be an occasional false positive to overreact to.

Vincent October 7, 2009 8:21 PM

This sort of magic mind reading device is always fundamentally flawed.

I used to work for a guy who carried a “pocket polygraph” around that he’d picked up at COMDEX one of the last years in Vegas. He was unceremoniously demoted from a VP position a few months after he demoed it at an all hands meeting. It was more creepy to everyone that he’d be carrying the thing around with him and either pretending/believing it worked than any real fear of being caught in a lie about something as unimportant as a ship date.

I’m skeptical that even a highly trained human observer is any good at this stuff. People are great at noticing anomalous behavior; they’re absolutely horrible at reading minds.

Sitaram October 7, 2009 8:33 PM

Am I the only one to think of RFC 3514 as soon as I read this?

If so, am I that different from this community?

/me worries…
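For anyone who hasn’t read it: RFC 3514 is the April Fools’ “evil bit”, which proposes that malicious packets set the reserved high-order bit of the IPv4 flags/fragment-offset field. A minimal sketch of a compliant “detector” (the header bytes below are fabricated for the example):

```python
import struct

def has_evil_bit(ipv4_header: bytes) -> bool:
    """Check the RFC 3514 'evil bit': the high-order (reserved) bit of the
    16-bit flags/fragment-offset field at bytes 6-7 of an IPv4 header."""
    flags_frag = struct.unpack("!H", ipv4_header[6:8])[0]
    return bool(flags_frag & 0x8000)

benign = bytes(20)              # all-zero 20-byte header stub
evil = bytearray(benign)
evil[6] |= 0x80                 # a compliant attacker sets the bit
print(has_evil_bit(benign), has_evil_bit(bytes(evil)))  # False True
```

Which is roughly the threat model this intent-detection technology assumes: that the bad guys will helpfully broadcast their intent.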

Timm Murray October 7, 2009 9:40 PM

Ekman has, in fact, done substantial research across cultures showing that many micro-expressions have a biological basis. They are therefore universal in humans, not culture-specific as many anthropologists had thought. Note that there are, indeed, some which are cultural, but you can get pretty far on just the universal ones.

This included a study of 15,000 people, which was narrowed down to 50 who could consistently (>80%) spot deception; from there he developed a tool that can train pretty much anyone to do the same.

Ekman may be pessimistic about the possibility of a machine doing the same, and I can’t say that I’m any more optimistic, at least for the immediate future. In any case, there’s a reason Bruce gives him respect.

Rick Auricchio October 8, 2009 2:04 AM

I suspect this technology can only detect those who are SIMULATING bad intent, in order to test it.

There would not be enough field trials with people who REALLY harbor bad intent.

Perhaps it will only catch actors who are studying an upcoming part as a villain.

Patrick G. October 8, 2009 4:42 AM

The first thing this kind of research shows is:
a selection of well-trained professionals are better at spotting people lying or people with malicious intent than…

… the privatized, outsourced subcontractor who rents or temporarily employs half-trained, underpaid and/or overworked people to do the actual job at our airports.

And the actual people doing the job aren’t to blame, it’s the companies that want to make a huge profit with as little investment as possible.

And the state agencies turn a blind eye, because it’s so much cheaper than training and employing professionals like policemen or profilers…

Just my 2 (euro-)cents

jdbertron October 8, 2009 7:12 AM

Minority Report, anyone?

You can’t prove it works better than a human, because the human can’t prove it works.
I really hope the ACLU gets a hold of this one.

bethan October 8, 2009 7:35 AM

humans can rely pretty effectively on instinct, and a trained observer can learn to apply situational analysis to what his instinct tells him to pay attention to.

machines can’t do that, although, perhaps, they can read other indicators of intent. seems like sci-fi to me, but we’ll probably be forced to observe whether or not it works.

History October 8, 2009 8:22 AM

I seem to recall that at one point in our history “the experts” (with all sorts of research) concluded that physical attributes, like the height of one’s forehead and the distance between the eyes, were indicators of criminal intent.

It doesn’t seem like these “experts” are much farther along…

Frank Wilhoit October 8, 2009 8:29 AM

You’re missing the point. Any such system will generate false positives, but that is not a bug, it is a feature. The conservative view of criminal punishment is that its deterrent effect does not depend upon catching the right culprit. Anyone will do.

David October 8, 2009 8:40 AM

@Timm Murray: How did Ekman test for deception? How do you test for malicious intent?

You can’t do it in the laboratory. There’s no reason to think that a person’s reactions when requested to make a false statement are the same as when lying to deceive. Similarly, there’s no necessary correlation between thinking about harming somebody and intending to harm somebody.

In order to run real experiments, it would be necessary to find people who intend to deceive and harm, not in laboratory conditions, and follow up and investigate to see if they were or were not lying or were or were not intending harm (and that last is awful hard to determine). It would be necessary to run these for people of different cultures, social classes, and temperaments to see if there’s anything universal about this.

I’m not saying it can’t be done, and I’m not saying Ekman didn’t do it, but in a case like this I really do want to examine the experiments and methodology before believing in anything.

AC October 8, 2009 10:08 AM

@Frank Wilhoit:
Thanks for the meaningless partisan jab. While I agree with your basic sentiment regarding the criminalization of everything (and any excuse will suffice to demonstrate what we do to you when you step out of line), pinning that effect of big government on “conservatives” completely misses the point.

Clive Robinson October 8, 2009 12:16 PM

@ AC,

“Thanks for the meaningless partisan jab.”

I’m not sure that Frank Wilhoit meant his comment,

“The conservative view of criminal punishment is that its deterrent effect does not depend upon catching the right culprit. Anyone will do.”

as a politically partisan statement.

In the UK, where we actually have a “Conservative” political party, the statement Frank made would normally mean “conservative in outlook” irrespective of a person’s “political outlook”.

Try re-reading his statement but use the word Victorian instead of conservative.

ed October 8, 2009 12:26 PM

@Morgan
“You would actually have to hire someone to do real harm, knowing they will get away with it, and not be stopped.”

It’s more complicated than that. The perpetrators would have to NOT know they’d get away with it, nor even be successful. Courage isn’t a belief in one’s own invincibility, it’s the determination to do what’s needed despite knowing one’s mortality.

And if intention to harm is the prerequisite, what about the people who believe they’re doing good on behalf of their cause? Ridding the world of vermin has always been promoted as a beneficial cause, regardless of who or what is classified as vermin.

Vincent October 8, 2009 2:55 PM

@bethan
Ah, but for the following:

1) humans don’t actually have instincts
2) instinct is just another way of saying bias (“complex set of preprogrammed behaviors”)

Which is to say that we can program a computer to sense the things we think we’re responding to, but it still isn’t going to relieve us of our biases.

Wasn’t the Canadian fruit machine already mentioned here by someone recently?

George October 8, 2009 3:27 PM

Stephanie, the TSA insists that their Behavior Detection Officers are fully capable of accurately and consistently distinguishing signs of terrorist stress from the signs of ordinary stress that are inherently pervasive at airports. Of course, any further information about how they’re able to do that, along with results of any efficacy testing, is classified for National Security reasons. So we’re supposed to accept their claims on faith. But is there any reason to doubt them?

George October 8, 2009 3:33 PM

Frank Wilhoit, you don’t have to be conservative, liberal, or anything else. You merely need to define a false positive as a success. The TSA does this all the time, claiming that their successful detection of people carrying illicit drugs or cash is irrefutable proof of their ability to spot terrorists. Anyone else would classify detection of drugs or cash as a “false positive,” since these items have nothing to do with threats to aircraft. But for the TSA, it’s a success.

paul October 8, 2009 7:20 PM

We’re defining 80+ percent as high accuracy? Yikes. That would be several million passengers detained a day without reason, or perhaps everyone detained at least once per multi-leg round trip. Or any terrorist group with more than 3 members would have a better-than-even chance of getting through…

What really scares me about this is that the machines will be tested against the highly trained humans, and be forced to become bug-for-bug compatible with human beliefs about who has malicious intent.
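To put rough numbers on that, here is a back-of-the-envelope base-rate calculation (the passenger volume, rates, and attacker count below are all assumed purely for illustration):

```python
# All figures are assumptions for illustration, not measurements.
passengers_per_day = 2_000_000    # rough order of magnitude for US air travel
true_positive_rate = 0.80         # "80% accurate" at spotting real malicious intent
false_positive_rate = 0.20        # flags 20% of innocent travelers
attackers_per_day = 1             # a generously high base rate

flagged_innocent = (passengers_per_day - attackers_per_day) * false_positive_rate
flagged_guilty = attackers_per_day * true_positive_rate

print(f"innocent people flagged per day: {flagged_innocent:,.0f}")
print(f"chance a flagged person is an attacker: "
      f"{flagged_guilty / (flagged_guilty + flagged_innocent):.6%}")
# ~400,000 innocent flags a day; a flagged traveler is an attacker
# roughly twice per million flags.

group = 4
print(f"chance at least one of {group} attackers slips through: "
      f"{1 - true_positive_rate ** group:.0%}")   # ~59%
```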

aaronius October 8, 2009 7:48 PM

All you need to do is put chips on everything – RFID microdust – and track proximity for association using MIT’s gaydar algorithm, adjusted for terrorists as defined by Bruce Willis movies and Chuck Norris, and then when they come near a movie airport – nab ’em.
The camera tech just puts a face on the cipher data.
Here, will you watch my coat while I visit the gents?

DC October 12, 2009 8:15 PM

I’d have to agree with everyone saying “well, how do you test this with real data anyway”.

I used to do DSP software development for a large company that made, among other things, classroom paging systems. Their CEO had the idea that it would be cool to have a “scream detector”, as the PA systems were full duplex and could listen as well as announce in the classrooms. Ideal to warn authorities or security if something really bad was going on in a class, right?

Well, kids scream all the time, mostly in fun, and even teachers lose it and get real close. You don’t want to bring in SWAT for that kind of thing, as kids learn fast how to get out of class, etc.

So, we needed some real screams, actual fright, and we did think of a couple of ways to get them without actually hurting anyone, just scaring them silly. We presented the need for real data, and a plan to get it, to the CEO (who, by the way, is one heck of a good guy). Result? Project terminated right away.

Can you think of the legal issues involved with, say, bringing people to a party and scaring the willies out of them, even with a disclaimer and foreknowledge that we’d be doing that? And we’d need a fairly big sample for our neural networks and so on to get a good dynamic range (e.g. few missed detects and few false alarms vs. a lot of correct answers). Getting fake screams is easy… real ones, not so much.

And yes, this really did happen. And it wouldn’t have been all that hard. At the time we had a vet with severe PTSD living on our campus, who scared most people just by looking at them. He volunteered to be really scary: say, when people went to use a porta potty ’round back, they’d see him wild-eyed, covered with red stuff and weapons, perhaps carving up an apparent corpse. It would have worked. Then we’d have all been in jail or the poorhouse.

So, no scream detector was ever developed, at least not by us.

As posters above have said, you can’t fake this kind of thing well at all, and it’s hard enough without fake data.
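For what it’s worth, the missed-detect vs. false-alarm tension DC describes is ultimately a threshold choice; a toy sketch with made-up detector scores (a real system would need scores from a model trained on genuine, not simulated, screams):

```python
# Toy illustration of the missed-detect / false-alarm trade-off.
# All scores are invented; they stand in for a detector's output.
real_scream_scores = [0.91, 0.84, 0.76, 0.95, 0.68]     # genuine fright
playful_scream_scores = [0.42, 0.55, 0.71, 0.30, 0.62]  # kids screaming in fun

def rates(threshold):
    misses = sum(s < threshold for s in real_scream_scores) / len(real_scream_scores)
    false_alarms = sum(s >= threshold for s in playful_scream_scores) / len(playful_scream_scores)
    return misses, false_alarms

for t in (0.5, 0.7, 0.9):
    miss, fa = rates(t)
    print(f"threshold {t:.1f}: miss rate {miss:.0%}, false-alarm rate {fa:.0%}")
# Raising the threshold trades false alarms for missed detects; without
# genuine-scream data there is no way to know where to set it.
```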

Mashiara October 28, 2009 12:17 PM

In our martial arts training (Bujinkan) we sometimes have special exercises related to the “killing intent” (or “intent to harm”) and “feeling” in general.

One exercise is usable even for beginners and aptly demonstrates the concept. I’m an atheist and a firm believer in science, but I cannot explain how it works; anyway, just about always the “victim” is able to recognize the “killer” from a group (the victim sees them only from behind, so reading face/body language is less of a question). There is a special “feeling” you get from the “killer” if they’re mentally visualizing how they intend to harm you (properly).

One reason to learn to recognize these “feelings” is to be able to mask and/or fake them.

Another “feeling” or “reading” thing is the feeling of launched attacks, and the “sending” of the feeling of a specific attack while actually doing something else (this will seriously mess with your balance [in all meanings of the word] for a moment, when done to you).

I have been training for slightly over ten years and still can’t properly “send” “feelings” of attacks without attacking; I am fairly sensitive to “feeling” others, though.

Anyway, this incoherent rambling was mainly meant to point out that it’s possible to recognize these kinds of “intents” (at least by humans), but that faking them is hard (as pointed out by the “shriek detector guy”), and thus testing people or devices is going to be a major problem…

Kent November 23, 2009 10:21 AM

Several have already noted Ekman’s caution of the need for a “highly trained human” to perform screening.

Who do you think might be offering “high-level training” for humans?

If you guessed it is Ekman and his MTET (or something like that), you’d be right.

His “culturally universal facial expressions reveal culturally universal emotions which reveal culturally universal reactions of liars” is a crock of hoo-ee.

The basis of his “universality” theory comes from a couple of trips he took to “Stone Age” tribes in Papua New Guinea. Reading his articles on these trips and his descriptions of his research is pretty revealing. Interesting, but clearly not supportive of any such sweeping “universality” as Ekman claims.

He’s a huckster hyping his “system” just as most deception-detection system proponents are.

Buyer beware.
