Human Pattern-Matching Failures in Airport Screening

I’ve written about this before: the human brain just isn’t suited to finding rare anomalies in a screening situation.

The Role of the Human Operator in Image-Based Airport Security Technologies

Abstract: Heightened international concerns relating to security and identity management have led to an increased interest in security applications, such as face recognition and baggage and passenger screening at airports. A common feature of many of these technologies is that a human operator is presented with an image and asked to decide whether the passenger or baggage corresponds to a person or item of interest. The human operator is a critical component in the performance of the system and it is of considerable interest to not only better understand the performance of human operators on such tasks, but to also design systems with a human operator in mind. This paper discusses a number of human factors issues which will have an impact on human operator performance in the operational environment, as well as highlighting the variables which must be considered when evaluating the performance of these technologies in scenario or operational trials based on Defence Science and Technology Organisation’s experience in such testing.

Posted on September 13, 2011 at 1:46 PM

Comments

keith September 13, 2011 3:16 PM

True. You can steal anything as large as a television or car from a shop by simply having the superficial appearance of a shopper rather than a thief. Even people trained to look for particular things are mostly in a low-level trance state.

Ryan Cunningham September 13, 2011 4:36 PM

Unless the task involves simple, rote processing, machines will fail at it for the same reasons humans fail at it. Machines might be more consistent, but they’ll fail at pattern matching just as easily as we do.

This is a pamphlet to justify defense spending on technology. Be skeptical of the message and the messenger.

EH September 13, 2011 5:00 PM

I’m not sure what “just as easily” means in the above analogy, since machines don’t have brains and eyes that get tired. They may both fail at pattern matching, but not for the same reasons.

I do agree that all kinds of computerized pattern-matching are so young and fraught with error that there is almost certainly a funding ploy behind any promotion of a given technology and/or approach. If this stuff ever actually worked, it would be headline news before the abstract was released.

Being a passenger who complains about brown people on a flight doesn’t pay nearly as well as doing it as a profession.

nobodyspecial September 13, 2011 5:21 PM

“there is almost certainly a funding ploy behind any promotion of a given technology”

Or a political motive. A ‘security’ person picking every brown passenger out for a search leads to legal trouble. Have a computer do it and it’s OK.

GregW September 13, 2011 9:30 PM

I can’t see this particular article’s PDF, but human operators have long been known to suffer fatigue and reduced alertness after a fairly short time on task. I had a friend who worked at a Disney World water park. Apparently, based on research it knew of, Disney deliberately kept operator shifts in safety-oriented positions (lifeguard, water-slide operator, etc.) to no longer than 20 minutes to avoid operator fatigue and that sort of “eyes glazing over” effect. Operators would rotate to a different role every 20 minutes.

CB September 13, 2011 9:36 PM

Following up on @David’s point… I’m familiar with the physics-astronomy project called LIGO, which is searching for very rare gravitational-wave signals. Their teams face similar human problems that bias the results. For example, as conservative scientists, they feel a strong urge to rationalize a candidate signal away as a nuisance disturbance or random noise. The team got around the problem by injecting false signals into the data at a low level (known only to a few people). Since this practice started, the team has to expect some signals to show up all the time, never knowing whether any given one is real. Either way, they have to be ready to respond.
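CB’s blind-injection anecdote can be made concrete with a small sketch. This is not LIGO’s actual pipeline; it is a hypothetical Python illustration assuming a simple stream of noisy readings, a made-up injection rate, and a plain threshold detector, with the injection log kept secret from the analysis step.

```python
import random

def blind_inject(readings, injection_rate=0.001, injected_amplitude=6.0, seed=None):
    """Return a copy of the readings with synthetic 'signals' added at random
    positions. Only the holder of the secret log knows which events are fake."""
    rng = random.Random(seed)
    injected = list(readings)
    injection_log = []  # kept secret from the analysis team
    for i in range(len(injected)):
        if rng.random() < injection_rate:
            injected[i] += injected_amplitude
            injection_log.append(i)
    return injected, injection_log

def find_candidates(readings, threshold=4.0):
    """Analysts must treat every excursion above threshold as a real candidate,
    because they cannot tell injections from genuine events."""
    return [i for i, x in enumerate(readings) if x > threshold]

# Illustrative run: noise-only data with hidden injections mixed in.
noise = [random.gauss(0.0, 1.0) for _ in range(10_000)]
data, secret_log = blind_inject(noise, seed=42)
candidates = find_candidates(data)
# The team follows up every candidate; only afterwards is secret_log revealed
# to say which ones were injected.
```

The design point is the same one CB describes: because analysts know fake signals are present but not where, they can no longer rationalize every candidate away.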

Chuck Finley September 13, 2011 11:39 PM

@CB In the source-code vulnerability review domain, we use automated tools. The tools produce a lot of noise: false negatives, false positives, and poor automated findings are the expected artifacts of code-scanning algorithms that are not yet mature.

Interestingly, programmers will almost always miss the vulnerable code pattern and focus almost exclusively on the individual source-file and line-number instances the tool flags.

So the qualitative effect of the code review process (using the current best tools and methods) depends far more on the maturity of the humans than on the maturity of the automated signal-gathering process.

Where the tools are not very good, the humans have to be exceptional to compensate. Where the tools produce mostly direct hits or near hits, mediocre humans can do less damage.

Programmers also almost always fail to realize, at first, that they are responsible for the pattern of exposure everywhere it may exist in the code, not just the instances the tool noted. This bias results in really poor mitigation choices when the team does not have a security code SME on hand to shoot down the bad ideas.

It’s all about accountability for good or poor decision-making.
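Chuck Finley’s instance-versus-pattern point can be illustrated with a short sketch. The scanner output format, rule names, and file names below are hypothetical; the idea is that grouping findings by rule pushes toward a pattern-level fix rather than line-by-line patching.

```python
from collections import defaultdict

# Hypothetical scanner output: (rule_id, file, line) tuples. Real tools emit
# richer records, but this shape is enough to show the triage difference.
findings = [
    ("SQL_INJECTION", "orders.py", 88),
    ("SQL_INJECTION", "reports.py", 12),
    ("XSS", "views.py", 201),
]

# Instance-oriented triage: what programmers tend to do, one flagged line at a time.
for rule, path, line in findings:
    print(f"patch {path}:{line} ({rule})")

# Pattern-oriented triage: group by rule so the class of exposure is addressed
# wherever it can occur in the codebase, not only at the flagged lines.
by_pattern = defaultdict(list)
for rule, path, line in findings:
    by_pattern[rule].append(f"{path}:{line}")

for rule, instances in by_pattern.items():
    print(f"{rule}: {len(instances)} flagged instance(s); audit the whole codebase for this pattern")
```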

Andrew September 14, 2011 1:28 AM

“To what degree is this ameliorated by introducing false threats at various frequencies?”

A lot. Of course, it doesn’t have to be a false threat, just something that matches the parameters.

Old School (before 2001) was to run an exemplar through, of a fairly obvious type (a disabled gun in a block of transparent plastic), and if the exemplar got through, the operator would be yanked and the airline and/or contractor fined.

A much classier approach is to set up a reward system. Run an exemplar through once a day. If found, give a small but nice reward. If not found, require re-training. The key is that finding or not finding the exemplar carries about the same day-to-day operating cost.

“Look sharp! The rest of the day off is waiting for you in one of these bags!”
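Read as a procedure, Andrew’s scheme is simple enough to write down. The sketch below is a hypothetical Python rendering, not an operational screening process; the class name, reward and consequence text, and the 80% detection probability are all made up for illustration.

```python
import random
from dataclasses import dataclass

@dataclass
class ExemplarAudit:
    """Once per shift, run a known test item (exemplar) through the lane.

    Finding it earns a small reward; missing it triggers re-training, so the
    day-to-day cost to the operation is roughly the same either way.
    """
    reward: str = "small reward (e.g. early finish)"
    consequence: str = "mandatory re-training"

    def outcome(self, exemplar_detected: bool) -> str:
        return self.reward if exemplar_detected else self.consequence

# Illustrative run with an assumed 80% chance the operator spots the exemplar.
audit = ExemplarAudit()
detected = random.random() < 0.80
print(audit.outcome(detected))
```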

Otter September 14, 2011 3:28 AM

If TSA hired fewer thugs, and more intelligent alert observers, there might be some point in discussing these issues.

Put another way: TSA continues to hire the same sort of people; therefore TSA is satisfied with their performance.

NobodySpecial September 14, 2011 10:16 AM

@Andrew – it still doesn’t work. The problem is that whenever you incentivise people, they optimise for that incentive.

Give them a reward for spotting a training gun and they will happily ignore all real guns until they find the training one – and once the training one is found, the rest of the day they will check nothing. This doesn’t even have to be a conscious decision – you can train lab rats to do the same thing.

staudenmaier September 14, 2011 11:45 AM

It’s too bad this is a “pay” article. It would do the general public a lot of good to know how effective, or how ineffective, these scanning devices are.
