Training Baggage Screeners

The research in G. Giguère and B.C. Love, “Limits in decision making arise from limits in memory retrieval,” Proceedings of the National Academy of Sciences, vol. 110, no. 19 (2013) has applications in training airport baggage screeners.

Abstract: Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.
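The abstract’s core contrast — a machine that weights every stored training example at test versus a human who stochastically retrieves only a few — can be illustrated with a toy exemplar classifier. This is my own minimal sketch, not the authors’ model: the similarity function, the category distributions, and the retrieval limit `k=3` are all illustrative assumptions.

```python
import math
import random

def classify(memory, probe, k=None):
    """Similarity-weighted vote over stored (feature, label) examples.
    k=None weights every stored example (machine-style retrieval);
    an integer k stochastically samples only k examples, modelling a
    limited human memory-retrieval bottleneck."""
    pool = memory if k is None else random.sample(memory, min(k, len(memory)))
    score = sum(label * math.exp(-abs(probe - feature)) for feature, label in pool)
    return 1 if score >= 0 else -1

def accuracy(memory, test_items, k=None, trials=200):
    """Average accuracy over repeated stochastic retrievals."""
    hits = 0
    for _ in range(trials):
        for feature, label in test_items:
            hits += classify(memory, feature, k) == label
    return hits / (trials * len(test_items))

random.seed(0)
# "Actual" training set: two overlapping category distributions.
actual = [(random.gauss(-1, 1), -1) for _ in range(100)] + \
         [(random.gauss(+1, 1), +1) for _ in range(100)]
# "Idealized" training set: the same categories with the overlap removed.
idealized = [(f, l) for f, l in actual if (f < 0) == (l < 0)]

test = [(random.gauss(-1, 1), -1) for _ in range(50)] + \
       [(random.gauss(+1, 1), +1) for _ in range(50)]

for name, mem in [("actual", actual), ("idealized", idealized)]:
    print(name, "full memory:", round(accuracy(mem, test), 2),
          "k=3 retrieval:", round(accuracy(mem, test, k=3), 2))
```

The paper’s prediction, in these terms, is that idealized training should help the `k=3` retriever far more than the full-memory classifier, because limited sampling from an overlapping training set injects noise that no amount of training removes.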

Posted on May 24, 2013 at 12:17 PM


Alex May 24, 2013 1:00 PM

As long as the TSA keeps recruiting its employees by advertising on delivery pizza boxes, and hiring people who couldn’t hack it at McDonald’s, you’re going to get the lowest common denominator. At some point you get beyond a person’s skillset. For this lot, I think we might just be better off with algorithms doing the job. Or more realistically, bring back the old pre-TSA baggage screeners. They seemed quite effective as long as they were actually doing their jobs. I do want to point out that we went nearly 30 years without a domestic hijacking with the pre-TSA security.

Instead, it’d be nice if we had professional security agents (they’re agents, not sworn officers, despite the intentionally-intimidating blue smurf shirts & $145 tin stars). In my line of work, I often have to attend hearings in various federal courthouses and various federal buildings. Occasionally even a local county courthouse. In NONE of those have I ever encountered the ineptitude and rudeness and bullying of a typical TSA checkpoint, and I can’t remember the last time I’ve heard of there being an issue with people getting past the security guards (and often sworn LEOs) at these venues. I should point out that I’ve been stopped multiple times in these venues for some of what I’m carrying, but in each instance the security guards were polite, almost apologetic, and efficient. Sometimes I do forget that I’ve left a piece of banned electronics bouncing around in the bottom of my briefcase.

Oh, and yes, I’ve never ONCE had someone tell me at a federal building to remove my shoes or throw out a bottle of water.

HairlessHacker May 24, 2013 1:32 PM

I’m amused that gamblers and baggage screeners are grouped together as requiring the same skill set.

Jon May 25, 2013 2:57 AM

I admit I didn’t read the whole thing, but from the abstract several things strike me:

First, before we go anywhere with this, let’s replace ‘predicted’ with ‘proven’.

Second, what is an ideal distribution of noisy data?

Third, the very initial sentence presumes that there will be a winner. There might not be (imagine a handful of people playing blackjack in a casino, or total thermonuclear war).

Fourth, who decides how to distort the real data into an ‘ideal’ form?

Fifth, isn’t this just an indication of lousy education in statistics?

Fix that first, I’d suggest.



Otter May 25, 2013 4:46 AM

No need to fix anything. Go to the people who have long histories in training successful guards. Do what they do.

We should probably stop ignoring the plentiful evidence that TSA agents are doing exactly what they are paid to do.

Clive Robinson May 25, 2013 6:30 AM

@ Jon,

To answer your questions you are going to have to do a bit more reading in the area.

There is currently a bit of excitement over the fact that models of brain behaviour appear to follow the lines of quantum probability more than classical probability.

One of the authors (Love) co-wrote a collaborative paper back in 2011 which can be seen as part of this,

And before anybody asks for an opinion on it, my view is curiously open. Basically, a friend who works in the field pointed it out to me when we were talking about Roger Penrose’s non-classical view of the mind.

But Roger Penrose was by no means the first, though he appears to have received the most “public stick” over it (possibly because he is a popular author as well as a mathematician). You can read a potted history or timeline in the first part of,

The author of that is Piero Scaruffi [1], who is fairly well known in the AI and cognitive science fields (as well as being a music critic). He also wrote a book in 2006 on the broader area of consciousness and keeps an updated version online at,

Which will give you other areas to research if you are curious.


Troels A. May 25, 2013 4:33 PM

@ Alex:
Please, haven’t we been shown enough already that ‘algorithms’ just cannot do this job? Even an unskilled and mostly unintelligent human is going to beat an algorithm at this job most of the time.

Where the TSA advertises is also pretty irrelevant. Pizza boxes, newspapers, whatever; it doesn’t matter. What matters is the selection criteria. TSA agents need to be intelligent, attentive and skilled at improvised problem solving. And even then, they need to be trained and able to adapt dynamically to the task they are given.

It seems to me that the last part is the problem. Bruce has mentioned this before: some TSA agents know that the system they are serving is broken and that confiscating snow globes from children doesn’t improve airline security.

It’s not that the TSA lacks intelligent people (although intelligence should certainly be a higher requirement in their recruiting department, because there are some real dummies in there). It’s that intelligent people are asked to act like robots and perform unintelligent, dull jobs that do little good.

MikeA May 25, 2013 5:19 PM

I’d be happy if the trainers could just make sure “Thou Shalt Not Steal” was burned into the agents’ thought-processes.

Tangurena May 27, 2013 11:43 AM

Part of me is disappointed that neither the study authors nor the TSA use “recognition-primed decision making” (RPD). Fire and police departments use that model of training, as does the US military. A book that describes RPD is “Sources of Power” by Klein.

Another part of me is disappointed in the TSA because the testing center where I take my Microsoft certification exams also hosts TSA baggage-screener exams. Once you look over their shoulders at what they’re being tested on, you’ll drive instead of fly.

Dirk Praet May 27, 2013 8:14 PM

@ Troels A.

It’s that intelligent people are asked to act like robots and perform unintelligent, dull jobs that do little good.

And there you go. Anyone with half a brain and/or a diploma can get a better job, and those who can’t will after a while go as bonkers as Charlie Chaplin in “Modern Times”. The administration knows that, and will therefore primarily target folks who are not necessarily the sharpest knives in the drawer, have an interesting potential for intimidation, and can be conditioned into doing the job without asking too many questions about whether what they are doing is actually useful and contributes to better security.

paul May 28, 2013 9:23 AM

Aren’t idealized distributions pretty much how we teach everything else? We almost always teach people the basic rules of a field and add the caveats later, rather than giving them a big pile of everything, including the niggling exceptions.

Jonadab June 4, 2013 6:28 AM

@paul: Indeed.

In fact, tests are generally constructed as idealized distributions over the material being tested. An idealized test provides better discrimination (between those who know the material well and those who don’t) than a mathematically random test.

Also, idealized tests are perceived by the human mind as being more “random” and therefore more fair. Non-idealized distributions are perceived by the human mind as being non-random and “rigged”, even if they were in fact generated by a scrupulously unbiased or even cryptographically sound random process.

Suppose you have a border checkpoint. You don’t want to search most of the vehicles, because that would take too long and not enough people would get to visit Queen Victoria Park (and spend their tourist dollars), which would be bad for everyone concerned.

So you decide to search 1/20th of the vehicles, at random. If you use a truly random mechanism to decide which ones to stop, at a probability of one in twenty for each, you’re occasionally going to go a hundred cars or more without stopping any, and at other times you’re going to stop two cars in a row, or even three. If it’s a very active border crossing, these “rare” events will happen multiple times per day. This is bad, because it makes people think the officers running the checkpoint are incompetent and moody and possibly drunk.

So what you actually do is significantly less random: you weight the probabilities for each vehicle based on how long it’s been since you stopped one. This makes the distribution more idealized (closer to even).

You don’t want a perfectly even distribution (every twentieth car exactly), because that’s too obvious and too predictable. You do need a random element. But you want something that looks a lot more even than the truly random distribution would.
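The checkpoint scheme described above — random, but weighted so the gap since the last stop never gets too short or too long — can be sketched as a windowed selector. This is a hypothetical illustration, not any agency’s actual policy; the window bounds `lo=5` and `hi=35` are my own choices, picked so the long-run stop rate works out to roughly one in twenty.

```python
import random

def make_selector(lo=5, hi=35):
    """Return a per-vehicle decision function that stops exactly one
    vehicle, chosen uniformly at random, in each window of lo..hi
    vehicles since the previous stop. Long-run rate is 2/(lo+hi)
    (here 1/20), but gaps shorter than lo or longer than hi never
    occur, unlike an independent coin flip per vehicle."""
    gap = 0  # vehicles seen since the last stop

    def should_stop():
        nonlocal gap
        gap += 1
        if gap < lo:
            return False
        # Among the hi - gap + 1 remaining slots in the window,
        # pick this one with uniform probability; at gap == hi the
        # probability reaches 1, so a stop is guaranteed.
        if random.random() < 1.0 / (hi - gap + 1):
            gap = 0
            return True
        return False

    return should_stop
```

Compared with flipping an independent 1-in-20 coin for every car, this keeps which vehicle gets stopped unpredictable while ruling out the two events the comment warns about: back-to-back stops and runs of a hundred cars with no stop at all.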

