Schneier on Security
A blog covering security and security technology.
February 4, 2009
Racial Profiling No Better than Random Screening
Not that this is any news, but there's some new research to back it up:
The study was performed by William Press, who does bioinformatics research at the University of Texas, Austin, with a joint appointment at Los Alamos National Labs. His background in statistics is apparent in his ability to handle various mathematical formulae with aplomb, but he's apparently used to explaining his work to biologists, since the descriptions that surround those formulae make the general outlines of the paper fairly accessible.
Press starts by examining what could be viewed as an idealized situation, at least from the screening perspective: a single perpetrator living under an authoritarian government that has perfect records on its citizens. Applying a profile to those records should allow the government to rank those citizens in order of risk, and it can screen them one-by-one until it identifies the actual perpetrator. Those circumstances lead to a pretty rapid screening process, and they can be generalized out to a situation where there are multiple likely perpetrators.
Things go rapidly sour for this system, however, as soon as you have an imperfect profile. In that case, which is more likely to reflect reality, there's a finite chance that the screening process misses a likely security risk. Since it works its way through the list of individuals iteratively, it never goes back to rescreen someone that's made it through the first pass. The impact of this flaw grows rapidly as the ability to accurately match the profile to the data available on an individual gets worse. Since we've already said that making a profile is challenging, and we know that even authoritarian governments don't have perfect information on their citizens, this system is probably worse than random screening in the real world.
In the real world, of course, most of us aren't going through security checks run by authoritarian governments. In Press' phrasing, democracies resample with replacement, in that they don't keep records of who goes through careful security screening at places like airports, so people get placed back on the list to go through the screening process again. One consequence of this is that, since screening resources are never infinite, we can only resample a small subset of the total population at any given moment.
Press then examines the effect of what he terms a strong profiling strategy, one in which a limited set of screening resources is deployed solely based on the risk probabilities identified through profiling. It turns out that this also works poorly as the population size goes up. "The reason that this strong profiling strategy is inefficient," Press writes, "is that, on average, it keeps retesting the same innocent individuals who happen to have large pj [risk profile match] values."
According to Press, the solution is something that's widely recognized by the statistics community: identify individuals for robust screening based on the square root of their risk value. That gives the profile some weight, but distributes the screening much more broadly through the population, and uses limited resources more effectively. It's so widely used in mathematical circles that Press concludes his paper by writing, "It seems peculiar that the method is not better known."
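The square-root rule is easy to check in a toy model. The sketch below is my own construction, not Press's exact formulation: a single bad actor hidden in a population, screening one person per step with replacement (the "democratic" setting). It computes the expected number of screenings under uniform, strong-profiling, and square-root-weighted sampling:

```python
def expected_tests(weights, pj):
    """Expected number of screenings until the single bad actor is found,
    testing one person per step, sampled with replacement using the given
    weights.  If person i is screened with probability q_i at each step,
    the wait until i is screened is geometric with mean 1/q_i."""
    W, P = sum(weights), sum(pj)
    return sum((p / P) * (W / w) for p, w in zip(pj, weights))

# toy profile: 5 people match the profile strongly, 95 weakly
pj = [0.9] * 5 + [0.1] * 95

uniform = expected_tests([1.0] * len(pj), pj)        # ignore the profile
strong = expected_tests(pj, pj)                      # weight by p_j
sqrt_w = expected_tests([p ** 0.5 for p in pj], pj)  # weight by sqrt(p_j)
print(f"uniform: {uniform:.1f}  strong: {strong:.1f}  sqrt: {sqrt_w:.1f}")
```

In this toy setup, weighting screenings by p_j is no better than ignoring the profile entirely (both take 100 expected tests on 100 people), while square-root weighting does strictly better (about 86). That mirrors both the post title and Press's conclusion.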
Other articles on the research here, here, and here. Me on profiling.
Posted on February 4, 2009 at 12:50 PM
Interesting paper. I like the idea of separating the sampling probability from the prior probability, and optimizing the former. I'd never heard of the square-root sampling strategy either, so I guess I learned something today.
I don't care for the section on "Probabilistic Recognition", though. The assumption that multiple looks at an individual are probabilistically independent and identically distributed (iid) is very naive. I can believe that the advantage of the Optimal Authoritarian strategy over the Optimal Democratic strategy is smaller in this circumstance, but not in the way inferred from those curves.
In addition to the above-cited problem with the iid assumption, there is an implicit assumption about the probabilistic secondary screening: that if it "succeeds", then it is correct. That is, no false positives. The discussion in this section needs to be broadened considerably to take a false-positive rate into account.
Given the last paragraph of the quoted text, the title of this entry seems misleading -- it sounds like profiling (racial or otherwise, provided the selected population represents a higher risk than the general population) doesn't work on its own, but when it's used as a weighting factor it improves the efficiency of screening.
The title led me to think that the quoted material would demonstrate racial profiling was completely without merit.
An argument based on models rather than empirical data seems rather less dispositive of the issue than the pithy post title suggests.
Frank: An argument based on models rather than empirical data seems rather less dispositive of the issue than the pithy post title suggests.
And, pray tell, how would one go about doing this empirically, since one's method of doing the survey is, in and of itself, in question? The question is the math: your point is like suggesting that we test electromagnetism by studying psychology.
Profiling works. It's why it's used to catch crooks. Mind you, I'm talking about generic profiling, not specifically racial profiling.
It's associated with optimization algorithms. Given a specific function to optimize, a heuristic is deemed optimal if, given the same information, no other heuristic could arrive at a better solution. See A* for more on this.
The problem is that the data in probability is very fuzzy. People who fight against racial profiling argue that the amount of information stored in race is very minimal. If the amount of information is actually 0, then random sampling is actually optimal!
What the article describes is that, if you can identify a terrorist as a test on a uniform random variable (similar to rolling a d20 and comparing it to your THAC0), then sampling based on the square root of that critical value yields the "best results" according to their heuristic.
I'd like to see the actual article. Something that's missing is the proper definition of "best." I expect it's a well-accepted combination factoring in Type I and Type II errors.
This of course falls into a realm that's totally unanswered - what is "best."
Once we define best, we can then analyze whether or not racial profiling data contains enough information to yield optimal results beating random sampling.
See also the multi-armed bandit problem; it's rather similar. The idea is that you have N arms on a slot machine, each with different odds and payouts. All are unknown to you. How does one optimize one's payout? It turns out to be a rather fun little math problem.
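For readers unfamiliar with the bandit problem the commenter mentions, here is a minimal sketch of one classic strategy, epsilon-greedy: mostly pull the arm with the best observed payout, but occasionally explore a random arm. The arm means and parameters are illustrative, not taken from any particular source:

```python
import random

def epsilon_greedy(true_means, steps=5000, eps=0.1, seed=1):
    """Bernoulli multi-armed bandit: with probability eps pull a random
    arm (explore); otherwise pull the arm with the best observed average
    payout so far (exploit).  Untried arms are always pulled first."""
    rng = random.Random(seed)
    n = len(true_means)
    pulls, wins, total = [0] * n, [0] * n, 0
    for _ in range(steps):
        untried = [i for i in range(n) if pulls[i] == 0]
        if untried:
            arm = untried[0]
        elif rng.random() < eps:
            arm = rng.randrange(n)                                  # explore
        else:
            arm = max(range(n), key=lambda i: wins[i] / pulls[i])   # exploit
        reward = 1 if rng.random() < true_means[arm] else 0
        pulls[arm] += 1
        wins[arm] += reward
        total += reward
    return total, pulls

total, pulls = epsilon_greedy([0.2, 0.5, 0.8])
# the best arm (index 2) should end up with the bulk of the pulls
```

The connection to screening: each "arm" is a way of allocating scarce checks, and you only learn its value by spending checks on it.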
And to be more specific on Type-I and Type-II errors for those not versed in statistics:
Any hypothesis can be right or wrong - it's the definition of a statistical hypothesis. Thus you can make two mistakes. You can claim something is true when it is false, or you can claim it is false when it is true. The titles "Type-I" and "Type-II" are tagged to these mistakes. Type-I errors are defined to be the more dangerous mistake. In our legal system, the Type-I error is an innocent falsely convicted. A Type-II error would be a guilty person walking free.
There are parameters you can tweak to change the rate of Type-I and Type-II errors, and they tend to be intertwined. In general, to "optimize" the system, you have to decide what the tradeoff is between the errors.
If we REALLY felt innocent until proven guilty, we couldn't lock anyone up, because a Type-I error would be unacceptable. If you look into the actual legal wordings, there are phrases like "beyond reasonable doubt" which bound this, allowing us to accept a few Type-I errors to drastically decrease Type-II errors (in particular, letting us lock up anyone at all!)
Bruce's final article on the topic pointed out that human intuition is very good at picking that balance. It's why we can have phrases like "beyond reasonable doubt" and have the system work. The more computers get involved, the harder it is to invoke that intuition. This means the computers have to prove their profiles are better than the intuition of people.
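The threshold tradeoff described above can be illustrated with a toy model: suppose evidence scores for innocent and guilty people follow two overlapping normal distributions (the numbers are illustrative only). Raising the conviction threshold trades Type I errors for Type II errors:

```python
from statistics import NormalDist

# hypothetical evidence-score distributions (illustrative numbers)
innocent = NormalDist(mu=0.0, sigma=1.0)
guilty = NormalDist(mu=2.0, sigma=1.0)

rates = []
for threshold in (0.5, 1.0, 1.5):
    type_i = 1 - innocent.cdf(threshold)   # innocent convicted (false positive)
    type_ii = guilty.cdf(threshold)        # guilty acquitted (false negative)
    rates.append((threshold, type_i, type_ii))
    print(f"threshold {threshold}: Type I {type_i:.2f}, Type II {type_ii:.2f}")
```

Sliding the threshold up makes Type I errors rarer and Type II errors more common; "beyond reasonable doubt" is, in effect, a choice of where to put that threshold.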
They're not making use of available information. When a person passes a screening, the probability that he's a bad guy drops (by the effectiveness of the screening). It doesn't hit zero (as assumed in the first example) nor does it remain unchanged (as in the latter).
Apply Bayes Theorem to get the correct probabilities, and act based on those.
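A minimal sketch of that Bayesian update, under the simplifying assumptions of a screening with a fixed detection rate and no false positives (both assumed numbers, for illustration):

```python
def posterior_after_pass(prior, detection_rate):
    """P(bad | passed screening) by Bayes' theorem.  Assumes a screening
    that catches a bad actor with probability `detection_rate` and never
    flags an innocent person (no false positives, for simplicity)."""
    p_pass_given_bad = 1 - detection_rate
    p_pass = prior * p_pass_given_bad + (1 - prior)   # innocents always pass
    return prior * p_pass_given_bad / p_pass

p = 0.01                  # assumed prior that a given person is a bad actor
history = [p]
for _ in range(3):
    p = posterior_after_pass(p, detection_rate=0.9)
    history.append(p)
# each pass shrinks the probability, but it never reaches zero
```

This captures the commenter's point: after a pass, the probability drops (unlike the "democratic" model, which resets it) but does not hit zero (unlike the "authoritarian" model, which removes the person from the list).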
What specific cases are you referring to when you say that (generic) profiling is used to catch crooks?
The effectiveness of so-called "fbi profiling" has been called into doubt here and elsewhere and I can't really think of any other kinds of criminal profiling of large populations.
Oh, I hadn't realized the paper isn't generally available. The abstract is here:
PNAS makes the full text available only to subscribers. If you are at a university, chances are your library has a site license. I don't think I can make a version available without violating someone's copyright.
> Profiling works. It's why it's used to catch crooks.
Profiling produces predictable results when the subjects are themselves well predicted by the model the profile represents.
In other words, profiling works to catch crooks because crooks are predictable.
People with (relative to the police, as an organization) little in the way of resources or experience will tend to look at their problem in the same, narrow way as any other (ie: I need money; I'll hold up a grocery store). The police have been studying these problems for hundreds of years. But they need very specialized resources to catch, say, a serial killer or a terrorist, because the typical profiles do not apply (because they are not typical cases). Their motivation, and thus approach to their problem, is *not* the same as a "typical" criminal.
No-one is saying profiling doesn't work; but saying that a terrorist is more likely to be an arab is like saying a shoplifter is more likely to be a negro. That's wrong. A poor person is more likely to be a shoplifter; in some places, a negro might be more likely to be poor; but misrepresenting the model as simply "shoplifters are more likely to be negroes" is, statistically, foolish: it makes you understand the fundamental problem even less, and encourages ineffective or even counterproductive responses.
Simplified extremely: of course racial profiling doesn't work. Races are huge groups; terrorists are tiny groups. The probability that any individual in a particular racial group will be a terrorist is so low that the ratio will be impossible to differentiate from the sampling error (i.e., you cannot rule out that there were just as many terrorists in another racial group and your sample simply missed them all).
"(similar to rolling a d20 and comparing it to your THAC0)"
Darn. Now I have to take "Schneier on Security" off the list of places that I wouldn't expect to find an obscure (and years out of date) Dungeons and Dragons reference.
"(similar to rolling a d20 and comparing it to your THAC0)"
The sad thing is that I don't even play D&D, and yet I knew exactly what that meant.
Reminds me of the slashdot story earlier today about the two convenience stores robbed by a guy with a Bat'leth (two-handed bladed Klingon weapon from Star Trek). The funny thing was, both convenience store clerks recognized what it was. Many slashdot comments expressed surprise at this, but I sort of recall that when I was a kid, almost everybody I knew watched Star Trek shows. A Bat'leth is instantly recognizable to anyone who watched TNG as a kid, so it doesn't surprise me that two convenience store clerks knew what it was.
It did surprise me that one of the clerks told the guy to get stuffed, though! Either a brave or foolish thing to do. I don't think I would have the cojones to tell off a hoodlum carrying a six-foot bladed weapon expressly designed to disembowel or decapitate.
Sorry for this OT babble.. time for bed!
RH: your argument involves static analysis of an abstract situation. There are at least two variables I can see that suggest you may be wrong.
a) if you are known to be using racial profiling that can be used against you;
- if you search only black terrorists, then the white terrorists, even though they are fewer in the group, will be the ones chosen to carry the equipment.
b) if you are known to be discriminating against a group, that legitimises the terrorists.
- normally people from that group would report their suspicions against other members of their group, but they feel that they can't because you are their enemy.
The best arguments are not whether racial profiling works in some academic scenarios, but rather how it works in real life with real people.
"Profiling works. It's why it's used to catch crooks."
What do you mean by "Profiling".
A lot of police know that certain criminals on their patch do certain crimes a certain way (effectively a signature); generally they call it an MO.
Then there is the very iffy kind of criminal psychology profiling used against Colin Stagg (the man finally cleared) over the brutal murder of Rachel Nickell on Wimbledon Common in the UK.
Briefly: having no real evidence from the murder and no identifiable suspects, the Special Operations group (SO10) of the Met Police decided on a new approach.
They approached criminal psychologist Paul Britton (then of the Towers Hospital in Leicester) for help.
He supplied an "offender profile" that was used to identify Colin Stagg as a likely suspect.
SO10 went on to mount a honey trap called "Operation Ezdell", where a female police officer contacted Colin Stagg and, through a labyrinth of deceptions and fantasies about violent sex, tried to get Colin to admit to the murder (which he did not).
This operation was effectively run by Inspector Keith Pedder, who has claimed Paul Britton was responsible for the direction and methods used throughout.
Even with no evidence and no confession, Colin Stagg was arrested. After he had spent a year in jail, when Colin Stagg came to trial the judge, Mr Justice Ognall, accused SO10 of trying to incriminate a suspect by "deceptive conduct of the grossest kind".
He then ordered that the entrapment evidence gained by Operation Ezdell be withheld, at which point the prosecution, having no other evidence, withdrew its case.
Colin Stagg had his life destroyed by those behind Operation Ezdell and the media. He was subject to various hate crimes and was awarded significant damages against the Met Police.
However, it was not until December last year, when Robert Napper pleaded guilty on the grounds of diminished responsibility to Rachel Nickell's manslaughter, that Colin was finally cleared.
Needless to say, the UK media jumped on the story yet again, and a considerable number of details were brought out, along with various talking heads and experts; it has only been the global banking crisis that has kept it from being a bigger story...
I have to agree with the previous posters: ethical questions aside, this article does not claim that racial profiling does not work; it only claims that it does not work if done naïvely. To the contrary, they determine an optimal strategy for profiling which -- according to their model -- approximately maximises the available benefit from including race (or any other particular parameter) in your screening system.
> Darn. Now I have to take "Schneier on Security" off the list of places that I wouldn't expect to find an obscure (and years out of date) Dungeons and Dragons reference.
Actually, there have been several previous D&D references here. But the one where Bruce reveals he knows the game himself is this:
He went to too much work.
He failed to account for the fact that the "terrorist" in this case is intelligent.
If I'm an evil terrorist plotting to blow up a plane, no amount of profiling will ever catch me because I will sneak the bomb on board in the luggage of my Chinese girlfriend.
While the article is mathematically correct, it is simply absurd. Using it as a reason to prefer random searches over profile-based searches is silly. The model it presents is simply inadequate. No one wants to minimize the cost of testing people until you find a malefactor. The minimum should be calculated on the criminal event, given limited resources.
So, I hereby present another simplistic model, better suited to the cases the average reader has in mind: given N people, one can only test K of them (0 < K < N).
I am not trying to rebut the essay, but we must be careful in applying its results. It is meaningful, for example, to an intelligence agency wishing to minimize effort on eavesdropping or interrogations, but it is not immediately applicable to other realms.
Finally, it is obvious that choosing random scrutiny is a political decision, not just risk management. Yet we should decide whether to prefer it based on facts, fully understanding the risk.
* A good question is how do you find Pj in the first place, but this is completely out of scope.
"I don't think I would have the cojones to tell off a hoodlum carrying a six-foot bladed weapon expressly designed to disembowel or decapitate."
Maybe the clerk knew something about martial arts or bladed weapons, enough to see that the perp would be unable to carry out the threat. Lots of geeks have McJobs.
This is pure sophistry. If 80% of attackers wear green hats, but only 10% of passengers wear green hats, then diverting detection resource from investigating the 90% without green hats to the 10% that do wear green hats will raise the probability of detecting an attacker.
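Taking the comment's numbers at face value, Bayes' theorem quantifies the effect; the base rate below is an assumed figure, purely for illustration:

```python
base_rate = 1e-6                 # assumed fraction of passengers who are attackers
p_green_given_attacker = 0.8     # from the comment's numbers
p_green = 0.10                   # fraction of all passengers wearing green hats

# Bayes' theorem: P(attacker | green hat) = P(green | attacker) * P(attacker) / P(green)
p_attacker_given_green = p_green_given_attacker * base_rate / p_green

lift = p_attacker_given_green / base_rate  # concentration over the base rate
```

The green-hat signal does concentrate risk (an 8x lift here), but under any realistic base rate the absolute probability that a given green-hatted passenger is an attacker remains tiny.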
So long as people keep confusing artificial intelligence
with artificial substitutes for intelligence
things like this will continue to happen
Your accusation of sophistry is humorous when any rational attacker would simply remove his green hat.
A naive response would be to say, "but a person can't change their race like they can take off a hat." Two products make that false - Benoquin and blonde-in-a-bottle; that's all crazy Achmed the bomber needs to become Biff the upper-crust golden boy.
Of course, all that ignores your numbers: when 99.999% of people wearing green hats are not dangerous, and there is a cost to searching the wrong people (more so if those people feel part of a group being singled out), it should be obvious that you need a better discriminator than simply race.
While the news piece mentions racial profiling, the source article is only about "strong profiling". Racial criteria are only one option for profiling, and no one in his right mind would suggest doing it by skin tone and hair color. Height, weight, accent, and vocabulary, for example, are much harder to fake, and I assume they can be just as predictive. If we're talking about flights (again, only in the news piece, not the math article), destination, seat preference, and numerous other data can probably help. I have no idea what intelligence agencies are using, but I hope they're using empirical data from as wide a spectrum as they can gather. (And allow me to avoid the question of actually translating that into search patterns in real life - the agencies' efficiency is not my concern).
The problem I see here is applying an unrelated result to serve a cause. While the cause may be right (making equality visible), mangling scientific results for its sake is improper IMHO.
One should support the cause knowing the true implications and making a moral choice, not hiding behind inaccurate facts.
"I have no idea what intelligence agencies are using, but I hope they're using empirical data from as wide a spectrum as they can gather. (And allow me to avoid the question of actually translating that into search patterns in real life - the agencies' efficiency is not my concern)."
Actually, it SHOULD be your concern because you are providing the tax revenue to pay for it and it is, supposedly, being done to "protect" you.
"If we're talking about flights (again, only in news piece, not the math article), destination, seat preference, and numerous other data can probably help."
No. Again, because we are dealing with intelligent people. It is very easy for them to make a "dry run" or two to see whether they trip the detection criteria BEFORE they bring on any weapons.
And, again, when profiling is based upon criteria such as you describe, the weapons will simply be moved to someone who is the OPPOSITE of that profile.
When the attackers can actively and intelligently work to subvert the "profile" then profiling becomes less accurate than randomly searching passengers.
Profiling is not useless. It is WORSE than useless because it wastes resources.
"Height, weight, accent, vocabulary, for example, are much harder to fake and I assume can be just as predictive."
Huh? Three of those four are easily changed given a couple of years of dedication, and I think you are going to have a really hard time drawing any meaningful conclusions based on height.
"While the cause may be right (making equality visible), mangling scientific results for its sake is improper IMHO."
I really didn't say a word about "the right cause" - I said there was a cost and the cost goes up if people perceive that they are not being treated fairly.
Here is an example of the way that cost can reduce efficiency: a friend of mine is nominally Maronite Christian and is a 3rd- or 4th-generation Lebanese immigrant. He remembers that soon after 9/11, the priest at his church recommended that all parishioners give minimum cooperation to requests by the FBI and CIA to help them out in their 9/11 investigations, because it seemed to him (the priest) that Lebanese-Americans were being unjustly singled out at the time, and the less contact they had with government agencies, the less chance they had of becoming the victim of some procedural error. Consequently, no one from that church accepted government employment offers for Arabic-to-English translators, something that even today the government has a short supply of.
So nevermind the moral grounds of equality, I'm talking about the cost to an agency's own effectiveness.
Using "dry runs" increases potential plane bombers' operating costs; hence, assuming they are limited in resources (like any other group on earth), it reduces their capability. Furthermore, I specifically stayed out of the question of how to do good profiling. From a mathematical viewpoint, just adjust the probabilities to include dry runs in your model. Sure, it will flatten the probability rates, but it still does not contradict the fact that profiling can help. Finally, I agree that bad profiling can be worse than useless, but that depends solely on the profiling accuracy and is not a consequence of a profiling-based polling system. (Whether it applies to "real life", given how good or poor our current profiling mechanisms are, is hard to tell, since I do not know how those profiles are made.)
If it takes "years of dedication" to fool profiling, it is certainly good. It filters out most occasional participants in the activity you wish to circumvent, giving you a long time to gather information.
Being of North African heritage myself, I can sympathize with your Lebanese friend, but I can hardly see the connection here. You simply add another cost to the list of disadvantages of using racial profiling. I never said I support it, nor did I oppose it. I merely said that when weighing the different costs, one must accurately estimate the benefit of using proper profiling. I assumed it goes without saying that one must do the same for the disadvantages. Alienating groups of people, actually increasing fear at public locations, direct monetary costs, and public time waste are only a few examples. None of that changes the fact that using profiling information can increase the probability of catching malefactors within a limited series of checks (given that the profiling is accurate, and including counter-profiling measures).
While this article, and several commentators here, have brought up many interesting points...I think a larger problem with terrorist profiling is being overlooked, especially in regards to airport security.
The fact is that it's a fantasy that you can build any sort of accurate profile of a terrorist, racial or not. Real terrorism in the West is simply too rare to begin to draw any sort of real conclusions, and while recently we've paid a lot of attention to Muslim terrorists, history pretty much proves they aren't the only people hijacking airplanes.
Several people brought up police profiling as "proof" that the idea works. But the difference is that investigative profiling has a much larger sample size (since most crimes happen far more often than terrorist attacks do) and is an attempt to solve a crime that already happened. You'll notice that police don't use profiling to predict who the next serial killer WILL be... and there is a reason for that.
The real problem is that, mostly because of crappy journalism and sensationalist TV shows like '24', people are convinced that their country is simply swarming with bad guys just waiting to kill them in their sleep. So when we're discussing profiling, it's easy to forget that during the vast majority of days at the average airport...there are going to be ZERO terrorists to catch. Lack of effectiveness of profiling aside, I think the negative impact can't be dismissed as easily as most people would like. We have to live with those side-effects every single day, while any benefits we might see would almost never happen.
"Using "dry runs" increases potential plane bombers operation costs hence assuming they are limited in resources (like any other group on earth) reduces their capability."
Your usage of the word "limited" is meaningless in this context. Everything is "limited". They also have a "limited" amount of time before they die of old age.
Whether the resources are "limited" or not ONLY matters if the limitations actually affect their research. And considering that the price of a plane ticket can be under $100 ...
"Furthermore, I specifically stayed out of the question of how to do good profiling."
And I specifically pointed out that it doesn't matter how "good" the profiling is BECAUSE the people being profiled are AWARE that they are being profiled and are able to take countermeasures. Such as the terrorist having his Chinese girlfriend bring the bomb on board (unknowingly) in HER luggage.
The bomb gets on board because the INTELLIGENT terrorist used the profiling to subvert the profiling.
But random searching would have had a chance of finding the bomb.
Here, I think that this might clear up some of the confusion on this issue.
It isn't about ONE goal. It is about TWO goals.
#1. Identifying terrorists.
#2. Preventing weapons and bombs from being transported onto the plane.
Those two goals are NOT identical. Weapons and bombs can be transported by non-terrorists.
And terrorists can board a plane WITHOUT carrying a bomb or weapons.
Even 100% perfect racial profiling for terrorists (which is impossible) will do NOTHING about #2. Yet the end result could be a plane exploding in the air.
"If it takes "years of dedication" to fool profiling it is certainly good."
No, historically that's incorrect. The 9/11 terrorists spent more than a "couple years of dedication" to pull off what they did. Throwing in gym sessions and speech lessons during that same period of planning is not a significant cost. "Casual" terrorism, especially as practiced by foreigners, has not been a problem in the USA - at least not the kind of problem that promoters of "racial profiling" generally target.
@ Brandioch; You wrote"And I specifically pointed out that it doesn't matter how "good" the profiling is BECAUSE the people being profiled are AWARE that they are being profiled and are able to take countermeasures. Such as the terrorist having his Chinese girlfriend bring the bomb on board (unknowingly) in HER luggage."
Having someone else unknowingly bring the unlawful item onboard isn't really foolproof, is it? You should read up on El Al's profilers and what they've accomplished. Such as the British girl who was unknowingly carrying her Arab husband's bomb onboard... and the profilers still found it.
Further... "The bomb gets on board because the INTELLIGENT terrorist used the profiling to subvert the profiling."
It's not really about intelligence. You can be as smart as you want, there's still no guarantee that you will make it past the profiler(s). Profiling, in a sense, is a highly developed form of the random search, it's just that they search your head, and it's not all that random...
"Having someone else unknowingly bring the unlawful item onboard isn't really foolproof, is it."
If the process was foolproof there wouldn't be a plane left in the sky. Read on for enlightenment.
"Profiling, in a sense, is a highly developed form of the random search, it's just that they search your head, and it's not all that random..."
Yeah. You go through a lot of tinfoil, don't you?
Meanwhile, here's a link to the story that appears to be what you were referring to. Too bad that it contradicts your bit about "profiling". The bomb was caught through regular security screening.
What caught the bomb was El Al's focus on item #2. And that is because El Al understands the difference.
But you can believe that they were searching her brain for something that she didn't know she was carrying if you want to.
@Brandioch Conner @Sommerfeldt
Since we are trying to establish rational behaviour in using (or not using) profiling, relying on a specific example doesn't seem like the best strategy. Especially since the events we're dealing with have strong intrinsic randomness.
Still, allow me to doubt the conclusion that profiling was not used during this famous capture. First, had she had an Israeli passport, she would probably not have gone through as much scrutiny as she did. Furthermore, the Hebrew descriptions of the same incident reveal a different version. They claim profiling was the basis for the intensive baggage search she received.
See http://he.wikipedia.org/wiki/... and http://www.shabak.gov.il/heritage/affairs/Pages/... (you can use google translation but it is unfortunately very poor). Anyhow, trusting or not my Hebrew reading proficiency, judging the case by a single incident is plain wrong.
If we are already doing random searches, and that catches girlfriends with bombs, why don't we further reduce our pool of searches with profiling in addition to the random ones? As keeps getting mentioned, limited resources must be applied efficiently. Oh and by the way, I'll bet we're not letting everyone know what all our profiling data is, are we? That would be silly.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.