There is a great discussion about profiling going on in the comments to the previous post. To help, here is what I wrote on the subject in Beyond Fear (pp. 133-7):
Good security has people in charge. People are resilient. People can improvise. People can be creative. People can develop on-the-spot solutions. People can detect attackers who cheat, and can attempt to maintain security despite the cheating. People can detect passive failures and attempt to recover. People are the strongest point in a security process. When a security system succeeds in the face of a new or coordinated or devastating attack, it’s usually due to the efforts of people.
On 14 December 1999, Ahmed Ressam tried to enter the U.S. by ferryboat from Victoria, British Columbia. In the trunk of his car, he had a suitcase bomb. His plan was to drive to Los Angeles International Airport, put his suitcase on a luggage cart in the terminal, set the timer, and then leave. The plan would have worked had someone not been vigilant.
Ressam had to clear customs before boarding the ferry. He had fake ID, in the name of Benni Antoine Noris, and the computer cleared him based on this ID. He was allowed to go through after a routine check of his car’s trunk, even though he was wanted by the Canadian police. On the other side of the Strait of Juan de Fuca, at Port Angeles, Washington, Ressam was approached by U.S. customs agent Diana Dean, who asked some routine questions and then decided that he looked suspicious. He was fidgeting, sweaty, and jittery. He avoided eye contact. In Dean’s own words, he was acting “hinky.” More questioning — there was no one else crossing the border, so two other agents got involved — and more hinky behavior. Ressam’s car was eventually searched, and he was finally discovered and captured. It wasn’t any one thing that tipped Dean off; it was everything encompassed in the slang term “hinky.” But the system worked. The reason there wasn’t a bombing at LAX around Christmas in 1999 was that a knowledgeable person was in charge of security and paying attention.
There’s a dirty word for what Dean did that chilly afternoon in December, and it’s profiling. Everyone does it all the time. When you see someone lurking in a dark alley and change your direction to avoid him, you’re profiling. When a storeowner sees someone furtively looking around as she fiddles inside her jacket, that storeowner is profiling. People profile based on someone’s dress, mannerisms, tone of voice … and yes, also on their race and ethnicity. When you see someone running toward you on the street with a bloody ax, you don’t know for sure that he’s a crazed ax murderer. Perhaps he’s a butcher who’s actually running after the person next to you to give her the change she forgot. But you’re going to make a guess one way or another. That guess is an example of profiling.
To profile is to generalize. It’s taking characteristics of a population and applying them to an individual. People naturally have an intuition about other people based on different characteristics. Sometimes that intuition is right and sometimes it’s wrong, but it’s still a person’s first reaction. How good this intuition is as a countermeasure depends on two things: how accurate the intuition is and how effective it is when it becomes institutionalized or when the profile characteristics become commonplace.
One of the ways profiling becomes institutionalized is through computerization. Instead of Diana Dean looking someone over, a computer looks the profile over and gives it some sort of rating. Generally profiles with high ratings are further evaluated by people, although sometimes countermeasures kick in based on the computerized profile alone. This is, of course, more brittle. The computer can profile based only on simple, easy-to-assign characteristics: age, race, credit history, job history, et cetera. Computers don’t get hinky feelings. Computers also can’t adapt the way people can.
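To make that brittleness concrete, here is a minimal sketch of what a computerized profile might look like. Every attribute, weight, and threshold below is hypothetical, invented purely for illustration; the structural point is that the system can only score characteristics someone thought to encode ahead of time, and it has no input for “hinky.”

```python
# Hypothetical rule-based profiling score -- illustrative only.
# All attributes, weights, and the threshold are made up; a real
# system would differ, but the weakness is the same: it reacts
# only to simple, pre-assigned characteristics.

from dataclasses import dataclass

@dataclass
class Traveler:
    age: int
    one_way_ticket: bool
    paid_cash: bool
    prior_record: bool

def risk_score(t: Traveler) -> int:
    """Sum fixed weights for the handful of encoded characteristics."""
    score = 0
    if t.one_way_ticket:
        score += 2
    if t.paid_cash:
        score += 2
    if t.prior_record:
        score += 3
    if t.age < 30:
        score += 1
    return score

FLAG_THRESHOLD = 5  # above this, a human takes a second look

def flagged(t: Traveler) -> bool:
    return risk_score(t) > FLAG_THRESHOLD

# A fidgeting, sweaty traveler whose paperwork matches none of the
# encoded rules scores zero -- the computer never notices anything.
nervous_but_clean = Traveler(age=32, one_way_ticket=False,
                             paid_cash=False, prior_record=False)
print(risk_score(nervous_but_clean), flagged(nervous_but_clean))  # 0 False
```

The sketch also shows why such a system fails in both directions: attackers can arrange their simple attributes to stay under the threshold, while honest people who happen to match them get flagged again and again.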
Profiling works better if the characteristics profiled are accurate. If erratic driving is a good indication that the driver is intoxicated, then that’s a good characteristic for a police officer to use to determine who he’s going to pull over. If furtively looking around a store or wearing a coat on a hot day is a good indication that the person is a shoplifter, then those are good characteristics for a store owner to pay attention to. But if wearing baggy trousers isn’t a good indication that the person is a shoplifter, then the store owner is going to spend a lot of time paying undue attention to honest people with lousy fashion sense.
In common parlance, the term “profiling” doesn’t refer to these characteristics. It refers to profiling based on characteristics like race and ethnicity, and institutionalized profiling based on those characteristics alone. During World War II, the U.S. rounded up over 100,000 people of Japanese origin who lived on the West Coast and locked them in camps (prisons, really). That was an example of profiling. Israeli border guards spend a lot more time scrutinizing Arab men than Israeli women; that’s another example of profiling. In many U.S. communities, police have been known to stop and question people of color driving around in wealthy white neighborhoods (commonly referred to as “DWB” — Driving While Black). In all of these cases you might possibly be able to argue some security benefit, but the trade-offs are enormous: Honest people who fit the profile can get annoyed, or harassed, or arrested, when they’re assumed to be attackers.
For democratic governments, this is a major problem. It’s just wrong to segregate people into “more likely to be attackers” and “less likely to be attackers” based on race or ethnicity. It’s wrong for the police to pull a car over just because its black occupants are driving in a rich white neighborhood. It’s discrimination.
But people make bad security trade-offs when they’re scared, which is why we saw Japanese internment camps during World War II, and why there is so much discrimination against Arabs in the U.S. going on today. That doesn’t make it right, and it doesn’t make it effective security. Writing about the Japanese internment, for example, a 1983 commission reported that the causes of the incarceration were rooted in “race prejudice, war hysteria, and a failure of political leadership.” But just because something is wrong doesn’t mean that people won’t continue to do it.
Ethics aside, institutionalized profiling fails because real attackers are so rare: Active failures will be much more common than passive failures. The great majority of people who fit the profile will be innocent. At the same time, some real attackers are going to deliberately try to sneak past the profile. During World War II, a Japanese American saboteur could try to evade imprisonment by pretending to be Chinese. Similarly, an Arab terrorist could dye his hair blond, practice an American accent, and so on.
Profiling can also blind you to threats outside the profile. If U.S. border guards stop and search everyone who’s young, Arab, and male, they’re not going to have the time to stop and search all sorts of other people, no matter how hinky they might be acting. On the other hand, if the attackers are of a single race or ethnicity, profiling is more likely to work (although the ethics are still questionable). It makes real security sense for El Al to spend more time investigating young Arab males than it does for them to investigate Israeli families. In Vietnam, American soldiers never knew which local civilians were really combatants; sometimes killing all of them was the security solution they chose.
If a lot of this discussion is abhorrent, as it probably should be, it’s the trade-offs in your head talking. It’s perfectly reasonable to decide not to implement a countermeasure not because it doesn’t work, but because the trade-offs are too great. Locking up every Arab-looking person will reduce the potential for Muslim terrorism, but no reasonable person would suggest it. (It’s an example of “winning the battle but losing the war.”) In the U.S., there are laws that prohibit police profiling by characteristics like ethnicity, because we believe that such security measures are wrong (and not simply because we believe them to be ineffective).
Still, no matter how much a government makes it illegal, profiling does occur. It occurs at an individual level, at the level of Diana Dean deciding which cars to wave through and which ones to investigate further. She profiled Ressam based on his mannerisms and his answers to her questions. He was Algerian, and she certainly noticed that. However, this was before 9/11, and the reports of the incident clearly indicate that she thought he was a drug smuggler; ethnicity probably wasn’t a key profiling factor in this case. In fact, this is one of the most interesting aspects of the story. That intuitive sense that something was amiss worked beautifully, even though everybody made a wrong assumption about what was wrong. Human intuition detected a completely unexpected kind of attack. Humans will beat computers at hinkiness-detection for many decades to come.
And done correctly, this intuition-based sort of profiling can be an excellent security countermeasure. Dean needed to have the training and the experience to profile accurately and properly, without stepping over the line and profiling illegally. The trick here is to make sure perceptions of risk match the actual risks. If those responsible for security profile based on superstition and wrong-headed intuition, or by blindly following a computerized profiling system, profiling won’t work at all. And even worse, it actually can reduce security by blinding people to the real threats. Institutionalized profiling can ossify a mind, and a person’s mind is the most important security countermeasure we have.