Entries Tagged "profiling"


What the Terrorists Want

On Aug. 16, two men were escorted off a plane headed for Manchester, England, because some passengers thought they looked either Asian or Middle Eastern, might have been talking Arabic, wore leather jackets, and looked at their watches—and the passengers refused to fly with them on board. The men were questioned for several hours and then released.

On Aug. 15, an entire airport terminal was evacuated because someone’s cosmetics triggered a false positive for explosives. The same day, a Muslim man was removed from an airplane in Denver for reciting prayers. The Transportation Security Administration decided that the flight crew overreacted, but he still had to spend the night in Denver before flying home the next day. The next day, a Port of Seattle terminal was evacuated because a couple of dogs gave a false alarm for explosives.

On Aug. 19, a plane made an emergency landing in Tampa, Florida, after the crew became suspicious because two of the lavatory doors were locked. The plane was searched, but nothing was found. Meanwhile, a man who tampered with a bathroom smoke detector on a flight to San Antonio was cleared of terrorism, but only after having his house searched.

On Aug. 16, a woman suffered a panic attack and became violent on a flight from London to Washington, so the plane was escorted to the Boston airport by fighter jets. “The woman was carrying hand cream and matches but was not a terrorist threat,” said the TSA spokesman after the incident.

And on Aug. 18, a plane flying from London to Egypt made an emergency landing in Italy when someone found a bomb threat scrawled on an air sickness bag. Nothing was found on the plane, and no one knows how long the note was on board.

I’d like everyone to take a deep breath and listen for a minute.

The point of terrorism is to cause terror, sometimes to further a political goal and sometimes out of sheer hatred. The people terrorists kill are not the targets; they are collateral damage. And blowing up planes, trains, markets or buses is not the goal; those are just tactics. The real targets of terrorism are the rest of us: the billions of us who are not killed but are terrorized because of the killing. The real point of terrorism is not the act itself, but our reaction to the act.

And we’re doing exactly what the terrorists want.

We’re all a little jumpy after the recent arrest of 23 terror suspects in Great Britain. The men were reportedly plotting a liquid-explosive attack on airplanes, and both the press and politicians have been trumpeting the story ever since.

In truth, it’s doubtful that their plan would have succeeded; chemists have been debunking the idea since it became public. Certainly the suspects were a long way off from trying: None had bought airline tickets, and some didn’t even have passports.

Regardless of the threat, from the would-be bombers’ perspective, the explosives and planes were merely tactics. Their goal was to cause terror, and in that they’ve succeeded.

Imagine for a moment what would have happened if they had blown up 10 planes. There would be canceled flights, chaos at airports, bans on carry-on luggage, world leaders talking about tough new security measures, political posturing and all sorts of false alarms as jittery people panicked. To a lesser degree, that’s basically what’s happening right now.

Our politicians help the terrorists every time they use fear as a campaign tactic. The press helps every time it writes scare stories about the plot and the threat. And if we’re terrified, and we share that fear, we help. All of these actions intensify and repeat the terrorists’ actions, and increase the effects of their terror.

(I am not saying that the politicians and press are terrorists, or that they share any of the blame for terrorist attacks. I’m not that stupid. But the subject of terrorism is more complex than it appears, and understanding its various causes and effects is vital for understanding how to best deal with it.)

The implausible plots and false alarms actually hurt us in two ways. Not only do they increase the level of fear, but they also waste time and resources that could be better spent fighting the real threats and increasing actual security. I’ll bet the terrorists are laughing at us.

Another thought experiment: Imagine for a moment that the British government arrested the 23 suspects without fanfare. Imagine that the TSA and its European counterparts didn’t engage in pointless airline-security measures like banning liquids. And imagine that the press didn’t write about it endlessly, and that the politicians didn’t use the event to remind us all how scared we should be. If we’d reacted that way, then the terrorists would have truly failed.

It’s time we calm down and fight terror with antiterror. This does not mean that we simply roll over and accept terrorism. There are things our government can and should do to fight terrorism, most of them involving intelligence and investigation—and not focusing on specific plots.

But our job is to remain steadfast in the face of terror, to refuse to be terrorized. Our job is to not panic every time two Muslims stand together checking their watches. There are approximately 1 billion Muslims in the world, a large percentage of them not Arab, and about 320 million Arabs in the Middle East, the overwhelming majority of them not terrorists. Our job is to think critically and rationally, and to ignore the cacophony of other interests trying to use terrorism to advance political careers or increase a television show’s viewership.

The surest defense against terrorism is to refuse to be terrorized. Our job is to recognize that terrorism is just one of the risks we face, and not a particularly common one at that. And our job is to fight those politicians who use fear as an excuse to take away our liberties and promote security theater that wastes money and doesn’t make us any safer.

This essay originally appeared on Wired.com.

EDITED TO ADD (3/24): Here’s another incident.

EDITED TO ADD (3/29): There have been many more incidents since I wrote this—all false alarms. I’ve stopped keeping a list.

Posted on August 24, 2006 at 7:08 AM

Behavioral Profiling

I’ve long been a fan of behavioral profiling, as opposed to racial profiling. The U.S. has been testing such a program. While there are legitimate fears that this could end up being racial profiling in disguise, I think this kind of thing is the right idea. (Although I am less impressed with this kind of thing.)

EDITED TO ADD (8/18): Funny cartoon on profiling.

There’s a moral here. Profiling is something we all do, and we do it because—for the most part—it works. But when you’re dealing with an intelligent adversary, as opposed to the cat, you invite that adversary to deliberately try to subvert your profiling system. The effectiveness of any profiling system is directly related to how likely it is to be subverted.

Posted on August 18, 2006 at 1:21 PM

Good Example of Smart Profiling

In Beyond Fear, I wrote about profiling (reprinted here). I talked a lot about how smart behavioral-based profiling is much more effective than dumb characteristic-based profiling, and how well-trained people are much better than computers.

The story I used was about how U.S. customs agent Diana Dean caught Ahmed Ressam in 1999. Here’s another story:

An England football shirt gave away a Senegalese man attempting to enter Cyprus on a forged French passport, police on the Mediterranean island said on Monday.

Suspicions were aroused when the man appeared at a checkpoint supervising crossings from the Turkish Cypriot north to the Greek Cypriot south of the divided island, wearing the England shirt and presenting a French passport.

“Being a football fan, the officer found it highly unlikely that a Frenchman would want to wear an England football jersey,” a police source said.

“That was his first suspicion prior to the proper check on the passport, which turned out to be a fake,” said the source.

That’s just not the kind of thing you’re going to get a computer to pick up on, at least not until artificial intelligence actually produces a working brain.

Posted on July 27, 2006 at 12:46 PM

Patrick Smith on Airline Security

Patrick Smith writes the “Ask the Pilot” column for Salon. He’s written two very good posts on airline security, one about how Israel’s system won’t work in the U.S., and the other about profiling:

…here’s a more useful quiz:

  • In 1985, Air India Flight 182 was blown up over the Atlantic by:

    a. Muslim male extremists mostly between the ages of 17 and 40
    b. Bill O’Reilly
    c. The Mormon Tabernacle Choir
    d. Indian Sikh extremists, in retaliation for the Indian Army’s attack on the Golden Temple shrine in Amritsar

  • In 1986, who attempted to smuggle three pounds of explosives onto an El Al jetliner bound from London to Tel Aviv?

    a. Muslim male extremists mostly between the ages of 17 and 40
    b. Michael Smerconish
    c. Bob Mould
    d. A pregnant Irishwoman named Anne Murphy

  • In 1962, in the first-ever successful sabotage of a commercial jet, a Continental Airlines 707 was blown up with dynamite over Missouri by:

    a. Muslim male extremists mostly between the ages of 17 and 40
    b. Ann Coulter
    c. Henry Rollins
    d. Thomas Doty, a 34-year-old American passenger, as part of an insurance scam

  • In 1994, who nearly succeeded in skyjacking a DC-10 and crashing it into the Federal Express Corp. headquarters?

    a. Muslim male extremists mostly between the ages of 17 and 40
    b. Michelle Malkin
    c. Charlie Rose
    d. Auburn Calloway, an off-duty FedEx employee and resident of Memphis, Tenn.

  • In 1974, who stormed a Delta Air Lines DC-9 at Baltimore-Washington Airport, intending to crash it into the White House, and shot both pilots?

    a. Muslim male extremists mostly between the ages of 17 and 40
    b. Joe Scarborough
    c. Spalding Gray
    d. Samuel Byck, an unemployed tire salesman from Philadelphia

The answer, in all cases, is D.

Racial profiling doesn’t work against terrorism, because terrorists don’t fit any racial profile.

Posted on June 19, 2006 at 7:22 AM

Smart Profiling from the DHS

About time:

Here’s how it works: Select TSA employees will be trained to identify suspicious individuals who raise red flags by exhibiting unusual or anxious behavior, which can be as simple as changes in mannerisms, excessive sweating on a cool day, or changes in the pitch of a person’s voice. Racial or ethnic factors are not a criterion for singling out people, TSA officials say. Those who are identified as suspicious will be examined more thoroughly; for some, the agency will bring in local police to conduct face-to-face interviews and perhaps run the person’s name against national criminal databases and determine whether any threat exists. If such inquiries turn up other issues, such as travel to countries with terrorist connections, police officers can pursue the questioning or alert Federal counterterrorism agents. And of course the full retinue of baggage x-rays, magnetometers and other checks for weapons will continue.

Posted on May 23, 2006 at 6:20 AM

Data Mining for Terrorists

In the post-9/11 world, there’s much focus on connecting the dots. Many believe that data mining is the crystal ball that will enable us to uncover future terrorist plots. But even in the most wildly optimistic projections, data mining isn’t tenable for that purpose. We’re not trading privacy for security; we’re giving up privacy and getting no security in return.

Most people first learned about data mining in November 2002, when news broke about a massive government data mining program called Total Information Awareness. The basic idea was as audacious as it was repellent: suck up as much data as possible about everyone, sift through it with massive computers, and investigate patterns that might indicate terrorist plots. Americans across the political spectrum denounced the program, and in September 2003, Congress eliminated its funding and closed its offices.

But TIA didn’t die. According to The National Journal, it just changed its name and moved inside the Defense Department.

This shouldn’t be a surprise. In May 2004, the General Accounting Office published a report that listed 122 different federal government data mining programs that used people’s personal information. This list didn’t include classified programs, like the NSA’s eavesdropping effort, or state-run programs like MATRIX.

The promise of data mining is compelling, and convinces many. But it’s wrong. We’re not going to find terrorist plots through systems like this, and we’re going to waste valuable resources chasing down false alarms. To understand why, we have to look at the economics of the system.

Security is always a trade-off, and for a system to be worthwhile, the advantages have to be greater than the disadvantages. A national security data mining program is going to find some percentage of real attacks, and some percentage of false alarms. If the benefits of finding and stopping those attacks outweigh the cost—in money, liberties, etc.—then the system is a good one. If not, then you’d be better off spending that cost elsewhere.

Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms. Credit card fraud is one of data mining’s success stories: all credit card companies data mine their transaction databases, looking for spending patterns that indicate a stolen card. Many credit card thieves share a pattern—purchase expensive luxury goods, purchase things that can be easily fenced, etc.—and data mining systems can minimize the losses in many cases by shutting down the card. In addition, the cost of false alarms is only a phone call to the cardholder asking him to verify a couple of purchases. The cardholders don’t even resent these phone calls—as long as they’re infrequent—so the cost is just a few minutes of operator time.
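The credit card case can be caricatured in a few lines. This is only a toy sketch of profile-based detection; the five-times-average rule and the sample figures are invented for illustration, not how any real issuer’s models work:

```python
# Toy profile-based fraud check: flag a charge that deviates sharply
# from the card's own spending history. The threshold is invented.
def suspicious(history, charge, multiplier=5.0):
    """Return True if `charge` is far above the card's average charge."""
    if not history:
        return False  # no baseline yet, nothing to compare against
    average = sum(history) / len(history)
    return charge > multiplier * average

print(suspicious([20, 35, 15, 40], 900))  # outlier vs. a ~$27 average -> True
print(suspicious([20, 35, 15, 40], 60))   # ordinary purchase -> False
```

Even in this caricature, the contrast with terrorism holds: the rule works because fraudulent spending has a describable shape and a cheap false-alarm remedy (a phone call), neither of which terrorism offers.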

Terrorist plots are different. There is no well-defined profile, and attacks are very rare. Taken together, these facts mean that data mining systems won’t uncover any terrorist plots until they are very accurate, and that even very accurate systems will be so flooded with false alarms that they will be useless.

All data mining systems fail in two different ways: false positives and false negatives. A false positive is when the system identifies a terrorist plot that really isn’t one. A false negative is when the system misses an actual terrorist plot. Depending on how you “tune” your detection algorithms, you can err on one side or the other: you can increase the number of false positives to ensure that you are less likely to miss an actual terrorist plot, or you can reduce the number of false positives at the expense of missing terrorist plots.

To reduce both those numbers, you need a well-defined profile. And that’s a problem when it comes to terrorism. In hindsight, it was really easy to connect the 9/11 dots and point to the warning signs, but it’s much harder before the fact. Certainly, there are common warning signs that many terrorist plots share, but each is unique, as well. The better you can define what you’re looking for, the better your results will be. Data mining for terrorist plots is going to be sloppy, and it’s going to be hard to find anything useful.

Data mining is like searching for a needle in a haystack. There are 900 million credit cards in circulation in the United States. According to the FTC September 2003 Identity Theft Survey Report, about 1% (10 million) cards are stolen and fraudulently used each year. Terrorism is different. There are trillions of connections between people and events—things that the data mining system will have to “look at”—and very few plots. This rarity makes even accurate identification systems useless.

Let’s look at some numbers. We’ll be optimistic. We’ll assume the system has a 1 in 100 false positive rate (99% accurate), and a 1 in 1,000 false negative rate (99.9% accurate).

Assume one trillion possible indicators to sift through: that’s about ten events—e-mails, phone calls, purchases, web surfings, whatever—per person in the U.S. per day. Also assume that 10 of them are actually terrorists plotting.

This unrealistically accurate system will generate one billion false alarms for every real terrorist plot it uncovers. Every day of every year, the police will have to investigate 27 million potential plots in order to find the one real terrorist plot per month. Raise that false-positive accuracy to an absurd 99.9999% and you’re still chasing 2,750 false alarms per day—but that will inevitably raise your false negatives, and you’re going to miss some of those ten real plots.
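The arithmetic behind these figures is easy to verify. The sketch below simply restates the essay’s illustrative assumptions (one trillion events per year, ten real plots, the two accuracy figures) in code:

```python
# Base-rate arithmetic for a hypothetical terrorist-detection system,
# using the essay's illustrative numbers.
EVENTS_PER_YEAR = 10**12  # ~10 events/person/day across the U.S. population
REAL_PLOTS = 10           # assumed real plots hidden among those events

def false_alarm_load(false_positive_rate):
    """Return (false alarms per year, per day, per real plot found)."""
    per_year = (EVENTS_PER_YEAR - REAL_PLOTS) * false_positive_rate
    return per_year, per_year / 365, per_year / REAL_PLOTS

_, per_day, per_plot = false_alarm_load(0.01)   # "99% accurate"
print(f"{per_day:,.0f} false alarms per day")   # ~27 million
print(f"{per_plot:,.0f} per real plot")         # ~1 billion

_, per_day, _ = false_alarm_load(1e-6)          # "99.9999% accurate"
print(f"{per_day:,.0f} false alarms per day even then")  # ~2,700
```

Note that the false-negative rate never even enters the calculation; the false positives alone are enough to sink the system.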

This isn’t anything new. In statistics, it’s called the “base rate fallacy,” and it applies in other domains as well. For example, even highly accurate medical tests are useless as diagnostic tools if the incidence of the disease is rare in the general population. Terrorist attacks are also rare, so any “test” is going to result in an endless stream of false alarms.
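The medical version of the base rate fallacy takes two lines of Bayes’ theorem to demonstrate. The prevalence and accuracy figures below are invented round numbers for illustration, not from any real test:

```python
def posterior(prevalence, sensitivity, false_positive_rate):
    """P(condition | positive test), by Bayes' theorem."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# A 99%-sensitive, 99%-specific test for a 1-in-10,000 condition:
p = posterior(0.0001, 0.99, 0.01)
print(f"{p:.2%} of positive results are true cases")  # under 1%
```

Even with a test that is wrong only 1% of the time, the rarity of the condition means almost every positive result is a false alarm.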

This is exactly the sort of thing we saw with the NSA’s eavesdropping program: the New York Times reported that the computers spat out thousands of tips per month. Every one of them turned out to be a false alarm.

And the cost was enormous: not just the cost of the FBI agents running around chasing dead-end leads instead of doing things that might actually make us safer, but also the cost in civil liberties. The fundamental freedoms that make our country the envy of the world are valuable, and not something that we should throw away lightly.

Data mining can work. It helps Visa keep the costs of fraud down, just as it helps Amazon.com show me books that I might want to buy, and Google show me advertising I’m more likely to be interested in. But these are all instances where the cost of false positives is low—a phone call from a Visa operator, or an uninteresting ad—and in systems that have value even if there is a high number of false negatives.

Finding terrorism plots is not a problem that lends itself to data mining. It’s a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.

This essay originally appeared on Wired.com.

Posted on March 9, 2006 at 7:44 AM

A "Typical" Terrorist

A simply horrible lead sentence in a Manila Times story:

If you see a man aged 17 to 35, wearing a ball cap, carrying a backpack, clutching a cellular phone and acting uneasily, chances are he is a terrorist.

Let’s see: Approximately 4.5 million people use the New York City subway every day. Assume that the above profile fits 1% of them. Does that mean that there are 45,000 terrorists riding the New York City subways every single day? Seems unlikely.

The rest of the article gets better, but still….

At least that is how the National Capital Regional Police Office (NCRPO) has “profiled” a terrorist.

Sr. Supt. Felipe Rojas Jr., chief of the NCRPO Regional Intelligence and Investigation Division (RIID), said Friday that his group came up with the profile based on the descriptions of witnesses in previous bombings.

Rojas said the US Federal Bureau of Investigation has a similar terrorist profile.

But a source in the intelligence community derided the profile, calling it stereotyped and inaccurate.

The police profile does not apply to the female bombers who the military said were being trained for suicide missions in Metro Manila.

Posted on October 20, 2005 at 11:47 AM

Secure Flight News

The TSA is not going to use commercial databases in its initial roll-out of Secure Flight, its airline screening program that matches passengers with names on the Watch List and No-Fly List. I don’t believe for a minute that they’re shelving plans to use commercial data permanently, but at least they’re delaying the process.

In other news, the report (also available here, here, and here) of the Secure Flight Privacy/IT Working Group is public. I was a member of that group, but honestly, I didn’t do any writing for the report. I had given up on the process, sick of not being able to get any answers out of TSA, and believed that the report would end up in somebody’s desk drawer, never to be seen again. I was stunned when I learned that the ASAC made the report public.

There’s a lot of stuff in the report, but I’d like to quote the section that outlines the basic questions that the TSA was unable to answer:

The SFWG found that TSA has failed to answer certain key questions about Secure Flight: First and foremost, TSA has not articulated what the specific goals of Secure Flight are. Based on the limited test results presented to us, we cannot assess whether even the general goal of evaluating passengers for the risk they represent to aviation security is a realistic or feasible one or how TSA proposes to achieve it. We do not know how much or what kind of personal information the system will collect or how data from various sources will flow through the system.

Until TSA answers these questions, it is impossible to evaluate the potential privacy or security impact of the program, including:

  • Minimizing false positives and dealing with them when they occur.
  • Misuse of information in the system.
  • Inappropriate or illegal access by persons with and without permissions.
  • Preventing use of the system and information processed through it for purposes other than airline passenger screening.

The following broadly defined questions represent the critical issues we believe TSA must address before we or any other advisory body can effectively evaluate the privacy and security impact of Secure Flight on the public.

  1. What is the goal or goals of Secure Flight? The TSA is under a Congressional mandate to match domestic airline passenger lists against the consolidated terrorist watch list. TSA has failed to specify with consistency whether watch list matching is the only goal of Secure Flight at this stage. The Secure Flight Capabilities and Testing Overview, dated February 9, 2005 (a non-public document given to the SFWG), states in the Appendix that the program is not looking for unknown terrorists and has no intention of doing so. On June 29, 2005, Justin Oberman (Assistant Administrator, Secure Flight/Registered Traveler) testified to a Congressional committee that another goal proposed for Secure Flight is its use to establish “mechanisms for…violent criminal data vetting.” Finally, TSA has never been forthcoming about whether it has an additional, implicit goal: the tracking of terrorism suspects (whose presence on the terrorist watch list does not necessarily signify intention to commit violence on a flight).

    While the problem of failing to establish clear goals for Secure Flight at a given point in time may arise from not recognizing the difference between program definition and program evolution, it is clearly an issue the TSA must address if Secure Flight is to proceed.

  2. What is the architecture of the Secure Flight system? The Working Group received limited information about the technical architecture of Secure Flight and none about how software and hardware choices were made. We know very little about how data will be collected, transferred, analyzed, stored or deleted. Although we are charged with evaluating the privacy and security of the system, we saw no statements of privacy policies and procedures other than Privacy Act notices published in the Federal Register for Secure Flight testing. No data management plan either for the test phase or the program as implemented was provided or discussed.
  3. Will Secure Flight be linked to other TSA applications? Linkage with other screening programs (such as Registered Traveler, Transportation Worker Identification and Credentialing (TWIC), and Customs and Border Patrol systems like U.S.-VISIT) that may operate on the same platform as Secure Flight is another aspect of the architecture and security question. Unanswered questions remain about how Secure Flight will interact with other vetting programs operating on the same platform; how it will ensure that its policies on data collection, use and retention will be implemented and enforced on a platform that also operates programs with significantly different policies in these areas; and how it will interact with the vetting of passengers on international flights.
  4. How will commercial data sources be used? One of the most controversial elements of Secure Flight has been the possible uses of commercial data. TSA has never clearly defined two threshold issues: what it means by “commercial data” and how it might use commercial data sources in the implementation of Secure Flight. TSA has never clearly distinguished among various possible uses of commercial data, which all have different implications.

    Possible uses of commercial data sometimes described by TSA include: (1) identity verification or authentication; (2) reducing false positives by augmenting passenger records indicating a possible match with data that could help distinguish an innocent passenger from someone on a watch list; (3) reducing false negatives by augmenting all passenger records with data that could suggest a match that would otherwise have been missed; (4) identifying sleepers, which itself includes: (a) identifying false identities; and (b) identifying behaviors indicative of terrorist activity. A fifth possibility has not been discussed by TSA: using commercial data to augment watch list entries to improve their fidelity. Assuming that identity verification is part of Secure Flight, what are the consequences if an identity cannot be verified with a certain level of assurance?

    It is important to note that TSA never presented the SFWG with the results of its commercial data tests. Until these test results are available and have been independently analyzed, commercial data should not be utilized in the Secure Flight program.

  5. Which matching algorithms work best? TSA never presented the SFWG with test results showing the effectiveness of algorithms used to match passenger names to a watch list. One goal of bringing watch list matching inside the government was to ensure that the best available matching technology was used uniformly. The SFWG saw no evidence that TSA compared different products and competing solutions. As a threshold matter, TSA did not describe to the SFWG its criteria for determining how the optimal matching solution would be determined. There are obvious and probably not-so-obvious tradeoffs between false positives and false negatives, but TSA did not explain how it reconciled these concerns.
  6. What is the oversight structure and policy for Secure Flight? TSA has not produced a comprehensive policy document for Secure Flight that defines oversight or governance responsibilities.
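The tradeoff flagged in point 5 shows up in even the crudest fuzzy name matcher. The sketch below uses the similarity ratio from Python’s standard difflib module; the names, watch list, and thresholds are all invented for illustration:

```python
from difflib import SequenceMatcher

def matches(name, watch_list, threshold):
    """Return watch-list entries whose similarity to `name` meets threshold."""
    return [entry for entry in watch_list
            if SequenceMatcher(None, name.lower(), entry.lower()).ratio() >= threshold]

watch_list = ["John Doe", "Jon Dough"]

# A loose threshold catches spelling variants, and innocent near-matches too:
print(matches("John Dow", watch_list, 0.80))   # flags "John Doe"
# A strict threshold quietly misses a plausible variant spelling:
print(matches("Jon Doh", watch_list, 0.95))    # flags nothing
```

Lowering the threshold trades missed variants (false negatives) for flagged innocents (false positives); without stated criteria for setting that dial, “which algorithm is best” is not even a well-posed question, which is the report’s complaint.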

The members of the working group, and the signatories to the report, are Martin Abrams, Linda Ackerman, James Dempsey, Edward Felten, Daniel Gallington, Lauren Gelman, Steven Lilenthal, Anna Slomovic, and myself.

My previous posts about Secure Flight, and my involvement in the working group, are here, here, here, here, here, and here.

And in case you think things have gotten better, there’s a new story about how the no-fly list cost a pilot his job:

Cape Air pilot Robert Gray said he feels like he’s living a nightmare. Two months after he sued the federal government for refusing to let him take flight training courses so he could fly larger planes, he said yesterday, his situation has only worsened.

When Gray showed up for work a couple of weeks ago, he said Cape Air told him the government had placed him on its no-fly list, making it impossible for him to do his job. Gray, a Belfast native and British citizen, said the government still won’t tell him why it thinks he’s a threat.

“I haven’t been involved in any kind of terrorism, and I never committed any crime,” said Gray, 35, of West Yarmouth. He said he has never been arrested and can’t imagine what kind of secret information the government is relying on to destroy his life.

Remember what the no-fly list is. It’s a list of people who are so dangerous that they can’t be allowed to board an airplane under any circumstances, yet so innocent that they can’t be arrested—even under the provisions of the PATRIOT Act.

EDITED TO ADD: The U.S. Department of Justice Inspector General released a report last month on Secure Flight, basically concluding that the costs were out of control, and that the TSA didn’t know how much the program would cost in the future.

Here’s an article about some of the horrible problems people who have mistakenly found themselves on the no-fly list have had to endure. And another on what you can do if you find yourself on a list.

EDITED TO ADD: EPIC has received a bunch of documents about continued problems with false positives.

Posted on September 26, 2005 at 7:14 AM

Secure Flight News

According to Wired News, the DHS is looking for someone in Congress to sponsor a bill that eliminates congressional oversight over the Secure Flight program.

The bill would allow them to go ahead with the program regardless of GAO’s assessment. (Current law requires them to meet ten criteria set by Congress; the most recent GAO report said that they did not meet nine of them.) The bill would allow them to use commercial data even though they have not demonstrated its effectiveness. (The DHS funding bill passed by both the House and the Senate prohibits them from using commercial data during passenger screening, because there have been absolutely no test results showing that it is effective.)

In this new bill, all that would be required to go ahead with Secure Flight would be for Secretary Chertoff to say so:

Additionally, the proposed changes would permit Secure Flight to be rolled out to the nation’s airports after Homeland Security chief Michael Chertoff certifies the program will be effective and not overly invasive. The current bill requires independent congressional investigators to make that determination.

Looks like the DHS, being unable to comply with the law, is trying to change it. This is a rogue program that needs to be stopped.

In other news, the TSA has deleted about three million personal records it used for Secure Flight testing. This seems like a good idea, but it prevents people from knowing what data the government had on them—in violation of the Privacy Act.

Civil liberties activist Bill Scannell says it’s difficult to know whether TSA’s decision to destroy records so swiftly is a housecleaning effort or something else.

“Is the TSA just such an incredibly efficient organization that they’re getting rid of things that are no longer needed?” Scannell said. “Or is this a matter of the destruction of evidence?”

Scannell says it’s a fair question to ask in light of revelations that the TSA already violated the Privacy Act last year when it failed to fully disclose the scope of its testing for Secure Flight and its collection of commercial data on individuals.

My previous essay on Secure Flight is here.

Posted on August 15, 2005 at 9:43 AM
