Entries Tagged "false positives"


NSA Eavesdropping Yields Dead Ends

All of that extra-legal NSA eavesdropping resulted in a whole lot of dead ends.

In the anxious months after the Sept. 11 attacks, the National Security Agency began sending a steady stream of telephone numbers, e-mail addresses and names to the F.B.I. in search of terrorists. The stream soon became a flood, requiring hundreds of agents to check out thousands of tips a month.

But virtually all of them, current and former officials say, led to dead ends or innocent Americans.

Surely this can’t be a surprise to anyone? And as I’ve been arguing for years, programs like this divert resources from real investigations.

President Bush has characterized the eavesdropping program as a “vital tool” against terrorism; Vice President Dick Cheney has said it has saved “thousands of lives.”

But the results of the program look very different to some officials charged with tracking terrorism in the United States. More than a dozen current and former law enforcement and counterterrorism officials, including some in the small circle who knew of the secret program and how it played out at the F.B.I., said the torrent of tips led them to few potential terrorists inside the country they did not know of from other sources and diverted agents from counterterrorism work they viewed as more productive.

A lot of this article reads like a turf war between the NSA and the FBI, but the “inside baseball” aspects are interesting.

EDITED TO ADD (1/18): Jennifer Granick has written on the topic.

Posted on January 18, 2006 at 6:51 AM

Data Mining and Amazon Wishlists

Data Mining 101: Finding Subversives with Amazon Wishlists.

Now, imagine the false alarms and abuses that are possible if you have lots more data, and lots more computers to slice and dice it.

Of course, there are applications where this sort of data mining makes a whole lot of sense. But finding terrorists isn’t one of them. It’s a needle-in-a-haystack problem, and piling on more hay doesn’t help matters much.
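
To see why piling on more hay doesn’t help, here’s a back-of-the-envelope sketch; every number in it is made up purely for illustration:

```python
# Hypothetical numbers, chosen only to illustrate the base-rate problem.
population = 300_000_000      # people being data-mined
terrorists = 1_000            # actual bad actors hidden in that population
true_positive_rate = 0.99     # assume an implausibly good detector
false_positive_rate = 0.01    # and an implausibly low false-alarm rate

flagged_terrorists = terrorists * true_positive_rate
flagged_innocents = (population - terrorists) * false_positive_rate

precision = flagged_terrorists / (flagged_terrorists + flagged_innocents)
print(f"Innocent people flagged: {flagged_innocents:,.0f}")
print(f"Chance a flagged person is a terrorist: {precision:.4%}")
# Roughly 3 million innocents flagged; a flagged person is a terrorist
# about 0.03% of the time. More data (hay) makes this worse, not better.
```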

Posted on January 5, 2006 at 6:15 AM

Sky Marshal Shooting in Miami

I have heretofore refrained from writing about the Miami false-alarm terrorist incident. For those of you who have spent the last few days in an isolation chamber, sky marshals shot and killed a mentally ill man they believed to be a terrorist. The shooting happened on the ground, in the jetway. The man claimed he had a bomb and wouldn’t stop when ordered to by sky marshals. At least, that’s the story.

I’ve read the reports, the claims of the sky marshals and the counterclaims of some witnesses. Whatever happened — and it’s possible that we’ll never know — it does seem that this incident isn’t the same as the British shooting of a Brazilian man on July 22.

I do want to make two points, though.

One, any time you have an officer making split-second, life-and-death decisions, you’re going to have mistakes. I hesitate to second-guess the sky marshals on the ground; they were in a very difficult position. But the way to minimize mistakes is through training. I strongly recommend that anyone interested in this sort of thing read Blink, by Malcolm Gladwell.

Two, I’m not convinced the sky marshals’ threat model matches reality. Mentally ill people are far more common than terrorists. People who claim to have a bomb and don’t are far more common than people who actually do. The real question we should be asking here is: what should the appropriate response be to this low-probability threat?
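
One rough way to frame that question is as a base-rate calculation; all of the probabilities below are invented for illustration, not drawn from any real data:

```python
# Hypothetical prior and likelihoods, purely to illustrate the threat model.
p_has_bomb = 1e-7            # chance a given disruptive passenger has a real bomb
p_claim_given_bomb = 0.5     # a real bomber might well not announce it
p_claim_given_no_bomb = 1e-4 # mentally ill or angry passengers making empty claims

p_claim = (p_claim_given_bomb * p_has_bomb
           + p_claim_given_no_bomb * (1 - p_has_bomb))
p_bomb_given_claim = p_claim_given_bomb * p_has_bomb / p_claim

print(f"P(real bomb | claims to have one) = {p_bomb_given_claim:.6f}")
# With these made-up numbers, roughly 1 in 2,000 such claims is real.
# Whatever the right response to a bomb claim is, "this person has a
# bomb" is almost never the correct inference.
```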

EDITED TO ADD (12/11): Good Salon article on the topic.

Posted on December 9, 2005 at 1:28 PM

30,000 People Mistakenly Put on Terrorist Watch List

This is incredible:

Nearly 30,000 airline passengers discovered in the past year that they were mistakenly placed on federal “terrorist” watch lists, a transportation security official said Tuesday.

When are we finally going to admit that the DHS is incompetent at this?

EDITED TO ADD (12/7): At least they weren’t kidnapped and imprisoned for five months, and “shackled, beaten, photographed nude and injected with drugs by interrogators.”

Posted on December 7, 2005 at 10:26 AM

Automatic Lie Detector

Coming soon to airports:

Tested in Russia, the two-stage GK-1 voice analyser requires that passengers don headphones at a console and answer “yes” or “no” into a microphone to questions about whether they are planning something illicit.

The software will almost always pick up uncontrollable tremors in the voice that give away liars or those with something to hide, say its designers at Israeli firm Nemesysco.

Fascinating.

In general, I prefer security systems that are invasive yet anonymous to ones that are based on massive databases. And automatic systems that divide people into “probably fine” and “investigate a bit more” categories seem like a good use of technology. But I have no idea whether this system works (there is a lot of evidence that it does not), what the false positive and false negative rates are (the 12% false positive rate cited in the article is useless on its own), or how easy it would be to learn to fool the system. And in all of these trade-off discussions, the devil is in the details.
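
To see why a bare false positive rate tells you so little, here’s a quick sketch of what 12% would mean at airport volumes; the passenger count is an assumption, not real data:

```python
# Illustrative only: assume a busy airport screens 50,000 passengers a day
# and essentially none of them are actually planning an attack.
passengers_per_day = 50_000
false_positive_rate = 0.12   # the figure quoted in the article

false_alarms_per_day = passengers_per_day * false_positive_rate
print(f"Passengers pulled aside for nothing: {false_alarms_per_day:,.0f} per day")
# 6,000 secondary screenings a day, every one of them a false alarm.
# Without a false negative rate and a base rate, the 12% number says
# nothing about whether the system would ever catch a real attacker.
```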

Posted on November 21, 2005 at 8:07 AM

Stride-Based Security

Can a cell phone detect if it is stolen by measuring the gait of the person carrying it?

Researchers at the VTT Technical Research Centre of Finland have developed a prototype of a cell phone that uses motion sensors to record a user’s walking pattern of movement, or gait. The device then periodically checks to see that it is still in the possession of its legitimate owner, by measuring the current stride and comparing it against that stored in its memory.

Clever, as long as you realize that there are going to be a lot of false alarms. This seems okay:

If the phone suspects it has fallen into the wrong hands, it will prompt the user for a password if they attempt to make calls or access its memory.
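
Here’s a minimal sketch of the kind of check such a phone might run; the feature extraction and threshold are invented for illustration, since the article doesn’t describe VTT’s actual algorithm:

```python
import math

def gait_features(accel_samples):
    """Reduce a window of accelerometer readings to a few crude statistics."""
    mean = sum(accel_samples) / len(accel_samples)
    variance = sum((x - mean) ** 2 for x in accel_samples) / len(accel_samples)
    return (mean, math.sqrt(variance))

def looks_like_owner(current, enrolled, threshold=0.5):
    """Compare the current gait signature against the enrolled template."""
    distance = math.dist(current, enrolled)
    return distance < threshold  # looser threshold: fewer lockouts, more theft risk

# Template recorded when the owner set up the phone vs. a new window of motion data.
enrolled = gait_features([0.9, 1.1, 1.0, 1.2, 0.8, 1.0])
current = gait_features([1.8, 2.1, 1.9, 2.3, 1.7, 2.0])

if not looks_like_owner(current, enrolled):
    print("Gait mismatch: prompt for password before calls or memory access")
```

Where the threshold is set is exactly where the false alarms come from: too strict and the legitimate owner gets locked out every time they limp, carry groceries, or wear different shoes.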

Posted on November 16, 2005 at 6:26 AM

Chemical Trace Screening

New advances in technology:

“Mass spectrometry is one of the most sensitive methods for finding drugs, chemicals, pollutants and disease, but the problem is that you have to extract a sample and treat that sample before you can analyze it,” said Evan Williams, a chemistry professor at UC Berkeley.

That process can take anywhere from two to 15 minutes for each sample. Multiply that by the number of people in line at airport security at JFK the day before Thanksgiving, and you’ve got a logistical nightmare on your hands.

The research from Purdue, led by analytical chemistry professor Graham Cooks, developed a technique called desorption electrospray ionization, or DESI, that eliminates a part of the mass spectrometry process, and thus speeds up the detection of substances to less than 10 seconds, said Williams.

To use it, law enforcement officials and security screeners will spray methanol or a water and salt mixture on the surface of an object, or a person’s clothing or skin, and test immediately for microscopic traces of chemical compounds.

As this kind of technology gets better, the problem of false alarms becomes greater. We already know that a large percentage of U.S. currency bears traces of cocaine, but can a low-budget terrorist close down an airport by spraying trace chemicals randomly on passengers’ luggage when they’re not looking?

Posted on October 14, 2005 at 1:56 PM

Secure Flight News

The TSA is not going to use commercial databases in its initial roll-out of Secure Flight, its airline screening program that matches passenger names against the Watch List and No-Fly List. I don’t believe for a minute that they’re permanently shelving plans to use commercial data, but at least they’re delaying the process.

In other news, the report (also available here, here, and here) of the Secure Flight Privacy/IT Working Group is public. I was a member of that group, but honestly, I didn’t do any writing for the report. I had given up on the process, sick of not being able to get any answers out of TSA, and believed that the report would end up in somebody’s desk drawer, never to be seen again. I was stunned when I learned that the ASAC made the report public.

There’s a lot of stuff in the report, but I’d like to quote the section that outlines the basic questions that the TSA was unable to answer:

The SFWG found that TSA has failed to answer certain key questions about Secure Flight: First and foremost, TSA has not articulated what the specific goals of Secure Flight are. Based on the limited test results presented to us, we cannot assess whether even the general goal of evaluating passengers for the risk they represent to aviation security is a realistic or feasible one or how TSA proposes to achieve it. We do not know how much or what kind of personal information the system will collect or how data from various sources will flow through the system.

Until TSA answers these questions, it is impossible to evaluate the potential privacy or security impact of the program, including:

  • Minimizing false positives and dealing with them when they occur.
  • Misuse of information in the system.
  • Inappropriate or illegal access by persons with and without permissions.
  • Preventing use of the system and information processed through it for purposes other than airline passenger screening.

The following broadly defined questions represent the critical issues we believe TSA must address before we or any other advisory body can effectively evaluate the privacy and security impact of Secure Flight on the public.

  1. What is the goal or goals of Secure Flight? The TSA is under a Congressional mandate to match domestic airline passenger lists against the consolidated terrorist watch list. TSA has failed to specify with consistency whether watch list matching is the only goal of Secure Flight at this stage. The Secure Flight Capabilities and Testing Overview, dated February 9, 2005 (a non-public document given to the SFWG), states in the Appendix that the program is not looking for unknown terrorists and has no intention of doing so. On June 29, 2005, Justin Oberman (Assistant Administrator, Secure Flight/Registered Traveler) testified to a Congressional committee that “Another goal proposed for Secure Flight is its use to establish ‘Mechanisms for…violent criminal data vetting.’” Finally, TSA has never been forthcoming about whether it has an additional, implicit goal: the tracking of terrorism suspects (whose presence on the terrorist watch list does not necessarily signify intention to commit violence on a flight).

    While the problem of failing to establish clear goals for Secure Flight at a given point in time may arise from not recognizing the difference between program definition and program evolution, it is clearly an issue the TSA must address if Secure Flight is to proceed.

  2. What is the architecture of the Secure Flight system? The Working Group received limited information about the technical architecture of Secure Flight and none about how software and hardware choices were made. We know very little about how data will be collected, transferred, analyzed, stored or deleted. Although we are charged with evaluating the privacy and security of the system, we saw no statements of privacy policies and procedures other than Privacy Act notices published in the Federal Register for Secure Flight testing. No data management plan either for the test phase or the program as implemented was provided or discussed.
  3. Will Secure Flight be linked to other TSA applications? Linkage with other screening programs (such as Registered Traveler, Transportation Worker Identification and Credentialing (TWIC), and Customs and Border Patrol systems like U.S.-VISIT) that may operate on the same platform as Secure Flight is another aspect of the architecture and security question. Unanswered questions remain about how Secure Flight will interact with other vetting programs operating on the same platform; how it will ensure that its policies on data collection, use and retention will be implemented and enforced on a platform that also operates programs with significantly different policies in these areas; and how it will interact with the vetting of passengers on international flights.
  4. How will commercial data sources be used? One of the most controversial elements of Secure Flight has been the possible uses of commercial data. TSA has never clearly defined two threshold issues: what it means by “commercial data” and how it might use commercial data sources in the implementation of Secure Flight. TSA has never clearly distinguished among various possible uses of commercial data, which all have different implications.

    Possible uses of commercial data sometimes described by TSA include: (1) identity verification or authentication; (2) reducing false positives by augmenting passenger records indicating a possible match with data that could help distinguish an innocent passenger from someone on a watch list; (3) reducing false negatives by augmenting all passenger records with data that could suggest a match that would otherwise have been missed; (4) identifying sleepers, which itself includes: (a) identifying false identities; and (b) identifying behaviors indicative of terrorist activity. A fifth possibility has not been discussed by TSA: using commercial data to augment watch list entries to improve their fidelity. Assuming that identity verification is part of Secure Flight, what are the consequences if an identity cannot be verified with a certain level of assurance?

    It is important to note that TSA never presented the SFWG with the results of its commercial data tests. Until these test results are available and have been independently analyzed, commercial data should not be utilized in the Secure Flight program.

  5. Which matching algorithms work best? TSA never presented the SFWG with test results showing the effectiveness of algorithms used to match passenger names to a watch list. One goal of bringing watch list matching inside the government was to ensure that the best available matching technology was used uniformly. The SFWG saw no evidence that TSA compared different products and competing solutions. As a threshold matter, TSA did not describe to the SFWG its criteria for determining how the optimal matching solution would be determined. There are obvious and probably not-so-obvious tradeoffs between false positives and false negatives, but TSA did not explain how it reconciled these concerns.
  6. What is the oversight structure and policy for Secure Flight? TSA has not produced a comprehensive policy document for Secure Flight that defines oversight or governance responsibilities.
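
To make the false-positive/false-negative tradeoff in question 5 concrete, here’s a minimal sketch of threshold-based name matching; the similarity measure, names, and thresholds are placeholders for illustration, not anything TSA has described:

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Crude string similarity; real systems use phonetic and alias-aware matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def matches_watch_list(passenger, watch_list, threshold):
    return [entry for entry in watch_list
            if name_similarity(passenger, entry) >= threshold]

watch_list = ["John Michael Smith", "Maria Garcia Lopez"]

# A strict threshold misses variant spellings (false negatives);
# a loose threshold flags unrelated travelers (false positives).
for threshold in (0.95, 0.80, 0.60):
    hits = matches_watch_list("Jon M. Smith", watch_list, threshold)
    print(f"threshold {threshold}: {hits or 'no match'}")
```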

The members of the working group, and the signatories to the report, are Martin Abrams, Linda Ackerman, James Dempsey, Edward Felten, Daniel Gallington, Lauren Gelman, Steven Lilenthal, Anna Slomovic, and myself.

My previous posts about Secure Flight, and my involvement in the working group, are here, here, here, here, here, and here.

And in case you think things have gotten better, there’s a new story about how the no-fly list cost a pilot his job:

Cape Air pilot Robert Gray said he feels like he’s living a nightmare. Two months after he sued the federal government for refusing to let him take flight training courses so he could fly larger planes, he said yesterday, his situation has only worsened.

When Gray showed up for work a couple of weeks ago, he said Cape Air told him the government had placed him on its no-fly list, making it impossible for him to do his job. Gray, a Belfast native and British citizen, said the government still won’t tell him why it thinks he’s a threat.

“I haven’t been involved in any kind of terrorism, and I never committed any crime,” said Gray, 35, of West Yarmouth. He said he has never been arrested and can’t imagine what kind of secret information the government is relying on to destroy his life.

Remember what the no-fly list is. It’s a list of people who are so dangerous that they can’t be allowed to board an airplane under any circumstances, yet so innocent that they can’t be arrested — even under the provisions of the PATRIOT Act.

EDITED TO ADD: The U.S. Department of Justice Inspector General released a report last month on Secure Flight, basically concluding that the costs were out of control, and that the TSA didn’t know how much the program would cost in the future.

Here’s an article about some of the horrible problems people who have mistakenly found themselves on the no-fly list have had to endure. And another on what you can do if you find yourself on a list.

EDITED TO ADD: EPIC has received a bunch of documents about continued problems with false positives.

Posted on September 26, 2005 at 7:14 AM

Unintended Information Revelation

Here’s a new Internet data-mining research program with a cool name: Unintended Information Revelation:

Existing search engines process individual documents based on the number of times a key word appears in a single document, but UIR constructs a concept chain graph used to search for the best path connecting two ideas within a multitude of documents.

To develop the method, researchers used the chapters of the 9/11 Commission Report to establish concept ontologies – lists of terms of interest in the specific domains relevant to the researchers: aviation, security and anti-terrorism issues.

“A concept chain graph will show you what’s common between two seemingly unconnected things,” said Srihari. “With regular searches, the input is a set of key words, the search produces a ranked list of documents, any one of which could satisfy the query.

“UIR, on the other hand, is a composite query, not a keyword query. It is designed to find the best path, the best chain of associations between two or more ideas. It returns to you an evidence trail that says, ‘This is how these pieces are connected.'”

The hope is to develop the core algorithms exposing veiled paths through documents generated by different individuals or organisations.

I’m a big fan of research, and I’m glad to see it being done. But I hope there is a lot of discussion and debate before we deploy something like this. I want to be convinced that the false positives don’t make it useless as an intelligence-gathering tool.
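
For the curious, here’s a toy sketch of the concept-chain idea as described above: build a co-occurrence graph and search for the shortest path between two concepts. The corpus and concepts are invented, and UIR’s actual algorithms are certainly more sophisticated:

```python
from collections import defaultdict, deque

# Toy corpus: each "document" is just the set of concepts it mentions.
documents = [
    {"flight school", "visa overstay"},
    {"visa overstay", "wire transfer"},
    {"wire transfer", "charity front"},
]

# Link concepts that co-occur in the same document.
graph = defaultdict(set)
for doc in documents:
    for a in doc:
        for b in doc:
            if a != b:
                graph[a].add(b)

def concept_chain(start, goal):
    """Breadth-first search for the shortest chain of associations."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

print(concept_chain("flight school", "charity front"))
# ['flight school', 'visa overstay', 'wire transfer', 'charity front']
# The interesting question is how often chains like this connect two
# perfectly innocent people -- the false positives.
```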

Posted on August 30, 2005 at 12:53 PM

Infants on the Terrorist Watch List

Imagine you’re in charge of airport security. You have a watch list of terrorist names, and you’re supposed to give anyone on that list extra scrutiny. One day someone shows up for a flight whose name is on that list. They’re an infant.

What do you do?

If you have even the slightest bit of sense, you realize that an infant can’t be a terrorist. So you let the infant through, knowing that it’s a false alarm. But if you have no flexibility in your job, if you have to follow the rules regardless of how stupid they are, if you have no authority to make your own decisions, then you detain the baby.

EDITED TO ADD: I know what the article says about the TSA rules:

The Transportation Security Administration, which administers the lists, instructs airlines not to deny boarding to children under 12 — or select them for extra security checks — even if their names match those on a list.

Whether the rules are being followed or ignored is beside my point. The screener is detaining babies because he thinks that’s what the rules require. He’s not permitted to exercise his own common sense.

Security works best when well-trained people have the authority to make decisions, not when poorly-trained people are slaves to the rules (whether real or imaginary). Rules provide CYA security, but not security against terrorism.

Posted on August 19, 2005 at 8:03 AM
