Entries Tagged "false negatives"


Man Flies with Someone Else's Ticket and No Legal ID

Last week, I got a bunch of press calls about Olajide Oluwaseun Noibi, who flew from New York to Los Angeles using an expired ticket in someone else’s name and a university ID. They all wanted to know what this says about airport security.

It says that airport security isn’t perfect, and that people make mistakes. But it’s not something that anyone should worry about. It’s not like Noibi figured out a new hole in the airport security system, one that he was able to exploit repeatedly. He got lucky. He got real lucky. It’s not something a terrorist can build a plot around.

I’m even less concerned because I’ve never thought the photo ID check had any value. Noibi was screened, just like any other passenger. Even the TSA blog makes this point:

In this case, TSA did not properly authenticate the passenger’s documentation. That said, it’s important to note that this individual received the same thorough physical screening as other passengers, including being screened by advanced imaging technology (body scanner).

Seems like the TSA is regularly downplaying the value of the photo ID check. This is from a Q&A about Secure Flight, their new system to match passengers with watch lists:

Q: This particular “layer” isn’t terribly effective. If this “layer” of security can be circumvented by anyone with a printer and a word processor, this doesn’t seem to be a terribly useful “layer” … especially looking at the amount of money being expended on this particular “layer”. It might be that this money could be more effectively spent on other “layers”.

A: TSA uses layers of security to ensure the security of the traveling public and the Nation’s transportation system. Secure Flight’s watchlist name matching constitutes only one security layer of the many in place to protect aviation. Others include intelligence gathering and analysis, airport checkpoints, random canine team searches at airports, federal air marshals, federal flight deck officers and more security measures both visible and invisible to the public.

Each one of these layers alone is capable of stopping a terrorist attack. In combination their security value is multiplied, creating a much stronger, formidable system. A terrorist who has to overcome multiple security layers in order to carry out an attack is more likely to be pre-empted, deterred, or to fail during the attempt.

Yes, the answer says that they need to spend millions to ensure that terrorists with a viable plot also need a computer, but you can tell that their heart wasn’t in the answer. “Checkpoints! Dogs! Air marshals! Ignore the stupid photo ID requirement.”

Noibi is an embarrassment for the TSA and for the airline Virgin America, both of which are supposed to catch this kind of thing. But I’m not worried about the security risk, and neither is the TSA.

Posted on July 6, 2011 at 5:53 AM

Loaded Gun Slips Past TSA

I’m not really worried about mistakes like this. Sure, a gun slips through occasionally, and a knife slips through even more often. (I’m sure the TSA doesn’t catch 100% of all bombs in tests, either.) But these items are caught by the TSA often enough, and when the TSA does catch someone, they’re going to call the police and totally ruin his day. A terrorist can’t build a plot around succeeding.

It’s things like liquids that are the real problem. Because there are no consequences to trying—the bottle of water just gets thrown into the trash—a terrorist can repeatedly try until he succeeds in slipping it through.

I asked then-TSA Administrator Kip Hawley about this in 2007. He didn’t answer.

Posted on January 14, 2011 at 11:03 AM

Behavioral Profiling at Airports

There’s a long article in Nature on the practice:

It remains unclear what the officers found anomalous about George’s behaviour, and why he was detained. The TSA’s parent agency, the Department of Homeland Security (DHS), has declined to comment on his case because it is the subject of a federal lawsuit that was filed on George’s behalf in February by the American Civil Liberties Union. But the incident has brought renewed attention to a burgeoning controversy: is it possible to know whether people are being deceptive, or planning hostile acts, just by observing them?

Some people seem to think so. At London’s Heathrow Airport, for example, the UK government is deploying behaviour-detection officers in a trial modelled in part on SPOT. And in the United States, the DHS is pursuing a programme that would use sensors to look at nonverbal behaviours, and thereby spot terrorists as they walk through a corridor. The US Department of Defense and intelligence agencies have expressed interest in similar ideas.

Yet a growing number of researchers are dubious—not just about the projects themselves, but about the science on which they are based. “Simply put, people (including professional lie-catchers with extensive experience of assessing veracity) would achieve similar hit rates if they flipped a coin,” noted a 2007 report from a committee of credibility-assessment experts who reviewed research on portal screening.

“No scientific evidence exists to support the detection or inference of future behaviour, including intent,” declares a 2008 report prepared by the JASON defence advisory group. And the TSA had no business deploying SPOT across the nation’s airports “without first validating the scientific basis for identifying suspicious passengers in an airport environment”, stated a two-year review of the programme released on 20 May by the Government Accountability Office (GAO), the investigative arm of the US Congress.

Commentary from the MindHacks blog.

Also, the GAO has published a report on the U.S. DHS’s SPOT program: “Aviation Security: Efforts to Validate TSA’s Passenger Screening Behavior Detection Program Underway, but Opportunities Exist to Strengthen Validation and Address Operational Challenges.”

As of March 2010, TSA deployed about 3,000 BDOs at an annual cost of about $212 million; this force increased almost fifteen-fold between March 2007 and July 2009. BDOs have been selectively deployed to 161 of the 457 TSA-regulated airports in the United States at which passengers and their property are subject to TSA-mandated screening procedures.

It seems pretty clear that the program only catches criminals, not terrorists. You’d think there would be more important things to spend $200 million a year on.

EDITED TO ADD (6/14): In the comments, a couple of people asked how this compares with the Israeli model of airport security—concentrate on the person—and the idea that trained officers notice if someone is acting “hinky”: both things that I have written favorably about.

The difference is the experience of the detecting officer and the amount of time they spend with each person. If you read about the programs described above, they’re supposed to “spot terrorists as they walk through a corridor,” or possibly after a few questions. That’s very different from what happens when you check in for a flight at Ben Gurion Airport.

The problem with fast detection programs is that they don’t work, and the problem with the Israeli security model is that it doesn’t scale.

Posted on June 14, 2010 at 6:23 AM

Intelligence Can Never Be Perfect

Go read this article—”Setting impossible standards on intelligence”—on laying blame for the intelligence “failure” that allowed the Underwear Bomber to board an airplane on Christmas Day.

Although the CIA, FBI, and Defense, State, Treasury and Homeland Security departments have counterterrorism analytic units—some even with information-gathering operations—the assumption is that all of the data are passed on to NCTC.

The law, by the way, specifically says that the NCTC director “may not direct the execution of counterterrorism operations.”

The Senate committee’s list identifying “points of failure” shows that not all relevant information from some agencies landed at the NCTC.

Perhaps the leading example was the State Department’s failure to notify the NCTC in its initial reporting that Abdulmutallab—whose father had reported him missing in November and suspected “involvement with Yemeni-based extremists”—had an outstanding U.S. visa.

This initial fact, if contained in State’s first notice to the NCTC, would have raised the importance of his status. Instead, Abdulmutallab became one of hundreds of new names sent to the NCTC that day. The Senate panel blurs this in its report by focusing on State’s failure—as well as NCTC’s—to revoke the visa. Neither the department nor NCTC discovered the visa until it was too late.

Two other agencies also failed to report important relevant information.

[…]

How can the NCTC perform its role, which by law is “to serve as the central and shared knowledge bank on known and suspected terrorists and international terror groups,” if its analysts are unaware that additional intelligence exists at other agencies? The committee’s answer to that, listed as failure 10, was that the “NCTC’s watchlisting office did not conduct additional research to find additional derogatory information to place Abdulmutallab on a watchlist.”

True, NCTC analysts have access to most agency databases. But with hundreds of names arriving each day, which name does the NCTC select to then begin its search of 16 other agency databases? Especially when the expectation is that each agency has searched its own.

I’ve never been impressed with the “dots” that should have been connected regarding Abdulmutallab. On closer examination, they mostly evaporate. Nor do I consider Christmas Day a security failure. Plane lands safely, terrorist captured, no one hurt; what more do people want?

Posted on June 2, 2010 at 6:39 AM

Impersonation

Impersonation isn’t new. In 1556, a Frenchman was executed for impersonating Martin Guerre, and this week hackers impersonated Barack Obama on Twitter. It’s not even unique to humans: mockingbirds, Viceroy butterflies, and the mimic octopus all use impersonation as a survival strategy. For people, detecting impersonation is a hard problem for three reasons: we need to verify the identity of people we don’t know, we interact with people through “narrow” communications channels like the telephone and Internet, and we want computerized systems to do the verification for us.

Traditional impersonation involves people fooling people. It’s still done today: impersonating garbage men to collect tips, impersonating parking lot attendants to collect fees, or impersonating the French president to fool Sarah Palin. Impersonating people like policemen, security guards, and meter readers is a common criminal tactic.

These tricks work because we all regularly interact with people we don’t know. No one could successfully impersonate your brother, your best friend, or your boss, because you know them intimately. But a policeman or a parking lot attendant? That’s just someone with a badge or a uniform. But badges and ID cards only help if you know how to verify one. Do you know what a valid police ID looks like? Or how to tell a real telephone repairman’s badge from a forged one?

Still, it’s human nature to trust these credentials. We naturally trust uniforms, even though we know that anyone can wear one. When we visit a Web site, we use the professionalism of the page to judge whether or not it’s really legitimate—never mind that anyone can cut and paste graphics. Watch the next time someone other than law enforcement verifies your ID; most people barely look at it.

Impersonation is even easier over limited communications channels. On the telephone, how can you distinguish someone working at your credit card company from someone trying to steal your account details and login information? On e-mail, how can you distinguish someone from your company’s tech support from a hacker trying to break into your network—or the mayor of Paris from an impersonator? Once in a while someone frees himself from jail by faxing a forged release order to his warden. This is social engineering: impersonating someone convincingly enough to fool the victim.

These days, a lot of identity verification happens with computers. Computers are fast at computation but not very good at judgment, and can be tricked. So people can fool speed cameras by taping a fake license plate over the real one, fingerprint readers with a piece of tape, or automatic face scanners with—and I’m not making this up—a photograph of a face held in front of their own. Even the most bored policeman wouldn’t fall for any of those tricks.

This is why identity theft is such a big problem today. So much authentication happens online, with only a small amount of information: user ID, password, birth date, Social Security number, and so on. Anyone who gets that information can impersonate you to a computer, which doesn’t know any better.

Despite all of these problems, most authentication systems work most of the time. Even something as ridiculous as faxed signatures work, and can be legally binding. But no authentication system is perfect, and impersonation is always possible.

This lack of perfection is okay, though. Security is a trade-off, and any well-designed authentication system balances security with ease of use, customer acceptance, cost, and so on. More authentication isn’t always better. Banks make this trade-off when they don’t bother authenticating signatures on checks under amounts like $25,000; it’s cheaper to deal with fraud after the fact. Web sites make this trade-off when they use simple passwords instead of something more secure, and merchants make this trade-off when they don’t bother verifying your signature against your credit card. We make this trade-off when we accept police badges, Best Buy uniforms, and faxed signatures with only a cursory amount of verification.

Good authentication systems also balance false positives against false negatives. Impersonation is just one way these systems can fail; they can also fail to authenticate the real person. An ATM is better off allowing occasional fraud than preventing legitimate account holders access to their money. On the other hand, a false positive in a nuclear launch system is much more dangerous; better to not launch the missiles.
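
To make that trade-off concrete, here is a quick back-of-the-envelope sketch. Every rate and cost in it is invented for illustration; it describes no real ATM or weapons system, but it shows why the two kinds of systems should be tuned in opposite directions.

# Illustrative only: invented rates and costs, not real ATM or launch-system
# figures. "False accept" is the essay's "false positive" (an impostor gets
# through); "false reject" means failing to authenticate the real person.

def expected_cost(false_accept_rate, false_reject_rate,
                  cost_of_false_accept, cost_of_false_reject,
                  impostor_fraction):
    """Expected cost of a single authentication attempt."""
    accept_cost = impostor_fraction * false_accept_rate * cost_of_false_accept
    reject_cost = (1 - impostor_fraction) * false_reject_rate * cost_of_false_reject
    return accept_cost + reject_cost

loose = dict(false_accept_rate=0.01, false_reject_rate=0.001)    # easy on users
strict = dict(false_accept_rate=0.0001, false_reject_rate=0.05)  # hard on everyone

# ATM: occasional fraud is cheaper than locking customers out of their money.
atm = dict(cost_of_false_accept=500, cost_of_false_reject=50, impostor_fraction=0.001)
print("ATM, loose: ", expected_cost(**loose, **atm))      # lower cost: loose wins
print("ATM, strict:", expected_cost(**strict, **atm))

# Launch authorization: accepting a forged order is catastrophic.
launch = dict(cost_of_false_accept=1e9, cost_of_false_reject=1e3, impostor_fraction=0.001)
print("Launch, loose: ", expected_cost(**loose, **launch))
print("Launch, strict:", expected_cost(**strict, **launch))  # lower cost: strict wins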

Decentralized authentication systems work better than centralized ones. Open your wallet, and you’ll see a variety of physical tokens used to identify you to different people and organizations: your bank, your credit card company, the library, your health club, and your employer, as well as a catch-all driver’s license used to identify you in a variety of circumstances. That assortment is actually more secure than a single centralized identity card: each system must be broken individually, and breaking one doesn’t give the attacker access to everything. This is one of the reasons that centralized systems like REAL-ID make us less secure.

Finally, any good authentication system uses defense in depth. Since no authentication system is perfect, there need to be other security measures in place if authentication fails. That’s why all of a corporation’s assets and information aren’t available to anyone who can bluff his way into the corporate offices. That’s why credit card companies have expert systems analyzing suspicious spending patterns. And it’s why identity theft won’t be solved by making personal information harder to steal.

We can reduce the risk of impersonation, but it will always be with us; technology cannot “solve” it in any absolute sense. Like any security, the trick is to balance the trade-offs. Too little security, and criminals withdraw money from all our bank accounts. Too much security, and when Barack Obama calls to congratulate you on your reelection, you won’t believe it’s him.

This essay originally appeared in The Wall Street Journal.

Posted on January 9, 2009 at 2:04 PM

The Two Classes of Airport Contraband

Airport security found a jar of pasta sauce in my luggage last month. It was a 6-ounce jar, above the limit; the official confiscated it, because allowing it on the airplane with me would have been too dangerous. And to demonstrate how dangerous he really thought that jar was, he blithely tossed it in a nearby bin of similar liquid bottles and sent me on my way.

There are two classes of contraband at airport security checkpoints: the class that will get you in trouble if you try to bring it on an airplane, and the class that will cheerily be taken away from you if you try to bring it on an airplane. This difference is important: Making security screeners confiscate anything from that second class is a waste of time. All it does is harm innocents; it doesn’t stop terrorists at all.

Let me explain. If you’re caught at airport security with a bomb or a gun, the screeners aren’t just going to take it away from you. They’re going to call the police, and you’re going to be stuck for a few hours answering a lot of awkward questions. You may be arrested, and you’ll almost certainly miss your flight. At best, you’re going to have a very unpleasant day.

This is why articles about how screeners don’t catch every gun and bomb that goes through the checkpoints—or even a majority of them—don’t bother me. The screeners don’t have to be perfect; they just have to be good enough. No terrorist is going to base his plot on getting a gun through airport security if there’s a decent chance of getting caught, because the consequences of getting caught are too great.

Contrast that with a terrorist plot that requires a 12-ounce bottle of liquid. There’s no evidence that the London liquid bombers actually had a workable plot, but assume for the moment they did. If some copycat terrorists try to bring their liquid bomb through airport security and the screeners catch them—like they caught me with my bottle of pasta sauce—the terrorists can simply try again. They can try again and again. They can keep trying until they succeed. Because there are no consequences to trying and failing, the screeners have to be 100 percent effective. Even if they slip up one in a hundred times, the plot can succeed.
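
A little arithmetic shows how fast repeated, consequence-free attempts erode even a very good screener. The 99 percent per-attempt detection rate below is an invented number, and the attempts are assumed to be independent:

# Chance that at least one attempt slips through, assuming independent
# attempts and an invented 99% per-attempt detection rate.
detection_rate = 0.99

for attempts in (1, 10, 100, 500):
    all_caught = detection_rate ** attempts   # every single attempt is caught
    print(f"{attempts:4d} attempts: {1 - all_caught:.1%} chance one gets through")

# About 1% after one try, 63% after 100 tries, 99% after 500. With a gun,
# the first failed attempt ends in arrest, so there is effectively one try;
# with a water bottle, the terrorist gets as many tries as he wants.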

The same is true for knitting needles, pocketknives, scissors, corkscrews, cigarette lighters and whatever else the airport screeners are confiscating this week. If there’s no consequence to getting caught with it, then confiscating it only hurts innocent people. At best, it mildly annoys the terrorists.

To fix this, airport security has to make a choice. If something is dangerous, treat it as dangerous and treat anyone who tries to bring it on as potentially dangerous. If it’s not dangerous, then stop trying to keep it off airplanes. Trying to have it both ways just distracts the screeners from actually making us safer.

EDITED TO ADD (10/23): A similar article ran in The Guardian.

Posted on September 23, 2008 at 5:47 AM

New TSA Report

A classified 2006 TSA report on airport security has been leaked to USA Today. (Other papers are covering the story, but their articles seem to be all derived from the original USA Today article.)

There’s good news:

This year, the TSA for the first time began running covert tests every day at every checkpoint at every airport. That began partly in response to the classified TSA report showing that screeners at San Francisco International Airport were tested several times a day and found about 80% of the fake bombs.

Constant testing makes screeners “more suspicious as well as more capable of recognizing (bomb) components,” the report said. The report does not explain the high failure rates but said O’Hare’s checkpoints were too congested and too wide for supervisors to monitor screeners.

At San Francisco, “everybody realizes they are under scrutiny, being watched and tested constantly,” said Gerald Berry, president of Covenant Aviation Security, which hires and manages the San Francisco screeners. San Francisco is one of eight airports, most of them small, where screeners work for a private company instead of the TSA. The idea for constant testing came from Ed Gomez, TSA security director at San Francisco, Berry said. The tests often involve an undercover person putting a bag with a fake bomb on an X-ray machine belt, he said.

Repeated testing is good, for a whole bunch of reasons.

There’s bad news:

Howe said the increased difficulty explains why screeners at Los Angeles and Chicago O’Hare airports failed to find more than 60% of fake explosives that TSA agents tried to get through checkpoints last year.

The failure rates—about 75% at Los Angeles and 60% at O’Hare—are higher than some tests of screeners a few years ago and equivalent to other previous tests.

Sure, the tests are harder. But those are miserable numbers.

And there’s unexplainable news:

At San Diego International Airport, tests are run by passengers whom local TSA managers ask to carry a fake bomb, said screener Cris Soulia, an official in a screeners union.

Someone please tell me this doesn’t actually happen. “Hi Mr. Passenger. I’m a TSA manager. You know I’m not lying to you because of this official-looking laminated badge I have. We need you to help us test airport security. Here’s a ‘fake’ bomb that we’d like you to carry through security in your luggage. Another TSA manager will, um, meet you at your destination. Give the fake bomb to him when you land. And, by the way, what’s your mother’s maiden name?”

How in the world is this a good idea? And how hard is it to dress real TSA managers up like vacationers?

EDITED TO ADD (10/24): Here’s a story of someone being asked to carry an item through airport security at Dulles Airport.

EDITED TO ADD (10/26): TSA claims that this doesn’t happen:

TSA officials do not ask random passengers to carry fake bombs through checkpoints for testing at San Diego International Airport, or any other airport.

[…]

TSA Traveler Alert: If approached by anyone claiming to be a TSA employee asking you to take something through the checkpoint, please contact a uniformed TSA employee at the checkpoint or a law enforcement officer immediately.

Is there anyone else who has had this happen to them?

Posted on October 19, 2007 at 2:37 PM

More Behavioral Profiling

I’ve seen several articles based on this press release:

Computer and behavioral scientists at the University at Buffalo are developing automated systems that track faces, voices, bodies and other biometrics against scientifically tested behavioral indicators to provide a numerical score of the likelihood that an individual may be about to commit a terrorist act.

I am generally in favor of funding all sorts of research, no matter how outlandish—you never know when you’ll discover something really good—and I am generally in favor of this sort of behavioral assessment profiling.

But I wish reporters would approach these topics with something resembling skepticism. The false-positive rate matters far more than the false-negative rate, and I doubt something like this will be ready for fielding any time soon.
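
The false-positive rate dominates because attackers are so rare. Here is a rough base-rate sketch with invented numbers; real attackers are far rarer than one in ten million, which only makes the problem worse:

# Base-rate sketch with invented numbers: when the thing you are looking
# for is extremely rare, false positives swamp true positives.
passengers = 10_000_000
attackers = 1                 # assume one real attacker in the whole crowd
sensitivity = 0.99            # chance the system flags a real attacker
false_positive_rate = 0.01    # chance it flags an innocent passenger

true_alarms = attackers * sensitivity
false_alarms = (passengers - attackers) * false_positive_rate

print(f"True alarms:  {true_alarms:.2f}")      # about 1
print(f"False alarms: {false_alarms:,.0f}")    # about 100,000
print(f"Chance a flagged passenger is a real attacker: "
      f"{true_alarms / (true_alarms + false_alarms):.4%}")
# Roughly one alarm in 100,000 is real; nearly all screening effort lands
# on innocent travelers.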

EDITED TO ADD (10/13): Another comment.

Posted on October 15, 2007 at 6:16 AM
