Entries Tagged "TSA"


The TSA's FAST Personality Screening Program Violates the Fourth Amendment

New law journal article: “A Slow March Towards Thought Crime: How the Department of Homeland Security’s FAST Program Violates the Fourth Amendment,” by Christopher A. Rogers. From the abstract:

FAST is currently designed for deployment at airports, where heightened security threats justify warrantless searches under the administrative search exception to the Fourth Amendment. FAST scans, however, exceed the scope of the administrative search exception. Under this exception, the courts would employ a balancing test, weighing the governmental need for the search versus the invasion of personal privacy of the search, to determine whether FAST scans violate the Fourth Amendment. Although the government has an acute interest in protecting the nation’s air transportation system against terrorism, FAST is not narrowly tailored to that interest because it cannot detect the presence or absence of weapons but instead detects merely a person’s frame of mind. Further, the system is capable of detecting an enormous amount of the scannee’s highly sensitive personal medical information, ranging from detection of arrhythmias and cardiovascular disease, to asthma and respiratory failures, physiological abnormalities, psychiatric conditions, or even a woman’s stage in her ovulation cycle. This personal information warrants heightened protection under the Fourth Amendment. Rather than target all persons who fly on commercial airplanes, the Department of Homeland Security should limit the use of FAST to where it has credible intelligence that a terrorist act may occur and should place those people scanned on prior notice that they will be scanned using FAST.

Posted on March 6, 2015 at 6:28 AM

Evading Airport Security

The news is reporting about Evan Booth, who builds weaponry out of items you can buy after airport security. It’s clever stuff.

It’s not new, though. People have been explaining how to evade airport security for years.

Back in 2006, I—and others—explained how to print your own boarding pass and evade the photo-ID check, a trick that still seems to work. In 2008, I demonstrated carrying two large bottles of liquid through airport security. Here’s a paper about stabbing people with stuff you can take through airport security. And here’s a German video of someone building a bomb out of components he snuck through a full-body scanner. There’s lots more if you start poking around the Internet.

So, what’s the moral here? It’s not like the terrorists don’t know about these tricks. They’re no surprise to the TSA, either. If airport security is so porous, why aren’t there more terrorist attacks? Why aren’t the terrorists using these, and other, techniques to attack planes every month?

I think the answer is simple: airplane terrorism isn't a big risk. There are very few actual terrorists, and executing a terrorist plot is much more difficult than the individual attack tactics suggest. It's the same reason why I don't care very much about the various TSA mistakes that are regularly reported.

Posted on December 4, 2013 at 6:28 AM

The TSA Is Legally Allowed to Lie to Us

The TSA does not have to tell the truth:

Can the TSA (or local governments as directed by the TSA) lie in response to a FOIA request?

Sure, no problem! Even the NSA responds that they “can’t confirm or deny the existence” of classified things for which admitting or denying existence would (allegedly, of course) damage national security. But the TSA? U.S. District Judge Joan A. Lenard granted the TSA the special privilege of not needing to go that route, rubber-stamping the decision of the TSA and the airport authority to write to me that no CCTV footage of the incident existed when, in fact, it did. This footage is non-classified and its existence is admitted by over a dozen visible camera domes and even signage that the area is being recorded. Beyond that, the TSA regularly releases checkpoint video when it doesn’t show them doing something wrong (for example, here’s CCTV of me beating their body scanners). But if it shows evidence of misconduct? Just go ahead and lie.

EDITED TO ADD (9/14): This is an overstatement.

Posted on September 10, 2013 at 6:55 AM

Latest Movie-Plot Threat: Explosive-Dipped Clothing

It’s being reported, although there’s no indication of where this rumor is coming from or what it’s based on.

…the new tactic allows terrorists to dip ordinary clothing into the liquid to make the clothes themselves into explosives once dry.

“It’s ingenious,” one of the officials said.

Another senior official said that the tactic would not be detected by current security measures.

I can see the trailer now. “In a world where your very clothes might explode at any moment, Bruce Willis is Bruce Willis in a Michael Bay film: BLOW UP! Co-starring Lindsay Lohan…”

I guess there’s nothing to be done but to force everyone to fly naked.

Posted on August 9, 2013 at 6:04 AM

TSA Considering Implementing Randomized Security

For a change, here’s a good idea by the TSA:

TSA has just issued a Request for Information (RFI) to prospective vendors who could develop and supply such randomizers, which TSA expects to deploy at CAT X through CAT IV airports throughout the United States.

“The Randomizers would be used to route passengers randomly to different checkpoint lines,” says the agency’s RFI.

The article lists a bunch of requirements by the TSA for the device.

I’ve seen something like this at customs in, I think, India. Every passenger walks up to a kiosk and presses a button. If the green light turns on, he walks through. If the red light turns on, his bags get searched. Presumably the customs officials can set the search percentage.

Automatic randomized screening is a good idea. It’s free from bias or profiling. It can’t be gamed. These both make it more secure. Note that this is just an RFI from the TSA. An actual program might be years away, and it might not be implemented well. But it’s certainly a start.
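The kiosk mechanism is simple enough to sketch. Here's a minimal illustration in Python; the TSA's RFI doesn't specify an interface, so the function names and the `search_rate` parameter are hypothetical:

```python
import random

def make_randomizer(search_rate=0.2, seed=None):
    """Return a function that routes each passenger at random.

    search_rate is the fraction of passengers sent to secondary
    screening; officials could tune it. (Illustrative only -- the
    RFI does not specify an interface.)
    """
    rng = random.Random(seed)

    def route_passenger():
        # Each decision is an independent random draw, so the outcome
        # can't be predicted or gamed by watching earlier passengers.
        return "search" if rng.random() < search_rate else "proceed"

    return route_passenger

route = make_randomizer(search_rate=0.2, seed=42)
decisions = [route() for _ in range(10000)]
print(decisions.count("search") / len(decisions))  # close to 0.2
```

The security property is exactly the independence of each draw: knowing the last hundred outcomes tells an attacker nothing about the next one.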

EDITED TO ADD (7/19): This is an opposing view. Basically, it’s based on the argument that profiling makes sense, and randomized screening means that you can’t profile. It’s an argument I’ve argued against before.

EDITED TO ADD (8/10): Another argument that profiling does not work.

Posted on July 19, 2013 at 2:45 PM

The Politics of Security in a Democracy

Terrorism causes fear, and we overreact to that fear. Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are, and we fear them more than probability indicates we should.

Our leaders are just as prone to this overreaction as we are. But aside from basic psychology, there are other reasons that it’s smart politics to exaggerate terrorist threats, and security threats in general.

The first is that we respond to a strong leader. Bill Clinton famously said: “When people feel uncertain, they’d rather have somebody that’s strong and wrong than somebody who’s weak and right.” He’s right.

The second is that doing something—anything—is good politics. A politician wants to be seen as taking charge, demanding answers, fixing things. It just doesn’t look as good to sit back and claim that there’s nothing to do. The logic is along the lines of: “Something must be done. This is something. Therefore, we must do it.”

The third is that the “fear preacher” wins, regardless of the outcome. Imagine two politicians today. One of them preaches fear and draconian security measures. The other is someone like me, who tells people that terrorism is a negligible risk, that risk is part of life, and that while some security is necessary, we should mostly just refuse to be terrorized and get on with our lives.

Fast-forward 10 years. If I’m right and there have been no more terrorist attacks, the fear preacher takes credit for keeping us safe. But if a terrorist attack has occurred, my government career is over. Even if the incidence of terrorism is as ridiculously low as it is today, there’s no benefit for a politician to take my side of that gamble.

The fourth and final reason is money. Every new security technology, from surveillance cameras to high-tech fusion centers to airport full-body scanners, has a for-profit corporation lobbying for its purchase and use. Given the three other reasons above, it’s easy—and probably profitable—for a politician to make them happy and say yes.

For any given politician, the implications of these four reasons are straightforward. Overestimating the threat is better than underestimating it. Doing something about the threat is better than doing nothing. Doing something that is explicitly reactive is better than being proactive. (If you’re proactive and you’re wrong, you’ve wasted money. If you’re proactive and you’re right but no longer in power, whoever is in power is going to get the credit for what you did.) Visible is better than invisible. Creating something new is better than fixing something old.

Those last two maxims are why it’s better for a politician to fund a terrorist fusion center than to pay for more Arabic translators for the National Security Agency. No one’s going to see the additional appropriation in the NSA’s secret budget. On the other hand, a high-tech computerized fusion center is going to make front page news, even if it doesn’t actually do anything useful.

This leads to another phenomenon about security and government. Once a security system is in place, it can be very hard to dislodge it. Imagine a politician who objects to some aspect of airport security: the liquid ban, the shoe removal, something. If he pushes to relax security, he gets the blame if something bad happens as a result. No one wants to roll back a police power and have the lack of that power cause a well-publicized death, even if it’s a one-in-a-billion fluke.

We’re seeing this force at work in the bloated terrorist no-fly and watch lists; agents have lots of incentive to put someone on the list, but absolutely no incentive to take anyone off. We’re also seeing this in the Transportation Security Administration’s attempt to reverse the ban on small blades on airplanes. Twice it tried to make the change, and twice fearful politicians prevented it.

Lots of unneeded and ineffective security measures are perpetuated by a government bureaucracy that is primarily concerned about the security of its members’ careers. They know the voters are more likely to punish them if they fail to secure against a repetition of the last attack, and less likely to punish them if they fail to anticipate the next one.

What can we do? Well, the first step toward solving a problem is recognizing that you have one. These are not iron-clad rules; they’re tendencies. If we can keep these tendencies and their causes in mind, we’re more likely to end up with sensible security measures that are commensurate with the threat, instead of a lot of security theater and draconian police powers that are not.

Our leaders’ job is to resist these tendencies. Our job is to support politicians who do resist.

This essay originally appeared on CNN.com.

EDITED TO ADD (6/4): This essay has been translated into Swedish.

EDITED TO ADD (6/14): A similar essay, on the politics of terrorism defense.

Posted on May 28, 2013 at 5:09 AM

Training Baggage Screeners

The research in G. Giguère and B.C. Love, “Limits in decision making arise from limits in memory retrieval,” Proceedings of the National Academy of Sciences 110(19) (2013), has applications in training airport baggage screeners.

Abstract: Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.
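The abstract's core claim can be made concrete with a toy simulation: a decider who retrieves only a couple of memories per decision loses accuracy to sampling noise, while a decider trained on an idealized distribution (only the majority outcome) recovers it. This is a sketch of my own, not the authors' model; all parameters are illustrative:

```python
import random

def simulate(p=0.7, n_train=1000, k=2, trials=20000, seed=1):
    """Toy model of the retrieval bottleneck.

    An outcome occurs with probability p. A full-memory classifier
    weighs all training examples and always predicts the majority.
    A limited-retrieval classifier samples only k memories per
    decision and votes, which injects irreducible noise.
    """
    rng = random.Random(seed)
    actual = [1 if rng.random() < p else 0 for _ in range(n_train)]
    idealized = [1] * n_train  # idealized training: only the majority outcome

    def limited_predict(memories):
        sample = rng.sample(memories, k)
        s = sum(sample)
        if 2 * s == k:
            return rng.randint(0, 1)  # break ties at random
        return 1 if 2 * s > k else 0

    majority = 1 if sum(actual) > n_train / 2 else 0
    acc = {"full": 0, "limited_actual": 0, "limited_idealized": 0}
    for _ in range(trials):
        truth = 1 if rng.random() < p else 0
        acc["full"] += (majority == truth)
        acc["limited_actual"] += (limited_predict(actual) == truth)
        acc["limited_idealized"] += (limited_predict(idealized) == truth)
    return {key: v / trials for key, v in acc.items()}
```

With p = 0.7 and k = 2, the limited-retrieval decider trained on the actual distribution lands near 58% accuracy, while both the full-memory decider and the limited decider trained on idealized data reach about 70%. That gap is the paper's point: when retrieval is the bottleneck, distorting the training distribution toward the ideal helps the human learner.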

Posted on May 24, 2013 at 12:17 PM
