11-Year-Old Bypasses Airport Security
Sure, stories like this are great fun, but I don’t think it’s much of a security concern. Terrorists can’t build a plot around random occasional security failures.
Normally, companies instruct their employees not to resist. But Hainan Airlines did the opposite:
Two safety officers and the chief purser got cash and property worth 4m yuan ($628,500; £406,200) each. The rest got assets worth 2.5m yuan each.
That’s a lot of money, especially in China. I’m sure it will influence future decisions by crew, and even passengers, about resisting terrorist attacks.
In July 2011, a federal appeals court ruled that the Transportation Security Administration had to conduct a notice-and-comment rulemaking on its policy of using “Advanced Imaging Technology” for primary screening at airports. TSA was supposed to publish the policy in the Federal Register, take comments from the public, and justify its policy based on public input. The court told TSA to do all this “promptly.” A year later, TSA has not even started that public process. Defying the court, the TSA has not satisfied public concerns about privacy, about costs and delays, security weaknesses, and the potential health effects of these machines. If the government is going to “body-scan” Americans at U.S. airports, President Obama should force the TSA to begin the public process the court ordered.
The petition needed 150 signatures to go “public” on Whitehouse.gov (currently at 296), and needs 25,000 to require a response from the administration. You have to register before you can sign, but it’s a painless procedure. Basically, they’re checking that you have a valid e-mail address.
Everyone should sign it.
Rand Paul has introduced legislation to rein in the TSA. There are two bills:
One bill would require that the mostly federalized program be turned over to private screeners and allow airports with Department of Homeland Security approval to select companies to handle the work.
This seems to be a result of a fundamental misunderstanding of the economic incentives involved here, combined with magical thinking that a market solution solves all. In airport screening, the passenger isn’t the customer. (Technically he is, but only indirectly.) The airline isn’t even the customer. The customer is the U.S. government, which is in the grip of an irrational fear of terrorism.
It doesn’t matter if an airport screener receives a paycheck signed by the Department of the Treasury or Private Airport Screening Services, Inc. As long as a terrorized government—one that needs to be seen by voters as “tough on terror” and wants to stop every terrorist attack, regardless of the cost, and is willing to sacrifice all for the illusion of security—gets to set the security standards, we’re going to get TSA-style security.
We can put the airlines, either directly or via airport fees, in charge of security, but that has problems in the other direction. Airlines don’t really care about terrorism; it’s rare, the costs to the airline are relatively small (remember that the government bailed the industry out after 9/11), and the rest of the costs are externalities and are borne by other people. So if airlines are in charge, we’re likely to get less security than makes sense.
It makes sense for a government to be in charge of airport security—either directly or by setting standards for contractors to follow, I don’t care—but we’ll only get sensible security when the government starts behaving sensibly.
The second bill would permit travelers to opt out of pat-downs and be rescreened, allow them to call a lawyer when detained, increase the role of dogs in explosive detection, let passengers “appropriately object to mistreatment,” allow children 12 years old and younger to avoid “unnecessary pat-downs” and require the distribution of the new rights at airports.
That legislation also would let airports decide to privatize if wanted and expand TSA’s PreCheck program for trusted travelers.
This is a mixed bag. Airports can already privatize security—SFO has done so already—and TSA’s PreCheck is being expanded. Opting out of pat-downs and being rescreened only makes sense if the pat-down request was the result of an anomaly in the screening process; my guess is that rescreening will just produce the same anomaly and still require a pat-down. The right to call a lawyer when detained is a good one, although in reality we passengers just want to make our flights; that’s why we let ourselves be subjected to this sort of treatment at airports. And the phrase “unnecessary pat-downs” all comes down to what is considered necessary. If a 12-year-old goes through a full-body scanner and a gun-shaped image shows up on the screen, is the subsequent pat-down necessary? What if it’s a long and thin image? What if he goes through a metal detector and it beeps? And who gets to decide what’s necessary? If it’s the TSA, nothing will change.
And dogs: a great idea, but a logistical nightmare. Dogs require space to eat, sleep, run, poop, and so on. They just don’t fit into your typical airport setup.
The problem isn’t government-run airport security, full-body scanners, the screening of children and the elderly, or even a paucity of dogs. The problem is that we were so terrorized that we demanded our government keep us safe at all costs. The problem is that our government was so terrorized after 9/11 that it gave an enormous amount of power to our security organizations. The problem is that the security-industrial complex has gotten large and powerful—and good at advancing its agenda—and that we’ve scared our public officials into being so scared that they don’t notice when security goes too far.
I too want to rein in the TSA, but the only way to do that is to change the TSA’s mission. And the only way to do that is to change the government that gives the TSA its mission. We need to refuse to be terrorized, and we need to elect non-terrorized legislators.
But that’s a long way off. In the near term, I’d like to see legislation that forces the TSA, the DHS, and anyone working in counterterrorism, to justify their systems, procedures, and expenditures with cost-benefit analyses.
This is me on that issue:
An even more meaningful response to any of these issues would be to perform a cost-benefit analysis. These sorts of analyses are standard, even with regard to rare risks, but the TSA (and, in fact, the whole Department of Homeland Security) has never conducted them on any of its programmes or technologies. It’s incredible but true: the TSA does not analyse whether the security measures it deploys are worth deploying. In 2010, the National Academies of Science wrote a pretty damning report on this topic.
Filling in where the TSA and the DHS have left a void, academics have performed some cost-benefit analyses on specific airline-security measures. The results are pretty much what you would expect: the security benefits of most post-9/11 security changes do not justify the costs.
More on security cost-benefit analyses here and here. It’s not going to magically dismantle the security-industrial complex, eliminate the culture of fear, or imbue our elected officials with common sense—but it’s a start.
EDITED TO ADD (7/13): A rebuttal to my essay. It’s too insulting to respond directly to, but there are points worth debating.
Remember my rebuttal of Sam Harris’s essay advocating the profiling of Muslims at airports? That wasn’t the end of it. Harris and I conducted a back-and-forth e-mail discussion, the results of which are here. At 14,000+ words, I only recommend it for the most stalwart of readers.
Although the plot was disrupted before a particular airline was targeted and tickets were purchased, al Qaeda’s continued attempts to attack the U.S. speak to the organization’s persistence and willingness to refine specific approaches to killing. Unlike Abdulmutallab’s bomb, the new device contained lead azide, an explosive often used as a detonator. If the new underwear bomb had been used, the bomber would have ignited the lead azide, which would have triggered a more powerful explosive, possibly military-grade explosive pentaerythritol tetranitrate (PETN).
Lead azide and PETN were key components in a 2010 plan to detonate two bombs sent from Yemen and bound for Chicago—one in a cargo aircraft and the other in the cargo hold of a passenger aircraft. In that plot, al-Qaeda hid bombs in printer cartridges, allowing them to slip past cargo handlers and airport screeners. Both bombs contained far more explosive material than the 80 grams of PETN that Abdulmutallab smuggled onto his Northwest Airlines flight.
With the latest device, al Asiri appears to have been able to improve on the underwear bomb supplied to Abdulmutallab, says Joan Neuhaus Schaan, a fellow in homeland security and terrorism for Rice University’s James A. Baker III Institute for Public Policy.
The interview is also interesting, and I am especially pleased to see this last answer:
What has been the most effective means of disrupting terrorism attacks?
As with bombs that were being sent from Yemen to Chicago as cargo, this latest plot was discovered using human intelligence rather than screening procedures and technologies. These plans were disrupted because of proactive mechanisms put in place to stop terrorism rather than defensive approaches such as screening.
According to a report from the DHS Office of Inspector General:
Federal investigators “identified vulnerabilities in the screening process” at domestic airports using so-called “full body scanners,” according to a classified internal Department of Homeland Security report.
EPIC obtained an unclassified version of the report in a FOIA response. Here’s the summary.
Why do otherwise rational people think it’s a good idea to profile people at airports? Recently, neuroscientist and best-selling author Sam Harris related a story of an elderly couple being given the twice-over by the TSA, pointed out how these two were obviously not a threat, and recommended that the TSA focus on the actual threat: “Muslims, or anyone who looks like he or she could conceivably be Muslim.”
This is a bad idea. It doesn’t make us any safer—and it actually puts us all at risk.
The right way to look at security is in terms of cost-benefit trade-offs. If adding profiling to airport checkpoints allowed us to detect more threats at a lower cost, then we should implement it. If it didn’t, we’d be foolish to do so. Sometimes profiling works. Consider a sheep in a meadow, happily munching on grass. When he spies a wolf, he’s going to judge that individual wolf based on a bunch of assumptions related to the past behavior of its species. In short, that sheep is going to profile…and then run away. This makes perfect sense, and is why evolution produced sheep—and other animals—that react this way. But this sort of profiling doesn’t work with humans at airports, for several reasons.
First, in the sheep’s case the profile is accurate, in that all wolves are out to eat sheep. Maybe a particular wolf isn’t hungry at the moment, but enough wolves are hungry enough of the time to justify the occasional false alarm. However, it isn’t true that almost all Muslims are out to blow up airplanes. In fact, almost none of them are. Post-9/11, we’ve had 2 Muslim terrorists on U.S. airplanes: the shoe bomber and the underwear bomber. If you assume 0.8% (that’s one estimate of the percentage of Muslim Americans) of the 630 million annual airplane fliers are Muslim and triple it to account for others who look Semitic, then the chances any profiled flier will be a Muslim terrorist are 1 in 80 million. Add the 19 9/11 terrorists—arguably a singular event—and that number drops to 1 in 8 million. Either way, because the number of actual terrorists is so low, almost everyone selected by the profile will be innocent. This is called the “base rate fallacy,” and it dooms any type of broad terrorist profiling, including the TSA’s behavioral profiling.
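The back-of-the-envelope arithmetic above can be checked in a few lines. The post-9/11 window of roughly a decade is my assumption, chosen to match the essay’s figures; it is a sketch, not an exact calculation:

```python
# Sketch of the base-rate arithmetic in the paragraph above.
fliers_per_year = 630_000_000       # annual U.S. airplane fliers (essay's figure)
profiled_fraction = 0.008 * 3       # 0.8% Muslim estimate, tripled for "looks Semitic"
years = 10.5                        # assumed post-9/11 span at time of writing

profiled = fliers_per_year * profiled_fraction * years  # total profiled fliers

odds_two = profiled / 2     # 2 actual terrorists (shoe and underwear bombers)
odds_all = profiled / 21    # including the 19 9/11 hijackers

print(f"1 in {odds_two:,.0f}")  # roughly 1 in 80 million
print(f"1 in {odds_all:,.0f}")  # roughly 1 in 8 million
```

Whatever reasonable values you plug in, the conclusion is the same: the denominator is enormous, so nearly every profiled flier is innocent.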
Second, sheep can safely ignore animals that don’t look like the few predators they know. On the other hand, to assume that only Arab-appearing people are terrorists is dangerously naive. Muslims are black, white, Asian, and everything else—most Muslims are not Arab. Recent terrorists have been European, Asian, African, Hispanic, and Middle Eastern; male and female; young and old. Underwear bomber Umar Farouk Abdulmutallab was Nigerian. Shoe bomber Richard Reid was British with a Jamaican father. One of the London subway bombers, Germaine Lindsay, was Afro-Caribbean. Dirty bomb suspect Jose Padilla was Hispanic-American. The 2002 Bali terrorists were Indonesian. Both Timothy McVeigh and the Unabomber were white Americans. The Chechen terrorists who blew up two Russian planes in 2004 were female. Focusing on a profile increases the risk that TSA agents will miss those who don’t match it.
Third, wolves can’t deliberately try to evade the profile. A wolf in sheep’s clothing is just a story, but humans are smart and adaptable enough to put the concept into practice. Once the TSA establishes a profile, terrorists will take steps to avoid it. The Chechens deliberately chose female suicide bombers because Russian security was less thorough with women. Al Qaeda has tried to recruit non-Muslims. And terrorists have given bombs to innocent—and innocent-looking—travelers. Randomized secondary screening is more effective, especially since the goal isn’t to catch every plot but to create enough uncertainty that terrorists don’t even try.
And fourth, sheep don’t care if they offend innocent wolves; the two species are never going to be friends. At airports, though, there is an enormous social and political cost to the millions of false alarms. Beyond the societal harms of deliberately harassing a minority group, singling out Muslims alienates the very people who are in the best position to discover and alert authorities about Muslim plots before the terrorists even get to the airport. This alone is reason enough not to profile.
I too am incensed—but not surprised—when the TSA singles out four-year-old girls, children with cerebral palsy, pretty women, the elderly, and wheelchair users for humiliation, abuse, and sometimes theft. Any bureaucracy that processes 630 million people per year will generate stories like this. When people propose profiling, they are really asking for a security system that can apply judgment. Unfortunately, that’s really hard. Rules are easier to explain and train. Zero tolerance is easier to justify and defend. Judgment requires better-educated, more expert, and much-higher-paid screeners. And the personal career risks to a TSA agent of being wrong when exercising judgment far outweigh any benefits from being sensible.
The proper reaction to screening horror stories isn’t to subject only “those people” to it; it’s to subject no one to it. (Can anyone even explain what hypothetical terrorist plot could successfully evade normal security, but would be discovered during secondary screening?) Invasive TSA screening is nothing more than security theater. It doesn’t make us safer, and it’s not worth the cost. Even more strongly, security isn’t our society’s only value. Do we really want the full power of government to act out our stereotypes and prejudices? Have we Americans ever done something like this and not been ashamed later? This is what we have a Constitution for: to help us live up to our values and not down to our fears.
This essay previously appeared on Forbes.com and Sam Harris’s blog.
We don’t know much, but here are my predictions:
I’ve often written about the base rate fallacy and how it makes tests for rare events—like airplane terrorists—useless because the false positives vastly outnumber the real positives. This essay uses that argument to demonstrate why the TSA’s FAST program is useless:
First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, it turns out that this is an incredibly difficult thing to do. Here is why: let’s assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are actually probably much, much rarer, or we would have a whole lot more acts of terrorism, given the daily throughput of the global transportation system. Now let’s imagine the FAST algorithm correctly classifies 99.99 percent of observations—an incredibly high rate of accuracy for any big data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse 99 people of being terrorists for every one terrorist it finds. Given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty might be a non-trivial and invasive task.
Of course FAST has nowhere near a 99.99 percent accuracy rate. I imagine much of the work being done here is classified, but a writeup in Nature reported that the first round of field tests had a 70 percent accuracy rate. From the available material it is difficult to determine exactly what this number means. There are a couple of ways to interpret this, since both the write-up and the DHS documentation (all pdfs) are unclear. This might mean that the current iteration of FAST correctly classifies 70 percent of people it observes—which would produce false positives at an abysmal rate, given the rarity of terrorists in the population. The other way of interpreting this reported result is that FAST will call a terrorist a terrorist 70 percent of the time. This second option tells us nothing about the rate of false positives, but it would likely be quite high. In either case, it is likely that the false-positive paradox would be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
It’s that final sentence in the first quoted paragraph that really points to how bad this idea is. If FAST determines you are guilty of a crime you have not yet committed, how do you exonerate yourself?
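The arithmetic in the quoted paragraph is easy to verify. This sketch uses the quote’s own assumed numbers (a base rate of 1 in 1,000,000 and 99.99 percent accuracy):

```python
# False-positive paradox, using the quoted essay's assumed numbers.
population = 1_000_000
base_rate = 1 / 1_000_000      # assumed: 1 terrorist per million travelers
accuracy = 0.9999              # FAST classifies 99.99% of observations correctly

terrorists = population * base_rate            # 1 actual terrorist
innocents = population - terrorists            # 999,999 innocents
false_positives = innocents * (1 - accuracy)   # innocents wrongly flagged
true_positives = terrorists * accuracy         # terrorists correctly flagged

# Roughly 100 innocents flagged for every terrorist caught.
print(false_positives / true_positives)
```

Drop the accuracy to the 70 percent reported in the Nature write-up and the ratio gets catastrophically worse: the real positives vanish into a sea of false alarms.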