DHS Has a Blog
The U.S. Department of Homeland Security has a blog. I don’t know if it will be as interesting or entertaining as the TSA’s blog.
The eighth, and final, session of SHB09 was optimistically titled “How Do We Fix the World?” I moderated, which meant that my liveblogging was spottier than usual, especially during the discussion section.
David Mandel, Defense Research and Development Canada (suggested reading: Applied Behavioral Science in Support of Intelligence Analysis, Radicalization: What does it mean?; The Role of Instigators in Radicalization to Violent Extremism), is part of the Thinking, Risk, and Intelligence Group at DRDC Toronto. His first observation: “Be wary of purported world-fixers.” His second observation: when you claim that something is broken, it is important to specify the respects in which it’s broken and what fixed looks like. His third observation: it is also important to analyze the consequences of any potential fix. An analysis of the way things are is perceptually based, but an analysis of the way things should be is value-based. He also presented data showing that predictions made by intelligence analysts (at least in one Canadian organization) were pretty good.
Ross Anderson, Cambridge University (suggested reading: Database State; book chapters on psychology and terror), asked “Where’s the equilibrium?” Both privacy and security are moving targets, but he expects that someday soon there will be a societal equilibrium. Incentives to price discriminate go up, and the cost to do so goes down. He gave several examples of database systems that reached very different equilibrium points, depending on corporate lobbying, political realities, public outrage, etc. He believes that privacy will be regulated, the only question being when and how. “Where will the privacy boundary end up, and why? How can we nudge it one way or another?”
Alma Whitten, Google (suggested reading: Why Johnny can’t encrypt: A usability evaluation of PGP 5.0), presented a set of ideals about privacy (very European in character) and some of the engineering challenges they present. “Engineering challenge #1: How to support access and control to personal data that isn’t authenticated? Engineering challenge #2: How to inform users about both authenticated and unauthenticated data? Engineering challenge #3: How to balance giving users control over data collection versus detecting and stopping abuse? Engineering challenge #4: How to give users fine-grained control over their data without overwhelming them with options? Engineering challenge #5: How to link sequential actions while preventing them from being linkable to a person? Engineering challenge #6: How to make the benefits of aggregate data analysis apparent to users? Engineering challenge #7: How to avoid or detect inadvertent recording of data that can be linked to an individual?” (Note that Alma requested not to be recorded.)
John Mueller, Ohio State University (suggested reading: Reacting to Terrorism: Probabilities, Consequences, and the Persistence of Fear; Evaluating Measures to Protect the Homeland from Terrorism; Terrorphobia: Our False Sense of Insecurity), talked about terrorism and the Department of Homeland Security. Terrorism isn’t a threat; it’s a problem and a concern, certainly, but the word “threat” is still extreme. Al Qaeda isn’t a threat, and they’re the most serious potential attacker against the U.S. and Western Europe. And terrorists are overwhelmingly stupid. Meanwhile, the terrorism issue “has become a self-licking ice cream cone.” In other words, it’s now an ever-perpetuating government bureaucracy. There are a virtually infinite number of targets; the odds of any one target being attacked are effectively zero; terrorists pick targets largely at random; if you protect one target, it makes other targets less safe; most targets are vulnerable in the physical sense, but invulnerable in the sense that they can be rebuilt relatively cheaply (even something like the Pentagon); some targets simply can’t be protected; if you’re going to protect some targets, you need to determine if they should really be protected. (I recommend his book, Overblown.)
Adam Shostack, Microsoft (his blog), pointed out that even the problem of figuring out what part of the problem to work on first is difficult. One of the issues is shame. We don’t want to talk about what’s wrong, so we can’t use that information to determine where we want to go. We make excuses—customers will flee, people will sue, stock prices will go down—even though we know that those excuses have been demonstrated to be false.
During the discussion, there was a lot of talk about the choice between informing users and bombarding them with information they can’t understand. And lots more that I couldn’t transcribe.
And that’s it. SHB09 was a fantastic workshop, filled with interesting people and interesting discussion. Next year in the other Cambridge.
Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.
I’m very happy with this quote in a CNN.com story on “whole-body imaging” at airports:
Bruce Schneier, an internationally recognized security technologist, said whole-body imaging technology “works pretty well,” privacy rights aside. But he thinks the financial investment was a mistake. In a post-9/11 world, he said, he knows his position isn’t “politically tenable,” but he believes money would be better spent on intelligence-gathering and investigations.
“It’s stupid to spend money so terrorists can change plans,” he said by phone from Poland, where he was speaking at a conference. If terrorists are swayed from going through airports, they’ll just target other locations, such as a hotel in Mumbai, India, he said.
“We’d be much better off going after bad guys … and back to pre-9/11 levels of airport security,” he said. “There’s a huge ‘cover your ass’ factor in politics, but unfortunately, it doesn’t make us safer.”
I’ve written about “cover your ass” security in the past, but it’s nice to see it in the press.
Anyone interested?
General Dynamics Information Technology put out an ad last month on behalf of the Homeland Security Department seeking someone who could “think like the bad guy.” Applicants, it said, must understand hackers’ tools and tactics and be able to analyze Internet traffic and identify vulnerabilities in the federal systems.
In the Pentagon’s budget request submitted last week, Defense Secretary Robert Gates said the Pentagon will increase the number of cyberexperts it can train each year from 80 to 250 by 2011.
An employee of Whole Foods in Ann Arbor, Michigan, was fired in 2007 for apprehending a shoplifter. More specifically, he was fired for touching a customer, even though that customer had a backpack filled with stolen groceries and was running away with them.
I regularly see security decisions that, like the Whole Foods incident, seem to make absolutely no sense. However, in every case, the decisions actually make perfect sense once you understand the underlying incentives driving the decision. All security decisions are trade-offs, but the motivations behind them are not always obvious: They’re often subjective, and driven by external incentives. And often security trade-offs are made for nonsecurity reasons.
Almost certainly, Whole Foods has a no-touching-the-customer policy because its attorneys recommended it. “No touching” is a security measure as well, but it’s security against customer lawsuits. The cost of these lawsuits would be much, much greater than the $346 worth of groceries stolen in this instance. Even applied to suspected shoplifters, the policy makes sense: The cost of a lawsuit resulting from tackling an innocent shopper by mistake would be far greater than the cost of letting actual shoplifters get away. As perverse as it may seem, the result is completely reasonable given the corporate incentives—Whole Foods wrote a corporate policy that benefited itself.
At least, it works as long as the police and other factors keep society’s shoplifter population down to a reasonable level.
Incentives explain much that is perplexing about security trade-offs. Why does King County, Washington, require one form of ID to get a concealed-carry permit, but two forms of ID to pay for the permit by check? Making a mistake on a gun permit is an abstract problem, but a bad check actually costs some department money.
In the decades before 9/11, why did the airlines fight every security measure except the photo-ID check? Increased security annoys their customers, but the photo-ID check solved a security problem of a different kind: the resale of nonrefundable tickets. So the airlines were on board for that one.
And why does the TSA confiscate liquids at airport security, on the off chance that a terrorist will try to make a liquid explosive instead of using the more common solid ones? Because the officials in charge of the decision used CYA security measures to prevent specific, known tactics rather than broad, general ones.
The same misplaced incentives explain the ongoing problem of innocent prisoners spending years in places like Guantanamo and Abu Ghraib. The solution might seem obvious: Release the innocent ones, keep the guilty ones, and figure out whether the ones we aren’t sure about are innocent or guilty. But the incentives are more perverse than that. Who is going to sign the order releasing one of those prisoners? Which military officer is going to accept the risk, no matter how small, of being wrong?
I read almost five years ago that prisoners were being held by the United States far longer than they should have been, because “no one wanted to be responsible for releasing the next Osama bin Laden.” That incentive to do nothing hasn’t changed. It might have even gotten stronger, as these innocents languish in prison.
In all these cases, the best way to change the trade-off is to change the incentives. Look at why the Whole Foods case works. Store employees don’t have to apprehend shoplifters, because society created a special organization specifically authorized to lay hands on people the grocery store points to as shoplifters: the police. If we want more rationality out of the TSA, there needs to be someone with a broader perspective willing to deal with general threats rather than specific targets or tactics.
For prisoners, society has created a special organization specifically entrusted with the role of judging the evidence against them and releasing them if appropriate: the judiciary. It’s only because the George W. Bush administration decided to remove the Guantanamo prisoners from the legal system that we are now stuck with these perverse incentives. Our country would be smart to move as many of these people through the court system as we can.
This essay originally appeared on Wired.com.
Last Saturday I was interviewed on Paul Harris’s Chicago radio show.
Someone did the analysis:
As will be analyzed below, it is estimated that the costs of the no-fly list, since 2002, range from approximately $300 million (a conservative estimate) to $966 million (an estimate on the high end). Using those figures as low and high potentials, a reasonable estimate is that the U.S. government has spent over $500 million on the project since the September 11, 2001 terrorist attacks. Using annual data, this article suggests that the list costs taxpayers somewhere between $50 million and $161 million a year, with a reasonable compromise of those figures at approximately $100 million.
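The article’s per-year figures follow from simple division of the cumulative totals. As a rough check, the sketch below assumes a six-year window (2002 through 2008), which is the span that reproduces the quoted annual range; the window itself is my assumption, since the excerpt only says “since 2002”:

```python
# Back-of-envelope check of the no-fly-list cost figures quoted above.
# The six-year window (2002-2008) is an assumption on my part; the
# article itself only says "since 2002."
low_total = 300e6    # conservative cumulative estimate, in dollars
high_total = 966e6   # high-end cumulative estimate

years = 6            # assumed span: 2002 through 2008

low_annual = low_total / years    # $50 million per year
high_annual = high_total / years  # $161 million per year

# Midpoint of the annual range, near the article's ~$100 million "compromise"
midpoint = (low_annual + high_annual) / 2
```

The division recovers the article’s $50 million and $161 million annual bounds exactly, which suggests the authors were annualizing over six years.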
Excellent article:
The same elements of psychology lead people to exaggerate the likelihood of terrorist attacks: Images of terrifying but highly unusual catastrophes on television—such as the World Trade Center collapsing—are far more memorable than images of more mundane and more prevalent threats, like dying in car crashes. Psychologists call this the “availability heuristic,” in which people estimate the probability of something occurring based on how easy it is to bring examples of the event to mind.
As a result of this psychological bias, large numbers of Americans have overestimated the probability of future terrorist strikes: In a poll conducted a few weeks after September 11, respondents saw a 20 percent chance that they would be personally harmed in a terrorist attack within the next year and nearly a 50 percent chance that the average American would be harmed. Those alarmist predictions, thankfully, proved to be wrong; in fact, since September 11, international terrorism has killed only a few hundred people per year around the globe, as John Mueller points out in Overblown. At the current rates, Mueller argues, the lifetime probability of any resident of the globe being killed by terrorism is just one in 80,000.
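Mueller’s one-in-80,000 figure is easy to reproduce as a back-of-envelope calculation. The inputs below (annual death toll, world population, lifespan) are illustrative round numbers I’ve assumed, not figures from the article:

```python
# Rough reconstruction of Mueller's lifetime-risk estimate.
# All three inputs are assumed round numbers for illustration only.
world_population = 6.5e9   # approximate mid-2000s world population
annual_deaths = 1000       # international terrorism deaths per year, order of magnitude
lifespan = 78              # assumed average lifespan in years

# Expected terrorism deaths over one lifetime, worldwide
lifetime_deaths = annual_deaths * lifespan   # 78,000

# Odds that a given person is among them: roughly 1 in 83,000,
# in the same ballpark as Mueller's "one in 80,000"
odds = world_population / lifetime_deaths
```

The point of the exercise is that even generous inputs put the individual risk several orders of magnitude below everyday hazards like car crashes.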
This public anxiety is the central reason for both the creation of DHS and its subsequent emphasis on showy prevention measures, which Schneier calls a form of “security theater.” But that raises a question: Even if DHS doesn’t actually make us safer, could its existence still be justified if reducing the public’s fears leads to tangible economic benefits? “If the public’s response is based on irrational, emotional fears, it may be reasonable for the government to do things that make us feel better, even if those don’t make us safer in a rational sense, because if they feel better, people will fly on planes and behave in a way that’s good for the economy,” Tierney told me. But the psychological impact of DHS still has to be subject to cost-benefit analysis: On balance, is the government actually calming people rather than making them more nervous? Tierney argues convincingly that the same public fears that encourage government officials to spend money on flashy preventive measures also encourage them to exaggerate the terrorist threat. “It’s very difficult for a government official to come out and say anything like, ‘Let’s put this threat in perspective,'” he told me. “If they were to do so, and there isn’t a terrorist attack, they get no credit; and, if there is, that’s the end of their career.” Of course, no government official feels this pressure more acutely than the head of homeland security. And so, even as DHS seeks to tamp down public fears with expensive and often wasteful preventive measures, it may also be encouraging those fears—which, in turn, creates ever more public demand for spending on prevention.
Michael Chertoff’s public comments about terrorism embody this dilemma: Despite his laudable efforts to speak soberly and responsibly about terrorism—and to argue that there are many kinds of attacks we simply can’t prevent—the incentives associated with his job have led him at times to increase, rather than diminish, public anxiety. Last March he declared that, “if we don’t recognize the struggle we are in as a significant existential struggle, then it is going to be very hard to maintain the focus.” If nuclear attacks aren’t likely and smaller events aren’t existential threats, I asked, why did he say the war on terrorism is a “significant existential struggle”? “To me, existential is a threat that shakes the core of a society’s confidence and causes a significant and long-lasting line of damage to the country,” he replied. But it would take a series of weekly Virginia Tech-style shootings or London-style subway bombings to shake the core of American confidence; and Al Qaeda hasn’t come close to mustering that frequency of low-level attacks in any Western democracy since September 11. “Terrorism kills a certain number of people, and so do forest fires,” Mueller told me. “If terrorism is merely killing certain numbers of people, then it’s not an existential threat, and money is better spent on smoke alarms or forcing people to wear seat belts instead of chasing terrorists.”
I missed this interview with DHS Secretary Michael Chertoff from December. It’s all worth reading, but I want to point out where he claims that airplane hijackings were routine prior to 9/11:
What I can tell you is that in the period prior to September 12, 2001, it was a regular, routine issue to have American aircraft hijacked or blown up from time to time, whether it was Lockerbie or TSA or TWA 857 [I believe he meant TWA 847 – Joel] or 9/11 itself. And we haven’t had even a serious attempt at a hijacking or bombing on an American plane since then.
BoingBoing provides the actual facts:
According to Airsafe.com, the last flight previous to 9/11 to be hijacked with fatalities from an American destination was a Pacific Southwest Airlines flight on December 7th, 1987. “Lockerbie” refers to Pan Am Flight 103 which was destroyed by a bomb over Scotland after departing from London Heathrow International Airport on its way to JFK, with screening done—as now—by an organization other than the TSA. TWA Flight 847 departed from Athens (Ellinikon) International Airport, also not under TSA oversight.
While Wikipedia’s list of aircraft hijackings may not be comprehensive—I cannot find a complete list from the FAA, which does not seem to list hijackings, including 9/11, in its Accidents & Incidents Data—the last incident of an American flight being hijacked was in 1994, when FedEx Flight 705 was hijacked by a disgruntled employee.
The implication that hijacking or bombing of American airline flights is a regular occurrence is not borne out by history, nor does it follow that increased screening by the TSA at airports has prevented more attacks since 9/11.