Entries Tagged "DHS"


Fixing Airport Security

It’s been months since the Transportation Security Administration has had a permanent director. If, during the job interview (no, I didn’t get one), President Obama asked me how I’d fix airport security in one sentence, I would reply: “Get rid of the photo ID check, and return passenger screening to pre-9/11 levels.”

Okay, that’s a joke. While showing ID, taking your shoes off, and throwing away your water bottles aren’t making us much safer, I don’t expect the Obama administration to roll back those security measures anytime soon. Airport security is more about CYA than anything else: defending against what the terrorists did last time.

But the administration can’t risk appearing as if it facilitated a terrorist attack, no matter how remote the possibility, so those annoyances are probably here to stay.

This would be my real answer: “Establish accountability and transparency for airport screening.” And if I had another sentence: “Airports are one of the places where Americans, and visitors to America, are most likely to interact with a law enforcement officer – and yet no one knows what rights travelers have or how to exercise those rights.”

Obama has repeatedly talked about increasing openness and transparency in government, and it’s time to bring transparency to the Transportation Security Administration (TSA).

Let’s start with the no-fly and watch lists. Right now, everything about them is secret: You can’t find out if you’re on one, or who put you there and why, and you can’t clear your name if you’re innocent. This Kafkaesque scenario is so un-American it’s embarrassing. Obama should make the no-fly list subject to judicial review.

Then, move on to the checkpoints themselves. What are our rights? What powers do the TSA officers have? If we’re asked “friendly” questions by behavioral detection officers, are we allowed not to answer? If we object to the rough handling of ourselves or our belongings, can the TSA official retaliate against us by putting us on a watch list? Obama should make the rules clear and explicit, and allow people to bring legal action against the TSA for violating those rules; otherwise, airport checkpoints will remain a Constitution-free zone in our country.

Next, Obama should refuse to use unfunded mandates to sneak expensive security measures past Congress. The Secure Flight program is the worst offender. Airlines are being forced to spend billions of dollars redesigning their reservations systems to accommodate the TSA’s demands to preapprove every passenger before he or she is allowed to board an airplane. These costs are borne by us, in the form of higher ticket prices, even though we never see them explicitly listed.

Maybe Secure Flight is a good use of our money; maybe it isn’t. But let’s have that debate in the open, as part of the budget process, where it belongs.

And finally, Obama should mandate that airport security be solely about terrorism, and not a general-purpose security checkpoint to catch everyone from pot smokers to deadbeat dads.

The Constitution provides us, both Americans and visitors to America, with strong protections against invasive police searches. Two exceptions come into play at airport security checkpoints. The first is “implied consent,” which means that you cannot refuse to be searched; your consent was implied when you purchased your ticket. And the second is “plain view,” which means that if the TSA officer happens to see something unrelated to airport security while screening you, he is allowed to act on that.

Both of these principles are well established and make sense, but it’s their combination that turns airport security checkpoints into police-state-like checkpoints.

The TSA should limit its searches to bombs and weapons and leave general policing to the police – where we know courts and the Constitution still apply.

None of these changes will make airports any less safe, but they will go a long way to de-ratcheting the culture of fear, restoring the presumption of innocence and reassuring Americans, and the rest of the world, that – as Obama said in his inauguration speech – “we reject as false the choice between our safety and our ideals.”

This essay originally appeared, without hyperlinks, in the New York Daily News.

Posted on June 24, 2009 at 6:40 AM

Imagining Threats

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it “embarrassing.” I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats—which contributes to overall fear and overestimation of the risks. And that doesn’t help keep us safe at all.

Recently, I read a paper by Magne Jørgensen that provides some insight into why this is so. Titled “More Risk Analysis Can Lead to Increased Over-Optimism and Over-Confidence,” the paper isn’t about terrorism at all. It’s about software projects.

Most software development project plans are overly optimistic, and most planners are overconfident about their overoptimistic plans. Jørgensen studied how risk analysis affected this. He conducted four separate experiments on software engineers, and concluded (though there are lots of caveats in the paper, and more research needs to be done) that performing more risk analysis can make engineers more overoptimistic instead of more realistic.

Potential explanations all come from behavioral economics: cognitive biases that affect how we think and make decisions. (I’ve written about some of these biases and how they affect security decisions, and there’s a great book on the topic as well.)

First, there’s a control bias. We tend to underestimate risks in situations where we are in control, and overestimate risks in situations where we are not in control. Driving versus flying is a common example. This bias becomes stronger with familiarity, involvement and a desire to experience control, all of which increase with increased risk analysis. So the more risk analysis, the greater the control bias, and the greater the underestimation of risk.

The second explanation is the availability heuristic. Basically, we judge the importance or likelihood of something happening by the ease of bringing instances of that thing to mind. So we tend to overestimate the probability of a rare risk that is seen in a news headline, because it is so easy to imagine. Likewise, we underestimate the probability of things occurring that don’t happen to be in the news.

A corollary of this phenomenon is that, if we’re asked to think about a series of things, we overestimate the probability of the last thing thought about because it’s more easily remembered.

According to Jørgensen’s reasoning, people tend to do software risk analysis by thinking of the severe risks first, and then the more manageable risks. So the more risk analysis that’s done, the less severe the last risk imagined, and thus the greater the underestimation of the total risk.

The third explanation is similar: the peak-end rule. When thinking about a total experience, people tend to place too much weight on the last part of the experience. In one experiment, people had to hold their hands under cold water for one minute. Then, they had to hold their hands under cold water for one minute again, then keep their hands in the water for an additional 30 seconds while the temperature was gradually raised. When asked about it afterwards, most people preferred the second option to the first, even though the second had more total discomfort. (An intrusive medical device was redesigned along these lines, resulting in a longer period of discomfort but a relatively comfortable final few seconds. People liked it a lot better.) This means, like the second explanation, that the least severe last risk imagined gets greater weight than it deserves.

Fascinating stuff. But the biases produce the reverse effect when it comes to movie-plot threats. The more you think about far-fetched terrorism possibilities, the more outlandish and scary they become, and the less control you think you have. This causes us to overestimate the risks.

Think about this in the context of terrorism. If you’re asked to come up with threats, you’ll think of the significant ones first. If you’re pushed to find more, if you hire science-fiction writers to dream them up, you’ll quickly get into the low-probability movie-plot threats. But since they’re the last ones generated, they’re more available. (They’re also more vivid—science fiction writers are good at that—which also leads us to overestimate their probability.) They also suggest we’re even less in control of the situation than we believed. Spending too much time imagining disaster scenarios leads people to overestimate the risks of disaster.

I’m sure there’s also an anchoring effect in operation. This is another cognitive bias, where people’s numerical estimates of things are affected by numbers they’ve most recently thought about, even random ones. People who are given a list of three risks will think the total number of risks is lower than people who are given a list of 12 risks. So if the science fiction writers come up with 137 risks, people will believe that the number of risks is higher than they otherwise would—even if they recognize the 137 number is absurd.

Jørgensen does not believe risk analysis is useless in software projects, and I don’t believe scenario brainstorming is useless in counterterrorism. Both can lead to new insights and, as a result, a more intelligent analysis of both specific risks and general risk. But an over-reliance on either can be detrimental.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.

This essay originally appeared on Wired.com.

Posted on June 19, 2009 at 6:49 AM

Second SHB Workshop Liveblogging (9)

The eighth, and final, session of SHB09 was optimistically titled “How Do We Fix the World?” I moderated, which meant that my liveblogging was more spotty, especially in the discussion section.

David Mandel, Defense Research and Development Canada (suggested reading: Applied Behavioral Science in Support of Intelligence Analysis; Radicalization: What does it mean?; The Role of Instigators in Radicalization to Violent Extremism), is part of the Thinking, Risk, and Intelligence Group at DRDC Toronto. His first observation: “Be wary of purported world-fixers.” His second observation: when you claim that something is broken, it is important to specify the respects in which it’s broken and what fixed looks like. His third observation: it is also important to analyze the consequences of any potential fix. An analysis of the way things are is perceptually based, but an analysis of the way things should be is value-based. He also presented data showing that predictions made by intelligence analysts (at least in one Canadian organization) were pretty good.

Ross Anderson, Cambridge University (suggested reading: Database State; book chapters on psychology and terror), asked “Where’s the equilibrium?” Both privacy and security are moving targets, but he expects that someday soon there will be a societal equilibrium. Incentives to price discriminate go up, and the cost to do so goes down. He gave several examples of database systems that reached very different equilibrium points, depending on corporate lobbying, political realities, public outrage, etc. He believes that privacy will be regulated, the only question being when and how. “Where will the privacy boundary end up, and why? How can we nudge it one way or another?”

Alma Whitten, Google (suggested reading: Why Johnny can’t encrypt: A usability evaluation of PGP 5.0), presented a set of ideals about privacy (very European-like) and some of the engineering challenges they present. “Engineering challenge #1: How to support access to and control of personal data that isn’t authenticated? Engineering challenge #2: How to inform users about both authenticated and unauthenticated data? Engineering challenge #3: How to balance giving users control over data collection versus detecting and stopping abuse? Engineering challenge #4: How to give users fine-grained control over their data without overwhelming them with options? Engineering challenge #5: How to link sequential actions while preventing them from being linkable to a person? Engineering challenge #6: How to make the benefits of aggregate data analysis apparent to users? Engineering challenge #7: How to avoid or detect inadvertent recording of data that can be linked to an individual?” (Note that Alma requested not to be recorded.)

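Challenge #5 in particular has a familiar engineering pattern behind it: keyed pseudonymization, where a server-side secret turns an identifier into a stable pseudonym, so sequential actions can be linked to each other without the stored records being linkable to the person. Here is a minimal sketch of the idea in Python; it is my illustration, not anything Whitten described, and the function name, key-rotation scheme, and parameters are all assumptions:

```python
import hmac
import hashlib

def pseudonym(user_id: str, secret_key: bytes, epoch: str) -> str:
    """Derive a stable pseudonym for user_id within one key epoch.

    Records stored under the same pseudonym are linkable to each
    other, but without the server-side secret they cannot be linked
    back to user_id; rotating the epoch label (or the key itself)
    breaks linkability across time windows.
    """
    message = f"{epoch}:{user_id}".encode()
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# Two actions in the same epoch share a pseudonym and are linkable...
key = b"server-side secret, kept out of the data store"
a = pseudonym("alice@example.com", key, epoch="2009-06")
b = pseudonym("alice@example.com", key, epoch="2009-06")
assert a == b

# ...but linkability ends when the epoch rotates.
c = pseudonym("alice@example.com", key, epoch="2009-07")
assert a != c
```

Even this toy shows the tension running through the list: the key that makes sequential actions linkable enough for abuse detection is also the thing that, if mishandled, re-links everything to the person.
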
John Mueller, Ohio State University (suggested reading: Reacting to Terrorism: Probabilities, Consequences, and the Persistence of Fear; Evaluating Measures to Protect the Homeland from Terrorism; Terrorphobia: Our False Sense of Insecurity), talked about terrorism and the Department of Homeland Security. Terrorism isn’t a threat; it’s a problem and a concern, certainly, but the word “threat” is still extreme. Al Qaeda isn’t a threat, even though they’re the most serious potential attacker against the U.S. and Western Europe. And terrorists are overwhelmingly stupid. Meanwhile, the terrorism issue “has become a self-licking ice cream cone.” In other words, it’s now an ever-perpetuating government bureaucracy. There is a virtually infinite number of targets; the odds of any one target being attacked are effectively zero; terrorists pick targets largely at random; if you protect one target, you make other targets less safe; most targets are vulnerable in the physical sense, but invulnerable in the sense that they can be rebuilt relatively cheaply (even something like the Pentagon); some targets simply can’t be protected; if you’re going to protect some targets, you need to determine if they should really be protected. (I recommend his book, Overblown.)

Adam Shostack, Microsoft (his blog), pointed out that even the problem of figuring out what part of the problem to work on first is difficult. One of the issues is shame. We don’t want to talk about what’s wrong, so we can’t use that information to determine where we want to go. We make excuses—customers will flee, people will sue, stock prices will go down—even though we know that those excuses have been demonstrated to be false.

During the discussion, there was a lot of talk about the choice between informing users and bombarding them with information they can’t understand. And lots more that I couldn’t transcribe.

And that’s it. SHB09 was a fantastic workshop, filled with interesting people and interesting discussion. Next year in the other Cambridge.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 4:55 PM

Obama’s Cybersecurity Speech

I am optimistic about President Obama’s new cybersecurity policy and the appointment of a new “cybersecurity coordinator,” though much depends on the details. What we do know is that the threats are real, from identity theft to Chinese hacking to cyberwar.

His principles were all welcome—securing government networks, coordinating responses, working to secure the infrastructure in private hands (the power grid, the communications networks, and so on)—although I think he’s overly optimistic that legislation won’t be required. I was especially heartened to hear his commitment to funding research. Much of the technology we currently use to secure cyberspace was developed from university research, and the more of it we finance today the more secure we’ll be in a decade.

Education is also vital, although sometimes I think my parents need more cybersecurity education than my grandchildren do. I also appreciate the president’s commitment to transparency and privacy, both of which are vital for security.

But the details matter. Centralizing security responsibilities has the downside of making security more brittle by instituting a single approach and a uniformity of thinking. Unless the new coordinator distributes responsibility, cybersecurity won’t improve.

As the administration moves forward on the plan, two principles should apply. One, security decisions need to be made as close to the problem as possible. Protecting networks should be done by people who understand those networks, and threats need to be assessed by people close to the threats. But distributed responsibility has more risk, so oversight is vital.

Two, security coordination needs to happen at the highest level possible, whether that’s evaluating information about different threats, responding to an Internet worm or establishing guidelines for protecting personal information. The whole picture is larger than any single agency.

This essay originally appeared on The New York Times website, along with several others commenting on Obama’s speech. All the essays are worth reading, although I want to specifically quote James Bamford making an important point I’ve repeatedly made:

The history of White House czars is not a glorious one, as anyone who has followed the rise and fall of the drug czars can tell. There is a lot of hype, a White House speech, and then things go back to normal. Power, the ability to cause change, depends primarily on who controls the money and who is closest to the president’s ear.

Because the new cyber czar will have neither a checkbook nor direct access to President Obama, the role will be more analogous to a traffic cop than a czar.

Gus Hosein wrote a good essay on the need for privacy:

Of course raising barriers around computer systems is certainly a good start. But when these systems are breached, our personal information is left vulnerable. Yet governments and companies are collecting more and more of our information.

The presumption should be that all data collected is vulnerable to abuse or theft. We should therefore collect only what is absolutely required.

As I said, they’re all worth reading. And here are some more links.

I wrote something similar in 2002 about the creation of the Department of Homeland Security:

The human body defends itself through overlapping security systems. It has a complex immune system specifically to fight disease, but disease fighting is also distributed throughout every organ and every cell. The body has all sorts of security systems, ranging from your skin to keep harmful things out of your body, to your liver filtering harmful things from your bloodstream, to the defenses in your digestive system. These systems all do their own thing in their own way. They overlap each other, and to a certain extent one can compensate when another fails. It might seem redundant and inefficient, but it’s more robust, reliable, and secure. You’re alive and reading this because of it.

EDITED TO ADD (6/2): Gene Spafford’s opinion.

EDITED TO ADD (6/4): Good commentary from Bob Blakley.

Posted on May 29, 2009 at 3:01 PM

Me on Full-Body Scanners in Airports

I’m very happy with this quote in a CNN.com story on “whole-body imaging” at airports:

Bruce Schneier, an internationally recognized security technologist, said whole-body imaging technology “works pretty well,” privacy rights aside. But he thinks the financial investment was a mistake. In a post-9/11 world, he said, he knows his position isn’t “politically tenable,” but he believes money would be better spent on intelligence-gathering and investigations.

“It’s stupid to spend money so terrorists can change plans,” he said by phone from Poland, where he was speaking at a conference. If terrorists are swayed from going through airports, they’ll just target other locations, such as a hotel in Mumbai, India, he said.

“We’d be much better off going after bad guys … and back to pre-9/11 levels of airport security,” he said. “There’s a huge ‘cover your ass’ factor in politics, but unfortunately, it doesn’t make us safer.”

I’ve written about “cover your ass” security in the past, but it’s nice to see it in the press.

Posted on May 20, 2009 at 2:34 PM

DHS Recruitment Drive

Anyone interested?

General Dynamics Information Technology put out an ad last month on behalf of the Homeland Security Department seeking someone who could “think like the bad guy.” Applicants, it said, must understand hackers’ tools and tactics and be able to analyze Internet traffic and identify vulnerabilities in the federal systems.

In the Pentagon’s budget request submitted last week, Defense Secretary Robert Gates said the Pentagon will increase the number of cyberexperts it can train each year from 80 to 250 by 2011.

Posted on April 21, 2009 at 6:25 AM

Perverse Security Incentives

An employee of Whole Foods in Ann Arbor, Michigan, was fired in 2007 for apprehending a shoplifter. More specifically, he was fired for touching a customer, even though that customer had a backpack filled with stolen groceries and was running away with them.

I regularly see security decisions that, like the Whole Foods incident, seem to make absolutely no sense. However, in every case, the decisions actually make perfect sense once you understand the underlying incentives driving the decision. All security decisions are trade-offs, but the motivations behind them are not always obvious: They’re often subjective, and driven by external incentives. And often security trade-offs are made for nonsecurity reasons.

Almost certainly, Whole Foods has a no-touching-the-customer policy because its attorneys recommended it. “No touching” is a security measure as well, but it’s security against customer lawsuits. The cost of these lawsuits would be much, much greater than the $346 worth of groceries stolen in this instance. Even applied to suspected shoplifters, the policy makes sense: The cost of a lawsuit resulting from tackling an innocent shopper by mistake would be far greater than the cost of letting actual shoplifters get away. As perverse as it may seem, the result is completely reasonable given the corporate incentives—Whole Foods wrote a corporate policy that benefited itself.

At least, it works as long as the police and other factors keep society’s shoplifter population down to a reasonable level.

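The underlying trade-off is simple expected-value arithmetic. Here is a toy calculation, with the caveat that every number except the $346 from the incident above is invented for illustration:

```python
# All figures except the $346 are hypothetical, for illustration only.
stolen_goods = 346.00        # loss if the shoplifter gets away
lawsuit_cost = 250_000.00    # assumed cost of losing one wrongful-tackle suit
p_innocent = 0.02            # assumed chance a "shoplifter" is innocent

expected_cost_of_tackling = p_innocent * lawsuit_cost  # $5,000
expected_cost_of_letting_go = stolen_goods             # $346

# Even a 2% error rate makes apprehension roughly 14x more expensive
# in expectation, so "never touch the customer" is the cheaper policy.
print(expected_cost_of_tackling > expected_cost_of_letting_go)  # True
```

On assumptions anything like these, the corporate policy is not perverse at all; it only looks perverse if you forget whose costs are being minimized.
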
Incentives explain much that is perplexing about security trade-offs. Why does King County, Washington, require one form of ID to get a concealed-carry permit, but two forms of ID to pay for the permit by check? Making a mistake on a gun permit is an abstract problem, but a bad check actually costs some department money.

In the decades before 9/11, why did the airlines fight every security measure except the photo-ID check? Increased security annoys their customers, but the photo-ID check solved a security problem of a different kind: the resale of nonrefundable tickets. So the airlines were on board for that one.

And why does the TSA confiscate liquids at airport security, on the off chance that a terrorist will try to make a liquid explosive instead of using the more common solid ones? Because the officials in charge of the decision used CYA security measures to prevent specific, known tactics rather than broad, general ones.

The same misplaced incentives explain the ongoing problem of innocent prisoners spending years in places like Guantanamo and Abu Ghraib. The solution might seem obvious: Release the innocent ones, keep the guilty ones, and figure out whether the ones we aren’t sure about are innocent or guilty. But the incentives are more perverse than that. Who is going to sign the order releasing one of those prisoners? Which military officer is going to accept the risk, no matter how small, of being wrong?

I read almost five years ago that prisoners were being held by the United States far longer than they should be, because “no one wanted to be responsible for releasing the next Osama bin Laden.” That incentive to do nothing hasn’t changed. It might have even gotten stronger, as these innocents languish in prison.

In all these cases, the best way to change the trade-off is to change the incentives. Look at why the Whole Foods case works. Store employees don’t have to apprehend shoplifters, because society created a special organization specifically authorized to lay hands on people the grocery store points to as shoplifters: the police. If we want more rationality out of the TSA, there needs to be someone with a broader perspective willing to deal with general threats rather than specific targets or tactics.

For prisoners, society has created a special organization specifically entrusted with the role of judging the evidence against them and releasing them if appropriate: the judiciary. It’s only because the George W. Bush administration decided to remove the Guantanamo prisoners from the legal system that we are now stuck with these perverse incentives. Our country would be smart to move as many of these people through the court system as we can.

This essay originally appeared on Wired.com.

Posted on March 2, 2009 at 7:10 AM

Jeffrey Rosen on the Department of Homeland Security

Excellent article:

The same elements of psychology lead people to exaggerate the likelihood of terrorist attacks: Images of terrifying but highly unusual catastrophes on television—such as the World Trade Center collapsing—are far more memorable than images of more mundane and more prevalent threats, like dying in car crashes. Psychologists call this the “availability heuristic,” in which people estimate the probability of something occurring based on how easy it is to bring examples of the event to mind.

As a result of this psychological bias, large numbers of Americans have overestimated the probability of future terrorist strikes: In a poll conducted a few weeks after September 11, respondents saw a 20 percent chance that they would be personally harmed in a terrorist attack within the next year and nearly a 50 percent chance that the average American would be harmed. Those alarmist predictions, thankfully, proved to be wrong; in fact, since September 11, international terrorism has killed only a few hundred people per year around the globe, as John Mueller points out in Overblown. At the current rates, Mueller argues, the lifetime probability of any resident of the globe being killed by terrorism is just one in 80,000.

This public anxiety is the central reason for both the creation of DHS and its subsequent emphasis on showy prevention measures, which Schneier calls a form of “security theater.” But that raises a question: Even if DHS doesn’t actually make us safer, could its existence still be justified if reducing the public’s fears leads to tangible economic benefits? “If the public’s response is based on irrational, emotional fears, it may be reasonable for the government to do things that make us feel better, even if those don’t make us safer in a rational sense, because if they feel better, people will fly on planes and behave in a way that’s good for the economy,” Tierney told me. But the psychological impact of DHS still has to be subject to cost-benefit analysis: On balance, is the government actually calming people rather than making them more nervous? Tierney argues convincingly that the same public fears that encourage government officials to spend money on flashy preventive measures also encourage them to exaggerate the terrorist threat. “It’s very difficult for a government official to come out and say anything like, ‘Let’s put this threat in perspective,'” he told me. “If they were to do so, and there isn’t a terrorist attack, they get no credit; and, if there is, that’s the end of their career.” Of course, no government official feels this pressure more acutely than the head of homeland security. And so, even as DHS seeks to tamp down public fears with expensive and often wasteful preventive measures, it may also be encouraging those fears—which, in turn, creates ever more public demand for spending on prevention.

Michael Chertoff’s public comments about terrorism embody this dilemma: Despite his laudable efforts to speak soberly and responsibly about terrorism—and to argue that there are many kinds of attacks we simply can’t prevent—the incentives associated with his job have led him at times to increase, rather than diminish, public anxiety. Last March he declared that, “if we don’t recognize the struggle we are in as a significant existential struggle, then it is going to be very hard to maintain the focus.” If nuclear attacks aren’t likely and smaller events aren’t existential threats, I asked, why did he say the war on terrorism is a “significant existential struggle”? “To me, existential is a threat that shakes the core of a society’s confidence and causes a significant and long-lasting line of damage to the country,” he replied. But it would take a series of weekly Virginia Tech-style shootings or London-style subway bombings to shake the core of American confidence; and Al Qaeda hasn’t come close to mustering that frequency of low-level attacks in any Western democracy since September 11. “Terrorism kills a certain number of people, and so do forest fires,” Mueller told me. “If terrorism is merely killing certain numbers of people, then it’s not an existential threat, and money is better spent on smoke alarms or forcing people to wear seat belts instead of chasing terrorists.”

Posted on January 30, 2009 at 11:38 AM

