Entries Tagged "intelligence"


Post-Underwear-Bomber Airport Security

In the headlong rush to “fix” security after the Underwear Bomber’s unsuccessful Christmas Day attack, there’s been far too little discussion about what worked and what didn’t, and what will and will not make us safer in the future.

The security checkpoints worked. Because we screen for obvious bombs, Umar Farouk Abdulmutallab—or, more precisely, whoever built the bomb—had to construct a far less reliable bomb than he would have otherwise. Instead of using a timer or a plunger or a reliable detonation mechanism, as would any commercial user of PETN, he had to resort to an ad hoc and much more inefficient homebrew mechanism: one involving a syringe and 20 minutes in the lavatory and we don’t know exactly what else. And it didn’t work.

Yes, the Amsterdam screeners allowed Abdulmutallab onto the plane with PETN sewn into his underwear, but that’s not a failure, either. There is no security checkpoint, run by any government anywhere in the world, designed to catch this. It isn’t a new threat; it’s more than a decade old. Nor is it unexpected; anyone who says otherwise simply isn’t paying attention. But PETN is hard to explode, as we saw on Christmas Day.

Additionally, the passengers on the airplane worked. For years, I’ve said that exactly two things have made us safer since 9/11: reinforcing the cockpit door and convincing passengers that they need to fight back. It was the second of these that, on Christmas Day, quickly subdued Abdulmutallab after he set his pants on fire.

To the extent security failed, it failed before Abdulmutallab even got to the airport. Why was he issued an American visa? Why didn’t anyone follow up on his father’s tip? While I’m sure there are things to be improved and fixed, remember that everything is obvious in hindsight. After the fact, it’s easy to point to the bits of evidence and claim that someone should have “connected the dots.” But before the fact, when there are millions of dots—some important but the vast majority unimportant—uncovering plots is a lot harder.

Despite this, the proposed fixes focus on the details of the plot rather than the broad threat. We’re going to install full-body scanners, even though there are lots of ways to hide PETN—stuff it in a body cavity, spread it thinly on a garment—from the machines. We’re going to profile people traveling from 14 countries, even though it’s easy for a terrorist to travel from a different country. The requirement that passengers remain seated during the last hour of flight was the most ridiculous example.

The problem with all these measures is that they’re only effective if we guess the plot correctly. Defending against a particular tactic or target makes sense if tactics and targets are few. But there are hundreds of tactics and millions of targets, so all these measures will do is force the terrorists to make a minor modification to their plot.

It’s magical thinking: If we defend against what the terrorists did last time, we’ll somehow defend against what they do next time. Of course this doesn’t work. We take away guns and bombs, so the terrorists use box cutters. We take away box cutters and corkscrews, and the terrorists hide explosives in their shoes. We screen shoes, they use liquids. We limit liquids, they sew PETN into their underwear. We implement full-body scanners, and they’re going to do something else. This is a stupid game; we should stop playing it.

But we can’t help it. As a species, we’re hardwired to fear specific stories—terrorists with PETN underwear, terrorists on subways, terrorists with crop dusters—and we want to feel secure against those stories. So we implement security theater against the stories, while ignoring the broad threats.

What we need is security that’s effective even if we can’t guess the next plot: intelligence, investigation, and emergency response. Our foiling of the liquid bombers demonstrates this. They were arrested in London, before they got to the airport. It didn’t matter if they were using liquids—which they chose precisely because we weren’t screening for them—or solids or powders. It didn’t matter if they were targeting airplanes or shopping malls or crowded movie theaters. They were arrested, and the plot was foiled. That’s effective security.

Finally, we need to be indomitable. The real security failure on Christmas Day was in our reaction. We’re reacting out of fear, wasting money on the story rather than securing ourselves against the threat. Abdulmutallab succeeded in causing terror even though his attack failed.

If we refuse to be terrorized, if we refuse to implement security theater and remember that we can never completely eliminate the risk of terrorism, then the terrorists fail even if their attacks succeed.

This essay previously appeared on Sphere, the AOL.com news site.

EDITED TO ADD (1/8): Similar sentiment.

Posted on January 7, 2010 at 1:18 PM

Another Contest: Fixing Airport Security

Slate is hosting an airport security suggestions contest: ideas “for making airport security more effective, more efficient, or more pleasant.” Deadline is midday Friday.

I had already submitted a suggestion before I was asked to be a judge. Since I’m no longer eligible, here’s what I sent them:

Reduce the TSA’s budget, and spend the money on:

1. Intelligence. Security measures that focus on specific tactics or targets are a waste of money unless we guess the next attack correctly. Security measures that just force the terrorists to make a minor change in their tactics or targets are not money well spent.

2. Investigation. Since the terrorists deliberately choose plots that we’re not looking for, the best security is to stop plots before they get to the airport. Remember the arrest of the London liquid bombers.

3. Emergency response. Terrorism’s harm depends more on our reactions to attacks than the attacks themselves. We’re naturally resilient, but how we respond in those first hours and days is critical.

And as an added bonus, all of these measures protect us against non-airplane terrorism as well. All we have to do is stop focusing on specific movie plots, and start thinking about the overall threat.

Probably not what they were looking for, and certainly not anything the government is even going to remotely consider—but the smart solution all the same.

Posted on January 7, 2010 at 10:53 AM

David Brooks on Resilience in the Face of Security Imperfection

David Brooks makes some very good points in this New York Times op-ed from last week:

All this money and technology seems to have reduced the risk of future attack. But, of course, the system is bound to fail sometimes. Reality is unpredictable, and no amount of computer technology is going to change that. Bureaucracies are always blind because they convert the rich flow of personalities and events into crude notations that can be filed and collated. Human institutions are always going to miss crucial clues because the information in the universe is infinite and events do not conform to algorithmic regularity.

[…]

In a mature nation, President Obama could go on TV and say, “Listen, we’re doing the best we can, but some terrorists are bound to get through.” But this is apparently a country that must be spoken to in childish ways. The original line out of the White House was that the system worked. Don’t worry, little Johnny.

When that didn’t work the official line went to the other extreme. “I consider that totally unacceptable,” Obama said. I’m really mad, Johnny. But don’t worry, I’ll make it all better.

[…]

For better or worse, over the past 50 years we have concentrated authority in centralized agencies and reduced the role of decentralized citizen action. We’ve done this in many spheres of life. Maybe that’s wise, maybe it’s not. But we shouldn’t imagine that these centralized institutions are going to work perfectly or even well most of the time. It would be nice if we reacted to their inevitable failures not with rabid denunciation and cynicism, but with a little resiliency, an awareness that human systems fail and bad things will happen and we don’t have to lose our heads every time they do.

There’s a pervasive belief in this society that perfection is possible. So if something bad occurs, it can never be because we just got unlucky. It must be because something went wrong and someone is at fault, and therefore things must be fixed. Sometimes, though, this simply isn’t true. Sometimes it’s better not to fix things: either there is no fix, or the fix is more expensive than living with the problem, or the side effects of the fix are worse than the problem. And sometimes you can do everything right and have it still turn out wrong. Welcome to the real world.

EDITED TO ADD (1/8): Glenn Greenwald on “The Degrading Effects of Terrorism Fears.”

Posted on January 6, 2010 at 10:27 AM

Intercepting Predator Video

Sometimes mediocre encryption is better than strong encryption, and sometimes no encryption is better still.

The Wall Street Journal reported this week that Iraqi, and possibly also Afghan, militants are using commercial software to eavesdrop on U.S. Predators, other unmanned aerial vehicles, or UAVs, and even piloted planes. The systems weren’t “hacked”—the insurgents can’t control them—but because the downlink is unencrypted, they can watch the same video stream as the coalition troops on the ground.

The naive reaction is to ridicule the military. Encryption is so easy that HDTVs do it—just a software routine and you’re done—and the Pentagon has known about this flaw since Bosnia in the 1990s. But encrypting the data is the easiest part; key management is the hard part. Each UAV needs to share a key with the ground station. These keys have to be produced, guarded, transported, used and then destroyed. And the equipment, both the Predators and the ground terminals, needs to be classified and controlled, and all the users need security clearance.

The command and control channel is, and always has been, encrypted—because that’s both more important and easier to manage. UAVs are flown by airmen sitting at comfortable desks on U.S. military bases, where key management is simpler. But the video feed is different. It needs to be available to all sorts of people, of varying nationalities and security clearances, on a variety of field terminals, in a variety of geographical areas, in all sorts of conditions—with everything constantly changing. Key management in this environment would be a nightmare.
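The encryption step really is just a software routine. The sketch below is an illustration only—not the actual Predator datalink, and a toy HMAC-based keystream rather than a vetted cipher like AES-GCM—but it shows where the difficulty actually lives: the one line where a pre-placed shared key appears. That key is what would have to be produced, transported, guarded, and eventually destroyed for the UAV and every field terminal that needs to watch the feed.

```python
import hashlib
import hmac

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream derived from HMAC-SHA256
    # (illustration only; a real system would use a vetted cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def xor_with_keystream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# The easy part: a few lines of code per video frame.
# The hard part: this key must somehow already exist on the UAV *and*
# on every terminal, of every nationality, in every field location.
shared_key = b"pre-placed key material"
frame = b"one video frame"
ciphertext = xor_with_keystream(shared_key, b"frame-0001", frame)
assert xor_with_keystream(shared_key, b"frame-0001", ciphertext) == frame
```

The code is trivial; distributing `shared_key` to everyone who legitimately needs the video, and to no one else, is the nightmare the essay describes.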

Additionally, how valuable is this video downlink to the enemy? The primary fear seems to be that the militants watch the video, notice their compound being surveilled and flee before the missiles hit. Or notice a bunch of Marines walking through a recognizable area and attack them. This might make a great movie scene, but it’s not very realistic. Without context, and just by peeking at random video streams, the risk caused by eavesdropping is low.

Contrast this with the additional risks if you encrypt: A soldier in the field doesn’t have access to the real-time video because of a key management failure; a UAV can’t be quickly deployed to a new area because the keys aren’t in place; we can’t share the video information with our allies because we can’t give them the keys; most soldiers can’t use this technology because they don’t have the right clearances. Given this risk analysis, not encrypting the video is almost certainly the right decision.

There is another option, though. During the Cold War, the NSA’s primary adversary was Soviet intelligence, and it developed its crypto solutions accordingly. Even though that level of security makes no sense in Bosnia, and certainly not in Iraq and Afghanistan, it is what the NSA had to offer. If you encrypt, they said, you have to do it “right.”

The problem is, the world has changed. Today’s insurgent adversaries don’t have KGB-level intelligence gathering or cryptanalytic capabilities. At the same time, computer and network data gathering has become much cheaper and easier, so they have technical capabilities the Soviets could only dream of. Defending against these sorts of adversaries doesn’t require military-grade encryption only where it counts; it requires commercial-grade encryption everywhere possible.

This sort of solution would require the NSA to develop a whole new level of lightweight commercial-grade security systems for military applications—not just office-data “Sensitive but Unclassified” or “For Official Use Only” classifications. It would require the NSA to allow keys to be handed to uncleared UAV operators, and perhaps read over insecure phone lines and stored in people’s back pockets. It would require the sort of ad hoc key management systems you find in internet protocols, or in DRM systems. It wouldn’t be anywhere near perfect, but it would be more commensurate with the actual threats.

And it would help defend against a completely different threat facing the Pentagon: The PR threat. Regardless of whether the people responsible made the right security decision when they rushed the Predator into production, or when they convinced themselves that local adversaries wouldn’t know how to exploit it, or when they forgot to update their Bosnia-era threat analysis to account for advances in technology, the story is now being played out in the press. The Pentagon is getting beaten up because it’s not protecting against the threat—because it’s easy to make a sound bite where the threat sounds really dire. And now it has to defend against the perceived threat to the troops, regardless of whether the defense actually protects the troops or not. Reminds me of the TSA, actually.

So the military is now committed to encrypting the video … eventually. The next generation Predators, called Reapers—Who names this stuff? Second-grade boys?—will have the same weakness. Maybe we’ll have encrypted video by 2010, or 2014, but I don’t think that’s even remotely possible unless the NSA relaxes its key management and classification requirements and embraces a lightweight, less secure encryption solution for these sorts of situations. The real failure here is the failure of the Cold War security model to deal with today’s threats.

This essay originally appeared on Wired.com.

EDITED TO ADD (12/24): Good article from The New Yorker on the uses—and politics—of these UAVs.

EDITED TO ADD (12/30): Error corrected—”uncleared UAV operators” should have read “uncleared UAV viewers.” The point is that the operators in the U.S. are cleared and their communications are encrypted, but the viewers in Asia are uncleared and the data is unencrypted.

Posted on December 24, 2009 at 5:24 AM

TSA Publishes Standard Operating Procedures

BoingBoing is pretty snarky:

The TSA has published a “redacted” version of their s00per s33kr1t screening procedure guidelines (Want to know whether to frisk a CIA operative at the checkpoint? Now you can!). Unfortunately, the security geniuses at the DHS don’t know that drawing black blocks over the words you want to eliminate from your PDF doesn’t actually make the words go away, and can be defeated by nefarious al Qaeda operatives through a complex technique known as ctrl-a/ctrl-c/ctrl-v. Thankfully, only the most elite terrorists would be capable of matching wits with the technology brilliance on display at the agency charged with defending our nation’s skies by ensuring that imaginary hair-gel bombs are kept off of airplanes.
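The failure BoingBoing is mocking is easy to demonstrate. An overlay “redaction” adds a filled rectangle on top of the text in the PDF content stream; it doesn’t delete the text-showing operators underneath. A minimal sketch (a hand-written toy content stream, not the real TSA document):

```python
import re

# A toy PDF content stream: the text is drawn first, then a black
# rectangle is painted over it. Visually the words are covered, but
# the "(...) Tj" text-showing operator remains in the file.
content_stream = b"""
BT /F1 12 Tf 72 700 Td (SENSITIVE: screening exemptions) Tj ET
0 0 0 rg 70 695 300 14 re f
"""

# "Redaction" by overlay is defeated by trivial extraction -- the
# programmatic equivalent of select-all / copy / paste in a viewer.
extracted = re.findall(rb"\((.*?)\)\s*Tj", content_stream)
print(extracted)  # [b'SENSITIVE: screening exemptions']
```

Real redaction tools remove the text objects themselves before flattening the document; anything short of that leaves the words recoverable.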

TSA is launching a “full review” to determine how this could have happened. I’ll save them the effort: someone screwed up.

In a statement Tuesday night, the TSA sought to minimize the impact of the unintentional release—calling the document “outdated,” “unclassified” and unimplemented—while saying that it took the incident “very seriously,” and “took swift action” when it was discovered.

Yeah, right.

The original link to the document is dead, but here’s the unredacted document.

I’ve skimmed it, and haven’t found anything terribly interesting. Here’s what Wired.com noticed:

One of the redacted sections, for example, indicates that an armed law enforcement officer in or out of uniform may pass beyond the checkpoint without screening after providing a U.S. government-issued photo ID and “Notice of LEO Flying Armed Document.”

Some commercial airline pilots receive training by the U.S. Marshals Service and are allowed to carry TSA-issued firearms on planes. They can pass through without screening only after presenting “bonafide credentials and aircraft operator photo ID,” the document says.

Foreign dignitaries equivalent to cabinet rank and above, accompanying a spouse, their children under the age of 12, and a State Department escort are exempt from screening.

There are also references to a CIA program called WOMAP, the Worldwide Operational Meet and Assist Program. As part of WOMAP, foreign dignitaries and their escorts—authorized CIA representatives—are exempt from screening, provided they’re approved in advance by TSA’s Office of Intelligence.

Passengers carrying passports from Cuba, Iran, North Korea, Libya, Syria, Sudan, Afghanistan, Lebanon, Somalia, Iraq, Yemen or Algeria are to be designated for selective screening.

Although only a few portions of the document were redacted, the manual contains other tidbits that weren’t redacted, such as a thorough description of diplomatic pouches that are exempt from screening.

I’m a little bit saddened when we all make a big deal about how dumb people are at redacting digital documents. We’ve had a steady stream of these badly redacted documents, and I don’t want to lose that. I also don’t want agencies deciding not to release documents at all, rather than risk this sort of embarrassment.

EDITED TO ADD (12/10): News:

Five Transportation Security Administration employees have been placed on administrative leave after a sensitive airport security manual was posted on the Internet, the agency announced Wednesday.

EDITED TO ADD (12/12): Did the TSA compromise an intelligence program?

Posted on December 10, 2009 at 6:47 AM

Beyond Security Theater

[I was asked to write this essay for the New Internationalist (n. 427, November 2009, pp. 10–13). It’s nothing I haven’t said before, but I’m pleased with how this essay came together.]

Terrorism is rare, far rarer than many people think. It’s rare because very few people want to commit acts of terrorism, and executing a terrorist plot is much harder than television makes it appear. The best defenses against terrorism are largely invisible: investigation, intelligence, and emergency response. But even these are less effective at keeping us safe than our social and political policies, both at home and abroad. However, our elected leaders don’t think this way: they are far more likely to implement security theater against movie-plot threats.

A movie-plot threat is an overly specific attack scenario. Whether it’s terrorists with crop dusters, terrorists contaminating the milk supply, or terrorists attacking the Olympics, specific stories affect our emotions more intensely than mere data does. Stories are what we fear. It’s not just hypothetical stories: terrorists flying planes into buildings, terrorists with bombs in their shoes or in their water bottles, and terrorists with guns and bombs waging a co-ordinated attack against a city are even scarier movie-plot threats because they actually happened.

Security theater refers to security measures that make people feel more secure without doing anything to actually improve their security. An example: the photo ID checks that have sprung up in office buildings. No-one has ever explained why verifying that someone has a photo ID provides any actual security, but it looks like security to have a uniformed guard-for-hire looking at ID cards. Airport-security examples include the National Guard troops stationed at US airports in the months after 9/11—their guns had no bullets. The US colour-coded system of threat levels, the pervasive harassment of photographers, and the metal detectors that are increasingly common in hotels and office buildings since the Mumbai terrorist attacks, are additional examples.

To be sure, reasonable arguments can be made that some terrorist targets are more attractive than others: aeroplanes because a small bomb can result in the death of everyone aboard, monuments because of their national significance, national events because of television coverage, and transportation because of the numbers of people who commute daily. But there are literally millions of potential targets in any large country (there are five million commercial buildings alone in the US), and hundreds of potential terrorist tactics; it’s impossible to defend every place against everything, and it’s impossible to predict which tactic and target terrorists will try next.

Feeling and Reality

Security is both a feeling and a reality. The propensity for security theater comes from the interplay between the public and its leaders. When people are scared, they need something done that will make them feel safe, even if it doesn’t truly make them safer. Politicians naturally want to do something in response to crisis, even if that something doesn’t make any sense.

Often, this “something” is directly related to the details of a recent event: we confiscate liquids, screen shoes, and ban box cutters on airplanes. But it’s not the target and tactics of the last attack that are important, but the next attack. These measures are only effective if we happen to guess what the next terrorists are planning. If we spend billions defending our rail systems, and the terrorists bomb a shopping mall instead, we’ve wasted our money. If we concentrate airport security on screening shoes and confiscating liquids, and the terrorists hide explosives in their brassieres and use solids, we’ve wasted our money. Terrorists don’t care what they blow up and it shouldn’t be our goal merely to force the terrorists to make a minor change in their tactics or targets.

Our penchant for movie plots blinds us to the broader threats. And security theater consumes resources that could better be spent elsewhere.

Any terrorist attack is a series of events: something like planning, recruiting, funding, practising, executing, aftermath. Our most effective defenses are at the beginning and end of that process—intelligence, investigation, and emergency response—and least effective when they require us to guess the plot correctly. By intelligence and investigation, I don’t mean the broad data-mining or eavesdropping systems that have been proposed and in some cases implemented—those are also movie-plot stories without much basis in actual effectiveness—but instead the traditional “follow the evidence” type of investigation that has worked for decades.

Unfortunately for politicians, the security measures that work are largely invisible. Such measures include enhancing the intelligence-gathering abilities of the secret services, hiring cultural experts and Arabic translators, building bridges with Islamic communities both nationally and internationally, funding police capabilities—both investigative arms to prevent terrorist attacks, and emergency communications systems for after attacks occur—and arresting terrorist plotters without media fanfare. They do not include expansive new police or spying laws. Our police don’t need any new laws to deal with terrorism; rather, they need apolitical funding. These security measures don’t make good television, and they don’t help, come re-election time. But they work, addressing the reality of security instead of the feeling.

The arrest of the “liquid bombers” in London is an example: they were caught through old-fashioned intelligence and police work. Their choice of target (airplanes) and tactic (liquid explosives) didn’t matter; they would have been arrested regardless.

But even as we do all of this we cannot neglect the feeling of security, because it’s how we collectively overcome the psychological damage that terrorism causes. It’s not security theater we need, it’s direct appeals to our feelings. The best way to help people feel secure is by acting secure around them. Instead of reacting to terrorism with fear, we—and our leaders—need to react with indomitability.

Refuse to Be Terrorized

By not overreacting, by not responding to movie-plot threats, and by not becoming defensive, we demonstrate the resilience of our society, in our laws, our culture, our freedoms. There is a difference between indomitability and arrogant “bring ’em on” rhetoric. There’s a difference between accepting the inherent risk that comes with a free and open society, and hyping the threats.

We should treat terrorists like common criminals and give them all the benefits of true and open justice—not merely because it demonstrates our indomitability, but because it makes us all safer. Once a society starts circumventing its own laws, the risks to its future stability are much greater than terrorism.

Supporting real security even though it’s invisible, and demonstrating indomitability even though fear is more politically expedient, requires real courage. Demagoguery is easy. What we need is leaders willing both to do what’s right and to speak the truth.

Despite fearful rhetoric to the contrary, terrorism is not a transcendent threat. A terrorist attack cannot possibly destroy a country’s way of life; it’s only our reaction to that attack that can do that kind of damage. The more we undermine our own laws, the more we convert our buildings into fortresses, the more we reduce the freedoms and liberties at the foundation of our societies, the more we’re doing the terrorists’ job for them.

We saw some of this in the Londoners’ reaction to the 2005 transport bombings. Among the political and media hype and fearmongering, there was a thread of firm resolve. People didn’t fall victim to fear. They rode the trains and buses the next day and continued their lives. Terrorism’s goal isn’t murder; terrorism attacks the mind, using victims as a prop. By refusing to be terrorized, we deny the terrorists their primary weapon: our own fear.

Today, we can project indomitability by rolling back all the fear-based post-9/11 security measures. Our leaders have lost credibility; getting it back requires a decrease in hyperbole. Ditch the invasive mass surveillance systems and new police state-like powers. Return airport security to pre-9/11 levels. Remove swagger from our foreign policies. Show the world that our legal system is up to the challenge of terrorism. Stop telling people to report all suspicious activity; it does little but make us suspicious of each other, increasing both fear and helplessness.

Terrorism has always been rare, and for all we’ve heard about 9/11 changing the world, it’s still rare. Even 9/11 failed to kill as many people as automobiles do in the US every single month. But there’s a pervasive myth that terrorism is easy. It’s easy to imagine terrorist plots, both large-scale “poison the food supply” and small-scale “10 guys with guns and cars.” Movies and television bolster this myth, so many people are surprised that there have been so few attacks in Western cities since 9/11. Certainly intelligence and investigation successes have made it harder, but mostly it’s because terrorist attacks are actually hard. It’s hard to find willing recruits, to co-ordinate plans, and to execute those plans—and it’s easy to make mistakes.

Counterterrorism is also hard, especially when we’re psychologically prone to muck it up. Since 9/11, we’ve embarked on strategies of defending specific targets against specific tactics, overreacting to every terrorist video, stoking fear, demonizing ethnic groups, and treating the terrorists as if they were legitimate military opponents who could actually destroy a country or a way of life—all of this plays into the hands of terrorists. We’d do much better by leveraging the inherent strengths of our modern democracies and the natural advantages we have over the terrorists: our adaptability and survivability, our international network of laws and law enforcement, and the freedoms and liberties that make our society so enviable. The way we live is open enough to make terrorists rare; we are observant enough to prevent most of the terrorist plots that exist, and indomitable enough to survive the even fewer terrorist plots that actually succeed. We don’t need to pretend otherwise.

EDITED TO ADD (11/14): Commentary from Kevin Drum, James Fallows, and The Economist.

Posted on November 13, 2009 at 6:52 AM

FBI/CIA/NSA Information Sharing Before 9/11

It’s conventional wisdom that the legal “wall” between intelligence and law enforcement was one of the reasons we failed to prevent 9/11. The 9/11 Commission evaluated that claim, and published a classified report in 2004. The report was released, with a few redactions, over the summer: “Legal Barriers to Information Sharing: The Erection of a Wall Between Intelligence and Law Enforcement Investigations,” 9/11 Commission Staff Monograph by Barbara A. Grewe, Senior Counsel for Special Projects, August 20, 2004.

The report concludes otherwise:

“The information sharing failures in the summer of 2001 were not the result of legal barriers but of the failure of individuals to understand that the barriers did not apply to the facts at hand,” the 35-page monograph concludes. “Simply put, there was no legal reason why the information could not have been shared.”

The prevailing confusion was exacerbated by numerous complicating circumstances, the monograph explains. The Foreign Intelligence Surveillance Court was growing impatient with the FBI because of repeated errors in applications for surveillance. Justice Department officials were uncomfortable requesting intelligence surveillance of persons and facilities related to Osama bin Laden since there was already a criminal investigation against bin Laden underway, which normally would have preempted FISA surveillance. Officials were reluctant to turn to the FISA Court of Review for clarification of their concerns since one of the judges on the court had expressed doubts about the constitutionality of FISA in the first place. And so on. Although not mentioned in the monograph, it probably didn’t help that public interest critics in the 1990s (myself included) were accusing the FISA Court of serving as a “rubber stamp” and indiscriminately approving requests for intelligence surveillance.

In the end, the monograph implicitly suggests that if the law was not the problem, then changing the law may not be the solution.

James Bamford comes to much the same conclusion in his book, The Shadow Factory: The NSA from 9/11 to the Eavesdropping on America: there was no legal wall that prevented intelligence and law enforcement from sharing the information necessary to prevent 9/11; it was inter-agency rivalries and turf battles.

Posted on November 12, 2009 at 2:26 PM

John Mueller on Zazi

I have refrained from commenting on the case against Najibullah Zazi, simply because it’s so often the case that the details reported in the press have very little to do with reality. My suspicion was that, as in so many other cases, he was an idiot who couldn’t do any real harm and was turned into a bogeyman for political purposes.

However, John Mueller—who I’ve written about before—has done the research:

Recalls his step-uncle affectionately, Zazi is “a dumb kid, believe me.” A high school dropout, Zazi mostly worked as doughnut peddler in Lower Manhattan, barely making a living. Somewhere along the line, it is alleged, he took it into his head to set off a bomb and traveled to Pakistan where he received explosives training from al-Qaeda and copied nine pages of chemical bombmaking instructions onto his laptop. FBI Director Robert Mueller asserted in testimony on September 30 that this training gave Zazi the “capability” to set off a bomb.

That, however, seems to be a substantial overstatement—not unlike the Director’s 2003 testimony assuring us that, although his agency had yet to identify an al-Qaeda cell in the U.S., such unidentified entities nonetheless presented “the greatest threat,” had “developed a support infrastructure” in the country, and were able and intended to inflict “significant casualties in the US with little warning.”

An overstatement because, upon returning to the United States, Zazi allegedly spent the better part of a year trying to concoct the bomb he had supposedly learned how to make. In the process, he, or some confederates, purchased bomb materials using stolen credit cards, a bone-headed maneuver guaranteeing that red flags would go up about the sale and that surveillance videos in the stores would be maintained rather than routinely erased.

However, even with the material at hand, Zazi still apparently couldn’t figure it out, and he frantically contacted an unidentified person for help several times. Each of these communications was “more urgent in tone than the last,” according to court documents.

Clearly, if Zazi was able eventually to bring his alleged aspirations to fruition, he could have done some damage, though, given his capacities, the person most in existential danger was surely the lapsed doughnut peddler himself.

As I said in 2007:

Terrorism is a real threat, and one that needs to be addressed by appropriate means. But allowing ourselves to be terrorized by wannabe terrorists and unrealistic plots—and worse, allowing our essential freedoms to be lost by using them as an excuse—is wrong.

[…]

I’ll be the first to admit that I don’t have all the facts in any of these cases. None of us do. So let’s have some healthy skepticism. Skepticism when we read about these terrorist masterminds who were poised to kill thousands of people and do incalculable damage. Skepticism when we’re told that their arrest proves that we need to give away our own freedoms and liberties. And skepticism that those arrested are even guilty in the first place.

The problem with these arrests is that the crimes have not happened yet. So these cases involve trying to divine what people will do in the future. They involve trying to guess as to people’s motives and abilities. They often involve informants with questionable integrity, and my worry is that in our zeal to prevent terrorism, we create terrorists where there weren’t any to begin with.

Mueller writes:

It follows that any terrorism problem within the United States principally derives from homegrown people like Zazi, often isolated from each other, who fantasize about performing dire deeds. Penn State’s Michael Kenney has interviewed dozens of officials and intelligence agents and analyzed court documents, and finds homegrown Islamic militants to be operationally unsophisticated, short on know-how, prone to make mistakes, poor at planning, and severely hampered by a limited capacity to learn. Another study documents the difficulties of network coordination that continually threaten operational unity, trust, cohesion, and the ability to act collectively. And the popular notion these characters have the capacity to steal or put together an atomic bomb seems, to put it mildly, as fanciful as some of the terrorists’ schemes.

By contrast, the image projected by the Department of Homeland Security continues to be of an enemy that is “relentless, patient, opportunistic, and flexible,” shows “an understanding of the potential consequence of carefully planned attacks on economic, transportation, and symbolic targets,” seriously threatens “national security,” and could inflict “mass casualties, weaken the economy, and damage public morale and confidence.” That description may fit some terrorists—the 9/11 hijackers among them. But not the vast majority, including the hapless Zazi.

EDITED TO ADD (11/9): This is the Michael Kenney paper that Mueller cites.

Posted on November 9, 2009 at 12:15 PM

