Blog: May 2010 Archives

Canada Spending $1B on Security for G8/G20 Summit in June


The Canadian government disclosed Tuesday that the total price tag to police the elite Group of Eight meeting in Muskoka, as well as the bigger-tent Group of 20 summit starting a day later in downtown Toronto, has already climbed to more than $833-million. It said it’s preparing to spend up to $930-million for the three days of meetings that start June 25.

That price tag is more than 20 times the total reported cost for the April 2009 G20 summit in Britain, which the government estimated at $30-million, and seems much higher than security costs at previous summits: the Gleneagles G8 summit in Scotland in 2005 was reported to have spent $110-million on security, while the estimate for the 2008 G8 gathering in Japan was $381-million.

These numbers are crazy. There simply isn’t any justification for this kind of spending.

By comparison, the estimated total cost of security for the 17-day 2010 Winter Olympics in Vancouver was just over $898-million.

Think of all the actual security you can buy for that money.

EDITED TO ADD (6/12): Two links detailing how the money was probably spent. Pittsburgh’s cost, less than a year before, was estimated at $18 million.

EDITED TO ADD (6/28): The total seems to be $1.2B. I haven’t found any breakdown of the spending that differentiates between operational costs and capital improvements. If, for example, the Toronto police all got new radios out of this budget, those radios will continue to provide benefits for the city of Toronto long after the summit. On the other hand, money spent on extra security guards for the week provides no ongoing benefit.

My best quote to the media: “If it really costs this much to secure a meeting of the world’s leaders, maybe they should try video conferencing.”

Posted on May 31, 2010 at 8:58 AM

Friday Squid Blogging: 500-Million-Year-Old Squid

Early squid:

New Canadian research into 500 million-year-old carnivore fossils has revealed an early ancestor of modern-day squids and octopuses, solving the mystery surrounding a previously unclassifiable creature.

“This is significant because it means that primitive cephalopods were around much earlier than we thought, and offers a reinterpretation of the long-held origins of this important group of marine animals,” Martin Smith, University of Toronto and Royal Ontario Museum paleontology PhD student, said in a release.


This was one of those confusing, uninterpretable Cambrian animals, represented by only one poorly preserved specimen. Now, 91 new specimens have been dug up and interpreted, and it makes sense to call it a cephalopod. It has two camera eyes—not arthropod-like compound eyes—on stalks, an axial cavity containing paired gills like the mantles of modern cephalopods, and a flexible siphon opening into that cavity. There are also subtle similarities in the structure of the connective tissue in the lateral fins. Obviously, it has a pair of tentacles; no mouthparts have been preserved, but there are hints in the form of dark deposits between the tentacles, which may be all that’s left of the mouthparts, and are in the right place for a cephalopod ancestor.

Also, this, this, and this. And the paper from Nature.

Posted on May 28, 2010 at 4:52 PM

Another Scene from an Airport

I’ve gotten to the front of the security line at a different airport, and handed a different TSA officer my ID and ticket.

TSA Officer: (Looks everything over. Reads the name on my passport.) The Bruce Schneier?

Me: (Nods, managing not to say: “No no, just a Bruce Schneier; didn’t you hear I come in six-packs?”)

TSA Officer: The security expert?

Me: Yes.

TSA Officer: (Takes off his glove. Offers me his hand to shake.)

Me: (Shakes his hand.)

TSA Officer: I read your stuff all the time.

That’s twice in a row, after years of not being recognized by any TSA officer ever. This is starting to worry me.

Posted on May 28, 2010 at 12:00 PM

Low-Tech Burglars to Get Lighter Sentences in Louisiana

This is the kind of law that annoys me:

A Senate bill to toughen penalties for crimes committed with the aid of Internet-generated “virtual maps,” including acts of terrorism, won quick approval Monday in the House.


Adley’s bill defines a “virtual street-level map” as one that is available on the Internet and can generate the location or picture of a home or building by entering the address of the structure or an individual’s name on a website.

Rep. Henry Burns, R-Haughton, who handled Adley’s bill on the House floor, said that if the map is used in an act of terrorism, the legislation requires a judge to impose an additional minimum sentence of at least 10 years onto the terrorist act.

If the map is used in the commission of a crime like burglary, Burns said, the bill calls for the addition of at least one year in jail to be added to the burglary sentence.

Crimes are crimes, regardless of the ancillary technology used to plan them.

Posted on May 28, 2010 at 6:24 AM

If You See Something, Think Twice About Saying Something

“If you see something, say something.” Or, maybe not:

The Travis County Criminal Justice Center was closed for most of the day on Friday, May 14, after a man reported that a “suspicious package” had been left in the building. The court complex was evacuated, and the APD Explosive Ordnance Disposal Unit was called in for a look-see. The package in question, a backpack, contained paperwork but no explosive device. The building reopened at 1:40pm. The man who reported the suspicious package, Douglas Scott Hoopes, was arrested and charged with making a false report and booked into the jail. The charge is a felony punishable by up to two years in jail.

I don’t think we can have it both ways. We expect people to report anything suspicious—even dumb things—and now we want to press charges if they report something that isn’t an actual threat. Truth is, if you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.

I think this excerpt from a poem by Rick Moranis says it best:

If you see something,
Say something.
If you say something,
Mean something.
If you mean something,
You may have to prove something.
If you can’t prove something,
You may regret saying something.

There’s more.

EDITED TO ADD (5/26): Seems like he left the package himself, and then called it in. So there’s ample reason to arrest him. Never mind.

Posted on May 26, 2010 at 9:16 AM

Scene from an Airport

I’ve gotten to the front of the security line and handed the TSA officer my ID and ticket.

TSA Officer: (Looks at my ticket. Looks at my ID. Looks at me. Smiles.)

Me: (Smiles back.)

TSA Officer: (Looks at my ID. Looks at me. Smiles.)

Me: (Tips hat. Smiles back.)

TSA Officer: A beloved name from the blogosphere.

Me: And I always thought that I slipped through these lines anonymously.

TSA Officer: Don’t worry. No one will notice. This isn’t the sort of job that rewards competence, you know.

Me: Have a good day.

Posted on May 24, 2010 at 2:29 PM

Alerting Users that Applications are Using Cameras, Microphones, Etc.

Interesting research: “What You See is What They Get: Protecting users from unwanted use of microphones, cameras, and other sensors,” by Jon Howell and Stuart Schechter.

Abstract: Sensors such as cameras and microphones collect privacy-sensitive data streams without the user’s explicit action. Conventional sensor access policies either hassle users to grant applications access to sensors or grant with no approval at all. Once access is granted, an application may collect sensor data even after the application’s interface suggests that the sensor is no longer being accessed.

We introduce the sensor-access widget, a graphical user interface element that resides within an application’s display. The widget provides an animated representation of the personal data being collected by its corresponding sensor, calling attention to the application’s attempt to collect the data. The widget indicates whether the sensor data is currently allowed to flow to the application. The widget also acts as a control point through which the user can configure the sensor and grant or deny the application access. By building perpetual disclosure of sensor data collection into the platform, sensor-access widgets enable new access-control policies that relax the tension between the user’s privacy needs and applications’ ease of access.

Apple seems to be taking some steps in this direction with the location sensor disclosure in iPhone 4.0 OS.
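The widget mechanism the abstract describes can be illustrated with a minimal sketch. This is my own construction, not the paper's code: the point is simply that sensor data reaches the application only through the widget, which both gates the flow and updates its own disclosure state with every frame it forwards.

```python
# Minimal sketch (my own illustration, not the paper's implementation) of a
# sensor-access widget: data flows to the app only while access is granted,
# and every forwarded frame also updates the on-screen disclosure counter.

class SensorAccessWidget:
    def __init__(self, sensor_name):
        self.sensor_name = sensor_name
        self.granted = False          # user-controlled toggle in the widget
        self.disclosed_frames = 0     # what the animated indicator would show

    def set_access(self, granted):
        """The widget is the control point for granting or denying access."""
        self.granted = granted

    def deliver(self, frame, app_callback):
        """Forward a sensor frame to the application only if access is granted."""
        if not self.granted:
            return False              # data never reaches the application
        self.disclosed_frames += 1    # perpetual disclosure: indicator updates
        app_callback(frame)
        return True

received = []
widget = SensorAccessWidget("camera")
widget.deliver("frame-1", received.append)   # denied: default is no access
widget.set_access(True)
widget.deliver("frame-2", received.append)   # allowed, and disclosed
print(received, widget.disclosed_frames)     # → ['frame-2'] 1
```

The design choice worth noticing is that disclosure is a side effect of delivery itself, so an application cannot receive data without the indicator reflecting it.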

Posted on May 24, 2010 at 7:32 AM

Applications Disclosing Required Authority

This is an interesting piece of research evaluating different user interface designs by which applications disclose to users what sort of authority they need to install themselves. Given all the recent concerns about third-party access to user data on social networking sites (particularly Facebook), this is particularly timely research.

We have provided evidence of a growing trend among application platforms to disclose, via application installation consent dialogs, the resources and actions that applications will be authorized to perform if installed. To improve the design of these disclosures, we have taken an important first step of testing key design elements. We hope these findings will assist future researchers in creating experiences that leave users feeling better informed and more confident in their installation decisions.

Within the admittedly constrained context of our laboratory study, disclosure design had surprisingly little effect on participants’ ability to absorb and search information. However, the great majority of participants preferred designs that used images or icons to represent resources. This great majority of participants also disliked designs that used paragraphs, the central design element of Facebook’s disclosures, and outlines, the central design element of Android’s disclosures.

Posted on May 21, 2010 at 1:17 PM

Automobile Security Analysis

“Experimental Security Analysis of a Modern Automobile,” by a whole mess of authors:

Abstract: Modern automobiles are no longer mere mechanical devices; they are pervasively monitored and controlled by dozens of digital computers coordinated via internal vehicular networks. While this transformation has driven major advancements in efficiency and safety, it has also introduced a range of new potential risks. In this paper we experimentally evaluate these issues on a modern automobile and demonstrate the fragility of the underlying system structure. We demonstrate that an attacker who is able to infiltrate virtually any Electronic Control Unit (ECU) can leverage this ability to completely circumvent a broad array of safety-critical systems. Over a range of experiments, both in the lab and in road tests, we demonstrate the ability to adversarially control a wide range of automotive functions and completely ignore driver input—including disabling the brakes, selectively braking individual wheels on demand, stopping the engine, and so on. We find that it is possible to bypass rudimentary network security protections within the car, such as maliciously bridging between our car’s two internal subnets. We also present composite attacks that leverage individual weaknesses, including an attack that embeds malicious code in a car’s telematics unit and that will completely erase any evidence of its presence after a crash. Looking forward, we discuss the complex challenges in addressing these vulnerabilities while considering the existing automotive ecosystem.

Posted on May 21, 2010 at 6:56 AM

Detecting Browser History

Interesting research.

Main results:


  • We analyzed the results from over a quarter of a million people who ran our tests in the last few months, and found that we can detect browsing histories for over 76% of them. All major browsers allow their users’ history to be detected, but it seems that users of the more modern browsers such as Safari and Chrome are more affected; we detected visited sites for 82% of Safari users and 94% of Chrome users.


  • While our tests were quite limited, for our test of 5000 most popular websites, we detected an average of 63 visited locations (13 sites and 50 subpages on those sites); the medians were 8 and 17 respectively.
  • Almost 10% of our visitors had over 30 visited sites and 120 subpages detected—heavy Internet users who don’t protect themselves are more affected than others.


  • The ability to detect visitors’ browsing history requires just a few lines of code. Armed with a list of websites to check for, a malicious webmaster can scan over 25 thousand links per second (1.5 million links per minute) in almost every recent browser.
Most websites and pages you view in your browser can be detected as long as they are kept in your history. Almost every address that was in your browser’s address bar can be detected (this includes most pages, including those retrieved using https and some forms with potentially private information such as your zipcode or search query). Pages won’t be detected when they expire from your history (usually after a month or two), or if you manually clear it.
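The mechanism behind these numbers is the long-known CSS :visited trick: inject a link for each candidate URL and read back its computed style, which the browser renders differently for visited links. Here is a hedged sketch of the scanning logic; since the real probe is JavaScript calling getComputedStyle() in a browser, the style lookup is replaced by a stand-in callback so the sketch is self-contained and runnable.

```python
# Hedged sketch of CSS :visited history sniffing. In a real attack, style_of(url)
# would be browser-side JavaScript reading getComputedStyle() on an <a> element
# pointing at url; here it is an injected callback standing in for the browser.

VISITED_COLOR = "rgb(255, 0, 0)"  # assumes the attacker's page styles a:visited red

def detect_visited(urls, style_of):
    """Return the subset of urls whose rendered link color matches the :visited rule."""
    return [url for url in urls if style_of(url) == VISITED_COLOR]

# Simulated browser history for the demo; rgb(0, 0, 238) is the default link blue.
fake_history = {"https://example.com/"}
probe = lambda url: VISITED_COLOR if url in fake_history else "rgb(0, 0, 238)"

print(detect_visited(["https://example.com/", "https://example.org/"], probe))
# → ['https://example.com/']
```

Because each probe is just a style read, scanning tens of thousands of candidate URLs per second is cheap, which is what makes the reported 25,000-links-per-second rate plausible.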

For now, the only way to fix the issue is to constantly clear browsing history or use private browsing modes. The first browser to prevent this trick in a default installation (Firefox 4.0) is supposed to come out in October.

Here’s a link to the paper.

Posted on May 20, 2010 at 1:28 PM

Outsourcing to an Indian Jail

This doesn’t seem like the best idea:

Authorities in the southern Indian state of Andhra Pradesh are planning to set up an outsourcing unit in a jail.

The unit will employ 200 educated convicts who will handle back office operations like data entry, and process and transmit information.

It’s not necessarily a bad idea, as long as misusable information isn’t being handled by the criminals.

The unit, which is expected to undertake back-office work for banks, will work round the clock with three shifts of 70 staff each.

Okay, definitely a bad idea.

Working in the unit will also be financially rewarding for the prisoners.

I’ll bet.

Posted on May 18, 2010 at 7:29 AM

Insect-Based Terrorism

Sounds like fearmongering to me.

How real is the threat? Many of the world’s most dangerous pathogens already are transmitted by arthropods, the animal phylum that includes mosquitoes. But so far the United States has not been exposed to a large-scale spread of vector-borne diseases like Rift Valley, chikungunya fever or Japanese encephalitis. But terrorists with a cursory knowledge of science could potentially release insects carrying these diseases in a state with a tropical climate like Florida’s, according to several experts who will speak at the workshop.

Posted on May 17, 2010 at 1:30 PM

New Windows Attack

It’s still only in the lab, but nothing detects it right now:

The attack is a clever “bait-and-switch” style move. Harmless code is passed to the security software for scanning, but as soon as it’s given the green light, it’s swapped for the malicious code. The attack works even more reliably on multi-core systems because one thread doesn’t keep an eye on other threads that are running simultaneously, making the switch easier.

The attack, called KHOBE (Kernel HOok Bypassing Engine), leverages a Windows module called the System Service Descriptor Table, or SSDT, which is hooked up to the Windows kernel. Unfortunately, SSDT is utilized by antivirus software.
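The "bait-and-switch" described above is a classic time-of-check-to-time-of-use (TOCTOU) race. Here is a minimal sketch of the pattern; this is my own illustration, not KHOBE itself, which performs the swap against SSDT hooks in kernel mode. Events stand in for real scheduling windows so the race resolves the same way every run.

```python
import threading

# Hedged sketch of the TOCTOU bait-and-switch pattern behind KHOBE (my own
# illustration, not the actual exploit). A scanner checks a shared buffer and
# approves it; the system then acts on the buffer contents -- but in between,
# a second thread swaps in the payload.

buffer = {"code": "harmless"}
checked = threading.Event()   # set once the scanner has approved the buffer
swapped = threading.Event()   # set once the attacker has switched the contents
log = []

def scan_then_execute():
    if buffer["code"] == "harmless":                 # time of check
        log.append("approved")
        checked.set()
        swapped.wait()                               # the check-to-use window
        log.append("executed: " + buffer["code"])    # time of use

def attacker():
    checked.wait()                # wait until the scan has passed
    buffer["code"] = "malicious"  # the switch
    swapped.set()

t1 = threading.Thread(target=scan_then_execute)
t2 = threading.Thread(target=attacker)
t1.start(); t2.start(); t1.join(); t2.join()
print(log)   # → ['approved', 'executed: malicious']
```

This also shows why the article says multi-core systems make the attack more reliable: with true parallelism, the attacker thread really does run inside the check-to-use window rather than waiting for a context switch.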

Posted on May 14, 2010 at 11:50 AM

Fifth Annual Movie-Plot Threat Contest Semi-Finalists

On April 1, I announced the Fifth Annual Movie Plot Threat Contest:

Your task, ye Weavers of Tales, is to create a fable or fairytale suitable for instilling the appropriate level of fear in children so they grow up appreciating all the lords do to protect them.

Submissions are in, and here are the semifinalists.

  1. Untitled story about polar bears, by Mike Ferguson.
  2. “The Gashlycrumb Terrors,” by Laura.
  3. Untitled Little Red Riding Hood parody, by Isti.
  4. “The Boy who Didn’t Cry Wolf,” by yt.
  5. Untitled story about exploding imps, by Mister JTA.

Cast your vote by number; voting closes at the end of the month.

Posted on May 14, 2010 at 6:51 AM

Worst-Case Thinking

At a security conference recently, the moderator asked the panel of distinguished cybersecurity leaders what their nightmare scenario was. The answers were the predictable array of large-scale attacks: against our communications infrastructure, against the power grid, against the financial system, in combination with a physical attack.

I didn’t get to give my answer until the afternoon, which was: “My nightmare scenario is that people keep talking about their nightmare scenarios.”

There’s a certain blindness that comes from worst-case thinking. An extension of the precautionary principle, it involves imagining the worst possible outcome and then acting as if it were a certainty. It substitutes imagination for thinking, speculation for risk analysis, and fear for reason. It fosters powerlessness and vulnerability and magnifies social paralysis. And it makes us more vulnerable to the effects of terrorism.

Worst-case thinking means generally bad decision making for several reasons. First, it’s only half of the cost-benefit equation. Every decision has costs and benefits, risks and rewards. By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes.

Second, it’s based on flawed logic. It begs the question by assuming that a proponent of an action must prove that the nightmare scenario is impossible.

Third, it can be used to support any position or its opposite. If we build a nuclear power plant, it could melt down. If we don’t build it, we will run short of power and society will collapse into anarchy. If we allow flights near Iceland’s volcanic ash, planes will crash and people will die. If we don’t, organs won’t arrive in time for transplant operations and people will die. If we don’t invade Iraq, Saddam Hussein might use the nuclear weapons he might have. If we do, we might destabilize the Middle East, leading to widespread violence and death.

Of course, not all fears are equal. Those that we tend to exaggerate are more easily justified by worst-case thinking. So terrorism fears trump privacy fears, and almost everything else; technology is hard to understand and therefore scary; nuclear weapons are worse than conventional weapons; our children need to be protected at all costs; and annihilating the planet is bad. Basically, any fear that would make a good movie plot is amenable to worst-case thinking.

Fourth and finally, worst-case thinking validates ignorance. Instead of focusing on what we know, it focuses on what we don’t know—and what we can imagine.

Remember Defense Secretary Rumsfeld’s quote? “Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.” And this: “the absence of evidence is not evidence of absence.” Ignorance isn’t a cause for doubt; when you can fill that ignorance with imagination, it can be a call to action.

Even worse, it can lead to hasty and dangerous acts. You can’t wait for a smoking gun, so you act as if the gun is about to go off. Rather than making us safer, worst-case thinking has the potential to cause dangerous escalation.

The new undercurrent in this is that our society no longer has the ability to calculate probabilities. Risk assessment is devalued. Probabilistic thinking is repudiated in favor of “possibilistic thinking”: Since we can’t know what’s likely to go wrong, let’s speculate about what can possibly go wrong.

Worst-case thinking leads to bad decisions, bad systems design, and bad security. And we all have direct experience with its effects: airline security and the TSA, which we make fun of when we’re not appalled that they’re harassing 93-year-old women or keeping first graders off airplanes. You can’t be too careful!

Actually, you can. You can refuse to fly because of the possibility of plane crashes. You can lock your children in the house because of the possibility of child predators. You can eschew all contact with people because of the possibility of hurt. Stephen Hawking wants to avoid trying to communicate with aliens because they might be hostile; does he want to turn off all the planet’s television broadcasts because they’re radiating into space? It isn’t hard to parody worst-case thinking, and at its extreme it’s a psychological condition.

Frank Furedi, a sociology professor at the University of Kent, writes: “Worst-case thinking encourages society to adopt fear as one of the dominant principles around which the public, the government and institutions should organize their life. It institutionalizes insecurity and fosters a mood of confusion and powerlessness. Through popularizing the belief that worst cases are normal, it incites people to feel defenseless and vulnerable to a wide range of future threats.”

Even worse, it plays directly into the hands of terrorists, creating a population that is easily terrorized—even by failed terrorist attacks like the Christmas Day underwear bomber and the Times Square SUV bomber.

When someone is proposing a change, the onus should be on them to justify it over the status quo. But worst-case thinking is a way of looking at the world that exaggerates the rare and unusual and gives the rare much more credence than it deserves.

It isn’t really a principle; it’s a cheap trick to justify what you already believe. It lets lazy or biased people make what seem to be cogent arguments without understanding the whole issue. And when people don’t need to refute counterarguments, there’s no point in listening to them.

This essay was originally published elsewhere, although they stripped out all the links.

Posted on May 13, 2010 at 6:53 AM

"If You See Something, Say Something"

That slogan is owned by New York’s Metropolitan Transportation Authority (the MTA).

Since obtaining the trademark in 2007, the authority has granted permission to use the phrase in public awareness campaigns to 54 organizations in the United States and overseas, like Amtrak, the Chicago Transit Authority, the emergency management office at Stony Brook University and three states in Australia.

Of course, you’re only supposed to say something if you see something you think is terrorism:

Some requests have been rejected, including one from a university that wanted to use it to address a series of dormitory burglaries.

“The intent of the slogan is to focus on terrorism activity, not crime, and we felt that use in other spheres would water down its effectiveness,” said Christopher Boylan, an M.T.A. spokesman.

Not that it’s very effective.

The campaign urges people to call a counter-terrorism hot line, 1-888-NYC-SAFE. Police officials said 16,191 calls were received last year, down from 27,127 in 2008.

That’s a lot of wasted manpower, dealing with all those calls.

Of course, the vendors in Times Square who saw the smoking Nissan Pathfinder two weeks ago didn’t call that number.

And, as I’ve written previously, “if you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.” People don’t need to be reminded to call the police; the slogan is nothing more than an invitation to report people who are different.

EDITED TO ADD (5/14): Nice article illustrating how ineffective the campaign is.

Posted on May 12, 2010 at 7:08 AM


SnapScouts

I sure hope this is a parody:

SnapScouts Keep America Safe!

Want to earn tons of cool badges and prizes while competing with your friends to see who can be the best American? Download the SnapScouts app for your Android phone (iPhone app coming soon) and get started patrolling your neighborhood.

It’s up to you to keep America safe! If you see something suspicious, Snap it! If you see someone who doesn’t belong, Snap it! Not sure if someone or something is suspicious? Snap it anyway!

Play with your friends and family to see who can get the best prizes. Join the SnapScouts today!

Posted on May 10, 2010 at 2:11 PM

9/11 Made us Safer?

There’s an essay on the Computerworld website claiming that I implied, and believe, that it did:

OK, so strictly-speaking, he doesn’t use those exact words, but the implication is certainly clear. In a discussion about why there aren’t more terrorist attacks, he argues that ‘minor’ terrorist plots like the Times Square car bomb are counter-productive for terrorist groups, because “9/11 upped the stakes.”

This comes from an essay of mine that discusses why there have been so few terrorist attacks since 9/11. There’s the primary reason—there aren’t very many terrorists out there—and the secondary reason: terrorist attacks are harder to pull off than popular culture leads people to believe. What he’s talking about above is the tertiary reason: terrorist attacks have a secondary purpose of impressing supporters back home, and 9/11 has upped the stakes in what a flashy terrorist attack is supposed to look like.

From there to 9/11 making us safer is quite a leap, and not one that I expected anyone to make. Certainly a series of events, before, during, and after 9/11, contributed to an environment in which a particular group of terrorists found low-budget terrorist attacks less useful—and I suppose by extension we might be safer because of it. But you’d also have to factor in the risks associated with increased police powers, the NSA spying on all of us without warrants, and the increased disregard for the law we’ve seen out of the U.S. government since 9/11. And even so, that’s a far cry from claiming causality that 9/11 made us safer.

Not that any of this really matters. Compared to the real risks in the world, the risk of terrorism is so small that it’s not worth a lot of worry. As John Mueller pointed out, the risks of terrorism “are similar to the risks of using home appliances (200 deaths per year in the United States) or of commercial aviation (103 deaths per year).”

EDITED TO ADD (5/10): A response from Computerworld.

Posted on May 10, 2010 at 6:15 AM

Friday Squid Blogging: The Colossal Squid isn't a Vicious Predator

New research shows that, even though it’s 15 meters long, it’s not the kraken of myth:

Its large size and predatory nature fuelled the ancient myth of the underwater “kraken” sea monster and modern speculation that the colossal squid must be aggressive and fast, attributes that allow it to prey on fish and even give sperm whales a hard time.

Yet as the creature is seldom encountered let alone studied, there are no direct measurements of the colossal squid’s behaviour.

So instead, the team used a set of routine metabolic rates for other deep-sea squid species and extrapolated the data to match the colossal squid’s size.


“Our findings demonstrate that the colossal squid has a daily energy consumption 300-fold to 600-fold lower than those of other similar-sized top predators of the Southern Ocean, such as baleen and toothed whales,” says Dr Rosa.


This study reveals a single 5kg Antarctic toothfish would provide enough nourishment for a 500kg colossal squid to survive for 200 days.


“The colossal squid is not a voracious predator capable of high-speed predator-prey interactions,” says Dr Rosa.

“It is rather, an ambush or sit-and-float predator that uses the hooks on its arms and tentacles to ensnare prey that unwittingly approach.”

Posted on May 7, 2010 at 4:26 PM

Cory Doctorow Gets Phished

It can happen to anyone:

Here’s how I got fooled. On Monday, I unlocked my Nexus One phone, installing a new and more powerful version of the Android operating system that allowed me to do some neat tricks, like using the phone as a wireless modem on my laptop. In the process of reinstallation, I deleted all my stored passwords from the phone. I also had a couple of editorials come out that day, and did a couple of interviews, and generally emitted a pretty fair whack of information.

The next day, Tuesday, we were ten minutes late getting out of the house. My wife and I dropped my daughter off at the daycare, then hurried to our regular coffee shop to get take-outs before parting ways to go to our respective offices. Because we were a little late arriving, the line was longer than usual. My wife went off to read the free newspapers, I stood in the line. Bored, I opened up my phone, fired up my freshly reinstalled Twitter client, and saw that I had a direct message from an old friend in Seattle, someone I know through fandom. The message read “Is this you????” and was followed by one of those ubiquitous shortened URLs that consist of a domain and a short code, like this:

The whole story is worth reading.

Posted on May 7, 2010 at 6:56 AM

Nobody Encrypts their Phone Calls

From the Forbes blog:

In an annual report published Friday by the U.S. judicial system on the number of wiretaps it granted over the past year …, the courts revealed that there were 2,376 wiretaps by law enforcement agencies in 2009, up 26% from 1,891 the year before, and up 76% from 1999. (Those numbers, it should be noted, don’t include international wiretaps or those aimed at intelligence purposes rather than law enforcement.)

But in the midst of that wiretapping bonanza, a more surprising figure is the number of cases in which law enforcement encountered encryption as a barrier: one.

According to the courts, only one wiretapping case in the entire country encountered encryption last year, and in that single case, whatever privacy tools were used don’t seem to have posed much of a hurdle to eavesdroppers. “In 2009, encryption was encountered during one state wiretap, but did not prevent officials from obtaining the plain text of the communications,” reads the report.

Posted on May 6, 2010 at 7:06 AM

Why Aren't There More Terrorist Attacks?

As the details of the Times Square car bomb attempt emerge in the wake of Faisal Shahzad’s arrest Monday night, one thing has already been made clear: Terrorism is fairly easy. All you need is a gun or a bomb, and a crowded target. Guns are easy to buy. Bombs are easy to make. Crowded targets—not only in New York, but all over the country—are easy to come by. If you’re willing to die in the aftermath of your attack, you could launch a pretty effective terrorist attack with a few days of planning, maybe less.

But if it’s so easy, why aren’t there more terrorist attacks like the failed car bomb in New York’s Times Square? Or the terrorist shootings in Mumbai? Or the Moscow subway bombings? After the enormous horror and tragedy of 9/11, why have the past eight years been so safe in the U.S.?

There are actually several answers to this question. One, terrorist attacks are harder to pull off than popular imagination—and the movies—lead everyone to believe. Two, there are far fewer terrorists than the political rhetoric of the past eight years leads everyone to believe. And three, random minor terrorist attacks don’t serve Islamic terrorists’ interests right now.

Hard to Pull Off

Terrorism sounds easy, but the actual attack is the easiest part.

Putting together the people, the plot and the materials is hard. It’s hard to sneak terrorists into the U.S. It’s hard to grow your own inside the U.S. It’s hard to operate; the general population, even the Muslim population, is against you.

Movies and television make terrorist plots look easier than they are. It’s hard to hold conspiracies together. It’s easy to make a mistake. Even 9/11, which was planned before the climate of fear that event engendered, just barely succeeded. Today, it’s much harder to pull something like that off without slipping up and getting arrested.

Few Terrorists

But even more important than the difficulty of executing a terrorist attack, there aren’t a lot of terrorists out there. Al-Qaida isn’t a well-organized global organization with movie-plot-villain capabilities; it’s a loose collection of people using the same name. Despite the post-9/11 rhetoric, there isn’t a terrorist cell in every major city. If you think about the major terrorist plots we’ve foiled in the U.S.—the JFK bombers, the Fort Dix plotters—they were mostly amateur terrorist wannabes with no connection to any sort of al-Qaida central command, and mostly no ability to effectively carry out the attacks they planned.

The successful terrorist attacks—the Fort Hood shooter, the guy who flew his plane into the Austin IRS office, the anthrax mailer—were largely nut cases operating alone. Even the unsuccessful shoe bomber, and the equally unsuccessful Christmas Day underwear bomber, had minimal organized help—and that help originated outside the U.S.

Terrorism doesn’t occur without terrorists, and they are far rarer than popular opinion would have it.

Small Attacks Aren’t Enough

Lastly, and perhaps most subtly, there’s not a lot of value in unspectacular terrorism anymore.

If you think about it, terrorism is essentially a PR stunt. The death of innocents and the destruction of property isn’t the goal of terrorism; it’s just the tactic used. And acts of terrorism are intended for two audiences: for the victims, who are supposed to be terrorized as a result, and for the allies and potential allies of the terrorists, who are supposed to give them more funding and generally support their efforts.

An act of terrorism that doesn’t instill terror in the target population is a failure, even if people die. And an act of terrorism that doesn’t impress the terrorists’ allies is not very effective, either.

Fortunately for us and unfortunately for the terrorists, 9/11 upped the stakes. It’s no longer enough to blow up something like the Oklahoma City Federal Building. Terrorists need to blow up airplanes or the Brooklyn Bridge or the Sears Tower or JFK airport—something big to impress the folks back home. Small no-name targets just don’t cut it anymore.

Note that this is very different from terrorism by an occupied population: the IRA in Northern Ireland, Iraqis in Iraq, Palestinians in Israel. Setting aside the actual politics, all of these terrorists believe they are repelling foreign invaders. That’s not the situation here in the U.S.

So, to sum up: If you’re just a loner wannabe who wants to go out with a bang, terrorism is easy. You’re more likely to get caught if you take a long time to plan or involve a bunch of people, but you might succeed. If you’re a representative of al-Qaida trying to make a statement in the U.S., it’s much harder. You just don’t have the people, and you’re probably going to slip up and get caught.

This essay originally appeared on AOL News.

EDITED TO ADD (5/5): A similar sentiment about the economic motivations of terrorists.

Posted on May 5, 2010 at 7:09 AM

Preventing Terrorist Attacks in Crowded Areas

On the New York Times Room for Debate Blog, I—along with several other people—was asked about how to prevent terrorist attacks in crowded areas. This is my response.

In the wake of Saturday’s failed Times Square car bombing, it’s natural to ask how we can prevent this sort of thing from happening again. The answer is stop focusing on the specifics of what actually happened, and instead think about the threat in general.

Think about the security measures commonly proposed. Cameras won’t help. They don’t prevent terrorist attacks, and their forensic value after the fact is minimal. In the Times Square case, surely there’s enough other evidence—the car’s identification number, the auto body shop the stolen license plates came from, the name of the fertilizer store—to identify the guy. We will almost certainly not need the camera footage. The images released so far, like the images in so many other terrorist attacks, may make for exciting television, but their value to law enforcement officers is limited.

Checkpoints won’t help, either. You can’t check everybody and everything. There are too many people to check, and too many train stations, buses, theaters, department stores and other places where people congregate. Patrolling guards, bomb-sniffing dogs, chemical and biological weapons detectors: they all suffer from similar problems. In general, focusing on specific tactics or defending specific targets doesn’t make sense. They’re inflexible; possibly effective if you guess the plot correctly, but completely ineffective if you don’t. At best, the countermeasures just force the terrorists to make minor changes in tactic and target.

It’s much smarter to spend our limited counterterrorism resources on measures that don’t focus on the specific. It’s more efficient to spend money on investigating and stopping terrorist attacks before they happen, and responding effectively to any that occur. This approach works because it’s flexible and adaptive; it’s effective regardless of what the bad guys are planning for next time.

After the Christmas Day airplane bombing attempt, I was asked how we can better protect our airplanes from terrorist attacks. I pointed out that the event was a security success—the plane landed safely, nobody was hurt, a terrorist was in custody—and that the next attack would probably have nothing to do with explosive underwear. After the Moscow subway bombing, I wrote that overly specific security countermeasures like subway cameras and sensors were a waste of money.

Now we have a failed car bombing in Times Square. We can’t protect against the next imagined movie-plot threat. Isn’t it time to recognize that the bad guys are flexible and adaptive, and that we need the same quality in our countermeasures?

I know, nothing I haven’t said many times before.

Steven Simon likes cameras, although his arguments are more movie-plot than real. Michael Black, Noah Shachtman, Michael Tarr, and Jeffrey Rosen all write about the limitations of security cameras. Paul Ekman wants more people. And Richard Clarke has a nice essay about how we shouldn’t panic.

Posted on May 4, 2010 at 1:31 PM
