January 2010 Archives

Tracking your Browser Without Cookies

How unique is your browser? Can you be tracked simply by its characteristics? The EFF is trying to find out. Their site Panopticlick will measure the characteristics of your browser setup and tell you how unique it is.

I just ran the test on myself, and my browser is unique amongst the 120,000 browsers tested so far. It's my browser plugin details; no one else has the exact configuration I do. My list of system fonts is almost unique; only one other person has the exact configuration I do. (This seems odd to me: I have a week-old Sony laptop running Windows 7, and I haven't done anything with the fonts.)
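
For a sense of what "unique among 120,000" means, here's a back-of-the-envelope sketch (mine, not the EFF's exact methodology): a fingerprint configuration shared by k of the n browsers tested carries roughly -log2(k/n) bits of identifying information.

    import math

    # Surprisal of a configuration shared by k of the n browsers tested.
    def identifying_bits(k_matching: int, n_tested: int) -> float:
        return -math.log2(k_matching / n_tested)

    n = 120_000                    # browsers tested so far, per the post
    print(identifying_bits(1, n))  # a unique plugin list: ~16.9 bits
    print(identifying_bits(2, n))  # a font list shared with one other person: ~15.9 bits
    # For comparison, about 33 bits are enough to single out one person
    # among the world's ~6.8 billion people.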

EFF has some suggestions for self-defense, none of them very satisfactory. And here's a news story.

EDITED TO ADD (1/29): There's a lot in the comments leading me to question the accuracy of this test. I'll post more when I know more.

EDITED TO ADD (2/12): Comments from one of the project developers.

Posted on January 29, 2010 at 7:06 AM • 130 Comments

World Privacy Day and the Madrid Privacy Declaration

Today is World Privacy Day. (I know; it's odd to me, too.) You can celebrate by signing on to the Madrid Privacy Declaration, either as an individual or as an organization.

Me, I'm celebrating -- but I'm not going to tell you how.

Posted on January 28, 2010 at 6:21 AM • 33 Comments

Scanning Cargo for Nuclear Material and Conventional Explosives

Still experimental:

The team propose using a particle accelerator to alternately smash ionised hydrogen molecules and deuterium ions into targets of carbon and boron respectively. The collisions produce beams of gamma rays of various energies as well as neutrons. These beams are then passed through the cargo.

By measuring the way the beams are absorbed, Goldberg and company say they can work out whether the cargo contains explosives or nuclear materials. And they say they can do it at the rate of 20 containers per hour.

That's an ambitious goal that presents numerous challenges.

For example, the beam currents will provide relatively sparse data so the team will have to employ a technique called few-view tomography to fill in the gaps. It will also mean that each container will have to be zapped several times. That may not be entirely desirable for certain types of goods such as food and equipment with delicate electronics.

Just how beams of gamma rays and neutrons affect these kinds of goods is something that will have to be determined.

Then there is the question of false positives. One advantage of a machine like this is that it has several scanning modes: if one reveals something suspicious, it can switch to another to look in more detail. That should build up a decent picture of the cargo's contents and reduce false positives.

Posted on January 27, 2010 at 6:53 AM • 47 Comments

More Surveillance in the UK

This seems like a bad idea:

Police in the UK are planning to use unmanned spy drones, controversially deployed in Afghanistan, for the "routine" monitoring of antisocial motorists, protesters, agricultural thieves and fly-tippers, in a significant expansion of covert state surveillance.

Once again, laws and technologies deployed against terrorism are used against much more mundane crimes.

Posted on January 26, 2010 at 7:16 AM • 77 Comments

The Abdulmutallab Dots that Should Have Been Connected

The notion that U.S. intelligence should have "connected the dots," and caught Abdulmutallab, isn't going away. This is a typical example:

So you'd need some "articulable facts" which could "reasonably warrant a determination" that the guy may be a terrorist based on his behavior. And one assumes his behavior would have to catch the attention of the authorities, correct?

Well let's see.

  1. His dad, a former minister in Nigeria, informed the US embassy there that his son had been radicalized (the dad obviously had a reason for concern).
  2. US intelligence had been following him for a while, dubbing him "the Nigerian" (one assumes there was a reason).
  3. He was on a watch list (one assumes there was a reason).
  4. He had been banned from Britain (yup, one assumes there was a reason).
  5. The British intelligence service had identified him to our intelligence agencies in 2008 as a potential threat (sigh, uh, yeah, reason).
  6. He'd just visited Yemen, an al Qaeda hotbed (given the first 5, one can reasonably guess at the reason).
  7. He bought a one-way ticket to the United States in Africa through Europe (red flag 1).
  8. He paid cash (red flag 2).
  9. He checked no luggage (red flag 3).

...are those or are those not "articulable facts" which should have "reasonably warranted a determination" that this guy fit the profile of someone who is usually up to no good? No?

Kevin Drum responds to this line by line:

...the more we learn, the less this seems to be holding water. Let's go through the list one by one:

  1. Jim Arkedis, a former intelligence analyst: "For the record, 99 percent of the time, walk-in sources to U.S. Embassies are of poor-to-unknown quality. That includes friends and family members who walk into the embassy and claim their relatives are potential dangers. Why? Family relations are tangled webs, and who really knows if your uncle just might want you arrested in revenge for that unsettled family land dispute."
  2. This is true. But we didn't have a name, only a tip that "a Nigerian" might be planning an attack.
  3. Yes. But as the LA Times puts it, he was on a list of half a million people with "suspected extremist links but who are not considered threats."
  4. Yes, but not because of any suspected terrorist ties. From the New York Times: "[Home Secretary Alan] Johnson said Mr. Abdulmutallab's application to renew his student visa was rejected in May after officials had determined that the academic course he gave as his reason for returning to Britain was fake....The rejection of the visa renewal appeared to have been part of a wider process initiated by British authorities this year when they began to crack down on so-called fake colleges that officials said had been established in large numbers across Britain in an attempt to elude tightened immigration controls."
  5. No, they didn't. From the Telegraph: "Diplomatic sources said that the Prime Minister's spokesman had intended to refer to information gleaned by MI5 after the Christmas Day incident following an exhaustive examination of records going back through Abdulmutallab's time in Britain up to October 2008."
  6. True.
  7. No, it was a roundtrip ticket.
  8. Nigeria and Ghana (where Abdulmutallab bought his ticket) are largely cash economies. Andrew Sprung tells us that Abdulmutallab "would certainly raise no alarms by paying cash."
  9. This is apparently true.

I'd go even further on point 9. I fly 240,000 miles a year, and I almost never check luggage. And that goes double when flying in or out of the Third World. And I've also read that he didn't have a coat, something else that -- living in Minneapolis -- I regularly see.

As I keep saying, everything is obvious in hindsight. After the fact, it's easy to point to the bits of evidence and claim that someone should have "connected the dots." But before the fact, when there are millions of dots -- some important but the vast majority unimportant -- uncovering plots is a lot harder.

I wrote in 2002:

The problem is that the dots can only be numbered after the fact. With the benefit of hindsight, it's easy to draw lines from people in flight school here, to secret meetings in foreign countries there, over to interesting tips from foreign governments, and then to INS records. Before 9/11 it's not so easy. Rather than thinking of intelligence as a simple connect-the-dots picture, think of it as a million unnumbered pictures superimposed on top of each other. Or a random-dot stereogram. Is it a lion, a tree, a cast iron stove, or just an unintelligible mess of dots? You try and figure it out.

It's certainly possible that intelligence missed something that could have alerted them. And there have been reports saying that a misspelling of Abdulmutallab's name caused the Department of State to miss an alert. (I've also heard, although I can't find a link, that some database truncated his name because it was too long for the database field.) And I'm sure that a lot of the money we're wasting on full body scanners and other airport security measures could be much better spent increasing our intelligence and investigation capabilities. But be careful before you claim something that's obvious after the fact should have been obvious before the fact.

Posted on January 25, 2010 at 7:09 AM • 66 Comments

Me on Chinese Hacking and Enabling Surveillance

CNN.com just published an essay of mine on China's hacking of Google, an update of this essay.

EDITED TO ADD (2/8): An essay along similar lines.

Posted on January 24, 2010 at 8:43 AM • 30 Comments

German TV on the Failure of Full-Body Scanners

The video is worth watching, even if you don't speak German. The scanner caught a subject's cell phone and Swiss Army knife -- and the microphone he was wearing -- but missed all the components to make a bomb that he hid on his body. Admittedly, he only faced the scanner from the front and not from the side. But he also didn't hide anything in a body cavity other than his mouth -- I didn't think about that one -- he didn't use low density or thinly sliced PETN, and he didn't hide anything in his carry-on luggage.

Full-body scanners: they're not just a dumb idea, they don't actually work.

Posted on January 22, 2010 at 7:28 AM • 73 Comments

Wrasse Punish Cheaters

Interesting:

The bluestreak cleaner wrasse (Labroides dimidiatus) operates an underwater health spa for larger fish. It advertises its services with bright colours and distinctive dances. When customers arrive, the cleaner eats parasites and dead tissue lurking in any hard-to-reach places. Males and females will sometimes operate a joint business, working together to clean their clients. The clients, in return, dutifully pay the cleaners by not eating them.

That's the basic idea, but cleaners sometimes violate their contracts. Rather than picking off parasites, they'll take a bite of the mucus that lines their clients' skin. That's an offensive act -- it's like a masseuse having an inappropriate grope between strokes. The affronted client will often leave. That's particularly bad news if the cleaners are working as a pair because the other fish, who didn't do anything wrong, still loses out on future parasite meals.

Males don't take this sort of behaviour lightly. Nichola Raihani from the Zoological Society of London has found that males will punish their female partners by chasing them aggressively, if their mucus-snatching antics cause a client to storm out.

[...]

At first glance, the male cleaner wrasse behaves oddly for an animal, in punishing an offender on behalf of a third party, even though he hasn't been wronged himself. That's common practice in human societies but much rarer in the animal world. But Raihani's experiments clearly show that the males are actually doing themselves a favour by punishing females on behalf of a third party. Their act of apparent altruism means they get more food in the long run.

Posted on January 20, 2010 at 1:26 PM • 9 Comments

Google vs. China

I'm not sure what I can add to this: politically motivated attacks against Gmail from China. I've previously written about hacking from China. Shishir Nagaraja and Ross Anderson wrote a report specifically describing how the Chinese have been hacking groups that are politically opposed to them. I've previously written about censorship, Chinese and otherwise. I've previously written about broad government eavesdropping on the Internet, Chinese and otherwise. Seems that the Chinese got in through back doors installed to facilitate government eavesdropping, which I even talked about in my essay on eavesdropping. This new attack seems to be highly sophisticated, which is no surprise.

This isn't a new story, and I wouldn't have mentioned it at all if it weren't for the surreal sentence at the bottom of this paragraph:

The Google-China flap has already reignited the debate over global censorship, reinvigorating human rights groups drawing attention to abuses in the country and prompting U.S. politicians to take a hard look at trade relations. The Obama administration issued statements of support for Google, and members of Congress are pushing to revive a bill banning U.S. tech companies from working with governments that digitally spy on their citizens.

Of course, the bill won't go anywhere, but shouldn't someone inform those members of Congress about what's been going on in the United States for the past eight years?

In related news, Google has enabled https by default for Gmail users. In June 2009, I cosigned a letter to the CEO of Google asking for this change. It's a good thing.

EDITED TO ADD (1/19): Commentary on Google's bargaining position.

Posted on January 19, 2010 at 12:45 PM • 35 Comments

Privacy Violations by Facebook Employees

I don't know if this is real, but it seems perfectly reasonable that all of Facebook is stored in a huge database that someone with the proper permissions can access and modify. And it also makes sense that developers and others would need the ability to assume anyone's identity.

Rumpus: You've previously mentioned a master password, which you no longer use.

Employee: I'm not sure when exactly it was deprecated, but we did have a master password at one point where you could type in any user's user ID, and then the password. I'm not going to give you the exact password, but with upper and lower case, symbols, numbers, all of the above, it spelled out 'Chuck Norris,' more or less. It was pretty fantastic.

Rumpus: This was accessible by any Facebook employee?

Employee: Technically, yes. But it was pretty much limited to the original engineers, who were basically the only people who knew about it. It wasn't as if random people in Human Resources were using this password to log into profiles. It was made and designed for engineering reasons. But it was there, and any employee could find it if they knew where to look.

I should also say that it was only available internally. If I were to log in from a high school or library, I couldn't use it. You had to be in the Facebook office, using the Facebook ISP.

Rumpus: Do you think Facebook employees ever abused the privilege of having universal access?

Employee: I know it has happened in the past, because at least two people have been fired for it that I know of.

[...]

Employee: See, the thing is -- and I don't know how much you know about it -- it's all stored in a database on the backend. Literally everything. Your messages are stored in a database, whether deleted or not. So we can just query the database, and easily look at it without ever logging into your account. That's what most people don't understand.

Rumpus: So the master password is basically irrelevant.

Employee: Yeah.

Rumpus: It's just for style.

Employee: Right. But it's no longer in use. Like I alluded to, we've cracked down on this lately, but it has been replaced by a pretty cool tool. If I visited your profile, for example, on our closed network, there's a 'switch login' button. I literally just click it, explain why I'm logging in as you, click 'OK,' and I'm you. You can do it as long as you have an explanation, because you'd better be able to back it up. For example, if you're investigating a compromised account, you have to actually be able to log into that account.

Rumpus: Are your managers really on your ass about it every time you log in as someone else?

Employee: No, but if it comes up, you'd better be able to justify it. Or you will be fired.

Rumpus: What did they do?

Employee: I know one of them went in and manipulated some other person's data, changed their religious views or something like that. I don't remember exactly what it was, but he got reported, got found out, got fired.

Posted on January 19, 2010 at 11:25 AM • 33 Comments

Eavesdropping in the Former Soviet Union

Interesting story:

The phone's ringer is a pretty simple thing: there's a coil, a magnet and a hammer controlled by the magnet that hits the gongs when there is AC current in the coil. The ringer system is connected directly to the phone line when the phone is on hook. (Actually through a capacitor that protects the ringer system from DC current normally present in the line.)

If you haven't figured yet, the coil with the hammer is a speaker, not a perfect one, but a speaker anyway, and that also means that the system can be used as an electrodynamic microphone. Any ordinary speaker is an electrodynamic microphone at the same time, if you hook it up to an audio amplifier using normal microphone input.

So this was how actually they, the KGB, did their eavesdropping, I thought. They didn't need to freeze outside or put bugs in our homes, because they had a nice wiretapping device in every single home in the country. The shocking part of it was that they didn't just eavesdrop phone conversations - that one was kind of obvious. They were able to hear everything. The PSTN switching stations were considered strategic objects, they were under KGB's control and surely it was no problem for them to get a few powerful amplifiers hooked up to certain lines leading to homes they needed to eavesdrop. Simple!

Posted on January 19, 2010 at 6:03 AM • 48 Comments

Security vs. Sustainability in Building Construction

Interesting:

Any facility executive involved in the design of a new building would agree that security is one important goal for the new facility. These days, facility executives are likely to say that green design is another priority. Unfortunately, these two goals are often in conflict. Consider the issues that arise when even a parking lot is being designed. From a security perspective, bright lights in the parking lot enable security cameras to pick up all activity at night. From a green point of view, a brightly lit parking lot is a waste of energy and a source of light pollution. An advocate of green design would argue for plenty of leafy trees and bushes in the parking lot to minimize the urban heat island effect; a security consultant would reply that trees in the lot will block surveillance cameras and provide hiding places for would-be criminals.

There is no shortage of conflicts between sustainability and security goals. Fortunately these conflicts can be resolved to the mutual benefit of both parties, resulting in sustainable and secure buildings and campuses. This balance can be best achieved if security is involved early in the design process.

Posted on January 18, 2010 at 1:34 PM • 42 Comments

Prison Escape Artist

Clever ruse:

When he went to court for hearings, he could see the system was flawed. He would arrive on the twelfth floor in handcuffs and attached at the waist to a dozen other inmates. A correction officer would lead them into the bull pen, an area where inmates wait for their lawyers. From the bull pen, the inmates would follow their lawyers or court officials either up a set of back stairs into a courtroom or down a set of stairs.

The more Tackmann went to court, the more he noticed that once the inmate at the head of the line would get uncuffed and turn into the bull pen, he would be out of view of the correction officer at the back of the line. He could then avoid the bull pen and dart down the rear stairs.

[...]

On the morning of September 30, Tackmann prepared for court in Manhattan. He dressed in a light-gray three-piece suit that he thinks was his stepfather’s. He wore two sets of dress socks. One around his feet, the other around the Rikers Island slippers he was ordered to wear ("to make them look like shoes; they looked like suede shoes").

As he was bussed to the courthouse, he rehearsed the move in his mind.

When you come up to the twelfth floor, you’re handcuffed with like twelve people on a chain. The C.O. is right there with you. You have to be ready, so if the move is there…

That day, the move was there. "I was in the front of the line. The C.O. -- it was some new guy. He un-handcuffed us in the hallway, and I was the first one around the corner."


Tackmann raced down the stairwell and knocked on a courtroom door. A court officer opened it.

Tackmann had the shtick worked out -- the lawyer in distress. "You know," he said, "I was just with a client, and my mother is real sick in Bellevue. Could you tell me how to get to Bellevue? I gotta get over there fast; she is 80 years old."

He wanted to sprint. The adrenaline was gushing. He calmly walked to the courtroom entrance as the sweat trickled around his neck. He raced down several flights of stairs and tried the door. It was locked. He walked down another flight. Locked. What is going on? Did they find out I was missing already? One more flight down. The door was open. He jumped in an elevator, got out on the ground floor, and walked into the street. Freedom. But not for long.

Posted on January 18, 2010 at 6:57 AM • 23 Comments

Fixing Intelligence Failures

President Obama, in his speech last week, rightly focused on fixing the intelligence failures that resulted in Umar Farouk Abdulmutallab being ignored, rather than on technologies targeted at the details of his underwear-bomb plot. But while Obama's instincts are right, reforming intelligence for this new century and its new threats is a more difficult task than he might like. We don't need new technologies, new laws, new bureaucratic overlords, or -- for heaven's sake -- new agencies. What prevents information sharing among intelligence organizations is the culture of the generation that built those organizations.

The U.S. intelligence system is a sprawling apparatus, spanning the FBI and the State Department, the CIA and the National Security Agency, and the Department of Homeland Security -- itself an amalgamation of two dozen different organizations -- designed and optimized to fight the Cold War. The single, enormous adversary then was the Soviet Union: as bureaucratic as they come, with a huge budget, and capable of very sophisticated espionage operations. We needed to defend against technologically advanced electronic eavesdropping operations, their agents trying to bribe or seduce our agents, and a worldwide intelligence gathering capability that hung on our every word.

In that environment, secrecy was paramount. Information had to be protected by armed guards and double fences, shared only among those with appropriate security clearances and a legitimate "need to know," and it was better not to transmit information at all than to transmit it insecurely.

Today's adversaries are different. There are still governments, like China, who are after our secrets. But the secrets they're after are more often corporate than military, and most of the other organizations of interest are like al Qaeda: decentralized, poorly funded and incapable of the intricate spy versus spy operations the Soviet Union could pull off.

Against these adversaries, sharing is far more important than secrecy. Our intelligence organizations need to trade techniques and expertise with industry, and they need to share information among the different parts of themselves. Today's terrorist plots are loosely organized ad hoc affairs, and those dots that are so important for us to connect beforehand might be on different desks, in different buildings, owned by different organizations.

Critics have pointed to laws that prohibited inter-agency sharing but, as the 9/11 Commission found, the law allows for far more sharing than goes on. It doesn't happen because of inter-agency rivalries, a reliance on outdated information systems, and a culture of secrecy. What we need is an intelligence community that shares ideas and hunches and facts on their versions of Facebook, Twitter and wikis. We need the bottom-up organization that has made the Internet the greatest collection of human knowledge and ideas ever assembled.

The problem is far more social than technological. Teaching your mom to "text" and your dad to Twitter doesn't make them part of the Internet generation, and giving all those cold warriors blogging lessons won't change their mentality -- or the culture. The reason this continues to be a problem, the reason President George W. Bush couldn't change things even after the 9/11 Commission came to much the same conclusions as President Obama's recent review did, is generational. The Internet is the greatest generation gap since rock and roll, and it's just as true inside government as out. We might have to wait for the elders inside these agencies to retire and be replaced by people who grew up with the Internet.

A version of this op-ed previously appeared in the San Francisco Chronicle.

I wrote about this in 2002.

EDITED TO ADD (1/17): Another opinion.

Posted on January 16, 2010 at 7:13 AM • 37 Comments

Loretta Napoleoni on the Economics of Terrorism

Interesting TED talk:

Loretta Napoleoni details her rare opportunity to talk to the secretive Italian Red Brigades -- an experience that sparked a lifelong interest in terrorism. She gives a behind-the-scenes look at its complex economics, revealing a surprising connection between money laundering and the US Patriot Act.

Posted on January 15, 2010 at 1:39 PM • 18 Comments

Ray McGovern on Intelligence Failures

Good commentary from former CIA analyst Ray McGovern:

The short answer to the second sentence is: Yes, it is inevitable that "certain plots will succeed."

A more helpful answer would address the question as to how we might best minimize their prospects for success. And to do this, sorry to say, there is no getting around the necessity to address the root causes of terrorism or, in the vernacular, "why they hate us."

If we don't go beyond self-exculpatory sloganeering in attempting to answer that key question, any "counter terrorism apparatus" is doomed to failure. Honest appraisals would tread on delicate territory, but any intelligence agency worth its salt must be willing/able to address it.

Delicate? Take, for example, what Khalid Sheik Mohammed, the "mastermind" of 9/11, said was his main motive. Here’s what the 9/11 Commission Report wrote on page 147. You will not find it reported in the Fawning Corporate Media (FCM):

"By his own account, KSM’s animus toward the United States stemmed…from his violent disagreement with U.S. foreign policy favoring Israel."

This is not the entire picture, of course. Other key factors include the post-Gulf War stationing of U.S. troops in Saudi Arabia, widely seen as defiling the holy sites of Islam.

Add Washington’s propping up of dictatorial, repressive regimes in order to secure continuing access to oil and natural gas -- widely (and accurately) seen as one of the main reasons for the invasion of Iraq and Afghanistan.

Not to mention the Pentagon’s insatiable thirst for additional permanent (sorry, the term is now "enduring") military bases in that part of the world.

[...]

The most effective step would be to release the CIA Inspector General report on intelligence community performance prior to 9/11. That investigation was run by, and its report was prepared by an honest man, it turns out.

It was immediately suppressed by then-Acting DCI John McLaughlin -- another Tenet clone -- and McLaughlin’s successors as director, Porter Goss, Michael Hayden, and now Leon Panetta.

Accountability is key. If there is no accountability, there is total freedom to screw up, and screw up royally, without any thought of possible personal consequences.

Not only is it certain that we will face more terrorist attacks, but the keystone-cops nature of recent intelligence operations ... whether in using cell phones in planning kidnappings in Italy, or in allowing suicide bombers access to CIA bases in Taliban-infested eastern Afghanistan ... will continue. Not to mention the screw-up in the case of Abdulmutallab.

Posted on January 15, 2010 at 7:22 AM • 53 Comments

$3.2 Million Jewelry Store Theft

I've written about this sort of thing before:

A robber bored a hole through the wall of a jewelry shop and walked off with about 200 luxury watches worth 300 million yen ($3.2 million) in Tokyo's upscale Ginza district, police said Saturday.

From Secrets and Lies, p. 318:

Threat modeling is, for the most part, ad hoc. You think about the threats until you can’t think of any more, then you stop. And then you’re annoyed and surprised when some attacker thinks of an attack you didn’t. My favorite example is a band of California art thieves that would break into people’s houses by cutting a hole in their walls with a chainsaw. The attacker completely bypassed the threat model of the defender. The countermeasures that the homeowner put in place were door and window alarms; they didn’t make a difference to this attack.

One of the important things to consider in threat modeling is whether the attacker is looking for any victim, or is specifically targeting you. If the attacker is looking for any victim, then countermeasures that make you a less attractive target than other people are generally good enough. If the attacker is specifically targeting you, then you need to consider a greater level of security.

Posted on January 14, 2010 at 12:43 PM • 55 Comments

Body Cavity Scanners

At least one company is touting its technology:

Nesch, a company based in Crown Point, Indiana, may have a solution. It’s called diffraction-enhanced X-ray imaging, or DEXI, which employs proprietary diffraction enhanced imaging and multiple image radiography.

Rather than simply shining X-rays through the subject and looking at the amount that passes through (like a conventional X-ray machine), DEXI analyzes the X-rays that are scattered or refracted by soft tissue or other low-density material. Conventional X-rays show little more than the skeleton, but the new technique can reveal far more, which makes it useful for both medical and security applications.

Posted on January 14, 2010 at 6:00 AM • 45 Comments

Airplane Security Commentary

Excellent commentary from The Register:

As the smoke clears following the case of Umar Farouk Abdul Mutallab, the failed Christmas Day "underpants bomber" of Northwest Airlines Flight 253 fame, there are just three simple points for us Westerners to take away.

First: It is completely impossible to prevent terrorists from attacking airliners.

Second: This does not matter. There is no need for greater efforts on security.

Third: A terrorist set fire to his own trousers, suffering eyewateringly painful burns to what Australian cricket commentators sometimes refer to as the "groinal area", and nobody seems to be laughing. What's wrong with us?

Posted on January 13, 2010 at 2:55 PM • 36 Comments

The Power Law of Terrorism

Research result #1: "A Generalized Fission-Fusion Model for the Frequency of Severe Terrorist Attacks," by Aaron Clauset and Frederik W. Wiegel.

Plot the number of people killed in terrorist attacks around the world since 1968 against the frequency with which such attacks occur and you'll get a power law distribution -- that's a fancy way of saying a straight line when both axes have logarithmic scales.

The question, of course, is why? Why not a normal distribution, in which there would be many orders of magnitude fewer extreme events?

Aaron Clauset and Frederik Wiegel have built a model that might explain why. The model makes five simple assumptions about the way terrorist groups grow and fall apart and how often they carry out major attacks. And here's the strange thing: this model almost exactly reproduces the distribution of terrorist attacks we see in the real world.

These assumptions are things like: terrorist groups grow by accretion (absorbing other groups) and fall apart by disintegrating into individuals. They must also be able to recruit from a more or less unlimited supply of willing terrorists within the population.

Research Result #2: "Universal Patterns Underlying Ongoing Wars and Terrorism," by Neil F. Johnson, Mike Spagat, Jorge A. Restrepo, Oscar Becerra, Juan Camilo Bohorquez, Nicolas Suarez, Elvira Maria Restrepo, and Roberto Zarama.

In the case of the Iraq war, we might ask how many conflicts causing ten casualties are expected to occur over a one-year period. According to the data, the answer is the average number of events per year times 10^-2.3, or 0.005. If we instead ask how many events will cause twenty casualties, the answer is proportional to 20^-2.3. Taking into account the entire history of any given war, one finds that the frequency of events on all scales can be predicted by exactly the same exponent.

Professor Neil Johnson of Oxford University has come up with a remarkable result regarding these power laws: for several different wars, the exponent has about the same value. Johnson studied the long-standing conflict in Colombia, the war in Iraq, the global rate of terrorist attacks in non-G7 countries, and the war in Afghanistan. In each case, the power law exponent that predicted the distribution of conflicts was close to the value -2.5.

This doesn't surprise me; power laws are common in naturally random phenomena.
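
The arithmetic in the quoted passages is easy to check. Here's a minimal sketch, assuming the power-law form the articles describe, where the expected number of events of a given severity scales as casualties^-alpha:

    # Power-law sketch: relative frequency of an event of a given severity,
    # up to an overall rate that depends on the particular conflict.
    def relative_frequency(casualties: float, alpha: float = 2.3) -> float:
        return casualties ** -alpha

    print(relative_frequency(10))   # 10^-2.3 ~= 0.005, as in the quoted Iraq example
    print(relative_frequency(20))   # 20^-2.3 ~= 0.001

    # With the near-universal exponent of about 2.5, an attack ten times as
    # severe is predicted to be roughly 10^2.5 ~= 316 times rarer.
    print(relative_frequency(10, alpha=2.5) / relative_frequency(100, alpha=2.5))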

Posted on January 12, 2010 at 1:46 PM • 25 Comments

The Comparative Risk of Terrorism

Good essay from the Wall Street Journal:

It might be unrealistic to expect the average citizen to have a nuanced grasp of statistically based risk analysis, but there is nothing nuanced about two basic facts:

(1) America is a country of 310 million people, in which thousands of horrible things happen every single day; and

(2) The chances that one of those horrible things will be that you're subjected to a terrorist attack can, for all practical purposes, be calculated as zero.

Consider that on this very day about 6,700 Americans will die.... Consider then that around 1,900 of the Americans who die today will be less than 65, and that indeed about 140 will be children. Approximately 50 Americans will be murdered today, including several women killed by their husbands or boyfriends, and several children who will die from abuse and neglect. Around 85 of us will commit suicide, and another 120 will die in traffic accidents.

[...]

Indeed, if one does not utter the magic word "terrorism," the notion that it is actually in the best interests of the country for the government to do everything possible to keep its citizens safe becomes self-evident nonsense. Consider again some of the things that will kill 6,700 Americans today. The country's homicide rate is approximately six times higher than that of most other developed nations; we have 15,000 more murders per year than we would if the rate were comparable to that of otherwise similar countries. Americans own around 200 million firearms, which is to say there are nearly as many privately owned guns as there are adults in the country. In addition, there are about 200,000 convicted murderers walking free in America today (there have been more than 600,000 murders in America over the past 30 years, and the average time served for the crime is about 12 years).

Given these statistics, there is little doubt that banning private gun ownership and making life without parole mandatory for anyone convicted of murder would reduce the homicide rate in America significantly. It would almost surely make a major dent in the suicide rate as well: Half of the nation's 31,000 suicides involve a handgun. How many people would support taking both these steps, which together would save exponentially more lives than even a -- obviously hypothetical -- perfect terrorist-prevention system? Fortunately, very few. (Although I admit a depressingly large number might support automatic life without parole.)

Or consider traffic accidents. All sorts of measures could be taken to reduce the current rate of automotive carnage from 120 fatalities a day -- from lowering speed limits, to requiring mechanisms that make it impossible to start a car while drunk, to even more restrictive measures. Some of these measures may well be worth taking. But the point is that at present we seem to consider 43,000 traffic deaths per year an acceptable cost to pay for driving big fast cars.

Kevin Drum takes issue with the analysis:

Two things. First, this line of argument -- that terrorism is statistically harmless compared to lots of other activities -- will never work. For better or worse, it just won't. So we should knock it off.

Second, even in the realm of pure logic it really doesn't hold water. The fundamental fear of terrorism is that it's not just random or unintentional, like car accidents or (for most of us) the threat of homicide. It's carried out by people with a purpose. The panic caused by the underwear bomber wasn't so much over the prospect of a planeload of casualties, it was over the reminder that al-Qaeda is still out there and still eager to expand its reach and kill thousands if we ever decide to let our guard down a little bit.

So even if you agree with Campos, as I do, that overreaction to al-Qaeda's efforts is dumb and counterproductive, it's perfectly reasonable to be more afraid of a highly motivated group with malign ideology and murderous intent than of things like traffic accidents or hurricanes. Suggesting otherwise, in some kind of hyperlogical a-death-is-a-death sense, strikes most people as naive and clueless. It's an argument that probably hurts the cause of common sense more than it helps.

While I agree that arguing that terrorism is statistically harmless isn't going to win any converts, I still think it's an important point to make. We routinely overestimate rare risks and underestimate common risks, and the more we recognize that cognitive bias, the better chance we have for overcoming it.

And Kevin illustrates another cognitive bias: we fear risks deliberately perpetrated by other people more than we do risks that occur by accident. And while we fear the unknown -- the "reminder that al-Qaeda is still out there and still eager to expand its reach and kill thousands if we ever decide to let our guard down a little bit" -- more than the familiar, the reality is that automobiles will kill over 3,000 people this month, next month, and every month from now until the foreseeable future, irrespective of whether we let our guard down or not. There simply isn't any reasonable scenario by which terrorism even approaches that death toll.

Yes, the risks are different. Your personal chance of dying in a car accident depends on where you live, how much you drive, whether or not you drink and drive, and so on. But your personal chance of dying in a terrorist attack also depends on these sorts of things: where you live, how often you fly, what you do for a living, and so on. (There's also a control bias at work: we underestimate the risk in situations where we're in control, or think we're in control -- like driving -- and overestimate the risks in situations where we're not in control.) But as a nation we get to set our priorities, and decide how to spend our money. No one is suggesting we ignore the risks of terrorism -- and making people feel safe is a good thing to do -- but it makes no sense to focus so much effort and money on it when there are far worse risks to Americans.

Jeffrey Rosen wrote about this last year. And similar sentiments from Baroness Murphy of the British House of Lords.

Remember, the terrorists want us to be terrorized, and they've chosen this tactic precisely because we have all these cognitive biases that magnify their actions. We can fight back by refusing to be terrorized.

Posted on January 12, 2010 at 6:15 AM • 88 Comments

My Second CNN.com Essay on the Underwear Bomber

This one is about our tendency to overreact to rare risks, and is an update of this 2007 essay.

I think we should start calling them the "underpants of mass destruction."

Posted on January 11, 2010 at 1:46 PM • 41 Comments

768-bit Number Factored

News:

On December 12, 2009, we factored the 768-bit, 232-digit number RSA-768 by the number field sieve. The number RSA-768 was taken from the now obsolete RSA Challenge list as a representative 768-bit RSA modulus. This result is a record for factoring general integers. Factoring a 1024-bit RSA modulus would be about a thousand times harder, and a 768-bit RSA modulus is several thousands times harder to factor than a 512-bit one. Because the first factorization of a 512-bit RSA modulus was reported only a decade ago it is not unreasonable to expect that 1024-bit RSA moduli can be factored well within the next decade by an academic effort such as ours.... Thus, it would be prudent to phase out usage of 1024-bit RSA within the next three to four years.

[...]

Our computation required more than 10^20 operations. With the equivalent of almost 2000 years of computing on a single core 2.2GHz AMD Opteron, on the order of 2^67 instructions were carried out. The overall effort is sufficiently low that even for short-term protection of data of little value, 768-bit RSA moduli can no longer be recommended.
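
The quoted difficulty ratios are consistent with the standard heuristic running time of the general number field sieve. Here's a rough back-of-the-envelope sketch (mine, not the team's estimate) comparing key sizes with the usual L-notation formula:

    import math

    # Heuristic GNFS cost: exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)),
    # with n about 2^bits. Constant factors are ignored, so only the ratios
    # between key sizes are meaningful.
    def gnfs_cost(bits: int) -> float:
        ln_n = bits * math.log(2)
        c = (64 / 9) ** (1 / 3)
        return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    print(gnfs_cost(768) / gnfs_cost(512))    # ~6,000x: "several thousands times harder"
    print(gnfs_cost(1024) / gnfs_cost(768))   # ~1,200x: "about a thousand times harder"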

News articles.

Posted on January 11, 2010 at 8:00 AM • 37 Comments

Cybersecurity Theater at FOSE

FOSE, the big government IT conference, has a "Cybersecurity Theater" this year. I wonder if they'll check photo IDs.

On a similar note, I am pleased that my term "security theater" has finally hit the mainstream. It's everywhere. My favorite variant is "security theater of the absurd."

And this great cartoon. And two more.

Jon Stewart didn't use the words "security theater," but he was pretty funny on January 4.

Posted on January 8, 2010 at 12:14 PM • 37 Comments

FIPS 140-2 Level 2 Certified USB Memory Stick Cracked

Kind of a dumb mistake:

The USB drives in question encrypt the stored data via the practically uncrackable AES 256-bit hardware encryption system. Therefore, the main point of attack for accessing the plain text data stored on the drive is the password entry mechanism. When analysing the relevant Windows program, the SySS security experts found a rather blatant flaw that has quite obviously slipped through testers' nets. During a successful authorisation procedure the program will, irrespective of the password, always send the same character string to the drive after performing various crypto operations -- and this is the case for all USB Flash drives of this type.

Cracking the drives is therefore quite simple. The SySS experts wrote a small tool for the active password entry program's RAM which always made sure that the appropriate string was sent to the drive, irrespective of the password entered and as a result gained immediate access to all the data on the drive. The vulnerable devices include the Kingston DataTraveler BlackBox, the SanDisk Cruzer Enterprise FIPS Edition and the Verbatim Corporate Secure FIPS Edition.

Nice piece of analysis work.
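
To make the flaw concrete, here's a hypothetical sketch of the broken design as described in the article; the function names and token bytes are invented for illustration and are not SySS's tool or the vendors' actual protocol:

    # Hypothetical illustration of the flaw: the password check happens only in
    # the host software, and the drive always receives the same unlock bytes.
    FIXED_UNLOCK_TOKEN = b"\x13\x37\xc0\xde"   # invented; identical for every drive of the model

    def password_accepted_locally(password: str) -> bool:
        # placeholder for whatever validation the Windows client performed
        return len(password) > 0

    def host_software_unlock(drive, password: str) -> bool:
        """What the vendor software effectively did: the drive never sees the password."""
        if not password_accepted_locally(password):
            return False
        drive.send(FIXED_UNLOCK_TOKEN)   # same bytes regardless of the password entered
        return True

    def bypass(drive) -> None:
        """The SySS attack amounts to always delivering that constant token."""
        drive.send(FIXED_UNLOCK_TOKEN)   # drive unlocks; no password needed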

The article goes on to question the value of the FIPS certification:

The real question, however, remains unanswered -- how could USB Flash drives that exhibit such a serious security hole be given one of the highest certificates for crypto devices? Even more importantly, perhaps -- what is the value of a certification that fails to detect such holes?

The problem is that no one really understands what a FIPS 140-2 certification means. Instead, they think something like: "This crypto thingy is certified, so it must be secure." In fact, FIPS 140-2 Level 2 certification only means that certain good algorithms are used, and that there is some level of tamper resistance and tamper evidence. The marketing departments of security companies take advantage of this confusion -- it's not only FIPS 140, it's all the security standards -- and encourage their customers to equate conformance to the standard with security.

So when that equivalence is demonstrated to be false, people are surprised.

Posted on January 8, 2010 at 7:24 AM • 101 Comments

Post-Underwear-Bomber Airport Security

In the headlong rush to "fix" security after the Underwear Bomber's unsuccessful Christmas Day attack, there's been far too little discussion about what worked and what didn't, and what will and will not make us safer in the future.

The security checkpoints worked. Because we screen for obvious bombs, Umar Farouk Abdulmutallab -- or, more precisely, whoever built the bomb -- had to construct a far less reliable bomb than he would have otherwise. Instead of using a timer or a plunger or a reliable detonation mechanism, as would any commercial user of PETN, he had to resort to an ad hoc and much more inefficient homebrew mechanism: one involving a syringe and 20 minutes in the lavatory and we don't know exactly what else. And it didn't work.

Yes, the Amsterdam screeners allowed Abdulmutallab onto the plane with PETN sewn into his underwear, but that's not a failure, either. There is no security checkpoint, run by any government anywhere in the world, designed to catch this. It isn't a new threat; it's more than a decade old. Nor is it unexpected; anyone who says otherwise simply isn't paying attention. But PETN is hard to explode, as we saw on Christmas Day.

Additionally, the passengers on the airplane worked. For years, I've said that exactly two things have made us safer since 9/11: reinforcing the cockpit door and convincing passengers that they need to fight back. It was the second of these that, on Christmas Day, quickly subdued Abdulmutallab after he set his pants on fire.

To the extent security failed, it failed before Abdulmutallab even got to the airport. Why was he issued an American visa? Why didn't anyone follow up on his father's tip? While I'm sure there are things to be improved and fixed, remember that everything is obvious in hindsight. After the fact, it's easy to point to the bits of evidence and claim that someone should have "connected the dots." But before the fact, when there are millions of dots -- some important but the vast majority unimportant -- uncovering plots is a lot harder.

Despite this, the proposed fixes focus on the details of the plot rather than the broad threat. We're going to install full-body scanners, even though there are lots of ways to hide PETN -- stuff it in a body cavity, spread it thinly on a garment -- from the machines. We're going to profile people traveling from 14 countries, even though it's easy for a terrorist to travel from a different country. Seating requirements for the last hour of flight were the most ridiculous example.

The problem with all these measures is that they're only effective if we guess the plot correctly. Defending against a particular tactic or target makes sense if tactics and targets are few. But there are hundreds of tactics and millions of targets, so all these measures will do is force the terrorists to make a minor modification to their plot.

It's magical thinking: If we defend against what the terrorists did last time, we'll somehow defend against what they do next time. Of course this doesn't work. We take away guns and bombs, so the terrorists use box cutters. We take away box cutters and corkscrews, and the terrorists hide explosives in their shoes. We screen shoes, they use liquids. We limit liquids, they sew PETN into their underwear. We implement full-body scanners, and they're going to do something else. This is a stupid game; we should stop playing it.

But we can't help it. As a species, we're hardwired to fear specific stories -- terrorists with PETN underwear, terrorists on subways, terrorists with crop dusters -- and we want to feel secure against those stories. So we implement security theater against the stories, while ignoring the broad threats.

What we need is security that's effective even if we can't guess the next plot: intelligence, investigation, and emergency response. Our foiling of the liquid bombers demonstrates this. They were arrested in London, before they got to the airport. It didn't matter if they were using liquids -- which they chose precisely because we weren't screening for them -- or solids or powders. It didn't matter if they were targeting airplanes or shopping malls or crowded movie theaters. They were arrested, and the plot was foiled. That's effective security.

Finally, we need to be indomitable. The real security failure on Christmas Day was in our reaction. We're reacting out of fear, wasting money on the story rather than securing ourselves against the threat. Abdulmutallab succeeded in causing terror even though his attack failed.

If we refuse to be terrorized, if we refuse to implement security theater and remember that we can never completely eliminate the risk of terrorism, then the terrorists fail even if their attacks succeed.

This essay previously appeared on Sphere, the AOL.com news site.

EDITED TO ADD (1/8): Similar sentiment.

Posted on January 7, 2010 at 1:18 PM • 63 Comments

Another Contest: Fixing Airport Security

Slate is hosting an airport security suggestions contest: ideas "for making airport security more effective, more efficient, or more pleasant." Deadline is midday Friday.

I had already submitted a suggestion before I was asked to be a judge. Since I'm no longer eligible, here's what I sent them:

Reduce the TSA's budget, and spend the money on:

1. Intelligence. Security measures that focus on specific tactics or targets are a waste of money unless we guess the next attack correctly. Security measures that just force the terrorists to make a minor change in their tactics or targets are not money well spent.

2. Investigation. Since the terrorists deliberately choose plots that we're not looking for, the best security is to stop plots before they get to the airport. Remember the arrest of the London liquid bombers.

3. Emergency response. Terrorism's harm depends more on our reactions to attacks than the attacks themselves. We're naturally resilient, but how we respond in those first hours and days is critical.

And as an added bonus, all of these measures protect us against non-airplane terrorism as well. All we have to do is stop focusing on specific movie plots, and start thinking about the overall threat.

Probably not what they were looking for, and certainly not anything the government is even going to remotely consider -- but the smart solution all the same.

Posted on January 7, 2010 at 10:53 AM • 43 Comments

Gift Cards and Employee Retail Theft

Retail theft by employees has always been a problem, but gift cards make it easier:

At the Saks flagship store in Manhattan, a 23-year-old sales clerk was caught recently ringing up $130,000 in false merchandise returns and siphoning the money onto a gift card.

[...]

Many of the gift card crimes are straightforward, frequently involving young sales clerks and smaller amounts than the Saks theft. Among the variations of such crimes, cashiers often do fake refunds of merchandise and then, with the amount refunded, use their registers to electronically fill gift cards, which they take. Or sometimes when shoppers buy gift cards, cashiers give them blank cards and then divert the shoppers' money onto cards for themselves.

That last tactic is particularly Grinch-like.

Posted on January 7, 2010 at 5:46 AM • 25 Comments

Nate Silver on the Risks of Airplane Terrorism

Over at fivethirtyeight.com, Nate Silver crunches the numbers and concludes that, at least as far as terrorism is concerned, air travel is safer than it's ever been:

In the 2000s, a total of 469 passengers (including crew and terrorists) were killed worldwide as the result of Violent Passenger Incidents, 265 of which were on 9/11 itself. No fatal incidents have occurred since nearly simultaneous bombings of two Russian aircraft on 8/24/2004; this makes for the longest streak without a fatal incident since World War II. The overall death toll during the 2000s is about the same as it was during the 1960s, and substantially less than in the 1970s and 1980s, when violent incidents peaked. The worst individual years were 1985, 1988 and 1989, in that order; 2001 ranks fourth.

Of course, there is a lot more air travel now than there was a couple of decades ago. Although worldwide data is difficult to obtain, U.S. air travel generally expanded at rates of 10-15% per year from the 1930s through 9/11. If we assume that U.S. air traffic represents about a third of the worldwide total (the U.S. share of global GDP, which is probably a reasonable proxy, has fairly consistently been between 26-28% during this period), we can estimate the number of deaths from Violent Passenger Incidents per one billion passenger boardings. By this measure, the 2000s tied the 1990s for being the safest on record, each of which were about six times safer than any previous decade. About 22 passengers per one billion enplanements were killed as the result of VPIs during the 2000s; this compares with a rate of about 191 deaths per billion enplanements during the 1960s.

Why? Because over the past decade, the risk of airplane terrorism has been very low:

Over the past decade, according to BTS, there have been 99,320,309 commercial airline departures that either originated or landed within the United States. Dividing by six, we get one terrorist incident per 16,553,385 departures.

These departures flew a collective 69,415,786,000 miles. That means there has been one terrorist incident per 11,569,297,667 miles flown. This distance is equivalent to 1,459,664 trips around the diameter of the Earth, 24,218 round trips to the Moon, or two round trips to Neptune.

Assuming an average airborne speed of 425 miles per hour, these airplanes were aloft for a total of 163,331,261 hours. Therefore, there has been one terrorist incident per 27,221,877 hours airborne. This can also be expressed as one incident per 1,134,245 days airborne, or one incident per 3,105 years airborne.

There were a total of 674 passengers, not counting crew or the terrorists themselves, on the flights on which these incidents occurred. By contrast, there have been 7,015,630,000 passenger enplanements over the past decade. Therefore, the odds of being on a given departure which is the subject of a terrorist incident have been 1 in 10,408,947 over the past decade. By contrast, the odds of being struck by lightning in a given year are about 1 in 500,000. This means that you could board 20 flights per year and still be less likely to be the subject of an attempted terrorist attack than to be struck by lightning.
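
Silver's per-departure and per-mile figures are easy to reproduce from the numbers he quotes (a quick sketch; the six incidents are his count for the decade):

    # Reproducing the quoted arithmetic (figures from Nate Silver's post).
    departures = 99_320_309          # US-touching commercial departures, 2000s
    miles_flown = 69_415_786_000
    incidents = 6                    # terrorist incidents in Silver's dataset
    avg_speed_mph = 425              # assumed average airborne speed

    print(departures / incidents)                 # ~16,553,385 departures per incident
    print(miles_flown / incidents)                # ~11,569,297,667 miles per incident
    hours_aloft = miles_flown / avg_speed_mph     # ~163,331,261 hours airborne
    print(hours_aloft / incidents / 24 / 365.25)  # ~3,105 years airborne per incident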

In 2008, 37,000 people died in automobile accidents -- the lowest number since 1961. Even so, that's more than a 9/11 worth of fatalities every month, month after month, year after year.

There are all sorts of psychological biases that cause us to both misjudge risk and overreact to rare risks, but we can do better than that if we stop and think rationally.

Posted on January 6, 2010 at 2:59 PM • 34 Comments

David Brooks on Resilience in the Face of Security Imperfection

David Brooks makes some very good points in this New York Times op-ed from last week:

All this money and technology seems to have reduced the risk of future attack. But, of course, the system is bound to fail sometimes. Reality is unpredictable, and no amount of computer technology is going to change that. Bureaucracies are always blind because they convert the rich flow of personalities and events into crude notations that can be filed and collated. Human institutions are always going to miss crucial clues because the information in the universe is infinite and events do not conform to algorithmic regularity.

[...]

In a mature nation, President Obama could go on TV and say, “Listen, we’re doing the best we can, but some terrorists are bound to get through.” But this is apparently a country that must be spoken to in childish ways. The original line out of the White House was that the system worked. Don’t worry, little Johnny.

When that didn’t work the official line went to the other extreme. “I consider that totally unacceptable,” Obama said. I’m really mad, Johnny. But don’t worry, I’ll make it all better.

[...]

For better or worse, over the past 50 years we have concentrated authority in centralized agencies and reduced the role of decentralized citizen action. We’ve done this in many spheres of life. Maybe that’s wise, maybe it’s not. But we shouldn’t imagine that these centralized institutions are going to work perfectly or even well most of the time. It would be nice if we reacted to their inevitable failures not with rabid denunciation and cynicism, but with a little resiliency, an awareness that human systems fail and bad things will happen and we don’t have to lose our heads every time they do.

There's a pervasive belief in this society that perfection is possible. So if something bad occurs, it can never be because we just got unlucky. It must be because something went wrong and someone is at fault, and therefore things must be fixed. Sometimes, though, this simply isn't true. Sometimes it's better not to fix things: either there is no fix, or the fix is more expensive than living with the problem, or the side effects of the fix are worse than the problem. And sometimes you can do everything right and have it still turn out wrong. Welcome to the real world.

EDITED TO ADD (1/8): Glenn Greenwald on "The Degrading Effects of Terrorism Fears."

Posted on January 6, 2010 at 10:27 AM27 Comments

TSA Logo Contest

Over at "Ask the Pilot," Patrick Smith has a great idea:

Calling all artists: One thing TSA needs, I think, is a better logo and a snappy motto. Perhaps there's a graphic designer out there who can help with a new rendition of the agency's circular eagle-and-flag motif. I'm imagining a revised eagle, its talons clutching a box cutter and a toothpaste tube. It says "Transportation Security Administration" around the top. Below are the three simple words of the TSA mission statement: "Tedium, Weakness, Farce."

Let's do it. I'm announcing the TSA Logo Contest. Rules are simple: create a TSA logo. People are welcome to give ideas in the comments, but only actual created logos are eligible to compete. (When my website administrator wakes up, I'll ask him how we can post images in the comments.) Contest ends on February 6. Winner receives copies of my books, copies of Patrick Smith's book, an empty 12-ounce bottle labeled "saline" that you can refill and get through any TSA security checkpoint, and a fake boarding pass on any flight for any date.

EDITED TO ADD (1/6): Please leave links to your submissions in the comments, and I will add them to the post. After the contest is over, I'll choose five finalists and post them. The winner will be chosen by popular acclaim.

The Entries:

Sean Flanagan
Tom B
Rhys Gibson
Baz (1)
Baz (2)
Russell Nelson
Kurushio
Cathy
Tonio Loewald
I love to fly and it shows (1)
Evanda
Shesparticular
MrJM
Amy
Hudsn
Auximinus
DS
Pox Voldius
I love to fly and it shows (2)
Brendan McTague
Andy S.
Pope Noonius I
Travis McHale
T
Matthew Williams
Will Imholte


EDITED TO ADD: vote on the finalists here.

Posted on January 6, 2010 at 8:42 AM148 Comments

Breaching the Secure Area in Airports

An unidentified man breached airport security at Newark Airport on Sunday, walking into the secured area through the exit, prompting the evacuation of a terminal and flight delays that continued into the next day. This isn't common, but it happens regularly. The result is always the same, and it's not obvious that fixing the problem is the right solution.

This kind of security breach is inevitable, simply because human guards are not perfect. Sometimes it's someone going in through the out door, unnoticed by a bored guard. Sometimes it's someone running through the checkpoint and getting lost in the crowd. Sometimes it's an open door that should be locked. Amazing as it seems to frequent fliers, the perpetrator often doesn't even know he did anything wrong.

Basically, whenever there is -- or could be -- an unscreened person lost within the secure area of an airport, there are two things the TSA can do. They can say "this isn't a big deal," and ignore it. Or they can evacuate everyone inside the secure area, search every nook and cranny -- inside the large boxes of napkins at the fast food restaurant, above the false ceilings in the bathrooms, everywhere -- looking for anyone hiding or anything anyone hid, and then rescreen everybody: causing delays of six, eight, twelve, or more hours. That's it; those are the options. And there's no way someone in charge will choose to ignore the risk; even if the odds of a terrorist exploit are minuscule, it'll cost him his career if he's wrong.

Several European airports have their security screening organized differently. At Schiphol Airport in Amsterdam, for example, passengers are screened at the gates. This is more expensive and requires a substantially different airport design, but it does mean that if there is a security breach, only the gate has to be evacuated and searched, and the people rescreened.

American airports can do more to secure against this risk, but I'm reasonably sure it's not worth it. We could double the guards to reduce the risk of inattentiveness, and redesign the airports to make this kind of thing less likely, but those are expensive solutions to an already rare problem. As much as I don't like saying it, the smartest thing is probably to live with this occasional but major inconvenience.
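
To make that trade-off concrete, here's a hedged expected-cost sketch. Every number in it is an assumption I've invented for illustration, not a figure from the essay.

    # Toy comparison: live with occasional evacuations, or pay for extra guards
    # that (optimistically) eliminate breaches entirely. All numbers are
    # illustrative assumptions, not real data.
    breaches_per_year = 2                 # assumed rate of terminal-clearing breaches at one airport
    cost_per_evacuation = 5_000_000       # assumed cost of delays, missed flights, and rescreening
    doubled_guards_per_year = 20_000_000  # assumed annual cost of the extra staffing

    expected_cost_of_living_with_it = breaches_per_year * cost_per_evacuation
    print(expected_cost_of_living_with_it)                            # 10000000
    print(expected_cost_of_living_with_it < doubled_guards_per_year)  # True under these assumptions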

This essay originally appeared on ThreatPost.com.

EDITED TO ADD (1/9): A first-person account of the chaos at Newark Airport, with observations and recommendations.

Posted on January 6, 2010 at 6:10 AM56 Comments

Matt Blaze on the New "Unpredictable" TSA Screening Measures

Interesting:

"Unpredictable" security as applied to air passenger screening means that sometimes (perhaps most of the time), certain checks that might detect terrorist activity are not applied to some or all passengers on any given flight. Passengers can't predict or influence when or whether they are be subjected to any particular screening mechanism. And so, the strategy assumes, the would-be terrorist will be forced to prepare for every possible mechanism in the TSA's arsenal, effectively narrowing his or her range of options enough to make any serious mischief infeasible.

But terrorist organizations -- especially those employing suicide bombers -- have very different goals and incentives from those of smugglers, fare beaters and tax cheats. Groups like Al Qaeda aim to cause widespread disruption and terror by whatever means they can, even at great cost to individual members. In particular, they are willing and able to sacrifice -- martyr -- the very lives of their soldiers in the service of that goal. The fate of any individual terrorist is irrelevant as long as the loss contributes to terror and disruption.

Paradoxically, the best terrorist strategy (as long as they have enough volunteers) under unpredictable screening may be to prepare a cadre of suicide bombers for the least rigorous screening to which they might be subjected, and not, as the strategy assumes, for the most rigorous. Sent on their way, each will either succeed at destroying a plane or be caught, but either outcome serves the terrorists' objective.

The problem is that catching someone under a randomized strategy creates a terrible dilemma for the authorities. What do we do when we detect a bomb-wielding terrorist whose device was discovered through the enhanced, randomly applied screening procedure?
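
The incentive mismatch Blaze describes is easy to see with a toy probability model. The model and numbers below are my illustration, not his.

    # Toy model: each bomber can defeat the ordinary checkpoint but is always
    # caught if the randomly applied enhanced screening happens to hit him.
    # A smuggler cares about his own odds of being caught; a terrorist group
    # only cares whether at least one of its volunteers gets through -- and a
    # caught bomber still produces fear and disruption.
    def p_at_least_one_gets_through(volunteers: int, p_enhanced: float) -> float:
        """Probability that at least one bomber avoids the enhanced screening."""
        return 1 - p_enhanced ** volunteers

    # 20% enhanced screening, five volunteers prepared only for the ordinary check:
    print(p_at_least_one_gets_through(5, 0.20))   # ~0.99968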

EDITED TO ADD (1/5): In this blog post, a reader of Andrew Sullivan's blog argues that the terrorist didn't care if he blew the plane up or not, that he went back to his seat instead of detonating the explosive in the toilet precisely because he wanted his fellow passengers to see his attempt -- just in case it failed.

Posted on January 5, 2010 at 11:41 AM48 Comments

Adopting the Israeli Airport Security Model

I've been reading a lot recently -- like this article on the Israeli airport security model, and how we should adopt more of the Israeli security model here in the U.S. This sums up the problem with that idea nicely:

On the other hand, no matter how safe or how wonderful the flying experience on El Al, it is a TINY airline by U.S. standards, with only 38 aircraft, 46 destinations, and fewer than two million passengers in 2008. As near as I can tell, Cairo is their only destination in a majority Muslim country. Delta, before the Northwest merger is included, reported 449 aircraft and 375 destinations.

Ben Gurion Airport is Israel’s primary (not only) international gateway. In 2008, Ben Gurion served 11.1 million international passengers and 470,000 domestic passengers, roughly comparable to the 10 million total served at Sacramento, the airport I use most often. Amsterdam served 47.4 million total, and Detroit served 35.1 million total in 2008.

By American standards, in terms of passengers served, Ben Gurion is a busy regional airport.

Simply put, the Israeli airport security model does not scale.
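
To put rough numbers on "does not scale," here's an illustrative back-of-the-envelope sketch; the figures are my assumptions, not anything from the article.

    # What interview-based, Israeli-style screening would take at U.S. passenger
    # volumes. All figures are rough assumptions for illustration.
    us_enplanements_per_year = 700_000_000   # roughly the U.S. annual figure at the time
    minutes_per_interview = 5                # assumed length of an Israeli-style interview
    hours_per_interviewer_year = 2_000       # one full-time interviewer-year

    interview_hours = us_enplanements_per_year * minutes_per_interview / 60
    interviewers_needed = interview_hours / hours_per_interviewer_year
    print(round(interviewers_needed))        # ~29,000 trained interviewers, before
                                             # breaks, peak loads, or the hundreds
                                             # of airports they'd be spread across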

EDITED TO ADD (1/7): More.

EDITED TO ADD (1/12): Interview with El Al's former head of security.

Posted on January 5, 2010 at 7:04 AM81 Comments

Vatican Admits Perfect Security is Both Impossible and Undesirable

This is refreshing:

Father Lombardi said it was not realistic to think the Vatican could ensure 100% security for the Pope and that security guards appeared to have acted as quickly as possible.

"It seems that they intervened at the earliest possible moment in a situation in which zero risk cannot be achieved," he told the Associated Press news agency.

"People want to see him up close and he's pleased to see them closely too. A zero risk doesn't seem realistic in a situation in which there's a direct rapport with the people."

EDITED TO ADD (1/4): This is particularly enlightened in comparison to the fears that somehow the U.S. president was endangered by people sneaking into a dinner with him. Presidents meet and shake hands with uncleared random people all the time; the Secret Service knows how to deal with that sort of thing.

Posted on January 4, 2010 at 1:15 PM31 Comments

Christmas Bomber: Where Airport Security Worked

With all the talk about the failure of airport security to detect the PETN that the Christmas bomber sewed into his underwear -- and to think I've been using the phrase "underwear bomber" as a joke all these years -- people forget that airport security played an important role in foiling the plot.

In order to get through airport security, Abdulmutallab -- or, more precisely, whoever built the bomb -- had to construct a far less reliable bomb than he would have otherwise; he had to resort to a much less effective detonation mechanism. And, as we've learned, detonating PETN is actually very hard.

Additionally, I don't think it's fair to criticize airport security for not catching the PETN. The security systems at airports aren't designed to catch someone strapping a plastic explosive to his body. Even more strongly: no security system, at any airport, in any country on the planet, is designed to catch someone doing this. This isn't a surprise. It isn't even a new idea. It wasn't even a new idea when I said this to then TSA head Kip Hawley in 2007: "I don't want to even think about how much C4 I can strap to my legs and walk through your magnetometers." You can try to argue that the TSA, and other airport security organizations around the world, should have been redesigned years ago to catch this, but anyone who is surprised by this attack simply hasn't been paying attention.

EDITED TO ADD (1/4): I don't know what to make of this:

Ben Wallace, who used to work at defence firm QinetiQ, one of the companies making the technology, warned it was not a "big silver bullet".

[...]

Mr Wallace said the scanners would probably not have detected the failed Detroit plane plot of Christmas Day.

He said the same of the 2006 airliner liquid bomb plot and of explosives used in the 2005 bombings of three Tube trains and a bus in London.

[...]

He said the "passive millimetre wave scanners" - which QinetiQ helped develop - probably would not have detected key plots affecting passengers in the UK in recent years.

[...]

Mr Wallace told BBC Radio 4's Today programme: "The advantage of the millimetre waves are that they can be used at longer range, they can be quicker and they are harmless to travellers.

"But there is a big but, and the but was in all the testing that we undertook, it was unlikely that it would have picked up the current explosive devices being used by al-Qaeda."

He added: "It probably wouldn't have picked up the very large plot with the liquids in 2006 at Heathrow or indeed the... bombs that were used on the Tube because it wasn't very good and it wasn't that easy to detect liquids and plastics unless they were very solid plastics.

"This is not necessarily the big silver bullet that is somehow being portrayed by Downing Street."

A spokeswoman for QinetiQ said "no single technology can address every eventuality or security risk".

"QinetiQ's passive millimetre wave system, SPO, is a... people-screening system which can identify potential security threats concealed on the human body. It is not a checkpoint security system.

"SPO can effectively shortlist people who may need further investigation, either via other technology such as x-rays, or human intervention such as a pat-down search."

Posted on January 4, 2010 at 6:28 AM83 Comments

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.