January 15, 2010
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1001.html>. These same essays appear in the "Schneier on Security" blog: <http://www.schneier.com/blog>. An RSS feed is available.
In this issue:
In the headlong rush to "fix" security after the Underwear Bomber's unsuccessful Christmas Day attack, there's been far too little discussion about what worked and what didn't, and what will and will not make us safer in the future.
The security checkpoints worked. Because we screen for obvious bombs, Umar Farouk Abdulmutallab -- or, more precisely, whoever built the bomb -- had to construct a far less reliable bomb than he would have otherwise. Instead of using a timer or a plunger or a reliable detonation mechanism, as any commercial user of PETN would, he had to resort to an ad hoc and far less efficient homebrew mechanism: one involving a syringe, 20 minutes in the lavatory, and we don't know exactly what else. And it didn't work.
Yes, the Amsterdam screeners allowed Abdulmutallab onto the plane with PETN sewn into his underwear, but that's not a failure, either. There is no security checkpoint, run by any government anywhere in the world, designed to catch this. It isn't a new threat; it's more than a decade old. Nor is it unexpected; anyone who says otherwise simply isn't paying attention. But PETN is hard to explode, as we saw on Christmas Day.
Additionally, the passengers on the airplane worked. For years, I've said that exactly two things have made us safer since 9/11: reinforcing the cockpit door and convincing passengers that they need to fight back. It was the second of these that, on Christmas Day, quickly subdued Abdulmutallab after he set his pants on fire.
To the extent security failed, it failed before Abdulmutallab even got to the airport. Why was he issued an American visa? Why didn't anyone follow up on his father's tip? While I'm sure there are things to be improved and fixed, remember that everything is obvious in hindsight. After the fact, it's easy to point to the bits of evidence and claim that someone should have "connected the dots." But before the fact, when there are millions of dots -- some important but the vast majority unimportant -- uncovering plots is a lot harder.
Despite this, the proposed fixes focus on the details of the plot rather than the broad threat. We're going to install full-body scanners, even though there are lots of ways to hide PETN -- stuff it in a body cavity, spread it thinly on a garment -- from the machines. We're going to profile people traveling from 14 countries, even though it's easy for a terrorist to travel from a different country. Seating requirements for the last hour of flight were the most ridiculous example.
The problem with all these measures is that they're only effective if we guess the plot correctly. Defending against a particular tactic or target makes sense if tactics and targets are few. But there are hundreds of tactics and millions of targets, so all these measures will do is force the terrorists to make a minor modification to their plot.
It's magical thinking: If we defend against what the terrorists did last time, we'll somehow defend against what they do next time. Of course this doesn't work. We take away guns and bombs, so the terrorists use box cutters. We take away box cutters and corkscrews, and the terrorists hide explosives in their shoes. We screen shoes, they use liquids. We limit liquids, they sew PETN into their underwear. We implement full-body scanners, and they're going to do something else. This is a stupid game; we should stop playing it.
But we can't help it. As a species, we're hardwired to fear specific stories -- terrorists with PETN underwear, terrorists on subways, terrorists with crop dusters -- and we want to feel secure against those stories. So we implement security theater against the stories, while ignoring the broad threats.
What we need is security that's effective even if we can't guess the next plot: intelligence, investigation, and emergency response. Our foiling of the liquid bombers demonstrates this. They were arrested in London, before they got to the airport. It didn't matter if they were using liquids -- which they chose precisely because we weren't screening for them -- or solids or powders. It didn't matter if they were targeting airplanes or shopping malls or crowded movie theaters. They were arrested, and the plot was foiled. That's effective security.
Finally, we need to be indomitable. The real security failure on Christmas Day was in our reaction. We're reacting out of fear, wasting money on the story rather than securing ourselves against the threat. Abdulmutallab succeeded in causing terror even though his attack failed.
If we refuse to be terrorized, if we refuse to implement security theater and remember that we can never completely eliminate the risk of terrorism, then the terrorists fail even if their attacks succeed.
This was my first reaction:
Rachel Maddow interviewed me:
I did a lot of interviews: television, radio, and print.
Jeffrey Goldberg published a Q&A with me for the Atlantic website:
I participated in a "Room for Debate" discussion on airport security profiling; nothing I haven't said before.
I wrote about intelligence failures back in 2002.
I'm pleased that my term "security theater" has finally hit the mainstream. It's everywhere. My favorite variant is "security theater of the absurd."
Excellent commentary from The Register:
Good commentary from former CIA analyst Ray McGovern:
David Brooks on resilience in the face of security imperfection:
Nate Silver on the risks of airplane terrorism:
The comparative risk of terrorism, including my commentary on the topic:
Matt Blaze on the new "unpredictable" TSA screening measures:
Problems with adopting the Israeli airport security model:
It's unclear whether a full body scanner would have caught the Christmas Underwear Bomber:
One company is touting body cavity scanners:
Jon Stewart was pretty funny on January 4.
The U.S. civil rights movement as an insurgency:
A very good four-part series: "Risk and Security in the Telecommunications Industry."
Australia restores some sanity to airport screening; I wonder if it lasted.
MagnePrint technology for credit/debit cards: seems like a solution in search of a problem.
This is very serious: Santa's naughty/nice database was hacked.
Howard Schmidt named U.S. cybersecurity czar:
Luggage Locator: wow, is this a bad idea:
Plant security countermeasures:
Good survey article by Alessandro Acquisti in IEEE Security & Privacy, "The Behavioral Economics of Personal Information":
The Vatican admits that perfect security is both impossible and undesirable:
Retail theft by employees has always been a problem, but gift cards make it easier:
FIPS 140-2 Level 2 certified USB memory stick cracked:
768-bit number factored:
Interesting research on the power law associated with terrorist attacks:
Op-ed on the CIA's National Clandestine Service:
$3.2 million jewelry store theft in Japan by drilling a hole through the wall:
Loretta Napoleoni on the economics of terrorism:
Over at "Ask the Pilot," Patrick Smith has a great idea: "Calling all artists: One thing TSA needs, I think, is a better logo and a snappy motto. Perhaps there's a graphic designer out there who can help with a new rendition of the agency's circular eagle-and-flag motif. I'm imagining a revised eagle, its talons clutching a box cutter and a toothpaste tube. It says 'Transportation Security Administration' around the top. Below are the three simple words of the TSA mission statement: 'Tedium, Weakness, Farce.'"
Let's do it. I'm announcing the TSA Logo Contest. Rules are simple: create a TSA logo. People are welcome to give ideas in the comments, but only actual created logos are eligible to compete. Contest ends on February 6. Winner receives copies of my books, copies of Patrick Smith's book, an empty 12-ounce bottle labeled "saline" that you can refill and get through any TSA security checkpoint, and a fake boarding pass on any flight for any date.
Submit your entry, and view other entries, at the blog post:
I'll post the finalists around February 6th, and then everyone can vote for a winner.
Slate hosted an airport security suggestions contest: ideas "for making airport security more effective, more efficient, or more pleasant." Deadline was last week.
I had already submitted a suggestion before I was asked to be a judge. Since I'm no longer eligible, here's what I sent them:
Reduce the TSA's budget, and spend the money on:
Probably not what they were looking for, and certainly not anything the government is even going to remotely consider -- but the smart solution all the same.
Here are the six links to the face-off Marcus Ranum and I did on stage at the Information Security Decisions conference in Chicago.
President Obama, in his speech last week, rightly focused on fixing the intelligence failures that resulted in Umar Farouk Abdulmutallab being ignored, rather than on technologies targeted at the details of his underwear-bomb plot. But while Obama's instincts are right, reforming intelligence for this new century and its new threats is a more difficult task than he might like. We don't need new technologies, new laws, new bureaucratic overlords, or -- for heaven's sake -- new agencies. What prevents information sharing among intelligence organizations is the culture of the generation that built those organizations.
The U.S. intelligence system is a sprawling apparatus, spanning the FBI and the State Department, the CIA and the National Security Agency, and the Department of Homeland Security -- itself an amalgamation of two dozen different organizations -- designed and optimized to fight the Cold War. The single, enormous adversary then was the Soviet Union: as bureaucratic as they come, with a huge budget, and capable of very sophisticated espionage operations. We needed to defend against technologically advanced electronic eavesdropping operations, their agents trying to bribe or seduce our agents, and a worldwide intelligence gathering capability that hung on our every word.
In that environment, secrecy was paramount. Information had to be protected by armed guards and double fences, shared only among those with appropriate security clearances and a legitimate "need to know," and it was better not to transmit information at all than to transmit it insecurely.
Today's adversaries are different. There are still governments, like China, who are after our secrets. But the secrets they're after are more often corporate than military, and most of the other organizations of interest are like al Qaeda: decentralized, poorly funded and incapable of the intricate spy versus spy operations the Soviet Union could pull off.
Against these adversaries, sharing is far more important than secrecy. Our intelligence organizations need to trade techniques and expertise with industry, and they need to share information among the different parts of themselves. Today's terrorist plots are loosely organized ad hoc affairs, and those dots that are so important for us to connect beforehand might be on different desks, in different buildings, owned by different organizations.
Critics have pointed to laws that prohibited inter-agency sharing but, as the 9/11 Commission found, the law allows for far more sharing than goes on. It doesn't happen because of inter-agency rivalries, a reliance on outdated information systems, and a culture of secrecy. What we need is an intelligence community that shares ideas and hunches and facts on their versions of Facebook, Twitter and wikis. We need the bottom-up organization that has made the Internet the greatest collection of human knowledge and ideas ever assembled.
The problem is far more social than technological. Teaching your mom to "text" and your dad to Twitter doesn't make them part of the Internet generation, and giving all those cold warriors blogging lessons won't change their mentality -- or the culture. The reason this continues to be a problem, the reason President George W. Bush couldn't change things even after the 9/11 Commission came to much the same conclusions as President Obama's recent review did, is generational. The Internet is the greatest generation gap since rock and roll, and it's just as true inside government as out. We might have to wait for the elders inside these agencies to retire and be replaced by people who grew up with the Internet.
This op-ed previously appeared in the San Francisco Chronicle.
Sometimes mediocre encryption is better than strong encryption, and sometimes no encryption is better still.
The Wall Street Journal reported this week that Iraqi, and possibly also Afghan, militants are using commercial software to eavesdrop on U.S. Predators, other unmanned aerial vehicles, or UAVs, and even piloted planes. The systems weren't "hacked" -- the insurgents can't control them -- but because the downlink is unencrypted, they can watch the same video stream as the coalition troops on the ground.
The naive reaction is to ridicule the military. Encryption is so easy that HDTVs do it -- just a software routine and you're done -- and the Pentagon has known about this flaw since Bosnia in the 1990s. But encrypting the data is the easiest part; key management is the hard part. Each UAV needs to share a key with the ground station. These keys have to be produced, guarded, transported, used and then destroyed. And the equipment, both the Predators and the ground terminals, needs to be classified and controlled, and all the users need security clearance.
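The point that encryption itself is the easy part can be shown in a few lines of code. The sketch below is a toy: the HMAC-based keystream is an illustration, not a real cipher (real systems use vetted constructions like AES-GCM), and names like `keystream_xor` and `shared_key` are invented for the example. Notice that the code says nothing about how `shared_key` gets to both ends -- that omission is the entire key management problem.

```python
import hmac
import hashlib
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy counter-mode stream cipher built from HMAC-SHA256.
    Illustration only -- real systems use vetted ciphers such as AES-GCM."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        # Derive a 32-byte keystream block from the key, nonce, and counter.
        block = hmac.new(key, nonce + offset.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, block))
    return bytes(out)

# The easy part: once UAV and ground terminal share a key, this is trivial.
shared_key = secrets.token_bytes(32)  # the hard part is getting this to both ends
nonce = secrets.token_bytes(16)       # must never repeat under the same key

frame = b"video frame 0001"
ciphertext = keystream_xor(shared_key, nonce, frame)

# Symmetric: applying the same operation with the same key decrypts.
assert keystream_xor(shared_key, nonce, ciphertext) == frame
```

Everything difficult lives outside this snippet: producing `shared_key`, transporting it to a field terminal, rotating it, and destroying it -- exactly the lifecycle the essay describes.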
The command and control channel is, and always has been, encrypted -- because that's both more important and easier to manage. UAVs are flown by airmen sitting at comfortable desks on U.S. military bases, where key management is simpler. But the video feed is different. It needs to be available to all sorts of people, of varying nationalities and security clearances, on a variety of field terminals, in a variety of geographical areas, in all sorts of conditions -- with everything constantly changing. Key management in this environment would be a nightmare.
Additionally, how valuable is this video downlink to the enemy? The primary fear seems to be that the militants will watch the video, notice their compound being surveilled, and flee before the missiles hit. Or that they'll notice a bunch of Marines walking through a recognizable area and attack them. This might make a great movie scene, but it's not very realistic. Without context, and just by peeking at random video streams, eavesdroppers pose little risk.
Contrast this with the additional risks if you encrypt: A soldier in the field doesn't have access to the real-time video because of a key management failure; a UAV can't be quickly deployed to a new area because the keys aren't in place; we can't share the video information with our allies because we can't give them the keys; most soldiers can't use this technology because they don't have the right clearances. Given this risk analysis, not encrypting the video is almost certainly the right decision.
There is another option, though. During the Cold War, the NSA's primary adversary was Soviet intelligence, and it developed its crypto solutions accordingly. Even though that level of security makes no sense in Bosnia, and certainly not in Iraq and Afghanistan, it is what the NSA had to offer. If you encrypt, they said, you have to do it "right."
The problem is, the world has changed. Today's insurgent adversaries don't have KGB-level intelligence gathering or cryptanalytic capabilities. At the same time, computer and network data gathering has become much cheaper and easier, so they have technical capabilities the Soviets could only dream of. Defending against these sorts of adversaries doesn't require military-grade encryption only where it counts; it requires commercial-grade encryption everywhere possible.
This sort of solution would require the NSA to develop a whole new level of lightweight commercial-grade security systems for military applications -- not just office-data "Sensitive but Unclassified" or "For Official Use Only" classifications. It would require the NSA to allow keys to be handed to uncleared downlink viewers, and perhaps read over insecure phone lines and stored in people's back pockets. It would require the sort of ad hoc key management systems you find in internet protocols, or in DRM systems. It wouldn't be anywhere near perfect, but it would be more commensurate with the actual threats.
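The ad hoc key establishment found in internet protocols usually means negotiating a fresh key over an insecure channel rather than pre-distributing classified key material. The classic building block is a Diffie-Hellman exchange, sketched below with toy parameters (the 127-bit prime is far too small for real use, and real protocols use standardized groups and authenticate the exchange; the `uav_`/`ground_` names are invented for the example):

```python
import secrets
import hashlib

# Toy Diffie-Hellman parameters. Real protocols use standardized
# 2048-bit-or-larger groups and authenticate the exchange.
p = 2 ** 127 - 1  # a Mersenne prime; far too small for actual security
g = 3

# Each side picks a secret exponent and publishes g^secret mod p.
uav_secret = secrets.randbelow(p - 2) + 1
ground_secret = secrets.randbelow(p - 2) + 1
uav_public = pow(g, uav_secret, p)
ground_public = pow(g, ground_secret, p)

# Both sides compute the same shared value without ever transmitting it.
uav_shared = pow(ground_public, uav_secret, p)
ground_shared = pow(uav_public, ground_secret, p)
assert uav_shared == ground_shared

# Hash the shared value down to a symmetric session key.
session_key = hashlib.sha256(uav_shared.to_bytes(16, "big")).digest()
```

This is the flavor of key management the essay points to: no couriers, no pre-placed key material, keys created on demand and discarded afterward -- commercial-grade rather than Cold-War-grade.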
And it would help defend against a completely different threat facing the Pentagon: The PR threat. Regardless of whether the people responsible made the right security decision when they rushed the Predator into production, or when they convinced themselves that local adversaries wouldn't know how to exploit it, or when they forgot to update their Bosnia-era threat analysis to account for advances in technology, the story is now being played out in the press. The Pentagon is getting beaten up because it's not protecting against the threat -- because it's easy to make a sound bite where the threat sounds really dire. And now it has to defend against the perceived threat to the troops, regardless of whether the defense actually protects the troops or not. Reminds me of the TSA, actually.
So the military is now committed to encrypting the video ... eventually. The next generation Predators, called Reapers -- Who names this stuff? Second-grade boys? -- will have the same weakness. Maybe we'll have encrypted video by 2010, or 2014, but I don't think that's even remotely possible unless the NSA relaxes its key management and classification requirements and embraces a lightweight, less secure encryption solution for these sorts of situations. The real failure here is the failure of the Cold War security model to deal with today's threats.
This essay originally appeared on Wired.com.
An unidentified man breached airport security at Newark Airport on Sunday, walking into the secured area through the exit, prompting the evacuation of a terminal and flight delays that continued into the next day. This isn't common, but it happens regularly. The result is always the same, and it's not obvious that fixing the problem is the right solution.
This kind of security breach is inevitable, simply because human guards are not perfect. Sometimes it's someone going in through the out door, unnoticed by a bored guard. Sometimes it's someone running through the checkpoint and getting lost in the crowd. Sometimes it's an open door that should be locked. Amazing as it seems to frequent fliers, the perpetrator often doesn't even know he did anything wrong.
Basically, whenever there is -- or could be -- an unscreened person lost within the secure area of an airport, there are two things the TSA can do. They can say "this isn't a big deal," and ignore it. Or they can evacuate everyone inside the secure area, search every nook and cranny -- inside the large boxes of napkins at the fast food restaurant, above the false ceilings in the bathrooms, everywhere -- looking for anyone hiding or anything anyone hid, and then rescreen everybody: causing delays of six, eight, twelve, or more hours. That's it; those are the options. And there's no way someone in charge will choose to ignore the risk; even if the odds of a terrorist exploit are minuscule, it'll cost him his career if he's wrong.
Several European airports have their security screening organized differently. At Schiphol Airport in Amsterdam, for example, passengers are screened at the gates. This is more expensive and requires a substantially different airport design, but it does mean that if there is a security breach, only the gate has to be evacuated and searched, and only those people rescreened.
American airports can do more to secure against this risk, but I'm reasonably sure it's not worth it. We could double the guards to reduce the risk of inattentiveness, and redesign the airports to make this kind of thing less likely, but those are expensive solutions to an already rare problem. As much as I don't like saying it, the smartest thing is probably to live with this occasional but major inconvenience.
This essay originally appeared on ThreatPost.com.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2010 by Bruce Schneier.