Blog: December 2009 Archives

Quantum Cryptography Cracked


This presentation will show the first experimental implementation of an eavesdropper for a quantum cryptosystem. Although quantum cryptography has been proven unconditionally secure, by exploiting physical imperfections (a detector vulnerability) we have successfully built an intercept-resend attack and demonstrated eavesdropping under realistic conditions on an installed quantum key distribution line. The actual eavesdropping hardware we have built will be shown during the conference.

While I am very interested in quantum cryptography, I have never been optimistic about its practicality. And it’s always interesting to see provably secure cryptosystems broken.

Posted on December 30, 2009 at 6:04 AM • 55 Comments

Me and the Christmas Underwear Bomber

I spent a lot of yesterday giving press interviews. Nothing I haven’t said before, but it’s now national news and everyone wants to hear it.

These are the most interesting bits. Rachel Maddow interviewed me last night on her show, and Jeffrey Goldberg interviewed me for the Atlantic website. A rewrite of an older article of mine on terrorism and security was also published.

I’ve started to call the bizarre new TSA rules “magical thinking”: if we somehow protect against the specific tactic of the previous terrorist, we make ourselves safe from the next terrorist.

EDITED TO ADD (12/29): I don’t know about this quote:

“I flew 265,000 miles last year,” said Bruce Schneier, a cryptographer and security analyst. “You know what really pisses me off? Making me check my luggage. Not letting me use my laptop, so I can’t work. Taking away my Kindle, so I can’t read. I care about those things. I care about making me safer much, much less.”

For the record, I do care about being safer. I just don’t think any of the airplane security measures proposed by the TSA accomplish that.

Posted on December 29, 2009 at 11:17 AM • 138 Comments

Separating Explosives from the Detonator

Chechen terrorists did it in 2004. I said this in an interview with then TSA head Kip Hawley in 2007:

I don’t want to even think about how much C4 I can strap to my legs and walk through your magnetometers.

And what sort of magical thinking is behind the rumored TSA rule about keeping passengers seated during the last hour of flight? Do we really think terrorists won't think of blowing up their improvised explosive devices during the first hour of flight?

For years I’ve been saying this:

Only two things have made flying safer [since 9/11]: the reinforcement of cockpit doors, and the fact that passengers know now to resist hijackers.

This week, the second one worked over Detroit. Security succeeded.

EDITED TO ADD (12/26): Only one carry on? No electronics for the first hour of flight? I wish that, just once, some terrorist would try something that you can only foil by upgrading the passengers to first class and giving them free drinks.

Posted on December 26, 2009 at 5:43 PM • 261 Comments

Intercepting Predator Video

Sometimes mediocre encryption is better than strong encryption, and sometimes no encryption is better still.

The Wall Street Journal reported this week that Iraqi, and possibly also Afghan, militants are using commercial software to eavesdrop on U.S. Predators, other unmanned aerial vehicles, or UAVs, and even piloted planes. The systems weren’t “hacked”—the insurgents can’t control them—but because the downlink is unencrypted, they can watch the same video stream as the coalition troops on the ground.

The naive reaction is to ridicule the military. Encryption is so easy that HDTVs do it—just a software routine and you’re done—and the Pentagon has known about this flaw since Bosnia in the 1990s. But encrypting the data is the easiest part; key management is the hard part. Each UAV needs to share a key with the ground station. These keys have to be produced, guarded, transported, used and then destroyed. And the equipment, both the Predators and the ground terminals, needs to be classified and controlled, and all the users need security clearance.
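The key-management burden is easy to see in miniature. Below is a toy sketch of a shared-key downlink — emphatically not real cryptography; the HMAC-counter keystream, key sizes, and frame format are all invented for illustration. The point it demonstrates: every terminal that should see the video must already hold the current key, and a terminal with a stale or missing key sees only noise.

```python
import hashlib
import hmac
import itertools
import os

def keystream(key: bytes, nonce: bytes):
    """Toy keystream: HMAC-SHA256 in counter mode (illustration only)."""
    for counter in itertools.count():
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        yield from block

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data against the keystream; symmetric, so it both encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, keystream(key, nonce)))

# The ground station and every field terminal must share this key in advance --
# produced, guarded, transported, used, destroyed, for each of them:
shared_key = os.urandom(32)
nonce = os.urandom(8)

frame = b"video frame 0001"
downlink = xor_cipher(shared_key, nonce, frame)

# A terminal holding the current key recovers the frame...
assert xor_cipher(shared_key, nonce, downlink) == frame
# ...but a terminal whose key wasn't provisioned or rotated in time gets garbage.
stale_key = os.urandom(32)
assert xor_cipher(stale_key, nonce, downlink) != frame
```

Multiply the key-provisioning step by every terminal, nationality, and clearance level in the field, with constant rotation, and the "nightmare" above becomes concrete.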

The command and control channel is, and always has been, encrypted—because that’s both more important and easier to manage. UAVs are flown by airmen sitting at comfortable desks on U.S. military bases, where key management is simpler. But the video feed is different. It needs to be available to all sorts of people, of varying nationalities and security clearances, on a variety of field terminals, in a variety of geographical areas, in all sorts of conditions—with everything constantly changing. Key management in this environment would be a nightmare.

Additionally, how valuable is this video downlink to the enemy? The primary fear seems to be that the militants watch the video, notice their compound being surveilled and flee before the missiles hit. Or notice a bunch of Marines walking through a recognizable area and attack them. This might make a great movie scene, but it’s not very realistic. Without context, and just by peeking at random video streams, the risk caused by eavesdropping is low.

Contrast this with the additional risks if you encrypt: A soldier in the field doesn’t have access to the real-time video because of a key management failure; a UAV can’t be quickly deployed to a new area because the keys aren’t in place; we can’t share the video information with our allies because we can’t give them the keys; most soldiers can’t use this technology because they don’t have the right clearances. Given this risk analysis, not encrypting the video is almost certainly the right decision.

There is another option, though. During the Cold War, the NSA’s primary adversary was Soviet intelligence, and it developed its crypto solutions accordingly. Even though that level of security makes no sense in Bosnia, and certainly not in Iraq and Afghanistan, it is what the NSA had to offer. If you encrypt, they said, you have to do it “right.”

The problem is, the world has changed. Today’s insurgent adversaries don’t have KGB-level intelligence gathering or cryptanalytic capabilities. At the same time, computer and network data gathering has become much cheaper and easier, so they have technical capabilities the Soviets could only dream of. Defending against these sorts of adversaries doesn’t require military-grade encryption only where it counts; it requires commercial-grade encryption everywhere possible.

This sort of solution would require the NSA to develop a whole new level of lightweight commercial-grade security systems for military applications—not just office-data “Sensitive but Unclassified” or “For Official Use Only” classifications. It would require the NSA to allow keys to be handed to uncleared UAV operators, and perhaps read over insecure phone lines and stored in people’s back pockets. It would require the sort of ad hoc key management systems you find in internet protocols, or in DRM systems. It wouldn’t be anywhere near perfect, but it would be more commensurate with the actual threats.

And it would help defend against a completely different threat facing the Pentagon: The PR threat. Regardless of whether the people responsible made the right security decision when they rushed the Predator into production, or when they convinced themselves that local adversaries wouldn’t know how to exploit it, or when they forgot to update their Bosnia-era threat analysis to account for advances in technology, the story is now being played out in the press. The Pentagon is getting beaten up because it’s not protecting against the threat—because it’s easy to make a sound bite where the threat sounds really dire. And now it has to defend against the perceived threat to the troops, regardless of whether the defense actually protects the troops or not. Reminds me of the TSA, actually.

So the military is now committed to encrypting the video … eventually. The next generation Predators, called Reapers—Who names this stuff? Second-grade boys?—will have the same weakness. Maybe we’ll have encrypted video by 2010, or 2014, but I don’t think that’s even remotely possible unless the NSA relaxes its key management and classification requirements and embraces a lightweight, less secure encryption solution for these sorts of situations. The real failure here is the failure of the Cold War security model to deal with today’s threats.

This essay originally appeared on

EDITED TO ADD (12/24): Good article from The New Yorker on the uses—and politics—of these UAVs.

EDITED TO ADD (12/30): Error corrected—”uncleared UAV operators” should have read “uncleared UAV viewers.” The point is that the operators in the U.S. are cleared and their communications are encrypted, but the viewers in Asia are uncleared and the data is unencrypted.

Posted on December 24, 2009 at 5:24 AM • 92 Comments

Plant Security Countermeasures

The essay is about veganism and plant eating, but I found the descriptions of plant security countermeasures interesting:

Plants can’t run away from a threat but they can stand their ground. “They are very good at avoiding getting eaten,” said Linda Walling of the University of California, Riverside. “It’s an unusual situation where insects can overcome those defenses.” At the smallest nip to its leaves, specialized cells on the plant’s surface release chemicals to irritate the predator or sticky goo to entrap it. Genes in the plant’s DNA are activated to wage systemwide chemical warfare, the plant’s version of an immune response. We need terpenes, alkaloids, phenolics—let’s move.

“I’m amazed at how fast some of these things happen,” said Consuelo M. De Moraes of Pennsylvania State University. Dr. De Moraes and her colleagues did labeling experiments to clock a plant’s systemic response time and found that, in less than 20 minutes from the moment the caterpillar had begun feeding on its leaves, the plant had plucked carbon from the air and forged defensive compounds from scratch.

Just because we humans can’t hear them doesn’t mean plants don’t howl. Some of the compounds that plants generate in response to insect mastication—their feedback, you might say—are volatile chemicals that serve as cries for help. Such airborne alarm calls have been shown to attract both large predatory insects like dragonflies, which delight in caterpillar meat, and tiny parasitic insects, which can infect a caterpillar and destroy it from within.

Enemies of the plant’s enemies are not the only ones to tune into the emergency broadcast. “Some of these cues, some of these volatiles that are released when a focal plant is damaged,” said Richard Karban of the University of California, Davis, “cause other plants of the same species, or even of another species, to likewise become more resistant to herbivores.”

There’s more in the essay.

Posted on December 23, 2009 at 7:50 AM • 22 Comments

Luggage Locator

Wow, is this a bad idea:

The Luggage Locator is an innovative product that travellers or anyone can use to locate items. It has been specifically engineered to help people find their luggage quickly and can also be used around the home or office.

A battery operated, two unit system, the Luggage Locator consists of a small transmitter about the size of a key chain and a lightweight receiver that attaches to any luggage handle. With the simple push of a button, the transmitter activates the receiver causing a bright flashing light and loud chirping sound. Locating your luggage after a long trip has never been quicker nor easier.

Anyone care to guess what’s most likely to happen if a piece of luggage in an airport starts flashing and chirping? I think it’ll be taken out to the tarmac and blown up using remote controlled bazookas.

Posted on December 22, 2009 at 12:20 PM • 55 Comments

Howard Schmidt to be Named U.S. Cybersecurity Czar

I heard this rumor two days ago, and The New York Times is reporting it today.

Reporters are calling me for reactions and opinions, but I just don’t know. Schmidt is good, but I don’t know if anyone can do well in a job with lots of responsibility but no actual authority. But maybe Obama will imbue the position with authority—I don’t know.

Posted on December 22, 2009 at 9:28 AM • 20 Comments

MagnePrint Technology for Credit/Debit Cards

This seems like a solution in search of a problem:

MagTek discovered that no two magnetic strips are identical. This is due to the manufacturing process. Similar to DNA, the structure of every magnetic stripe is different and the differences are distinguishable.

Knowing that, MagTek pairs the card’s magnetic strip signature with the card user’s personal data to create a one-of-a-kind digital identifier. MagTek calls this technology MagnePrint.

Basically, each card gets a “fingerprint” of the magnetic strip printed on it. And the reader (merchant terminal, ATM, whatever) verifies not only the card information, but the fingerprint as well. So a thief can’t skim your card information and make another card.
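The verification step can be sketched as a noisy-template match. Everything below is hypothetical — the signature representation, the similarity metric, and the threshold are invented for illustration, not drawn from MagTek's actual scheme. The point is that a legitimate re-swipe matches the stored fingerprint within tolerance, while a cloned stripe carrying identical track data does not:

```python
import random

def similarity(a, b):
    """Mean absolute difference between two normalized stripe signatures
    (a stand-in for whatever matching metric the real system uses)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def verify(stored, reading, threshold=0.2):
    """Accept the card if the fresh reading is close enough to the
    fingerprint on file (hypothetical threshold)."""
    return similarity(stored, reading) < threshold

rng = random.Random(42)
genuine = [rng.uniform(-1, 1) for _ in range(64)]              # fingerprint on file
noisy_read = [s + rng.uniform(-0.05, 0.05) for s in genuine]   # same card, new swipe
clone = [rng.uniform(-1, 1) for _ in range(64)]                # right data, wrong stripe

assert verify(genuine, noisy_read)   # genuine card passes despite read noise
assert not verify(genuine, clone)    # clone has a different physical signature
```

The tolerance is the design tension: too tight and worn cards get rejected, too loose and a skimmer might pass.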

I see a couple of issues here. One, any fraud solution that requires the credit card companies to issue new readers simply isn’t going to happen in the U.S. If it were, we’d have embedded chips in our credit cards already. Trying to convince the merchants to type additional data in by hand isn’t going to work, either. We finally got merchants to type in a 3–4 digit CVV code—that basically does the same thing as this idea (albeit with less security).

Two, physically cloning cards is much less of a threat than virtually cloning them: buying things over the phone and Internet, etc. Yes, there are losses here, but I’m sure they’re not great enough to justify all of this infrastructure change.

Still, a clever security idea. I expect there’s an application for this somewhere.

Posted on December 18, 2009 at 6:32 AM • 72 Comments

Australia Restores Some Sanity to Airport Screening

Welcome news:

Carry-on baggage rules will be relaxed under a shake-up of aviation security announced by the Federal Government today.

The changes will see passengers again allowed to carry some sharp implements, such as nail files and clippers, umbrellas, crochet and knitting needles on board aircraft from July next year.

Metal cutlery will return to cabin meals and airport restaurants following Government recognition that security arrangements must be targeted at ‘real risks’.

I’m sure these rules won’t apply to flights to the U.S., where security arrangements must still be targeted at movie-plot threats.

Posted on December 17, 2009 at 12:54 PM • 32 Comments

The Politics of Power in Cyberspace

Thoughtful blog post by The Atlantic‘s Marc Ambinder:

We allow Google, credit companies and all manner of private corporations to collect intimate information about our lives, but we reflexively recoil when the government proposes to monitor (and not even collect) a fraction of that information, even with legal safeguards. We carry in our wallets credit cards with RFID chips. Data companies send unmarked vans through our neighborhoods, mapping wireless networks. The IBM scientist and tech guru Jeff Jonas noted on his blog that every time we send a text message, we’re contributing to a cloud where “powerful analytics commingle space-time-travel data with tertiary data.” Geolocated tweets can tell everyone where we are, what we’re doing, and who we like. Sure, the data is ostensibly anonymized, but the reality is a bit different: we provide so much of it that, as Jonas notes, we tend to re-identify ourselves—out our identity—fairly quickly. This is good and bad; the world becomes more efficient, we leave less of a footprint, we get what we want more quickly. But we also sacrifice privacy, individuality, and other goods that can’t be measured in dollars and cents.

Government power is just different than corporate power. Our engagement with technology implies a certain consent to give up information to companies. A deeper mistrust of government is healthy, so far as it places pressure on lawmakers to properly oversee the exercise of state power. Warrantless domestic surveillance by the NSA during the Bush administration doubtless ensnared a number of innocent Americans and monitored the communications of people who posed no harm to anyone. Where the standard is personal privacy and the rule of law, the violation is severe.

But where the standard is harm, the damage is minimal compared to the information that is routinely and legally collected by non-state entities—information that is used to target us for political appeals, to sell us something, or to steal money, to pilfer intellectual property or abuse technology. 85 percent of infrastructure in this country is in private hands; it is extremely vulnerable to attack and even to catastrophic resource failure.


This asymmetry is distorting the politics of cyber security. It frustrates the front line cyber folks to no end, but they are, in some ways, responsible for it.

For one thing, the NSA lacks credibility with many Americans and with some lawmakers because of its aforementioned activities. And yet the NSA is—really—the only entity with the expertise, the size, and the capability to secure the cyber realm. For another, the government remains obsessed with secrecy. The NSA and the Department of Defense can penetrate virtually any computer network on the face of the planet, and probably do so with regularity for defense purposes. Their capabilities in this “offensive” realm are awesome, and kind of scary. The technology that’ll be used to defend the country from cyber attacks of all types is the same technology used to track insurgents in Iraq (classified), tap into terrorist net-centered communications (classified), probe nation-state computer defenses (classified), figure out how to electronically hack into missile guidance systems (classified). Also: they’re worried that terrorists would figure out how vulnerable we really are if they knew everything. Here’s the weird part: China, Russia, savvy cyber terrorists—they know all this. They have the same technology.

My essay on who should be in charge of cybersecurity.

Posted on December 17, 2009 at 6:10 AM • 38 Comments

The U.S. Civil Rights Movement as an Insurgency

This is interesting:

Most Americans fail to appreciate that the Civil Rights movement was about the overthrow of an entrenched political order in each of the Southern states, that the segregationists who controlled this order did not hesitate to employ violence (law enforcement, paramilitary, mob) to preserve it, and that for nearly a century the federal government tacitly or overtly supported the segregationist state governments. That the Civil Rights movement employed nonviolent tactics should fool us no more than it did the segregationists, who correctly saw themselves as being at war. Significant change was never going to occur within the political system: it had to be forced. The aim of the segregationists was to keep the federal government on the sidelines. The aim of the Civil Rights movement was to “capture” the federal government—to get it to apply its weight against the Southern states. As to why it matters: a major reason we were slow to grasp the emergence and extent of the insurgency in Iraq is that it didn’t—and doesn’t—look like a classic insurgency. In fact, the official Department of Defense definition of insurgency still reflects a Vietnam era understanding of the term. Looking at the Civil Rights movement as an insurgency is useful because it assists in thinking more comprehensively about the phenomenon of insurgency and assists in a more complete—and therefore more useful—definition of the term.

The link to his talk is broken, unfortunately.

EDITED TO ADD (12/15): Video here. Thanks, mcb.

Posted on December 15, 2009 at 7:57 AM • 45 Comments

U.S./Russia Cyber Arms Control Talks

Now this is interesting:

The United States has begun talks with Russia and a United Nations arms control committee about strengthening Internet security and limiting military use of cyberspace.


The Russians have held that the increasing challenges posed by military activities to civilian computer networks can be best dealt with by an international treaty, similar to treaties that have limited the spread of nuclear, chemical and biological weapons. The United States had resisted, arguing that it was impossible to draw a line between the commercial and military uses of software and hardware.


A State Department official, who was not authorized to speak about the talks and requested anonymity, disputed the Russian characterization of the American position. While the Russians have continued to focus on treaties that may restrict weapons development, the United States is hoping to use the talks to increase international cooperation in opposing Internet crime. Strengthening defenses against Internet criminals would also strengthen defenses against any military-directed cyberattacks, the United States maintains.


The American interest in reopening discussions shows that the Obama administration, even in the absence of a designated Internet security chief, is breaking with the Bush administration, which declined to talk with Russia about issues related to military attacks using the Internet.

I’m not sure what can be achieved here, but talking is always good.

I just posted about cyberwar policy.

Posted on December 14, 2009 at 6:46 AM • 33 Comments

Obama's Cybersecurity Czar

Rumors are that RSA president Art Coviello declined the job. No surprise: it has no actual authority but a lot of responsibility.

Security experts have pointed out that previous cybersecurity positions, cybersecurity czars and directors at the Department of Homeland Security, have been unable to make any significant changes to lock down federal systems. Virtually nothing can get done without some kind of budgetary authority, security expert Bruce Schneier has said about the vacant position. An advisor can set priorities and try to carry them out, but won’t have the clout to force government agencies to make changes and adhere to policies.

For the record, I was never approached. But I would certainly decline; this is a political job, and someone political needs to fill it.

I’ve written about this before—also, the last paragraph here:

And if you’re going to appoint a cybersecurity czar, you have to give him actual budgetary authority—otherwise he won’t be able to get anything done, either.

Maybe we should do a reality TV show: “America’s Next Cybersecurity Czar.”

EDITED TO ADD (12/12): Commentary.

Posted on December 11, 2009 at 6:37 AM • 27 Comments

Reacting to Security Vulnerabilities

Last month, researchers found a security flaw in the SSL protocol, which is used to protect sensitive web data. The protocol is used for online commerce, webmail, and social networking sites. Basically, hackers could hijack an SSL session and execute commands without the knowledge of either the client or the server. The list of affected products is enormous.
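Stripped of protocol details, the attack shape (this was the TLS renegotiation flaw) is simple enough to model in a few lines. This is a toy model with hypothetical request strings and no actual TLS: the renegotiation splice let a man-in-the-middle's plaintext prefix be glued onto the victim's authenticated request, so the server treats both as one client:

```python
# Attacker injects bytes before renegotiation; victim's real request
# follows after. A classic trick ends the attacker's data with a
# dangling header name, so the victim's request line is swallowed:
attacker_prefix = b"GET /evil HTTP/1.1\r\nX-Ignore: "
victim_request = b"GET /account HTTP/1.1\r\nCookie: secret\r\n\r\n"

# The flaw: the server concatenates the two as one authenticated stream.
what_server_sees = attacker_prefix + victim_request

# The server executes the attacker's request line...
assert what_server_sees.startswith(b"GET /evil")
# ...with the victim's credentials attached.
assert b"Cookie: secret" in what_server_sees
```

Neither client nor server sees anything amiss, which is exactly what makes the flaw serious — and also why only the vendors, not users, could fix it.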

If this sounds serious to you, you’re right. It is serious. Given that, what should you do now? Should you not use SSL until it’s fixed, and only pay for internet purchases over the phone? Should you download some kind of protection? Should you take some other remedial action? What?

If you read the IT press regularly, you’ll see this sort of question again and again. The answer for this particular vulnerability, as for pretty much any other vulnerability you read about, is the same: do nothing. That’s right, nothing. Don’t panic. Don’t change your behavior. Ignore the problem, and let the vendors figure it out.

There are several reasons for this. One, it’s hard to figure out which vulnerabilities are serious and which are not. Vulnerabilities such as this happen multiple times a month. They affect different software, different operating systems, and different web protocols. The press either mentions them or not, somewhat randomly; just because it’s in the news doesn’t mean it’s serious.

Two, it’s hard to figure out if there’s anything you can do. Many vulnerabilities affect operating systems or Internet protocols. The only sure fix would be to avoid using your computer. Some vulnerabilities have surprising consequences. The SSL vulnerability mentioned above could be used to hack Twitter. Did you expect that? I sure didn’t.

Three, the odds of a particular vulnerability affecting you are small. There are a lot of fish in the Internet, and you’re just one of billions.

Four, often you can’t do anything. These vulnerabilities affect clients and servers, individuals and corporations. A lot of your data isn’t under your direct control—it’s on your web-based email servers, in some corporate database, or in a cloud computing application. If a vulnerability affects the computers running Facebook, for example, your data is at risk, whether you log in to Facebook or not.

It’s much smarter to have a reasonable set of default security practices and continue doing them. This includes:

1. Install an antivirus program if you run Windows, and configure it to update daily. It doesn’t matter which one you use; they’re all about the same. For Windows, I like the free version of AVG Internet Security. Apple Mac and Linux users can ignore this, as virus writers target the operating system with the largest market share.

2. Configure your OS and network router properly. Microsoft’s operating systems come with a lot of security enabled by default; this is good. But have someone who knows what they’re doing check the configuration of your router, too.

3. Turn on automatic software updates. This is the mechanism by which your software patches itself in the background, without you having to do anything. Make sure it’s turned on for your computer, OS, security software, and any applications that have the option. Yes, you have to do it for everything, as they often have separate mechanisms.

4. Show common sense regarding the Internet. This might be the hardest thing, and the most important. Know when an email is real, and when you shouldn’t click on the link. Know when a website is suspicious. Know when something is amiss.

5. Perform regular backups. This is vital. If you’re infected with something, you may have to reinstall your operating system and applications. Good backups ensure you don’t lose your data—documents, photographs, music—if that becomes necessary.
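For item 5, a backup doesn't need to be elaborate to be useful. Here's a minimal sketch — directory names and the naming scheme are arbitrary — that zips a folder into a timestamped archive; a real setup would schedule this and copy the archive to another machine or drive:

```python
import datetime
import pathlib
import shutil
import tempfile

def backup(src_dir: str, dest_dir: str) -> str:
    """Zip src_dir into dest_dir under a timestamped name; return the archive path."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = pathlib.Path(dest_dir) / f"backup-{stamp}"
    return shutil.make_archive(str(dest), "zip", src_dir)

# Demo against throwaway directories:
with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    (pathlib.Path(src) / "notes.txt").write_text("irreplaceable data")
    archive = backup(src, dst)
    print(archive)  # e.g. .../backup-20091210-131300.zip
```

A backup that runs automatically, however crude, beats a sophisticated one you forget to run.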

That’s basically it. I could give a longer list of safe computing practices, but this short one is likely to keep you safe. After that, trust the vendors. They spent all last month scrambling to fix the SSL vulnerability, and they’ll spend all this month scrambling to fix whatever new vulnerabilities are discovered. Let that be their problem.

Posted on December 10, 2009 at 1:13 PM • 41 Comments

TSA Publishes Standard Operating Procedures

BoingBoing is pretty snarky:

The TSA has published a “redacted” version of their s00per s33kr1t screening procedure guidelines (Want to know whether to frisk a CIA operative at the checkpoint? Now you can!). Unfortunately, the security geniuses at the DHS don’t know that drawing black blocks over the words you want to eliminate from your PDF doesn’t actually make the words go away, and can be defeated by nefarious al Qaeda operatives through a complex technique known as ctrl-a/ctrl-c/ctrl-v. Thankfully, only the most elite terrorists would be capable of matching wits with the technology brilliance on display at the agency charged with defending our nation’s skies by ensuring that imaginary hair-gel bombs are kept off of airplanes.

TSA is launching a “full review” to determine how this could have happened. I’ll save them the effort: someone screwed up.

In a statement Tuesday night, the TSA sought to minimize the impact of the unintentional release—calling the document “outdated,” “unclassified” and unimplemented—while saying that it took the incident “very seriously,” and “took swift action” when it was discovered.

Yeah, right.

The original link to the document is dead, but here’s the unredacted document.

I’ve skimmed it, and haven’t found anything terribly interesting. Here’s what I noticed:

One of the redacted sections, for example, indicates that an armed law enforcement officer in or out of uniform may pass beyond the checkpoint without screening after providing a U.S. government-issued photo ID and “Notice of LEO Flying Armed Document.”

Some commercial airline pilots receive training by the U.S. Marshals Service and are allowed to carry TSA-issued firearms on planes. They can pass through without screening only after presenting “bonafide credentials and aircraft operator photo ID,” the document says.

Foreign dignitaries equivalent to cabinet rank and above, accompanying a spouse, their children under the age of 12, and a State Department escort are exempt from screening.

There are also references to a CIA program called WOMAP, the Worldwide Operational Meet and Assist Program. As part of WOMAP, foreign dignitaries and their escorts—authorized CIA representatives—are exempt from screening, provided they’re approved in advance by TSA’s Office of Intelligence.

Passengers carrying passports from Cuba, Iran, North Korea, Libya, Syria, Sudan, Afghanistan, Lebanon, Somalia, Iraq, Yemen or Algeria are to be designated for selective screening.

Although only a few portions of the document were redacted, the manual contains other tidbits that weren’t redacted, such as a thorough description of diplomatic pouches that are exempt from screening.

I’m a little bit saddened when we all make a big deal about how dumb people are at redacting digital documents. We’ve had a steady stream of these badly redacted documents, and I don’t want to lose that. I also don’t want agencies deciding not to release documents at all, rather than risk this sort of embarrassment.

EDITED TO ADD (12/10): News:

Five Transportation Security Administration employees have been placed on administrative leave after a sensitive airport security manual was posted on the Internet, the agency announced Wednesday.

EDITED TO ADD (12/12): Did the TSA compromise an intelligence program?

Posted on December 10, 2009 at 6:47 AM • 52 Comments

My Reaction to Eric Schmidt

Schmidt said:

I think judgment matters. If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place. If you really need that kind of privacy, the reality is that search engines—including Google—do retain this information for some time and it’s important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities.

This, from 2006, is my response:

Privacy protects us from abuses by those in power, even if we’re doing nothing wrong at the time of surveillance.

We do nothing wrong when we make love or go to the bathroom. We are not deliberately hiding anything when we seek out private places for reflection or conversation. We keep private journals, sing in the privacy of the shower, and write letters to secret lovers and then burn them. Privacy is a basic human need.


For if we are observed in all matters, we are constantly under threat of correction, judgment, criticism, even plagiarism of our own uniqueness. We become children, fettered under watchful eyes, constantly fearful that—either now or in the uncertain future—patterns we leave behind will be brought back to implicate us, by whatever authority has now become focused upon our once-private and innocent acts. We lose our individuality, because everything we do is observable and recordable.


This is the loss of freedom we face when our privacy is taken from us. This is life in former East Germany, or life in Saddam Hussein’s Iraq. And it’s our future as we allow an ever-intrusive eye into our personal, private lives.

Too many wrongly characterize the debate as “security versus privacy.” The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of a police state. And that’s why we should champion privacy even when we have nothing to hide.

EDITED TO ADD: See also Daniel Solove’s “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy.”

Posted on December 9, 2009 at 12:22 PM143 Comments

Emotional Epidemiology

This, from The New England Journal of Medicine, sounds familiar:

This is the story line for most headline-grabbing illnesses—HIV, Ebola virus, SARS, typhoid. These diseases capture our imagination and ignite our fears in ways that more prosaic illnesses do not. These dramatic stakes lend themselves quite naturally to thriller books and movies; Dustin Hoffman hasn’t starred in any blockbusters about emphysema or dysentery.

When the inoculum of dramatic illness is first introduced into society, the public psyche rapidly becomes infected. Almost like an IgE-mediated histamine release, there is an immediate flooding of fear, even if the illness—like Ebola—is infinitely less likely to cause death than, say, a run-in with the Second Avenue bus. This immediate fear of the unknown was what had all my patients demanding the as-yet-unproduced H1N1 vaccine last spring.

As the novel disease establishes itself within society, a certain amount of emotional tolerance is created. H1N1 infection waxed and waned over the summer, and my patients grew less anxious. There was, of course, no medical basis for this decreased vigilance. Unusual risk groups and atypical seasonality should, in fact, have raised concern. By late summer, the perceived mysteriousness of H1N1 had receded, and the number of messages on my clinic phone followed suit.

But emotional epidemiology does not remain static. As autumn rolled around, I sensed a peeved expectation from my patients that this swine flu problem should have been solved already. The fact that it wasn’t “solved,” that the medical profession seemed somehow to be dithering, created an uneasy void. Not knowing whether to succumb to panic or to indifference, patients instead grew suspicious.

Posted on December 9, 2009 at 6:43 AM20 Comments

Using Fake Documents to Get a Valid U.S. Passport

I missed this story:

Since 2007, the U.S. State Department has been issuing high-tech “e-passports,” which contain computer chips carrying biometric data to prevent forgery. Unfortunately, according to a March report from the Government Accountability Office (GAO), getting one of these supersecure passports under false pretenses isn’t particularly difficult for anyone with even basic forgery skills.

A GAO investigator managed to obtain four genuine U.S. passports using fake names and fraudulent documents. In one case, he used the Social Security number of a man who had died in 1965. In another, he used the Social Security number of a fictitious 5-year-old child created for a previous investigation, along with an ID showing that he was 53 years old. The investigator then used one of the fake passports to buy a plane ticket, obtain a boarding pass, and make it through a security checkpoint at a major U.S. airport. (When presented with the results of the GAO investigation, the State Department agreed that there was a “major vulnerability” in the passport issuance process and agreed to study the matter.)

More than 70 countries have adopted the biometric passports, which officials describe as a revolution in immigration security. However, the GAO’s investigation proves that even the best technology can’t keep a country safe when the bureaucracy behind it fails.

No credential can be more secure than its breeder documents and issuance procedures.

Posted on December 8, 2009 at 6:05 AM64 Comments

Terrorists Targeting High-Profile Events

In an AP story on increased security at major football (the American variety) events, this sentence struck me:

“High-profile events are something that terrorist groups would love to interrupt somehow,” said Anthony Mangione, chief of U.S. Immigration and Customs Enforcement’s Miami office.

This is certainly the conventional wisdom, but is there any actual evidence that it’s true? The 9/11 terrorists could have easily chosen a different date and a major event—sporting or other—to target, but they didn’t. The London and Madrid train bombers could have just as easily chosen more high-profile events to bomb, but they didn’t. The Mumbai terrorists chose an ordinary day and ordinary targets. Aum Shinrikyo chose an ordinary day and ordinary train lines. Timothy McVeigh chose the ordinary Oklahoma City Federal Building. Irish terrorists chose, and Palestinian terrorists continue to choose, ordinary targets. Some of this can be attributed to the fact that ordinary targets are easier targets, but not a lot of it.

The only examples that come to mind of terrorists choosing high-profile events or targets are the idiot wannabe terrorists who would have been incapable of doing anything unless egged on by a government informant. Hardly convincing evidence.

Yes, I’ve seen the movie Black Sunday. But is there any reason to believe that terrorists want to target these sorts of events other than us projecting our own fears and prejudices onto the terrorists’ motives?

I wrote about protecting the World Series some years ago.

Posted on December 7, 2009 at 7:53 AM79 Comments

Sprint Provides U.S. Law Enforcement with Cell Phone Customer Location Data

Wired summarizes research by Christopher Soghoian:

Sprint Nextel provided law enforcement agencies with customer location data more than 8 million times between September 2008 and October 2009, according to a company manager who disclosed the statistic at a non-public interception and wiretapping conference in October.

The manager also revealed the existence of a previously undisclosed web portal that Sprint provides law enforcement to conduct automated “pings” to track users. Through the website, authorized agents can type in a mobile phone number and obtain global positioning system (GPS) coordinates of the phone.

From Soghoian’s blog:

Sprint Nextel provided law enforcement agencies with its customers’ (GPS) location information over 8 million times between September 2008 and October 2009. This massive disclosure of sensitive customer information was made possible due to the roll-out by Sprint of a new, special web portal for law enforcement officers.

The evidence documenting this surveillance program comes in the form of an audio recording of Sprint’s Manager of Electronic Surveillance, who described it during a panel discussion at a wiretapping and interception industry conference, held in Washington DC in October of 2009.

It is unclear if Federal law enforcement agencies’ extensive collection of geolocation data should have been disclosed to Congress pursuant to a 1999 law that requires the publication of certain surveillance statistics—since the Department of Justice simply ignores the law, and has not provided the legally mandated reports to Congress since 2004.

Sprint denies this; details in the Wired article. The odds of us ever learning the truth are probably very low.

Posted on December 3, 2009 at 7:18 AM52 Comments

The Security Implications of Windows Volume Shadow Copy

It can be impossible to securely delete a file:

What are the security implications of Volume Shadow Copy?

Suppose you decide to protect one of your documents from prying eyes. First, you create an encrypted copy using an encryption application. Then, you “wipe” (or “secure-delete”) the original document, which consists of overwriting it several times and deleting it. (This is necessary, because if you just deleted the document without overwriting it, all the data that was in the file would physically remain on the disk until it got overwritten by other data. See question above for an explanation of how file deletion works.)

Ordinarily, this would render the original, unencrypted document irretrievable. However, if the original file was stored on a volume protected by the Volume Shadow Copy service and it was there when a restore point was created, the original file will be retrievable using Previous versions. All you need to do is right-click the containing folder, click Restore previous versions, open a snapshot, and, lo and behold, you’ll see the original file that you tried so hard to delete!

The reason wiping the file doesn’t help, of course, is that before the file’s blocks get overwritten, VSC will save them to the shadow copy. It doesn’t matter how many times you overwrite the file, the shadow copy will still be there, safely stored on a hidden volume.

Is there a way to securely delete a file on a volume protected by VSC?

No. Shadow copies are read-only, so there is no way to delete a file from all the shadow copies.
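The overwrite-then-delete approach described in the excerpt can be sketched in a few lines of Python. This is an illustrative sketch, not a vetted secure-deletion tool—and, as the excerpt explains, on a volume protected by Volume Shadow Copy the original blocks may already live in a read-only shadow copy that this code cannot touch:

```python
import os

def wipe_file(path, passes=3):
    """Overwrite a file's contents in place with random data, then delete it.

    Caveat: on a VSC-protected volume, the pre-overwrite blocks may already
    have been saved to a shadow copy, so this does NOT guarantee the data
    is unrecoverable. SSDs and copy-on-write filesystems pose similar problems.
    """
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))  # overwrite every byte
            f.flush()
            os.fsync(f.fileno())         # force the overwrite to disk
    os.remove(path)
```

The point of the excerpt is precisely that this kind of user-level overwriting operates on the live volume only; the shadow-copy snapshots sit on a hidden volume outside the file's reach.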

Posted on December 2, 2009 at 6:16 AM111 Comments

Fingerprinting RFID Chips

This research centers on looking at the radio characteristics of individual RFID chips and creating a “fingerprint.” It makes sense; fingerprinting individual radios based on their transmission characteristics is as old as WW II. But while the research centers on using this as an anti-counterfeiting measure, I think it would much more likely be used as an identification and surveillance tool. Even if the communication is fully encrypted, this technology could be used to uniquely identify the chip.
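The identification use I'm worried about can be illustrated with a toy sketch (the feature values and chip names below are invented, not from the research): reduce each chip's transmission characteristics to a feature vector, then match an observed vector against known fingerprints by nearest distance. Note that this works regardless of whether the chip's actual communication is encrypted:

```python
import math

# Hypothetical database of previously measured "fingerprints":
# each entry is a vector of physical-layer features (e.g., frequency
# offset, transient shape, modulation error) for one specific chip.
known_chips = {
    "chip_A": [1.02, 0.31, 5.70],
    "chip_B": [0.98, 0.45, 5.10],
}

def identify(measured, threshold=0.2):
    """Return the closest known chip, or None if nothing is close enough."""
    best, best_dist = None, float("inf")
    for chip, fingerprint in known_chips.items():
        d = math.dist(measured, fingerprint)  # Euclidean distance
        if d < best_dist:
            best, best_dist = chip, d
    return best if best_dist <= threshold else None
```

Real systems would use many more features and a statistical classifier rather than a distance threshold, but the surveillance implication is the same: the radio hardware itself becomes a persistent identifier.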

Posted on December 1, 2009 at 1:25 PM34 Comments

Cyberwarfare Policy

National Journal has an excellent article on cyberwar policy. I agree with the author’s comments on The Atlantic blog:

Would the United States ever use a more devastating weapon, perhaps shutting off the lights in an adversary nation? The answer is, almost certainly no, not unless America were attacked first.

To understand why, forget about the cyber dimension for a moment. Imagine that some foreign military had flown over a power substation in Brazil and dropped a bomb on it, depriving millions of people of electricity, as well as the places they work, the hospitals they visit, and the transportation they use. If there were no official armed conflict between Brazil and its attacker, the bombing would be illegal under international law. That’s a pretty basic test. But even if there were a declared war, or a recognized state of hostilities, knocking out vital electricity to millions of citizens—who presumably are not soldiers in the fight—would fail a number of other basic requirements of the laws of armed conflict. For starters, it could be considered disproportionate, particularly if Brazil hadn’t launched any similar-sized offensive on its adversary. Shutting off electricity to whole cities can effectively paralyze them. And the bombing would clearly target non-combatants. The government uses electricity, yes, but so does the entire civilian population.

Now add the cyber dimension. If the effect of a hacker taking down the power grid is the same as a bomber—that is, knocking out electrical power—then the same rules apply. That essentially was the conclusion of a National Academies of Sciences report in April. The authors write, “During acknowledged armed conflict (notably when kinetic and other means are also being used against the same target nation), cyber attack is governed by all the standard law of armed conflict. …If the effects of a kinetic attack are such that the attack would be ruled out on such grounds, a cyber attack that would cause similar effects would also be ruled out.”


According to a report in The Guardian, military planners refrained from launching a broad cyber attack against Serbia during the Kosovo conflict for fear of committing war crimes. The Pentagon theoretically had the power to “bring Serbia’s financial systems to a halt” and to go after the personal accounts of Slobodan Milosevic, the newspaper reported. But when the NATO-led bombing campaign was in full force, the Defense Department’s general counsel issued guidance on cyber war that said the law of (traditional) war applied.

The military ran into this same dilemma four years later, during preparations to invade Iraq in 2003. Planners considered whether to launch a massive attack on the Iraqi financial system in advance of the conventional strike. But they stopped short when they realized that the same networks used by Iraqi banks were also used by banks in France. Releasing a vicious computer virus into the system could potentially harm America’s allies. Some planners also worried that the contagion could spread to the United States. It could have been the cyber equivalent of nuclear fallout.

A 240-page RAND study by Martin Libicki—“Cyberdeterrence and Cyberwar”—came to the same conclusion:

Predicting what an attack can do requires knowing how the system and its operators will respond to signs of dysfunction and knowing the behavior of processes and systems associated with the system being attacked. Even then, cyberwar operations neither directly harm individuals nor destroy equipment (albeit with some exceptions). At best, these operations can confuse and frustrate operators of military systems, and then only temporarily. Thus, cyberwar can only be a support function for other elements of warfare, for instance, in disarming the enemy.

Commenting on the Rand report:

The report backs its findings by measuring probable outcomes to cyberattacks and determining that the results are too scattered to carry out accurate predictions. This is coupled with the problem of countering an attack. It is difficult to determine who conducted a specific cyberattack so any counter strikes or retaliations could backfire. Rather than going on the offensive, the United States should pursue diplomacy and attempt to find and prosecute the cybercriminals involved in an initial strike.

Libicki said that the military can attempt a cyberattack for a specific combat operation, but it would be a guessing game when trying to gauge the operation’s success since any result from the cyberattack would be unclear.

Instead, the RAND report suggests the government invest in bolstering military networks, which, as we know, have the same vulnerabilities as civilian networks.

I wrote about cyberwar back in 2005.

Posted on December 1, 2009 at 6:59 AM69 Comments
