Schneier on Security
A blog covering security and security technology.
December 2009 Archives
This presentation will show the first experimental implementation of an eavesdropper for a quantum cryptosystem. Although quantum cryptography has been proven unconditionally secure, by exploiting physical imperfections (a detector vulnerability) we have successfully built an intercept-resend attack and demonstrated eavesdropping under realistic conditions on an installed quantum key distribution line. The actual eavesdropping hardware we have built will be shown during the conference.
While I am very interested in quantum cryptography, I have never been optimistic about its practicality. And it's always interesting to see provably secure cryptosystems broken.
I spent a lot of yesterday giving press interviews. Nothing I haven’t said before, but it’s now national news and everyone wants to hear it.
These are the most interesting bits. Rachel Maddow interviewed me last night on her show. Jeffrey Goldberg interviewed me for the Atlantic website. And CNN.com published a rewrite of an older article of mine on terrorism and security.
I've started to call the bizarre new TSA rules "magical thinking": if we somehow protect against the specific tactic of the previous terrorist, we make ourselves safe from the next terrorist.
EDITED TO ADD (12/29): I don’t know about this quote:
"I flew 265,000 miles last year," said Bruce Schneier, a cryptographer and security analyst. "You know what really pisses me off? Making me check my luggage. Not letting me use my laptop, so I can’t work. Taking away my Kindle, so I can’t read. I care about those things. I care about making me safer much, much less."
For the record, I do care about being safer. I just don’t think any of the airplane security measures proposed by the TSA accomplish that.
Good survey article by Alessandro Acquisti in IEEE Security & Privacy.
I don't want to even think about how much C4 I can strap to my legs and walk through your magnetometers.
And what sort of magical thinking is behind the rumored TSA rule about keeping passengers seated during the last hour of flight? Do we really think terrorists won't think of blowing up their improvised explosive devices during the first hour of flight?
For years I've been saying this:
Only two things have made flying safer [since 9/11]: the reinforcement of cockpit doors, and the fact that passengers know now to resist hijackers.
This week, the second one worked over Detroit. Security succeeded.
EDITED TO ADD (12/26): Only one carry on? No electronics for the first hour of flight? I wish that, just once, some terrorist would try something that you can only foil by upgrading the passengers to first class and giving them free drinks.
Happy Squidmas, everybody.
Sometimes mediocre encryption is better than strong encryption, and sometimes no encryption is better still.
The Wall Street Journal reported this week that Iraqi, and possibly also Afghan, militants are using commercial software to eavesdrop on U.S. Predators, other unmanned aerial vehicles, or UAVs, and even piloted planes. The systems weren't "hacked" -- the insurgents can’t control them -- but because the downlink is unencrypted, they can watch the same video stream as the coalition troops on the ground.
The naive reaction is to ridicule the military. Encryption is so easy that HDTVs do it -- just a software routine and you're done -- and the Pentagon has known about this flaw since Bosnia in the 1990s. But encrypting the data is the easiest part; key management is the hard part. Each UAV needs to share a key with the ground station. These keys have to be produced, guarded, transported, used and then destroyed. And the equipment, both the Predators and the ground terminals, needs to be classified and controlled, and all the users need security clearance.
The command and control channel is, and always has been, encrypted -- because that's both more important and easier to manage. UAVs are flown by airmen sitting at comfortable desks on U.S. military bases, where key management is simpler. But the video feed is different. It needs to be available to all sorts of people, of varying nationalities and security clearances, on a variety of field terminals, in a variety of geographical areas, in all sorts of conditions -- with everything constantly changing. Key management in this environment would be a nightmare.
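The scale of that key-management nightmare is easy to put numbers on. A minimal sketch, with hypothetical fleet sizes (the counts below are illustrative assumptions, not real figures):

```python
# Back-of-the-envelope count of keys under two keying schemes.
# The fleet and terminal counts are hypothetical, for illustration only.

def pairwise_keys(uavs: int, terminals: int) -> int:
    """Each UAV shares a distinct key with each field terminal."""
    return uavs * terminals

def broadcast_keys(uavs: int) -> int:
    """All terminals share one broadcast key per UAV (or per mission)."""
    return uavs

uavs, terminals = 200, 5000   # hypothetical fleet and field-terminal counts
print(pairwise_keys(uavs, terminals))   # a million keys to produce, move, rotate
print(broadcast_keys(uavs))             # 200 keys, but one leak exposes a whole feed
```

Every one of those keys has to be generated, distributed, stored, and destroyed across changing units and locations, which is the real cost the raw encryption step hides.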
Additionally, how valuable is this video downlink to the enemy? The primary fear seems to be that the militants watch the video, notice their compound being surveilled and flee before the missiles hit. Or notice a bunch of Marines walking through a recognizable area and attack them. This might make a great movie scene, but it's not very realistic. Without context, and just by peeking at random video streams, the risk caused by eavesdropping is low.
Contrast this with the additional risks if you encrypt: A soldier in the field doesn't have access to the real-time video because of a key management failure; a UAV can't be quickly deployed to a new area because the keys aren't in place; we can't share the video information with our allies because we can't give them the keys; most soldiers can't use this technology because they don't have the right clearances. Given this risk analysis, not encrypting the video is almost certainly the right decision.
There is another option, though. During the Cold War, the NSA's primary adversary was Soviet intelligence, and it developed its crypto solutions accordingly. Even though that level of security makes no sense in Bosnia, and certainly not in Iraq and Afghanistan, it is what the NSA had to offer. If you encrypt, they said, you have to do it "right."
The problem is, the world has changed. Today's insurgent adversaries don't have KGB-level intelligence gathering or cryptanalytic capabilities. At the same time, computer and network data gathering has become much cheaper and easier, so they have technical capabilities the Soviets could only dream of. Defending against these sorts of adversaries doesn't require military-grade encryption only where it counts; it requires commercial-grade encryption everywhere possible.
This sort of solution would require the NSA to develop a whole new level of lightweight commercial-grade security systems for military applications -- not just office-data "Sensitive but Unclassified" or "For Official Use Only" classifications. It would require the NSA to allow keys to be handed to uncleared UAV operators, and perhaps read over insecure phone lines and stored in people's back pockets. It would require the sort of ad hoc key management systems you find in internet protocols, or in DRM systems. It wouldn't be anywhere near perfect, but it would be more commensurate with the actual threats.
And it would help defend against a completely different threat facing the Pentagon: the PR threat. Regardless of whether the people responsible made the right security decision when they rushed the Predator into production, or when they convinced themselves that local adversaries wouldn't know how to exploit it, or when they forgot to update their Bosnia-era threat analysis to account for advances in technology, the story is now being played out in the press. The Pentagon is getting beaten up because it's not protecting against the threat -- because it's easy to make a sound bite where the threat sounds really dire. And now it has to defend against the perceived threat to the troops, regardless of whether the defense actually protects the troops or not. Reminds me of the TSA, actually.
So the military is now committed to encrypting the video ... eventually. The next-generation Predators, called Reapers -- Who names this stuff? Second-grade boys? -- will have the same weakness. Maybe we'll have encrypted video by 2010, or 2014, but I don't think that's even remotely possible unless the NSA relaxes its key management and classification requirements and embraces a lightweight, less secure encryption solution for these sorts of situations. The real failure here is the failure of the Cold War security model to deal with today's threats.
This essay originally appeared on Wired.com.
EDITED TO ADD (12/24): Good article from The New Yorker on the uses -- and politics -- of these UAVs.
EDITED TO ADD (12/30): Error corrected -- "uncleared UAV operators" should have read "uncleared UAV viewers." The point is that the operators in the U.S. are cleared and their communications are encrypted, but the viewers in Asia are uncleared and the data is unencrypted.
The essay is about veganism and plant eating, but I found the descriptions of plant security countermeasures interesting:
Plants can’t run away from a threat but they can stand their ground. “They are very good at avoiding getting eaten,” said Linda Walling of the University of California, Riverside. “It’s an unusual situation where insects can overcome those defenses.” At the smallest nip to its leaves, specialized cells on the plant’s surface release chemicals to irritate the predator or sticky goo to entrap it. Genes in the plant’s DNA are activated to wage systemwide chemical warfare, the plant’s version of an immune response. We need terpenes, alkaloids, phenolics — let’s move.
There's more in the essay.
Wow, is this a bad idea:
The Luggage Locator is an innovative product that travellers or anyone can use to locate items. It has been specifically engineered to help people find their luggage quickly and can also be used around the home or office.
Anyone care to guess what's most likely to happen if a piece of luggage in an airport starts flashing and chirping? I think it'll be taken out to the tarmac and blown up using remote-controlled bazookas.
I heard this rumor two days ago, and The New York Times is reporting it today.
Reporters are calling me for reactions and opinions, but I just don't know. Schmidt is good, but I don't know if anyone can do well in a job with lots of responsibility but no actual authority. But maybe Obama will imbue the position with authority -- I don't know.
This is very serious.
This one for ZDNet.uk.
This seems like a solution in search of a problem:
MagTek discovered that no two magnetic strips are identical. This is due to the manufacturing process. Similar to DNA, the structure of every magnetic stripe is different and the differences are distinguishable.
Basically, each card gets a "fingerprint" of the magnetic strip printed on it. And the reader (merchant terminal, ATM, whatever) verifies not only the card information, but the fingerprint as well. So a thief can't skim your card information and make another card.
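How such a check might work can be sketched in a few lines. This is a guess at the general shape of the scheme, not MagTek's actual algorithm: treat the stripe's manufacturing noise as a vector, store it at issuance, and accept a swipe only if a re-read correlates strongly with the stored profile. The profile length, noise levels, and threshold are made-up values.

```python
import math
import random

def similarity(a, b):
    """Cosine similarity between two stripe-noise profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(enrolled, measured, threshold=0.9):
    """Accept the card only if the measured noise matches enrollment."""
    return similarity(enrolled, measured) >= threshold

random.seed(1)
enrolled = [random.gauss(0, 1) for _ in range(256)]        # profile stored at issuance
same_card = [x + random.gauss(0, 0.1) for x in enrolled]   # noisy re-read of the same stripe
cloned = [random.gauss(0, 1) for _ in range(256)]          # different stripe, copied data

print(verify(enrolled, same_card))  # True
print(verify(enrolled, cloned))     # False
```

The interesting property is that the cloned card carries the right digital data but the wrong analog noise, so it fails the check even though skimming succeeded.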
I see a couple of issues here. One, any fraud solution that requires the credit card companies to issue new readers simply isn't going to happen in the U.S. If it were, we'd have embedded chips in our credit cards already. Trying to convince the merchants to type additional data in by hand isn't going to work, either. We finally got merchants to type in a 3–4 digit CVV code -- that basically does the same thing as this idea (albeit with less security).
Two, physically cloning cards is much less of a threat than virtually cloning them: buying things over the phone and Internet, etc. Yes, there are losses here, but I'm sure they're not great enough to justify all of this infrastructure change.
Still, a clever security idea. I expect there's an application for this somewhere.
Carry-on baggage rules will be relaxed under a shake-up of aviation security announced by the Federal Government today.
I'm sure these rules won't apply to flights to the U.S., where security arrangements must still be targeted at movie-plot threats.
Thoughtful blog post by The Atlantic's Marc Ambinder:
We allow Google, Amazon.com, credit companies and all manner of private corporations to collect intimate information about our lives, but we reflexively recoil when the government proposes to monitor (and not even collect) a fraction of that information, even with legal safeguards. We carry in our wallets credit cards with RFID chips. Data companies send unmarked vans through our neighborhoods, mapping wireless networks. The IBM scientist and tech guru Jeff Jonas noted on his blog that every time we send a text message, we're contributing to a cloud where "powerful analytics commingle space-time-travel data with tertiary data." Geolocated tweets can tell everyone where we are, what we're doing, and who we like. Sure, the data is ostensibly anonymized, but the reality is a bit different: we provide so much of it that, as Jonas notes, we tend to re-identify ourselves -- out our identity -- fairly quickly. This is good and bad; the world becomes more efficient, we leave less of a footprint, we get what we want more quickly. But we also sacrifice privacy, individuality, and other goods that can't be measured in dollars and cents.
My essay on who should be in charge of cybersecurity.
This is interesting:
Most Americans fail to appreciate that the Civil Rights movement was about the overthrow of an entrenched political order in each of the Southern states, that the segregationists who controlled this order did not hesitate to employ violence (law enforcement, paramilitary, mob) to preserve it, and that for nearly a century the federal government tacitly or overtly supported the segregationist state governments. That the Civil Rights movement employed nonviolent tactics should fool us no more than it did the segregationists, who correctly saw themselves as being at war. Significant change was never going to occur within the political system: it had to be forced. The aim of the segregationists was to keep the federal government on the sidelines. The aim of the Civil Rights movement was to "capture" the federal government -- to get it to apply its weight against the Southern states. As to why it matters: a major reason we were slow to grasp the emergence and extent of the insurgency in Iraq is that it didn't -- and doesn't -- look like a classic insurgency. In fact, the official Department of Defense definition of insurgency still reflects a Vietnam era understanding of the term. Looking at the Civil Rights movement as an insurgency is useful because it assists in thinking more comprehensively about the phenomenon of insurgency and assists in a more complete -- and therefore more useful -- definition of the term.
The link to his talk is broken, unfortunately.
EDITED TO ADD (12/15): Video here. Thanks, mcb.
Now this is interesting:
The United States has begun talks with Russia and a United Nations arms control committee about strengthening Internet security and limiting military use of cyberspace.
I'm not sure what can be achieved here, but talking is always good.
I just posted about cyberwar policy.
Video of the talk I gave to the Open Rights Group last week in London.
Christmas is coming.
This one from Gulf News.
Rumors are that RSA president Art Coviello declined the job. No surprise: it has no actual authority but a lot of responsibility.
Security experts have pointed out that previous cybersecurity positions, cybersecurity czars and directors at the Department of Homeland Security, have been unable to make any significant changes to lock down federal systems. Virtually nothing can get done without some kind of budgetary authority, security expert Bruce Schneier has said about the vacant position. An advisor can set priorities and try to carry them out, but won't have the clout to force government agencies to make changes and adhere to policies.
For the record, I was never approached. But I would certainly decline; this is a political job, and someone political needs to fill it.
And if you're going to appoint a cybersecurity czar, you have to give him actual budgetary authority -- otherwise he won't be able to get anything done, either.
Maybe we should do a reality TV show: "America's Next Cybersecurity Czar."
EDITED TO ADD (12/12): Commentary.
Last month, researchers found a security flaw in the SSL protocol, which is used to protect sensitive web data. The protocol is used for online commerce, webmail, and social networking sites. Basically, hackers could hijack an SSL session and execute commands without the knowledge of either the client or the server. The list of affected products is enormous.
If this sounds serious to you, you're right. It is serious. Given that, what should you do now? Should you not use SSL until it's fixed, and only pay for internet purchases over the phone? Should you download some kind of protection? Should you take some other remedial action? What?
If you read the IT press regularly, you'll see this sort of question again and again. The answer for this particular vulnerability, as for pretty much any other vulnerability you read about, is the same: do nothing. That's right, nothing. Don't panic. Don't change your behavior. Ignore the problem, and let the vendors figure it out.
There are several reasons for this. One, it's hard to figure out which vulnerabilities are serious and which are not. Vulnerabilities such as this happen multiple times a month. They affect different software, different operating systems, and different web protocols. The press either mentions them or not, somewhat randomly; just because it's in the news doesn't mean it's serious.
Two, it's hard to figure out if there's anything you can do. Many vulnerabilities affect operating systems or Internet protocols. The only sure fix would be to avoid using your computer. Some vulnerabilities have surprising consequences. The SSL vulnerability mentioned above could be used to hack Twitter. Did you expect that? I sure didn't.
Three, the odds of a particular vulnerability affecting you are small. There are a lot of fish in the Internet, and you're just one of billions.
Four, often you can't do anything. These vulnerabilities affect clients and servers, individuals and corporations. A lot of your data isn't under your direct control -- it's on your web-based email servers, in some corporate database, or in a cloud computing application. If a vulnerability affects the computers running Facebook, for example, your data is at risk, whether you log in to Facebook or not.
It's much smarter to have a reasonable set of default security practices and continue doing them. This includes:
1. Install an antivirus program if you run Windows, and configure it to update daily. It doesn't matter which one you use; they're all about the same. For Windows, I like the free version of AVG Internet Security. Apple Mac and Linux users can ignore this, as virus writers target the operating system with the largest market share.
2. Configure your OS and network router properly. Microsoft's operating systems come with a lot of security enabled by default; this is good. But have someone who knows what they're doing check the configuration of your router, too.
3. Turn on automatic software updates. This is the mechanism by which your software patches itself in the background, without you having to do anything. Make sure it's turned on for your computer, OS, security software, and any applications that have the option. Yes, you have to do it for everything, as they often have separate mechanisms.
4. Show common sense regarding the Internet. This might be the hardest thing, and the most important. Know when an email is real, and when you shouldn't click on the link. Know when a website is suspicious. Know when something is amiss.
5. Perform regular backups. This is vital. If you're infected with something, you may have to reinstall your operating system and applications. Good backups ensure you don't lose your data -- documents, photographs, music -- if that becomes necessary.
That's basically it. I could give a longer list of safe computing practices, but this short one is likely to keep you safe. After that, trust the vendors. They spent all last month scrambling to fix the SSL vulnerability, and they'll spend all this month scrambling to fix whatever new vulnerabilities are discovered. Let that be their problem.
BoingBoing is pretty snarky:
The TSA has published a "redacted" version of their s00per s33kr1t screening procedure guidelines (Want to know whether to frisk a CIA operative at the checkpoint? Now you can!). Unfortunately, the security geniuses at the DHS don't know that drawing black blocks over the words you want to eliminate from your PDF doesn't actually make the words go away, and can be defeated by nefarious al Qaeda operatives through a complex technique known as ctrl-a/ctrl-c/ctrl-v. Thankfully, only the most elite terrorists would be capable of matching wits with the technology brilliance on display at the agency charged with defending our nation's skies by ensuring that imaginary hair-gel bombs are kept off of airplanes.
TSA is launching a "full review" to determine how this could have happened. I'll save them the effort: someone screwed up.
In a statement Tuesday night, the TSA sought to minimize the impact of the unintentional release -- calling the document "outdated," "unclassified" and unimplemented -- while saying that it took the incident "very seriously," and "took swift action" when it was discovered.
The original link to the document is dead, but here's the unredacted document.
I've skimmed it, and haven't found anything terribly interesting. Here's what Wired.com noticed:
One of the redacted sections, for example, indicates that an armed law enforcement officer in or out of uniform may pass beyond the checkpoint without screening after providing a U.S. government-issued photo ID and “Notice of LEO Flying Armed Document.”
I'm a little bit saddened when we all make a big deal about how dumb people are at redacting digital documents. We've had a steady stream of these badly redacted documents, and I don't want to lose that. I also don't want agencies deciding not to release documents at all, rather than risk this sort of embarrassment.
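The failure mode here is mechanical and easy to demonstrate. A black rectangle is just one more drawing operator appended after the text operators in the page's content stream; it covers the glyphs without removing them. A toy sketch (an illustrative fragment, not a complete PDF file):

```python
# Toy fragment of a PDF page content stream: text is drawn first,
# then a filled black rectangle is painted on top of it.
# (Illustrative only -- not a complete, renderable PDF.)
content_stream = b"""
BT /F1 12 Tf 72 700 Td (armed LEO may bypass the checkpoint) Tj ET
0 0 0 rg            % fill color: black
70 690 300 16 re f  % rectangle drawn OVER the text: visually "redacted"
"""

# The rectangle hides the text when rendered, but the characters are
# still right there in the file for any text extractor (or ctrl-a/ctrl-c).
print(b"bypass the checkpoint" in content_stream)  # True
```

True redaction means deleting the text operators themselves (and any metadata copies), not painting over them.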
EDITED TO ADD (12/10): News:
Five Transportation Security Administration employees have been placed on administrative leave after a sensitive airport security manual was posted on the Internet, the agency announced Wednesday.
EDITED TO ADD (12/12): Did the TSA compromise an intelligence program?
I think judgment matters. If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place. If you really need that kind of privacy, the reality is that search engines -- including Google -- do retain this information for some time and it's important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities.
This, from 2006, is my response:
Privacy protects us from abuses by those in power, even if we're doing nothing wrong at the time of surveillance.
EDITED TO ADD: See also Daniel Solove's "'I've Got Nothing to Hide' and Other Misunderstandings of Privacy."
This, from The New England Journal of Medicine, sounds familiar:
This is the story line for most headline-grabbing illnesses — HIV, Ebola virus, SARS, typhoid. These diseases capture our imagination and ignite our fears in ways that more prosaic illnesses do not. These dramatic stakes lend themselves quite naturally to thriller books and movies; Dustin Hoffman hasn't starred in any blockbusters about emphysema or dysentery.
I missed this story:
Since 2007, the U.S. State Department has been issuing high-tech "e-passports," which contain computer chips carrying biometric data to prevent forgery. Unfortunately, according to a March report from the Government Accountability Office (GAO), getting one of these supersecure passports under false pretenses isn't particularly difficult for anyone with even basic forgery skills.
No credential can be more secure than its breeder documents and issuance procedures.
In an AP story on increased security at major football (the American variety) events, this sentence struck me:
"High-profile events are something that terrorist groups would love to interrupt somehow," said Anthony Mangione, chief of U.S. Immigration and Customs Enforcement's Miami office.
This is certainly the conventional wisdom, but is there any actual evidence that it's true? The 9/11 terrorists could have easily chosen a different date and a major event -- sporting or other -- to target, but they didn't. The London and Madrid train bombers could have just as easily chosen more high-profile events to bomb, but they didn't. The Mumbai terrorists chose an ordinary day and ordinary targets. Aum Shinrikyo chose an ordinary day and ordinary train lines. Timothy McVeigh chose the ordinary Oklahoma City Federal Building. Irish terrorists chose, and Palestinian terrorists continue to choose, ordinary targets. Some of this can be attributed to the fact that ordinary targets are easier targets, but not a lot of it.
The only examples that come to mind of terrorists choosing high-profile events or targets are the idiot wannabe terrorists who would have been incapable of doing anything unless egged on by a government informant. Hardly convincing evidence.
Yes, I've seen the movie Black Sunday. But is there any reason to believe that terrorists want to target these sorts of events other than us projecting our own fears and prejudices onto the terrorists' motives?
I wrote about protecting the World Series some years ago.
Wired summarizes research by Christopher Soghoian:
Sprint Nextel provided law enforcement agencies with customer location data more than 8 million times between September 2008 and October 2009, according to a company manager who disclosed the statistic at a non-public interception and wiretapping conference in October.
From Soghoian's blog:
Sprint Nextel provided law enforcement agencies with its customers' (GPS) location information over 8 million times between September 2008 and October 2009. This massive disclosure of sensitive customer information was made possible due to the roll-out by Sprint of a new, special web portal for law enforcement officers.
Sprint denies this; details in the Wired article. The odds of us ever learning the truth are probably very low.
It can be impossible to securely delete a file:
What are the security implications of Volume Shadow Copy?
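Why overwrite-and-delete tools fail in this situation can be sketched directly. The snapshot below is simulated with a plain file copy; Volume Shadow Copy actually uses copy-on-write blocks, but the effect on "securely deleted" data is the same:

```python
import os
import shutil
import tempfile

# Simulate a filesystem snapshot with a plain copy. Volume Shadow Copy
# preserves old blocks via copy-on-write, but the outcome is similar.
workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "secret.txt")
with open(original, "wb") as f:
    f.write(b"launch codes")

snapshot = os.path.join(workdir, "snapshot.txt")
shutil.copy(original, snapshot)          # the OS takes its snapshot here

# "Secure delete": overwrite the live file in place, then remove it.
with open(original, "r+b") as f:
    f.write(b"\x00" * len(b"launch codes"))
os.remove(original)

# The data is gone from the live filesystem, but not from the snapshot.
with open(snapshot, "rb") as f:
    print(b"launch codes" in f.read())   # True
```

The tool did everything right at the level of the file it could see; the filesystem simply kept another copy it never touched.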
This research centers on looking at the radio characteristics of individual RFID chips and creating a "fingerprint." It makes sense; fingerprinting individual radios based on their transmission characteristics is as old as WW II. But while the research centers on using this as an anti-counterfeiting measure, I think it would much more likely be used as an identification and surveillance tool. Even if the communication is fully encrypted, this technology could be used to uniquely identify the chip.
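A hedged sketch of that surveillance use: enroll each chip's analog signature once, then match later observations by nearest neighbor. The signature features and noise levels here are simulated stand-ins for real radio measurements:

```python
import random

def distance(a, b):
    """Euclidean distance between two radio signatures."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def identify(observed, database):
    """Return the ID of the enrolled chip whose signature is closest."""
    return min(database, key=lambda chip_id: distance(observed, database[chip_id]))

random.seed(7)
# Simulated per-chip signatures: stand-ins for features like carrier
# offset, rise time, and modulation depth.
database = {chip_id: [random.gauss(0, 1) for _ in range(8)]
            for chip_id in ("A", "B", "C")}

# A later, noisy observation of chip "B". The payload could be fully
# encrypted; the analog quirks alone give the chip away.
observed = [x + random.gauss(0, 0.05) for x in database["B"]]
print(identify(observed, database))  # B
```

Note that nothing in this matching step reads the chip's data at all, which is exactly why encryption doesn't help against it.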
Would the United States ever use a more devastating weapon, perhaps shutting off the lights in an adversary nation? The answer is, almost certainly no, not unless America were attacked first.
A 240-page Rand study by Martin Libicki -- "Cyberdeterrence and Cyberwar" -- came to the same conclusion:
Predicting what an attack can do requires knowing how the system and its operators will respond to signs of dysfunction and knowing the behavior of processes and systems associated with the system being attacked. Even then, cyberwar operations neither directly harm individuals nor destroy equipment (albeit with some exceptions). At best, these operations can confuse and frustrate operators of military systems, and then only temporarily. Thus, cyberwar can only be a support function for other elements of warfare, for instance, in disarming the enemy.
Commenting on the Rand report:
The report backs its findings by measuring probable outcomes to cyberattacks and determining that the results are too scattered to carry out accurate predictions. This is coupled with the problem of countering an attack. It is difficult to determine who conducted a specific cyberattack, so any counterstrikes or retaliations could backfire. Rather than going on the offensive, the United States should pursue diplomacy and attempt to find and prosecute the cybercriminals involved in an initial strike.
I wrote about cyberwar back in 2005.