February 15, 2010
by Bruce Schneier
Chief Security Technology Officer, BT
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1002.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively comment section. An RSS feed is available.
In this issue:
- Fixing Intelligence Failures
- Anonymity and the Internet
- Security and Function Creep
- TSA Logo Contest Semi-Finalists
- The Chinese Attack Against Google
- Schneier News
- New Attack on Threefish
Fixing Intelligence Failures

President Obama, in his recent speech, rightly focused on fixing the intelligence failures that resulted in Umar Farouk Abdulmutallab being ignored, rather than on technologies targeted at the details of his underwear-bomb plot. But while Obama’s instincts are right, reforming intelligence for this new century and its new threats is a more difficult task than he might like. We don’t need new technologies, new laws, new bureaucratic overlords, or — for heaven’s sake — new agencies. What prevents information sharing among intelligence organizations is the culture of the generation that built those organizations.
The U.S. intelligence system is a sprawling apparatus, spanning the FBI and the State Department, the CIA and the National Security Agency, and the Department of Homeland Security — itself an amalgamation of two dozen different organizations — designed and optimized to fight the Cold War. The single, enormous adversary then was the Soviet Union: as bureaucratic as they come, with a huge budget, and capable of very sophisticated espionage operations. We needed to defend against technologically advanced electronic eavesdropping operations, their agents trying to bribe or seduce our agents, and a worldwide intelligence gathering capability that hung on our every word.
In that environment, secrecy was paramount. Information had to be protected by armed guards and double fences, shared only among those with appropriate security clearances and a legitimate “need to know,” and it was better not to transmit information at all than to transmit it insecurely.
Today’s adversaries are different. There are still governments, like China, who are after our secrets. But the secrets they’re after are more often corporate than military, and most of the other organizations of interest are like al Qaeda: decentralized, poorly funded and incapable of the intricate spy versus spy operations the Soviet Union could pull off.
Against these adversaries, sharing is far more important than secrecy. Our intelligence organizations need to trade techniques and expertise with industry, and they need to share information among the different parts of themselves. Today’s terrorist plots are loosely organized ad hoc affairs, and those dots that are so important for us to connect beforehand might be on different desks, in different buildings, owned by different organizations.
Critics have pointed to laws that prohibited inter-agency sharing but, as the 9/11 Commission found, the law allows for far more sharing than goes on. It doesn’t happen because of inter-agency rivalries, a reliance on outdated information systems, and a culture of secrecy. What we need is an intelligence community that shares ideas and hunches and facts on their versions of Facebook, Twitter and wikis. We need the bottom-up organization that has made the Internet the greatest collection of human knowledge and ideas ever assembled.
The problem is far more social than technological. Teaching your mom to “text” and your dad to Twitter doesn’t make them part of the Internet generation, and giving all those cold warriors blogging lessons won’t change their mentality — or the culture. The reason this continues to be a problem, the reason President George W. Bush couldn’t change things even after the 9/11 Commission came to much the same conclusions as President Obama’s recent review did, is generational. The Internet is the greatest generation gap since rock and roll, and it’s just as true inside government as out. We might have to wait for the elders inside these agencies to retire and be replaced by people who grew up with the Internet.
A version of this op-ed previously appeared in the San Francisco Chronicle.
The notion that U.S. intelligence should have “connected the dots,” and caught Abdulmutallab, isn’t going away. But reality is much more complicated, and dots are easy to connect after the fact.
I wrote about fixing intelligence failures in 2002:
Anonymity and the Internet

Universal identification is portrayed by some as the holy grail of Internet security. Anonymity is bad, the argument goes; and if we abolish it, we can ensure only the proper people have access to their own information. We’ll know who is sending us spam and who is trying to hack into corporate networks. And when there are massive denial-of-service attacks, such as those against Estonia or Georgia or South Korea, we’ll know who was responsible and take action accordingly.
The problem is that it won’t work. Any design of the Internet must allow for anonymity. Universal identification is impossible. Even attribution — knowing who is responsible for particular Internet packets — is impossible. Attempting to build such a system is futile, and will only give criminals and hackers new ways to hide.
Imagine a magic world in which every Internet packet could be traced to its origin. Even in this world, our Internet security problems wouldn’t be solved. There’s a huge gap between proving that a packet came from a particular computer and that a packet was directed by a particular person. This is the exact problem we have with botnets, or pedophiles storing child porn on innocents’ computers. In these cases, we know the origins of the DDoS packets and the spam; they’re from legitimate machines that have been hacked. Attribution isn’t as valuable as you might think.
Implementing an Internet without anonymity is very difficult, and causes its own problems. In order to have perfect attribution, we’d need agencies — real-world organizations — to provide Internet identity credentials based on other identification systems: passports, national identity cards, driver’s licenses, whatever. Sloppier identification systems, based on things such as credit cards, are simply too easy to subvert. We have nothing that comes close to this global identification infrastructure. Moreover, centralizing information like this actually hurts security because it makes identity theft that much more profitable a crime.
And realistically, any theoretical ideal Internet would need to allow people access even without their magic credentials. People would still use the Internet at public kiosks and at friends’ houses. People would lose their magic Internet tokens just like they lose their driver’s licenses and passports today. The legitimate bypass mechanisms would allow even more ways for criminals and hackers to subvert the system.
On top of all this, the magic attribution technology doesn’t exist. Bits are bits; they don’t come with identity information attached to them. Every software system we’ve ever invented has been successfully hacked, repeatedly. We simply don’t have anywhere near the expertise to build an airtight attribution system.
Not that it really matters. Even if everyone could trace all packets perfectly, to the person of origin and not just the computer, anonymity would still be possible. It would just take one person to set up an anonymity server. If I wanted to send a packet anonymously to someone else, I’d just route it through that server. For even greater anonymity, I could route it through multiple servers. This is called onion routing and, with appropriate cryptography and enough users, it adds anonymity back to any communications system that prohibits it.
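The layering is easy to see in code. Below is a toy sketch of onion routing’s nested encryption, not a real implementation: it uses a hash-derived XOR keystream purely to show how each relay peels exactly one layer, and the relay keys are invented for illustration. Real systems such as Tor use vetted ciphers and proper key agreement.

```python
# Toy onion-routing sketch: each relay shares a key with the sender and
# can remove only its own layer of encryption.
import hashlib

def layer(key: bytes, data: bytes) -> bytes:
    # XOR with a SHA-256-derived keystream; applying the same key twice
    # removes the layer again (XOR is its own inverse).
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

relay_keys = [b"relay-1-key", b"relay-2-key", b"relay-3-key"]  # illustrative
message = b"an anonymous packet"

# The sender wraps the message once per relay, applying the first relay's
# layer last so it sits outermost on the path.
packet = message
for key in reversed(relay_keys):
    packet = layer(key, packet)

# Each relay in path order strips one layer; only the exit sees plaintext.
for key in relay_keys:
    packet = layer(key, packet)

assert packet == message
```

With enough users and real cryptography, no single relay can link sender to message, which is why a prohibition on anonymity can always be routed around.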
Attempts to banish anonymity from the Internet won’t affect those savvy enough to bypass it, would cost billions, and would have only a negligible effect on security. What such attempts would do is affect the average user’s access to free speech, including those who use the Internet’s anonymity to survive: dissidents in Iran, China, and elsewhere.
Mandating universal identity and attribution is the wrong goal. Accept that there will always be anonymous speech on the Internet. Accept that you’ll never truly know where a packet came from. Work on the problems you can solve: software that’s secure in the face of whatever packet it receives, identification systems that are secure enough in the face of the risks. We can do far better at these things than we’re doing, and they’ll do more to improve security than trying to fix insoluble problems.
The whole attribution problem is very similar to the copy-protection/digital-rights-management problem. Just as it’s impossible to make specific bits not copyable, it’s impossible to know where specific bits came from. Bits are bits. They don’t naturally come with restrictions on their use attached to them, and they don’t naturally come with author information attached to them. Any attempts to circumvent this limitation will fail, and will increasingly need to be backed up by the sort of real-world police-state measures that the entertainment industry is demanding in order to make copy-protection work. That’s how China does it: police, informants, and fear.
Just as the music industry needs to learn that the world of bits requires a different business model, law enforcement and others need to understand that the old ideas of identification don’t work on the Internet. For good or for bad, whether you like it or not, there’s always going to be anonymity on the Internet.
This essay originally appeared in Information Security, as part of a point/counterpoint with Marcus Ranum. You can read Marcus’s response below my essay.
Comments that anonymity is bad:
Storing child porn on innocents’ computers:
News

Clever ruse by a prison escape artist.
I don’t know if this discussion of privacy violations by Facebook employees is real, but it seems perfectly reasonable that all of Facebook is stored in a huge database that someone with the proper permissions can access and modify. And it also makes sense that developers and others would need the ability to assume anyone’s identity.
The problems of profiling at security checkpoints.
Wrasse punishing cheaters:
It’s a classic fishing attack.
Neat pictures of an ATM skimmer. I would never have noticed it, which is precisely the point.
Nice article on web security.
Transport Canada on its new security regulations. Okay, it’s really the Rick Mercer Report.
Penny shooter business card. Of course, this means the TSA will start banning wallets on airplanes.
UK police are planning on using unmanned spy drones for “routine” monitoring.
New technology could scan cargo for nuclear material and conventional explosives.
January 8th was World Privacy Day.
Celebrate by signing on to the Madrid Privacy Declaration, either as an individual or as an organization.
How unique is your browser? Can you be tracked simply by its characteristics? The EFF is trying to find out. Its site Panopticlick will measure the characteristics of your browser setup and tell you how unique it is. I ran the test on myself, and my browser is unique amongst the 120,000 browsers tested at that time. It’s my browser plugin details; no one else has the exact configuration I do. My list of system fonts is almost unique; only one other person has the exact configuration I do. (This seems odd to me; I have a week-old Sony laptop running Windows 7, and I haven’t done anything with the fonts.) EFF has some suggestions for self-defense, none of them very satisfactory.
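The idea behind a test like Panopticlick can be sketched in a few lines: hash together the attributes a site can read from your browser, and measure how much identifying information they carry. The attribute values below are invented for illustration.

```python
# Sketch of browser fingerprinting: serialize readable attributes in a
# fixed order, hash them, and estimate the identifying information.
import hashlib
import math

attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.1) Gecko Firefox/3.6",
    "plugins": "Flash 10.0;QuickTime 7.6;Silverlight 3.0",
    "fonts": "Arial;Calibri;Segoe UI;Times New Roman",
    "screen": "1366x768x24",
    "timezone_offset_min": "-480",
}

# Two browsers share a fingerprint only if every attribute matches exactly,
# which is why plugin and font lists are so revealing.
canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()

# Being unique among 120,000 browsers means your configuration carries at
# least log2(120,000), about 17 bits, of identifying information.
bits_to_identify = math.log2(120_000)
print(fingerprint[:16], round(bits_to_identify, 1))
```

This also shows why the self-defense options are unsatisfying: changing any one attribute just produces a different, equally distinctive hash unless you blend in with many other users.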
Deconfliction: this is well worth watching.
The Foreign Policy website has its own list of movie-plot threats: machine-gun wielding terrorists on paragliders, disease-laden insect swarms, a dirty bomb made from smoke detector parts, planning via online games, and botulinum toxin in the food supply. The site fleshes these threats out a bit, but it’s nothing regular readers of this blog can’t imagine for themselves. Maybe they should have their own movie-plot threat contest.
Scaring the Senate Intelligence Committee.
10 cartoons about airport security.
Interesting research on the limits of visual inspection. This has implications for searching for contraband at airports.
Isn’t it a bit embarrassing for an “expert on counter-terrorism” to be quoted as saying this? “Bill Tupman, an expert on counter-terrorism from Exeter University, told BBC News: ‘The problem is trying to predict the mind of the al-Qaeda planner; there are so many things they might do. And it is also necessary to reassure the public that we are trying to outguess the al-Qaeda planner and we are in the process of protecting them from any threat.'” I think it’s necessary to convince the public to refuse to be terrorized. What frustrates me most about Abdulmutallab is that he caused terror even though his plot failed. I want us to be indomitable enough for the next attack to fail to cause terror, even if it succeeds. Remember: terrorism can’t destroy our country’s way of life; only our reaction to terrorism can.
Subversive organizations registering in South Carolina. The rumor was that the law was passed this year, but it seems to be from the 1950s.
Dahlia Lithwick on “Terrorism Derangement Syndrome”:
Terrorists prohibited from using iTunes:
Interview with a Nigerian Internet scammer:
Man-in-the-middle attack against chip-and-PIN payment card system:
Car key copier:
Nice article about a would-be spy and his homebrew pencil-and-paper cryptography.
Crypto comic book:
James Fallows and the Chinese cyber-threat:
Security and Function Creep

Security is rarely static. Technology changes both security systems and attackers. But there’s something else that changes security’s cost/benefit trade-off: how the underlying systems being secured are used. Far too often we build security for one purpose, only to find it being used for another purpose — one it wasn’t suited for in the first place. And then the security system has to play catch-up.
Take driver’s licenses, for example. Originally designed to demonstrate a credential — the ability to drive a car — they looked like other credentials: medical licenses or elevator certificates of inspection. They were wallet-sized, of course, but they didn’t have much security associated with them. Then, slowly, driver’s licenses took on a second application: they became age-verification tokens in bars and liquor stores. Of course the security wasn’t up to the task — teenagers can be extraordinarily resourceful if they set their minds to it — and over the decades driver’s licenses got photographs, tamper-resistant features (once, it was easy to modify the birth year), and technologies that made counterfeiting harder. There was little value in counterfeiting a driver’s license, but a lot of value in counterfeiting an age-verification token.
Today, US driver’s licenses are taking on yet another function: security against terrorists. The Real ID Act — the government’s attempt to make driver’s licenses even more secure — has nothing to do with driving or even with buying alcohol, and everything to do with trying to make that piece of plastic an effective way to verify that someone is not on the terrorist watch list. Whether this is a good idea, or actually improves security, is another matter entirely.
You can see this kind of function creep everywhere. Internet security systems designed for informational Web sites are suddenly expected to provide security for banking Web sites. Security systems that are good enough to protect cheap commodities from being stolen are suddenly ineffective once the price of those commodities rises high enough. Application security systems, designed for locally owned networks, are expected to work even when the application is moved to a cloud computing environment. And cloud computing security, designed for the needs of corporations, is expected to be suitable for government applications as well — maybe even military applications.
Sometimes it’s obvious that security systems designed for one environment won’t work in another. We don’t arm our soldiers the same way we arm our policemen, and we can’t take commercial vehicles and easily turn them into ones outfitted for the military. We understand that we might need to upgrade our home security system if we suddenly come into possession of a bag of diamonds. Yet many think the same security that protects our home computers will also protect voting machines, and the same operating systems that run our businesses are suitable for military uses.
But these are all conscious decisions, and we security professionals often know better. The real problems arise when the changes happen in the background, without any conscious thought. We build a network security system that’s perfectly adequate for the threat and — like a driver’s license becoming an age-verification token — the network accrues more and more functions. But because it has already been pronounced “secure,” we can’t get any budget to re-evaluate and improve the security until after the bad guys have figured out the vulnerabilities and exploited them.
I don’t like having to play catch-up in security, but we seem doomed to keep doing so.
This essay originally appeared in the January/February 2010 issue of IEEE Security and Privacy.
TSA Logo Contest Semi-Finalists

Last month, I announced a contest to redesign the TSA logo. We have finalists. Go and vote in the blog comments; clicking on the images will bring up larger and easier-to-read versions.
Patrick Smith and I promised copies of our books to the winner. The winner will also receive a fake boarding pass for any flight on any date, and an empty 12-ounce bottle labelled “saline” that can be refilled and taken through any TSA security checkpoint.
The Chinese Attack Against Google

Last month, Google announced a sophisticated attack against them from China. There have been some interesting technical details since then.
The rumor that China used a system Google put in place to enable lawful intercepts, which I used as a news hook for an essay on CNN.com, has not been confirmed. At this point, I doubt that it’s true.
My initial reaction:
My CNN.com essay. Again, the rumor that the Chinese attack made use of pathways established for lawful interception seems not to be true.
Another blog entry on the topic:
Google and the NSA: the world’s largest data collector teams up with the world’s largest data collector. Does anyone think this is a good idea?
EPIC has filed a Freedom of Information Act request for information about the agreement:
I’ve already written about the NSA’s role in securing cyberspace:
Schneier News

I was interviewed on the New Horizons radio show in Boise:
I’ll be speaking at the Minneapolis College of Art and Design on Feb 18:
I’m also speaking at the RSA Conference in San Francisco on March 4:
New Attack on Threefish

At FSE 2010 last week, Dmitry Khovratovich and Ivica Nikolic presented a paper where they cryptanalyze ARX algorithms (algorithms that use only addition, rotation, and exclusive-OR operations): “Rotational Cryptanalysis of ARX.” In the paper, they demonstrate their attack against Threefish. Their attack breaks 39 (out of 72) rounds of Threefish-256 with a complexity of 2^252.4, 42 (out of 72) rounds of Threefish-512 with a complexity of 2^507, and 43.5 (out of 80) rounds of Threefish-1024 with a complexity of 2^1014.5. (Yes, that’s over 2^1000. Don’t laugh; it really is a valid attack, even though it — or any of these others — will never be practical.)
This is excellent work, and represents the best attacks against Threefish to date. (I suspect that the attacks can be extended a few more rounds with some clever cryptanalytic tricks, but no further.) The security of full Threefish isn’t at risk, of course; there’s still plenty of security margin.
We have always stood by the security of Threefish with any set of non-obviously-bad constants. Still, a trivial modification — changing a single constant in the key schedule — dramatically reduces the number of rounds through which this attack can penetrate. If NIST allows another round of tweaks to the SHA-3 candidate algorithms, we will almost certainly take the opportunity to improve Skein’s security; we’ll change this constant to a value that removes the rotational symmetries that this technique exploits. If they don’t, we’re still confident of the security of Threefish and Skein.
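The rotational property the attack exploits is easy to demonstrate. In an ARX design, rotation distributes exactly over XOR and over other rotations, but only probabilistically over addition, because carries break the symmetry: for random n-bit words, rot(x + y) equals rot(x) + rot(y) with probability about (1 + 2^-r)/4, roughly 0.28 for a rotation by r = 3. The sketch below checks both facts empirically; the word size and rotation amount are chosen for illustration.

```python
# Demonstrate why rotational pairs propagate through ARX operations:
# rotation commutes exactly with XOR, but only sometimes with addition.
import random

N = 64
MASK = (1 << N) - 1

def rotl(x: int, r: int) -> int:
    return ((x << r) | (x >> (N - r))) & MASK

r = 3
rng = random.Random(2010)

# XOR commutes with rotation exactly, for every input pair.
for _ in range(1000):
    x, y = rng.getrandbits(N), rng.getrandbits(N)
    assert rotl(x ^ y, r) == rotl(x, r) ^ rotl(y, r)

# Addition commutes with rotation only when the carries cooperate.
trials = 100_000
hits = 0
for _ in range(trials):
    x, y = rng.getrandbits(N), rng.getrandbits(N)
    if rotl((x + y) & MASK, r) == (rotl(x, r) + rotl(y, r)) & MASK:
        hits += 1
ratio = hits / trials
print(ratio)  # close to 0.28 for r = 3
```

A constant without rotational symmetry injected into the state destroys this relationship between the two computations, which is why changing a single key-schedule constant is enough to blunt the attack.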
SHA-3 candidate algorithms:
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2010 by Bruce Schneier.