June 15, 2012
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1206.html>. These same essays and news items appear in the "Schneier on Security" blog at <http://www.schneier.com/blog>, along with a lively comment section. An RSS feed is available.
Recently, there have been several articles about the new market in zero-day exploits: new and unpatched computer vulnerabilities. It's not just software companies, who sometimes pay bounties to researchers who alert them to security vulnerabilities so they can fix them. And it's not only criminal organizations, who pay for vulnerabilities they can exploit. Now there are governments, and companies who sell to governments, who buy vulnerabilities with the intent of keeping them secret so they can exploit them.
This market is larger than most people realize, and it's becoming even larger. Forbes recently published a price list for zero-day exploits, along with the story of a hacker who received $250K from "a U.S. government contractor." (At first I didn't believe the story or the price list, but I have since been convinced that both are true.) Forbes also published a profile of a company called Vupen, whose business is selling zero-day exploits. Other companies doing this range from startups like Netragard and Endgame to large defense contractors like Northrop Grumman, General Dynamics, and Raytheon.
This is very different than in 2007, when researcher Charlie Miller wrote about his attempts to sell zero-day exploits; and a 2010 survey implied that there wasn't much money in selling zero days. The market has matured substantially in the past few years.
This new market perturbs the economics of finding security vulnerabilities. And it does so to the detriment of us all.
I've long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full disclosure -- announcing the vulnerability publicly and damn the consequences -- to something called "responsible disclosure": giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that -- at least in most cases -- is patched. And a patched vulnerability makes us all more secure.
This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it's even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they're working on -- and then secretly sell them to some government agency.
No commercial vendors perform the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage.
Even more importantly, the new market for security vulnerabilities results in a variety of government agencies around the world that have a strong interest in those vulnerabilities remaining unpatched. These range from law-enforcement agencies like the FBI and the German police, who are trying to build targeted Internet surveillance tools, to intelligence agencies like the NSA, who are trying to build mass Internet surveillance tools, to military organizations, who are trying to build cyber-weapons.
All of these agencies have long had to wrestle with the choice of whether to use newly discovered vulnerabilities to protect or to attack. Inside the NSA, this was traditionally known as the "equities issue," and the debate was between the COMSEC (communications security) side of the NSA and the SIGINT (signals intelligence) side. If they found a flaw in a popular cryptographic algorithm, they could either use that knowledge to fix the algorithm and make everyone's communications more secure, or they could exploit the flaw to eavesdrop on others -- while at the same time allowing even the people they wanted to protect to remain vulnerable. This debate raged through the decades inside the NSA. From what I've heard, by 2000, the COMSEC side had largely won, but things flipped completely around after 9/11.
The whole point of disclosing security vulnerabilities is to put pressure on vendors to release more secure software. It's not just that they patch the vulnerabilities that are made public -- the fear of bad press makes them implement more secure software development processes. It's another economic process; the cost of designing software securely in the first place is less than the cost of the bad press after a vulnerability is announced plus the cost of writing and deploying the patch. I'd be the first to admit that this isn't perfect -- there's a lot of very poorly written software still out there -- but it's the best incentive we have.
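The incentive argument above can be sketched as a toy cost comparison. All of the dollar figures and the probability below are hypothetical, chosen only to illustrate the shape of the trade-off, not taken from any real data:

```python
# Toy model of a vendor's incentive to build security in up front.
# Every number here is hypothetical, purely for illustration.

def expected_reactive_cost(bad_press_cost, patch_cost, disclosure_probability):
    """Expected cost of shipping insecure software and reacting to disclosure."""
    return disclosure_probability * (bad_press_cost + patch_cost)

secure_design_cost = 100_000      # cost of designing securely in the first place

reactive_cost = expected_reactive_cost(
    bad_press_cost=150_000,       # reputational damage after a public disclosure
    patch_cost=50_000,            # writing and deploying the patch
    disclosure_probability=0.8,   # chance the flaw is found and made public
)

# While public disclosure keeps that probability high, secure design wins:
assert secure_design_cost < reactive_cost  # 100,000 < 160,000
```

The point of the sketch is that the inequality only holds while disclosure is likely: if vulnerabilities are instead sold and kept secret, the probability term shrinks and the vendor's incentive to invest up front shrinks with it.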
We've always expected the NSA, and those like them, to keep the vulnerabilities they discover secret. We have been counting on the public community to find and publicize vulnerabilities, forcing vendors to fix them. With the rise of these new pressures to keep zero-day exploits secret, and to sell them for exploitation, there will be even less incentive for software vendors to ensure the security of their products.
As the incentive for hackers to keep their vulnerabilities secret grows, the incentive for vendors to build secure software shrinks. As a recent EFF essay put it, this is "security for the 1%." And it makes the rest of us less safe.
This essay previously appeared on Forbes.com.
We're in the early years of a cyberwar arms race. It's expensive, it's destabilizing, and it threatens the very fabric of the Internet we use every day. Cyberwar treaties, as imperfect as they might be, are the only way to contain the threat.
If you read the press and listen to government leaders, we're already in the middle of a cyberwar. By any normal definition of the word "war," this is ridiculous. But the definition of cyberwar has been expanded to include government-sponsored espionage, potential terrorist attacks in cyberspace, large-scale criminal fraud, and even hacker kids attacking government networks and critical infrastructure. This definition is being pushed both by the military and by government contractors, who are gaining power and making money on cyberwar fear.
The danger is that military problems beg for military solutions. We're starting to see a power grab in cyberspace by the world's militaries: large-scale monitoring of networks, military control of Internet standards, even military takeover of cyberspace. Last year's debate over an "Internet kill switch" is an example of this; it's the sort of measure that might be deployed in wartime but makes no sense in peacetime. At the same time, countries are engaging in offensive actions in cyberspace, with tools like Stuxnet and Flame.
Arms races stem from ignorance and fear: ignorance of the other side's capabilities, and fear that their capabilities are greater than yours. Once cyberweapons exist, there will be an impetus to use them. Both Stuxnet and Flame damaged networks other than their intended targets. Any military-inserted back doors in Internet systems make us more vulnerable to criminals and hackers. And it is only a matter of time before something big happens, perhaps by the rash actions of a low-level military officer, perhaps by a non-state actor, perhaps by accident. And if the target nation retaliates, we could find ourselves in a real cyberwar.
The cyberwar arms race is destabilizing.
International cooperation and treaties are the only way to reverse this. Banning cyberweapons entirely is a good goal, but almost certainly unachievable. More likely are treaties that stipulate a no-first-use policy, outlaw unaimed or broadly targeted weapons, and mandate weapons that self-destruct at the end of hostilities. Treaties that restrict tactics and limit stockpiles could be a next step. We could prohibit cyberattacks against civilian infrastructure; international banking, for example, could be declared off-limits.
Yes, enforcement will be difficult. Remember how easy it was to hide a chemical weapons facility? Hiding a cyberweapons facility will be even easier. But we've learned a lot from our Cold War experience in negotiating nuclear, chemical, and biological treaties. The very act of negotiating limits the arms race and paves the way to peace. And even if they're breached, the world is safer because the treaties exist.
There's a common belief within the U.S. military that cyberweapons treaties are not in our best interest: that we currently have a military advantage in cyberspace that we should not squander. That's not true. We might have an offensive advantage -- although that's debatable -- but we certainly don't have a defensive advantage. More importantly, as a heavily networked country, we are inherently vulnerable in cyberspace.
Cyberspace threats are real. Military threats might get the publicity, but the criminal threats are both more dangerous and more damaging. Militarizing cyberspace will do more harm than good. The value of a free and open Internet is enormous.
Stop cyberwar fear mongering. Ratchet down cyberspace saber rattling. Start negotiations on limiting the militarization of cyberspace and increasing international police cooperation. This won't magically make us safe, but it will make us safer.
This essay first appeared on the "U.S. News and World Report" website.
According to a report from the DHS Office of Inspector General: "Federal investigators 'identified vulnerabilities in the screening process' at domestic airports using so-called 'full body scanners.'" EPIC obtained an unclassified version of the report in a FOIA response. Here's the summary.
Need some pre-industrial security for your USB drive? How about a wax seal? Neat, but I recommend combining it with encryption for even more security!
"Rules for Radicals" was written in 1971, but it still seems like a cool book. Excerpts from its list of tactics:
Cybersecurity at the doctor's office. I like this essay because it nicely illustrates the security mindset.
This story about a particular advertising fraud is difficult to follow, but the details are really interesting. It nicely illustrates how effective fraud leverages the natural externalities of the Internet to ensure that no one has the incentive to stop it.
Comment on racism as a vestigial remnant of a primitive security mechanism: "Our attitudes toward outgroups are part of a threat-detection system that allows us to rapidly determine friend from foe, says psychologist Steven Neuberg of ASU Tempe. The problem, he says, is that like smoke detectors, the system is designed to give many false alarms rather than miss a true threat. So outgroup faces alarm us even when there is no danger." Lots of interesting stuff in the article. Unfortunately, it requires (free) registration to access.
Privacy concerns around "social reading":
Interesting discussion of trust in this article on web hoaxes.
The banality of surveillance photos: an essay on a trove of surveillance photos from Cold War-era Prague.
The explosive from the latest foiled al Qaeda underwear bomb plot.
An interview with a safecracker: the legal kind.
Researchers found a possible backdoor in Chinese-made military silicon chips. The backdoor is real, but it's unclear whether it was put there surreptitiously by the Chinese or whether it's part of the debugging system of the chip. Lots of links to commentary, a response from the manufacturer, and a response from the researchers, on my blog.
The problem of false alarms in the context of tornado warnings.
When I talk about "Liars and Outliers" to security audiences, one of the things I stress is that our traditional security focus -- on technical countermeasures -- is much narrower than it could be. Leveraging moral, reputational, and institutional pressures is likely to be much more effective in motivating cooperative behavior. This story illustrates the point. It's about the psychology of fraud, "why good people do bad things."
Bar code switching is a particularly clever form of retail theft, especially when salesclerks are working fast and don't know the products. This particular thief switched bar codes to steal Lego sets. If you know Lego, you know there's a vast price difference between the small sets and the large ones. He was caught by in-store surveillance.
News story on Obama's role in Stuxnet and Iranian cyberattacks.
Ghostery is a Firefox plug-in that tracks who is tracking your browsing habits in cyberspace.
Look at the last sentence in this article on hotel cleanliness: "'I relate this to homeland security. We are not any safer, but many people believe that we are,' he said." It's interesting to see the waste-of-money meme used so cavalierly.
This is an interesting essay -- it claims to be the first in a series -- that looks at the rise of "homeland security" as a catastrophic consequence of the 9/11 terrorist attacks:
There have been a bunch of stories about employers demanding passwords to social networking sites, like Facebook, from prospective employees, and several states have passed laws prohibiting this practice. This is the first story I've seen of a country doing this at its borders. The country is Israel, and they're asking for passwords to e-mail accounts.
Cheating -- when you're not supposed to -- in online classes:
A rare rational comment on al Qaeda's capabilities: "Few Americans harbor irrational fears about being killed by a lightning bolt. Abu Yahya al-Libi's death on Monday should remind them that fear of al Qaeda in its present state is even more irrational." Will anyone listen?
Flame seems to be another military-grade cyber-weapon, this one optimized for espionage. The worm is at least two years old, and is mainly confined to computers in the Middle East. (It does not replicate and spread automatically, which is certainly so that its controllers can target it better and evade detection longer.) And its espionage capabilities are pretty impressive. We'll know more in the coming days and weeks as different groups start analyzing it and publishing their results.
Microsoft's detailed blog post on the attack. The attackers managed to get a valid code-signing certificate using a signer that only accepts restricted client certificates.
MITM attack in the worm.
Remember my rebuttal of Sam Harris's essay advocating the profiling of Muslims at airports? That wasn't the end of it. Harris and I conducted a back-and-forth e-mail discussion, the results of which are here. At 14,000+ words, I only recommend it for the most stalwart of readers.
On Sunday, I will be participating in a public discussion about my new book on the FireDogLake website. James Fallows will be the moderator, and I will be answering questions from all comers -- you do have to register an ID, though -- from 5:00 to 7:00 EDT.
I'm speaking at the Workshop on the Economics of Information Security on June 26th in Berlin.
I'm speaking at the CISO Summit on June 28th in Prague.
One reviewer wrote about "Liars and Outliers" in his blog: "L&O is fresh thinking about live fire issues of today as well as moral issues that are ahead. Whatever your policy bent, this book will help you. Trust me on this, you don't have to buy everything Bruce says about TSA to read this book, take it to work, put it down on the table and say, 'this is brilliant stuff.'"
I hosted Kip Hawley on FireDogLake's Book Salon on Sunday last month. It was an interesting discussion.
Last week was the Fifth Interdisciplinary Workshop on Security and Human Behavior, SHB 2012. Google hosted it this year, at its offices in lower Manhattan.
SHB is an invitational gathering of psychologists, computer security researchers, behavioral economists, sociologists, law professors, business school professors, political scientists, anthropologists, philosophers, and others -- all of whom are studying the human side of security -- organized by Alessandro Acquisti, Ross Anderson, and me. It's not just an interdisciplinary event; most of the people here are individually interdisciplinary.
This is the best and most intellectually stimulating conference I attend all year. I told that to one of the participants yesterday, and he said something like: "Of course it is. You've specifically invited everyone you want to listen to." Which is basically correct. The workshop is organized into panels of 6-7 people. Each panelist gets ten minutes to talk about what he or she is working on, and then we spend the rest of the hour and a half in discussion.
Here is the list of participants. The list contains links to readings from each of them -- definitely a good place to browse for more information on this topic. https://www.schneier.com/shb2012/participants/
Ross Anderson, who has far more discipline than I, liveblogged this event. Go to the comments of that blog post to see summaries of the individual sessions.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2012 by Bruce Schneier.