
The Human Side of Heartbleed

The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.

It was a software insecurity, but the problem was entirely human.

Software has vulnerabilities because it’s written by people, and people make mistakes—thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.
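
The mistake itself was a textbook failure to validate a length field: OpenSSL's heartbeat handler copied back as many bytes as the request claimed to contain, not as many as the client actually sent. OpenSSL is written in C, but the class of bug is easy to sketch. Here is a minimal, hypothetical Python illustration of the pattern, not OpenSSL's actual code; the MEMORY buffer stands in for the process's memory, where the payload sits next to unrelated secrets:

    # Simplified illustration of the Heartbleed bug class, not OpenSSL's actual code.
    # The process's memory is modeled as one flat buffer: the heartbeat payload
    # sits at the start, with unrelated secrets lying right next to it.
    MEMORY = bytearray(b"ping" + b"|SERVER_PRIVATE_KEY|OTHER_USERS_PASSWORDS")

    def broken_heartbeat(claimed_length: int) -> bytes:
        # BUG: trusts the length the client claims, with no bounds check,
        # so it echoes back whatever happens to sit beyond the payload.
        return bytes(MEMORY[:claimed_length])

    def fixed_heartbeat(claimed_length: int, actual_payload: bytes) -> bytes:
        # FIX: silently discard requests whose claimed length exceeds
        # the payload that actually arrived.
        if claimed_length > len(actual_payload):
            return b""
        return actual_payload[:claimed_length]

    print(broken_heartbeat(4))   # b'ping' -- the honest case
    print(broken_heartbeat(40))  # b'ping' plus 36 bytes of adjacent secrets

The fix that shipped in OpenSSL 1.0.1g does essentially what fixed_heartbeat does: it checks the stated payload length against the record that actually arrived and discards mismatches.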

In retrospect, the mistake should have been obvious, and it’s amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.

The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google’s security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.

When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it’s announced.

The researchers alerted some of the larger companies quietly so that they could fix their systems before the public announcement. (Who to tell early is another very human problem: If you tell too few, you’re not really helping, but if you tell too many, the secret could get out.) Then Codenomicon announced the vulnerability.

One of the biggest problems we face in the security community is how to communicate these sorts of vulnerabilities. The story is technical, and people often don’t know how to react to the risk. In this case, the Codenomicon researchers did well. They created a public website explaining (in simple terms) the vulnerability and how to fix it, and they created a logo—a red bleeding heart—that every news outlet used for coverage of the story.

The first week of coverage varied widely, as some people panicked and others downplayed the threat. This wasn’t surprising: There was a lot of uncertainty about the risk, and it wasn’t immediately obvious how disastrous the vulnerability actually was.

The major Internet companies were quick to patch vulnerable systems. Individuals were less likely to update their passwords, but by and large, that was OK.

True to form, hackers started exploiting the vulnerability within minutes of the announcement. We assume that governments also exploited the vulnerability while they could. I’m sure the U.S. National Security Agency had advance warning.

By now, it’s largely over. There are still lots of unpatched systems out there. (Many of them are embedded hardware systems that can’t be patched.) The risk of attack is still there, but minimal. In the end, the actual damage was also minimal, although the expense of restoring security was great.

The question that remains is this: What should we expect in the future—are there more Heartbleeds out there?

Yes. Yes there are. The software we use contains thousands of mistakes—many of them security vulnerabilities. Lots of people are looking for these vulnerabilities: Researchers are looking for them. Criminals and hackers are looking for them. National intelligence agencies in the United States, the United Kingdom, China, Russia, and elsewhere are looking for them. The software vendors themselves are looking for them.

What happens when a vulnerability is found depends on who finds it. If the vendor finds it, it quietly fixes it. If a researcher finds it, he or she alerts the vendor and then reports it to the public. If a national intelligence agency finds the vulnerability, it either quietly uses it to spy on others or—if we’re lucky—alerts the vendor. If criminals and hackers find it, they use it until a security company notices and alerts the vendor, and then it gets fixed—usually within a month.

Heartbleed was unique because there was no single fix. The software had to be updated, and then websites had to regenerate their encryption keys and get new public-key certificates. After that, people had to update their passwords. This multi-stage process had to take place publicly, which is why the announcement happened the way it did.
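
As a concrete illustration of the re-keying step, here is a small Python sketch, using only the standard library, that checks whether a site's current certificate was issued after the April 7 disclosure. This is a rough heuristic of my own, not an official test: a certificate issued afterward suggests the site re-keyed, though it doesn't prove the old certificate was revoked or the key actually changed.

    import socket
    import ssl

    # Heartbleed was publicly disclosed on April 7, 2014. A certificate issued
    # before that date may still be bound to a private key that was exposed.
    DISCLOSURE = ssl.cert_time_to_seconds("Apr 7 00:00:00 2014 GMT")

    def reissued_since_disclosure(host: str, port: int = 443) -> bool:
        """Return True if the server's certificate postdates the disclosure."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()  # parsed dict; available once verified
        return ssl.cert_time_to_seconds(cert["notBefore"]) > DISCLOSURE

    print(reissued_since_disclosure("example.com"))  # any hostname works here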

Yes, it’ll happen again. But most of the time, it’ll be easier to deal with than this.

This essay previously appeared on The Mark News.

Posted on June 4, 2014 at 6:23 AM

Chinese Hacking of the US

Chinese hacking of American computer networks is old news. For years we’ve known about their attacks against U.S. government and corporate targets. We’ve seen detailed reports of how they hacked The New York Times. Google has detected them going after Gmail accounts of dissidents. They’ve built sophisticated worldwide eavesdropping networks. These hacks target both military secrets and corporate intellectual property. They’re perpetrated by a combination of state, state-sponsored and state-tolerated hackers. It’s been going on for years.

On Monday, the Justice Department indicted five Chinese hackers in absentia, all associated with the Chinese military, for stealing corporate secrets from U.S. energy, metals and manufacturing companies. It’s entirely for show; the odds that the Chinese are going to send these people to the U.S. to stand trial are zero. But it does move what had been mostly a technical security problem into the world of diplomacy and foreign policy. By doing this, the U.S. government is taking a very public stand and saying “enough.”

The problem with that stand is that we’ve been doing much the same thing to China. Documents revealed by the whistleblower Edward Snowden show that the NSA has penetrated Chinese government and commercial networks, and is exfiltrating—that’s NSA talk for stealing—an enormous amount of secret data. We’ve hacked the networking hardware of one of their own companies, Huawei. We’ve intercepted networking equipment being sent there and installed monitoring devices. We’ve been listening in on their private communications channels.

The only difference between the U.S.’s actions and China’s is that the U.S. doesn’t engage in direct industrial espionage. That is, we don’t steal secrets from Chinese companies and pass them directly to U.S. competitors. But we do engage in economic espionage; we steal secrets from Chinese companies for an advantage in government trade negotiations, which directly benefits U.S. competitors. We might think this difference is important, but other countries are not as impressed with our nuance.

Already the Chinese are retaliating against the U.S. actions with rhetoric of their own. I don’t know the Chinese expression for ‘pot calling the kettle black,’ but it certainly fits in this case.

Again, none of this is new. The U.S. and the Chinese have been conducting electronic espionage on each other throughout the Cold War, and there’s no reason to think it’s going to change anytime soon. What’s different now is the ease with which the two countries can do this safely and remotely, over the Internet, as well as the massive amount of information that can be stolen with a few computer commands.

On the Internet today, it is much easier to attack systems and break into them than it is to defend those systems against attack, so the advantage is to the attacker. This is true for a combination of reasons: the ability of an attacker to concentrate his attack, the nature of vulnerabilities in computer systems, poor software quality and the enormous complexity of computer systems.

The computer security industry is used to coping with criminal attacks. In general, such attacks are untargeted. Criminals might have broken into Target’s network last year and stolen 40 million credit and debit card numbers, but they would have been happy with any retailer’s large credit card database. If Target’s security had been better than its competitors’, the criminals would have gone elsewhere. In this way, security is relative.

The Chinese attacks are different. For whatever reason, the government hackers wanted certain information inside the networks of Alcoa World Alumina, Westinghouse Electric, Allegheny Technologies, U.S. Steel, United Steelworkers Union and SolarWorld. It wouldn’t have mattered how those companies’ security compared with other companies; all that mattered was whether it was better than the ability of the attackers.

This is a fundamentally different security model—often called APT or Advanced Persistent Threat—and one that is much more difficult to defend against.

In a sense, American corporations are collateral damage in this battle of espionage between the U.S. and China. Taking the battle from the technical sphere into the foreign policy sphere might be a good idea, but it will work only if we have some moral high ground from which to demand that others not spy on us. As long as we run the largest surveillance network in the world and hack computer networks in foreign countries, we’re going to have trouble convincing others not to attempt the same on us.

This essay previously appeared on Time.com.

Posted on June 2, 2014 at 6:37 AM

TrueCrypt WTF

I have no idea what’s going on with TrueCrypt. There’s a good summary of the story at ArsTechnica, and Slashdot, Hacker News, and Reddit all have long comment threads. See also Brian Krebs and Cory Doctorow.

Speculations include a massive hack of the TrueCrypt developers, some Lavabit-like forced shutdown, and an internal power struggle within TrueCrypt. I suppose we’ll have to wait and see what develops.

Posted on May 29, 2014 at 8:02 AM

The Economics of Bulk Surveillance

Ross Anderson has an important new paper on the economics that drive government-on-population bulk surveillance:

My first big point is that all the three factors which lead to monopoly – network effects, low marginal costs and technical lock-in – are present and growing in the national-intelligence nexus itself. The Snowden papers show that neutrals like Sweden and India are heavily involved in information sharing with the NSA, even though they have tried for years to pretend otherwise. A non-aligned country such as India used to be happy to buy warplanes from Russia; nowadays it still does, but it shares intelligence with the NSA rather than the FSB. If you have a choice of joining a big spy network like America’s or a small one like Russia’s, then it’s like choosing whether to write software for the PC or the Mac back in the 1990s. It may be partly an ideological choice, but the economics can often be stronger than the ideology.

Second, modern warfare, like the software industry, has seen the bulk of its costs turn from variable costs into fixed costs. In medieval times, warfare was almost entirely a matter of manpower, and society was organised appropriately; as well as rent or produce, tenants owed their feudal lord forty days’ service in peacetime, and sixty days during a war. Barons held their land from the king in return for an oath of fealty, and a duty to provide a certain size of force on demand; priests and scholars paid a tax in lieu of service, so that a mercenary could be hired in their place. But advancing technology brought steady industrialisation. When the UK and the USA attacked Germany in 1944, we did not send millions of men to Europe, as in the first world war, but a combat force of a couple of hundred thousand troops – though with thousands of tanks and backed by larger numbers of men in support roles in tens of thousands of aircraft and ships. Nowadays the transition from labour to capital has gone still further: to kill a foreign leader, we could get a drone to fire a missile that costs $30,000. But that’s backed by colossal investment – the firms whose data are tapped by PRISM have a combined market capitalisation of over $1 trillion.

Third is the technical lock-in, which operates at a number of levels. First, there are lock-in effects in the underlying industries, where (for example) Cisco dominates the router market: those countries that have tried to build US-free information infrastructures (China) or even just government information infrastructures (Russia, Germany) find it’s expensive. China went to the trouble of sponsoring an indigenous vendor, Huawei, but it’s unclear how much separation that buys them because of the common code shared by router vendors: a vulnerability discovered in one firm’s products may affect another. Thus the UK government lets BT buy Huawei routers for all but its network’s most sensitive parts (the backbone and the lawful-intercept functions). Second, technical lock-in affects the equipment used by the intelligence agencies themselves, and is in fact promoted by the agencies via ETSI standards for functions such as lawful intercept.

Just as these three factors led to the IBM network dominating the mainframe age, the Intel/Microsoft network dominating the PC age, and Facebook dominating the social networking scene, so they push strongly towards global surveillance becoming a single connected ecosystem.

These are important considerations when trying to design national policies around surveillance.

Ross’s blog post.

Posted on May 27, 2014 at 10:13 AM

Peter Watts on the Harms of Surveillance

Biologist Peter Watts makes some good points:

Mammals don’t respond well to surveillance. We consider it a threat. It makes us paranoid, and aggressive and vengeful.

[…]

“Natural selection favors the paranoid,” Watts said. Those who run away. In the earliest days of man on the savannah, when we roamed among the predatory, wild animals, someone realized pretty quickly that lions stalked their prey from behind the tall, untamed grass. And so anyone hoping to keep on breathing developed a healthy fear of the lions in the grass and listened for the rustling in the brush in order to avoid becoming lunch for an animal more powerful than themselves. It was instinct. If the rustling, the perceived surveillance, turns out to just be the wind? Well, no harm done.

“For a very long time, people who don’t see agency have a disproportionate tendency to get eaten,” Watts noted.

And so, we’ve developed those protective instincts. “We see faces in the clouds; we hear ghosts and monsters in the stairs at night,” Watts said. “The link between surveillance and fear is a lot deeper than the average privacy advocate is willing to admit.”

[…]

“A lot of critics say blanket surveillance treats us like criminals, but it’s deeper than that,” he said. “It makes us feel like prey. We’re seeing stalking behavior in the illogical sense,” he said.

This is interesting. People accept government surveillance out of fear: fear of the terrorists, fear of the criminals. If Watts is right, then there’s a conflict of fears. Because the fear of terrorists and criminals—kidnappers, child pornographers, drug dealers, whatever—is more evocative than the nebulous fear of being stalked, it wins.

EDITED TO ADD (5/23): His own post is better than the write-up.

EDITED TO ADD (5/24): Peter Watts has responded to this post, complaining about the misquotes in the article I quoted. He will post a transcript of his talk, so we can see what he actually said. My guess is that I will still agree with it.

He also recommended this post of his, which is well worth reading.

EDITED TO ADD (5/27): Here is the transcript.

Posted on May 23, 2014 at 6:42 AM

Disclosing vs. Hoarding Vulnerabilities

There’s a debate going on about whether the US government—specifically, the NSA and United States Cyber Command—should stockpile Internet vulnerabilities or disclose and fix them. It’s a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.

A software vulnerability is a programming mistake that allows an adversary access to the system running that software. Heartbleed is a recent example, but hundreds are discovered every year.

Unpublished vulnerabilities are called “zero-day” vulnerabilities, and they’re very valuable because no one is protected. Someone with one of those can attack systems world-wide with impunity.

When someone discovers one, he can either use it for defense or for offense. Defense means alerting the vendor and getting it patched. Lots of vulnerabilities are discovered by the vendors themselves and patched without any fanfare. Others are discovered by researchers and hackers. A patch doesn’t make the vulnerability go away; systems that never install it remain exposed. But most users protect themselves by patching their systems regularly.

Offense means using the vulnerability to attack others. This is the quintessential zero-day, because the vendor doesn’t even know the vulnerability exists until it starts being used by criminals or hackers. Eventually the affected software’s vendor finds out—the timing depends on how extensively the vulnerability is used—and issues a patch to close the vulnerability.

If an offensive military cyber unit or a cyber-weapons arms manufacturer discovers the vulnerability, it keeps it secret for use in delivering a cyber-weapon. If the weapon is used stealthily, the vulnerability might remain secret for a long time. If unused, it’ll remain secret until someone else discovers it.

Discoverers can sell vulnerabilities. There’s a rich market in zero-days for attack purposes—both military/commercial and black markets. Some vendors offer bounties for vulnerabilities to incent defense, but the amounts are much lower.

The NSA can play either defense or offense. It can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it to eavesdrop on foreign computer systems. Both are important US policy goals, but the NSA has to choose which one to pursue. By fixing the vulnerability, it strengthens the security of the Internet against all attackers: other countries, criminals, hackers. By leaving the vulnerability open, it is better able to attack others on the Internet. But each use runs the risk of the target government learning of, and using for itself, the vulnerability—or of the vulnerability becoming public and criminals starting to use it.

There is no way to simultaneously defend US networks while leaving foreign networks open to attack. Everyone uses the same software, so fixing us means fixing them, and leaving them vulnerable means leaving us vulnerable. As Harvard Law Professor Jack Goldsmith wrote, “every offensive weapon is a (potential) chink in our defense—and vice versa.”

To make matters even more difficult, there is an arms race going on in cyberspace. The Chinese, the Russians, and many other countries are finding vulnerabilities as well. If we leave a vulnerability unpatched, we run the risk of another country independently discovering it and using it in a cyber-weapon that we will be vulnerable to. But if we patch all the vulnerabilities we find, we won’t have any cyber-weapons to use against other countries.

Many people have weighed in on this debate. The president’s Review Group on Intelligence and Communications Technologies, convened post-Snowden, concluded (recommendation 30) that vulnerabilities should only be hoarded in rare instances and for short times. Cory Doctorow calls it a public health problem. I have said similar things. Dan Geer recommends that the US government corner the vulnerabilities market and fix them all. Both the FBI and the intelligence agencies claim that this amounts to unilateral disarmament.

It seems like an impossible puzzle, but the answer hinges on how vulnerabilities are distributed in software.

If vulnerabilities are sparse, then it’s obvious that every vulnerability we find and fix improves security. We render a vulnerability unusable, even if the Chinese government already knows about it. We make it impossible for criminals to find and use it. We improve the general security of our software, because we can find and fix most of the vulnerabilities.

If vulnerabilities are plentiful—and this seems to be true—the ones the US finds and the ones the Chinese find will largely be different. This means that patching the vulnerabilities we find won’t make it appreciably harder for criminals to find the next one. We don’t really improve general software security by disclosing and patching unknown vulnerabilities, because the percentage we find and fix is small compared to the total number that are out there.
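
A toy model makes that intuition concrete. Suppose there are N exploitable vulnerabilities in the world’s software, and two parties each independently find k of them, uniformly at random; both assumptions are deliberate oversimplifications, and the numbers below are made up for illustration. The expected overlap between the two stockpiles is k²/N, which collapses toward zero as N grows:

    import random

    def expected_overlap(n_vulns: int, k_found: int, trials: int = 2000) -> float:
        """Monte Carlo estimate of how many vulnerabilities two independent,
        uniform-random searchers both find (analytically, k**2 / n_vulns)."""
        total = 0
        population = range(n_vulns)
        for _ in range(trials):
            ours = set(random.sample(population, k_found))
            theirs = set(random.sample(population, k_found))
            total += len(ours & theirs)
        return total / trials

    # Sparse world: 200 bugs, each side finds 100 -> overlap around 50.
    print(expected_overlap(200, 100))
    # Plentiful world: 100,000 bugs, each side finds 100 -> overlap around 0.1.
    print(expected_overlap(100_000, 100))

In the sparse world, almost every fix denies an adversary a weapon; in the plentiful world, it almost never does, which is exactly the distinction this debate turns on.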

But while vulnerabilities are plentiful, they’re not uniformly distributed. There are easier-to-find ones, and harder-to-find ones. Tools that automatically find and fix entire classes of vulnerabilities, and coding practices that eliminate many easy-to-find ones, greatly improve software security. And when a person finds a vulnerability, it is likely that another person soon will, or recently has, found the same vulnerability. Heartbleed, for example, remained undiscovered for two years, and then two independent researchers discovered it within two days of each other. This is why it is important for the government to err on the side of disclosing and fixing.

The NSA, and by extension US Cyber Command, tries its best to play both ends of this game. Former NSA Director Michael Hayden talks about NOBUS, “nobody but us.” The NSA has a classified process to determine what it should do about vulnerabilities, disclosing and closing most of the ones it finds, but holding back some—we don’t know how many—vulnerabilities that “nobody but us” could find for attack purposes.

This approach seems to be the appropriate general framework, but the devil is in the details. Many of us in the security field don’t know how to make NOBUS decisions, and the recent White House clarification posed more questions than it answered.

Who makes these decisions, and how? How often are they reviewed? Does this review process happen inside the Department of Defense, or is it broader? Surely there needs to be a technical review of each vulnerability, but there should also be policy reviews regarding the sorts of vulnerabilities we are hoarding. Do we hold these vulnerabilities until someone else finds them, or only for a short period of time? How many do we stockpile? The US/Israeli cyberweapon Stuxnet used four zero-day vulnerabilities. Burning four on a single military operation implies that the stockpile is not small; it is more likely on the order of 100 or more.

There’s one more interesting wrinkle. Cyber-weapons are a combination of a payload—the damage the weapon does—and a delivery mechanism: the vulnerability used to get the payload into the enemy network. Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it for attack itself? If it discloses, then China could find a replacement vulnerability that the NSA won’t know about. But if it doesn’t, it’s deliberately leaving the US vulnerable to cyber-attack. Maybe someday we can get to the point where we can patch vulnerabilities faster than the enemy can use them in an attack, but we’re nowhere near that point today.

The implications of US policy can be felt on a variety of levels. The NSA’s actions have resulted in a widespread mistrust of the security of US Internet products and services, greatly affecting American business. If we show that we’re putting security ahead of surveillance, we can begin to restore that trust. And by making the decision process much more public than it is today, we can demonstrate both our trustworthiness and the value of open government.

An unpatched vulnerability puts everyone at risk, but not to the same degree. The US and other Western countries are highly vulnerable, because of our critical electronic infrastructure, intellectual property, and personal wealth. Countries like China and Russia are less vulnerable—North Korea much less—so they have considerably less incentive to see vulnerabilities fixed. Fixing vulnerabilities isn’t disarmament; it’s making our own countries much safer. We also regain the moral authority to negotiate any broad international reductions in cyber-weapons; and we can decide not to use them even if others do.

Regardless of our policy towards hoarding vulnerabilities, the most important thing we can do is patch vulnerabilities quickly once they are disclosed. And that’s what companies are doing, even without any government involvement, because so many vulnerabilities are discovered by criminals.

We also need more research in automatically finding and fixing vulnerabilities, and in building secure and resilient software in the first place. Research over the last decade or so has resulted in software vendors being able to find and close entire classes of vulnerabilities. Although there are many cases of these security analysis tools not being used, all of our security is improved when they are. That alone is a good reason to continue disclosing vulnerability details, and something the NSA can do to vastly improve the security of the Internet worldwide. Here again, though, the NSA would have to make its automatic vulnerability-finding tools available for defense, not attack.

In today’s cyberwar arms race, unpatched vulnerabilities and stockpiled cyber-weapons are inherently destabilizing, especially because they are only effective for a limited time. The world’s militaries are investing more money in finding vulnerabilities than the commercial world is investing in fixing them. The vulnerabilities they discover affect the security of us all. No matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security and fix almost all the vulnerabilities we find. But not all, yet.

This essay previously appeared on TheAtlantic.com.

Posted on May 22, 2014 at 6:15 AM
