Entries Tagged "Internet"


The Human Side of Heartbleed

The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.

It was a software vulnerability, but the problem was entirely human.

Software has vulnerabilities because it’s written by people, and people make mistakes—thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.

In retrospect, the mistake should have been obvious, and it’s amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.

The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google’s security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.

When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it’s announced.

The researchers alerted some of the larger companies quietly so that they could fix their systems before the public announcement. (Who to tell early is another very human problem: If you tell too few, you’re not really helping, but if you tell too many, the secret could get out.) Then Codenomicon announced the vulnerability.

One of the biggest problems we face in the security community is how to communicate these sorts of vulnerabilities. The story is technical, and people often don’t know how to react to the risk. In this case, the Codenomicon researchers did well. They created a public website explaining (in simple terms) the vulnerability and how to fix it, and they created a logo—a red bleeding heart—that every news outlet used for coverage of the story.

The first week of coverage varied widely, as some people panicked and others downplayed the threat. This wasn’t surprising: There was a lot of uncertainty about the risk, and it wasn’t immediately obvious how disastrous the vulnerability actually was.

The major Internet companies were quick to patch vulnerable systems. Individuals were less likely to update their passwords, but by and large, that was OK.

True to form, hackers started exploiting the vulnerability within minutes of the announcement. We assume that governments also exploited the vulnerability while they could. I’m sure the U.S. National Security Agency had advance warning.

By now, it’s largely over. There are still lots of unpatched systems out there. (Many of them are embedded hardware systems that can’t be patched.) The risk of attack is still there, but minimal. In the end, the actual damage was also minimal, although the expense of restoring security was great.

The question that remains is this: What should we expect in the future—are there more Heartbleeds out there?

Yes. Yes there are. The software we use contains thousands of mistakes—many of them security vulnerabilities. Lots of people are looking for these vulnerabilities: Researchers are looking for them. Criminals and hackers are looking for them. National intelligence agencies in the United States, the United Kingdom, China, Russia, and elsewhere are looking for them. The software vendors themselves are looking for them.

What happens when a vulnerability is found depends on who finds it. If the vendor finds it, it quietly fixes it. If a researcher finds it, he or she alerts the vendor and then reports it to the public. If a national intelligence agency finds the vulnerability, it either quietly uses it to spy on others or—if we’re lucky—alerts the vendor. If criminals and hackers find it, they use it until a security company notices and alerts the vendor, and then it gets fixed—usually within a month.

Heartbleed was unique because there was no single fix. The software had to be updated, and then websites had to regenerate their encryption keys and get new public-key certificates. After that, people had to update their passwords. This multi-stage process had to take place publicly, which is why the announcement happened the way it did.
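A toy model makes the ordering constraint concrete: as long as the server is unpatched, anything that passes through its memory, including a freshly changed password, can leak all over again. This sketch is illustrative only; the `Server` class and its memory model are invented for the example, not real remediation tooling.

```python
# Toy model of why Heartbleed remediation had to happen in order:
# until the server is patched, any secret placed in its memory
# (including a newly changed password) is exposed again.

class Server:
    def __init__(self):
        self.patched = False
        self.memory = set()            # secrets currently sitting in memory

    def login(self, password: str):
        self.memory.add(password)      # passwords pass through server memory

    def attacker_dump(self) -> set:
        # Heartbleed-style memory read: only works while unpatched.
        return set(self.memory) if not self.patched else set()

server = Server()
server.login("old-password")

# Wrong order: change the password before patching...
server.login("new-password-1")
assert "new-password-1" in server.attacker_dump()    # ...and it leaks too

# Right order: patch first, then change the password.
server.patched = True
server.login("new-password-2")
assert "new-password-2" not in server.attacker_dump()
```

The same logic is why re-keying and reissuing certificates had to precede the password changes: a password sent to a server whose stolen private key still validates its certificate is a password sent to anyone holding that key.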

Yes, it’ll happen again. But most of the time, it’ll be easier to deal with than this.

This essay previously appeared on The Mark News.

Posted on June 4, 2014 at 6:23 AM

Chinese Hacking of the US

Chinese hacking of American computer networks is old news. For years we’ve known about their attacks against U.S. government and corporate targets. We’ve seen detailed reports of how they hacked The New York Times. Google has detected them going after Gmail accounts of dissidents. They’ve built sophisticated worldwide eavesdropping networks. These hacks target both military secrets and corporate intellectual property. They’re perpetrated by a combination of state, state-sponsored and state-tolerated hackers. It’s been going on for years.

On Monday, the Justice Department indicted five Chinese hackers in absentia, all associated with the Chinese military, for stealing corporate secrets from U.S. energy, metals and manufacturing companies. It’s entirely for show; the odds that the Chinese are going to send these people to the U.S. to stand trial are zero. But it does move what had been mostly a technical security problem into the world of diplomacy and foreign policy. By doing this, the U.S. government is taking a very public stand and saying “enough.”

The problem with that stand is that we’ve been doing much the same thing to China. Documents revealed by the whistleblower Edward Snowden show that the NSA has penetrated Chinese government and commercial networks, and is exfiltrating—that’s NSA talk for stealing—an enormous amount of secret data. We’ve hacked the networking hardware of one of their own companies, Huawei. We’ve intercepted networking equipment being sent there and installed monitoring devices. We’ve been listening in on their private communications channels.

The only difference between the U.S. and China’s actions is that the U.S. doesn’t engage in direct industrial espionage. That is, we don’t steal secrets from Chinese companies and pass them directly to U.S. competitors. But we do engage in economic espionage; we steal secrets from Chinese companies for an advantage in government trade negotiations, which directly benefits U.S. competitors. We might think this difference is important, but other countries are not as impressed with our nuance.

Already the Chinese are retaliating against the U.S. actions with rhetoric of their own. I don’t know the Chinese expression for “pot calling the kettle black,” but it certainly fits in this case.

Again, none of this is new. The U.S. and the Chinese have been conducting electronic espionage on each other throughout the Cold War, and there’s no reason to think it’s going to change anytime soon. What’s different now is the ease with which the two countries can do this safely and remotely, over the Internet, as well as the massive amount of information that can be stolen with a few computer commands.

On the Internet today, it is much easier to attack systems and break into them than it is to defend those systems against attack, so the advantage is to the attacker. This is true for a combination of reasons: the ability of an attacker to concentrate his attack, the nature of vulnerabilities in computer systems, poor software quality and the enormous complexity of computer systems.

The computer security industry is used to coping with criminal attacks. In general, such attacks are untargeted. Criminals might have broken into Target’s network last year and stolen 40 million credit and debit card numbers, but they would have been happy with any retailer’s large credit card database. If Target’s security had been better than its competitors’, the criminals would have gone elsewhere. In this way, security is relative.

The Chinese attacks are different. For whatever reason, the government hackers wanted certain information inside the networks of Alcoa World Alumina, Westinghouse Electric, Allegheny Technologies, U.S. Steel, United Steelworkers Union and SolarWorld. It wouldn’t have mattered how those companies’ security compared with other companies; all that mattered was whether it was better than the ability of the attackers.

This is a fundamentally different security model—often called APT or Advanced Persistent Threat—and one that is much more difficult to defend against.

In a sense, American corporations are collateral damage in this battle of espionage between the U.S. and China. Taking the battle from the technical sphere into the foreign policy sphere might be a good idea, but it will work only if we have some moral high ground from which to demand that others not spy on us. As long as we run the largest surveillance network in the world and hack computer networks in foreign countries, we’re going to have trouble convincing others not to attempt the same on us.

This essay previously appeared on Time.com.

Posted on June 2, 2014 at 6:37 AM

Is Antivirus Dead?

Symantec declared antivirus dead, and Brian Krebs writes a good response.

He’s right: antivirus won’t protect you from the ever-increasing percentage of malware that’s specifically designed to bypass antivirus software, but it will protect you from all the random unsophisticated attacks out there: the “background radiation” of the Internet.

Posted on May 15, 2014 at 1:18 PM

Internet Subversion

In addition to turning the Internet into a worldwide surveillance platform, the NSA has surreptitiously weakened the products, protocols, and standards we all use to protect ourselves. By doing so, it has destroyed the trust that underlies the Internet. We need that trust back.

Trust is inherently social. It is personal, relative, situational, and fluid. It is not uniquely human, but it is the underpinning of everything we have accomplished as a species. We trust other people, but we also trust organizations and processes. The psychology is complex, but when we trust a technology, we basically believe that it will work as intended.

This is how we technologists trusted the security of the Internet. We didn’t have any illusions that the Internet was secure, or that governments, criminals, hackers, and others couldn’t break into systems and networks if they were sufficiently skilled and motivated. We didn’t trust that the programmers were perfect, that the code was bug-free, or even that our crypto math was unbreakable. We knew that Internet security was an arms race, and the attackers had most of the advantages.

What we trusted was that the technologies would stand or fall on their own merits.

We now know that trust was misplaced. Through cooperation, bribery, threats, and compulsion, the NSA—and the United Kingdom’s GCHQ—forced companies to weaken the security of their products and services, then lie about it to their customers.

We know of a few examples of this weakening. The NSA convinced Microsoft to make some unknown changes to Skype in order to make eavesdropping on conversations easier. The NSA also inserted a degraded random number generator into a common standard, then worked to get that generator used more widely.

I have heard engineers working for the NSA, FBI, and other government agencies delicately talk around the topic of inserting a “backdoor” into security products to allow for government access. One of them told me, “It’s like going on a date. Sex is never explicitly mentioned, but you know it’s on the table.” The NSA’s SIGINT Enabling Project has a $250 million annual budget; presumably it has more to show for itself than the fragments that have become public. Reed Hundt calls for the government to support a secure Internet, but given its history of installing backdoors, why would we trust claims that it has turned the page?

We also have to assume that other countries have been doing the same things. We have long believed that networking products from the Chinese company Huawei have been backdoored by the Chinese government. Do we trust hardware and software from Russia? France? Israel? Anywhere?

This mistrust is poison. Because we don’t know, we can’t trust any of them. Internet governance was largely left to the benign dictatorship of the United States because everyone more or less believed that we were working for the security of the Internet instead of against it. But now that system is in turmoil. Foreign companies are fleeing US suppliers because they don’t trust American firms’ security claims. Far worse, governments are using these revelations to push for a more isolationist Internet, giving them more control over what their citizens see and say.

All so we could eavesdrop better.

There is a term in the NSA: “nobus,” short for “nobody but us.” The NSA believes it can subvert security in such a way that only it can take advantage of that subversion. But that is hubris. There is no way to determine if or when someone else will discover a vulnerability. These subverted systems become part of our infrastructure; the harms to everyone, once the flaws are discovered, far outweigh the benefits to the NSA while they are secret.

We can’t both weaken the enemy’s networks and protect our own. Because we all use the same products, technologies, protocols, and standards, we either allow everyone to spy on everyone, or prevent anyone from spying on anyone. By weakening security, we are weakening it against all attackers. By inserting vulnerabilities, we are making everyone vulnerable. The same vulnerabilities used by intelligence agencies to spy on each other are used by criminals to steal your passwords. It is surveillance versus security, and we all rise and fall together.

Security needs to win. The Internet is too important to the world—and trust is too important to the Internet—to squander it like this. We’ll never get every power in the world to agree not to subvert the parts of the Internet they control, but we can stop subverting the parts we control. Most of the high-tech companies that make the Internet work are US companies, so our influence is disproportionate. And once we stop subverting, we can credibly devote our resources to detecting and preventing subversion by others.

This essay previously appeared in the Boston Review.

Posted on May 12, 2014 at 6:26 AM

More on Heartbleed

This is an update to my earlier post.

Cloudflare is reporting that it’s very difficult, if not practically impossible, to steal SSL private keys with this attack.

Here’s the good news: after extensive testing on our software stack, we have been unable to successfully use Heartbleed on a vulnerable server to retrieve any private key data. Note that is not the same as saying it is impossible to use Heartbleed to get private keys. We do not yet feel comfortable saying that. However, if it is possible, it is at a minimum very hard. And, we have reason to believe based on the data structures used by OpenSSL and the modified version of NGINX that we use, that it may in fact be impossible.

The reasoning is complicated, and I suggest people read the post. What I have heard from people who have actually run the attack against various servers is that what you get varies hugely: indecipherable binary, useless log messages, even people’s passwords.

This xkcd comic is a very good explanation of how the vulnerability works. And this post by Dan Kaminsky is worth reading.

I have a lot to say about the human aspects of this: auditing of open-source code, how the responsible disclosure process worked in this case, the ease with which anyone could weaponize this with just a few lines of script, how we explain vulnerabilities to the public—and the role that impressive logo played in the process—and our certificate issuance and revocation process. This may be a massive computer vulnerability, but all of the interesting aspects of it are human.

EDITED TO ADD (4/12): We have one example of someone successfully retrieving an SSL private key using Heartbleed. So it’s possible, but it seems to be much harder than we originally thought.

And we have a story where two anonymous sources have claimed that the NSA has been exploiting Heartbleed for two years.

EDITED TO ADD (4/12): Hijacking user sessions with Heartbleed. And a nice essay on the marketing and communications around the vulnerability.

EDITED TO ADD (4/13): The US intelligence community has denied prior knowledge of Heartbleed. The statement is word-game free:

NSA was not aware of the recently identified vulnerability in OpenSSL, the so-called Heartbleed vulnerability, until it was made public in a private sector cybersecurity report. Reports that say otherwise are wrong.

The statement also says:

Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities.

Since when is “law enforcement need” included in that decision process? This national security exception to law and process is extending much too far into normal police work.

Another point. According to the original Bloomberg article:

http://www.bloomberg.com/news/2014-04-11/nsa-said-to-have-used-heartbleed-bug-exposing-consumers.html

Certainly a plausible statement. But if those millions didn’t discover something obvious like Heartbleed, shouldn’t we investigate them for incompetence?

Finally—not related to the NSA—this is good information on which sites are still vulnerable, including historical data.

Posted on April 11, 2014 at 1:10 PM

Heartbleed

Heartbleed is a catastrophic bug in OpenSSL:

“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.”

Basically, an attacker can grab 64K of memory from a server. The attack leaves no trace, and can be done multiple times to grab a different random 64K of memory. This means that anything in memory—SSL private keys, user keys, anything—is vulnerable. And you have to assume that it is all compromised. All of it.
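The flaw itself is a single missing bounds check: the server trusts the length field in the heartbeat request and echoes back that many bytes, even when the actual payload is far shorter. Here is a deliberately simplified sketch in Python; the function names and the flat memory model are inventions for the example, not OpenSSL’s actual code.

```python
# Toy model of the Heartbleed flaw: the server echoes back as many bytes
# as the request *claims* to contain, reading past the real payload into
# whatever happens to sit in adjacent memory.

SERVER_MEMORY = bytearray(
    b"User Meg requests pages about snakes but not too long. "
    b"Administrator Eve sets the master key to 14835038534. "
    b"User Lucas requests the 'missed connections' page. "
)

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # BUG: claimed_len is trusted and never compared to len(payload).
    buf = payload + SERVER_MEMORY        # payload sits next to other data
    return bytes(buf[:claimed_len])      # may return far more than payload

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # The fix: silently discard requests whose claimed length is a lie.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

leak = heartbeat_vulnerable(b"bird", 100)   # send 4 bytes, ask for 100
print(leak)   # b"bird" followed by 96 bytes of unrelated server memory
```

In the real attack the claimed length can be up to 64K, and each request returns a different slice of whatever the process had in memory at that moment, which is why repeating it eventually turns up keys and passwords.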

“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.

Half a million sites are vulnerable, including my own. Test your vulnerability here.

The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.

At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.

This article is worth reading. Hacker News thread is filled with commentary. XKCD cartoon.

EDITED TO ADD (4/9): Has anyone looked at all the low-margin non-upgradable embedded systems that use OpenSSL? An upgrade path that involves the trash, a visit to Best Buy, and a credit card isn’t going to be fun for anyone.

EDITED TO ADD (4/10): I’m hearing that the CAs are completely clogged, trying to reissue so many new certificates. And I’m not sure we have anything close to the infrastructure necessary to revoke half a million certificates.

Possible evidence that Heartbleed was exploited last year.

EDITED TO ADD (4/10): I wonder if there is going to be some backlash from the mainstream press and the public. If nothing really bad happens—if this turns out to be something like the Y2K bug—then we are going to face criticisms of crying wolf.

EDITED TO ADD (4/11): Brian Krebs and Ed Felten on how to protect yourself from Heartbleed.

Posted on April 9, 2014 at 5:03 AM

Nicholas Weaver Explains how QUANTUM Works

An excellent essay. For the non-technical, his conclusion is the most important:

Everything we’ve seen about QUANTUM and other internet activity can be replicated with a surprisingly moderate budget, using existing tools with just a little modification.

The biggest limitation on QUANTUM is location: The attacker must be able to see a request which identifies the target. Since the same techniques can work on a Wi-Fi network, a $50 Raspberry Pi, located in a Foggy Bottom Starbucks, can provide any country, big and small, with a little window of QUANTUM exploitation. A foreign government can perform the QUANTUM attack NSA-style wherever your traffic passes through their country.

And that’s the bottom line with the NSA’s QUANTUM program. The NSA does not have a monopoly on the technology, and their widespread use acts as implicit permission to others, both nation-state and criminal.

Moreover, until we fix the underlying Internet architecture that makes QUANTUM attacks possible, we are vulnerable to all of those attackers.
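Weaver’s point about location can be made concrete with a toy model: the client accepts whichever response arrives first, so an attacker who can see the request and sits closer to the client than the real server wins the race. This sketch is purely illustrative; real QUANTUM-style injection forges TCP packets on the wire, which is not modeled here.

```python
# Toy model of the QUANTUM race condition: the client has no way to tell
# a forged response from a real one, so the earliest arrival wins.

def first_response(responses):
    # Whichever response reaches the client first is accepted.
    return min(responses, key=lambda r: r["arrival_ms"])

legit = {"from": "real server", "arrival_ms": 80, "body": "real page"}
forged = {"from": "on-path attacker", "arrival_ms": 15,
          "body": "exploit redirect"}

winner = first_response([legit, forged])
print(winner["from"])   # prints "on-path attacker": it answered first
```

This is why the attack scales down to a Raspberry Pi in a coffee shop: being on the local network path guarantees the attacker’s forged answer beats the distant legitimate server’s every time.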

Posted on March 14, 2014 at 2:01 PM

Tor Appliance

Safeplug is an easy-to-use Tor appliance. I like that it can also act as a Tor exit node.

EDITED TO ADD: I know nothing about this appliance, nor do I endorse it. In fact, I would like it to be independently audited before we start trusting it. But it’s a fascinating proof-of-concept of encapsulating security so that normal Internet users can use it.

Posted on November 27, 2013 at 6:28 AM
