Entries Tagged "patching"


The Human Side of Heartbleed

The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.

It was a software insecurity, but the problem was entirely human.

Software has vulnerabilities because it’s written by people, and people make mistakes—thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.

In retrospect, the mistake should have been obvious, and it’s amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.

The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google’s security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.

When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it’s announced.

The researchers alerted some of the larger companies quietly so that they could fix their systems before the public announcement. (Who to tell early is another very human problem: If you tell too few, you’re not really helping, but if you tell too many, the secret could get out.) Then Codenomicon announced the vulnerability.

One of the biggest problems we face in the security community is how to communicate these sorts of vulnerabilities. The story is technical, and people often don’t know how to react to the risk. In this case, the Codenomicon researchers did well. They created a public website explaining (in simple terms) the vulnerability and how to fix it, and they created a logo—a red bleeding heart—that every news outlet used for coverage of the story.

The first week of coverage varied widely, as some people panicked and others downplayed the threat. This wasn’t surprising: There was a lot of uncertainty about the risk, and it wasn’t immediately obvious how disastrous the vulnerability actually was.

The major Internet companies were quick to patch vulnerable systems. Individuals were less likely to update their passwords, but by and large, that was OK.

True to form, hackers started exploiting the vulnerability within minutes of the announcement. We assume that governments also exploited the vulnerability while they could. I’m sure the U.S. National Security Agency had advance warning.

By now, it’s largely over. There are still lots of unpatched systems out there. (Many of them are embedded hardware systems that can’t be patched.) The risk of attack is still there, but minimal. In the end, the actual damage was also minimal, although the expense of restoring security was great.

The question that remains is this: What should we expect in the future—are there more Heartbleeds out there?

Yes. Yes there are. The software we use contains thousands of mistakes—many of them security vulnerabilities. Lots of people are looking for these vulnerabilities: Researchers are looking for them. Criminals and hackers are looking for them. National intelligence agencies in the United States, the United Kingdom, China, Russia, and elsewhere are looking for them. The software vendors themselves are looking for them.

What happens when a vulnerability is found depends on who finds it. If the vendor finds it, it quietly fixes it. If a researcher finds it, he or she alerts the vendor and then reports it to the public. If a national intelligence agency finds the vulnerability, it either quietly uses it to spy on others or—if we’re lucky—alerts the vendor. If criminals and hackers find it, they use it until a security company notices and alerts the vendor, and then it gets fixed—usually within a month.

Heartbleed was unique because there was no single fix. The software had to be updated, and then websites had to regenerate their encryption keys and get new public-key certificates. After that, people had to update their passwords. This multi-stage process had to take place publicly, which is why the announcement happened the way it did.

Yes, it’ll happen again. But most of the time, it’ll be easier to deal with than this.

This essay previously appeared on The Mark News.

Posted on June 4, 2014 at 6:23 AM

More on Heartbleed

This is an update to my earlier post.

Cloudflare is reporting that it’s very difficult, if not practically impossible, to steal SSL private keys with this attack.

Here’s the good news: after extensive testing on our software stack, we have been unable to successfully use Heartbleed on a vulnerable server to retrieve any private key data. Note that is not the same as saying it is impossible to use Heartbleed to get private keys. We do not yet feel comfortable saying that. However, if it is possible, it is at a minimum very hard. And, we have reason to believe based on the data structures used by OpenSSL and the modified version of NGINX that we use, that it may in fact be impossible.

The reasoning is complicated, and I suggest people read the post. What I have heard from people who actually ran the attack against various servers is that what you get is a huge variety of cruft, ranging from indecipherable binary to useless log messages to people’s passwords. The variability is huge.

This xkcd comic is a very good explanation of how the vulnerability works. And this post by Dan Kaminsky is worth reading.

I have a lot to say about the human aspects of this: auditing of open-source code, how the responsible disclosure process worked in this case, the ease with which anyone could weaponize this with just a few lines of script, how we explain vulnerabilities to the public—and the role that impressive logo played in the process—and our certificate issuance and revocation process. This may be a massive computer vulnerability, but all of the interesting aspects of it are human.

EDITED TO ADD (4/12): We have one example of someone successfully retrieving an SSL private key using Heartbleed. So it’s possible, but it seems to be much harder than we originally thought.

And we have a story where two anonymous sources have claimed that the NSA has been exploiting Heartbleed for two years.

EDITED TO ADD (4/12): Hijacking user sessions with Heartbleed. And a nice essay on the marketing and communications around the vulnerability.

EDITED TO ADD (4/13): The US intelligence community has denied prior knowledge of Heartbleed. The statement is word-game free:

NSA was not aware of the recently identified vulnerability in OpenSSL, the so-called Heartbleed vulnerability, until it was made public in a private sector cybersecurity report. Reports that say otherwise are wrong.

The statement also says:

Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities.

Since when is “law enforcement need” included in that decision process? This national security exception to law and process is extending much too far into normal police work.

Another point. According to the original Bloomberg article:

http://www.bloomberg.com/news/2014-04-11/nsa-said-to-have-used-heartbleed-bug-exposing-consumers.html

Certainly a plausible statement. But if those millions didn’t discover something obvious like Heartbleed, shouldn’t we investigate them for incompetence?

Finally—not related to the NSA—this is good information on which sites are still vulnerable, including historical data.

Posted on April 11, 2014 at 1:10 PM

Heartbleed

Heartbleed is a catastrophic bug in OpenSSL:

“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.”

Basically, an attacker can grab 64K of memory from a server. The attack leaves no trace, and can be done multiple times to grab a different random 64K of memory. This means that anything in memory—SSL private keys, user keys, anything—is vulnerable. And you have to assume that it is all compromised. All of it.
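
The underlying bug is easy to sketch. The following C fragment is a simplified illustration of the pattern—not the actual OpenSSL source; the function and variable names are mine—showing how the heartbeat handler sizes its reply from the length field the attacker claims rather than the length that actually arrived:

    /* Simplified illustration of the Heartbleed pattern.
     * Hypothetical names; not the actual OpenSSL source. */
    #include <stdlib.h>
    #include <string.h>

    unsigned char *handle_heartbeat(const unsigned char *record, size_t record_len)
    {
        /* The first two bytes of the record claim the payload length. */
        unsigned int payload_len = (record[0] << 8) | record[1];
        const unsigned char *payload = record + 2;

        (void)record_len;  /* BUG: the length that actually arrived is never consulted */

        unsigned char *response = malloc(payload_len);
        if (response == NULL)
            return NULL;

        /* A request claiming 64K but carrying one byte copies ~64K of
         * adjacent heap memory -- keys, passwords, whatever happens to be
         * there -- into the reply. The fix is a bounds check before the copy:
         *     if ((size_t)payload_len + 2 > record_len) return NULL;  */
        memcpy(response, payload, payload_len);
        return response;
    }

The patched OpenSSL adds exactly that kind of bounds check, silently discarding heartbeat requests whose claimed length exceeds the record.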

“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.

Half a million sites are vulnerable, including my own. Test your vulnerability here.

The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.

At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.

This article is worth reading. The Hacker News thread is filled with commentary. XKCD cartoon.

EDITED TO ADD (4/9): Has anyone looked at all the low-margin non-upgradable embedded systems that use OpenSSL? An upgrade path that involves the trash, a visit to Best Buy, and a credit card isn’t going to be fun for anyone.

EDITED TO ADD (4/10): I’m hearing that the CAs are completely clogged, trying to reissue so many new certificates. And I’m not sure we have anything close to the infrastructure necessary to revoke half a million certificates.

Possible evidence that Heartbleed was exploited last year.

EDITED TO ADD (4/10): I wonder if there is going to be some backlash from the mainstream press and the public. If nothing really bad happens—if this turns out to be something like the Y2K bug—then we are going to face criticisms of crying wolf.

EDITED TO ADD (4/11): Brian Krebs and Ed Felten on how to protect yourself from Heartbleed.

Posted on April 9, 2014 at 5:03 AM

Was the iOS SSL Flaw Deliberate?

Last October, I speculated on the best ways to go about designing and implementing a software backdoor. I suggested three characteristics of a good backdoor: low chance of discovery, high deniability if discovered, and minimal conspiracy to implement.

The critical iOS vulnerability that Apple patched last week is an excellent example. Look at the code. What caused the vulnerability is a single line of code: a second “goto fail;” statement. Since that second statement isn’t attached to any conditional, it always executes, skipping the remaining signature verification and returning success.
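
Abridged from Apple’s published SecureTransport source (the function SSLVerifySignedServerKeyExchange; surrounding lines and arguments are condensed here), the flaw looks like this:

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;   /* the duplicated line: always taken, with err == 0 */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

    err = sslRawVerify(/* ...arguments condensed... */);  /* never reached */

    fail:
        SSLFreeBuffer(&signedHashes);
        SSLFreeBuffer(&hashCtx);
        return err;  /* still 0 on the bug path: verification "succeeds" */

Because indentation has no semantic force in C, the second goto fail; binds to nothing: it runs unconditionally, and the function exits reporting the zero (success) status left over from the previous check, without ever verifying the signature.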

The flaw is subtle, and hard to spot while scanning the code. It’s easy to imagine how this could have happened by error. And it would have been trivially easy for one person to add the vulnerability.

Was this done on purpose? I have no idea. But if I wanted to do something like this on purpose, this is exactly how I would do it.

EDITED TO ADD (2/27): If the Apple auditing system is any good, they would be able to trace this errant goto line not just to the source-code check-in details, but to the specific login that made the change. And they would quickly know whether this was just an error, or a deliberate change by a bad actor. Does anyone know what’s going on inside Apple?

EDITED TO ADD (2/27): Steve Bellovin has a pair of posts where he concludes that if this bug is enemy action, it’s fairly clumsy and unlikely to be the work of professionals.

Posted on February 27, 2014 at 6:03 AM

DRM and the Law

Cory Doctorow gives a good history of the intersection of Digital Rights Management (DRM) software and the law, describes how DRM software is antithetical to end-user security, and speculates how we might convince the law to recognize that.

Every security system relies on reports of newly discovered vulnerabilities as a means of continuously improving. The forces that work against security systems—scripts that automate attacks, theoretical advances, easy-to-follow guides that can be readily googled—are always improving so any system that does not benefit from its own continuous improvement becomes less effective over time. That is, the pool of adversaries capable of defeating the system goes up over time, and the energy they must expend to do so goes down over time, unless vulnerabilities are continuously reported and repaired.

Here is where DRM and your security work at cross-purposes. The DMCA’s injunction against publishing weaknesses in DRM means that its vulnerabilities remain unpatched for longer than in comparable systems that are not covered by the DMCA. That means that any system with DRM will on average be more dangerous for its users than one without DRM.

Posted on February 12, 2014 at 7:15 AM

Adware Vendors Buy and Abuse Chrome Extensions

This is not a good development:

To make matters worse, ownership of a Chrome extension can be transferred to another party, and users are never informed when an ownership change happens. Malware and adware vendors have caught wind of this and have started showing up at the doors of extension authors, looking to buy their extensions. Once the deal is done and the ownership of the extension is transferred, the new owners can issue an ad-filled update over Chrome’s update service, which sends the adware out to every user of that extension.

[…]

When malicious apps don’t follow Google’s disclosure policy, diagnosing something like this is extremely difficult. When Tweet This Page started spewing ads and malware into my browser, the only initial sign was that ads on the Internet had suddenly become much more intrusive, and many auto-played sound. The extension only started injecting ads a few days after it was installed in an attempt to make it more difficult to detect. After a while, Google search became useless, because every link would redirect to some other webpage. My initial thought was to take an inventory of every program I had installed recently—I never suspected an update would bring in malware. I ran a ton of malware/virus scanners, and they all found nothing. I was only clued into the fact that Chrome was the culprit because the same thing started happening on my Chromebook—if I didn’t notice that, the next step would have probably been a full wipe of my computer.

Posted on January 21, 2014 at 6:33 AM

Security Risks of Embedded Systems

We’re at a crisis point now with regard to the security of embedded systems, where computing is embedded into the hardware itself—as with the Internet of Things. These embedded computers are riddled with vulnerabilities, and there’s no good way to patch them.

It’s not unlike what happened in the mid-1990s, when the insecurity of personal computers was reaching crisis levels. Software and operating systems were riddled with security vulnerabilities, and there was no good way to patch them. Companies were trying to keep vulnerabilities secret, and not releasing security updates quickly. And when updates were released, it was hard—if not impossible—to get users to install them. This has changed over the past twenty years, due to a combination of full disclosure—publishing vulnerabilities to force companies to issue patches quicker—and automatic updates: automating the process of installing updates on users’ computers. The results aren’t perfect, but they’re much better than ever before.

But this time the problem is much worse, because the world is different: All of these devices are connected to the Internet. The computers in our routers and modems are much more powerful than the PCs of the mid-1990s, and the Internet of Things will put computers into all sorts of consumer devices. The industries producing these devices are even less capable of fixing the problem than the PC and software industries were.

If we don’t solve this soon, we’re in for a security disaster as hackers figure out that it’s easier to hack routers than computers. At a recent Def Con, a researcher looked at thirty home routers and broke into half of them—including some of the most popular and common brands.

To understand the problem, you need to understand the embedded systems market.

Typically, these systems are powered by specialized computer chips made by companies such as Broadcom, Qualcomm, and Marvell. These chips are cheap, and the profit margins slim. Aside from price, the way the manufacturers differentiate themselves from each other is by features and bandwidth. They typically put a version of the Linux operating system onto the chips, as well as a bunch of other open-source and proprietary components and drivers. They do as little engineering as possible before shipping, and there’s little incentive to update their “board support package” until absolutely necessary.

The system manufacturers—usually original device manufacturers (ODMs) who often don’t get their brand name on the finished product—choose a chip based on price and features, and then build a router, server, or whatever. They don’t do a lot of engineering, either. The brand-name company on the box may add a user interface and maybe some new features, make sure everything works, and they’re done, too.

The problem with this process is that no one entity has any incentive, expertise, or even ability to patch the software once it’s shipped. The chip manufacturer is busy shipping the next version of the chip, and the ODM is busy upgrading its product to work with this next chip. Maintaining the older chips and products just isn’t a priority.

And the software is old, even when the device is new. For example, one survey of common home routers found that the software components were four to five years older than the device. The minimum age of the Linux operating system was four years. The minimum age of the Samba file system software: six years. They may have had all the security patches applied, but most likely not. No one has that job. Some of the components are so old that they’re no longer being patched. This patching is especially important because security vulnerabilities are found “more easily” as systems age.

To make matters worse, it’s often impossible to patch the software or upgrade the components to the latest version. Often, the complete source code isn’t available. Yes, they’ll have the source code to Linux and any other open-source components. But many of the device drivers and other components are just “binary blobs”—no source code at all. That’s the most pernicious part of the problem: No one can possibly patch code that’s just binary.

Even when a patch is possible, it’s rarely applied. Users usually have to manually download and install relevant patches. But since users never get alerted about security updates, and don’t have the expertise to manually administer these devices, it doesn’t happen. Sometimes the ISPs have the ability to remotely patch routers and modems, but this is also rare.

The result is hundreds of millions of devices that have been sitting on the Internet, unpatched and insecure, for the last five to ten years.

Hackers are starting to notice. The DNSChanger malware attacks home routers as well as computers. In Brazil, 4.5 million DSL routers were compromised for purposes of financial fraud. Last month, Symantec reported on a Linux worm that targets routers, cameras, and other embedded devices.

This is only the beginning. All it will take is some easy-to-use hacker tools for the script kiddies to get into the game.

And the Internet of Things will only make this problem worse, as the Internet—as well as our homes and bodies—becomes flooded with new embedded devices that will be equally poorly maintained and unpatchable. But routers and modems pose a particular problem, because they (1) sit between users and the Internet, so turning them off is increasingly not an option; (2) are more powerful and more general in function than other embedded devices; and (3) are the one 24/7 computing device in the house, making them a natural place for lots of new features.

We were here before with personal computers, and we fixed the problem. But disclosing vulnerabilities in an effort to force vendors to fix the problem won’t work the same way with embedded systems. The last time, the problem was computers, mostly not connected to the Internet, and slow-spreading viruses. The scale is different today: more devices, more vulnerabilities, viruses spreading faster on the Internet, and less technical expertise on both the vendor and the user sides. Plus vulnerabilities that are impossible to patch.

Combine full function with lack of updates, add in a pernicious market dynamic that has inhibited updates and prevented anyone else from updating, and we have an incipient disaster in front of us. It’s just a matter of when.

We simply have to fix this. We have to put pressure on embedded system vendors to design their systems better. We need open-source driver software—no more binary blobs!—so third-party vendors and ISPs can provide security tools and software updates for as long as the device is in use. We need automatic update mechanisms to ensure they get installed.

The economic incentives point to large ISPs as the driver for change. Whether they’re to blame or not, the ISPs are the ones who get the service calls for crashes. They often have to send users new hardware because it’s the only way to update a router or modem, and that can easily cost a year’s worth of profit from that customer. This problem is only going to get worse, and more expensive. Paying the cost up front for better embedded systems is much cheaper than paying the costs of the resultant security disasters.

This essay originally appeared on Wired.com.

Posted on January 9, 2014 at 6:33 AM

Security Vulnerabilities of Legacy Code

An interesting research paper documents a “honeymoon effect” when it comes to software and vulnerabilities: attackers are more likely to find vulnerabilities in older and more familiar code. It’s a few years old, but I haven’t seen it before now. The paper is by Sandy Clark, Stefan Frei, Matt Blaze, and Jonathan Smith: “Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities,” Annual Computer Security Applications Conference 2010.

Abstract: Work on security vulnerabilities in software has primarily focused on three points in the software life-cycle: (1) finding and removing software defects, (2) patching or hardening software after vulnerabilities have been discovered, and (3) measuring the rate of vulnerability exploitation. This paper examines an earlier period in the software vulnerability life-cycle, starting from the release date of a version through to the disclosure of the fourth vulnerability, with a particular focus on the time from release until the very first disclosed vulnerability.

Analysis of software vulnerability data, including up to a decade of data for several versions of the most popular operating systems, server applications and user applications (both open and closed source), shows that properties extrinsic to the software play a much greater role in the rate of vulnerability discovery than do intrinsic properties such as software quality. This leads us to the observation that (at least in the first phase of a product’s existence), software vulnerabilities have different properties from software defects.

We show that the length of the period after the release of a software product (or version) and before the discovery of the first vulnerability (the ‘Honeymoon’ period) is primarily a function of familiarity with the system. In addition, we demonstrate that legacy code resulting from code re-use is a major contributor to both the rate of vulnerability discovery and the numbers of vulnerabilities found; this has significant implications for software engineering principles and practice.

Posted on December 17, 2013 at 7:10 AM

Why It’s Important to Publish the NSA Programs

The Guardian recently reported on how the NSA targets Tor users, along with details of how it uses centrally placed servers on the Internet to attack individual computers. This builds on a Brazilian news story from mid-September that, in part, shows that the NSA is impersonating Google servers to users; a German story on how the NSA is hacking into smartphones; and a Guardian story from early September on how the NSA is deliberately weakening common security algorithms, protocols, and products.

The common thread among these stories is that the NSA is subverting the Internet and turning it into a massive surveillance tool. The NSA’s actions are making us all less safe, because its eavesdropping mission is degrading its ability to protect the US.

Among IT security professionals, it has long been understood that the public disclosure of vulnerabilities is the only consistent way to improve security. That’s why researchers publish information about vulnerabilities in computer software and operating systems, cryptographic algorithms, and consumer products like implantable medical devices, cars, and CCTV cameras.

It wasn’t always like this. In the early years of computing, it was common for security researchers to quietly alert the product vendors about vulnerabilities, so they could fix them without the “bad guys” learning about them. The problem was that the vendors wouldn’t bother fixing them, or took years before getting around to it. Without public pressure, there was no rush.

This all changed when researchers started publishing. Now vendors are under intense public pressure to patch vulnerabilities as quickly as possible. The majority of security improvements in the hardware and software we all use today is a result of this process. This is why Microsoft’s Patch Tuesday process fixes so many vulnerabilities every month. This is why Apple’s iPhone is designed so securely. This is why so many products push out security updates so often. And this is why mass-market cryptography has continually improved. Without public disclosure, you’d be much less secure against cybercriminals, hacktivists, and state-sponsored cyberattackers.

The NSA’s actions turn that process on its head, which is why the security community is so incensed. The NSA not only develops and purchases vulnerabilities, but deliberately creates them through secret vendor agreements. These actions go against everything we know about improving security on the Internet.

It’s folly to believe that any NSA hacking technique will remain secret for very long. Yes, the NSA has a bigger research effort than any other institution, but there’s a lot of research being done—by other governments in secret, and in academic and hacker communities in the open. These same attacks are being used by other governments. And technology is fundamentally democratizing: today’s NSA secret techniques are tomorrow’s PhD theses and the following day’s cybercrime attack tools.

It’s equal folly to believe that the NSA’s secretly installed backdoors will remain secret. Given how inept the NSA was at protecting its own secrets, it’s extremely unlikely that Edward Snowden was the first sysadmin contractor to walk out the door with a boatload of them. And the previous leakers could have easily been working for a foreign government. But it wouldn’t take a rogue NSA employee; researchers or hackers could discover any of these backdoors on their own.

This isn’t hypothetical. We already know of government-mandated backdoors being used by criminals in Greece, Italy, and elsewhere. We know China is actively engaging in cyber-espionage worldwide. A recent Economist article called it “akin to a government secretly commanding lockmakers to make their products easier to pick—and to do so amid an epidemic of burglary.”

The NSA has two conflicting missions. Its eavesdropping mission has been getting all the headlines, but it also has a mission to protect US military and critical infrastructure communications from foreign attack. Historically, these two missions have not come into conflict. During the cold war, for example, we would defend our systems and attack Soviet systems.

But with the rise of mass-market computing and the Internet, the two missions have become interwoven. It becomes increasingly difficult to attack their systems and defend our systems, because everything is using the same systems: Microsoft Windows, Cisco routers, HTML, TCP/IP, iPhones, Intel chips, and so on. Finding a vulnerability—or creating one—and keeping it secret to attack the bad guys necessarily leaves the good guys more vulnerable.

Far better would be for the NSA to take those vulnerabilities back to the vendors to patch. Yes, it would make it harder to eavesdrop on the bad guys, but it would make everyone on the Internet safer. If we believe in protecting our critical infrastructure from foreign attack, if we believe in protecting Internet users from repressive regimes worldwide, and if we believe in defending businesses and ourselves from cybercrime, then doing otherwise is lunacy.

It is important that we make the NSA’s actions public in sufficient detail for the vulnerabilities to be fixed. It’s the only way to force change and improve security.

This essay previously appeared in the Guardian.

Posted on October 8, 2013 at 6:44 AM

