Security Vulnerabilities of Legacy Code

An interesting research paper documents a “honeymoon effect” when it comes to software and vulnerabilities: attackers are more likely to find vulnerabilities in older and more familiar code. The paper is a few years old, but I hadn’t seen it until now. It’s by Sandy Clark, Stefan Frei, Matt Blaze, and Jonathan Smith: “Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities,” Annual Computer Security Applications Conference 2010.

Abstract: Work on security vulnerabilities in software has primarily focused on three points in the software life-cycle: (1) finding and removing software defects, (2) patching or hardening software after vulnerabilities have been discovered, and (3) measuring the rate of vulnerability exploitation. This paper examines an earlier period in the software vulnerability life-cycle, starting from the release date of a version through to the disclosure of the fourth vulnerability, with a particular focus on the time from release until the very first disclosed vulnerability.

Analysis of software vulnerability data, including up to a decade of data for several versions of the most popular operating systems, server applications and user applications (both open and closed source), shows that properties extrinsic to the software play a much greater role in the rate of vulnerability discovery than do intrinsic properties such as software quality. This leads us to the observation that (at least in the first phase of a product’s existence), software vulnerabilities have different properties from software defects.

We show that the length of the period after the release of a software product (or version) and before the discovery of the first vulnerability (the ‘Honeymoon’ period) is primarily a function of familiarity with the system. In addition, we demonstrate that legacy code resulting from code re-use is a major contributor to both the rate of vulnerability discovery and the numbers of vulnerabilities found; this has significant implications for software engineering principles and practice.

Posted on December 17, 2013 at 7:10 AM

Attacking Online Poker Players

This story is about how at least two professional online poker players had their hotel rooms broken into and their computers infected with malware.

I agree with the conclusion:

So, what’s the moral of the story? If you have a laptop that is used to move large amounts of money, take good care of it. Lock the keyboard when you step away. Put it in a safe when you’re not around it, and encrypt the disk to prevent off-line access. Don’t surf the web with it (use another laptop/device for that, they’re relatively cheap). This advice is true whether you’re a poker pro using a laptop for gaming or a business controller in a large company using the computer for wiring a large amount of funds.

Posted on December 16, 2013 at 6:09 AM

World War II Anecdote about Trust and Security

This is an interesting story from World War II about trust:

Jones notes that the Germans doubted their system because they knew the British could radio false orders to the German bombers with no trouble. As Jones recalls, “In fact we did not do this, but it seemed such an easy countermeasure that the German crews thought that we might, and they therefore began to be suspicious about the instructions that they received.”

The implications of this are perhaps obvious but worth stating nonetheless: a lack of trust can exist even if an adversary fails to exploit a weakness in the system. More importantly, this doubt can become a shadow adversary. According to Jones, “…it was not long before the crews found substance to their theory [that is, their doubt].” In support of this, he offers the anecdote of a German pilot who, returning to base after wandering off course, grumbled that “the British had given him a false order.”

I think about this all the time with respect to our IT systems and the NSA. Even though we don’t know which companies the NSA has compromised—or by what means—knowing that they could have compromised any of them is enough to make us mistrustful of all of them. This is going to make it hard for large companies like Google and Microsoft to get back the trust they lost. Even if they succeed in limiting government surveillance. Even if they succeed in improving their own internal security. The best they’ll be able to say is: “We have secured ourselves from the NSA, except for the parts that we either don’t know about or can’t talk about.”

Posted on December 13, 2013 at 11:20 AM

NSA Tracks People Using Google Cookies

The Washington Post has a detailed article on how the NSA uses cookie data to track individuals. The EFF also has a good post on this.

I have been writing and saying that surveillance is the business model of the Internet, and that government surveillance largely piggybacks on corporate capabilities. This is an example of that. The NSA doesn’t need the cooperation of any Internet company to use their cookies for surveillance purposes, but it does need their capabilities. And because the Internet is largely unencrypted, it can use those capabilities for its own purposes.

Reforming the NSA is not just about government surveillance. It has to address the public-private surveillance partnership. Even as a group of large Internet companies have come together to demand government surveillance reform, they are ignoring their own surveillance activities. But you can’t reform one without the other. The Free Software Foundation has written about this as well.

Little has been written about how QUANTUM interacts with cookie surveillance. QUANTUM is the NSA’s program for real-time responses to passive Internet monitoring. It’s what allows them to do packet injection attacks. The NSA’s Tor Stinks presentation talks about a subprogram called QUANTUMCOOKIE: “forces clients to divulge stored cookies.” My guess is that the NSA uses frame injection to surreptitiously force anonymous users to visit common sites like Google and Facebook and reveal their identifying cookies. Combined with the rest of their cookie surveillance activities, this can de-anonymize Tor users if they use Tor from the same browser they use for other Internet activities.
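To make the mechanism concrete, here is a minimal sketch of why a forced visit leaks an identifying cookie. The class, the domain, and the cookie value are all hypothetical; the point is only that a browser attaches its stored cookie to *any* request to the matching domain, and over unencrypted HTTP that header is visible to a passive eavesdropper.

```python
class Browser:
    """Toy browser: one cookie jar keyed by domain; the stored cookie
    is attached to every request sent to that domain."""

    def __init__(self):
        self.cookie_jar = {}  # domain -> cookie string

    def set_cookie(self, domain, cookie):
        self.cookie_jar[domain] = cookie

    def request(self, domain, path):
        # Over plain HTTP this entire header block crosses the wire
        # in the clear, where a passive observer can read it.
        headers = {"Host": domain, "Path": path}
        if domain in self.cookie_jar:
            headers["Cookie"] = self.cookie_jar[domain]
        return headers


# A user browses normally and picks up a long-lived tracking cookie.
browser = Browser()
browser.set_cookie("tracker.example", "PREF=ID=deadbeef1234")

# Later, an injected frame forces a fetch from the same domain.  The
# browser helpfully attaches the stored cookie, linking this otherwise
# anonymous session to the earlier identity.
leaked = browser.request("tracker.example", "/injected-frame")
print(leaked.get("Cookie"))  # PREF=ID=deadbeef1234
```

This is why using the same browser for Tor and for ordinary browsing is dangerous: the cookie jar doesn’t know which sessions are supposed to be anonymous.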

Posted on December 12, 2013 at 6:21 AM

NSA Spying on Online Gaming Worlds

The NSA is spying on chats in World of Warcraft and other games. There’s lots of information—and a good source document. While it’s fun to joke about the NSA and elves and dwarves from World of Warcraft, this kind of surveillance makes perfect sense. If, as Dan Geer has pointed out, your assigned mission is to ensure that something never happens, the only way you can be sure that something never happens is to know everything that does happen. Which puts you in the impossible position of having to eavesdrop on every possible communications channel, including online gaming worlds.

One bit (on page 2) jumped out at me:

The NMDC engaged SNORT, an open source packet-sniffing software, which runs on all FORNSAT survey packet data, to filter out WoW packets. GCHQ provided several WoW protocol parsing scripts to process the traffic and produce Warcraft metadata from all NMDC FORNSAT survey.

NMDC is the New Mission Development Center, and FORNSAT stands for Foreign Satellite Collection. MHS, which also appears in the source document, stands for—I think—Menwith Hill Station, a satellite eavesdropping location in the UK.

Since the Snowden documents first started being released, I have been saying that while the US has a bigger intelligence budget than the rest of the world’s countries combined, agencies like the NSA are not made of magic. They’re constrained by the laws of mathematics, physics, and economics—just like everyone else. Here’s an example. The NSA is using Snort—an open source product that anyone can download and use—because that’s a more cost-effective tool than anything they can develop in-house.
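The filtering step the document describes is conceptually simple: pull the packets of interest out of a bulk capture, the way a Snort rule like `alert tcp any any -> any 3724` would. Here is a toy sketch of that idea. The packet records are invented, and port 3724 (World of Warcraft’s classic realm port) is my assumption, not something taken from the source document.

```python
from typing import NamedTuple


class Packet(NamedTuple):
    src: str
    dst: str
    dst_port: int
    payload: bytes


WOW_PORT = 3724  # port historically used by WoW realm servers (assumed)


def filter_wow(packets):
    """Keep only packets addressed to the game port; everything else
    in the bulk survey stream is discarded before protocol parsing."""
    return [p for p in packets if p.dst_port == WOW_PORT]


capture = [
    Packet("10.0.0.1", "198.51.100.7", 3724, b"game traffic"),
    Packet("10.0.0.1", "198.51.100.8", 443, b"https traffic"),
    Packet("10.0.0.2", "198.51.100.7", 3724, b"more game traffic"),
]

wow_only = filter_wow(capture)
print(len(wow_only))  # 2
```

In the real pipeline, the protocol-parsing scripts GCHQ provided would then run only over the filtered packets to extract metadata, which is exactly the economy the NSA gets from reusing an off-the-shelf tool like Snort for the first pass.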

Posted on December 10, 2013 at 9:08 AM