How Microsoft Develops Security Patches
I thought this was an interesting read.
In 2003, a group of security experts—myself included—published a paper saying that 1) software monocultures are dangerous and 2) Microsoft, being the largest creator of monocultures out there, is the most dangerous. Marcus Ranum responded with an essay that basically said we were full of it. Now, eight years later, Marcus and I thought it would be interesting to revisit the debate.
The basic problem with a monoculture is that it’s all vulnerable to the same attack. The Irish Potato Famine of 1845–9 is perhaps the most famous monoculture-related disaster. The Irish planted only one variety of potato, and the genetically identical potatoes succumbed to a rot caused by Phytophthora infestans. Compare that with the diversity of potatoes traditionally grown in South America, each one adapted to the particular soil and climate of its home, and you can see the security value in heterogeneity.
Similar risks exist in networked computer systems. If everyone is using the same operating system or the same applications software or the same networking protocol, and a security vulnerability is discovered in that OS or software or protocol, a single exploit can affect everyone. This is the problem of large-scale Internet worms: many have affected millions of computers on the Internet.
If our networking environment weren’t homogeneous, a single worm couldn’t do so much damage. We’d be more like South America’s potato crop than Ireland’s. Conclusion: monoculture is bad; embrace diversity or die along with everyone else.
This analysis makes sense as far as it goes, but suffers from three basic flaws. The first is the assumption that our IT monoculture is as simple as the potato’s. When the particularly virulent Storm worm hit, it only affected between 1 and 10 million of its billion-plus possible victims. Why? Because some computers were running updated antivirus software, or were within locked-down networks, or whatever. Two computers might be running the same OS or applications software, but they’ll be inside different networks with different firewalls and IDSs and router policies, they’ll have different antivirus programs and different patch levels and different configurations, and they’ll be in different parts of the Internet connected to different servers running different services. As Marcus pointed out back in 2003, they’ll be a little bit different themselves. That’s one of the reasons large-scale Internet worms don’t infect everyone—as well as the network’s ability to quickly develop and deploy patches, new antivirus signatures, new IPS signatures, and so on.
The second flaw in the monoculture analysis is that it downplays the cost of diversity. Sure, it would be great if a corporate IT department ran half Windows and half Linux, or half Apache and half Microsoft IIS, but doing so would require more expertise and cost more money. It wouldn’t cost twice the expertise and money—there is some overlap—but there are significant economies of scale that result from everyone using the same software and configuration. A single operating system locked down by experts is far more secure than two operating systems configured by sysadmins who aren’t so expert. Sometimes, as Mark Twain said: “Put all your eggs in one basket, and then guard that basket!”
The third flaw is that you can only get a limited amount of diversity by using two operating systems, or routers from three vendors. South American potato diversity comes from hundreds of different varieties. Genetic diversity comes from millions of different genomes. In monoculture terms, two is little better than one. Even worse, since a network’s security is primarily the minimum of the security of its components, a diverse network is less secure because it is vulnerable to attacks against any of its heterogeneous components.
Some monoculture is necessary in computer networks. As long as we have to talk to each other, we’re all going to have to use TCP/IP, HTML, PDF, and all sorts of other standards and protocols that guarantee interoperability. Yes, there will be different implementations of the same protocol—and this is a good thing—but that won’t protect you completely. You can’t be too different from everyone else on the Internet, because if you were, you couldn’t be on the Internet.
Species basically have two options for propagating their genes: the lobster strategy and the avian strategy. Lobsters lay 5,000 to 40,000 eggs at a time, and essentially ignore them. Only a minuscule percentage of the hatchlings live to be four weeks old, but that’s sufficient to ensure gene propagation; from every 50,000 eggs, an average of two lobsters is expected to survive to legal size. Conversely, birds produce only a few eggs at a time, then spend a lot of effort ensuring that most of the hatchlings survive. In ecology, this is known as r/K selection theory. In either case, each of those offspring varies slightly genetically, so if a new threat arises, some of them will be more likely to survive. But even so, extinctions happen regularly on our planet; neither strategy is foolproof.
Our IT infrastructure is a lot more like a bird than a lobster. Yes, monoculture is dangerous and diversity is important. But investing time and effort in ensuring our current infrastructure’s survival is even more important.
This essay was originally published in Information Security, and is the first half of a point/counterpoint with Marcus Ranum. You can read his response there as well.
EDITED TO ADD (12/13): Commentary.
The problem lies in the way that ASP.NET, Microsoft’s popular Web framework, implements the AES encryption algorithm to protect the integrity of the cookies these applications generate to store information during user sessions. A common mistake is to assume that encryption protects the cookies from tampering so that if any data in the cookie is modified, the cookie will not decrypt correctly. However, there are a lot of ways to make mistakes in crypto implementations, and when crypto breaks, it usually breaks badly.
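To see why encryption alone doesn’t guarantee integrity, here is a minimal sketch. It does not use ASP.NET’s actual scheme; it uses a toy XOR keystream cipher (built from SHA-256 in counter mode, an assumption for illustration) to show that a ciphertext can be modified so that it still decrypts “correctly” to attacker-chosen data:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode (illustration only).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

key = b"server-secret"          # hypothetical server key
cookie = encrypt(key, b"role=user ")

# The attacker flips ciphertext bits without knowing the key:
# XOR the ciphertext with (old_plaintext XOR new_plaintext).
delta = bytes(a ^ b for a, b in zip(b"role=user ", b"role=admin"))
tampered = bytes(c ^ d for c, d in zip(cookie, delta))

print(decrypt(key, tampered))  # b'role=admin' -- decrypts without error
```

The server has no way to notice the tampering, because nothing in the scheme authenticates the ciphertext; that is what a MAC (or an authenticated mode like AES-GCM) is for.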
“We knew ASP.NET was vulnerable to our attack several months ago, but we didn’t know how serious it is until a couple of weeks ago. It turns out that the vulnerability in ASP.NET is the most critical amongst other frameworks. In short, it totally destroys ASP.NET security,” said Thai Duong, who along with Juliano Rizzo, developed the attack against ASP.NET.
Here’s a demo of the attack, and the Microsoft Security Advisory. More articles. The theory behind this attack is here.
EDITED TO ADD (9/27): Three blog posts from Scott Guthrie.
EDITED TO ADD (9/28): There’s a patch.
EDITED TO ADD (10/13): Two more articles.
Who are these certificate authorities? At the beginning of Web history, there were only a handful of companies, like Verisign, Equifax, and Thawte, that made near-monopoly profits from being the only providers trusted by Internet Explorer or Netscape Navigator. But over time, browsers have trusted more and more organizations to verify Web sites. Safari and Firefox now trust more than 60 separate certificate authorities by default. Microsoft’s software trusts more than 100 private and government institutions.
Disturbingly, some of these trusted certificate authorities have decided to delegate their powers to yet more organizations, which aren’t tracked or audited by browser companies. By scouring the Net for certificates, security researchers have uncovered more than 600 groups who, through such delegation, are now also automatically trusted by most browsers, including the Department of Homeland Security, Google, Ford Motors, and a UAE mobile phone company called Etisalat.
In 2005, a company called CyberTrust—which has since been purchased by Verizon—gave Etisalat, the government-connected mobile company in the UAE, the right to verify that a site is valid. Here’s why this is trouble: Since browsers now automatically trust Etisalat to confirm a site’s identity, the company has the potential ability to fake a secure connection to any site Etisalat subscribers might visit using a man-in-the-middle scheme.
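The reason delegation is so dangerous is that browser trust is transitive: any certificate chaining up to a trusted root is accepted, regardless of which intermediate actually issued it. This is a toy model of that chain walk (the certificate contents and the specific hostname are invented for illustration; real validation follows X.509 path rules):

```python
# Toy trust model: a browser trusts a set of root CAs, and anything
# an intermediate signs inherits that trust transitively.

TRUSTED_ROOTS = {"CyberTrust", "Verisign"}

# issuer -> set of subjects it has signed (certificates, simplified)
SIGNED = {
    "CyberTrust": {"Etisalat"},          # the 2005 delegation
    "Etisalat": {"mail.example.com"},    # a cert Etisalat could mint
}

def chain_to_root(subject):
    """Walk issuers upward until a trusted root is found."""
    for issuer, subjects in SIGNED.items():
        if subject in subjects:
            if issuer in TRUSTED_ROOTS:
                return [subject, issuer]
            tail = chain_to_root(issuer)
            if tail:
                return [subject] + tail
    return None

# The browser accepts this chain even though the site's real CA never
# issued the leaf certificate:
print(chain_to_root("mail.example.com"))
# ['mail.example.com', 'Etisalat', 'CyberTrust']
```

Nothing in the chain walk asks whether Etisalat has any business vouching for that hostname, which is exactly what makes a delegated intermediate a perfect man-in-the-middle tool.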
EDITED TO ADD (9/14): EFF has gotten involved.
I don’t think this is a good idea.
Interesting research: “What You See is What They Get: Protecting users from unwanted use of microphones, cameras, and other sensors,” by Jon Howell and Stuart Schechter.
Abstract: Sensors such as cameras and microphones collect privacy-sensitive data streams without the user’s explicit action. Conventional sensor access policies either hassle users to grant applications access to sensors or grant with no approval at all. Once access is granted, an application may collect sensor data even after the application’s interface suggests that the sensor is no longer being accessed.
We introduce the sensor-access widget, a graphical user interface element that resides within an application’s display. The widget provides an animated representation of the personal data being collected by its corresponding sensor, calling attention to the application’s attempt to collect the data. The widget indicates whether the sensor data is currently allowed to flow to the application. The widget also acts as a control point through which the user can configure the sensor and grant or deny the application access. By building perpetual disclosure of sensor data collection into the platform, sensor-access widgets enable new access-control policies that relax the tension between the user’s privacy needs and applications’ ease of access.
Apple seems to be taking some steps in this direction with the location sensor disclosure in iPhone 4.0 OS.
This is an interesting piece of research evaluating different user interface designs by which applications disclose to users what sort of authority they need to install themselves. Given all the recent concerns about third-party access to user data on social networking sites (particularly Facebook), this is particularly timely research.
We have provided evidence of a growing trend among application platforms to disclose, via application installation consent dialogs, the resources and actions that applications will be authorized to perform if installed. To improve the design of these disclosures, we have taken an important first step of testing key design elements. We hope these findings will assist future researchers in creating experiences that leave users feeling better informed and more confident in their installation decisions.
Within the admittedly constrained context of our laboratory study, disclosure design had surprisingly little effect on participants’ ability to absorb and search information. However, the great majority of participants preferred designs that used images or icons to represent resources. This great majority of participants also disliked designs that used paragraphs, the central design element of Facebook’s disclosures, and outlines, the central design element of Android’s disclosures.
It’s still only in the lab, but nothing detects it right now:
The attack is a clever “bait-and-switch” style move. Harmless code is passed to the security software for scanning, but as soon as it’s given the green light, it’s swapped for the malicious code. The attack works even more reliably on multi-core systems because one thread doesn’t keep an eye on other threads that are running simultaneously, making the switch easier.
The attack, called KHOBE (Kernel HOok Bypassing Engine), leverages the System Service Descriptor Table, or SSDT, the table the Windows kernel uses to dispatch system calls. Unfortunately, antivirus software hooks the SSDT to intercept and scan those calls, and that hook is exactly where the switch takes place.
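The bait-and-switch described above is a classic time-of-check-to-time-of-use (TOCTOU) race. Here is a minimal sketch of the mechanism in pure Python; the “scanner” and the shared buffer are stand-ins, not real Windows or antivirus APIs:

```python
import threading
import time

# Toy TOCTOU race: the scanner checks a buffer, approves it, and then
# acts on it -- but another thread swaps the contents in between.

buffer = {"code": "harmless"}
check_passed = threading.Event()

def scanner_then_execute():
    # Hook: scan the argument, approve it, then act on it.
    if buffer["code"] == "harmless":
        check_passed.set()          # green light given
        time.sleep(0.05)            # window between check and use
        return buffer["code"]       # use: reads the (now swapped) value

def attacker():
    check_passed.wait()             # wait until the scan has passed
    buffer["code"] = "malicious"    # swap before the approved call runs

t = threading.Thread(target=attacker)
t.start()
executed = scanner_then_execute()
t.join()
print(executed)  # 'malicious' -- the scanner approved different code
```

The fix is to eliminate the window: copy the argument before checking it and act only on the private copy, so the attacker’s swap hits data the scanner no longer consults.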
This idea, by Stuart Schechter at Microsoft Research, is—I think—clever:
Abstract: Implantable medical devices, such as implantable cardiac defibrillators and pacemakers, now use wireless communication protocols vulnerable to attacks that can physically harm patients. Security measures that impede emergency access by physicians could be equally devastating. We propose that access keys be written into patients’ skin using ultraviolet-ink micropigmentation (invisible tattoos).
It certainly is a new way to look at the security threat model.
MS Word has been dethroned:
Malicious PDF files targeting Adobe Reader were exploited in almost 49 per cent of the targeted attacks of 2009, compared with about 39 per cent that took aim at Microsoft Word. By comparison, in 2008, Acrobat was targeted in almost 29 per cent of attacks and Word was exploited by almost 35 per cent.