Entries Tagged "monoculture"


Me on App Store Monopolies and Security

There are two bills working their way through Congress that would force companies like Apple to allow competitive app stores. Apple hates this, since it would break its monopoly, and it’s making a variety of security arguments to bolster its position. I have written a rebuttal:

I would like to address some of the unfounded security concerns raised about these bills. It’s simply not true that this legislation puts user privacy and security at risk. In fact, it’s fairer to say that this legislation puts those companies’ extractive business models at risk. Their claims about risks to privacy and security are both false and disingenuous, and motivated by their own self-interest and not the public interest. App store monopolies cannot protect users from every risk, and they frequently prevent the distribution of important tools that actually enhance security. Furthermore, the alleged risks of third-party app stores and “side-loading” apps pale in comparison to their benefits. These bills will encourage competition, prevent monopolist extortion, and guarantee users a new right to digital self-determination.

Matt Stoller has also written about this.

EDITED TO ADD (2/13): Here are the two bills.

Posted on February 1, 2022 at 2:26 PM

Fearing Google

Mathias Döpfner writes an open letter explaining why he fears Google:

We know of no alternative which could offer even partially comparable technological prerequisites for the automated marketing of advertising. And we cannot afford to give up this source of revenue because we desperately need the money for technological investments in the future. Which is why other publishers are increasingly doing the same. We also know of no alternative search engine which could maintain or increase our online reach. A large proportion of high-quality journalistic media receives its traffic primarily via Google. In other areas, especially of a non-journalistic nature, customers find their way to suppliers almost exclusively through Google. This means, in plain language, that we, and many others, are dependent on Google. At the moment Google has a 91.2 percent search-engine market share in Germany. In this case, the statement “if you don’t like Google, you can remove yourself from their listings and go elsewhere” is about as realistic as recommending to an opponent of nuclear power that he just stop using electricity. He simply cannot do this in real life, unless he wants to join the Amish.

A reaction. And another.

Posted on May 6, 2014 at 10:30 AM

Software Monoculture

In 2003, a group of security experts—myself included—published a paper saying that 1) software monocultures are dangerous and 2) Microsoft, being the largest creator of monocultures out there, is the most dangerous. Marcus Ranum responded with an essay that basically said we were full of it. Now, seven years later, Marcus and I thought it would be interesting to revisit the debate.

The basic problem with a monoculture is that it’s all vulnerable to the same attack. The Irish Potato Famine of 1845–9 is perhaps the most famous monoculture-related disaster. The Irish planted only one variety of potato, and the genetically identical potatoes succumbed to a rot caused by Phytophthora infestans. Compare that with the diversity of potatoes traditionally grown in South America, each one adapted to the particular soil and climate of its home, and you can see the security value in heterogeneity.

Similar risks exist in networked computer systems. If everyone is using the same operating system or the same applications software or the same networking protocol, and a security vulnerability is discovered in that OS or software or protocol, a single exploit can affect everyone. This is the problem of large-scale Internet worms: many have affected millions of computers on the Internet.

If our networking environment weren’t homogeneous, a single worm couldn’t do so much damage. We’d be more like South America’s potato crop than Ireland’s. Conclusion: monoculture is bad; embrace diversity or die along with everyone else.

This analysis makes sense as far as it goes, but suffers from three basic flaws. The first is the assumption that our IT monoculture is as simple as the potato’s. When the particularly virulent Storm worm hit, it affected only 1 to 10 million of its billion-plus possible victims. Why? Because some computers were running updated antivirus software, or were within locked-down networks, or whatever. Two computers might be running the same OS or applications software, but they’ll be inside different networks with different firewalls and IDSs and router policies, they’ll have different antivirus programs and different patch levels and different configurations, and they’ll be in different parts of the Internet connected to different servers running different services. As Marcus pointed out back in 2003, they’ll be a little bit different themselves. That’s one of the reasons large-scale Internet worms don’t infect everyone—as well as the network’s ability to quickly develop and deploy patches, new antivirus signatures, new IPS signatures, and so on.

The second flaw in the monoculture analysis is that it downplays the cost of diversity. Sure, it would be great if a corporate IT department ran half Windows and half Linux, or half Apache and half Microsoft IIS, but doing so would require more expertise and cost more money. It wouldn’t cost twice the expertise and money—there is some overlap—but there are significant economies of scale that result from everyone using the same software and configuration. A single operating system locked down by experts is far more secure than two operating systems configured by sysadmins who aren’t so expert. Sometimes, as Mark Twain said: “Put all your eggs in one basket, and then guard that basket!”

The third flaw is that you can only get a limited amount of diversity by using two operating systems, or routers from three vendors. South American potato diversity comes from hundreds of different varieties. Genetic diversity comes from millions of different genomes. In monoculture terms, two is little better than one. Even worse, since a network’s security is primarily the minimum of the security of its components, a diverse network is less secure because it is vulnerable to attacks against any of its heterogeneous components.
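A back-of-the-envelope model (mine, not part of the original essay) makes that last point concrete. Assume each component type independently has some probability p of harboring an exploitable flaw, and that compromising any one type compromises the network; then adding diverse component types increases the chance that at least one is vulnerable:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Assumed, illustrative numbers: each component type has a 10%
       chance of an exploitable flaw, flaws are independent, and the
       attacker needs only one vulnerable type to get in. */
    double p = 0.10;

    for (int k = 1; k <= 3; k++) {
        double at_least_one = 1.0 - pow(1.0 - p, (double)k);
        printf("%d component type(s) -> P(network attackable) = %.3f\n",
               k, at_least_one);
    }
    return 0;
}
```

One homogeneous component type exposes the network with probability 0.100; three diverse types raise that to 0.271. That is the “minimum of the security of its components” point in arithmetic form.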

Some monoculture is necessary in computer networks. As long as we have to talk to each other, we’re all going to have to use TCP/IP, HTML, PDF, and all sorts of other standards and protocols that guarantee interoperability. Yes, there will be different implementations of the same protocol—and this is a good thing—but that won’t protect you completely. You can’t be too different from everyone else on the Internet, because if you were, you couldn’t be on the Internet.

Species basically have two options for propagating their genes: the lobster strategy and the avian strategy. Lobsters lay 5,000 to 40,000 eggs at a time, and essentially ignore them. Only a minuscule percentage of the hatchlings live to be four weeks old, but that’s sufficient to ensure gene propagation; from every 50,000 eggs, an average of two lobsters is expected to survive to legal size. Conversely, birds produce only a few eggs at a time, then spend a lot of effort ensuring that most of the hatchlings survive. In ecology, this is known as r/K selection theory. In either case, each of those offspring varies slightly genetically, so if a new threat arises, some of them will be more likely to survive. But even so, extinctions happen regularly on our planet; neither strategy is foolproof.

Our IT infrastructure is a lot more like a bird than a lobster. Yes, monoculture is dangerous and diversity is important. But investing time and effort in ensuring our current infrastructure’s survival is even more important.

This essay was originally published in Information Security, and is the first half of a point/counterpoint with Marcus Ranum. You can read his response there as well.

EDITED TO ADD (12/13): Commentary.

Posted on December 1, 2010 at 5:55 AM

Dan Geer on Trade-Offs and Monoculture

In the April 2007 issue of Queue, Dan Geer writes about security trade-offs, monoculture, and genetic diversity in honeybees:

Security people are never in charge unless an acute embarrassment has occurred. Otherwise, their advice is tempered by “economic reality,” which is to say that security is means, not an end. This is as it should be. Since means are about tradeoffs, security is about tradeoffs, but you already knew that.

Posted on May 17, 2007 at 6:58 AM

Commentary on Vista Security and the Microsoft Monopoly

This is right:

As Dan Geer has been saying for years, Microsoft has a bit of a problem. Either it stonewalls and pretends there is no security problem, which is what Vista does by taking over your computer to force patches (and DRM) down its throat. Or it actually changes the basic design and produces a secure operating system, which risks people wondering why they’re sticking with Windows and Microsoft at all. It turns out the former course may also lead to the latter result:

If you fit Microsoft’s somewhat convoluted definition of poor, it still wants to lock you in; you might get rich enough to afford the full-priced stuff someday. It is at a dangerous crossroads: if its software bumps up the price of a computer by 100 per cent, people might look to alternatives.

That means no MeII DRM-infection lock-in, no mass migration to the newer obfuscated and patented Office file formats, and, worse yet, people might utter the W word. Yes, you guessed it: ‘why’. People might ask why they are sticking with the MS lock-in, and at that point, it is in deep trouble.

Monopolies eventually overreach themselves and die. Maybe it’s finally Microsoft’s time to die. That would decrease the risk to the rest of us.

Posted on April 27, 2007 at 7:03 AM

Security and Monoculture

Interesting research.

EDITED TO ADD (8/1): The paper is only viewable by subscribers. Here are some excerpts:

Fortunately, buffer-overflow attacks have a weakness: the intruder must know precisely what part of the computer’s memory to target. In 1996, Forrest realised that these attacks could be foiled by scrambling the way a program uses a computer’s memory. When you launch a program, the operating system normally allocates the same locations in a computer’s random access memory (RAM) each time. Forrest wondered whether she could rewrite the operating system to force the program to use different memory locations that are picked randomly every time, thus flummoxing buffer-overflow attacks.

To test her concept, Forrest experimented with a version of the open-source operating system Linux. She altered the system to force programs to assign data to memory locations at random. Then she subjected the computer to several well-known attacks that used the buffer-overflow technique. None could get through. Instead, they targeted the wrong area of memory. Although part of the software would often crash, Linux would quickly restart it, and get rid of the virus in the process. In rare situations it would crash the entire operating system, a short-lived annoyance, certainly, but not bad considering the intruder had failed to take control of the machine.
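To get a concrete feel for what memory randomisation does (a minimal sketch of the effect, not Forrest’s actual system): compile the C program below on a modern Linux machine and run it twice. With address-space randomisation enabled, the printed addresses change between runs, and that uncertainty is exactly what defeats an exploit built around a hard-coded target address.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int on_stack;                /* a stack variable */
    void *on_heap = malloc(16);  /* a heap allocation */

    /* With randomisation enabled, these differ on every run, so an
       exploit that hard-codes an address misses its target. */
    printf("stack variable: %p\n", (void *)&on_stack);
    printf("heap block:     %p\n", on_heap);
    printf("code (main):    %p\n", (void *)&main);

    free(on_heap);
    return 0;
}
```

Run the same binary with randomisation disabled and the addresses repeat run after run, which is precisely the monoculture an attacker counts on.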

Linux computer-security experts quickly picked up on Forrest’s idea. In 2003 Red Hat, the maker of a popular version of Linux, began including memory-space randomisation in its products. “We had several vulnerabilities which we could downgrade in severity,” says Mark J. Cox, a Red Hat security expert.

[…]

Memory scrambling isn’t the only way to add diversity to operating systems. Even more sophisticated techniques are in the works. Forrest has tried altering “instruction sets”, commands that programs use to communicate with a computer’s hardware, such as its processor chip or memory.

Her trick was to replace the “translator” program that interprets these instruction sets with a specially modified one. Every time the computer boots up, Forrest’s software loads into memory and encrypts the instruction sets in the hardware using a randomised encoding key. When a program wants to send a command to the computer, Forrest’s translator decrypts the command on the fly so the computer can understand it.

This produces an elegant form of protection. If an attacker manages to insert malicious code into a running program, that code will also be decrypted by the translator when it is passed to the hardware. However, since the attacker’s code is not encrypted in the first place, the decryption process turns it into digital gibberish so the computer hardware cannot understand it. Since it exists only in the computer’s memory and has not been written to the computer’s hard disc, it will vanish upon reboot.
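A toy version of the scheme (a sketch only: XOR with a random byte stands in for the randomised encoding key, where real designs used stronger transformations): legitimate instructions are encoded at load time, the translator decodes every byte on its way to the processor, and injected code that was never encoded comes out as garbage.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    srand((unsigned)time(NULL));
    uint8_t key;
    do { key = (uint8_t)rand(); } while (key == 0); /* fresh key each "boot" */

    uint8_t program[] = {0x10, 0x20, 0x30};  /* legitimate instruction bytes */
    uint8_t memory[3];
    for (int i = 0; i < 3; i++)
        memory[i] = program[i] ^ key;        /* encoded when the program loads */

    uint8_t injected = 0x99;                 /* attacker's raw, unencoded byte */

    /* The translator decodes everything before the "CPU" sees it: */
    for (int i = 0; i < 3; i++)
        printf("legitimate 0x%02x decodes to 0x%02x\n",
               memory[i], (uint8_t)(memory[i] ^ key));
    printf("injected   0x%02x decodes to 0x%02x (gibberish, not 0x99)\n",
           injected, (uint8_t)(injected ^ key));
    return 0;
}
```

The legitimate bytes decode back to 0x10, 0x20, 0x30; the injected byte decodes to a value the attacker cannot predict without the key.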

Forrest has tested the process on several versions of Linux while launching buffer-overflow attacks. None were able to penetrate. As with memory randomisation, the failed attacks would, at worst, temporarily crash part of Linux – a small price to pay. Her translator program was a success. “It seemed like a crazy idea at first,” says Gabriel Barrantes, who worked with Forrest on the project. “But it turned out to be sound.”

[…]

In 2004, a group of researchers led by Hovav Shacham at Stanford University in California tried this trick against a copy of the popular web-server application Apache that was running on Linux, protected with memory randomisation. It took them 216 seconds per attack to break into it. They concluded that this protection is not sufficient to stop the most persistent viruses or a single, dedicated attacker.
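The arithmetic behind that 216-second figure is worth spelling out (my reconstruction: the Stanford paper traced the weakness to the roughly 16 bits of randomness available on 32-bit Linux; the probe rate below is an assumption for illustration):

```c
#include <stdio.h>

int main(void) {
    double keyspace = 1 << 16;        /* ~16 bits of randomness on 32-bit Linux */
    double expected = keyspace / 2.0; /* on average, half the space is searched */
    double probes_per_second = 150.0; /* assumed rate, for illustration only */

    printf("expected guesses: %.0f\n", expected);
    printf("expected time:    %.0f seconds\n", expected / probes_per_second);
    return 0;
}
```

At that assumed rate the expected time lands near the measured 216 seconds. The broader lesson is that a secret this small cannot stand up to automated guessing; 64-bit address spaces, with far more room to randomise, change the picture considerably.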

Last year, a group of researchers at the University of Virginia, Charlottesville, performed a similar attack on a copy of Linux whose instruction set was protected by randomised encryption. They used a slightly more complex approach, making a series of guesses about different parts of the randomisation key. This time it took over 6 minutes to force a way in: the system was tougher, but hardly invulnerable.
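Why guessing “different parts of the randomisation key” separately is so much cheaper (illustrative arithmetic, not the Virginia team’s exact procedure): if each byte of the key can be confirmed on its own before moving to the next, the search cost is additive rather than multiplicative.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    int key_bytes = 4; /* assumed key width, chosen only for illustration */

    /* Guessing the entire key at once: every combination may need testing. */
    double whole_key = pow(256.0, key_bytes);

    /* Confirming one byte at a time, then moving on to the next byte: */
    double byte_by_byte = 256.0 * key_bytes;

    printf("whole key at once: %.0f guesses worst case\n", whole_key);
    printf("byte by byte:      %.0f guesses worst case\n", byte_by_byte);
    return 0;
}
```

That collapse from exponential to linear search is what turns a computationally hopeless attack into a few minutes of work.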

[…]

Knight says that randomising the encryption on the instruction set is a more powerful technique because it can use larger and more complex forms of encryption. The only limitation is that as the encryption becomes more complicated, it takes the computer longer to decrypt each instruction, and this can slow the machine down. Barrantes found that instruction-set randomisation more than doubled the length of time an instruction took to execute. Make the encryption too robust, and computer users could find themselves drumming their fingers as they wait for a web page to load.

So he thinks the best approach is to combine different types of randomisation. Where one fails, another picks up. Last year, he took a variant of Linux and randomised both its memory-space allocation and its instruction sets. In December, he put 100 copies of the software online and hired a computer-security firm to try to penetrate them. The attacks failed. In May, he repeated the experiment, but this time he provided the attackers with extra information about the randomised software. Their assault still failed.

The idea was to simulate what would happen if an adversary had a phenomenal amount of money, and secret information from an inside collaborator, says Knight. The results pleased him and, he hopes, will also please DARPA when he presents them to the agency. “We aren’t claiming we can do everything, but for broad classes of attack, these techniques appear to work very well. We have no reason to believe that there would be any change if we were to try to apply this to the real world.”

EDITED TO ADD (8/2): The article is online here.

Posted on August 1, 2006 at 6:26 AM
