Security and Monoculture
EDITED TO ADD (8/1): The article is only viewable by subscribers. Here are some excerpts:
Fortunately, buffer-overflow attacks have a weakness: the intruder must know precisely what part of the computer’s memory to target. In 1996, Forrest realised that these attacks could be foiled by scrambling the way a program uses a computer’s memory. When you launch a program, the operating system normally allocates the same locations in a computer’s random access memory (RAM) each time. Forrest wondered whether she could rewrite the operating system to force the program to use different memory locations that are picked randomly every time, thus flummoxing buffer-overflow attacks.
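A minimal sketch makes the idea concrete (my illustration, not code from the article): the C program below simply prints where a stack variable and a heap buffer landed. On a Linux system with address-space randomisation enabled, both addresses change on every run, so an exploit that hard-codes a target address will usually hit the wrong place.

```c
/* Minimal illustration (not from the article): print the address of a
 * stack variable and a heap buffer.  Under memory-space randomisation,
 * both values differ from run to run, so an attack that hard-codes a
 * target address will usually miss. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int on_stack;
    char *on_heap = malloc(64);
    if (on_heap == NULL)
        return 1;

    printf("stack variable at %p\n", (void *)&on_stack);
    printf("heap buffer at    %p\n", (void *)on_heap);

    free(on_heap);
    return 0;
}
```

Compile it and run it twice: if the printed addresses differ between runs, randomisation is active on your machine.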
To test her concept, Forrest experimented with a version of the open-source operating system Linux. She altered the system to force programs to assign data to memory locations at random. Then she subjected the computer to several well-known attacks that used the buffer-overflow technique. None could get through; each one targeted the wrong area of memory. Although part of the software would often crash, Linux would quickly restart it and get rid of the virus in the process. In rare cases an attack would crash the entire operating system: a short-lived annoyance, certainly, but not bad considering that the intruder had failed to take control of the machine.
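For readers who want to see the kind of code these attacks exploit, here is a hypothetical sketch (mine, not code from Forrest's experiments) of the classic pattern:

```c
/* Hypothetical example of the classic vulnerability, not code from
 * Forrest's experiments.  strcpy() performs no bounds check, so input
 * longer than 64 bytes overwrites the saved return address that sits
 * above buf on the stack.  A working exploit has to end with the
 * absolute address of the attacker's injected code; when the memory
 * layout is randomised, that hard-coded address is wrong on almost
 * every run, so the process segfaults instead of being hijacked. */
#include <string.h>

static void handle_request(const char *input)
{
    char buf[64];
    strcpy(buf, input);
}

int main(void)
{
    handle_request("short, harmless input");  /* safe demonstration call */
    return 0;
}
```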
Linux computer-security experts quickly picked up on Forrest’s idea. In 2003 Red Hat, the maker of a popular version of Linux, began including memory-space randomisation in its products. “We had several vulnerabilities which we could downgrade in severity,” says Mark J. Cox, a Red Hat security expert.
[…]
Memory scrambling isn’t the only way to add diversity to operating systems. Even more sophisticated techniques are in the works. Forrest has tried altering “instruction sets”, commands that programs use to communicate with a computer’s hardware, such as its processor chip or memory.
Her trick was to replace the “translator” program that interprets these instructions with a specially modified one. Every time the computer boots up, Forrest’s software loads into memory and encrypts each program’s instructions using a randomised encoding key. When a program wants to send a command to the computer, Forrest’s translator decrypts the command on the fly so the hardware can understand it.
This produces an elegant form of protection. If an attacker manages to insert malicious code into a running program, that code will also be decrypted by the translator when it is passed to the hardware. However, since the attacker’s code is not encrypted in the first place, the decryption process turns it into digital gibberish so the computer hardware cannot understand it. Since it exists only in the computer’s memory and has not been written to the computer’s hard disc, it will vanish upon reboot.
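A toy model shows the mechanism (my sketch; Forrest's real system worked on actual machine code through a translation layer, and XOR here is just a stand-in for whatever cipher is used). A tiny bytecode interpreter scrambles its program with a random key at load time and unscrambles each byte as it fetches it; injected bytes that were never scrambled decode to gibberish and trip the illegal-instruction trap.

```c
/* Toy model of instruction-set randomisation (my sketch, not Forrest's
 * implementation): a tiny bytecode interpreter XORs the program with a
 * random per-boot key at load time and undoes the XOR on every fetch.
 * Injected bytes that were never encoded decode to garbage and, with
 * overwhelming probability, hit the illegal-instruction trap. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum { OP_PUSH = 1, OP_ADD = 2, OP_PRINT = 3, OP_HALT = 4 };

static void run(const unsigned char *code, size_t len, unsigned char key)
{
    int stack[16], sp = 0;

    for (size_t pc = 0; pc < len; ) {
        unsigned char op = code[pc++] ^ key;       /* decrypt on the fly */
        switch (op) {
        case OP_PUSH:
            if (pc >= len || sp >= 16) goto trap;
            stack[sp++] = code[pc++] ^ key;
            break;
        case OP_ADD:
            if (sp < 2) goto trap;
            sp--; stack[sp - 1] += stack[sp];
            break;
        case OP_PRINT:
            if (sp < 1) goto trap;
            printf("%d\n", stack[sp - 1]);
            break;
        case OP_HALT:
            return;
        default:
        trap:
            fprintf(stderr, "illegal instruction %u: halting\n", op);
            return;                                /* injected code dies */
        }
    }
}

int main(void)
{
    unsigned char program[] = { OP_PUSH, 2, OP_PUSH, 3,
                                OP_ADD, OP_PRINT, OP_HALT };

    srand((unsigned)time(NULL));
    unsigned char key = (unsigned char)(rand() & 0xff);

    /* "Load time": encrypt the legitimate program under this boot's key. */
    for (size_t i = 0; i < sizeof program; i++)
        program[i] ^= key;

    run(program, sizeof program, key);             /* prints 5 */

    /* An attacker injects raw, un-encoded bytes... */
    const unsigned char injected[] = { OP_PRINT, OP_HALT };
    run(injected, sizeof injected, key);           /* ...and gets gibberish */
    return 0;
}
```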
Forrest has tested the process on several versions of Linux, launching buffer-overflow attacks against each. None got through. As with memory randomisation, the failed attacks would, at worst, temporarily crash part of Linux, a small price to pay. Her translator program was a success. “It seemed like a crazy idea at first,” says Gabriel Barrantes, who worked with Forrest on the project. “But it turned out to be sound.”
[…]
In 2004, a group of researchers led by Hovav Shacham at Stanford University in California tried a brute-force attack of this kind against a copy of the popular web-server application Apache, running on Linux protected with memory randomisation. Each break-in took them 216 seconds on average. They concluded that memory randomisation alone is not sufficient to stop the most persistent viruses or a single, dedicated attacker.
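The arithmetic behind that speed is worth spelling out. The sketch below assumes the setup reported in the 2004 paper: on 32-bit hardware the randomisation leaves only 16 bits of entropy in the secret offset an exploit must guess, and because the server restarts crashed workers without re-randomising, wrong guesses can simply be retried.

```c
/* Back-of-the-envelope check (my sketch) of the 216-second figure,
 * assuming the 2004 paper's setup: 16 bits of entropy in the secret
 * offset, and a server that lets wrong guesses be retried. */
#include <stdio.h>

int main(void)
{
    long search_space = 1L << 16;              /* 65,536 possible offsets */
    double expected   = search_space / 2.0;    /* mean guesses to succeed */
    double seconds    = 216.0;                 /* reported average */

    printf("search space:     %ld offsets\n", search_space);
    printf("expected guesses: %.0f\n", expected);
    printf("implied rate:     ~%.0f probes/second\n", expected / seconds);
    return 0;
}
```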
Last year, a group of researchers at the University of Virginia, Charlottesville, performed a similar attack on a copy of Linux whose instruction set was protected by randomised encryption. They used a slightly more complex approach, making a series of guesses about different parts of the randomisation key. This time it took over 6 minutes to force a way in: the system was tougher, but hardly invulnerable.
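Guessing the key in parts rather than all at once is what makes such an attack feasible, and a small simulation (mine, not the Virginia group's code) shows why: if the attacker can learn whether a single guessed key byte is right, for instance from the way a one-byte probe crashes the target, a 4-byte key falls in at most 4 x 256 guesses instead of 256^4.

```c
/* Simulation (mine, not the Virginia group's code) of incremental key
 * guessing.  The probe() oracle stands in for the observable crash
 * behaviour that reveals whether one guessed key byte is correct.
 * Byte-by-byte recovery needs at most KEYLEN * 256 probes, against
 * 256^KEYLEN for guessing the whole key at once. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define KEYLEN 4
static unsigned char secret[KEYLEN];    /* the hidden randomisation key */

/* Oracle: in the real attack this signal came from how the target
 * process responded to a crafted one-byte probe. */
static int probe(int pos, unsigned char guess)
{
    return secret[pos] == guess;
}

int main(void)
{
    srand((unsigned)time(NULL));
    for (int i = 0; i < KEYLEN; i++)
        secret[i] = (unsigned char)(rand() & 0xff);

    unsigned char recovered[KEYLEN] = { 0 };
    long tries = 0;

    for (int pos = 0; pos < KEYLEN; pos++)      /* one byte at a time */
        for (int g = 0; g < 256; g++) {
            tries++;
            if (probe(pos, (unsigned char)g)) {
                recovered[pos] = (unsigned char)g;
                break;
            }
        }

    printf("recovered key ");
    for (int i = 0; i < KEYLEN; i++)
        printf("%02x", recovered[i]);
    printf(" in %ld probes (worst case %d; whole-key brute force: 256^%d)\n",
           tries, KEYLEN * 256, KEYLEN);
    return 0;
}
```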
[…]
Knight says that instruction-set randomisation is the more powerful technique because it can use larger keys and more complex forms of encryption. The only limitation is that as the encryption becomes more complicated, it takes the computer longer to decrypt each instruction, and this can slow the machine down. Barrantes found that instruction-set randomisation more than doubled the length of time an instruction took to execute. Make the encryption too robust, and computer users could find themselves drumming their fingers as they wait for a web page to load.
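That cost curve is easy to demonstrate with a crude micro-benchmark (my sketch, not Barrantes's measurements, and the sixteen-round "cipher" is an invented stand-in for more robust encryption): time a fetch loop with no decoding, with a one-step XOR decode, and with the costlier stand-in.

```c
/* Crude micro-benchmark (my sketch, not Barrantes's measurements):
 * time a fetch loop with no decoding, with a one-step XOR decode, and
 * with a deliberately costlier stand-in cipher.  The more work each
 * per-instruction decode does, the slower the "machine" runs. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1L << 26)                    /* 64 MiB of toy "instructions" */

static unsigned char strong_decode(unsigned char b, unsigned char key)
{
    /* Stand-in for a more complex cipher: sixteen mixing rounds. */
    for (int r = 0; r < 16; r++)
        b = (unsigned char)(((b ^ key) << 1) | ((b ^ key) >> 7));
    return b;
}

int main(void)
{
    unsigned char *code = malloc(N);
    if (code == NULL)
        return 1;
    for (long i = 0; i < N; i++)
        code[i] = (unsigned char)i;

    unsigned char key = 0x5a;
    volatile unsigned long sink = 0;    /* defeat dead-code elimination */

    clock_t t0 = clock();
    for (long i = 0; i < N; i++)
        sink += code[i];                          /* plain fetch */
    clock_t t1 = clock();
    for (long i = 0; i < N; i++)
        sink += (unsigned char)(code[i] ^ key);   /* fetch + XOR decode */
    clock_t t2 = clock();
    for (long i = 0; i < N; i++)
        sink += strong_decode(code[i], key);      /* fetch + 16 rounds */
    clock_t t3 = clock();

    printf("plain:  %.3fs\nxor:    %.3fs\nstrong: %.3fs\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC,
           (double)(t3 - t2) / CLOCKS_PER_SEC);

    free(code);
    return 0;
}
```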
So he thinks the best approach is to combine different types of randomisation, so that where one fails, another picks up. Last year, he took a variant of Linux and randomised both its memory-space allocation and its instruction sets. In December, he put 100 copies of the software online and hired a computer-security firm to try to penetrate them. The attacks failed. In May, he repeated the experiment, but this time he provided the attackers with extra information about the randomised software. Their assault still failed.
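Continuing the toy interpreter sketched earlier (again my illustration, not Knight's system), layering the two defences means an exploit has to be right about two independent secrets at once, which multiplies the search spaces:

```c
/* Hypothetical sketch (not Knight's system) of layering the two
 * defences: the XOR-encoded toy program is also copied to a random
 * offset inside a larger arena.  An exploit now has to guess the load
 * address AND the key, so the per-probe success rate is the product
 * of two small probabilities. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    unsigned char program[] = { 1, 2, 1, 3, 2, 3, 4 };  /* toy bytecode */
    static unsigned char arena[4096];

    srand((unsigned)time(NULL));
    /* Instruction-set randomisation: a fresh key per boot. */
    unsigned char key = (unsigned char)(rand() & 0xff);
    /* Memory-space randomisation: a fresh load offset per boot. */
    size_t base = (size_t)rand() % (sizeof arena - sizeof program);

    for (size_t i = 0; i < sizeof program; i++)
        arena[base + i] = program[i] ^ key;   /* encode at a random base */

    printf("program loaded at offset %zu, encoded under key 0x%02x\n",
           base, key);
    printf("attacker must guess 1 of %zu offsets AND 1 of 256 keys\n",
           sizeof arena - sizeof program);
    return 0;
}
```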
“The idea was to simulate what would happen if an adversary had a phenomenal amount of money, and secret information from an inside collaborator,” says Knight. The results pleased him and, he hopes, will also please DARPA when he presents them to the agency. “We aren’t claiming we can do everything, but for broad classes of attack these techniques appear to work very well. We have no reason to believe that there would be any change if we were to try to apply this to the real world.”
EDITED TO ADD (8/2): The article is online here.
JakeS • August 1, 2006 7:41 AM
May be interesting, but only New Scientist subscribers can read it.