Security Research and the Future
By Bruce Schneier
Dr. Dobb's Journal
Security threats will continue to loom
For the longest time, cryptography was a solution looking for a problem. And outside the military and a few paranoid individuals, there wasn't any problem. Then along came the Internet, and with the Internet came e-commerce, corporate intranets and extranets, voice over IP, B2B, and the like. Suddenly everyone is talking about cryptography. Suddenly everyone is talking about computer security. There are more companies and products, and more research. And a lot more interest.
But at the same time, the state of security is getting worse. More vulnerabilities are being found in operating systems (not just Microsoft's, but everyone's) than ever before. There are more viruses and worms being released, and they're doing more damage. There are nastier denial-of-service tools, and more effective root kits. What research is necessary to reverse this trend? How can we make security work?
Security Products and Security Research
Unlike almost all other design criteria, security is independent of functionality. If you're coding a word processor, and you want to test the print functionality, you can hook up a printer and see if it prints. If you're smart, you hook up several kinds of printers and print different kinds of documents. That's easy; if the software functions as specified, then you know it works.
Security is different. Imagine that you are building an encryption function into that same word processor. You test it the same way: You encrypt a series of documents, and you decrypt them again. The decryption recovers the plaintext; the ciphertext looks like gibberish. It all works great. Unfortunately, the test indicates nothing about the security of the encryption. Products are useful for what they do; security products are useful because of what they prevent.
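The point can be made concrete with a short sketch. This is hypothetical code, not anything from a real product: a "cipher" that passes every functional round-trip test you throw at it, while offering essentially no security.

```python
# A toy "cipher" (hypothetical): XOR every byte with one repeating key byte.
def xor_crypt(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

plaintext = b"Quarterly report: profits up 40%"
ciphertext = xor_crypt(plaintext, 0x5A)

# The functional test passes perfectly:
assert ciphertext != plaintext                     # output looks scrambled
assert xor_crypt(ciphertext, 0x5A) == plaintext    # decryption recovers the text

# And yet the scheme is worthless. An attacker who guesses a single
# plaintext byte recovers the entire key instantly:
recovered_key = ciphertext[0] ^ plaintext[0]
assert recovered_key == 0x5A
```

The round-trip test exercises what the product does; it says nothing about what the product prevents.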
The result is that most security products are not very good. Using the right technology isn't enough; you have to use it properly. You have to make sure every aspect of the system (the design, algorithms, protocols, implementation, installation, and so on) is strong. Security fails most often not because of some fundamental problem in the science behind the product, but because of some stupid mistake. Or, more likely, dozens of stupid mistakes.
The amount of security research, and the number of security conferences, has exploded in the past decade. There seems to be no end to the stream of papers, theses, monographs, and studies about computer and network security. Most of the work is mediocre, but some of it is very good. My worry is that none of it will really matter, because the real problems are much bigger than security research.
I don't spend a lot of time worrying about whether 100 bits or 120 bits is strong enough, whether this type of signature scheme is better than this other kind of signature scheme, or whether this kind of firewall works better than these other kinds. It's not about the technologies-it's about the implementation. It's about the use. It's about how well the users understand what they're doing.
Technologies to Watch
That said, there are technologies on the horizon that will have an effect on security products, some good and some bad. It's well worth paying attention to them.
- Cryptographic breakthroughs. Almost no cryptography is based on mathematical proofs; the best that we can say is that we can't break it, and all the other smart people who tried can't break it either. There is always the possibility that someday we will learn new techniques that allow us to break what we can't break today. (There's a saying inside the NSA: "Attacks always get better; they never get worse.") We've seen this in the past, where once-secure algorithms fall to new techniques, and we're likely to see it in the future. Some people even assume that the NSA already knows much of this new mathematics, and is quietly and profitably breaking our strongest encryption algorithms. I just don't think so; they may have some secret techniques, but not many.
- Factoring breakthroughs. One worry is that all of the different public-key algorithms are fundamentally based on the same mathematical problems: the problem of factoring large numbers or the discrete logarithm problem. Factoring is getting easier, and it's getting easier faster than anyone ever thought it would. These problems are not mathematically proven to be hard, and it is certainly possible (although mathematicians don't think it likely) that within our lifetime, someone will come up with a way to efficiently solve these problems. If this happens, we could be in a world where public-key cryptography does not work and would be a quaint historical oddity. This won't be terrible; authentication infrastructure schemes based on symmetric cryptography can do much of the same job. Even so, I don't think it's likely.
- Quantum computers. Someday, quantum mechanics may fundamentally change the way computers work. Right now people can barely figure out how to make quantum computers add two 1-bit numbers, but who knows what will happen. Quantum computers will render most public-key algorithms obsolete (see the preceding item), but will only force us to double the key lengths for symmetric ciphers, hash functions, and MACs.
- Tamperproof hardware. A lot of security problems magically get a lot easier if you assume tamperproof hardware and put things inside of it. Breakthroughs in tamper-resistance technologies, especially breakthroughs in the cost of different tamper-resistance measures, could make a lot of security problems easier.
- Artificial intelligence. Many computer-security countermeasures can be reduced to a simple problem: letting the good stuff in while keeping the bad stuff out. This is the way antivirus software, firewalls, intrusion detection systems, VPNs, credit-card antifraud systems, digital cell phone authentication, and a whole lot of other things work. There are two ways to do this. You can be dumb about it (if you see any of these 10,000 bit patterns in the file, the file has a virus), or you can be smart about it (if the program starts doing suspicious things to the computer, it's probably a virus and should be investigated further). The latter sounds an awful lot like AI. Unfortunately, this kind of thing was tried in antivirus software, and ended up being less effective than the dumb pattern checkers. The same kinds of techniques are being used in intrusion detection systems, and it is still unclear whether they do a better job than the dumb intrusion-detection products that just look for bit patterns indicating an attack. Still, this could be a big deal someday; if fundamental advances ever occur in the field of AI, they have the potential to revolutionize computer security.
- Automatic program checkers. Many security bugs, such as buffer overflows, are the result of sloppy programming. Good automatic tools that can scan code for potential security-related bugs would go a long way toward making software more secure. Good language compilers, and good language syntax, can go a long way toward preventing programmers from making security-related mistakes in the first place. We'd have to convince programmers to use them, which is probably another matter entirely. (There are a bunch of good tools out there, and no one seems to be using them yet.) And they're never going to catch every problem.
- Secure networking infrastructures. The Internet is not secure because security was never designed into the system. People who are working on the Internet-II (and whatever follows) should be thinking about security from the beginning. These new networks should assume that people will be eavesdropping on communications, that they will attempt to hijack connections, and that packet headers can be forged. They should assume that they will be used by mutually distrustful parties for all sorts of business and personal applications. There are a lot of problems that can't be solved with better network protocols, but a lot can.
- Traffic analysis. Traffic analysis is the study of communications patterns. Sometimes who is communicating to whom, and how frequently, is just as important as what they said. The Internet makes some forms of traffic analysis easy, and there has been little research on defenses. In some ways, the study of traffic analysis is in the same situation that the study of cryptography was in the early 1980s. Expect an explosion of research in the next decade.
- Assurance. Assurance means that a system does what it is supposed to do, and doesn't do anything else. Security assurance is very similar to safety assurance: A safe system is one that does what it is supposed to do, even in the presence of random faults. Security is harder; it has to provide assurance even in the presence of an intelligent and malicious adversary. A technology that could somehow provide strong assurance in software would do amazing things for computer security.
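The automatic-checker idea above can be illustrated in a few lines. This is a hypothetical sketch, not a real tool, and the list of flagged calls is deliberately minimal: it just greps C source for the library functions most often implicated in buffer overflows.

```python
import re

# Hypothetical toy checker: flag C library calls that commonly cause
# buffer overflows. Real scanners do far more, but the idea is the same.
UNSAFE_CALLS = ("gets", "strcpy", "sprintf")

def check(source: str) -> list[tuple[int, str]]:
    """Return (line_number, function_name) for each risky call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for fn in UNSAFE_CALLS:
            if re.search(rf"\b{fn}\s*\(", line):
                findings.append((lineno, fn))
    return findings

sample = "char buf[8];\nstrcpy(buf, user_input);  /* no bounds check */\n"
assert check(sample) == [(2, "strcpy")]
```

Even a checker this crude catches the low-hanging fruit; the hard part, as noted above, is getting programmers to run it.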
Most of these technologies are being worked on today. Practical advances in some of them are far in the future; some are on the lunatic horizon. I wouldn't dismiss any of these technologies, though. If there's anything the twentieth century has taught us, it's to be parsimonious with the word "impossible."
Will We Ever Learn?
Consider buffer overflow attacks. These were first talked about in the security community as early as the 1960s (timesharing systems suffered from the problem), and were probably known by the security literati even earlier. Early networked computers in the 1970s had the problem, and it was often used as a point of attack against systems. The Morris worm, in 1988, exploited a buffer overflow in the UNIX fingerd daemon, a very public use of this type of attack. Now, over a decade after Morris and about 35 years after they were first discovered, you'd think the security community would have solved the problem of security vulnerabilities based on buffer overflows. Think again. In 1998, over two-thirds of all CERT advisories were for vulnerabilities caused by buffer overflows. During a particularly bad fortnight in 1999, 18 separate security flaws, all caused by buffer overflows, were reported in Windows NT-based applications. During the first week of March, in which I wrote part of this article, three more buffer overflows were reported. And buffer overflows are just the low-hanging fruit. If we ever manage to eradicate the problem, others just as bad will replace them.
Consider encryption algorithms. Proprietary secret algorithms are regularly exposed and then trivially broken. Again and again, the marketplace learns that proprietary secret algorithms are a bad idea. But companies and industries continue to choose proprietary algorithms over public, free alternatives.
Or look at fixable problems. One particular security hole in Microsoft's Internet Information Server was used by hackers to steal thousands of credit-card numbers from a variety of e-commerce sites in early 2000. Microsoft issued a patch that fixed the vulnerability in July 1998, and reissued a warning in July 1999 when it became clear that many users never bothered installing the patch.
Isn't Anyone Paying Attention?
Not really. Or, at least, far fewer people are paying attention than should be. The enormous demand for digital security products requires experts to design, develop, and implement them; the resulting shortage of experts means that the percentage of people paying attention will get even smaller.
I'm constantly amazed by the kinds of things that break security products. I've seen a file encryption product whose user interface accidentally saves the key in the clear. I've seen VPNs where the telephone configuration file accidentally allows untrusted persons to authenticate themselves to the server, or where one VPN client can see the files of all other VPN clients. There are a zillion ways to make a product insecure, and manufacturers manage to stumble on a lot of those ways again and again.
They don't learn because they don't have to. Security research doesn't make it into products, and even if it does, it doesn't make it into products securely.
Computer security products, like software in general, have an odd product quality model. It's unlike an automobile, a skyscraper, or a box of fried chicken. If you buy a product, and get harmed because of a manufacturer's defect, you can sue, and you'll win. Car makers can't get away with building cars that explode on impact; lunch counters can't get away with selling strawberry tart with the odd rat mixed in. It just wouldn't do for building contractors to say things like: "Whoops. There goes another one. But just wait for Skyscraper 1.1; it'll be 100 percent collapse-free." These companies are liable for their actions.
Software is different. It is sold without any liability whatsoever. For example, here's the language in the Windows 98 licensing agreement: "In no event shall Manufacturer or its suppliers be liable for any damages whatsoever arising out of the use of or inability to use this product, even if Manufacturer has been advised of the possibility of such damages."
Your accounts receivable database could crash, taking your company down with it, and you have no claim against the software company. Your word processor could corrupt your entire book manuscript (something I spend way too much time worrying about while writing), wasting years of work, and you have no recourse. Your firewall could turn out to be completely ineffectual (hardly better than having nothing) and it's your fault. Microsoft could field Hotmail with a bug that lets anyone read the accounts of 40 or so million subscribers, password or no password, and not even bother to apologize.
Software manufacturers don't have to produce a quality product because they face no consequences if they don't. (Actually, product liability exists, but it is limited to replacing a physically defective diskette or CD-ROM.) And the effect of this for security products is that manufacturers don't have to produce products that are actually secure, because no one can sue them if they make a bunch of false claims of security.
The upshot is that the marketplace does not reward real security. Real security is harder, slower, and more expensive to design and implement. The buying public has no way to differentiate real security from bad security. The way to win in this marketplace is to design software as insecure as you can possibly get away with.
Smart software companies know that reliable software is not cost effective. According to studies, 90 to 95 percent of all bugs are harmless; they're never found by users and they don't affect performance. It's much cheaper for a company to release buggy software and fix the 5 to 10 percent of bugs after people complain.
They also know that real security is not cost effective. They get whacked with a new security vulnerability several times a week. They fix the ones they can, write deceptive press releases about the ones they can't; then they wait for the press fervor to die down (which it always does). Then they issue a new version of their software with new features that add all sorts of new security problems, because users prefer cool features to security. And users always will. Until companies have some legal incentive to produce secure products, they won't bother. No amount of research will change that.
If there's anything that should be concluded from this article, it's that the fundamental security problems of today are not about technology, they're about using technology. Security research continues, and will always be important. But what is more important is convincing vendors and users to think about security properly, to implement security properly, and to use security properly. Security is not a product. There isn't a technology, existing today or anywhere on the horizon, that you can sprinkle over your network and magically make it secure. Security is a process. And it is the process of security that brings about security.