Security Research and the Future
By Bruce Schneier
Security threats will continue to loom
For the longest time, cryptography was a solution looking for a problem. And outside the military and a few paranoid individuals, there wasn't any problem. Then along came the Internet, and with the Internet came e-commerce, corporate intranets and extranets, voice over IP, B2B, and the like. Suddenly everyone is talking about cryptography. Suddenly everyone is talking about computer security. There are more companies and products, and more research. And a lot more interest.
But at the same time, the state of security is getting worse. There are more vulnerabilities being found in operating systems (not just Microsoft's, but everyone's) than ever before. There are more viruses (or worms) being released, and they're doing more damage. There are nastier denial-of-service tools, and more effective root kits. What research is necessary to reverse this trend? How can we make security work?
Security Products and Security Research
Unlike almost all other design criteria, security is independent of functionality. If you're coding a word processor, and you want to test the print functionality, you can hook up a printer and see if it prints. If you're smart, you hook up several kinds of printers and print different kinds of documents. That's easy; if the software functions as specified, then you know it works.
Security is different. Imagine that you are building an encryption function into that same word processor. You test it the same way: You encrypt a series of documents, and you decrypt them again. The decryption recovers the plaintext; the ciphertext looks like gibberish. It all works great. Unfortunately, the test indicates nothing about the security of the encryption. Products are useful for what they do; security products are useful because of what they prevent.
The result of this is that most security products are not very good. Using the right technology isn't enough; you have to use it properly. You have to make sure every aspect of the system (the design, algorithms, protocols, implementation, installation, and so on) is strong. Security fails most often not because of some fundamental problem in the science behind the product, but because of some stupid mistake. Or, more likely, dozens of stupid mistakes.
The amount of security research, and the number of security conferences, has exploded in the past decade. There seems to be no end to the stream of papers, theses, monographs, and studies about computer and network security. Most of the work is mediocre, but some of it is very good. My worry is that none of it will really matter, because the real problems are much bigger than security research.
I don't spend a lot of time worrying about whether 100 bits or 120 bits is strong enough, whether this type of signature scheme is better than this other kind of signature scheme, or whether this kind of firewall works better than these other kinds. It's not about the technologies-it's about the implementation. It's about the use. It's about how well the users understand what they're doing.
Technologies to Watch
This being said, there are technologies on the horizon that will have an effect on security products, some good and some bad. It's well worth paying attention to them.
Most of these technologies are being worked on today. Practical advances in some of them are far in the future, and some sit on the lunatic horizon. I wouldn't dismiss any of these technologies, though. If there's anything the twentieth century has taught us, it's to be parsimonious with the word "impossible."
Will We Ever Learn?
Consider buffer overflow attacks. These were first talked about in the security community as early as the 1960s (timesharing systems suffered from the problem) and were probably known by the security literati even earlier. Early networked computers in the 1970s had the problem, and it was often used as a point of attack against systems. The Morris worm, in 1988, exploited a buffer overflow in the UNIX finger command, a public use of this type of attack. Now, over a decade after Morris and about 35 years after they were first discovered, you'd think the security community would have solved the problem of security vulnerabilities based on buffer overflows. Think again. In 1998, over two-thirds of all CERT advisories were for vulnerabilities caused by buffer overflows. During a particularly bad fortnight in 1999, 18 separate security flaws, all caused by buffer overflows, were reported in Windows NT-based applications. During the first week of March, in which I wrote part of this article, there were three buffer overflows reported. And buffer overflows are just the low-hanging fruit. If we ever manage to eradicate the problem, others just as bad will replace them.
Consider encryption algorithms. Proprietary secret algorithms are regularly exposed and then trivially broken. Again and again, the marketplace learns that proprietary secret algorithms are a bad idea. But companies and industries continue to choose proprietary algorithms over public, free alternatives.
Or look at fixable problems. One particular security hole in Microsoft's Internet Information Server was used by hackers to steal thousands of credit-card numbers from a variety of e-commerce sites in early 2000. Microsoft issued a patch that fixed the vulnerability in July 1998, and reissued a warning in July 1999 when it became clear that many users never bothered installing the patch.
Isn't Anyone Paying Attention?
Not really. Or, at least, far fewer people are paying attention than should be. Meanwhile, the enormous demand for digital security products requires experts to design, develop, and implement them, and that demand far outstrips the supply. The resulting dearth of experts means that the percentage of people paying attention will get even smaller.
I'm constantly amazed by the kinds of things that break security products. I've seen a file encryption product with a user interface that accidentally saves the key in the clear. I've seen VPNs where the telephone configuration file accidentally allows untrusted persons to authenticate themselves to the server, or where one VPN client can see the files of all other VPN clients. There are a zillion ways to make a product insecure, and manufacturers manage to stumble on a lot of those ways again and again.
They don't learn because they don't have to. Security research doesn't make it into products, and even if it does, it doesn't make it into products securely.
Computer security products, like software in general, have an odd product quality model. It's unlike an automobile, a skyscraper, or a box of fried chicken. If you buy a product, and get harmed because of a manufacturer's defect, you can sue, and you'll win. Car makers can't get away with building cars that explode on impact; lunch counters can't get away with selling strawberry tart with the odd rat mixed in. It just wouldn't do for building contractors to say things like: "Whoops. There goes another one. But just wait for Skyscraper 1.1; it'll be 100 percent collapse-free." These companies are liable for their actions.
Software is different. It is sold without any liability whatsoever. For example, here's the language in the Windows 98 licensing agreement: "In no event shall Manufacturer or its suppliers be liable for any damages whatsoever arising out of the use of or inability to use this product, even if Manufacturer has been advised of the possibility of such damages."
Your accounts receivable database could crash, taking your company down with it, and you have no claim against the software company. Your word processor could corrupt your entire book manuscript (something I spend way too much time worrying about while writing), wasting years of work, and you have no recourse. Your firewall could turn out to be completely ineffectual (hardly better than having nothing) and it's your fault. Microsoft could field Hotmail with a bug that lets anyone read the accounts of 40 or so million subscribers, password or no password, and not even bother to apologize.
Software manufacturers don't have to produce a quality product because they face no consequences if they don't. (Actually, product liability exists, but it is limited to replacing a physically defective diskette or CD-ROM.) And the effect of this for security products is that manufacturers don't have to produce products that are actually secure, because no one can sue them if they make a bunch of false claims of security.
The upshot of this is that the marketplace does not reward real security. Real security is harder, slower, and more expensive to design and implement. The buying public has no way to differentiate real security from bad security. The way to win in this marketplace is to design software as insecure as you can possibly get away with.
Smart software companies know that reliable software is not cost effective. According to studies, 90 to 95 percent of all bugs are harmless; they're never found by users and they don't affect performance. It's much cheaper for a company to release buggy software and fix the 5 to 10 percent of bugs after people complain.
They also know that real security is not cost effective. They get whacked with a new security vulnerability several times a week. They fix the ones they can, write deceptive press releases about the ones they can't; then they wait for the press fervor to die down (which it always does). Then they issue a new version of their software with new features that add all sorts of new security problems, because users prefer cool features to security. And users always will. Until companies have some legal incentive to produce secure products, they won't bother. No amount of research will change that.
If there's anything that should be concluded from this article, it's that the fundamental security problems of today are not about technology, they're about using technology. Security research continues, and will always be important. But what is more important is convincing vendors and users to think about security properly, to implement security properly, and to use security properly. Security is not a product. There isn't a technology, existing today or anywhere on the horizon, that you can sprinkle over your network and magically make it secure. Security is a process. And it is the process of security that brings about security.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.