Cryptography: The Importance of Not Being Different
By Bruce Schneier
Suppose your doctor said, "I realize we have antibiotics that are good at treating your kind of infection without harmful side effects, and that there are decades of research to support this treatment. But I'm going to give you tortilla-chip powder instead, because, uh, it might work." You'd get a new doctor.
Practicing medicine is difficult. The profession doesn't rush to embrace new drugs; it takes years of testing before benefits can be proven, dosages established, and side effects cataloged. A good doctor won't treat a bacterial infection with a medicine he just invented when proven antibiotics are available. And a smart patient wants the same drug that cured the last person, not something different.
Cryptography is difficult, too. It combines mathematics, computer science, sometimes electrical engineering, and a twisted mindset that can figure out how to get around rules, break systems, and subvert the designers' intentions. Even very smart, knowledgeable, experienced people invent bad cryptography. In the crypto community, people aren't even all that embarrassed when their algorithms and protocols are broken. That's how hard it is.
Reusing Secure Components
Building cryptography into products is hard, too. Most cryptography products on the market are insecure. Some don't work as advertised. Some are obviously flawed. Others are more subtly flawed. Sometimes people discover the flaws quickly, while other times it takes years (usually because no one bothered to look for them). Sometimes a decade goes by before someone invents new mathematics to break something.
This difficulty is made even more serious for several reasons. First, flaws can appear anywhere. They can be in the trust model, the system design, the algorithms and protocols, the implementations, the source code, the human-computer interface, the procedures, the underlying computer system. Anywhere.
Second, these flaws cannot be found through normal beta testing. Security has nothing to do with functionality. A cryptography product can function normally and be completely insecure. Flaws remain undiscovered until someone looks for them explicitly.
Third, and most importantly, a single flaw breaks the security of the entire system. If you think of cryptography as a chain, the system is only as secure as its weakest link. This means that everything has to be secure. It's not enough to make the algorithms and protocols perfect if the implementation has problems. And a great product with a broken algorithm is useless. And a great algorithm, protocol, and implementation can be ruined by a flawed random number generator. And if there is a security flaw in the code, the rest of it doesn't matter.
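The flawed-random-number-generator case is easy to demonstrate concretely. The sketch below (ours, not from the article) generates a 128-bit key with a non-cryptographic PRNG seeded from a small, guessable space; every other part of the imagined system could be perfect, and an attacker still recovers the key by brute-forcing the seed. The function names are illustrative.

```python
import random

# Hypothetical flawed key generation: a non-cryptographic PRNG
# seeded from a tiny, guessable space. The cipher around this key
# may be flawless; this one weak link still breaks the system.
def weak_keygen(seed):
    return random.Random(seed).getrandbits(128)

secret_key = weak_keygen(4242)  # the "secret" 128-bit key

# An attacker who knows the generator just tries every seed. In
# practice each candidate would be checked by trial decryption;
# comparing against the key directly keeps the sketch short.
def recover_seed(max_seed=100_000):
    for guess in range(max_seed):
        if weak_keygen(guess) == secret_key:
            return guess
    return None
```

Here the entire 2^128 nominal keyspace collapses to at most 100,000 candidates, a search that finishes in well under a second.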
Given this harsh reality, the most rational design decision is to use as few links as possible, and as high a percentage of strong links as possible. Since it is impractical for a system designer (or even a design team) to analyze a completely new system, a smart designer reuses components that are generally believed to be secure, and only invents new cryptography where absolutely necessary.
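As a minimal sketch of that principle, here is message authentication built entirely from long-studied, standard components (this example is ours, not from the article): the operating system's CSPRNG via Python's `secrets` module, SHA-256, and the HMAC construction. Nothing cryptographic is invented; only the trivial glue is new.

```python
import hashlib
import hmac
import secrets

# Reuse vetted components: OS CSPRNG for the key, HMAC-SHA256 for
# the tag. The function names are ours; the primitives are standard.
key = secrets.token_bytes(32)

def sign(message: bytes) -> bytes:
    # HMAC-SHA256 from the standard library, not a homegrown MAC.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids a timing side channel --
    # exactly the kind of subtle implementation flaw that slips
    # past normal testing.
    return hmac.compare_digest(sign(message), tag)
```

Even in this tiny example, rolling your own invites the mistakes the text describes: a naive `==` comparison of tags, for instance, would reintroduce a timing leak.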
Trusting the Known
Consider IPSec, the Internet IP security protocol. Beginning in 1992, it was designed in the open by committee and was the subject of considerable public scrutiny from the start. Everyone knew it was an important protocol and people spent a lot of effort trying to get it right. Security technologies were proposed, broken, and then modified. Versions were codified and analyzed. The first draft of the standard was published in 1995. Aspects were debated on security merits and on performance, ease of implementation, upgradability, and use.
In November 1998, the committee published a pile of RFCs -- one in a series of steps to make IPSec an Internet standard. And it is still being studied. Cryptographers at the Naval Research Laboratory recently discovered a minor implementation flaw. The work continues, in public, by anyone and everyone who is interested.
On the other hand, Microsoft developed its own Point-to-Point Tunneling Protocol (PPTP) to do much the same thing. They invented their own authentication protocol, their own hash functions, and their own key-generation algorithm. Every one of these items was badly flawed. They used a known encryption algorithm, but they used it in such a way as to negate its security. They made implementation mistakes that weakened the system even further. But since they did all this work internally, no one knew that their PPTP was weak.
Microsoft fielded PPTP in Windows NT and 95, and used it in their virtual private network (VPN) products. It wasn't until the summer of 1998 that Counterpane Systems published a paper describing the flaws we found. Microsoft quickly posted a series of fixes, which we have since evaluated and found wanting. They don't fix things nearly as well as Microsoft would like people to believe.

And then there is a company like TriStrata, which claims to have a proprietary security solution without telling anyone how it works (because it's patent pending). You have to trust them. They claim to have a new algorithm and a new set of protocols that are much better than any that exist today. And even if they make their system public, the fact that they've patented it and retain proprietary control means that many cryptographers won't bother analyzing their claims.
Leveraging the Collective Strength
You can choose any of these three systems to secure your virtual private network. Although it's possible for any of them to be flawed, you want to minimize your risk. If you go with IPSec, you have a much greater assurance that the algorithms and protocols are strong. Of course, the product could still be flawed -- there could be an implementation bug or a bug in any of the odd little corners of the code not covered in the IPSec standards -- but at least you know that the algorithms and protocols have withstood a level of analysis and review that the Microsoft and TriStrata options have not.
Choosing the TriStrata system is like going to a doctor who has no medical degree and whose novel treatments (which he refuses to explain) have no support by the AMA. Sure, it's possible (although highly unlikely) that he's discovered a totally new branch of medicine, but do you want to be the guinea pig?
The point here is that the best security methods leverage the collective analytical ability of the cryptographic community. No single company (outside the military) has the financial resources necessary to evaluate a new cryptographic algorithm or shake the design flaws out of a complex protocol. The same holds true in cryptographic libraries. If you write your own, you will probably make mistakes. If you use one that's public and has been around for a while, some of the mistakes will have been found and corrected.
It's hard enough making strong cryptography work in a new system; it's just plain lunacy to use new cryptography when viable, long-studied alternatives exist. Yet most security companies, and even otherwise smart and sensible people, exhibit acute neophilia and are easily blinded by shiny new pieces of cryptography.
Following the Crowd
At Counterpane Systems, we analyze dozens of products a year. We review all sorts of cryptography, from new algorithms to new implementations. We break the vast majority of proprietary systems and, without exception, the best products are the ones that use existing cryptography as much as possible.
Not only are the conservative choices generally smarter, but they mean we can actually analyze the system. We can review a simple cryptography product in a couple of days if it reuses existing algorithms and protocols, in a week or two if it uses newish protocols and existing algorithms. If it uses new algorithms, a week is barely enough time to get started.
This doesn't mean that everything new is lousy. What it does mean is that everything new is suspect. New cryptography belongs in academic papers, and then in demonstration systems. If it is truly better, then eventually cryptographers will come to trust it. And only then does it make sense to use it in real products. This process can take five to ten years for an algorithm, less for protocols or source-code libraries. Look at the length of time it is taking elliptic curve systems to be accepted, and even now they are only accepted when more trusted alternatives can't meet performance requirements.
In cryptography, there is security in following the crowd. A homegrown algorithm can't possibly be subjected to the hundreds of thousands of hours of cryptanalysis that DES and RSA have seen. A company, or even an industry association, can't begin to mobilize the resources that have been brought to bear against the Kerberos authentication protocol, for example. No one can duplicate the confidence that PGP offers, after years of people going over the code, line by line, looking for implementation flaws. By following the crowd, you can leverage the cryptanalytic expertise of the worldwide community, not just a few weeks of some analyst's time.
And beware the doctor who says, "I invented and patented this totally new treatment that consists of tortilla-chip powder. It has never been tried before, but I just know it is much better and I'm going to give it to you." There's a good reason we call new cryptography "snake oil."
Thanks to Matt Blaze for the analogy that opened this column.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.