Why the Worst Cryptography is in the Systems that Pass Initial Analysis

  • Bruce Schneier
  • Information Security
  • March 1999

Imagine this situation: An engineer builds a bridge. It stands for a day, and then collapses. He builds another. It stands for three days, and then collapses. Then, he builds a third, which stands for two weeks but collapses during the first rainstorm. So he builds a fourth. It’s been standing for a month, and has survived two rainstorms. Do you believe this fourth bridge is strong, secure and safe? Or is it more likely just another accident waiting to happen?

As bizarre as it may seem, this kind of design process happens all the time in cryptography, a field that is full of people who love to design their own algorithms and protocols. With so many aspiring cryptographers out there, however, there are bound to be a lot of weak designs. The problem is this: Anyone, no matter how unskilled, can design an algorithm that he himself cannot break. Though a competent cryptanalyst can break most of this stuff after a short review, the rest of it survives, and in most cases is never looked at again (especially outside the military world). But just because an algorithm survives an initial review is no reason to trust it.

I had a client once who desperately wanted to design his own encryption algorithm. He had no cryptographic training, no experience analyzing other algorithms. He was a designer, he said, not an analyst. So Counterpane did his analysis for him, and we broke his algorithm in a day. He fixed it and sent it back, and we broke it in two days. He fixed it and sent it back again, and we broke it again. Finally, the fourth version of his algorithm resisted our attempts at cryptanalysis…at least for the full 40 hours our contract specified. The client was happy; finally, he had a secure algorithm.

In a way, the client is worse off than he was before he started. At first, he had an algorithm that was obviously flawed. If he included it in a product, he would have no analysis to show potential buyers and no responses to questions about its security. If a competent cryptographer looked at the algorithm—either because it was made public or by reverse-engineering the code—he could easily break it.

But after we went through the break-fix-break cycle a few times, he ended up with an algorithm that was not obviously flawed. He had a written analysis showing that we could not break it within 40 hours. Even if a competent cryptographer looked at the algorithm for a few days, he probably would not find a problem. Unless the algorithm was being used in some high-profile application—cellular telephony, a Microsoft product, an Internet standard—it might never be looked at any more closely. But that doesn’t mean that it’s not still flawed, or that it can’t be broken given enough time and resources.

This is not to say that the break-fix-break cycle is completely flawed. It’s not. In fact, it’s how most good cryptographic systems got to be good. Consider IPSec, the Internet IP security protocol. It was designed by committee, out in the open and in public, and from the start has been the subject of considerable public scrutiny. Everyone knew it was an important protocol, and people spent a lot of effort trying to get it right. Things were proposed, broken and then modified. Versions were codified and analyzed. Debates raged over its security merits, performance, ease-of-implementation, upgradability and use. Then, in November 1998, a pile of RFCs was published, the next in a series of steps to make IPSec an Internet standard. It is impossible to mimic this kind of analysis with a proprietary system. Still, many companies try, which raises the question: Why try to develop new algorithms and protocols at all? They’re generally not faster, or smaller, or more efficient. They’re just different.

Unfortunately, in the world of cryptography, different is bad. Cryptography is at its best when it is conservative, and the conservative approach is to choose an algorithm that has already been analyzed. The admonition not to put all your eggs in one basket does not apply in this case. The security of a system is the security of its weakest component, since the weakest component breaks the entire system. In cryptography, there is security in following the crowd. A home-grown algorithm can’t possibly be subjected to the hundreds of thousands of hours of analysis that DES and triple-DES have been subjected to. A company just can’t mobilize the resources that are being brought to bear against the AES candidates, or the IPSec Internet security protocol. No one can duplicate the confidence that RSA offers after 20 years of cryptanalytic review. A standard security review, even by competent cryptographers, can only prove insecurity; it can never prove security. By following the pack you can leverage the cryptanalytic expertise of the worldwide community, not just a handful of hours of a consultant’s time.
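The conservative, follow-the-crowd approach described above can be sketched in code. Rather than inventing a home-grown keyed checksum, a developer can reach for HMAC with SHA-256 from a language's standard library (Python's `hmac` and `hashlib` modules here), primitives that have received exactly the kind of worldwide scrutiny the essay describes. This sketch is illustrative and not from the original text; the function names are my own.

```python
import hmac
import hashlib

def authenticate(key: bytes, message: bytes) -> bytes:
    """Tag a message with HMAC-SHA256: a standardized, heavily
    analyzed construction rather than a home-grown one."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Check a tag using a constant-time comparison, which avoids
    leaking information through a timing side channel."""
    return hmac.compare_digest(authenticate(key, message), tag)

key = b"example-key"          # illustrative key; real keys should be random
tag = authenticate(key, b"hello")
print(verify(key, b"hello", tag))   # True
print(verify(key, b"hellp", tag))   # False: tampered message is rejected
```

The design choice mirrors the essay's point: every call here is to a construction with decades of public cryptanalysis behind it, so the developer inherits that analysis for free.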

