Risks of Relying on Cryptography

  • Bruce Schneier
  • Communications of the ACM
  • October 1999

Cryptography is often treated as if it were magic security dust: “sprinkle some on your system, and it’s secure. Then you’re secure as long as the key length is large enough—112 bits, 128 bits, 256 bits.” (I’ve even seen companies boast of 16,000 bits.) “Sure, there are always new developments in cryptanalysis, but we’ve never seen an operationally useful cryptanalytic attack against a standard algorithm. Even the analyses of DES aren’t any better than brute force in most operational situations. As long as you use a conservative published algorithm, you’re secure.”

This just isn’t true. Recently we’ve seen attacks that go around the mathematics of cryptography, pushing beyond traditional cryptanalysis and forcing cryptography to do something new, different, and unexpected. For example:

  • Using information about timing, power consumption, and radiation of a device when it executes a cryptographic algorithm, cryptanalysts have been able to break smart cards and other would-be secure tokens. These are called “side-channel attacks.”
  • By inducing faults during operation, cryptanalysts have been able to break even more smart cards. This is called “fault analysis.” Similarly, cryptanalysts have been able to break other algorithms based on how systems respond to legitimate errors.
  • One researcher was able to break RSA-encrypted messages formatted using the PKCS #1 standard. He did not break RSA, but rather the way it was used. Just think of the beauty: we don’t know how to factor large numbers effectively, and we don’t know how to break RSA. But if you use RSA in a certain common way, then in some implementations it is possible to break the security of RSA … without breaking RSA.
  • Cryptanalysts have analyzed many systems by breaking the pseudorandom number generators used to supply cryptographic keys. The cryptographic algorithms might be secure, but the key-generation procedures were not. Again, think of the beauty: the algorithm is secure, but the method to produce keys for the algorithm has a weakness, which means that there aren’t as many possible keys as there should be.
  • Researchers have broken cryptographic systems by looking at the way different keys are related to each other. Each key might be secure, but the combination of several related keys can be enough to cryptanalyze the system.
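The first bullet above, on timing side channels, comes down to a simple observation: if a comparison routine returns as soon as it finds a mismatch, its running time reveals how much of an attacker’s guess is correct. The sketch below is an illustration of that principle in Python, not the smart-card attacks themselves; the function names are ours.

```python
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte, so the running time
    # grows with the length of the matching prefix -- exactly the
    # kind of out-of-band signal a timing attack measures.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where
    # the first mismatch occurs, closing this timing channel.
    return hmac.compare_digest(a, b)
```

Both functions compute the same answer; only the second avoids leaking information through how long it takes to compute it.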
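The key-generation weakness described above can also be made concrete. If keys are derived from a predictable seed, the effective keyspace is the number of plausible seeds, not the nominal key length. The following is a minimal illustrative sketch (the seed values and function names are invented for the example), contrasting a guessable generator with the operating system’s CSPRNG.

```python
import random
import secrets

def weak_key(seed):
    # A 128-bit key derived from a small, guessable seed (say, a
    # timestamp): the attacker's search space is the seed window,
    # not 2**128.
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(16))

def recover_weak_key(target, seed_range):
    # Replay the generator for every candidate seed until the
    # output matches the observed key.
    for s in seed_range:
        if weak_key(s) == target:
            return s
    return None

# A properly generated key comes from the OS's cryptographic
# random source instead:
strong_key = secrets.token_bytes(16)
```

With, say, a thousand candidate seeds, `recover_weak_key` finds the key in a thousand trials: the algorithm protecting the data may be unbroken, yet the key falls anyway.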

The common thread through all of these exploits is that they’ve all pushed the envelope of what constitutes cryptanalysis by using out-of-band information to determine the keys. Before side-channel attacks, the open crypto community did not think about using information other than the plaintext and the ciphertext to attack algorithms. After the first paper on timing attacks, researchers began to look at invasive side channels, attacks based on introducing transient and permanent faults, and other side channels. Suddenly there was a whole new way to do cryptanalysis.

Several years ago I was talking with an NSA employee about a particular exploit. He told me how a system was broken; it was a sneaky attack, one that I didn’t think should even count. “That’s cheating,” I said. He looked at me as if I’d just arrived from Neptune.

“Defense against cheating” (that is, not playing by the assumed rules) is one of the basic tenets of security engineering. Conventional engineering is about making things work. It’s the genesis of the term “hack,” as in “he worked all night and hacked the code together.” The code works; it doesn’t matter what it looks like. Security engineering is different; it’s about making sure things don’t do something they shouldn’t. It’s making sure security isn’t broken, even in the presence of a malicious adversary who does everything in his power to make sure that things don’t work in the worst possible way at the worst possible times. A good attack is one that the engineers never even thought about.

Defending against these unknown attacks is impossible, but the risk can be mitigated with good system design. The mantra of any good security engineer is: “Security is not a product, but a process.” It’s more than designing strong cryptography into a system; it’s designing the entire system such that all security measures, including cryptography, work together. It’s designing the entire system so that when the unexpected attack comes from nowhere, the system can be upgraded and resecured. It’s never a matter of “if a security flaw is found,” but “when a security flaw is found.”

This isn’t a temporary problem. Cryptanalysts will forever be pushing the envelope of attacks. And wherever cryptography is used to protect massive financial resources (especially systems with worldwide master keys), malicious attackers can be expected to exploit these violations of designers’ assumptions ever more aggressively. As our society becomes more reliant on a digital infrastructure, the process of security must be designed in from the beginning.

Bruce Schneier is CTO of Counterpane Internet Security, Inc. You can subscribe to his free email newsletter, Crypto-Gram, at http://www.counterpane.com.

