Risks of Relying on Cryptography
Inside Risks 112, Communications of the ACM, Vol. 42, No. 10, October 1999.
Cryptography is often treated as if it were magic security dust: "Sprinkle some on your system, and it's secure. Then you're secure as long as the key length is large enough: 112 bits, 128 bits, 256 bits" (I've even seen companies boast of 16,000 bits). "Sure, there are always new developments in cryptanalysis, but we've never seen an operationally useful cryptanalytic attack against a standard algorithm. Even the analyses of DES aren't any better than brute force in most operational situations. As long as you use a conservative published algorithm, you're secure."
This just isn't true. Recently we've seen attacks that go around the mathematics of cryptography and beyond traditional cryptanalysis, forcing cryptography to do something new, different, and unexpected. For example: timing attacks that recover keys by measuring how long cryptographic operations take, power-analysis attacks that read secrets out of a smart card's power consumption, and fault attacks that induce hardware errors to leak key bits.
The common thread through all of these exploits is that they've pushed the envelope of what constitutes cryptanalysis by using out-of-band information to determine the keys. Before side-channel attacks, the open crypto community did not think about using information other than the plaintext and the ciphertext to attack algorithms. After Paul Kocher's first timing-attack paper appeared in 1996, researchers began to look at invasive side channels, attacks based on introducing transient and permanent faults, and other side channels. Suddenly there was a whole new way to do cryptanalysis.
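A timing side channel can hide in something as small as an early-exit comparison. The following sketch (in Python; it is an illustration of the class of attack, not an example from the column) contrasts a naive secret comparison, whose running time reveals how many leading bytes of a guess are correct, with a constant-time comparison that closes the leak:

```python
import hmac

def leaky_equal(secret: bytes, guess: bytes) -> bool:
    # Returns as soon as a byte differs, so an attacker timing many
    # guesses can learn the secret one byte at a time.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    # hmac.compare_digest takes time independent of where the inputs
    # differ, so timing reveals nothing about the secret's contents.
    return hmac.compare_digest(secret, guess)
```

Both functions compute the same boolean answer; the "break" exploits only how long the computation takes, which is exactly the kind of out-of-band information traditional cryptanalysis ignored.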
Several years ago I was talking with an NSA employee about a particular exploit. He told me how a system was broken; it was a sneaky attack, one that I didn't think should even count. "That's cheating," I said. He looked at me as if I'd just arrived from Neptune.
"Defense against cheating" (that is, not playing by the assumed rules) is one of the basic tenets of security engineering. Conventional engineering is about making things work. It's the genesis of the term "hack," as in "he worked all night and hacked the code together." The code works; it doesn't matter what it looks like. Security engineering is different; it's about making sure things don't do something they shouldn't. It's making sure security isn't broken, even in the presence of a malicious adversary who does everything in his power to make sure that things don't work in the worst possible way at the worst possible times. A good attack is one that the engineers never even thought about.
Defending against these unknown attacks is impossible, but the risk can be mitigated with good system design. The mantra of any good security engineer is: "Security is not a product, but a process." It's more than designing strong cryptography into a system; it's designing the entire system such that all security measures, including cryptography, work together. It's designing the entire system so that when the unexpected attack comes out of nowhere, the system can be upgraded and resecured. It's never a matter of "if a security flaw is found," but "when a security flaw is found."
This isn't a temporary problem. Cryptanalysts will forever be pushing the envelope of attacks. And whenever crypto is used to protect massive financial resources (especially with world-wide master keys), these violations of designers' assumptions can be expected to be used more aggressively by malicious attackers. As our society becomes more reliant on a digital infrastructure, the process of security must be designed in from the beginning.
Bruce Schneier is CTO of Counterpane Internet Security, Inc. You can subscribe to his free email newsletter, Crypto-Gram, at http://www.counterpane.com.