Is Cryptography Engineering or Science?
Responding to a tweet by Thomas Ptacek saying, "If you're not learning crypto by coding attacks, you might not actually be learning crypto," Colin Percival published a well-thought-out rebuttal, saying in part:
If we were still in the 1990s, I would agree with Thomas. 1990s cryptography was full of holes, and the best you could hope for was to know how your tools were broken so you could try to work around their deficiencies. This was a time when DES and RC4 were widely used, despite having well-known flaws. This was a time when people avoided using CTR mode to convert block ciphers into stream ciphers, due to concern that a weak block cipher could break if fed input blocks which shared many (zero) bytes in common. This was a time when people cared about the "error propagation" properties of block ciphers -- that is, how much of the output would be mangled if a small number of bits in the ciphertext are flipped. This was a time when people routinely advised compressing data before encrypting it, because that "compacted" the entropy in the message, and thus made it "more difficult for an attacker to identify when he found the right key". It should come as no surprise that SSL, designed during this era, has had a long list of design flaws.
Cryptography in the 2010s is different. Now we start with basic components which are believed to be highly secure -- e.g., block ciphers which are believed to be indistinguishable from random permutations -- and which have been mathematically proven to be secure against certain types of attacks -- e.g., AES is known to be immune to differential cryptanalysis. From those components, we then build higher-order systems using mechanisms which have been proven to not introduce vulnerabilities. For example, if you generate an ordered sequence of packets by encrypting data using an indistinguishable-from-random-permutation block cipher (e.g., AES) in CTR mode using a packet sequence number as the CTR nonce, and then append a weakly-unforgeable MAC (e.g., HMAC-SHA256) of the encrypted data and the packet sequence number, the packets both preserve privacy and do not permit any undetected tampering (including replays and reordering of packets). Life will become even better once Keccak (aka. SHA-3) becomes more widely reviewed and trusted, as its "sponge" construction can be used to construct -- with provable security -- a very wide range of important cryptographic components.
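Colin's encrypt-then-MAC packet construction can be sketched in a few lines. This is a minimal illustration, not the real protocol: Python's standard library has no AES, so HMAC-SHA256 stands in as the pseudorandom function generating the CTR-style keystream (a production system would use AES-CTR from a vetted library), and all key and function names here are made up for the example. The important structural point survives the substitution: the sequence number serves as the nonce, and the MAC covers both the ciphertext and the sequence number, so tampering, replay, and reordering are all detected.

```python
import hmac
import hashlib

TAG_LEN = 32  # HMAC-SHA256 output size in bytes

def _keystream(enc_key: bytes, seq: int, length: int) -> bytes:
    # CTR-style keystream: PRF(key, seq || counter), concatenated and
    # truncated to the plaintext length. HMAC-SHA256 stands in for AES here.
    out = b""
    counter = 0
    while len(out) < length:
        block_input = seq.to_bytes(8, "big") + counter.to_bytes(8, "big")
        out += hmac.new(enc_key, block_input, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, seq: int, plaintext: bytes) -> bytes:
    # Encrypt-then-MAC: XOR with the keystream, then authenticate the
    # ciphertext together with the sequence number.
    ks = _keystream(enc_key, seq, len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
    tag = hmac.new(mac_key, seq.to_bytes(8, "big") + ciphertext,
                   hashlib.sha256).digest()
    return ciphertext + tag

def open_packet(enc_key: bytes, mac_key: bytes, seq: int, packet: bytes) -> bytes:
    # Verify the MAC before decrypting; a wrong sequence number (replay or
    # reordering) fails authentication just like a modified ciphertext.
    ciphertext, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(mac_key, seq.to_bytes(8, "big") + ciphertext,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    ks = _keystream(enc_key, seq, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))
```

Note the order of operations in `open_packet`: the receiver checks the tag before touching the ciphertext, which is exactly the "proven not to introduce vulnerabilities" composition Colin describes.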
He recommends a more modern approach to cryptography: "studying the theory and designing systems which you can prove are secure."
I think both of these statements are true -- and not contradictory at all. The apparent disagreement stems from differing definitions of cryptography.
Many years ago, on the Cryptographer's Panel at an RSA conference, then-chief scientist for RSA Bert Kaliski talked about the rise of something he called the "crypto engineer." His point was that the practice of cryptography was changing. There was the traditional mathematical cryptography -- designing and analyzing algorithms and protocols, and building up cryptographic theory -- but there was also a more practice-oriented cryptography: taking existing cryptographic building blocks and creating secure systems out of them. It's this latter group he called crypto engineers. It's the group of people I wrote Applied Cryptography, and, most recently, co-wrote Cryptography Engineering, for. Colin knows this, directing his advice to "developers" -- Kaliski's crypto engineers.
Traditional cryptography is a science -- applied mathematics -- and applied cryptography is engineering. I prefer the term "security engineering," because it necessarily encompasses a lot more than cryptography -- see Ross Anderson's great book of that name. And mistakes in engineering are where a lot of real-world cryptographic systems break.
Provable security has its limitations. Cryptographer Lars Knudsen once said: "If it's provably secure, it probably isn't." Yes, we have provably secure cryptography, but those proofs take very specific forms against very specific attacks. They reduce the number of security assumptions we have to make about a system, but we still have to make a lot of security assumptions.
And cryptography has its limitations in general, despite the apparent strengths. Cryptography's great strength is that it gives the defender a natural advantage: adding a single bit to a cryptographic key increases the work to encrypt by only a small amount, but doubles the work required to break the encryption. This is how we design algorithms that -- in theory -- can't be broken until the universe collapses back on itself.
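The defender's advantage in the paragraph above is just exponential arithmetic, and a toy calculation makes it concrete (the function name here is invented for illustration):

```python
def brute_force_keys(key_bits: int) -> int:
    # Worst-case number of keys an attacker must try by brute force:
    # the search space doubles with every additional key bit, while the
    # defender's per-encryption cost grows only marginally.
    return 2 ** key_bits

# One extra bit: twice the attacker's work.
assert brute_force_keys(129) == 2 * brute_force_keys(128)
```

At 256 bits the search space exceeds the number of atoms in the observable universe, which is what "can't be broken until the universe collapses back on itself" cashes out to.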
Despite this, cryptographic systems are broken all the time: well before the heat death of the universe. They're broken because of software mistakes in coding the algorithms. They're broken because the computer's memory management system left a stray copy of the key lying around, and the operating system automatically copied it to disk. They're broken because of buffer overflows and other security flaws. They're broken by side-channel attacks. They're broken because of bad user interfaces, or insecure user practices.
Lots of people have said: "In theory, theory and practice are the same. But in practice, they are not." It's true about cryptography. If you want to be a cryptographer, study mathematics. Study the mathematics of cryptography, and especially cryptanalysis. There's a lot of art to the science, and you won't be able to design good algorithms and protocols until you gain experience in breaking existing ones. If you want to be a security engineer, study implementations and coding. Take the tools cryptographers create, and learn how to use them well.
The world needs security engineers even more than it needs cryptographers. We're great at mathematically secure cryptography, and terrible at using those tools to engineer secure systems.
After writing this, I found a conversation between the two where they both basically agreed with me.
Posted on July 5, 2013 at 7:04 AM