Measuring Entropy and its Applications to Encryption
There have been a bunch of articles about an information theory paper with vaguely sensational headlines like “Encryption is less secure than we thought” and “Research shakes crypto foundations.” It’s actually not that bad.
Basically, the researchers argue that the traditional measurement of Shannon entropy isn’t the right model to use for cryptography, and that minimum entropy is. This difference may make some ciphertexts easier to decrypt, but not in ways that have practical implications in the general case. It’s the same thinking that leads us to guess passwords from a dictionary rather than randomly—because we know that humans both created the passwords and have to remember them.
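The distinction is easy to see numerically. Here is a minimal sketch (the skewed password distribution is made up for illustration): Shannon entropy measures the *average* surprise across all outcomes, while min-entropy is driven entirely by the single most likely outcome — exactly what a dictionary-style attacker exploits.

```python
import math

def shannon_entropy(probs):
    # H = -sum(p * log2(p)): average-case uncertainty over the distribution
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    # H_min = -log2(max(p)): worst case, set by the most probable outcome
    return -math.log2(max(probs))

# Hypothetical distribution: half of all users pick the same password,
# the other half pick uniformly from 2**20 alternatives.
probs = [0.5] + [0.5 / 2**20] * 2**20

print(shannon_entropy(probs))  # 11.0 bits on average
print(min_entropy(probs))      # but only 1.0 bit against a smart guesser
```

The Shannon figure (11 bits) suggests a moderately hard search, but the min-entropy figure (1 bit) reflects reality: an attacker who guesses the most common password first succeeds half the time on the first try.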
This isn’t news—lots of cryptography papers make use of minimum entropy instead of Shannon entropy already—and it’s hard to see what the contribution of this paper is. Note that the paper was presented at an information theory conference, and not a cryptography conference. My guess is that there wasn’t enough crypto expertise on the program committee to reject the paper.
So don’t worry; cryptographic algorithms aren’t going to come crumbling down anytime soon. Well, they might—but not because of this result.
Slashdot thread.
Clive Robinson • August 21, 2013 9:06 AM
Yes, the results do apply in some cases, such as short lengths of "human" usable plaintext. And this has been known since before WWII (and was used incorrectly to justify continuing the use of "poem code" hand ciphers long, long after their "best before date," much to the cost of SOE agents' lives).
As Bruce notes, its current main use is to attack "human memorable" passwords and passphrases.
However, some stream ciphers are known to have "slow start" issues, where initial output statistics show characteristics that can aid an attacker, especially when part of the "key" or "plaintext" is known or easily guessable to an attacker (which has happened with both network encryption and random-access storage encryption).