The 1998 Crypto Year-in-Review

  • Bruce Schneier
  • Information Security
  • December 19, 1998

1998 was an exciting year to be a cryptographer, considering all the developments in algorithms, attacks and politics. At first glance, the important events of the year seem completely unrelated: done by different people, at different times and for different reasons. But when we step back and reflect on the year-that-was, some common threads emerge—as do important lessons about the evolution and direction of cryptography.

New Algorithms

In June, the NSA declassified KEA and Skipjack. KEA is a public-key Key Exchange Algorithm, while Skipjack is a block cipher first used in the ill-fated Clipper Chip. The NSA wanted Fortezza in software, and the only way they could get that was to declassify both algorithms.

This marks the first time that an NSA-developed algorithm has been declassified and released into the public domain, and also the first time that the U.S. military has used a public algorithm to encrypt classified traffic. More importantly, the release of Skipjack marks a watershed event in public cryptanalysis. Like DES, a reference algorithm by which all attacks were measured, Skipjack is an example of a “reference good algorithm.” Think of it as alien technology: for the next decade researchers will pick Skipjack apart, looking for clues on how to design and analyze block ciphers.

And there are lots of block ciphers to analyze. In June, 15 candidates for the Advanced Encryption Standard (AES), the replacement for DES, were submitted to NIST. NIST’s goal is to replace DES with another block cipher, one with a 128-bit block size and a key size of 128, 192 or 256 bits.

The process is a long one. Currently, all 15 algorithms are being reviewed by the crypto community. NIST will host a public workshop in Rome this March, with the public comment period ending this June. After that, NIST will pick about five candidates for a second round. Another workshop and public comment period will follow, after which NIST will pick a single winner sometime in 2000. Following that, they will take the algorithm through the FIPS process, and it will hopefully become an ANSI, ISO and IETF standard as well.

The AES review process is important for several reasons. First, DES is just too weak for modern use (see below). Second, since there’s no emergency, NIST can take its time and do this correctly. And third, if everyone plays fair, there will be an encryption standard that is endorsed by the cryptographic community, one not subject to NSA tampering.

New Attacks

1998 also saw several important developments in the flip side of cryptography: cryptanalysis. Quite simply, a lot of things were broken last year.

In July, the Electronic Frontier Foundation (EFF) built Deep Crack, a hardware DES cracker that can break DES in an average of four-and-a-half days. The $220,000 machine was not built to steal secrets or embezzle money, but to inarguably demonstrate that DES’s 56-bit key is too short for any real security.
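
The arithmetic behind that four-and-a-half-day figure is worth doing once. The Python sketch below assumes an aggregate search rate of roughly 90 billion keys per second; that rate is my assumption, chosen only to be consistent with the average reported above, not a published specification of the machine.

```python
# Back-of-the-envelope brute-force arithmetic for a 56-bit key.
# An exhaustive search finds the key after trying half the keyspace
# on average.

KEYSPACE = 2 ** 56        # total number of possible DES keys
KEYS_PER_SECOND = 9e10    # assumed aggregate rate, not a Deep Crack spec

expected_tries = KEYSPACE / 2          # half the keyspace, on average
seconds = expected_tries / KEYS_PER_SECOND
days = seconds / 86_400

print(f"Expected search time: {days:.1f} days")  # prints about 4.6 days
```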

The news here is not that DES is insecure, that hardware algorithm-crackers can be built or that a 56-bit key length is too short. We already knew all this; cryptographers have been saying it for years (I said it in my book, Applied Cryptography, back in 1994). Technological predictions about the declining costs of such a machine—predictions made from the late 1970s onward—turned out to be dead-on accurate.

Rather, the news is how long the government has been denying that these machines were possible. As recently as June 8, 1998, Robert Litt, principal associate deputy attorney general at the U.S. Department of Justice, denied that it was possible for the FBI to crack DES. “[It is a myth that] we have supercomputers that can crack anything that is out there,” Litt said at an EPIC conference. “Let me put the technical problem in context: It took 14,000 Pentium computers working for four months to decrypt a single message. . . . We are not just talking FBI and NSA [needing massive computing power], we are talking about every police department.” Litt looked foolish at the time, and he looks even more foolish now.

Not So Smart Cards

Cryptanalysts gave smart cards a good whack when researchers invented power analysis. In June, Paul Kocher and others demonstrated that secrets could be extracted from a smart card by watching the card’s power usage. Researchers have demonstrated this attack in several laboratories, extracting secret keys, bank balances and everything else from supposedly secure smart cards.

The important concept here is that there’s another way of looking at a cryptographic algorithm. We’re used to treating algorithms as mathematics, but they can also be looked at as systems. It’s a biological approach: What are its inputs and outputs? How does it move? How does it respond to different stimuli? By looking at a smart card as a concrete device with timing, power, radiation and other characteristics, it’s possible to attack many systems that were previously believed to be secure. By combining these techniques with fault analysis—another biological attack that measures how a smart card responds to randomly induced faults—the result is even more devastating.
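
It’s easy to see how such a leak works in principle. The Python toy below illustrates the concept only, not Kocher’s actual technique or measurements: it pretends the card performs a square-and-multiply exponentiation in which the conditional multiply draws measurably more power, then reads the key bits straight off the trace.

```python
import random

# Toy model of simple power analysis. Assumption: each key bit costs one
# unit of "power" for the square, plus an extra unit when the conditional
# multiply fires (i.e., when the bit is 1), plus a little noise.

def power_trace(key_bits, noise=0.05):
    trace = []
    for bit in key_bits:
        draw = 1.0 + bit              # multiply happens only on a 1 bit
        trace.append(draw + random.uniform(-noise, noise))
    return trace

def recover_key(trace):
    # The attacker never sees the key: any sample above the midpoint
    # betrays a multiply, and hence a 1 bit.
    return [1 if sample > 1.5 else 0 for sample in trace]

secret = [random.randint(0, 1) for _ in range(16)]
trace = power_trace(secret)
print(recover_key(trace) == secret)   # True: the key leaks from the trace
```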

What these attacks are telling us is not that we have to spend more effort making smart cards resistant—that’s probably not possible—but that we need to rethink how data is stored on smart cards. A system in which a device is owned by one party and the secrets within the device are owned by another is, fundamentally, a badly designed system.

No Substitutes

Some pretty lousy cryptography was exposed in 1998. In April, University of California-Berkeley researchers found flaws in the GSM digital cellular encryption algorithm, used in about 90 million cell phones worldwide. And in an unrelated incident, in June researchers found flaws in Microsoft’s PPTP protocol, used as a virtual private network (VPN) security protocol by many companies.

The moral of these two attacks is not that it’s hard to do cryptography right and that it’s easy to make mistakes: we already knew that. The moral is that there is no substitute for the public review process when it comes to security. Both the GSM cellular system and Microsoft’s PPTP system were designed by a closed group, and remained proprietary. No one person or group can be expert in all things, and as a result both systems had major flaws. But because they were proprietary, the flaws were only discovered after the systems had been fielded.

Contrast this approach with that for IPSec, a protocol for secure Internet traffic. This protocol was developed in a public working group, and every step of the process has been available for public review. As with GSM and PPTP, the group designing IPSec was not expert in all things, and, to be sure, there were flaws. But these flaws were discovered by others involved in the process while the process was going on. The system has been broken, fixed, broken again, fixed again and so on. In the end we have a very robust system, the result of many people examining and commenting on drafts. This kind of expertise simply cannot be purchased by a single organization, and it’s foolish to believe otherwise.

High-Profile Cracks

In July, a Bell Labs researcher broke the RSA implementation in PKCS #1. PKCS #1 is a padding scheme used in many products (SSL is probably the most widely known), and the attack worked in an operational setting against these products. Vendors scrambled to fix the problem (the fix was easy, once you knew the problem), but the attack showed that even if the underlying algorithm is secure (RSA, in this example), the implementation may not be.
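
The shape of that implementation flaw is easy to sketch. Under PKCS #1 (v1.5), a message is padded before RSA encryption as a 0x00 0x02 header, at least eight nonzero random bytes, a 0x00 separator and then the message itself. The unpadding check below is an illustrative Python sketch, not any vendor’s code; the danger is a server that reports “bad padding” distinguishably from other failures.

```python
# Illustrative PKCS #1 v1.5 unpadding check. The byte layout is the
# standard format; the function itself is a sketch, not vendor code.

def pkcs1_v15_unpad(block: bytes) -> bytes:
    if len(block) < 11 or block[0] != 0x00 or block[1] != 0x02:
        raise ValueError("bad padding")      # this distinct error is the leak
    sep = block.find(b"\x00", 2)             # bytes before sep are nonzero
    if sep == -1 or sep < 10:                # need at least 8 padding bytes
        raise ValueError("bad padding")
    return block[sep + 1:]
```

Given such a yes/no oracle, the Bell Labs attack recovers the plaintext of a target ciphertext from roughly a million chosen queries, without ever factoring the RSA modulus.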

In August, two French cryptographers described an attack against SHA-0. For those who don’t remember, SHA is a NIST-standard hash function. It was invented by the NSA in 1993, and is largely inspired by MD4. In 1995, the NSA modified the standard (the new version is called SHA-1; the old version is now called SHA-0). The agency claimed the modification was designed to correct a weakness, although no justification was given. Well, we now understand the attack against SHA-0 and how the modification prevents it.
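
That modification, we now know, is a single one-bit rotation in the message expansion. Both functions expand the 16 words of a message block into 80; SHA-1 rotates each newly derived word left by one bit, while SHA-0 does not. A minimal Python sketch of just the expansion step (not the full hash) shows how small the change is:

```python
# The message expansion is the only place SHA-0 and SHA-1 differ.

def rotl32(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def expand(words, sha1=True):
    w = list(words)   # the 16 32-bit words of one 512-bit message block
    for t in range(16, 80):
        x = w[t - 3] ^ w[t - 8] ^ w[t - 14] ^ w[t - 16]
        w.append(rotl32(x, 1) if sha1 else x)  # SHA-0 omits the rotation
    return w
```

Without the rotation, a difference injected into the message words stays confined to the same bit positions throughout the expansion, which is exactly the structure the attack exploits.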

In August, a group of Israeli cryptographers presented “impossible differential cryptanalysis.” This is an esoteric mathematical cryptanalytic attack, applicable against several academic ciphers and one high-profile fielded one. Surprisingly enough, impossible differentials work against a Skipjack variant with 31 rounds (the real cipher has 32 rounds).

The implications of this attack are major, and there are two possible explanations: (1) The NSA didn’t know about this attack, in which case academic cryptographers have scored a major win over our military counterparts; or (2) the NSA did know about this attack, in which case they have some kind of mathematical model that permits them to field algorithms that are just marginally above the break point. Either explanation is fascinating, and points to some interesting research to come.

Old News

Some of the news in 1998 wasn’t really news at all, since we all knew what was coming.

In fall 1997, the first public-key patents expired. These patents had prevented people from implementing all public-key cryptographic algorithms (not just RSA) without paying royalties. So, in 1998 free public-key algorithms were used in standards for the first time; now, people can implement Diffie-Hellman key exchange, ElGamal encryption and ElGamal signatures without paying royalties to anyone.
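
For readers who have never seen it, Diffie-Hellman fits in a few lines of Python. The sketch below uses toy parameters for illustration only; a real implementation would take a large prime from a standard, and the variable names are mine.

```python
import secrets

# Toy Diffie-Hellman key exchange. The prime below is far too small to
# offer any security; it is purely illustrative.

p, g = 0xFFFFFFFB, 5                   # public parameters (toy values)

a = secrets.randbelow(p - 2) + 1       # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1       # Bob's secret exponent

A = pow(g, a, p)                       # Alice sends g^a mod p
B = pow(g, b, p)                       # Bob sends g^b mod p

# Each side combines its own secret with the other's public value,
# arriving at the same shared secret g^(ab) mod p.
assert pow(B, a, p) == pow(A, b, p)
```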

Finally, serious doubts were again raised about the feasibility of key escrow, an issue I revisited in an October Word in Edgewise article in this magazine. Again, this isn’t “new” news. Researchers have long argued that the kind of key escrow the FBI wants causes more problems than it solves. In June, a distinguished group of cryptographers released a report explaining just how insecure such a system would be. The report looked at several new vulnerabilities that key escrow can introduce, including new ways of breaking messages, loss of security control, abuse by insiders, single points of attack and failure, loss of secrecy assumptions and complicated system design. The amazing thing about this analysis is not that it echoes the NSA’s own internal analysis (which it does), but that the more we learn about how to design and attack systems, the harder this problem is to solve.

What About ’99?

Looking back, most of the highlights from 1998 were completely unpredictable in December 1997. Similarly, we have no idea what cryptographic successes 1999 will bring. Some predictions are obvious: NIST will choose finalists for the AES selection process; the RSA patent won’t expire (that will happen in September 2000); and key escrow won’t get any more secure. For sure, there will be some interesting cryptanalytic results against some interesting algorithms. And some products we all use will be found to be weak, and hopefully they will be corrected.

But beyond these general observations, no one knows. Cryptography is a unique science because research can go backwards in time. A new compression algorithm might be better, but it won’t make the old algorithms compress any less efficiently. A new cryptographic technique can make already fielded algorithms, protocols and products less secure. There are a lot of very clever people working in cryptography, and it is unlikely they will have a dry year. Stay tuned for more information—as people invent it.
