Crypto-Gram

July 15, 1998

by Bruce Schneier
President
Counterpane Systems

schneier@schneier.com
http://www.counterpane.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on cryptography and computer security.

Copyright (c) 1998 by Bruce Schneier


In this issue:
      Breaking RSA in PKCS1
      Declassifying Skipjack
      Counterpane Systems—Featured Research
      News
      WIPO


Breaking RSA in PKCS1

Reports of RSA’s death have been greatly exaggerated. There is a new attack on RSA implementations that can, in some circumstances, be pretty devastating. Fortunately, the attack does not apply to RSA in general. Unfortunately, the “circumstances” aren’t all that uncommon.

The attack is simple to explain. I am the attacker, and I want to know the plaintext for a particular ciphertext encrypted with RSA. (Generally, this is a session key used for something else.) I send my victim a bunch of related messages (about one million of them) and watch his reaction. By learning which of those messages conform to particular data formats (PKCS1 in the paper), I can do some straightforward mathematical analysis and break the message I started with.
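
To make the mechanics concrete, here is a minimal sketch of the querying step in Python. The oracle() and rsa_decrypt() names are hypothetical stand-ins for the victim's observable reaction and private-key operation; the interval-narrowing bookkeeping that actually recovers the plaintext is omitted.

def blinded_query(c, s, e, n, oracle):
    """Because RSA is multiplicative, (c * s^e)^d = m*s (mod n), so the
    victim unknowingly reports whether m*s mod n decrypts to a well-formed
    PKCS#1 block.  Each 'yes' answer narrows the range that must contain m."""
    c_prime = (c * pow(s, e, n)) % n
    return oracle(c_prime)

def victim_reaction(c_prime, rsa_decrypt):
    """The reaction the attacker watches for: does the decrypted block
    start with the PKCS#1 header bytes 0x00 0x02?  (rsa_decrypt is a
    hypothetical private-key operation returning bytes.)"""
    block = rsa_decrypt(c_prime)
    return len(block) > 2 and block[0] == 0x00 and block[1] == 0x02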

Point 1: the attacker does not recover the secret key, only the plaintext to a particular message. That means that after I send the victim one million messages and watch the reactions to each, I only get to read one secret message. If I want to read another secret message, it takes another million related messages.

Point 2: the attacker is relying on some information from the victim. In this case, he needs to know if the related messages he sends decrypt in a certain way. I like to call this general class of attack a “reaction attack,” since it uses the victim’s reaction as input. This is an old and powerful idea, but unfortunately in the age of computers it is easy to implement. Computer systems are good at automatically reacting to things, and then broadcasting those reactions to the world. Error messages, status messages, health information: it’s all there if an attacker wants it.

Point 3: the attacker has to send the victim a whole lot of related messages to break one message. The general attack requires about one million messages. This number can be reduced somewhat—the experiments against SSL required anywhere from 300,000 to 2 million related messages—but that’s still a lot of messages. Still, computers are good at dealing with a lot of messages, and automated systems are likely to process that kind of message volume without even noticing. Smart cards that the attacker can put in his own test setup are also vulnerable.

Daniel Bleichenbacher’s attack (he’s at Bell Labs) will be presented in August at the Crypto ’98 conference. His paper has been circulating around the community for a while, and RSADSI and the various SSL vendors spent some time getting their ducks in a row before making a press announcement.

There were several fixes announced. (Obvious fix: don’t tell the attacker if the message was valid or not.) The quick ones increase the number of related messages required to break one message. These fixes make it much harder to mount this attack against on-line systems—the message volume will clog the system—and moderately harder against off-line systems like smart cards. Better fixes are to change the PKCS1 protocol, which specifies how the bits of plaintext are packed into a data structure that RSA can then encrypt. The RSA message packaging scheme in SET, for example, is not vulnerable to this attack.
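
One way to implement the obvious fix is to make a bad message behave exactly like a good one: on a padding failure, substitute random bytes for the session key and let the protocol fail later, with no distinguishing error for the attacker to watch. A minimal sketch, assuming a hypothetical rsa_decrypt() that returns the raw decrypted block as bytes:

import os

def decrypt_session_key(ciphertext, rsa_decrypt, key_len=48):
    """On any formatting failure, return random bytes instead of an error,
    so conforming and non-conforming messages look the same from outside."""
    plaintext = rsa_decrypt(ciphertext)
    if len(plaintext) < 11 or plaintext[0] != 0x00 or plaintext[1] != 0x02:
        return os.urandom(key_len)
    try:
        sep = plaintext.index(0x00, 2)   # end of the non-zero padding
    except ValueError:
        return os.urandom(key_len)
    key = plaintext[sep + 1:]
    if len(key) != key_len:
        return os.urandom(key_len)
    return key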

The attack has ramifications outside PKCS1. Many protocols will have to be corrected and many systems will have to be changed. Many people will have no idea that this attack exists and will design insecure implementations of RSA.

Many years ago there was a string of theoretical cryptographic results that proved that every bit of RSA is as secure as the whole message. All of us cryptographers read the papers and decided that the results weren’t terribly useful: if the entire RSA-encrypted message is secure, then each individual bit is secure. This piece of work turns that result on its head: if you can break a single bit of an RSA message, then you can break the whole message.

Nice attack.

http://www.wired.com/news/technology/…
RSADSI Technical Note: http://www.rsasecurity.com/rsalabs/pkcs1/bulletin7.html


Declassifying Skipjack

The NSA has declassified Fortezza. Specifically, they declassified Skipjack (a symmetric block cipher) and KEA (a public-key key-exchange algorithm). SHA-1 (a hash function) and DSA (a digital signature algorithm) are also part of Fortezza, but they were already public.

They didn’t do this to help industry, or to help cryptographers, or to help anyone. They did it to help themselves; they did it because they had to cover for a mistake.

DMS (Defense Messaging System) is a classified system for computer messaging: e-mail, more or less. DMS uses Fortezza PCMCIA cards for security. Since some of the Fortezza algorithms were classified, Fortezza cards have all the physical controls and tamper-resistance features needed to protect classified algorithms.

Within the DMS protocols there is no way to have multiple cipher suites. S/MIME, for example, defines multiple algorithm suites. There is a flag in the S/MIME message that tells the recipient which algorithms were used to encrypt that particular message. DMS has no such feature; only the Fortezza algorithms work in DMS.
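
For comparison, here is a toy sketch of what such a per-message algorithm flag buys you. The field names and suite labels are made up for illustration, not the real S/MIME or DMS encoding.

SUITES = {
    "fortezza": "Skipjack + KEA (requires the Fortezza card)",
    "software": "triple-DES + Diffie-Hellman (no special hardware)",
}

def open_message(msg):
    """Dispatch on the algorithm identifier carried in the message --
    the flag DMS messages do not have."""
    suite = msg.get("suite")
    if suite not in SUITES:
        raise ValueError("unknown algorithm suite: %r" % suite)
    print("decrypting with", SUITES[suite])
    # ... hand off to the appropriate decryption routine ...

open_message({"suite": "software", "body": b"..."})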

The problem arose because the NSA couldn’t install Fortezza cards and readers fast enough. I don’t know if they were too expensive (they are expensive), if they couldn’t ramp up production fast enough, or if installing the infrastructure (PCMCIA card readers, etc.) was too big a problem. I’m sure they counted on PCMCIA readers being ubiquitous in computers.

Whatever the cause, most people who needed to be on DMS did not have working hardware. If they could have set up an alternate algorithm suite within DMS, they could have released a software-only version with unclassified algorithms: triple-DES, Diffie-Hellman, etc. Each endpoint would know whether it was communicating with a Fortezza-enabled DMS system or a software-only DMS system, and the problem would go away. But DMS could not support this; it’s Skipjack/KEA or nothing. So they had to either turn off encryption or put the classified algorithms into software.

Once you do that, you might as well declassify. The government’s public rationale is simply making a virtue out of necessity.

So, what’s Skipjack?

Skipjack is a 64-bit symmetric block cipher with an 80-bit key. It was used in the Clipper program, but it has no built-in key escrow. (Key escrow was part of the key exchange mechanism, not the data encryption.) It’s a high-risk algorithm, meaning that there was a high risk of compromise. Hence, the NSA is unlikely to have put its most secret (or clever) design elements in the algorithm.

Its performance is good. Slower than Blowfish and some of the AES submissions, it’s still about twice as fast as DES on 32-bit microprocessors. It’s fast on smart cards, and efficient in hardware. It also has no key setup time. If it weren’t for the small 80-bit key, I’d consider Skipjack for my own applications.

Skipjack is interesting primarily for its design. This is the first NSA-developed algorithm we’ve ever seen. Cryptography is an adversarial science. Someone designs an algorithm; I break it. I design one; someone else breaks it. This is how we learn. Skipjack is a good target; it is an algorithm designed using secret methodologies by an organization we respect. (Think of it as the cryptographic equivalent of a piece of alien technology.) It’s a worthy target.

Skipjack is an unbalanced Feistel network (specifically, an incomplete construction), but it is obviously a product of military cryptography. Academic cryptography is mostly based on Feistel’s work in the mid-1970s at IBM: SP-networks and Feistel networks. Military cryptography started with rotor machines, and then generalized into shift registers. The block diagram and description of Skipjack clearly show its shift-register roots. I find it fascinating that the two different design paths are converging.
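
To show the shape of the construction, here is a toy unbalanced Feistel sketch operating on four 16-bit words, in the same general pattern as Skipjack’s “Rule A.” This is only the shape: the round function and key handling below are invented for illustration; the real round function, key schedule, and F-table are in the NIST specification linked below.

def toy_round_function(word16, round_key):
    """Stand-in keyed permutation on a 16-bit word (hypothetical, not
    Skipjack's G permutation): XOR with the round key, then rotate."""
    x = (word16 ^ round_key) & 0xFFFF
    return ((x << 5) | (x >> 11)) & 0xFFFF

def unbalanced_feistel_encrypt(block64, round_keys):
    """Treat a 64-bit block as four 16-bit words: transform one word,
    XOR it into a neighbor along with the round counter, and rotate the
    words -- a shift-register-like structure."""
    w = [(block64 >> (48 - 16 * i)) & 0xFFFF for i in range(4)]
    for counter, k in enumerate(round_keys, start=1):
        g = toy_round_function(w[0], k)
        w = [g ^ w[3] ^ counter, g, w[1], w[2]]
    return (w[0] << 48) | (w[1] << 32) | (w[2] << 16) | w[3]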

The first thing you notice about Skipjack is its simplicity. There are few design elements, and after some thought you can point to each one and explain why it is there. There are no weird constants. There are 32 rounds, and 32 rounds can hide a lot of faults, but the design seems sound.

And very fragile. Some algorithms are strong because they are of a strong type of algorithm; similar algorithms will also be strong. Skipjack isn’t like that; it’s a single strong algorithm amid a sea of mediocrity. Make almost any modification to Skipjack, even a small one, and the result can be broken. I predict that the most interesting cryptanalysis work will come from cryptanalyzing Skipjack variants.

People are already attempting to cryptanalyze Skipjack. Mostly we’ve seen breaks of modified versions of the algorithm, together with explanations of why the attack won’t work against Skipjack itself. And Skipjack with fewer rounds can be broken, but that’s expected.

And finally, Skipjack is not a submission to AES. It does not meet the criteria. AES will be a 128-bit block cipher; Skipjack is a 64-bit cipher. AES will support key lengths of 128, 192, and 256 bits; Skipjack has an 80-bit key. I believe I can increase Skipjack’s block size without affecting security, but there is no obvious way to increase the key length of the algorithm.

Next month: KEA.

http://csrc.nist.gov/encryption/skipjack-kea.htm

Source code: ftp://ftp.funet.fi/pub/crypt/cryptography/symmetric/…


Counterpane Systems—Featured Research

“Cryptographic Support for Secure Logs on Untrusted Machines”

B. Schneier and J. Kelsey, The Seventh USENIX Security Symposium Proceedings, USENIX Press, January 1998, pp. 53-62.

In many real-world applications, sensitive information must be kept in log files on an untrusted machine. In the event that an attacker captures this machine, we would like to guarantee that he will gain little or no information from the log files and limit his ability to corrupt the log files. This paper describes an efficient method for making all log entries generated prior to the logging machine’s compromise impossible for the attacker to read, and also impossible to undetectably modify or destroy.

http://www.schneier.com/paper-secure-logs.html
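
A minimal sketch of the general idea (a simplification, not the exact construction in the paper; the function names and the “encrypt”/“mac” labels are mine): each entry is protected under a secret A_j, which is then replaced by a one-way hash of itself and erased, so capturing the machine reveals nothing about the keys that protected earlier entries.

import hashlib, hmac

def append_entry(a_j, plaintext):
    """Protect one log entry under the current secret a_j, then evolve it.
    Sketch only: a real system would key a proper cipher with entry_key;
    here a short SHA-1 keystream stands in, capping entries at 20 bytes."""
    entry_key = hashlib.sha1(b"encrypt" + a_j).digest()
    keystream = hashlib.sha1(entry_key).digest()
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
    tag = hmac.new(hashlib.sha1(b"mac" + a_j).digest(),
                   ciphertext, hashlib.sha1).digest()
    a_next = hashlib.sha1(a_j).digest()   # one-way step: erase a_j after this
    return (ciphertext, tag), a_next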


News

The International Computer Security Association (despite the name, it’s a for-profit company) will insure companies against hackers, but the company has to pay ICSA a lot of money first.

http://www.nytimes.com/library/tech/98/06/biztech/…

More government propaganda: “The spread of strong encryption is going to mean that lives are going to be lost.” Sure, so will the spread of kitchen knives, the Interstate highway system, and ladders.

http://www.wired.com/news/politics/0,1283,12830,00.html

More on differential power analysis:

http://www.nytimes.com/library/tech/98/06/biztech/…

New virus spreads within Microsoft Word documents, randomly posts files to Internet newsgroups. I don’t know whether to be disgusted with yet another computer virus, or impressed with the cleverness of the idea.

http://www.isr.net/NETNEWS/netnewsmain.html [link dead]

Some people panicked when Microsoft received a patent for an anonymous cash system. There’s no big deal here. The research was presented at a crypto conference years ago, and the patent isn’t blocking. My guess is that Microsoft is starting to patent the random things their researchers do.

http://www.wired.com/news/news/technology/story/…


WIPO

WIPO is the World Intellectual Property Organization. There is something called the WIPO Treaty, designed to protect copyright in a world of digital works. This article is not about the futility of copy protection and watermarking schemes, or the artificial distinction between the owner of a work and the owner of a copy of the work. This article is about an obscure sentence in the U.S. Senate bill that implements the WIPO Treaty: “No person shall circumvent a technological protection measure that effectively controls access to a work protected under this title.”

Or, in other words, applied cryptanalysis will be illegal.

The bill passed 99-0 in the Senate (I don’t know who was napping during the vote) before anyone started paying attention. Groups concerned with cryptography research didn’t start lobbying until the bill got to the House Commerce Committee.

I spent a lot of time on this issue: meeting with Congresspeople, writing letters, alerting reporters. The backers of this provision are companies like Disney and Microsoft (two of the major lobbyists here), who seem more concerned with copies of The Little Mermaid than with strong cryptography to protect our critical infrastructure.

It was difficult. The other side was saying things like: “you can always test algorithms theoretically, just not in actual products” and “cracking contests are still legal” and “researchers can just ask for permission first” and “if they research but don’t publish it would be okay.”

Basically, there are two ways to deal with the problem. One, companies can design and implement strong security solutions to protect copyright. Or two, they can deploy weak solutions and try to hide that fact with a law. I don’t see why the security industry should be exempt from Consumer-Reports-style reviews of their products. It would be like the meat industry passing a law prohibiting anyone from determining whether there was rat hair in hamburger.

Two other illustrations. Under the law, I could create a virus that uses cryptography to protect itself. Removing it would be illegal, since it would entail circumventing a cryptographic protection measure.

Under the law, I could not test the security of firewalls. If a major bank came to me and asked me which firewall was secure, it would be illegal for me to answer the question. Banks would have to rely on the marketing claims of security companies, and we all know how reliable they are.

When security breaks become public, companies lose face. The cellphone industry didn’t like it when David Wagner and Ian Goldberg broke the GSM encryption algorithm. Microsoft didn’t like it when I broke Microsoft PPTP. But the cellphone industry said that they would fix the problem. And Microsoft, which has known about their problems for a long time, announced that they too would fix the problem. If researchers are not allowed to publish, then no one knows there are problems and companies don’t bother fixing them.

Right now there is a compromise amendment to the bill. Encryption research is allowed, publication is allowed, but you are going to have to prove you are a legitimate researcher and not a slimy hacker. Researchers have to lawfully obtain the thing they are going to break, make a good faith attempt to obtain authorization, and show that breaking the system was necessary to conduct research. They will have to disseminate the research results in “a manner reasonably calculated to advance the state of knowledge or development of encryption technology” (presumably hacker bulletin boards don’t count), and be “engaged in a legitimate course of study” or be employed and “appropriately trained or experienced in the field of encryption technology” (so no amateurs can try this).

This may not look like a compromise, but it is. I saw the iterations before the compromise; they were significantly worse.

My real fear is that this will have a great chilling effect on cryptography research. Companies like Microsoft and Disney will sue researchers who break their stuff. Even if the researchers are eventually vindicated, they will not have anywhere near the financial power that the companies will have. Just the mere threat of legal action might dissuade researchers. Right now security is far too important to sacrifice for a few less pirated copies of The Little Mermaid.

http://www.zdnet.com/zdnn/stories/zdnn_display/…
http://www.zdnet.com/pcweek/news/0622/22wipo.html
http://www.zdnet.com/zdnn/content/inwk/0523/326948.html


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on cryptography and computer security.

To subscribe, visit http://www.schneier.com/crypto-gram.html or send a blank message to crypto-gram-subscribe@chaparraltree.com. Back issues are available at http://www.schneier.com. To unsubscribe, visit http://www.schneier.com/crypto-gram-faq.html.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is president of Counterpane Systems, the author of Applied Cryptography, and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He served on the board of the International Association for Cryptologic Research, EPIC, and VTW. He is a frequent writer and lecturer on cryptography.

Counterpane Systems is a five-person consulting firm specializing in cryptography and computer security. Counterpane provides expert consulting in design and analysis, implementation and testing, threat modeling, product research and forecasting, classes and training, intellectual property, and export consulting. Contracts range from short-term design evaluations and expert opinions to multi-year development efforts.
