My Data, Your Machine

  • Bruce Schneier
  • Wired
  • November 30, 2006

Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.

The second security problem is similar, but you store your valuables in someone else’s safe. Even worse, it’s someone you don’t trust. He doesn’t know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it’s still just a part of your overall home security. In the second case, the safe is the only security device you have.

This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.

These systems are difficult to secure, and not just because you give your attacker the device and let him use whatever time, equipment and expertise he needs to break it. They're also difficult to secure because breaks are generally "class breaks." The expert who figures out how to do it can build hardware—or write software—to do it automatically. Only one person needs to break a given DRM system; his software can then break every other device in the same class.

This means the system needs to be secure not against the average attacker, but against the smartest, most motivated and best-funded attacker.

I was reminded of this problem earlier this month, when researchers announced a new attack against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring—and actually affecting—the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer’s owner from learning the DRM system’s cryptographic keys.

These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these “side-channel attacks,” because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.
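To make the timing idea concrete, here is a minimal sketch in Python, invented for illustration; it is not the researchers' branch-prediction attack or the 1995 technique, but it shows why running time can leak a key. A naive square-and-multiply implementation of modular exponentiation does an extra multiplication only when a bit of the secret exponent is 1, so exponents with more 1 bits take measurably longer:

```python
import time

def modexp_naive(base, exponent, modulus):
    """Naive square-and-multiply: the extra multiply happens only when a key
    bit is 1, so total running time depends on the secret exponent."""
    result = 1
    for bit in bin(exponent)[2:]:                # most-significant bit first
        result = (result * result) % modulus     # always square
        if bit == '1':
            result = (result * base) % modulus   # extra work leaks timing
    return result

def average_time(exponent, trials=1000):
    """Average over many runs to smooth out measurement noise, as a real
    timing attack would."""
    base, modulus = 0xC0FFEE, (1 << 127) - 1     # toy parameters, not real RSA
    start = time.perf_counter()
    for _ in range(trials):
        modexp_naive(base, exponent, modulus)
    return (time.perf_counter() - start) / trials

# Two exponents of equal bit length but very different numbers of 1 bits:
sparse = 1 << 126            # a single 1 bit: almost no extra multiplies
dense = (1 << 127) - 1       # all 1 bits: an extra multiply at every step

print(f"sparse exponent: {average_time(sparse) * 1e6:.1f} microseconds/op")
print(f"dense exponent:  {average_time(dense) * 1e6:.1f} microseconds/op")
```

A real attack refines this idea bit by bit, using many measurements to decide whether each individual key bit triggered the extra work; constant-time implementations exist precisely to close this channel.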

Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.

Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren’t any secrets on the card. Your bank doesn’t care that you can read the account number off the front of the card, or the data off the magnetic stripe on the back—the real data, and the security, are in the bank’s databases.
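As a rough sketch of that design difference (the class and method names below are invented for illustration, not taken from any real payment system), the stored-value model puts the balance, a secret worth money, on hardware the attacker holds, while the debit model leaves nothing on the card but an identifier and makes every decision on the bank's servers:

```python
class StoredValueCard:
    """Stored-value model: the balance lives on the card itself, so whoever
    physically controls the card controls the money."""
    def __init__(self, balance_cents):
        self.balance_cents = balance_cents       # a secret the issuer must protect

    def spend(self, amount_cents):
        if amount_cents > self.balance_cents:
            return False
        self.balance_cents -= amount_cents       # tamper with this field, create money
        return True


class Bank:
    """Debit model: the card carries only an account number; the balance and
    every security decision stay in the bank's database."""
    def __init__(self):
        self.accounts = {}                       # account id -> balance in cents

    def open_account(self, account_id, balance_cents):
        self.accounts[account_id] = balance_cents

    def authorize(self, account_id, amount_cents):
        balance = self.accounts.get(account_id, 0)
        if amount_cents > balance:
            return False                         # checked where the attacker has no access
        self.accounts[account_id] = balance - amount_cents
        return True


# A debit card is just a pointer into the bank's database.
bank = Bank()
bank.open_account("acct-1234", 5_000)
print(bank.authorize("acct-1234", 2_000))        # True: the bank debits its own record
print(bank.authorize("acct-1234", 9_000))        # False: nothing on the card to forge
```

Nothing in the debit sketch needs tamper resistance; the card can be copied or read freely, because the asset it points to never leaves the bank.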

Or compare a DRM system with a financial model that doesn’t care about copying. The former is impossible to secure, the latter easy.

While common in digital systems, this kind of security problem isn’t limited to them. Last month, the province of Ontario started investigating insider fraud in its scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets were able to figure out which tickets were winners and not sell them. It’s the same problem: the owners of the data on the tickets—the lottery commission—tried to keep that data secret from those who had physical control of the tickets. And they failed.

Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn’t possible, because there are no secrets on the tickets for an attacker to learn.

Separating data ownership and device ownership doesn’t mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker—with confidence. I’m not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer—especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.

