The Fallacy of Trusted Client Software

Controlling what a user can do with a piece of data assumes a trust paradigm that doesn’t exist in the real world. Software copy protection, intellectual property theft, digital watermarking: different companies claim to solve different parts of this growing problem. Some companies market e-mail security solutions in which the e-mail cannot be read after a certain date, effectively “deleting” it. Other companies sell rights-management software: audio and video files that can’t be copied or redistributed, data that can be read but not printed, and software that can’t be copied. Still other companies sell software copy-protection technologies.

The common thread in all of these “solutions” is that they postulate a situation where the owner of a file controls what happens to that file after it’s sent to someone else. In the e-mail product, the sender of a file can control when the file is deleted on the recipient’s computer. In the various rights-management products, the sender of a file wants to control whether and when the recipient can view the file, copy it, modify it, or update it. This doesn’t work. Controlling what the client can do with a piece of data assumes a trusted piece of software is running on the user’s computer: trusted, that is, by the initial owner of the file. There’s no such thing, and so these solutions don’t work.
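To see why, consider what a “disappearing e-mail” viewer has to look like on the recipient’s machine. What follows is a minimal sketch in Python, with a toy stand-in for decryption and every name hypothetical; no actual product is being quoted. The point is structural: the expiry check and the plaintext both live inside a program the recipient fully controls.

```python
from datetime import datetime, timezone

def decrypt(ciphertext: bytes, key: bytes) -> str:
    # Toy stand-in for real decryption (XOR, for illustration only).
    return bytes(c ^ k for c, k in zip(ciphertext, key * len(ciphertext))).decode()

def view_protected_message(ciphertext: bytes, key: bytes, expires: datetime) -> str:
    # The entire "deletion" guarantee is this one client-side check.
    if datetime.now(timezone.utc) > expires:
        raise PermissionError("message has expired")
    # Once this line runs, the plaintext exists on the recipient's computer.
    return decrypt(ciphertext, key)
```

The recipient can delete the two-line check, set the system clock back, or simply save the return value the first time the message is displayed. Nothing the sender does can prevent any of these.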

Battling Bots

As an example, look at the online gaming community. Many games allow for multiplayer interaction over the Internet, and some games even have tournaments for cash prizes. Hackers have written computer bots that assist play for some of these games, particularly Quake and Netrek. The idea is that the bots can react much more quickly than a human; the player becomes much more effective with their assistance. An arms race has ensued, as the game designers try to disable these bots and force fairer play, while the hackers continue to make the bots more clever.
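The advantage a bot buys is easy to see in code. Here is a deliberately simplified sketch, not tied to any real game’s protocol; read_game_state and send_input are hypothetical hooks the bot author would have to build. It polls the game state in a tight loop and fires the instant a target appears, far faster than any human can react.

```python
import time

def bot_loop(read_game_state, send_input):
    # Poll the game state and react in ~1 ms; a human needs ~200 ms.
    while True:
        state = read_game_state()          # e.g., parsed from network packets
        if state.get("enemy_visible"):
            send_input({"aim": state["enemy_position"], "fire": True})
        time.sleep(0.001)
```

From the server’s point of view, the bot’s input is indistinguishable from a human’s, because both come out of the same untrusted client machine. That’s why the game designers are stuck playing detection tricks instead of simply refusing bot traffic.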

These games are relying on trusted client software, and the hacker community has managed to break every trick the game designers have thrown at them. I am continually amazed by the efforts of hackers to break through security. The lesson is twofold: Not only is there no reasonable way to trust a client-side program in real usage, there’s no possible way to ever achieve that level of protection. Against the average user, anything works; there’s no need for complex security software. Against the skilled attacker, on the other hand, nothing works. And even worse, most systems need to be secure against the smartest attacker. If one person hacks Quake (or InterTrust or Disappearing Inc.), he can write a point-and-click software tool that anyone can use. Suddenly, a security system that is secure against almost everyone can be compromised by anyone.

All of these systems (disappearing e-mail, rights management for music and video, fair game playing) face two types of attackers: the average user and the skilled attacker. Joe User just wants a single copy of Photoshop, “The Lion King” and Robyn Hitchcock’s latest CD, and doesn’t want to pay for them. There’s no analogue for him in the physical world; Joe User couldn’t make a single copy of a Chanel handbag, even if he wanted one. On the one hand, he’s more elusive than the skilled attacker; on the other hand, he’s much less of a financial threat. Joe User isn’t an organized criminal; he’s not going to have a criminal network, and he’s not going to leave much of a trail. He might not even have bought the software, video or CD if he couldn’t get a free pirated copy.

Against Joe User, almost any countermeasure works. But against Jane Hacker, no countermeasure works. The problem is that Jane controls her computer. She can run debuggers, reverse-engineer code and analyze the protected program. If she’s smart enough, she can go into the software and disable the security code. The manufacturer can’t do a thing to stop her; all it can do is make her task harder. But to Jane, this challenge just entices her further. There are many Janes out there who break software copy-protection schemes as a hobby. They hang out on the ‘Net, trading illegal software. There are also those who do it for profit. They work in China, Taiwan and elsewhere, removing copy-protection code and reselling the software on CD-ROM for a tenth of the retail price. They can disable the most sophisticated copy-protection mechanisms. The lesson to learn from these people is that any copy-protection scheme can be broken. It’s the same lesson that the game companies learned from the bot hackers.
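“Disabling the security code” usually comes down to one branch. Here is a deliberately simplified sketch of the shape of the problem, in Python for readability; in a shipped product the same branch is a single conditional-jump instruction in the compiled binary, and Jane’s real work is finding it with a debugger or disassembler.

```python
import sys

def is_licensed() -> bool:
    # Stand-in for the real check: serial-number math, a license file,
    # a dongle query. However elaborate, it ultimately returns one bit.
    return False

def main():
    # The whole copy-protection scheme hinges on this single branch.
    # Patch it (flip the jump, or make is_licensed always return True)
    # and every protection behind it evaporates.
    if not is_licensed():
        sys.exit("Invalid license.")
    print("Running application...")

main()
```

And once Jane has found and patched that branch, she doesn’t have to repeat the work; she can ship the patched binary, or a one-click patcher, to everyone else.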

No Defense

Because breaking the countermeasure can have so much value, building a system that is secure against these attackers is futile. The best available option is to put the decryption mechanism in secure hardware, and then hope that this slows down the professionals by a few years. But as soon as someone wants a software player, it will be broken within weeks. This is what the DVD industry learned in 1999. This is what Glassbook learned in 2000, when unprotected copies of Stephen King’s Riding the Bullet materialized two days after the eBook version (supposedly secured against this kind of thing) was released.

Any rational security policy will recognize that technology is no defense against professional pirates. These people are no different from people who counterfeit Chanel handbags, and society has ways of catching them (non-computer detection and reaction mechanisms). They may or may not be effective ways, but that has nothing to do with the digital nature of the forgery. The same security policy would recognize that Uncle Steve is an amateur, and would imply that almost any countermeasure will work against him, as long as the break isn’t trivial or packaged into a tool anyone can use. This implies that content providers need to find alternate ways to make money. Selling physical copies of a book doesn’t work well in the digital world. It’s better to sell real-time updates, subscriptions and additional reasons to buy a paper copy. I like buying CDs instead of copying them because I get the liner notes. I like buying a physical book instead of printing a digital copy because I want the portability and the binding.

You can see alternate models in the public financing of good works: public television, public art, street performers. The performance is free, but individual contributions make it happen. Instead of Tom Clancy charging $4.99 a copy for his new book, maybe he should put up a Web page saying: “I will write the book and put it in the public domain, but only after I receive $3 million in contributions.” (This approach was actually used to fund some anti-Bush campaign ads this year. People would pledge contributions on their credit card, but would only be charged if the target total was reached. Notice that the credit-card company acted as the trusted third party in this transaction.)
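The mechanics of that pledge scheme are simple enough to sketch. A minimal illustration follows, with all names hypothetical; in the real transaction the card processor, not the author’s software, plays the trusted third party and enforces the all-or-nothing rule.

```python
def charge_card(person: str, amount: float) -> None:
    # Stand-in for the trusted third party: the credit-card processor.
    print(f"charging {person} ${amount:,.2f}")

def run_pledge_drive(pledges: dict[str, float], target: float) -> bool:
    # All-or-nothing: if pledges fall short, no one is charged and the
    # work is not released; if they reach the target, everyone is charged.
    if sum(pledges.values()) < target:
        return False
    for person, amount in pledges.items():
        charge_card(person, amount)
    return True

# Example: release the book into the public domain only past $3 million.
released = run_pledge_drive({"alice": 25.0, "bob": 100.0}, 3_000_000.0)
```

Note that nothing here depends on trusting the reader’s computer; the enforcement happens at a party everyone already trusts with money.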

Other industries have different solutions. The smarter game companies dealt with the problem of bots by allowing them in some tournaments, and by holding the final rounds of other tournaments live at trade shows, where the computer is trusted by the game company. The smarter self-destructing e-mail companies emphasize the reduction in liability that installing such a system brings, rather than the absolute reliability of the software. There, the threat is not malicious users copying and distributing e-mail, but honest employees accidentally leaving e-mail undeleted and malicious lawyers subpoenaing the e-mail years later. Trying to limit the abilities of a user on a general-purpose computer is doomed to failure. It keeps the honest honest, and provides a nice false sense of security. Sometimes that’s good enough, but not always.
