Entries Tagged "copyright"



Last month Marine General James Cartwright told the House Armed Services Committee that the best cyber defense is a good offense.

As reported in Federal Computer Week, Cartwright said: “History teaches us that a purely defensive posture poses significant risks,” and that if “we apply the principle of warfare to the cyberdomain, as we do to sea, air and land, we realize the defense of the nation is better served by capabilities enabling us to take the fight to our adversaries, when necessary, to deter actions detrimental to our interests.”

The general isn’t alone. In 2003, the entertainment industry tried to get a law passed giving them the right to attack any computer suspected of distributing copyrighted material. And there probably isn’t a sys-admin in the world who doesn’t want to strike back at computers that are blindly and repeatedly attacking their networks.

Of course, the general is correct. But his reasoning illustrates perfectly why peacetime and wartime are different, and why generals don’t make good police chiefs.

A cyber-security policy that condones both active deterrence and retaliation—without any judicial determination of wrongdoing—is attractive, but it’s wrongheaded, not least because it ignores the line between war, where those involved are permitted to determine when counterattack is required, and crime, where only impartial third parties (judges and juries) can impose punishment.

In warfare, the notion of counterattack is extremely powerful. Going after the enemy—its positions, its supply lines, its factories, its infrastructure—is an age-old military tactic. But in peacetime, we call it revenge, and consider it dangerous. Anyone accused of a crime deserves a fair trial. The accused has the right to defend himself, to face his accuser, to an attorney, and to be presumed innocent until proven guilty.

Both vigilante counterattacks and pre-emptive attacks fly in the face of these rights. They punish people who haven’t been found guilty. It’s the same whether it’s an angry lynch mob stringing up a suspect, the MPAA disabling the computer of someone it believes made an illegal copy of a movie, or a corporate security officer launching a denial-of-service attack against someone he believes is targeting his company over the net.

In all of these cases, the attacker could be wrong. This has been true for lynch mobs, and on the internet it’s even harder to know who’s attacking you. Just because my computer looks like the source of an attack doesn’t mean that it is. And even if it is, it might be a zombie controlled by yet another computer; I might be a victim, too. The goal of a government’s legal system is justice; the goal of a vigilante is expediency.

I understand the frustrations of General Cartwright, just as I do the frustrations of the entertainment industry, and the world’s sys-admins. Justice in cyberspace can be difficult. It can be hard to figure out who is attacking you, and it can take a long time to make them stop. It can be even harder to prove anything in court. The international nature of many attacks exacerbates the problems; more and more cybercriminals are jurisdiction shopping: attacking from countries with ineffective computer crime laws, easily bribable police forces and no extradition treaties.

Revenge is appealingly straightforward, and treating the whole thing as a military problem is easier than working within the legal system.

But that doesn’t make it right. In 1789, the Declaration of the Rights of Man and of the Citizen declared: “No person shall be accused, arrested, or imprisoned except in the cases and according to the forms prescribed by law. Any one soliciting, transmitting, executing, or causing to be executed any arbitrary order shall be punished.”

I’m glad General Cartwright thinks about offensive cyberwar; it’s how generals are supposed to think. I even agree with Richard Clarke’s threat of military-style reaction in the event of a cyber-attack by a foreign country or a terrorist organization. But short of an act of war, we’re far safer with a legal system that respects our rights.

This essay originally appeared in Wired.

Posted on April 5, 2007 at 7:35 AM

U.S. Patent Office Spreads FUD About Music Downloads

It’s simply amazing:

The United States Patent and Trademark Office claims that file-sharing sites could be setting up children for copyright infringement lawsuits and compromising national security.

“A decade ago, the idea that copyright infringement could become a threat to national security would have seemed implausible,” Patent and Trademark Director Jon Dudas said in a report released this week. “Now, it’s a sad reality.”

The report, which the patent office recently forwarded to the U.S. Department of Justice, states that peer-to-peer networks could manipulate sites so children violate copyright laws more frequently than adults. That could make children the target in most copyright lawsuits and, in turn, make those protecting their material appear antagonistic, according to the report.

File-sharing software also could be to blame for government workers who expose sensitive data and jeopardize national security after downloading free music on the job, the report states.

What happened? Did someone in the entertainment industry bribe the PTO to write this?

Report here.

Posted on March 20, 2007 at 6:58 AM

AACS Cracked?

This is a big deal. AACS (Advanced Access Content System), the copy protection used in both Blu-ray and HD DVD, might have been cracked—but it’s still a rumor.

If it’s true, what will be interesting is the system’s in-the-field recovery system. Will it work?

Hypothetical fallout could be something like this: if PowerDVD is the source of the keys, an AACS initiative will be launched to revoke the player’s keys to render it inoperable and in need of an update. There is some confusion regarding this process, however. It is not the case that you can protect a cracked player by hiding it offline (the idea being that the player will never “update” with new code that way). Instead, the player’s existing keys will be revoked at the disc level, meaning that new pressings of discs won’t play on the cracked player. In this way, hiding a player from updates will not result in having a cracked player that will work throughout the years. It could mean that all bets are off for discs that are currently playable on the cracked player, however (provided it is not updated). Again, this is all hypothetical at this time.
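The disc-level revocation described above can be sketched as a toy broadcast-encryption scheme. Everything here is invented for illustration (real AACS uses a subset-difference tree and a real block cipher, not this toy XOR construction), but it shows why a revoked player keeps playing old pressings while failing on new ones:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream from SHA-256; stands in for a real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    # Same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def press_disc(media_key: bytes, device_keys: dict, revoked: set) -> dict:
    # A disc's key block wraps the media key under every non-revoked
    # device key; revoked players are simply left out of new pressings.
    return {dev: xor(media_key, k)
            for dev, k in device_keys.items() if dev not in revoked}

def play(disc_key_block: dict, device: str, device_key: bytes):
    wrapped = disc_key_block.get(device)
    if wrapped is None:
        return None  # revoked: this pressing won't play
    return xor(wrapped, device_key)

device_keys = {"player_a": b"A" * 16, "player_b": b"B" * 16}
media_key = b"M" * 16

old_disc = press_disc(media_key, device_keys, revoked=set())
new_disc = press_disc(media_key, device_keys, revoked={"player_a"})

# The compromised player still plays old discs, but not new ones.
assert play(old_disc, "player_a", device_keys["player_a"]) == media_key
assert play(new_disc, "player_a", device_keys["player_a"]) is None
assert play(new_disc, "player_b", device_keys["player_b"]) == media_key
```

This is why hiding a player offline doesn’t help: it protects the discs the player already plays, but future pressings simply omit the revoked key from their key blocks.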

Copy protection is inherently futile. The best it can achieve is a never-ending arms race, which is why Big Media is increasingly relying on legal and social barriers.

EDITED TO ADD (12/30): An update.

EDITED TO ADD (1/3): More info from the author of the tool.

EDITED TO ADD (1/12): Excellent multi-part analysis here.

EDITED TO ADD (1/16): Part five of the above series of essays. And keys for different movies are starting to appear.

Posted on December 29, 2006 at 6:02 AM

A Cost Analysis of Windows Vista Content Protection

Peter Gutmann’s “A Cost Analysis of Windows Vista Content Protection” is fascinating reading:

Executive Summary

Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called “premium content”, typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it’s not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server). This document analyses the cost involved in Vista’s content protection, and the collateral damage that this incurs throughout the computer industry.

Executive Executive Summary

The Vista Content Protection specification could very well constitute the longest suicide note in history.

It contains stuff like:

Denial-of-Service via Driver Revocation

Once a weakness is found in a particular driver or device, that driver will have its signature revoked by Microsoft, which means that it will cease to function (details on this are a bit vague here, presumably some minimum functionality like generic 640×480 VGA support will still be available in order for the system to boot). This means that a report of a compromise of a particular driver or device will cause all support for that device worldwide to be turned off until a fix can be found. Again, details are sketchy, but if it’s a device problem then presumably the device turns into a paperweight once it’s revoked. If it’s an older device for which the vendor isn’t interested in rewriting their drivers (and in the fast-moving hardware market most devices enter “legacy” status within a year or two of their replacement models becoming available), all devices of that type worldwide become permanently unusable.

Read the whole thing.

And here’s commentary on the paper.

Posted on December 26, 2006 at 1:56 PM

Class Break of TiVoToGo DRM

Last week I wrote about the security problems of storing a secret in a device given to your attacker, and how such systems are vulnerable to class breaks. I singled out DRM systems as being particularly vulnerable to this kind of security problem.

This week we have an example: The DRM in TiVoToGo has been cracked:

An open source command-line utility that converts TiVoToGo movies into an MPEG file and strips the DRM is now available online. Released under a BSD license, the utility—called TiVo File Decoder—builds on the extensive reverse engineering efforts of the TiVo hacking community. The goal of the project is to bring TiVo media viewing capabilities to unsupported platforms like OS X and the open source Linux operating system. TiVoToGo support is currently only available on Windows.

EDITED TO ADD (12/8): I have been told that TiVoToGo has not been hacked: “The decryption engine has been reverse engineered in cross-platform code – replicating what TiVo already provides customers on the Windows platform (in the form of TiVo Desktop software). Each customer’s unique Media Access Key (MAK) is still needed as a *key* to decrypt content from their particular TiVo unit. I can’t decrypt shows from your TiVo, and you can’t decrypt shows from mine. Until someone figures out how to produce or bypass the required MAK, it hasn’t been cracked.”
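The distinction (public decoder, secret per-unit key) can be sketched in a few lines of Python. The cipher and the key format here are invented for illustration and bear no resemblance to TiVo’s actual scheme:

```python
import hashlib

def toy_crypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR stream cipher; the same call encrypts and decrypts.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

show = b"my recorded program"
my_mak = b"0123456789"  # hypothetical per-unit Media Access Key

encrypted = toy_crypt(show, my_mak)

# The decoder algorithm is public; the per-unit key is the only secret.
assert toy_crypt(encrypted, my_mak) == show
assert toy_crypt(encrypted, b"someone else's MAK") != show
```

Reverse engineering the decoder, as the open-source utility did, publishes the algorithm; it doesn’t reveal anyone’s per-unit key.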

And here’s a guide to installing TiVoToGo on your Mac.

EDITED TO ADD (12/17): Log of several hackers working on the problem. Interesting.

Posted on December 7, 2006 at 12:42 PM

MPAA Kills Anti-Pretexting Bill

Remember pretexting? It’s the cute name given to…well…fraud. It’s when you call someone and pretend to be someone else, in order to get information. Or when you go online and pretend to be someone else, in order to get something. There’s no question in my mind that it’s fraud and should be illegal, but legally it seems to be a gray area.

California is considering a bill that would make this kind of thing illegal, and allow victims to sue for damages.

Who could be opposed to this? The MPAA, that’s who:

The bill won approval in three committees and sailed through the state Senate with a 30-0 vote. Then, according to Lenny Goldberg, a lobbyist for the Privacy Rights Clearinghouse, the measure encountered unexpected, last-minute resistance from the Motion Picture Association of America.

“The MPAA has a tremendous amount of clout and they told legislators, ‘We need to pose as someone other than who we are to stop illegal downloading,'” Goldberg said.

These people are looking more and more like a criminal organization every day.

EDITED TO ADD (12/11): Congress has outlawed pretexting. The law doesn’t go as far as some of the state laws—which it pre-empts—but it’s still a good thing.

Posted on December 4, 2006 at 7:38 AM

Separating Data Ownership and Device Ownership

Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.

The second security problem is similar, but you store your valuables in someone else’s safe. Even worse, it’s someone you don’t trust. He doesn’t know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it’s still just a part of your overall home security. In the second case, the safe is the only security device you have.

This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.

These systems are difficult to secure, and not just because you give your attacker the device and let him utilize whatever time, equipment and expertise he needs to break it. They’re difficult to secure because breaks are generally “class breaks.” The expert who figures out how to do it can build hardware—or write software—to do it automatically. Only one person needs to break a given DRM system; the software can break every other device in the same class.

This means that the system needs to be secure not against the average attacker, but against the smartest, most motivated and best-funded attacker.

I was reminded of this problem earlier this month, when researchers announced a new attack (.pdf) against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring—and actually affecting—the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer’s owner from learning the DRM system’s cryptographic keys.

These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these “side-channel attacks,” because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.
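The idea behind timing attacks can be demonstrated with something far simpler than RSA: an early-exit byte comparison. The sketch below is illustrative only. It counts comparison steps instead of measuring real execution time, and it has nothing to do with the branch-prediction attack in the paper, but the principle—running time depends on secret data—is the same:

```python
def leaky_compare(secret: bytes, guess: bytes):
    # Early-exit comparison. Returns (equal, bytes_examined); the work
    # count stands in for execution time in a real timing attack.
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return len(secret) == len(guess), steps

def recover(secret_len: int, oracle) -> bytes:
    # Guess the secret one byte at a time: the candidate that makes the
    # comparison examine the most bytes is the right one.
    known = b""
    for i in range(secret_len):
        pad = b"\x00" * (secret_len - i - 1)
        best = max(range(256),
                   key=lambda b: oracle(known + bytes([b]) + pad))
        known += bytes([best])
    return known

secret = b"hunter2!"
recovered = recover(len(secret), lambda guess: leaky_compare(secret, guess))
assert recovered == secret
```

A brute-force search would need up to 256^8 guesses for this secret; the timing leak reduces it to at most 256×8, which is why constant-time comparisons matter.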

Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.

Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren’t any secrets on the card. Your bank doesn’t care that you can read the account number off the front of the card, or the data off the magnetic stripe on the back—the real data, and the security, are in the bank’s databases.
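That design difference can be made concrete with a small sketch (the class names, card ID and amounts are invented):

```python
class StoredValueCard:
    # Stored-value model: the balance lives on the card itself, so
    # security rests entirely on the card's tamper resistance.
    def __init__(self, balance: int):
        self.balance = balance

class Bank:
    # Debit model: the balance lives in the bank's database; the card
    # carries only an identifier, so nothing on it needs to be secret.
    def __init__(self):
        self.accounts = {}

    def debit(self, card_id: str, amount: int) -> int:
        if self.accounts.get(card_id, 0) < amount:
            raise ValueError("insufficient funds")
        self.accounts[card_id] -= amount
        return self.accounts[card_id]

card = StoredValueCard(10)
card.balance = 10_000        # the owner "creates money" by editing the device

bank = Bank()
bank.accounts["card123"] = 10
try:
    bank.debit("card123", 10_000)   # tampered card, honest database
except ValueError:
    pass                            # the bank's record is authoritative
assert bank.accounts["card123"] == 10
```

In the debit model the attacker no longer holds the device that matters; the data/device separation has been designed away.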

Or compare a DRM system with a financial model that doesn’t care about copying. The former is impossible to secure, the latter easy.

While common in digital systems, this kind of security problem isn’t limited to them. Last month, the province of Ontario started investigating insider fraud in their scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets are able to figure out which tickets are winners, and not sell them. It’s the same problem: the owners of the data on the tickets—the lottery commission—tried to keep that data secret from those who had physical control of the tickets. And they failed.

Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn’t possible, because there are no secrets on the tickets for an attacker to learn.

Separating data ownership and device ownership doesn’t mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker—with confidence. I’m not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer—especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.

This essay originally appeared on Wired.com.

EDITED TO ADD (12/1): I completely misunderstood the lottery problem in Ontario. The frauds reported were perpetrated by lottery machine operators at convenience stores and the like stealing end-of-week draw tickets from unsuspecting customers. The customer would hand their ticket over the counter to be scanned to see if it was a winner. The clerk (knowing what the winning numbers actually were) would palm a non-winning ticket into the machine, inform the customer “sorry better luck next time” and claim the prize on their own at a later date.

Nice scam, but nothing to do with the point of this essay.

Posted on November 30, 2006 at 6:36 AM

New Timing Attack Against RSA

A new paper describes a timing attack against RSA, one that bypasses existing security measures against these sorts of attacks. The attack described is optimized for the Pentium 4, and is particularly suited for applications like DRM.

Meta moral: If Alice controls the device, and Bob wants to control secrets inside the device, Bob has a very difficult security problem. These “side-channel” attacks—timing, power, radiation, etc.—allow Alice to mount some very devastating attacks against Bob’s secrets.

I’m going to write more about this for Wired next week, but for now you can read the paper, the Slashdot thread, and the essay I wrote in 1998 about side-channel attacks (also this academic paper).

Posted on November 21, 2006 at 7:24 AM
