Entries Tagged "copyright"


New Timing Attack Against RSA

A new paper describes a timing attack against RSA, one that bypasses existing security measures against these sorts of attacks. The attack described is optimized for the Pentium 4, and is particularly suited for applications like DRM.

Meta moral: If Alice controls the device, and Bob wants to control secrets inside the device, Bob has a very difficult security problem. These “side-channel” attacks—timing, power, radiation, etc.—allow Alice to mount some very devastating attacks against Bob’s secrets.
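The effect is easy to demonstrate in miniature. Here’s a minimal Python sketch of a timing side channel: not the RSA attack from the paper, just a toy comparison routine whose running time depends on secret data, which is exactly the property these attacks exploit. (The secret and the guesses are made up for illustration.)

    import hmac
    import time

    SECRET = b"0123456789abcdef"  # hypothetical secret held inside the device

    def naive_equal(a: bytes, b: bytes) -> bool:
        # Returns at the first mismatched byte, so the running time
        # leaks how many leading bytes of the guess are correct.
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
        return True

    def time_guess(guess: bytes, trials: int = 100_000) -> float:
        start = time.perf_counter()
        for _ in range(trials):
            naive_equal(SECRET, guess)
        return time.perf_counter() - start

    print(time_guess(b"zzzzzzzzzzzzzzzz"))  # no correct prefix: fastest
    print(time_guess(b"01234567zzzzzzzz"))  # eight correct bytes: slower

    # The standard countermeasure is a constant-time comparison:
    print(hmac.compare_digest(SECRET, b"01234567zzzzzzzz"))

Alice times many guesses, extends the matching prefix one byte at a time, and recovers the secret without ever opening the device; constant-time code and blinding are the usual defenses.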

I’m going to write more about this for Wired next week, but for now you can read the paper, the Slashdot thread, and the essay I wrote in 1998 about side-channel attacks (also this academic paper).

Posted on November 21, 2006 at 7:24 AM

FairUse4WM News

A couple of weeks ago I wrote about the battle between Microsoft’s DRM system and FairUse4WM, which breaks it. The news this week is that Microsoft has patched its security against FairUse4WM 1.2 and filed a lawsuit against the program’s anonymous authors, and those same anonymous authors have released FairUse4WM 1.3, which breaks the latest Microsoft patch.

We asked Viodentia about Redmond’s accusation that he and/or his associates broke into its systems in order to obtain the IP necessary to crack PlaysForSure; Vio replied that he’s “utterly shocked” by the charge. “I didn’t use any Microsoft source code. However, I believe that this lawsuit is a fishing expedition to get identity information, which can then be used to either bring more targeted lawsuits, or to cause other trouble.” We’re sure Microsoft would like its partners and the public to think that its DRM is generally infallible and could only be cracked by stealing its IP, so Viodentia’s conclusion about its legal tactics seems pretty fair, obvious, and logical to us.

What’s interesting about this continuing saga is how different it is from the normal find-vulnerability-then-patch sequence. The authors of FairUse4WM aren’t finding bugs and figuring out how to exploit them, forcing Microsoft to patch them. This is a sequence of crack, fix, re-crack, re-fix, etc.

The reason we’re seeing this—and this is going to be the norm for DRM systems—is that DRM is fundamentally an impossible problem. Making it work at all involves tricks, and breaking DRM is akin to “fixing” the software so the tricks don’t work. Anyone looking for a demonstration that technical DRM is doomed should watch this story unfold. (If Microsoft has any chance of winning at all, it’s via the legal route.)

Posted on September 28, 2006 at 12:55 PM

Microsoft and FairUse4WM

If you really want to see Microsoft scramble to patch a hole in its software, don’t look to vulnerabilities that impact countless Internet Explorer users or give intruders control of thousands of Windows machines. Just crack Redmond’s DRM.

Security patches used to be rare. Software vendors were happy to pretend that vulnerabilities in their products were illusory—and then quietly fix the problem in the next software release.

That changed with the full disclosure movement. Independent security researchers started going public with the holes they found, making vulnerabilities impossible for vendors to ignore. Then worms became more common; patching—and patching quickly—became the norm.

But even now, no software vendor likes to issue patches. Every patch is a public admission that the company made a mistake. Moreover, the process diverts engineering resources from new development. Patches annoy users by making them update their software, and piss them off even more if the update doesn’t work properly.

For the vendor, there’s an economic balancing act: how much more will your users be annoyed by unpatched software than they will be by the patch, and is that reduction in annoyance worth the cost of patching?

Since 2003, Microsoft’s strategy to balance these costs and benefits has been to batch patches: instead of issuing them one at a time, it’s been issuing them all together on the second Tuesday of each month. This decreases Microsoft’s development costs and increases the reliability of its patches.

The user pays for this strategy by remaining open to known vulnerabilities for up to a month. On the other hand, users benefit from a predictable schedule: Microsoft can test all the patches that are going out at the same time, which means that patches are more reliable and users are able to install them faster with more confidence.

In the absence of regulation, software liability, or some other mechanism to make unpatched software costly for the vendor, “Patch Tuesday” is the best users are likely to get.

Why? Because it makes near-term financial sense to Microsoft. The company is not a public charity, and if the internet suffers, or if computers are compromised en masse, the economic impact on Microsoft is still minimal.

Microsoft is in the business of making money, and keeping users secure by patching its software is only incidental to that goal.

There’s no better example of this principle in action than Microsoft’s behavior around the vulnerability in its digital rights management software PlaysForSure.

Last week, a hacker developed an application called FairUse4WM that strips the copy protection from Windows Media DRM 10 and 11 files.

Now, this isn’t a “vulnerability” in the normal sense of the word: digital rights management is not a feature that users want. Being able to remove copy protection is a good thing for some users, and completely irrelevant for everyone else. No user is ever going to say: “Oh no. I can now play the music I bought for my computer in my car. I must install a patch so I can’t do that anymore.”

But to Microsoft, this vulnerability is a big deal. It affects the company’s relationship with major record labels. It affects the company’s product offerings. It affects the company’s bottom line. Fixing this “vulnerability” is in the company’s best interest; never mind the customer.

So Microsoft wasted no time; it issued a patch three days after learning about the hack. There’s no month-long wait for copyright holders who rely on Microsoft’s DRM.

This clearly demonstrates that economics is a much more powerful motivator than security.

It should surprise no one that the system didn’t stay patched for long. FairUse4WM 1.2 gets around Microsoft’s patch, and also circumvents the copy protection in Windows Media DRM 9 and 11beta2 files.

That was Saturday. Any guess on how long it will take Microsoft to patch Media Player once again? And then how long before the FairUse4WM people update their own software?

Certainly much less time than it will take Microsoft and the recording industry to realize they’re playing a losing game, and that trying to make digital files uncopyable is like trying to make water not wet.

If Microsoft abandoned this Sisyphean effort and put the same development effort into building a fast and reliable patching system, the entire internet would benefit. But simple economics says it probably never will.

This essay originally appeared on Wired.com.

EDITED TO ADD (9/8): Commentary.

EDITED TO ADD (9/9): Microsoft released a patch for FairUse4WM 1.2 on Thursday, September 7th.

EDITED TO ADD (9/13): BSkyB halts download service because of the break.

EDITED TO ADD (9/16): Microsoft is threatening legal action against people hosting copies of FairUse4WM.

Posted on September 7, 2006 at 8:33 AM

iPod Thefts

What happens if you distribute 50 million small, valuable, and easily sellable objects into the hands of men, women, and children all over the world, and tell them to walk around the streets with them? Why, people steal them, of course.

“Rise in crime blamed on iPods”, yells the front page of London’s Metro. “Muggers targeting iPod users”, says ITV. This is the reaction to the government’s revelation that robberies across the UK have risen by 8 per cent in the last year, from 90,747 to 98,204. The Home Secretary, John Reid, attributes this to the irresistible lure of “young people carrying expensive goods, such as mobile phones and MP3 players”. A separate British Crime Survey, however, suggests robbery has risen by 22 per cent, to 311,000.

This shouldn’t come as a surprise; it wasn’t a surprise in the 1990s when there was a wave of high-priced sneaker thefts, and the parallel wave of laptop thefts isn’t one either.

What to do about it? Basically, there’s not much you can do except be careful. Muggings have long been a low-risk crime, so it makes sense that we’re seeing an increase in them as the value of what people are carrying on their person goes up. And people carrying portable music players have an unmistakable indicator: those ubiquitous ear buds.

The economics of this crime are such that it will continue until one of three things happens. One, portable music players become much less valuable. Two, the costs of the crime become much higher. Three, society deals with its underclass and gives them a better career option than iPod thief.

And on a related topic, here’s a great essay by Cory Doctorow on how Apple’s iTunes copy protection screws the music industry.

EDITED TO ADD (8/5): Eric Rescorla comments.

Posted on July 31, 2006 at 7:05 AM

Economics and Information Security

I’m sitting in a conference room at Cambridge University, trying to simultaneously finish this article for Wired News and pay attention to the presenter onstage.

I’m in this awkward situation because 1) this article is due tomorrow, and 2) I’m attending the fifth Workshop on the Economics of Information Security, or WEIS: to my mind, the most interesting computer security conference of the year.

The idea that economics has anything to do with computer security is relatively new. Ross Anderson and I seem to have stumbled upon the idea independently: he in his brilliant 2001 article, “Why Information Security Is Hard—An Economic Perspective” (.pdf), and I in various essays and presentations from that same period.

WEIS began a year later at the University of California at Berkeley and has grown ever since. It’s the only workshop where technologists get together with economists and lawyers and try to understand the problems of computer security.

And economics has a lot to teach computer security. We generally think of computer security as a problem of technology, but often systems fail because of misplaced economic incentives: The people who could protect a system are not the ones who suffer the costs of failure.

When you start looking, economic considerations are everywhere in computer security. Hospitals’ medical-records systems provide comprehensive billing-management features for the administrators who specify them, but are not so good at protecting patients’ privacy. Automated teller machines suffered from fraud in countries like the United Kingdom and the Netherlands, where poor regulation left banks without sufficient incentive to secure their systems, and allowed them to pass the cost of fraud along to their customers. And one reason the internet is insecure is that liability for attacks is so diffuse.

In all of these examples, the economic considerations of security are more important than the technical considerations.

More generally, many of the most basic security questions are at least as much economic as technical. Do we spend enough on keeping hackers out of our computer systems? Or do we spend too much? For that matter, do we spend appropriate amounts on police and military services? And are we spending our security budgets on the right things? In the shadow of 9/11, questions like these have a heightened importance.

Economics can actually explain many of the puzzling realities of internet security. Firewalls are common, e-mail encryption is rare: not because of the relative effectiveness of the technologies, but because of the economic pressures that drive companies to install them. Corporations rarely publicize information about intrusions; that’s because of economic incentives against doing so. And an insecure operating system is the international standard, in part, because its economic effects are largely borne not by the company that builds the operating system, but by the customers that buy it.

Some of the most controversial cyberpolicy issues also sit squarely between information security and economics. For example, the issue of digital rights management: Is copyright law too restrictive—or not restrictive enough—to maximize society’s creative output? And if it needs to be more restrictive, will DRM technologies benefit the music industry or the technology vendors? Is Microsoft’s Trusted Computing initiative a good idea, or just another way for the company to lock its customers into Windows, Media Player and Office? Any attempt to answer these questions becomes rapidly entangled with both information security and economic arguments.

WEIS encourages papers on these and other issues in economics and computer security. We heard papers presented on the economics of digital forensics of cell phones (.pdf)—if you have an uncommon phone, the police probably don’t have the tools to perform forensic analysis—and the effect of stock spam on stock prices: It actually works in the short term. We learned that more-educated wireless network users are not more likely to secure their access points (.pdf), and that the best predictor of wireless security is the default configuration of the router.

Other researchers presented economic models to explain patch management (.pdf), peer-to-peer worms (.pdf), investment in information security technologies (.pdf) and opt-in versus opt-out privacy policies (.pdf). There was a field study that tried to estimate the cost to the U.S. economy for information infrastructure failures (.pdf): less than you might think. And one of the most interesting papers looked at economic barriers to adopting new security protocols (.pdf), specifically DNS Security Extensions.

This is all heady stuff. In the early years, there was a bit of a struggle as the economists and the computer security technologists tried to learn each other’s languages. But now it seems that there’s a lot more synergy, and more collaboration between the two camps.

I’ve long said that the fundamental problems in computer security are no longer about technology; they’re about applying technology. Workshops like WEIS are helping us understand why good security technologies fail and bad ones succeed, and that kind of insight is critical if we’re going to improve security in the information age.

This essay originally appeared on Wired.com.

Posted on June 29, 2006 at 4:31 PM

Border Security and the DHS

Surreal story about a person coming into the U.S. from Iraq who is held up at the border because he used to sell copyrighted images on T-shirts:

Homeland Security, the $40-billion-a-year agency set up to combat terrorism after 9/11, has been given universal jurisdiction and can hold anyone on Earth for crimes unrelated to national security—even me for a court date I missed while I was in Iraq helping America deter terror—without asking what I had been doing in Pakistan among Islamic extremists the agency is designated to stop. Instead, some of its actions are erasing the lines of jurisdiction between local police and the federal state, scarily bringing the words “police” and “state” closer together. As long as we allow Homeland Security to act like a Keystone Stasi, terrorism will continue to win in destroying our freedom.

Kevin Drum mentions it, too.

Posted on June 16, 2006 at 9:31 AM

Who Owns Your Computer?

When technology serves its owners, it is liberating. When it is designed to serve others, over the owner’s objection, it is oppressive. There’s a battle raging on your computer right now—one that pits you against worms and viruses, Trojans, spyware, automatic update features and digital rights management technologies. It’s the battle to determine who owns your computer.

You own your computer, of course. You bought it. You paid for it. But how much control do you really have over what happens on your machine? Technically you might have bought the hardware and software, but you have less control over what it’s doing behind the scenes.

Using the hacker sense of the term, your computer is “owned” by other people.

It used to be that only malicious hackers were trying to own your computers. Whether through worms, viruses, Trojans or other means, they would try to install some kind of remote-control program onto your system. Then they’d use your computers to sniff passwords, make fraudulent bank transactions, send spam, initiate phishing attacks and so on. Estimates are that somewhere between hundreds of thousands and millions of computers are members of remotely controlled “bot” networks. Owned.

Now, things are not so simple. There are all sorts of interests vying for control of your computer. There are media companies that want to control what you can do with the music and videos they sell you. There are companies that use software as a conduit to collect marketing information, deliver advertising or do whatever it is their real owners require. And there are software companies that are trying to make money by pleasing not only their customers, but other companies they ally themselves with. All these companies want to own your computer.

Some examples:

  • Entertainment software: In October 2005, it emerged that Sony had distributed a rootkit with several music CDs—the same kind of software that crackers use to own people’s computers. This rootkit secretly installed itself when the music CD was played on a computer. Its purpose was to prevent people from doing things with the music that Sony didn’t approve of: It was a DRM system. If the exact same piece of software had been installed secretly by a hacker, this would have been an illegal act. But Sony believed that it had legitimate reasons for wanting to own its customers’ machines.
  • Antivirus: You might have expected your antivirus software to detect Sony’s rootkit. After all, that’s why you bought it. But initially, the security programs sold by Symantec and others did not detect it, because Sony had asked them not to. You might have thought that the software you bought was working for you, but you would have been wrong.
  • Internet services: Hotmail allows you to blacklist certain e-mail addresses, so that mail from them automatically goes into your spam trap. Have you ever tried blocking all that incessant marketing e-mail from Microsoft? You can’t.
  • Application software: Internet Explorer users might have expected the program to incorporate easy-to-use cookie handling and pop-up blockers. After all, other browsers do, and users have found them useful in defending against Internet annoyances. But Microsoft isn’t just selling software to you; it sells Internet advertising as well. It isn’t in the company’s best interest to offer users features that would adversely affect its business partners.
  • Spyware: Spyware is nothing but someone else trying to own your computer. These programs eavesdrop on your behavior—sometimes without your knowledge or consent—and report back to their real owners.
  • Internet security: It recently came out that the firewall in Microsoft Vista will ship with half its protections turned off. Microsoft claims that large enterprise users demanded this default configuration, but that makes no sense. It’s far more likely that Microsoft just doesn’t want adware—and DRM spyware—blocked by default.
  • Update: Automatic update features are another way software companies try to own your computer. While they can be useful for improving security, they also require you to trust your software vendor not to disable your computer for nonpayment, breach of contract or other presumed infractions.

Adware, software-as-a-service and Google Desktop search are all examples of some other company trying to own your computer. And Trusted Computing will only make the problem worse.

There is an inherent insecurity to technologies that try to own people’s computers: They allow individuals other than the computers’ legitimate owners to enforce policy on those machines. These systems invite attackers to assume the role of the third party and turn a user’s device against him.

Remember the Sony story: The most insecure feature in that DRM system was a cloaking mechanism that gave the rootkit control over whether you could see it executing or spot its files on your hard disk. By taking ownership away from you, it reduced your security.

If left to grow, these external control systems will fundamentally change your relationship with your computer. They will make your computer much less useful by letting corporations limit what you can do with it. They will make your computer much less reliable because you will no longer have control of what is running on your machine, what it does, and how the various software components interact. At the extreme, they will transform your computer into a glorified boob tube.

You can fight back against this trend by only using software that respects your boundaries. Boycott companies that don’t honestly serve their customers, that don’t disclose their alliances, that treat users like marketing assets. Use open-source software—software created and owned by users, with no hidden agendas, no secret alliances and no back-room marketing deals.

Just because computers were a liberating force in the past doesn’t mean they will be in the future. There is enormous political and economic power behind the idea that you shouldn’t truly own your computer or your software, despite having paid for it.

This essay originally appeared on Wired.com.

EDITED TO ADD (5/5): Commentary. It seems that some of my examples were not very good. I’ll come up with other ones for the Crypto-Gram version.

Posted on May 4, 2006 at 7:13 AM

Evading Copyright Through XOR

Monolith is an open-source program that can XOR two files together to create a third file, and—of course—can XOR that third file with one of the original two to create the other original file.

The website wonders about the copyright implications of all of this:

Things get interesting when you apply Monolith to copyrighted files. For example, munging two copyrighted files will produce a completely new file that, in most cases, contains no information from either file. In other words, the resulting Mono file is not “owned” by the original copyright holders (if owned at all, it would be owned by the person who did the munging). Given that the Mono file can be combined with either of the original, copyrighted files to reconstruct the other copyrighted file, this lack of Mono ownership may seem hard to believe.

The website then postulates this as a mechanism to get around copyright law:

What does this mean? This means that Mono files can be freely distributed.

So what? Mono files are useless without their corresponding Basis files, right? And the Basis files are copyrighted too, so they cannot be freely distributed, right? There is one more twist to this idea. What happens when we use Basis files that are freely distributable? For example, we could use a Basis file that is in the public domain or one that is licensed for free distribution. Now we are getting somewhere.

None of the aforementioned properties of Mono files change when we use freely distributable Basis files, since the same arguments hold. Mono files are still not copyrighted by the people who hold the copyrights over the corresponding Element files. Now we can freely distribute Mono files and Basis files.

Interesting? Not really. But what you can do with these files, in the privacy of your own home, might be interesting, depending on your proclivities. For example, you can use the Mono files and the Basis files to reconstruct the Element files.
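The underlying operation is a few lines of code. Here’s a minimal Python sketch of the idea (Monolith’s actual file format and options surely differ, and the file names are made up):

    from itertools import cycle
    from pathlib import Path

    def munge(a: Path, b: Path, out: Path) -> None:
        # XOR one file against another, cycling the second file's bytes
        # if it is shorter. XOR is its own inverse, so munging the output
        # with the same second file recovers the first: (A ^ B) ^ B == A.
        data = a.read_bytes()
        key = b.read_bytes()
        out.write_bytes(bytes(x ^ y for x, y in zip(data, cycle(key))))

    # Assuming the input files exist:
    munge(Path("element.mp3"), Path("basis.bin"), Path("mono.bin"))    # create
    munge(Path("mono.bin"), Path("basis.bin"), Path("recovered.mp3"))  # recover

Without the corresponding Basis file, the Mono file is useless; with it, the Element file falls right out.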

Clever, but it won’t hold up in court. In general, technical hair splitting is not an effective way to get around the law. My guess is that anyone who distributes that third file—they call it a “Mono” file—along with instructions on how to recover the copyrighted file is going to be found guilty of copyright violation.

The correct way to solve this problem is through law, not technology.

Posted on March 30, 2006 at 8:07 AM

"Lessons from the Sony CD DRM Episode"

“Lessons from the Sony CD DRM Episode” is an interesting paper by J. Alex Halderman and Edward W. Felten.

Abstract: In the fall of 2005, problems discovered in two Sony-BMG compact disc copy protection systems, XCP and MediaMax, triggered a public uproar that ultimately led to class-action litigation and the recall of millions of discs. We present an in-depth analysis of these technologies, including their design, implementation, and deployment. The systems are surprisingly complex and suffer from a diverse array of flaws that weaken their content protection and expose users to serious security and privacy risks. Their complexity, and their failure, makes them an interesting case study of digital rights management that carries valuable lessons for content companies, DRM vendors, policymakers, end users, and the security community.

Posted on February 17, 2006 at 2:11 PM

The Topology of Covert Conflict

Interesting research paper by Shishir Nagaraja and Ross Anderson. Implications for warfare, terrorism, and peer-to-peer file sharing:

Abstract:

Often an attacker tries to disconnect a network by destroying nodes or edges, while the defender counters using various resilience mechanisms. Examples include a music industry body attempting to close down a peer-to-peer file-sharing network; medics attempting to halt the spread of an infectious disease by selective vaccination; and a police agency trying to decapitate a terrorist organisation. Albert, Jeong and Barabási famously analysed the static case, and showed that vertex-order attacks are effective against scale-free networks. We extend this work to the dynamic case by developing a framework based on evolutionary game theory to explore the interaction of attack and defence strategies. We show, first, that naive defences don’t work against vertex-order attack; second, that defences based on simple redundancy don’t work much better, but that defences based on cliques work well; third, that attacks based on centrality work better against clique defences than vertex-order attacks do; and fourth, that defences based on complex strategies such as delegation plus clique resist centrality attacks better than simple clique defences. Our models thus build a bridge between network analysis and evolutionary game theory, and provide a framework for analysing defence and attack in networks where topology matters. They suggest definitions of efficiency of attack and defence, and may even explain the evolution of insurgent organisations from networks of cells to a more virtual leadership that facilitates operations rather than directing them. Finally, we draw some conclusions and present possible directions for future research.
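To get a feel for the baseline result, here’s a hedged Python sketch (assuming the networkx library; the paper’s evolutionary framework is far more elaborate) of a static vertex-order attack: remove the highest-degree nodes from a scale-free network and watch the connectivity collapse.

    import networkx as nx

    def vertex_order_attack(g: nx.Graph, fraction: float = 0.05) -> nx.Graph:
        # Remove the highest-degree vertices first -- the static attack
        # analysed by Albert, Jeong and Barabasi.
        g = g.copy()
        k = int(fraction * g.number_of_nodes())
        targets = sorted(g.degree, key=lambda nd: nd[1], reverse=True)[:k]
        g.remove_nodes_from(n for n, _ in targets)
        return g

    g = nx.barabasi_albert_graph(1000, 2)  # a scale-free network
    attacked = vertex_order_attack(g)
    largest = max(len(c) for c in nx.connected_components(attacked))
    print(f"largest surviving component: {largest} of {attacked.number_of_nodes()} nodes")

Removing the same number of nodes uniformly at random barely dents the giant component; that asymmetry is why targeted attacks matter, and why the defences the paper studies are topological.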

Posted on February 6, 2006 at 7:03 AM
