Entries Tagged "DRM"


FairUse4WM News

A couple of weeks ago I wrote about the battle between Microsoft’s DRM system and FairUse4WM, which breaks it. The news for this week is that Microsoft has patched its security against FairUse4WM 1.2 and filed a lawsuit against the program’s anonymous authors, and those same anonymous authors have released FairUse4WM 1.3, which breaks the latest Microsoft patch.

We asked Viodentia about Redmond’s accusation that he and/or his associates broke into its systems in order to obtain the IP necessary to crack PlaysForSure; Vio replied that he’s “utterly shocked” by the charge. “I didn’t use any Microsoft source code. However, I believe that this lawsuit is a fishing expedition to get identity information, which can then be used to either bring more targeted lawsuits, or to cause other trouble.” We’re sure Microsoft would like its partners and the public to think that its DRM is generally infallible and could only be cracked by stealing its IP, so Viodentia’s conclusion about its legal tactics seems pretty fair, obvious, and logical to us.

What’s interesting about this continuing saga is how different it is from the normal find-vulnerability-then-patch sequence. The authors of FairUse4WM aren’t finding bugs and figuring out how to exploit them, forcing Microsoft to patch them. This is a sequence of crack, fix, re-crack, re-fix, etc.

The reason we’re seeing this—and this is going to be the norm for DRM systems—is that DRM is fundamentally an impossible problem. Making it work at all involves tricks, and breaking DRM is akin to “fixing” the software so the tricks don’t work. Anyone looking for a demonstration that technical DRM is doomed should watch this story unfold. (If Microsoft has any chance of winning at all, it’s via the legal route.)

Posted on September 28, 2006 at 12:55 PM

Microsoft and FairUse4WM

If you really want to see Microsoft scramble to patch a hole in its software, don’t look to vulnerabilities that impact countless Internet Explorer users or give intruders control of thousands of Windows machines. Just crack Redmond’s DRM.

Security patches used to be rare. Software vendors were happy to pretend that vulnerabilities in their products were illusory—and then quietly fix the problem in the next software release.

That changed with the full disclosure movement. Independent security researchers started going public with the holes they found, making vulnerabilities impossible for vendors to ignore. Then worms became more common; patching—and patching quickly—became the norm.

But even now, no software vendor likes to issue patches. Every patch is a public admission that the company made a mistake. Moreover, the process diverts engineering resources from new development. Patches annoy users by making them update their software, and piss them off even more if the update doesn’t work properly.

For the vendor, there’s an economic balancing act: how much more will your users be annoyed by unpatched software than they will be by the patch, and is that reduction in annoyance worth the cost of patching?

Since 2003, Microsoft’s strategy to balance these costs and benefits has been to batch patches: instead of issuing them one at a time, it’s been issuing them all together on the second Tuesday of each month. This decreases Microsoft’s development costs and increases the reliability of its patches.

The user pays for this strategy by remaining open to known vulnerabilities for up to a month. On the other hand, users benefit from a predictable schedule: Microsoft can test all the patches that are going out at the same time, which means that patches are more reliable and users are able to install them faster with more confidence.

In the absence of regulation, software liability, or some other mechanism to make unpatched software costly for the vendor, “Patch Tuesday” is the best users are likely to get.

Why? Because it makes near-term financial sense to Microsoft. The company is not a public charity, and if the internet suffers, or if computers are compromised en masse, the economic impact on Microsoft is still minimal.

Microsoft is in the business of making money, and keeping users secure by patching its software is only incidental to that goal.

There’s no better example of this principle in action than Microsoft’s behavior around the vulnerability in its digital rights management software PlaysForSure.

Last week, a hacker developed an application called FairUse4WM that strips the copy protection from Windows Media DRM 10 and 11 files.

Now, this isn’t a “vulnerability” in the normal sense of the word: digital rights management is not a feature that users want. Being able to remove copy protection is a good thing for some users, and completely irrelevant for everyone else. No user is ever going to say: “Oh no. I can now play the music I bought for my computer in my car. I must install a patch so I can’t do that anymore.”

But to Microsoft, this vulnerability is a big deal. It affects the company’s relationship with major record labels. It affects the company’s product offerings. It affects the company’s bottom line. Fixing this “vulnerability” is in the company’s best interest; never mind the customer.

So Microsoft wasted no time; it issued a patch three days after learning about the hack. There’s no month-long wait for copyright holders who rely on Microsoft’s DRM.

This clearly demonstrates that economics is a much more powerful motivator than security.

It should surprise no one that the system didn’t stay patched for long. FairUse4WM 1.2 gets around Microsoft’s patch, and also circumvents the copy protection in Windows Media DRM 9 and 11beta2 files.

That was Saturday. Any guess on how long it will take Microsoft to patch Media Player once again? And then how long before the FairUse4WM people update their own software?

Certainly much less time than it will take Microsoft and the recording industry to realize they’re playing a losing game, and that trying to make digital files uncopyable is like trying to make water not wet.

If Microsoft abandoned this Sisyphean effort and put the same development effort into building a fast and reliable patching system, the entire internet would benefit. But simple economics says it probably never will.

This essay originally appeared on Wired.com.

EDITED TO ADD (9/8): Commentary.

EDITED TO ADD (9/9): Microsoft released a patch for FairUse4WM 1.2 on Thursday, September 7th.

EDITED TO ADD (9/13): BSkyB halts download service because of the breaks.

EDITED TO ADD (9/16): Microsoft is threatening legal action against people hosting copies of FairUse4WM.

Posted on September 7, 2006 at 8:33 AM

iPod Thefts

What happens if you distribute 50 million small, valuable, and easily sellable objects into the hands of men, women, and children all over the world, and tell them to walk around the streets with them? Why, people steal them, of course.

“Rise in crime blamed on iPods”, yells the front page of London’s Metro. “Muggers targeting iPod users”, says ITV. This is the reaction to the government’s revelation that robberies across the UK have risen by 8 per cent in the last year, from 90,747 to 98,204. The Home Secretary, John Reid, attributes this to the irresistible lure of “young people carrying expensive goods, such as mobile phones and MP3 players”. A separate British Crime Survey, however, suggests robbery has risen by 22 per cent, to 311,000.

This shouldn’t come as a surprise, just as it wasn’t a surprise in the 1990s when there was a wave of high-priced sneaker thefts. Or that there is also a wave of laptop thefts.

What to do about it? Basically, there’s not much you can do except be careful. Muggings have long been a low-risk crime, so it makes sense that we’re seeing an increase in them as the value of what people are carrying on their person goes up. And people carrying portable music players have an unmistakable indicator: those ubiquitous ear buds.

The economics of this crime are such that it will continue until one of three things happens. One, portable music players become much less valuable. Two, the costs of the crime become much higher. Three, society deals with its underclass and gives them a better career option than iPod thief.

And on a related topic, here’s a great essay by Cory Doctorow on how Apple’s iTunes copy protection screws the music industry.

EDITED TO ADD (8/5): Eric Rescorla comments.

Posted on July 31, 2006 at 7:05 AM

Microsoft Windows Kill Switch

Does Microsoft have the ability to disable Windows remotely? Maybe:

Two weeks ago, I wrote about my serious objections to Microsoft’s latest salvo in the war against unauthorized copies of Windows. Two Windows Genuine Advantage components are being pushed onto users’ machines with insufficient notification and inadequate quality control, and the result is a big mess. (For details, see Microsoft presses the Stupid button.)

Guess what? WGA might be on the verge of getting even messier. In fact, one report claims WGA is about to become a Windows “kill switch,” and when I asked Microsoft for an on-the-record response, they refused to deny it.

And this, supposedly from someone at Microsoft Support:

He told me that “in the fall, having the latest WGA will become mandatory and if its not installed, Windows will give a 30 day warning and when the 30 days is up and WGA isn’t installed, Windows will stop working, so you might as well install WGA now.”

The stupidity of this idea is amazing. Not just the inevitability of false positives, but the potential for a hacker to co-opt the controls. I hope this rumor ends up not being true.

Although if they actually do it, the backlash could do more for non-Windows OSs than anything those OSs could do for themselves.

Posted on June 30, 2006 at 11:51 AM

Economics and Information Security

I’m sitting in a conference room at Cambridge University, trying to simultaneously finish this article for Wired News and pay attention to the presenter onstage.

I’m in this awkward situation because 1) this article is due tomorrow, and 2) I’m attending the fifth Workshop on the Economics of Information Security, or WEIS: to my mind, the most interesting computer security conference of the year.

The idea that economics has anything to do with computer security is relatively new. Ross Anderson and I seem to have stumbled upon the idea independently. He, in his brilliant article from 2001, “Why Information Security Is Hard—An Economic Perspective” (.pdf), and me in various essays and presentations from that same period.

WEIS began a year later at the University of California at Berkeley and has grown ever since. It’s the only workshop where technologists get together with economists and lawyers and try to understand the problems of computer security.

And economics has a lot to teach computer security. We generally think of computer security as a problem of technology, but often systems fail because of misplaced economic incentives: The people who could protect a system are not the ones who suffer the costs of failure.

When you start looking, economic considerations are everywhere in computer security. Hospitals’ medical-records systems provide comprehensive billing-management features for the administrators who specify them, but are not so good at protecting patients’ privacy. Automated teller machines suffered from fraud in countries like the United Kingdom and the Netherlands, where poor regulation left banks without sufficient incentive to secure their systems, and allowed them to pass the cost of fraud along to their customers. And one reason the internet is insecure is that liability for attacks is so diffuse.

In all of these examples, the economic considerations of security are more important than the technical considerations.

More generally, many of the most basic security questions are at least as much economic as technical. Do we spend enough on keeping hackers out of our computer systems? Or do we spend too much? For that matter, do we spend appropriate amounts on police and Army services? And are we spending our security budgets on the right things? In the shadow of 9/11, questions like these have a heightened importance.

Economics can actually explain many of the puzzling realities of internet security. Firewalls are common, e-mail encryption is rare: not because of the relative effectiveness of the technologies, but because of the economic pressures that drive companies to install them. Corporations rarely publicize information about intrusions; that’s because of economic incentives against doing so. And an insecure operating system is the international standard, in part, because its economic effects are largely borne not by the company that builds the operating system, but by the customers that buy it.

Some of the most controversial cyberpolicy issues also sit squarely between information security and economics. For example, the issue of digital rights management: Is copyright law too restrictive—or not restrictive enough—to maximize society’s creative output? And if it needs to be more restrictive, will DRM technologies benefit the music industry or the technology vendors? Is Microsoft’s Trusted Computing initiative a good idea, or just another way for the company to lock its customers into Windows, Media Player and Office? Any attempt to answer these questions becomes rapidly entangled with both information security and economic arguments.

WEIS encourages papers on these and other issues in economics and computer security. We heard papers presented on the economics of digital forensics of cell phones (.pdf)—if you have an uncommon phone, the police probably don’t have the tools to perform forensic analysis—and the effect of stock spam on stock prices: It actually works in the short term. We learned that more-educated wireless network users are not more likely to secure their access points (.pdf), and that the best predictor of wireless security is the default configuration of the router.

Other researchers presented economic models to explain patch management (.pdf), peer-to-peer worms (.pdf), investment in information security technologies (.pdf) and opt-in versus opt-out privacy policies (.pdf). There was a field study that tried to estimate the cost to the U.S. economy for information infrastructure failures (.pdf): less than you might think. And one of the most interesting papers looked at economic barriers to adopting new security protocols (.pdf), specifically DNS Security Extensions.

This is all heady stuff. In the early years, there was a bit of a struggle as the economists and the computer security technologists tried to learn each other’s languages. But now it seems that there’s a lot more synergy, and more collaborations between the two camps.

I’ve long said that the fundamental problems in computer security are no longer about technology; they’re about applying technology. Workshops like WEIS are helping us understand why good security technologies fail and bad ones succeed, and that kind of insight is critical if we’re going to improve security in the information age.

This essay originally appeared on Wired.com.

Posted on June 29, 2006 at 4:31 PM

Who Owns Your Computer?

When technology serves its owners, it is liberating. When it is designed to serve others, over the owner’s objection, it is oppressive. There’s a battle raging on your computer right now—one that pits you against worms and viruses, Trojans, spyware, automatic update features and digital rights management technologies. It’s the battle to determine who owns your computer.

You own your computer, of course. You bought it. You paid for it. But how much control do you really have over what happens on your machine? Technically you might have bought the hardware and software, but you have less control over what it’s doing behind the scenes.

Using the hacker sense of the term, your computer is “owned” by other people.

It used to be that only malicious hackers were trying to own your computers. Whether through worms, viruses, Trojans or other means, they would try to install some kind of remote-control program onto your system. Then they’d use your computers to sniff passwords, make fraudulent bank transactions, send spam, initiate phishing attacks and so on. Estimates are that somewhere between hundreds of thousands and millions of computers are members of remotely controlled “bot” networks. Owned.

Now, things are not so simple. There are all sorts of interests vying for control of your computer. There are media companies that want to control what you can do with the music and videos they sell you. There are companies that use software as a conduit to collect marketing information, deliver advertising or do whatever it is their real owners require. And there are software companies that are trying to make money by pleasing not only their customers, but other companies they ally themselves with. All these companies want to own your computer.

Some examples:

  • Entertainment software: In October 2005, it emerged that Sony had distributed a rootkit with several music CDs—the same kind of software that crackers use to own people’s computers. This rootkit secretly installed itself when the music CD was played on a computer. Its purpose was to prevent people from doing things with the music that Sony didn’t approve of: It was a DRM system. If the exact same piece of software had been installed secretly by a hacker, this would have been an illegal act. But Sony believed that it had legitimate reasons for wanting to own its customers’ machines.
  • Antivirus: You might have expected your antivirus software to detect Sony’s rootkit. After all, that’s why you bought it. But initially, the security programs sold by Symantec and others did not detect it, because Sony had asked them not to. You might have thought that the software you bought was working for you, but you would have been wrong.
  • Internet services: Hotmail allows you to blacklist certain e-mail addresses, so that mail from them automatically goes into your spam trap. Have you ever tried blocking all that incessant marketing e-mail from Microsoft? You can’t.
  • Application software: Internet Explorer users might have expected the program to incorporate easy-to-use cookie handling and pop-up blockers. After all, other browsers do, and users have found them useful in defending against Internet annoyances. But Microsoft isn’t just selling software to you; it sells Internet advertising as well. It isn’t in the company’s best interest to offer users features that would adversely affect its business partners.
  • Spyware: Spyware is nothing but someone else trying to own your computer. These programs eavesdrop on your behavior and report back to their real owners—sometimes without your knowledge or consent—about your behavior.
  • Internet security: It recently came out that the firewall in Microsoft Vista will ship with half its protections turned off. Microsoft claims that large enterprise users demanded this default configuration, but that makes no sense. It’s far more likely that Microsoft just doesn’t want adware—and DRM spyware—blocked by default.
  • Update: Automatic update features are another way software companies try to own your computer. While they can be useful for improving security, they also require you to trust your software vendor not to disable your computer for nonpayment, breach of contract or other presumed infractions.

Adware, software-as-a-service and Google Desktop search are all examples of some other company trying to own your computer. And Trusted Computing will only make the problem worse.

There is an inherent insecurity to technologies that try to own people’s computers: They allow individuals other than the computers’ legitimate owners to enforce policy on those machines. These systems invite attackers to assume the role of the third party and turn a user’s device against him.

Remember the Sony story: The most insecure feature in that DRM system was a cloaking mechanism that gave the rootkit control over whether you could see it executing or spot its files on your hard disk. By taking ownership away from you, it reduced your security.

If left to grow, these external control systems will fundamentally change your relationship with your computer. They will make your computer much less useful by letting corporations limit what you can do with it. They will make your computer much less reliable because you will no longer have control of what is running on your machine, what it does, and how the various software components interact. At the extreme, they will transform your computer into a glorified boob tube.

You can fight back against this trend by only using software that respects your boundaries. Boycott companies that don’t honestly serve their customers, that don’t disclose their alliances, that treat users like marketing assets. Use open-source software—software created and owned by users, with no hidden agendas, no secret alliances and no back-room marketing deals.

Just because computers were a liberating force in the past doesn’t mean they will be in the future. There is enormous political and economic power behind the idea that you shouldn’t truly own your computer or your software, despite having paid for it.

This essay originally appeared on Wired.com.

EDITED TO ADD (5/5): Commentary. It seems that some of my examples were not very good. I’ll come up with other ones for the Crypto-Gram version.

Posted on May 4, 2006 at 7:13 AM

Microsoft's BitLocker

BitLocker Drive Encryption is a new security feature in Windows Vista, designed to work with the Trusted Platform Module (TPM). Basically, it encrypts the C drive with a computer-generated key. In its basic mode, an attacker can still access the data on the drive by guessing the user’s password, but would not be able to get at the drive by booting the machine with another operating system, or by removing the drive and attaching it to another computer.

There are several modes for BitLocker. In the simplest mode, the TPM stores the key and the whole thing happens completely invisibly. The user does nothing differently, and notices nothing different.

The BitLocker key can also be stored on a USB drive. Here, the user has to insert the USB drive into the computer during boot. Then there’s a mode that uses a key stored in the TPM and a key stored on a USB drive. And finally, there’s a mode that uses a key stored in the TPM and a four-digit PIN that the user types into the computer. This happens early in the boot process, when there’s still ASCII text on the screen.

Note that if you configure BitLocker with a USB key or a PIN, password guessing doesn’t work. BitLocker doesn’t even let you get to a password screen to try.

For most people, basic mode is the best. People will keep their USB key in their computer bag with their laptop, so it won’t add much security. But if you can force users to attach it to their keychains—remember that you only need the key to boot the computer, not to operate the computer—and convince them to go through the trouble of sticking it in their computer every time they boot, then you’ll get a higher level of security.

There is a recovery key: optional but strongly encouraged. It is automatically generated by BitLocker, and it can be sent to some administrator or printed out and stored in some secure location. There are ways for an administrator to set group policy settings mandating this key.

There aren’t any back doors for the police, though.

You can get BitLocker to work on systems without a TPM, but it’s kludgy. You can only configure it for a USB key. And it will only work on some hardware: because BitLocker starts running before any device drivers are loaded, the BIOS must recognize USB drives in order for BitLocker to work.

Encryption particulars: The default data encryption algorithm is AES-128-CBC with an additional diffuser. The diffuser is designed to protect against ciphertext-manipulation attacks, and is independently keyed from AES-CBC so that it cannot damage the security you get from AES-CBC. Administrators can select the disk encryption algorithm through group policy. Choices are 128-bit AES-CBC plus the diffuser, 256-bit AES-CBC plus the diffuser, 128-bit AES-CBC, and 256-bit AES-CBC. (My advice: stick with the default.) The key management system uses 256-bit keys wherever possible. The only place where a 128-bit key limit is hard-coded is the recovery key, which is 48 digits (including checksums). It’s shorter because it has to be typed in manually; typing in 96 digits will piss off a lot of people—even if it is only for data recovery.
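
As a rough illustration of what per-sector AES-CBC encryption looks like, here is a minimal sketch in Python. It is not BitLocker’s implementation: the diffuser is omitted, key management is reduced to a random 16-byte key, and the sector-IV derivation is an assumption I’ve made for the example rather than the scheme Microsoft actually uses. It assumes the third-party “cryptography” package is installed.

    # Minimal sketch of per-sector AES-128-CBC encryption. Illustrative only:
    # the diffuser is omitted, and the IV derivation below is an assumed
    # example, not BitLocker's actual scheme.
    import hashlib
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    SECTOR_SIZE = 512  # bytes; each sector is encrypted independently

    def sector_iv(volume_key: bytes, sector_number: int) -> bytes:
        # Tie the IV to the sector number so identical plaintext sectors
        # produce different ciphertext. (Assumed derivation for this sketch.)
        material = volume_key + sector_number.to_bytes(8, "little")
        return hashlib.sha256(material).digest()[:16]

    def encrypt_sector(volume_key: bytes, sector_number: int, plaintext: bytes) -> bytes:
        assert len(plaintext) == SECTOR_SIZE  # multiple of the AES block size, so no padding
        encryptor = Cipher(algorithms.AES(volume_key),
                           modes.CBC(sector_iv(volume_key, sector_number))).encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()

    def decrypt_sector(volume_key: bytes, sector_number: int, ciphertext: bytes) -> bytes:
        decryptor = Cipher(algorithms.AES(volume_key),
                           modes.CBC(sector_iv(volume_key, sector_number))).decryptor()
        return decryptor.update(ciphertext) + decryptor.finalize()

    if __name__ == "__main__":
        key = os.urandom(16)              # 128-bit volume key
        sector = os.urandom(SECTOR_SIZE)  # stand-in for one disk sector
        assert decrypt_sector(key, 42, encrypt_sector(key, 42, sector)) == sector
        print("sector round-trips correctly")

The real system also has to decide where the volume key itself lives (sealed in the TPM, on a USB key, or both), which is what the modes described above are about.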

So, does this destroy dual-boot systems? Not really. If you have Vista running and then set up a dual-boot system, BitLocker will consider this change to be an attack and refuse to run. But then you can use the recovery key to boot into Windows and tell BitLocker to take the current configuration—with the dual-boot code—as correct. After that, your dual-boot system will work just fine, or so I’ve been told. You still won’t be able to share any files on your C drive between operating systems, but you will be able to share files on any other drive.

The problem is that it’s impossible to distinguish between a legitimate dual boot system and an attacker trying to use another OS—whether Linux or another instance of Vista—to get at the volume.

BitLocker is not a panacea. But it does mitigate a specific but significant risk: the risk of attackers getting at data on drives directly. It allows people to throw away or sell old drives without worry. It allows people to stop worrying about their drives getting lost or stolen. It stops a particular attack against data.

Right now BitLocker is only in the Ultimate and Enterprise editions of Vista. It’s a feature that is turned off by default. It is also Microsoft’s first TPM application. Presumably it will be enhanced in the future: allowing the encryption of other drives would be a good next step, for example.

EDITED TO ADD (5/3): BitLocker is not a DRM system. However, it is straightforward to turn it into a DRM system. Simply give programs the ability to require that files be stored only on BitLocker-enabled drives, and then only be transferable to other BitLocker-enabled drives. How easy this would be to implement, and how hard it would be to subvert, depends on the details of the system.

Posted on May 2, 2006 at 6:54 AM

"Lessons from the Sony CD DRM Episode"

“Lessons from the Sony CD DRM Episode” is an interesting paper by J. Alex Halderman and Edward W. Felten.

Abstract: In the fall of 2005, problems discovered in two Sony-BMG compact disc copy protection systems, XCP and MediaMax, triggered a public uproar that ultimately led to class-action litigation and the recall of millions of discs. We present an in-depth analysis of these technologies, including their design, implementation, and deployment. The systems are surprisingly complex and suffer from a diverse array of flaws that weaken their content protection and expose users to serious security and privacy risks. Their complexity, and their failure, makes them an interesting case study of digital rights management that carries valuable lessons for content companies, DRM vendors, policymakers, end users, and the security community.

Posted on February 17, 2006 at 2:11 PM

The Sony Rootkit Saga Continues

I’m just not able to keep up with all the twists and turns in this story. (My previous posts are here, here, here, and here, but a way better summary of the events is on BoingBoing: here, here, and here. Actually, you should just read every post on the topic in Freedom to Tinker. This is also worth reading.)

Many readers pointed out to me that the DMCA is one of the reasons antivirus companies aren’t able to disable invasive copy-protection systems like Sony’s rootkit: it may very well be illegal for them to do so. (Adam Shostack made this point.)

Here are two posts about the rootkit before Russinovich posted about it.

And it turns out you can easily defeat the rootkit:

With a small bit of tape on the outer edge of the CD, the PC then treats the disc as an ordinary single-session music CD and the commonly used music “rip” programs continue to work as usual.

(Original here.)

The fallout from this has been simply amazing. I’ve heard from many sources that the anti-copy-protection forces in Sony and other companies have newfound power, and that copy-protection has been set back years. Let’s hope that the entertainment industry realizes that digital copy protection is a losing game here, and starts trying to make money by embracing the characteristics of digital technology instead of fighting against them. I’ve written about that here and here (both from 2001).

Even Foxtrot has a cartoon on the topic.

I think I’m done here. Others are covering this much more extensively than I am. Unless there’s a new twist that I simply have to comment on….

EDITED TO ADD (11/21): The EFF is suing Sony. (The page is a good summary of the whole saga.)

EDITED TO ADD (11/22): Here’s a great idea; Sony can use a feature of the rootkit to inform infected users that they’re infected.

As it turns out, there’s a clear solution: A self-updating messaging system already built into Sony’s XCP player. Every time a user plays a XCP-affected CD, the XCP player checks in with Sony’s server. As Russinovich explained, usually Sony’s server sends back a null response. But with small adjustments on Sony’s end—just changing the output of a single script on a Sony web server—the XCP player can automatically inform users of the software improperly installed on their hard drives, and of their resulting rights and choices.

This is so obviously the right thing to do. My guess is that it’ll never happen.

Texas is suing Sony. According to the official statement:

The suit is also the first filed under the state’s spyware law of 2005. It alleges the company surreptitiously installed the spyware on millions of compact music discs (CDs) that consumers inserted into their computers when they play the CDs, which can compromise the systems.

And here’s something I didn’t know: the rootkit consumes 1–2% of CPU time, whether or not you’re playing a Sony CD. You’d think there would be a “theft of services” lawsuit in there somewhere.

EDITED TO ADD (11/30): Business Week has a good article on the topic.

Posted on November 21, 2005 at 4:34 PM

Sony's DRM Rootkit: The Real Story

This is my sixth column for Wired.com:

It’s a David and Goliath story of the tech blogs defeating a mega-corporation.

On Oct. 31, Mark Russinovich broke the story in his blog: Sony BMG Music Entertainment distributed a copy-protection scheme with music CDs that secretly installed a rootkit on computers. This software tool is run without your knowledge or consent—if it’s loaded on your computer with a CD, a hacker can gain and maintain access to your system and you wouldn’t know it.

The Sony code modifies Windows so you can’t tell it’s there, a process called “cloaking” in the hacker world. It acts as spyware, surreptitiously sending information about you to Sony. And it can’t be removed; trying to get rid of it damages Windows.
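
For a sense of how cloaking like this eventually gets spotted, here is a toy sketch of the “cross-view” idea behind rootkit detectors: compare what the normal file-listing API reports against a lower-level view of the same directory, and flag anything that appears in only one of them. Both views are simulated below (a real detector reads raw disk structures), and the “$sys$” prefix is the file-naming convention reported for the Sony software’s cloaked files.

    # Toy cross-view detection sketch. Both "views" are simulated here;
    # nothing below touches a real rootkit or a real disk.

    def raw_listing() -> set[str]:
        # Stand-in for a low-level scan of the directory's on-disk entries.
        return {"song.wma", "player.exe", "$sys$filesystem.dll", "$sys$cor.dll"}

    def api_listing(raw: set[str]) -> set[str]:
        # Stand-in for the view a cloaking driver presents to ordinary API
        # calls: entries matching its prefix are silently filtered out.
        return {name for name in raw if not name.startswith("$sys$")}

    def find_hidden(raw: set[str], visible: set[str]) -> set[str]:
        # Anything present on disk but missing from the API view is suspect.
        return raw - visible

    if __name__ == "__main__":
        raw = raw_listing()
        print("hidden entries:", sorted(find_hidden(raw, api_listing(raw))))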

This story was picked up by other blogs (including mine), followed by the computer press. Finally, the mainstream media took it up.

The outcry was so great that on Nov. 11, Sony announced it was temporarily halting production of that copy-protection scheme. That still wasn’t enough—on Nov. 14 the company announced it was pulling copy-protected CDs from store shelves and offered to replace customers’ infected CDs for free.

But that’s not the real story here.

It’s a tale of extreme hubris. Sony rolled out this incredibly invasive copy-protection scheme without ever publicly discussing its details, confident that its profits were worth modifying its customers’ computers. When its actions were first discovered, Sony offered a “fix” that didn’t remove the rootkit, just the cloaking.

Sony claimed the rootkit didn’t phone home when it did. On Nov. 4, Thomas Hesse, Sony BMG’s president of global digital business, demonstrated the company’s disdain for its customers when he said, “Most people don’t even know what a rootkit is, so why should they care about it?” in an NPR interview. Even Sony’s apology only admits that its rootkit “includes a feature that may make a user’s computer susceptible to a virus written specifically to target the software.”

However, imperious corporate behavior is not the real story either.

This drama is also about incompetence. Sony’s latest rootkit-removal tool actually leaves a gaping vulnerability. And Sony’s rootkit—designed to stop copyright infringement—itself may have infringed on copyright. As amazing as it might seem, the code seems to include an open-source MP3 encoder in violation of that library’s license agreement. But even that is not the real story.

It’s an epic of class-action lawsuits in California and elsewhere, and the focus of criminal investigations. The rootkit has even been found on computers run by the Department of Defense, to the Department of Homeland Security’s displeasure. While Sony could be prosecuted under U.S. cybercrime law, no one thinks it will be. And lawsuits are never the whole story.

This saga is full of weird twists. Some pointed out how this sort of software would degrade the reliability of Windows. Someone created malicious code that used the rootkit to hide itself. A hacker used the rootkit to avoid the spyware of a popular game. And there were even calls for a worldwide Sony boycott. After all, if you can’t trust Sony not to infect your computer when you buy its music CDs, can you trust it to sell you an uninfected computer in the first place? That’s a good question, but—again—not the real story.

It’s yet another situation where Macintosh users can watch, amused (well, mostly) from the sidelines, wondering why anyone still uses Microsoft Windows. But certainly, even that is not the real story.

The story to pay attention to here is the collusion between big media companies who try to control what we do on our computers and computer-security companies who are supposed to be protecting us.

Initial estimates are that more than half a million computers worldwide are infected with this Sony rootkit. Those are amazing infection numbers, making this one of the most serious internet epidemics of all time—on a par with worms like Blaster, Slammer, Code Red and Nimda.

What do you think of your antivirus company, the one that didn’t notice Sony’s rootkit as it infected half a million computers? And this isn’t one of those lightning-fast internet worms; this one has been spreading since mid-2004. Because it spread through infected CDs, not through internet connections, they didn’t notice? This is exactly the kind of thing we’re paying those companies to detect—especially because the rootkit was phoning home.

But much worse than not detecting it before Russinovich’s discovery was the deafening silence that followed. When a new piece of malware is found, security companies fall over themselves to clean our computers and inoculate our networks. Not in this case.

McAfee didn’t add detection code until Nov. 9, and as of Nov. 15 it doesn’t remove the rootkit, only the cloaking device. The company admits on its web page that this is a lousy compromise. “McAfee detects, removes and prevents reinstallation of XCP.” That’s the cloaking code. “Please note that removal will not impair the copyright-protection mechanisms installed from the CD. There have been reports of system crashes possibly resulting from uninstalling XCP.” Thanks for the warning.

Symantec’s response to the rootkit has, to put it kindly, evolved. At first the company didn’t consider XCP malware at all. It wasn’t until Nov. 11 that Symantec posted a tool to remove the cloaking. As of Nov. 15, it is still wishy-washy about it, explaining that “this rootkit was designed to hide a legitimate application, but it can be used to hide other objects, including malicious software.”

The only thing that makes this rootkit legitimate is that a multinational corporation put it on your computer, not a criminal organization.

You might expect Microsoft to be the first company to condemn this rootkit. After all, XCP corrupts Windows’ internals in a pretty nasty way. It’s the sort of behavior that could easily lead to system crashes—crashes that customers would blame on Microsoft. But it wasn’t until Nov. 13, when public pressure was just too great to ignore, that Microsoft announced it would update its security tools to detect and remove the cloaking portion of the rootkit.

Perhaps the only security company that deserves praise is F-Secure, the first and the loudest critic of Sony’s actions. And Sysinternals, of course, which hosts Russinovich’s blog and brought this to light.

Bad security happens. It always has and it always will. And companies do stupid things; always have and always will. But the reason we buy security products from Symantec, McAfee and others is to protect us from bad security.

I truly believed that even in the biggest and most-corporate security company there are people with hackerish instincts, people who will do the right thing and blow the whistle. That all the big security companies, with over a year’s lead time, would fail to notice or do anything about this Sony rootkit demonstrates incompetence at best, and lousy ethics at worst.

Microsoft I can understand. The company is a fan of invasive copy protection—it’s being built into the next version of Windows. Microsoft is trying to work with media companies like Sony, hoping Windows becomes the media-distribution channel of choice. And Microsoft is known for watching out for its business interests at the expense of those of its customers.

What happens when the creators of malware collude with the very companies we hire to protect us from that malware?

We users lose, that’s what happens. A dangerous and damaging rootkit gets introduced into the wild, and half a million computers get infected before anyone does anything.

Who are the security companies really working for? It’s unlikely that this Sony rootkit is the only example of a media company using this technology. Which security company has engineers looking for the others who might be doing it? And what will they do if they find one? What will they do the next time some multinational company decides that owning your computers is a good idea?

These questions are the real story, and we all deserve answers.

EDITED TO ADD (11/17): Slashdotted.

EDITED TO ADD (11/19): Details of Sony’s buyback program. And more GPL code was stolen and used in the rootkit.

Posted on November 17, 2005 at 9:08 AM
