Entries Tagged "malware"


Sony Secretly Installs Rootkit on Computers

Mark Russinovich discovered a rootkit on his system. After much analysis, he determined that the rootkit had been installed as part of the DRM software that came with a CD he bought. The package cannot be uninstalled. Even worse, the package actively cloaks itself from process listings and the file system.

At that point I knew conclusively that the rootkit and its associated files were related to the First 4 Internet DRM software Sony ships on its CDs. Not happy having underhanded and sloppily written software on my system I looked for a way to uninstall it. However, I didn’t find any reference to it in the Control Panel’s Add or Remove Programs list, nor did I find any uninstall utility or directions on the CD or on First 4 Internet’s site. I checked the EULA and saw no mention of the fact that I was agreeing to have software put on my system that I couldn’t uninstall. Now I was mad.

Removing the rootkit kills Windows.

Could Sony have violated the Computer Misuse Act in the UK? If this isn’t clearly disclosed in the EULA, they have exceeded their privileges on the customer’s system by installing a rootkit to hide their software.

Certainly Mark has a reasonable lawsuit against Sony in the U.S.

EDITED TO ADD: The Washington Post is covering this story.

Sony lies about their rootkit:

November 2, 2005 – This Service Pack removes the cloaking technology component that has been recently discussed in a number of articles published regarding the XCP Technology used on SONY BMG content protected CDs. This component is not malicious and does not compromise security. However to alleviate any concerns that users may have about the program posing potential security vulnerabilities, this update has been released to enable users to remove this component from their computers.

Their update does not remove the rootkit; it just gets rid of the $sys$ cloaking.
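The cloaking itself is easy to probe for. Here’s a rough at-home check (my sketch, not Russinovich’s tooling), which assumes what the analyses report: the rootkit filters directory enumeration for names beginning with $sys$ but still lets you open a file by its exact name.

    /* sysprobe.c -- sketch of a $sys$ cloaking check; build with any
       Win32 C compiler. Creates a $sys$-prefixed file, then asks a
       directory search whether it can see it. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        const char *name = "$sys$probe.txt";
        HANDLE h = CreateFileA(name, GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "could not create test file\n");
            return 1;
        }
        CloseHandle(h);

        WIN32_FIND_DATAA fd;
        HANDLE find = FindFirstFileA(name, &fd);
        if (find == INVALID_HANDLE_VALUE) {
            /* The file exists, but enumeration can't see it. */
            printf("%s is invisible to directory listings: something is cloaking it\n", name);
        } else {
            printf("%s is visible: no $sys$ cloaking in effect\n", name);
            FindClose(find);
        }
        DeleteFileA(name);
        return 0;
    }

This is the same property the World of Warcraft cheaters exploit below: anything prefixed with $sys$ vanishes from every tool that relies on ordinary directory enumeration.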

Ed Felten has a great post on the issue:

The update is more than 3.5 megabytes in size, and it appears to contain new versions of almost all the files included in the initial installation of the entire DRM system, as well as creating some new files. In short, they’re not just taking away the rootkit-like function—they’re almost certainly adding things to the system as well. And once again, they’re not disclosing what they’re doing.

No doubt they’ll ask us to just trust them. I wouldn’t. The companies still assert—falsely—that the original rootkit-like software “does not compromise security” and “[t]here should be no concern” about it. So I wouldn’t put much faith in any claim that the new update is harmless. And the companies claim to have developed “new ways of cloaking files on a hard drive”. So I wouldn’t derive much comfort from carefully worded assertions that they have removed “the … component … that has been discussed”.

And you can use the rootkit to avoid World of Warcraft spyware.

World of Warcraft hackers have confirmed that the hiding capabilities of Sony BMG’s content protection software can make tools made for cheating in the online world impossible to detect.


EDITED TO ADD: F-Secure makes a good point:

A member of our IT security team pointed out a quite chilling thought about what might happen if record companies continue adding rootkit-based copy protection to their CDs.

In order to hide from the system, a rootkit must interface with the OS at a very low level, and in those areas there’s no room for error.

It is hard enough to program something at that level without having to worry about other programs trying to do something with the same parts of the OS.

Thus, if there were two DRM rootkits on the same system trying to hook the same APIs, the results would be highly unpredictable. Or actually, a system crash is quite a predictable result in such a situation.
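F-Secure’s scenario is easy to model in miniature. In this toy sketch (mine, not F-Secure’s), two “rootkits” hook the same dispatch pointer, each saving what it believes is the original; when the first one uninstalls, the second’s hook is silently dropped, and the kernel-mode analogue of that stale pointer is a crash.

    /* hookclash.c -- toy model of two rootkits hooking the same API.
       A function pointer stands in for a kernel dispatch-table entry. */
    #include <stdio.h>

    typedef void (*api_fn)(void);

    static void real_api(void) { puts("real API runs"); }

    static api_fn dispatch = real_api;   /* the shared "table entry" */

    static api_fn a_saved;               /* rootkit A */
    static void a_hook(void) { puts("A filters"); a_saved(); }

    static api_fn b_saved;               /* rootkit B */
    static void b_hook(void) { puts("B filters"); b_saved(); }

    int main(void)
    {
        a_saved = dispatch; dispatch = a_hook;   /* A installs first */
        b_saved = dispatch; dispatch = b_hook;   /* B saves A's hook as "original" */

        dispatch();   /* B -> A -> real API: the chain happens to work */

        dispatch = a_saved;   /* A uninstalls, unaware of B: B's hook is
                                 silently dropped. Had A instead unloaded
                                 while B's saved pointer still referenced
                                 A's code, the next call would jump into
                                 freed memory; in the kernel, that is a
                                 crash. */
        dispatch();   /* only the real API runs now */
        return 0;
    }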

EDITED TO ADD: Declan McCullagh has a good essay on the topic. There will be lawsuits.

EDITED TO ADD: The Italian police are getting involved.

EDITED TO ADD: Here’s a Trojan that uses Sony’s rootkit to hide.

EDITED TO ADD: Sony temporarily halts production of CDs protected with this technology.

Posted on November 1, 2005 at 10:17 AM

Trusted Computing Best Practices

The Trusted Computing Group (TCG) is an industry consortium that is trying to build more secure computers. They have a lot of members, although the board of directors consists of Microsoft, Sony, AMD, Intel, IBM, Sun, HP, and two smaller companies that are elected on a rotating basis.

The basic idea is that you build a computer from the ground up securely, with a core hardware “root of trust” called a Trusted Platform Module (TPM). Applications can run securely on the computer, can communicate with other applications and their owners securely, and can be sure that no untrusted applications have access to their data or code.
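The core mechanism is the measurement chain. Here’s a minimal sketch of the TPM “extend” operation (my illustration; a real TPM does this in hardware, and I’m using short strings as stand-ins for the SHA-1 digests of actual boot components): each component is folded into a Platform Configuration Register before it runs, so changing any component, or the order, changes the final value that attestation checks.

    /* pcr.c -- sketch of the TPM 1.2 extend operation.
       Build: cc pcr.c -lcrypto */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>

    /* PCR <- SHA1(PCR || measurement): sensitive to content and order. */
    static void pcr_extend(unsigned char pcr[SHA_DIGEST_LENGTH],
                           const unsigned char *measurement, size_t len)
    {
        SHA_CTX ctx;
        SHA1_Init(&ctx);
        SHA1_Update(&ctx, pcr, SHA_DIGEST_LENGTH);
        SHA1_Update(&ctx, measurement, len);
        SHA1_Final(pcr, &ctx);
    }

    int main(void)
    {
        unsigned char pcr[SHA_DIGEST_LENGTH] = {0};  /* PCRs reset to zero */

        /* Stand-ins for digests of components, measured in boot order. */
        const char *boot[] = { "bios", "bootloader", "kernel" };
        for (int i = 0; i < 3; i++)
            pcr_extend(pcr, (const unsigned char *)boot[i], strlen(boot[i]));

        for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
            printf("%02x", pcr[i]);   /* the value an attester would verify */
        printf("\n");
        return 0;
    }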

This sounds great, but it’s a double-edged sword. The same system that prevents worms and viruses from running on your computer might also stop you from using any legitimate software that your hardware or operating system vendor simply doesn’t like. The same system that prevents spyware from accessing your data files might also stop you from copying audio and video files. The same system that ensures that all the patches you download are legitimate might also prevent you from, well, doing pretty much anything.

(Ross Anderson has an excellent FAQ on the topic. I wrote about it back when Microsoft called it Palladium.)

In May, the Trusted Computing Group published a best practices document: “Design, Implementation, and Usage Principles for TPM-Based Platforms.” Written for users and implementers of TCG technology, the document tries to draw a line between good uses and bad uses of this technology.

The principles that TCG believes underlie the effective, useful, and acceptable design, implementation, and use of TCG technologies are the following:

  • Security: TCG-enabled components should achieve controlled access to designated critical secured data and should reliably measure and report the system’s security properties. The reporting mechanism should be fully under the owner’s control.
  • Privacy: TCG-enabled components should be designed and implemented with privacy in mind and adhere to the letter and spirit of all relevant guidelines, laws, and regulations. This includes, but is not limited to, the OECD Guidelines, the Fair Information Practices, and the European Union Data Protection Directive (95/46/EC).
  • Interoperability: Implementations and deployments of TCG specifications should facilitate interoperability. Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.
  • Portability of data: Deployment should support established principles and practices of data ownership.
  • Controllability: Each owner should have effective choice and control over the use and operation of the TCG-enabled capabilities that belong to them; their participation must be opt-in. Subsequently, any user should be able to reliably disable the TCG functionality in a way that does not violate the owner’s policy.
  • Ease-of-use: The nontechnical user should find the TCG-enabled capabilities comprehensible and usable.

It’s basically a good document, although there are some valid criticisms. I like that the document clearly states that coercive use of the technology (forcing people to use digital rights management systems, for example) is inappropriate:

The use of coercion to effectively force the use of the TPM capabilities is not an appropriate use of the TCG technology.

I like that the document tries to protect user privacy:

All implementations of TCG-enabled components should ensure that the TCG technology is not inappropriately used for data aggregation of personal information.

I wish that interoperability were more strongly enforced. The language has too much wiggle room for companies to break interoperability under the guise of security:

Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.

That sounds good, but what does “security” mean in that context? Security of the user against malicious code? Security of big media against people copying music and videos? Security of software vendors against competition? The big problem with TCG technology is that it can be used to further all three of these “security” goals, and this document is where “security” should be better defined.

Complaints aside, it’s a good document and we should all hope that companies follow it. Compliance is totally voluntary, but it’s the kind of document that governments and large corporations can point to and demand that vendors follow.

But there’s something fishy going on. Microsoft is doing its best to stall the document, and to ensure that it doesn’t apply to Vista (formerly known as Longhorn), Microsoft’s next-generation operating system.

The document was first written in the fall of 2003, and went through the standard review process in early 2004. Microsoft delayed the adoption and publication of the document, demanding more review. Eventually the document was published in June of this year (with a May date on the cover).

Meanwhile, the TCG built a purely software version of the specification: Trusted Network Connect (TNC). Basically, it’s a TCG system without a TPM.

The best practices document doesn’t apply to TNC, because Microsoft (as a member of the TCG board of directors) blocked it. The excuse is that the document hadn’t been written with software-only applications in mind, so it shouldn’t apply to software-only TCG systems.

This is absurd. The document outlines best practices for how the system is used. There’s nothing in it about how the system works internally. There’s nothing unique to hardware-based systems, nothing that would be different for software-only systems. You can go through the document yourself and replace all references to “TPM” or “hardware” with “software” (or, better yet, “hardware or software”) in five minutes. There are about a dozen changes, and none of them make any meaningful difference.

The only reason I can think of for all this Machiavellian maneuvering is that the TCG board of directors is making sure that the document doesn’t apply to Vista. If the document isn’t published until after Vista is released, then obviously it doesn’t apply.

Near as I can tell, no one is following this story. No one is asking why the TCG’s best practices apply only to hardware-based systems when the group is also writing software-only specifications. No one is asking why the document doesn’t apply to all TCG systems, since it’s obviously written without any particular technology in mind. And no one is asking why the TCG is delaying the adoption of any software best practices.

I believe the reason is Microsoft and Vista, but clearly there’s some investigative reporting to be done.

(A version of this essay previously appeared on CNet’s News.com and ZDNet.)

EDITED TO ADD: This comment completely misses my point. Which is odd; I thought I was pretty clear.

EDITED TO ADD: There is a thread on Slashdot on the topic.

EDITED TO ADD: The Sydney Morning Herald republished this essay. Also “The Age.”

Posted on August 31, 2005 at 8:27 AM

A Socio-Technical Approach to Internet Security

Interesting research grant from the NSF:

Technical security measures are often breached through social means, but little research has tackled the problem of system security in the context of the entire socio-technical system, with the interactions between the social and technical parts integrated into one model. Similar problems exist in the field of system safety, but recently a new accident model has been devised that uses a systems-theoretic approach to understand accident causation. Systems theory allows complex relationships between events and the system as a whole to be taken into account, so this new model permits an accident to be considered not simply as arising from a chain of individual component failures, but from the interactions among system components, including those that have not failed.

This exploratory research will examine how this new approach to safety can be applied to Internet security, using worms as a first example. The long-term goal is to create a general model of trustworthiness that can incorporate both safety and security, along with system modeling tools and analysis methods that can be used to create more trustworthy socio-technical systems. This research provides a unique opportunity to link two research disciplines, safety and security, that have many commonalities but, up to now, relatively little communication or interaction.

Posted on August 25, 2005 at 7:38 AM

Bluetooth Spam

Advertisers are beaming unwanted content to Bluetooth phones at a distance of 100 meters.

Sure, it’s annoying, but worse, there are serious security risks. Don’t believe this:

Furthermore, there is no risk of downloading viruses or other malware to the phone, says O’Regan: “We don’t send applications or executable code.” The system uses the phone’s native download interface so they should be able to see the kind of file they are downloading before accepting it, he adds.

This company might not send executable code, but someone else certainly could. And what percentage of people who use Bluetooth phones can recognize “the kind of file they are downloading”?

We’ve already seen two ways to steal data from Bluetooth devices. And we know that more and more sensitive data is being stored on these small devices, increasing the risk. This is almost certainly another avenue for attack.

Posted on August 23, 2005 at 12:24 PM

Stealing Imaginary Things

There’s a new Trojan that tries to steal World of Warcraft passwords.

That reminded me about this article, about people paying programmers to find exploits to make virtual money in multiplayer online games, and then selling the virtual proceeds for real money.

And here’s a page about ways people steal fake money in the online game Neopets, including cookie grabbers, fake login pages, fake contests, social engineering, and pyramid schemes.

I regularly say that every form of theft and fraud in the real world will eventually be duplicated in cyberspace. Perhaps every method of stealing real money will eventually be used to steal imaginary money, too.

Posted on August 10, 2005 at 7:36 AM

More Lynn/Cisco Information

There’s some new information on last week’s Lynn/Cisco/ISS story: Mike Lynn gave an interesting interview to Wired. Here’s some news about the FBI’s investigation. And here’s a video of Cisco/ISS ripping pages out of the BlackHat conference proceedings.

Someone is setting up a legal defense fund for Lynn. Send donations via PayPal to Abaddon@IO.com. (Does anyone know the URL?) According to BoingBoing, donations not used to defend Lynn will be donated to the EFF.

Copies of Lynn’s talk have popped up on the Internet, but some have been removed due to legal cease-and-desist letters from ISS attorneys, like this one. Currently, Lynn’s slides are here, here, here, here, here, here, here, here, here, here, here, here, here, here, and here. (The list is from BoingBoing.) Note that the presentation above is not the same as the one Lynn gave at BlackHat. The presentation at BlackHat didn’t have the ISS logo at the bottom, as the one on the Internet does. Also, the critical code components were blacked out. (Photographs of Lynn’s actual presentation slides were available here, but have been removed due to legal threats from ISS.)

There has been a bunch of commentary and analysis on the whole story. Business Week completely missed the point. Larry Seltzer at eWeek is more balanced.

Hackers are working overtime to reconstruct Lynn’s attack and write an exploit. This, of course, means that we’re in much more danger of there being a worm that makes use of this vulnerability.

The sad thing is that we could have avoided this. If Cisco and ISS had simply let Lynn present his work, it would have been just another obscure presentation amongst the sea of obscure presentations that is BlackHat. By attempting to muzzle Lynn, the two companies ensured that 1) the vulnerability was the biggest story of the conference, and 2) some group of hackers would turn the vulnerability into exploit code just to get back at them.

EDITED TO ADD: Jennifer Granick is Lynn’s attorney, and she has blogged about what happened at BlackHat and DefCon. And photographs of the slides Lynn actually used for his talk are here (for now, at least). Is it just me, or does it seem like ISS is pursuing this out of malice? With Cisco I think it was simple stupidity, but I think it’s malice with ISS.

EDITED TO ADD: I don’t agree with Ira Winkler’s comments, either.

EDITED TO ADD: ISS defends itself.

EDITED TO ADD: More commentary.

EDITED TO ADD: Nice rebuttal to Winkler’s essay.

Posted on August 3, 2005 at 1:31 PM

Diebold Opti-Scan Voting Machine

An analysis of Diebold’s Opti-Scan (paper ballot) voting machine.

Computer expert Harri Hursti gained control over Leon County memory cards, which handle the vote-reporting from the precincts. Dr. Herbert Thompson, a security expert, took control of the Leon County central tabulator by implanting a Trojan horse-like script.

Two programmers can become a lone programmer, says Hursti, who has figured out a way to control the entire central tabulator by way of a single memory card swap, and also how to make tampered polling place tapes match tampered central tabulator results. This more complex approach is untested, but based on testing performed May 26, Hursti says he has absolutely no reason to believe it wouldn’t work.

Three memory card tests demonstrated successful manipulation of election results, and showed that 1990 and 2002 FEC-required safeguards are being violated in the Diebold version 1.94 opti-scan system.

Posted on June 30, 2005 at 7:57 AM

Underhanded C Contest

As far as I know, this is the only security-related programming contest: the Underhanded C Contest. The object is to write clear, readable C code with hidden malicious behavior; in other words, to hide evil stuff in code that passes visual inspection by other programmers.
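To give a flavor of the genre, here’s a toy illustration of mine (not an actual contest entry): the function reads like a normal password check, but a single = where == belongs makes it accept any password.

    /* underhanded.c -- toy example of code that reads innocently. */
    #include <stdio.h>
    #include <string.h>

    static int check_password(const char *supplied, const char *stored)
    {
        int match = strcmp(supplied, stored);
        if (match = 0)        /* '=' assigns 0: this branch never runs... */
            return 0;
        return match == 0;    /* ...and 'match' has been forced to 0, so
                                 this reports success for ANY password */
    }

    int main(void)
    {
        printf("%d\n", check_password("not-the-password", "s3cret"));
        return 0;   /* prints 1: the "check" always passes */
    }

A modern compiler will warn about that particular trick, which is exactly why winning entries have to be subtler.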

This year’s challenge: covert fingerprinting.

Posted on June 21, 2005 at 12:34 PM
