Entries Tagged "essays"


Surveillance and Oversight

Christmas 2003, Las Vegas. Intelligence hinted at a terrorist attack on New Year’s Eve. In the absence of any real evidence, the FBI tried to compile a real-time database of everyone who was visiting the city. It collected customer data from airlines, hotels, casinos, rental car companies, even storage locker rental companies. All this information went into a massive database—probably close to a million people overall—that the FBI’s computers analyzed, looking for links to known terrorists. Of course, no terrorist attack occurred and no plot was discovered: The intelligence was wrong.

A typical American citizen spending the holidays in Vegas might be surprised to learn that the FBI collected his personal data, but this kind of thing is increasingly common. Since 9/11, the FBI has been collecting all sorts of personal information on ordinary Americans, and it shows no signs of letting up.

The FBI has two basic tools for gathering information on large groups of Americans. Both were created in the 1970s to gather information solely on foreign terrorists and spies. Both were greatly expanded by the USA Patriot Act and other laws, and are now routinely used against ordinary, law-abiding Americans who have no connection to terrorism. Together, they represent an enormous increase in police power in the United States.

The first is the FISA warrant (sometimes called a Section 215 warrant, after the section of the Patriot Act that expanded its scope). These warrants are issued in secret, by a secret court. The second is the national security letter, less well known but much more powerful, which FBI field supervisors can issue entirely on their own. The exact numbers are secret, but a recent Washington Post article estimated that some 30,000 letters are issued each year, demanding telephone records, banking data, customer data, library records, and so on.

In both cases, the recipients of these orders are prohibited by law from disclosing the fact that they received them. And two years ago, Attorney General John Ashcroft rescinded a 1995 guideline requiring that this information be destroyed if it is not relevant to the investigation for which it was collected. Now it can be saved indefinitely, and disseminated freely.

September 2005, Rotterdam. The police had already identified some of the 250 suspects in a soccer riot from the previous April; most of the rest had been captured on video but remained unidentified. In an effort to find them, the police sent text messages to 17,000 phones known to have been in the vicinity of the riots, asking anyone with information to contact them. The result was more evidence, and more arrests.

The differences between the Rotterdam and Las Vegas incidents are instructive. The Rotterdam police needed specific data for a specific purpose. Its members worked with federal justice officials to ensure that they complied with the country’s strict privacy laws. They obtained the phone numbers without any names attached, and deleted them immediately after sending the single text message. And their actions were public, widely reported in the press.

The FBI, on the other hand, operated with no judicial oversight. With only a vague hint that a Las Vegas attack might occur, the bureau vacuumed up an enormous amount of information. First its agents tried asking for the data; then they turned to national security letters and, in some cases, subpoenas. There was no requirement to delete the data, and there is every reason to believe that the FBI still has it all. And the bureau worked in secret; the only reason we know this happened is that the operation leaked.

These differences illustrate four principles that should guide police use of personal information. The first is oversight: In order to obtain personal information, the police should be required to show probable cause and convince a judge to issue a warrant for the specific information needed. The second is minimization: The police should get only the specific information they need, and no more; they should not be allowed to collect large blocks of information in order to go on “fishing expeditions,” looking for suspicious behavior. The third is transparency: The public should know, if not immediately then eventually, what information the police are getting and how it is being used. And the fourth is destruction: Any data the police obtain should be destroyed immediately after its court-authorized purpose is achieved. The police should not be able to hold on to it, just in case it might become useful at some future date.

This isn’t about our ability to combat terrorism; it’s about police power. Traditional law already gives police enormous power to peer into the personal lives of people, to use new crime-fighting technologies, and to correlate that information. But unfettered police power quickly resembles a police state, and checks on that power make us all safer.

As more of our lives become digital, we leave an ever-widening audit trail in our wake. This information has enormous social value—not just for national security and law enforcement, but for purposes as mundane as using cell-phone data to track road congestion, and as important as using medical data to track the spread of diseases. Our challenge is to make this information available when and where it needs to be, but also to protect the principles of privacy and liberty our country is built on.

This essay originally appeared in the Minneapolis Star-Tribune.

Posted on November 22, 2005 at 6:06 AM

Sony's DRM Rootkit: The Real Story

This is my sixth column for Wired.com:

It’s a David and Goliath story of the tech blogs defeating a mega-corporation.

On Oct. 31, Mark Russinovich broke the story in his blog: Sony BMG Music Entertainment distributed a copy-protection scheme with music CDs that secretly installed a rootkit on computers. This software tool is run without your knowledge or consent—if it’s loaded on your computer with a CD, a hacker can gain and maintain access to your system and you wouldn’t know it.

The Sony code modifies Windows so you can’t tell it’s there, a process called “cloaking” in the hacker world. It acts as spyware, surreptitiously sending information about you to Sony. And it can’t be removed; trying to get rid of it damages Windows.
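Once the cloaking behavior was widely reported, it was also easy to check for. Here is a minimal sketch of the test many people used, assuming the reported detail that XCP hides any file whose name begins with “$sys$” from ordinary directory listings; the file name and function below are illustrative, not part of any official removal tool:

```python
# Minimal sketch of the "$sys$" test for XCP-style cloaking.
# Assumption: the rootkit hides any file whose name starts with "$sys$"
# from normal Windows directory enumeration, while access by exact name
# still works. Run on the Windows machine you want to check.
import os
import tempfile

def xcp_cloaking_suspected(directory: str = tempfile.gettempdir()) -> bool:
    """Create a file with the magic prefix and see if it vanishes from listings."""
    name = "$sys$cloak_test.txt"
    path = os.path.join(directory, name)
    with open(path, "w") as f:
        f.write("cloaking test")
    try:
        visible = name in os.listdir(directory)
        return not visible  # an invisible file suggests the cloaking driver is active
    finally:
        os.remove(path)  # the file still exists even when it is hidden

if __name__ == "__main__":
    print("Cloaking suspected:", xcp_cloaking_suspected())
```

On a clean machine the test file shows up normally; on an infected one it silently disappears from listings, which is exactly why other malware authors started prefixing their own files with “$sys$.”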

This story was picked up by other blogs (including mine), followed by the computer press. Finally, the mainstream media took it up.

The outcry was so great that on Nov. 11, Sony announced it was temporarily halting production of that copy-protection scheme. That still wasn’t enough—on Nov. 14 the company announced it was pulling copy-protected CDs from store shelves and offered to replace customers’ infected CDs for free.

But that’s not the real story here.

It’s a tale of extreme hubris. Sony rolled out this incredibly invasive copy-protection scheme without ever publicly discussing its details, confident that its profits were worth modifying its customers’ computers. When its actions were first discovered, Sony offered a “fix” that didn’t remove the rootkit, just the cloaking.

Sony claimed the rootkit didn’t phone home when it did. On Nov. 4, Thomas Hesse, Sony BMG’s president of global digital business, demonstrated the company’s disdain for its customers when he said, “Most people don’t even know what a rootkit is, so why should they care about it?” in an NPR interview. Even Sony’s apology only admits that its rootkit “includes a feature that may make a user’s computer susceptible to a virus written specifically to target the software.”

However, imperious corporate behavior is not the real story either.

This drama is also about incompetence. Sony’s latest rootkit-removal tool actually leaves a gaping vulnerability. And Sony’s rootkit—designed to stop copyright infringement—itself may have infringed on copyright. As amazing as it might seem, the code seems to include an open-source MP3 encoder in violation of that library’s license agreement. But even that is not the real story.

It’s an epic of class-action lawsuits in California and elsewhere, and the focus of criminal investigations. The rootkit has even been found on computers run by the Department of Defense, to the Department of Homeland Security’s displeasure. While Sony could be prosecuted under U.S. cybercrime law, no one thinks it will be. And lawsuits are never the whole story.

This saga is full of weird twists. Some pointed out how this sort of software would degrade the reliability of Windows. Someone created malicious code that used the rootkit to hide itself. A hacker used the rootkit to avoid the spyware of a popular game. And there were even calls for a worldwide Sony boycott. After all, if you can’t trust Sony not to infect your computer when you buy its music CDs, can you trust it to sell you an uninfected computer in the first place? That’s a good question, but—again—not the real story.

It’s yet another situation where Macintosh users can watch, amused (well, mostly) from the sidelines, wondering why anyone still uses Microsoft Windows. But certainly, even that is not the real story.

The story to pay attention to here is the collusion between big media companies who try to control what we do on our computers and computer-security companies who are supposed to be protecting us.

Initial estimates are that more than half a million computers worldwide are infected with this Sony rootkit. Those are amazing infection numbers, making this one of the most serious internet epidemics of all time—on a par with worms like Blaster, Slammer, Code Red and Nimda.

What do you think of your antivirus company, the one that didn’t notice Sony’s rootkit as it infected half a million computers? And this isn’t one of those lightning-fast internet worms; this one has been spreading since mid-2004. Because it spread through infected CDs, not through internet connections, they didn’t notice? This is exactly the kind of thing we’re paying those companies to detect—especially because the rootkit was phoning home.

But much worse than not detecting it before Russinovich’s discovery was the deafening silence that followed. When a new piece of malware is found, security companies fall over themselves to clean our computers and inoculate our networks. Not in this case.

McAfee didn’t add detection code until Nov. 9, and as of Nov. 15 it doesn’t remove the rootkit, only the cloaking device. The company admits on its web page that this is a lousy compromise. “McAfee detects, removes and prevents reinstallation of XCP.” That’s the cloaking code. “Please note that removal will not impair the copyright-protection mechanisms installed from the CD. There have been reports of system crashes possibly resulting from uninstalling XCP.” Thanks for the warning.

Symantec’s response to the rootkit has, to put it kindly, evolved. At first the company didn’t consider XCP malware at all. It wasn’t until Nov. 11 that Symantec posted a tool to remove the cloaking. As of Nov. 15, it is still wishy-washy about it, explaining that “this rootkit was designed to hide a legitimate application, but it can be used to hide other objects, including malicious software.”

The only thing that makes this rootkit legitimate is that a multinational corporation put it on your computer, not a criminal organization.

You might expect Microsoft to be the first company to condemn this rootkit. After all, XCP corrupts Windows’ internals in a pretty nasty way. It’s the sort of behavior that could easily lead to system crashes—crashes that customers would blame on Microsoft. But it wasn’t until Nov. 13, when public pressure was just too great to ignore, that Microsoft announced it would update its security tools to detect and remove the cloaking portion of the rootkit.

Perhaps the only security company that deserves praise is F-Secure, the first and the loudest critic of Sony’s actions. And Sysinternals, of course, which hosts Russinovich’s blog and brought this to light.

Bad security happens. It always has and it always will. And companies do stupid things; always have and always will. But the reason we buy security products from Symantec, McAfee and others is to protect us from bad security.

I truly believed that even in the biggest and most-corporate security company there are people with hackerish instincts, people who will do the right thing and blow the whistle. That all the big security companies, with over a year’s lead time, would fail to notice or do anything about this Sony rootkit demonstrates incompetence at best, and lousy ethics at worst.

Microsoft I can understand. The company is a fan of invasive copy protection—it’s being built into the next version of Windows. Microsoft is trying to work with media companies like Sony, hoping Windows becomes the media-distribution channel of choice. And Microsoft is known for watching out for its business interests at the expense of those of its customers.

What happens when the creators of malware collude with the very companies we hire to protect us from that malware?

We users lose, that’s what happens. A dangerous and damaging rootkit gets introduced into the wild, and half a million computers get infected before anyone does anything.

Who are the security companies really working for? It’s unlikely that this Sony rootkit is the only example of a media company using this technology. Which security company has engineers looking for the others who might be doing it? And what will they do if they find one? What will they do the next time some multinational company decides that owning your computers is a good idea?

These questions are the real story, and we all deserve answers.

EDITED TO ADD (11/17): Slashdotted.

EDITED TO ADD (11/19): Details of Sony’s buyback program. And more GPL code was stolen and used in the rootkit.

Posted on November 17, 2005 at 9:08 AM

The Zotob Worm

If you’ll forgive the possible comparison to hurricanes, Internet epidemics are much like severe weather: they happen randomly, they affect some segments of the population more than others, and your previous preparation determines how effective your defense is.

Zotob was the first major worm outbreak since MyDoom in January 2004. It happened quickly—less than five days after Microsoft published a critical security bulletin (its 39th of the year). Zotob’s effects varied greatly from organization to organization: some networks were brought to their knees, while others didn’t even notice.

The worm started spreading on Sunday, 14 August. Honestly, it wasn’t much of a big deal, but it got a lot of play in the press because it hit several major news outlets, most notably CNN. If a news organization is personally affected by something, it’s much more likely to report extensively on it. But my company, Counterpane Internet Security, monitors more than 500 networks worldwide, and we didn’t think it was worth all the press coverage.

By the 17th, there were at least a dozen other worms that exploited the same vulnerability, both Zotob variants and others that were completely different. Most of them tried to recruit computers for bot networks, and some of the different variants warred against each other—stealing “owned” computers back and forth. If your network was infected, it was a mess.

Two weeks later, the 18-year-old who wrote the original Zotob worm was arrested, along with the 21-year-old who paid him to write it. It seems likely the person who funded the worm’s creation was not a hacker, but rather a criminal looking to profit.

The nature of worms has changed in the past few years. Previously, hackers looking for prestige or just wanting to cause damage were responsible for most worms. Today, they’re increasingly written or commissioned by criminals. By taking over computers, worms can send spam, launch denial-of-service extortion attacks, or search for credit-card numbers and other personal information.

What could you have done beforehand to protect yourself against Zotob and its kin? “Install the patch” is the obvious answer, but it’s not really a satisfactory one. There are simply too many patches. Although a single computer user can easily set up patches to automatically download and install—at least Microsoft Windows system patches—large corporate networks can’t. Far too often, patches cause other things to break.

It would be great to know which patches are actually important and which ones just sound important. Before that weekend in August, the patch that would have protected against Zotob was just another patch; by Monday morning, it was the most important thing a sysadmin could do to secure the network.

Microsoft had six new patches available on 9 August, three designated as critical (including the one that Zotob used), one important, and two moderate. Could you have guessed beforehand which one would have actually been critical? With the next patch release, will you know which ones you can put off and for which ones you need to drop everything, test, and install across your network?

Given that it’s impossible to know what’s coming beforehand, how you respond to an actual worm largely determines your defense’s effectiveness. You might need to respond quickly, and you most certainly need to respond accurately. Because it’s impossible to know beforehand what the necessary response should be, you need a process for that response. Employees come and go, so the only thing that ensures a continuity of effective security is a process. You need accurate and timely information to fuel this process. And finally, you need experts to decipher the information, determine what to do, and implement a solution.

The Zotob storm was both typical and unique. It started soon after the vulnerability was published, but I don’t think that made a difference. Even worms that use six-month-old vulnerabilities find huge swaths of the Internet unpatched. It was a surprise, but they all are.

This essay will appear in the November/December 2005 issue of IEEE Security & Privacy.

Posted on November 11, 2005 at 7:46 AM

The Security of RFID Passports

My fifth column for Wired:

The State Department has done a great job addressing specific security and privacy concerns, but its lack of technical skills is hurting it. The collision-avoidance ID is just one example of where, apparently, the State Department didn’t have enough of the expertise it needed to do this right.

Of course it can fix the problem, but the real issue is how many other problems like this are lurking in the details of its design? We don’t know, and I doubt the State Department knows either. The only way to vet its design, and to convince us that RFID is necessary, would be to open it up to public scrutiny.

The State Department’s plan to issue RFID passports by October 2006 is both precipitous and risky. It made a mistake designing this behind closed doors. There needs to be some pretty serious quality assurance and testing before deploying this system, and this includes careful security evaluations by independent security experts. Right now the State Department has no intention of doing that; it’s already committed to a scheme before knowing if it even works or if it protects privacy.

My previous entries on RFID passports are here, here, and here.

Posted on November 3, 2005 at 8:30 AM

Liabilities and Software Vulnerabilities

My fourth column for Wired discusses liability for software vulnerabilities. Howard Schmidt argued that individual programmers should be liable for vulnerabilities in their code. (There’s a Slashdot thread on Schmidt’s comments.) I say that it is the software vendors that should be liable, not the individual programmers.

Click on the essay for the whole argument, but here’s the critical point:

If end users can sue software manufacturers for product defects, then the cost of those defects to the software manufacturers rises. Manufacturers are now paying the true economic cost for poor software, and not just a piece of it. So when they’re balancing the cost of making their software secure versus the cost of leaving their software insecure, there are more costs on the latter side. This will provide an incentive for them to make their software more secure.

To be sure, making software more secure will cost money, and manufacturers will have to pass those costs on to users in the form of higher prices. But users are already paying extra costs for insecure software: costs of third-party security products, costs of consultants and security-services companies, direct and indirect costs of losses. Making software manufacturers liable moves those costs around, and as a byproduct causes the quality of software to improve.

This is why Schmidt’s idea won’t work. He wants individual software developers to be liable, and not the corporations. This will certainly give pissed-off users someone to sue, but it won’t reduce the externality and it won’t result in more-secure software.

EDITED TO ADD: Dan Farber has a good commentary on my essay. He says I got Schmidt wrong, that Schmidt wants programmers to be accountable but not liable. Be that as it may, I still think that making software vendors liable is a good idea.

There has been some confusion about this in the comments, that somehow this means that software vendors will be expected to achieve perfection and that they will be 100% liable for anything short of that. Clearly that’s ridiculous, and that’s not the way liabilities work. But equally ridiculous is the notion that software vendors should be 0% liable for defects. Somewhere in the middle there is a reasonable amount of liability, and that’s what I want the courts to figure out.

EDITED TO ADD: Howard Schmidt writes: “It is unfortunate that my comments were reported inaccurately; at least Dan Farber has been trying to correct the inaccurate reports with his blog. I do not support PERSONAL LIABILITY for the developers NOR do I support liability against vendors. Vendors are nothing more then people (employees included) and anything against them hurts the very people who need to be given better tools, training and support.”

Howard wrote an essay on the topic.

Posted on October 20, 2005 at 5:19 AM

Phishing

My third Wired column is online. It’s about phishing.

Financial companies have until now avoided taking on phishers in a serious way, because it’s cheaper and simpler to pay the costs of fraud. That’s unacceptable, however, because consumers who fall prey to these scams pay a price that goes beyond financial losses, in inconvenience, stress and, in some cases, blots on their credit reports that are hard to eradicate. As a result, lawmakers need to do more than create new punishments for wrongdoers—they need to create tough new incentives that will effectively force financial companies to change the status quo and improve the way they protect their customers’ assets.

EDITED TO ADD: There’s a discussion on Slashdot.

Posted on October 6, 2005 at 8:10 AM

Judge Roberts, Privacy, and the Future

My second essay for Wired was published today. It’s about the future privacy rulings of the Supreme Court:

Recent advances in technology have already had profound privacy implications, and there’s every reason to believe that this trend will continue into the foreseeable future. Roberts is 50 years old. If confirmed, he could be chief justice for the next 30 years. That’s a lot of future.

Privacy questions will arise from government actions in the “War on Terror”; they will arise from the actions of corporations and individuals. They will include questions of surveillance, profiling and search and seizure. And the decisions of the Supreme Court on these questions will have a profound effect on society.

Posted on September 22, 2005 at 12:28 PM

Katrina and Security

I had an op-ed published in the Minneapolis Star-Tribune today.

Toward a Truly Safer Nation
Published September 11, 2005

Leaving aside the political posturing and the finger-pointing, how did our nation mishandle Katrina so badly? After spending tens of billions of dollars on homeland security (hundreds of billions, if you include the war in Iraq) in the four years after 9/11, what did we do wrong? Why were there so many failures at the local, state and federal levels?

These are reasonable questions. Katrina was a natural disaster and not a terrorist attack, but that only matters before the event. Large-scale terrorist attacks and natural disasters differ in cause, but they’re very similar in aftermath. And one can easily imagine a Katrina-like aftermath to a terrorist attack, especially one involving nuclear, biological or chemical weapons.

Improving our disaster response was discussed in the months after 9/11. We were going to give money to local governments to fund first responders. We established the Department of Homeland Security to streamline the chains of command and facilitate efficient and effective response.

The problem is that we all got caught up in “movie-plot threats,” specific attack scenarios that capture the imagination and then the dollars. Whether it’s terrorists with box cutters or bombs in their shoes, we fear what we can imagine. We’re searching backpacks in the subways of New York, because this year’s movie plot is based on a terrorist bombing in the London subways.

Funding security based on movie plots looks good on television, and gets people reelected. But there are millions of possible scenarios, and we’re going to guess wrong. The billions spent defending airlines are wasted if the terrorists bomb crowded shopping malls instead.

Our nation needs to spend its homeland security dollars on two things: intelligence-gathering and emergency response. These two things will help us regardless of what the terrorists are plotting, and the second helps against both terrorist attacks and natural disasters.

Katrina demonstrated that we haven’t invested enough in emergency response. New Orleans police officers couldn’t talk with each other after power outages shut down their primary communications system—and there was no backup. The Department of Homeland Security, which was established in order to centralize federal response in a situation like this, couldn’t figure out who was in charge or what to do, and actively obstructed aid by others. FEMA did no better, and thousands died while turf battles were being fought.

Our government’s ineptitude in the aftermath of Katrina demonstrates how little we’re getting for all our security spending. It’s unconscionable that we’re wasting our money fingerprinting foreigners, profiling airline passengers, and invading foreign countries while emergency response at home goes underfunded.

Money spent on emergency response makes us safer, regardless of what the next disaster is, whether terrorist-made or natural.

This includes good communications on the ground, good coordination up the command chain, and resources—people and supplies—that can be quickly deployed wherever they’re needed.

Similarly, money spent on intelligence-gathering makes us safer, regardless of what the next disaster is. Against terrorism, that includes the NSA and the CIA. Against natural disasters, that includes the National Weather Service and the National Earthquake Information Center.

Katrina deftly illustrated homeland security’s biggest challenge: guessing correctly. The solution is to fund security that doesn’t rely on guessing. Defending against movie plots doesn’t make us appreciably safer. Emergency response does. It lessens the damage and suffering caused by disasters, whether man-made, like 9/11, or nature-made, like Katrina.

Posted on September 11, 2005 at 8:00 AM

Movie-Plot Threats

Wired.com just published an essay by me: “Terrorists Don’t Do Movie Plots.”

Sometimes it seems like the people in charge of homeland security spend too much time watching action movies. They defend against specific movie plots instead of against the broad threats of terrorism.

We all do it. Our imaginations run wild with detailed and specific threats. We imagine anthrax spread from crop dusters. Or a contaminated milk supply. Or terrorist scuba divers armed with almanacs. Before long, we’re envisioning an entire movie plot, without Bruce Willis saving the day. And we’re scared.

Psychologically, this all makes sense. Humans have good imaginations. Box cutters and shoe bombs conjure vivid mental images. “We must protect the Super Bowl” packs more emotional punch than the vague “we should defend ourselves against terrorism.”

The 9/11 terrorists used small pointy things to take over airplanes, so we ban small pointy things from airplanes. Richard Reid tried to hide a bomb in his shoes, so now we all have to take off our shoes. Recently, the Department of Homeland Security said that it might relax airplane security rules. It’s not that there’s a lessened risk of shoes, or that small pointy things are suddenly less dangerous. It’s that those movie plots no longer capture the imagination like they did in the months after 9/11, and everyone is beginning to see how silly (or pointless) they always were.

I’m now doing a bi-weekly column for them. I will post a link to the essays when they appear on the Wired.com site, and will reprint them in the next Crypto-Gram.

Posted on September 8, 2005 at 6:57 AM

Trusted Computing Best Practices

The Trusted Computing Group (TCG) is an industry consortium that is trying to build more secure computers. It has a lot of members, although the board of directors consists of Microsoft, Sony, AMD, Intel, IBM, Sun, HP, and two smaller companies that are elected on a rotating basis.

The basic idea is that you build a computer from the ground up securely, with a core hardware “root of trust” called a Trusted Platform Module (TPM). Applications can run securely on the computer, can communicate with other applications and their owners securely, and can be sure that no untrusted applications have access to their data or code.
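The mechanism behind that “root of trust” is simple: the TPM holds Platform Configuration Registers (PCRs) that can only be extended, never set directly, so each stage of the boot chain records a hash of the next stage before handing over control. Here is a minimal sketch of that extend operation as defined in the TPM 1.2 specification (SHA-1 based); the stage names and the single-register layout are illustrative, not any vendor’s actual boot sequence:

```python
# Sketch of the PCR "extend" operation at the core of TPM-based measured boot.
# TPM 1.2 defines: new_pcr = SHA1(old_pcr || measurement). Stage names below
# are illustrative placeholders, not real firmware components.
import hashlib

PCR_SIZE = 20  # bytes; a TPM 1.2 PCR holds one SHA-1 digest

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Fold a new measurement into a PCR; software can never set a PCR directly."""
    assert len(pcr) == PCR_SIZE and len(measurement) == PCR_SIZE
    return hashlib.sha1(pcr + measurement).digest()

def measure(component: bytes) -> bytes:
    """Hash a component (boot loader, kernel, etc.) before executing it."""
    return hashlib.sha1(component).digest()

# Each stage measures the next into the PCR before running it.
pcr = bytes(PCR_SIZE)  # PCRs reset to all zeros at power-on
for stage in [b"firmware", b"boot loader", b"kernel"]:
    pcr = extend(pcr, measure(stage))

# A verifier who knows the expected hashes can recompute this value and compare
# it against a signed quote from the TPM to learn what software actually booted.
print(pcr.hex())
```

Because the final PCR value depends on every measurement in order, swapping in an unexpected component changes the result, and applications (or remote parties) can refuse to release secrets to a platform whose PCRs don’t match known-good values.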

This sounds great, but it’s a double-edged sword. The same system that prevents worms and viruses from running on your computer might also stop you from using any legitimate software that your hardware or operating system vendor simply doesn’t like. The same system that prevents spyware from accessing your data files might also stop you from copying audio and video files. The same system that ensures that all the patches you download are legitimate might also prevent you from, well, doing pretty much anything.

(Ross Anderson has an excellent FAQ on the topic. I wrote about it back when Microsoft called it Palladium.)

In May, the Trusted Computing Group published a best practices document: “Design, Implementation, and Usage Principles for TPM-Based Platforms.” Written for users and implementers of TCG technology, the document tries to draw a line between good uses and bad uses of this technology.

The principles that TCG believes underlie the effective, useful, and acceptable design, implementation, and use of TCG technologies are the following:

  • Security: TCG-enabled components should achieve controlled access to designated critical secured data and should reliably measure and report the system’s security properties. The reporting mechanism should be fully under the owner’s control.
  • Privacy: TCG-enabled components should be designed and implemented with privacy in mind and adhere to the letter and spirit of all relevant guidelines, laws, and regulations. This includes, but is not limited to, the OECD Guidelines, the Fair Information Practices, and the European Union Data Protection Directive (95/46/EC).
  • Interoperability: Implementations and deployments of TCG specifications should facilitate interoperability. Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.
  • Portability of data: Deployment should support established principles and practices of data ownership.
  • Controllability: Each owner should have effective choice and control over the use and operation of the TCG-enabled capabilities that belong to them; their participation must be opt-in. Subsequently, any user should be able to reliably disable the TCG functionality in a way that does not violate the owner’s policy.
  • Ease-of-use: The nontechnical user should find the TCG-enabled capabilities comprehensible and usable.

It’s basically a good document, although there are some valid criticisms. I like that the document clearly states that coercive use of the technology—forcing people to use digital rights management systems, for example—is inappropriate:

The use of coercion to effectively force the use of the TPM capabilities is not an appropriate use of the TCG technology.

I like that the document tries to protect user privacy:

All implementations of TCG-enabled components should ensure that the TCG technology is not inappropriately used for data aggregation of personal information.

I wish that interoperability were more strongly enforced. The language has too much wiggle room for companies to break interoperability under the guise of security:

Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.

That sounds good, but what does “security” mean in that context? Security of the user against malicious code? Security of big media against people copying music and videos? Security of software vendors against competition? The big problem with TCG technology is that it can be used to further all three of these “security” goals, and this document is where “security” should be better defined.

Complaints aside, it’s a good document and we should all hope that companies follow it. Compliance is totally voluntary, but it’s the kind of document that governments and large corporations can point to and demand that vendors follow.

But there’s something fishy going on. Microsoft is doing its best to stall the document, and to ensure that it doesn’t apply to Vista (formerly known as Longhorn), Microsoft’s next-generation operating system.

The document was first written in the fall of 2003, and went through the standard review process in early 2004. Microsoft delayed the adoption and publication of the document, demanding more review. Eventually the document was published in June of this year (with a May date on the cover).

Meanwhile, the TCG built a purely software version of the specification: Trusted Network Connect (TNC). Basically, it’s a TCG system without a TPM.

The best practices document doesn’t apply to TNC, because Microsoft (as a member of the TCG board of directors) blocked it. The excuse is that the document hadn’t been written with software-only applications in mind, so it shouldn’t apply to software-only TCG systems.

This is absurd. The document outlines best practices for how the system is used. There’s nothing in it about how the system works internally. There’s nothing unique to hardware-based systems, nothing that would be different for software-only systems. You can go through the document yourself and replace all references to “TPM” or “hardware” with “software” (or, better yet, “hardware or software”) in five minutes. There are about a dozen changes, and none of them make any meaningful difference.

The only reason I can think of for all this Machiavellian maneuvering is that the TCG board of directors is making sure that the document doesn’t apply to Vista. If the document isn’t published until after Vista is released, then obviously it doesn’t apply.

Near as I can tell, no one is following this story. No one is asking why TCG best practices apply to hardware-based systems if they’re writing software-only specifications. No one is asking why the document doesn’t apply to all TCG systems, since it’s obviously written without any particular technology in mind. And no one is asking why the TCG is delaying the adoption of any software best practices.

I believe the reason is Microsoft and Vista, but clearly there’s some investigative reporting to be done.

(A version of this essay previously appeared on CNet’s News.com and ZDNet.)

EDITED TO ADD: This comment completely misses my point. Which is odd; I thought I was pretty clear.

EDITED TO ADD: There is a thread on Slashdot on the topic.

EDITED TO ADD: The Sydney Morning Herald republished this essay. Also “The Age.”

Posted on August 31, 2005 at 8:27 AM
