Entries Tagged "control"

Page 8 of 8

A U.S. National Firewall

This seems like a really bad idea:

Government has the right—even the responsibility—to see that its laws and regulations are enforced. The Internet is no exception. When the Internet is being used on American soil, it should comply with American law. And if it doesn’t, then the government should be able to step in and filter the illegal sites and activities.

Posted on September 7, 2005 at 3:53 PM

Trusted Computing Best Practices

The Trusted Computing Group (TCG) is an industry consortium that is trying to build more secure computers. It has a lot of members, although the board of directors consists of Microsoft, Sony, AMD, Intel, IBM, Sun, HP, and two smaller companies that are elected on a rotating basis.

The basic idea is that you build a computer from the ground up securely, with a core hardware “root of trust” called a Trusted Platform Module (TPM). Applications can run securely on the computer, can communicate with other applications and their owners securely, and can be sure that no untrusted applications have access to their data or code.
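At the bottom of that root of trust is a simple mechanism: before each stage of the boot process runs, it is hashed into a register, so the final register value commits to the entire chain. Here is a minimal sketch of that “extend” operation, illustrative only and assuming SHA-1 as in TPM 1.2-era specifications; a real TPM performs it in tamper-resistant hardware on a Platform Configuration Register (PCR):

    import hashlib

    def extend(pcr, component):
        # TPM-style extend: new PCR = SHA1(old PCR || SHA1(component)).
        # Order matters, so the result commits to the whole boot chain.
        return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

    pcr = bytes(20)  # PCRs start zeroed at power-on
    for stage in [b"BIOS", b"bootloader", b"OS kernel"]:
        pcr = extend(pcr, stage)

    print(pcr.hex())  # compare against a known-good value to detect tampering

A remote party that knows the expected final value can tell whether any stage of the chain was altered; that reporting is the basis of the guarantees above.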

This sounds great, but it’s a double-edged sword. The same system that prevents worms and viruses from running on your computer might also stop you from using any legitimate software that your hardware or operating system vendor simply doesn’t like. The same system that keeps spyware from accessing your data files might also stop you from copying audio and video files. The same system that ensures that all the patches you download are legitimate might also prevent you from, well, doing pretty much anything.

(Ross Anderson has an excellent FAQ on the topic. I wrote about it back when Microsoft called it Palladium.)

In May, the Trusted Computing Group published a best practices document: “Design, Implementation, and Usage Principles for TPM-Based Platforms.” Written for users and implementers of TCG technology, the document tries to draw a line between good uses and bad uses of this technology.

The principles that TCG believes underlie the effective, useful, and acceptable design, implementation, and use of TCG technologies are the following:

  • Security: TCG-enabled components should achieve controlled access to designated critical secured data and should reliably measure and report the system’s security properties. The reporting mechanism should be fully under the owner’s control.
  • Privacy: TCG-enabled components should be designed and implemented with privacy in mind and adhere to the letter and spirit of all relevant guidelines, laws, and regulations. This includes, but is not limited to, the OECD Guidelines, the Fair Information Practices, and the European Union Data Protection Directive (95/46/EC).
  • Interoperability: Implementations and deployments of TCG specifications should facilitate interoperability. Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.
  • Portability of data: Deployment should support established principles and practices of data ownership.
  • Controllability: Each owner should have effective choice and control over the use and operation of the TCG-enabled capabilities that belong to them; their participation must be opt-in. Subsequently, any user should be able to reliably disable the TCG functionality in a way that does not violate the owner’s policy.
  • Ease-of-use: The nontechnical user should find the TCG-enabled capabilities comprehensible and usable.

It’s basically a good document, although there are some valid criticisms. I like that the document clearly states that coercive use of the technology—forcing people to use digital rights management systems, for example—is inappropriate:

The use of coercion to effectively force the use of the TPM capabilities is not an appropriate use of the TCG technology.

I like that the document tries to protect user privacy:

All implementations of TCG-enabled components should ensure that the TCG technology is not inappropriately used for data aggregation of personal information.

I wish that interoperability were more strongly enforced. The language has too much wiggle room for companies to break interoperability under the guise of security:

Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.

That sounds good, but what does “security” mean in that context? Security of the user against malicious code? Security of big media against people copying music and videos? Security of software vendors against competition? The big problem with TCG technology is that it can be used to further all three of these “security” goals, and this document is where “security” should be better defined.

Complaints aside, it’s a good document and we should all hope that companies follow it. Compliance is totally voluntary, but it’s the kind of document that governments and large corporations can point to and demand that vendors follow.

But there’s something fishy going on. Microsoft is doing its best to stall the document, and to ensure that it doesn’t apply to Vista (formerly known as Longhorn), Microsoft’s next-generation operating system.

The document was first written in the fall of 2003, and went through the standard review process in early 2004. Microsoft delayed the adoption and publication of the document, demanding more review. Eventually the document was published in June of this year (with a May date on the cover).

Meanwhile, the TCG built a purely software version of the specification: Trusted Network Connect (TNC). Basically, it’s a TCG system without a TPM.

The best practices document doesn’t apply to TNC, because Microsoft (as a member of the TCG board of directors) blocked it. The excuse is that the document hadn’t been written with software-only applications in mind, so it shouldn’t apply to software-only TCG systems.

This is absurd. The document outlines best practices for how the system is used. There’s nothing in it about how the system works internally. There’s nothing unique to hardware-based systems, nothing that would be different for software-only systems. You can go through the document yourself and replace all references to “TPM” or “hardware” with “software” (or, better yet, “hardware or software”) in five minutes. There are about a dozen changes, and none of them make any meaningful difference.
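The edit really is that mechanical; as an illustration, here is the substitution in a few lines of Python, run over an invented sentence in the document’s style:

    import re

    # Invented example sentence in the style of the document; a real run
    # would load the full text instead.
    text = "The TPM must allow the owner to disable all hardware capabilities."

    # Generalize the hardware-specific terms; none of the surrounding
    # recommendations depend on them.
    text = re.sub(r"\bTPM\b", "TPM or software equivalent", text)
    text = re.sub(r"\bhardware\b", "hardware or software", text)

    print(text)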

The only reason I can think of for all this Machiavellian maneuvering is that the TCG board of directors is making sure that the document doesn’t apply to Vista. If the document isn’t published until after Vista is released, then obviously it doesn’t apply.

Near as I can tell, no one is following this story. No one is asking why TCG best practices should apply only to hardware-based systems when the group is also writing software-only specifications. No one is asking why the document doesn’t apply to all TCG systems, since it’s obviously written without any particular technology in mind. And no one is asking why the TCG is delaying the adoption of any software best practices.

I believe the reason is Microsoft and Vista, but clearly there’s some investigative reporting to be done.

(A version of this essay previously appeared on CNet’s News.com and ZDNet.)

EDITED TO ADD: This comment completely misses my point. Which is odd; I thought I was pretty clear.

EDITED TO ADD: There is a thread on Slashdot on the topic.

EDITED TO ADD: The Sydney Morning Herald republished this essay. Also “The Age.”

Posted on August 31, 2005 at 8:27 AM

The Myth of Panic

This New York Times op-ed argues that panic is largely a myth. People feel stressed, but they behave rationally; the behavior only gets called “panic” because of the stress.

If our leaders are really planning for panic, in the technical sense, then they are at best wasting resources on a future that is unlikely to happen. At worst, they may be doing our enemies’ work for them—while people are amazing under pressure, it cannot help to have predictions of panic drummed into them by supposed experts.

It can set up long-term foreboding, causing people to question whether they have the mettle to handle terrorists’ challenges. Studies have found that when interpreting ambiguous situations, people look to one another for cues. Panicky warnings can color the cues that people draw from one another when interpreting ambiguous situations, like seeing a South Asian-looking man with a backpack get on a bus.

Nor can it help if policy makers talk about possible draconian measures (like martial law and rigidly policed quarantines) to control the public and deny its right to manage its own affairs. The very planning for such measures can alienate citizens and the authorities from each other.

Whatever its source, the myth of panic is a threat to our welfare. Given the difficulty of using the term precisely and the rarity of actual panic situations, the cleanest solution is for the politicians and the press to avoid the term altogether. It’s time to end chatter about “panic” and focus on ways to support public resilience in an emergency.

Posted on August 9, 2005 at 7:25 AM

T-Mobile Hack

For at least seven months last year, a hacker had access to T-Mobile’s customer network. He’s known to have accessed information belonging to 400 customers—names, Social Security numbers, voicemail messages, SMS messages, photos—and probably had the ability to access data belonging to any of T-Mobile’s 16.3 million U.S. customers. But in its fervor to report on the security of cell phones, and T-Mobile in particular, the media missed the most important point of the story: The security of much of our data is not under our control.

This is new. A dozen years ago, if someone wanted to look through your mail, they would have to break into your house. Now they can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your house; now it’s on a computer owned by a telephone company. Your financial data is on websites protected only by passwords. The list of books you browse, and the books you buy, is stored in the computers of some online bookseller. Your affinity card allows your supermarket to know what food you like. Data that used to be under your direct control is now controlled by others.

We have no choice but to trust these companies with our privacy, even though the companies have little incentive to protect that privacy. T-Mobile suffered some bad press for its lousy security, nothing more. It’ll spend some money improving its security, but it’ll be security designed to protect its reputation from bad PR, not security designed to protect the privacy of its customers.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as that data is held by others. The police need a warrant to read the e-mail on your computer, but they don’t need one to read it off the backup tapes at your ISP. According to the Supreme Court, that’s not a search as defined by the Fourth Amendment.

This isn’t a technology problem, it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search requiring a warrant—even though it occurred at the phone company switching office—it must recognize that reading e-mail at an ISP is no different.

This essay appeared in eWeek.

Posted on February 14, 2005 at 4:26 PM

The Digital Person

Last week, I stayed at the St. Regis hotel in Washington, DC. It was my first visit, and the management gave me a questionnaire, asking me things like my birthday, my spouse’s name and birthday, my anniversary, and my favorite fruits, drinks, and sweets. The purpose was clear; the hotel wanted to be able to offer me a more personalized service the next time I visited. And it was a purpose I agreed with; I wanted more personalized service. But I was very uneasy about filling out the form.

It wasn’t that the information was particularly private. I make no secret of my birthday, or anniversary, or food preferences. Much of that information is even floating around the Web somewhere. Secrecy wasn’t the issue.

The issue was control. In the United States, information about a person is owned by the person who collects it, not by the person it is about. There are specific exceptions in the law, but they’re few and far between. There are no broad data protection laws, as you find in the European Union. There are no Privacy Commissioners, as you find in Canada. Privacy law in the United States is largely about secrecy: if the information is not secret, there’s little you can do to control its dissemination.

As a result, enormous databases exist that are filled with personal information. These databases are owned by marketing firms, credit bureaus, and the government. Amazon knows what books we buy. Our supermarket knows what foods we eat. Credit card companies know quite a lot about our purchasing habits. Credit bureaus know about our financial history, and what they don’t know is contained in bank records. Health insurance records contain details about our health and well-being. Government records contain our Social Security numbers, birthdates, addresses, mother’s maiden names, and a host of other things. Many driver’s license records contain digital pictures.

All of this data is being combined, indexed, and correlated. And it’s being used for all sorts of things. Targeted marketing campaigns are just the tip of the iceberg. This information is used by potential employers to judge our suitability as employees, by potential landlords to determine our suitability as renters, and by the government to determine our likelihood of being a terrorist.
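The mechanics of that correlation are mundane: once two databases share a single identifier, joining them is trivial. A toy sketch in Python, with every record invented:

    # Two unrelated databases keyed on the same identifier (invented data).
    purchases = {"123-45-6789": ["diet books", "home gym equipment"]}
    insurance = {"123-45-6789": {"premium": 1800, "risk_class": "standard"}}

    for ssn, items in purchases.items():
        profile = insurance.get(ssn)
        if profile is not None:
            # Neither database is especially revealing on its own;
            # the join is what builds the dossier.
            print(ssn, items, profile)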

Some stores are beginning to use our data to determine whether we are desirable customers or not. If customers take advantage of too many discount offers or make too many returns, they may be profiled as “bad” customers and be treated differently from the “good” customers.

And with alarming frequency, our data is being abused by identity thieves. The businesses that gather our data don’t care much about keeping it secure. So identity theft is a problem where those who suffer from it—the individuals—are not in a position to improve security, and those who are in a position to improve security don’t suffer from the problem.

The issue here is not about secrecy, it’s about control. The issue is that both government and commercial organizations are building “digital dossiers” about us, and that these dossiers are being used to judge and categorize us through some secret process.

A new book by George Washington University Law Professor Daniel Solove examines the problem of the growing accumulation of personal information in enormous databases. The book is called The Digital Person: Technology and Privacy in the Information Age, and it is a fascinating read.

Solove’s book explores this problem from a legal perspective, explaining what the problem is, how current U.S. law fails to deal with it, and what we should do to protect privacy today. It’s an unusually perceptive discussion of one of the most vexing problems of the digital age—our loss of control over our personal information. It’s a fascinating journey into the almost surreal ways personal information is hoarded, used, and abused in the digital age.

Solove argues that our common conceptualization of the privacy problem as Big Brother—some faceless organization knowing our most intimate secrets—is only one facet of the issue. A better metaphor can be found in Franz Kafka’s The Trial. In the book, a vast faceless bureaucracy constructs a huge dossier about a person, who can’t find out what information exists about him in the dossier, why the information has been gathered, or what it will be used for. Privacy is not about intimate secrets; it’s about who has control of the millions of pieces of personal data that we leave like droppings as we go through our daily life. And until the U.S. legal system recognizes this fact, Americans will continue to live in a world where they have little control over their digital person.

In the end, I didn’t complete the questionnaire from the St. Regis Hotel. While I was fine with the St. Regis in Washington, DC, having that information to make my subsequent stays a little more personal, and was probably fine with that information being shared among other St. Regis hotels, I wasn’t comfortable with the St. Regis doing whatever they wanted with that information. I wasn’t comfortable with them selling the information to a marketing database. I wasn’t comfortable with anyone being able to buy that information. I wasn’t comfortable with that information ending up in a database of my habits, my preferences, my proclivities. It wasn’t the primary use of that information that bothered me, it was the secondary uses.

Solove has done much more thinking about this issue than I have. His book provides a clear account of the social problems involving information privacy, and haunting predictions of where current U.S. legal policy is taking us. Even more importantly, the legal solutions he provides are compelling and worth serious consideration. I recommend his book highly.

The book’s website

Order the book on Amazon

Posted on December 9, 2004 at 9:18 AM
