Entries Tagged "control"


"Data Mining and the Security-Liberty Debate"

Good paper: “Data Mining and the Security-Liberty Debate,” by Daniel J. Solove.

Abstract: In this essay, written for a symposium on surveillance for the University of Chicago Law Review, I examine some common difficulties in the way that liberty is balanced against security in the context of data mining. Countless discussions about the trade-offs between security and liberty begin by taking a security proposal and then weighing it against what it would cost our civil liberties. Often, the liberty interests are cast as individual rights and balanced against the security interests, which are cast in terms of the safety of society as a whole. Courts and commentators defer to the government’s assertions about the effectiveness of the security interest. In the context of data mining, the liberty interest is limited by narrow understandings of privacy that neglect to account for many privacy problems. As a result, the balancing concludes with a victory in favor of the security interest. But as I argue, important dimensions of data mining’s security benefits require more scrutiny, and the privacy concerns are significantly greater than currently acknowledged. These problems have undermined the balancing process and skewed the results toward the security side of the scale.

My only complaint: it’s not a liberty vs. security debate. Liberty is security. It’s a liberty vs. control debate.

Posted on June 12, 2007 at 7:11 AM

DRM in Windows Vista

Windows Vista includes an array of “features” that you don’t want. These features will make your computer less reliable and less secure. They’ll make your computer less stable and run slower. They will cause technical support problems. They may even require you to upgrade some of your peripheral hardware and existing software. And these features won’t do anything useful. In fact, they’re working against you. They’re digital rights management (DRM) features built into Vista at the behest of the entertainment industry.

And you don’t get to refuse them.

The details are pretty geeky, but basically Microsoft has reworked a lot of the core operating system to add copy protection technology for new media formats like HD DVD and Blu-ray disks. Certain high-quality output paths — audio and video — are reserved for protected peripheral devices. Sometimes output quality is artificially degraded; sometimes output is prevented entirely. And Vista continuously spends CPU time monitoring itself, trying to figure out if you’re doing something that it thinks you shouldn’t. If it does, it limits functionality and in extreme cases restarts just the video subsystem. We still don’t know the exact details of all this, and how far-reaching it is, but it doesn’t look good.

Microsoft put all those functionality-crippling features into Vista because it wants to own the entertainment industry. This isn’t how Microsoft spins it, of course. It maintains that it has no choice, that it’s Hollywood that is demanding DRM in Windows in order to allow “premium content” — meaning, new movies that are still earning revenue — onto your computer. If Microsoft didn’t play along, it’d be relegated to second-class status as Hollywood pulled its support for the platform.

It’s all complete nonsense. Microsoft could have easily told the entertainment industry that it was not going to deliberately cripple its operating system, take it or leave it. With 95% of the operating system market, where else would Hollywood go? Sure, Big Media has been pushing DRM, but recently some — Sony after their 2005 debacle and now EMI Group — are having second thoughts.

What the entertainment companies are finally realizing is that DRM doesn’t work, and just annoys their customers. Like every other DRM system ever invented, Microsoft’s won’t keep the professional pirates from making copies of whatever they want. The DRM security in Vista was broken the day it was released. Sure, Microsoft will patch it, but the patched system will get broken as well. It’s an arms race, and the defenders can’t possibly win.

I believe that Microsoft knows this and also knows that it doesn’t matter. This isn’t about stopping pirates and the small percentage of people who download free movies from the Internet. This isn’t even about Microsoft satisfying its Hollywood customers at the expense of those of us paying for the privilege of using Vista. This is about the overwhelming majority of honest users and who owns the distribution channels to them. And while it may have started as a partnership, in the end Microsoft is going to end up locking the movie companies into selling content in its proprietary formats.

We saw this trick before; Apple pulled it on the recording industry. First iTunes worked in partnership with the major record labels to distribute content, but soon Warner Music’s CEO Edgar Bronfman Jr. found that he wasn’t able to dictate a pricing model to Steve Jobs. The same thing will happen here; after Vista is firmly entrenched in the marketplace, Sony’s Howard Stringer won’t be able to dictate pricing or terms to Bill Gates. This is a war for 21st-century movie distribution and, when the dust settles, Hollywood won’t know what hit them.

To be fair, just last week Steve Jobs publicly came out against DRM for music. It’s a reasonable business position, now that Apple controls the online music distribution market. But Jobs never mentioned movies, and he is the largest single shareholder in Disney. Talk is cheap. The real question is whether he would actually allow iTunes Music Store purchases to play on Microsoft or Sony players, or whether this is just a clever way of deflecting blame to the already hated music labels.

Microsoft is reaching for a much bigger prize than Apple: not just Hollywood, but also peripheral hardware vendors. Vista’s DRM will require driver developers to comply with all kinds of rules and be certified; otherwise, they won’t work. And Microsoft talks about expanding this to independent software vendors as well. It’s another war for control of the computer market.

Unfortunately, we users are caught in the crossfire. We are not only stuck with DRM systems that interfere with our legitimate fair-use rights for the content we buy, we’re stuck with DRM systems that interfere with all of our computer use — even the uses that have nothing to do with copyright.

I don’t see the market righting this wrong, because Microsoft’s monopoly position gives it much more power than we consumers can hope to have. It might not be as obvious as Microsoft using its operating system monopoly to kill Netscape and own the browser market, but it’s really no different. Microsoft’s entertainment market grab might further entrench its monopoly position, but it will cause serious damage to both the computer and entertainment industries. DRM is bad, both for consumers and for the entertainment industry: something the entertainment industry is just starting to realize, but Microsoft is still fighting. Some researchers think that this is the final straw that will drive Windows users to the competition, but I think the courts are necessary.

In the meantime, the only advice I can offer you is to not upgrade to Vista. It will be hard. Microsoft’s bundling deals with computer manufacturers mean that it will be increasingly hard not to get the new operating system with new computers. And Microsoft has some pretty deep pockets and can wait us all out if it wants to. Yes, some people will shift to Macintosh, and fewer still to Linux, but most of us are stuck on Windows. Still, if enough customers say no to Vista, the company might actually listen.

This essay originally appeared on Forbes.com.

EDITED TO ADD (2/23): Some commentary.

Posted on February 12, 2007 at 10:37 AM

Facebook and Data Control

Earlier this month, the popular social networking site Facebook learned a hard lesson in privacy. It introduced a new feature called “News Feeds” that shows an aggregation of everything members do on the site: added and deleted friends, a change in relationship status, a new favorite song, a new interest, etc. Instead of a member’s friends having to go to his page to view any changes, these changes are all presented to them automatically.

The outrage was enormous. One group, Students Against Facebook News Feeds, amassed over 700,000 members. Members planned to protest at the company’s headquarters. Facebook’s founder was completely stunned, and the company scrambled to add some privacy options.

Welcome to the complicated and confusing world of privacy in the information age. Facebook didn’t think there would be any problem; all it did was take available data and aggregate it in a novel way for what it perceived was its customers’ benefit. Facebook members instinctively understood that making this information easier to aggregate and display was an enormous difference, and that privacy is more about control than about secrecy.

But on the other hand, Facebook members are just fooling themselves if they think they can control information they give to third parties.

Privacy used to be about secrecy. Someone defending himself in court against the charge of revealing someone else’s personal information could use as a defense the fact that it was not secret. But clearly, privacy is more complicated than that. Just because you tell your insurance company something doesn’t mean you don’t feel violated when that information is sold to a data broker. Just because you tell your friend a secret doesn’t mean you’re happy when he tells others. Same with your employer, your bank, or any company you do business with.

But as the Facebook example illustrates, privacy is much more complex. It’s about who you choose to disclose information to, how, and for what purpose. And the key word there is “choose.” People are willing to share all sorts of information, as long as they are in control.

When Facebook unilaterally changed the rules about how personal information was revealed, it reminded people that they weren’t in control. Its eight million members put their personal information on the site based on a set of rules about how that information would be used. It’s no wonder those members — high school and college kids who traditionally don’t care much about their own privacy — felt violated when Facebook changed the rules.

Unfortunately, Facebook can change the rules whenever it wants. Its Privacy Policy is 2,800 words long, and ends with a notice that it can change at any time. How many members ever read that policy, let alone read it regularly and check for changes? Not that a Privacy Policy is the same as a contract. Legally, Facebook owns all data members upload to the site. It can sell the data to advertisers, marketers, and data brokers. (Note: there is no evidence that Facebook does any of this.) It can allow the police to search its databases upon request. It can add new features that change who can access what personal data, and how.

But public perception is important. The lesson here for Facebook and other companies — for Google and MySpace and AOL and everyone else who hosts our e-mails and webpages and chat sessions — is that people believe they own their data. Even though the user agreement might technically give companies the right to sell the data, change the access rules to that data, or otherwise own that data, we — the users — believe otherwise. And when we who are affected by those actions start expressing our views — watch out.

What Facebook should have done was add the feature as an option, and allow members to opt in if they wanted to. Then, members who wanted to share their information via News Feeds could do so, and everyone else wouldn’t have felt that they had no say in the matter. This is definitely a gray area, and it’s hard to know beforehand which changes need to be implemented slowly and which won’t matter. Facebook, and others, need to talk to their members openly about new features. Remember: members want control.

The lesson for Facebook members might be even more jarring: if they think they have control over their data, they’re only deluding themselves. They can rebel against Facebook for changing the rules, but the rules have changed, regardless of what the company does.

Whenever you put data on a computer, you lose some control over it. And when you put it on the internet, you lose a lot of control over it. News Feeds brought Facebook members face to face with the full implications of putting their personal information on Facebook. It had just been an accident of the user interface that it was difficult to aggregate the data from multiple friends into a single place. And even if Facebook eliminates News Feeds entirely, a third party could easily write a program that does the same thing. Facebook could try to block the program, but would lose that technical battle in the end.
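The third-party program imagined above is trivial to sketch. Assuming, purely for illustration, that each friend’s profile exposes a list of timestamped changes (a simplification; in practice the data would have to be scraped from the pages members could already see), building a news feed is nothing more than a merge and a sort:

```python
# Toy sketch of a third-party "news feed": given per-friend change logs
# that were already individually visible, merge them into one timeline.
# The data layout here is hypothetical, not Facebook's actual format.

from typing import NamedTuple

class Change(NamedTuple):
    timestamp: int   # e.g. seconds since epoch
    friend: str
    event: str

def build_feed(profiles: dict[str, list[tuple[int, str]]]) -> list[Change]:
    """Flatten every friend's visible changes into one feed, newest first."""
    feed = [Change(ts, friend, event)
            for friend, changes in profiles.items()
            for ts, event in changes]
    return sorted(feed, key=lambda c: c.timestamp, reverse=True)

profiles = {
    "alice": [(100, "changed relationship status"), (250, "added a friend")],
    "bob":   [(180, "listed a new favorite song")],
}

for change in build_feed(profiles):
    print(change.friend, "-", change.event)
```

The point of the sketch is that no new data is involved: the aggregation is a few lines once the individually visible pieces are collected, which is why blocking such a program is a losing technical battle.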

We’re all still wrestling with the privacy implications of the Internet, but the balance has tipped in favor of more openness. Digital data is just too easy to move, copy, aggregate, and display. Companies like Facebook need to respect the social rules of their sites, to think carefully about their default settings — they have an enormous impact on the privacy mores of the online world — and to give users as much control over their personal information as they can.

But we all need to remember that much of that control is illusory.

This essay originally appeared on Wired.com.

Posted on September 21, 2006 at 5:57 AM

Recovering Data from Cell Phones

People sell, give away, and throw away their cell phones without even thinking about the data still on them:

A company, Trust Digital of McLean, Virginia, bought 10 different phones on eBay this summer to test phone-security tools it sells for businesses. The phones all were fairly sophisticated models capable of working with corporate e-mail systems.

Curious software experts at Trust Digital resurrected information on nearly all the used phones, including the racy exchanges between guarded lovers.

The other phones contained:

  • One company’s plans to win a multimillion-dollar federal transportation contract.
  • E-mails about another firm’s $50,000 payment for a software license.
  • Bank accounts and passwords.
  • Details of prescriptions and receipts for one worker’s utility payments.

The recovered information was equal to 27,000 pages — a stack of printouts 8 feet high.

“We found just a mountain of personal and corporate data,” said Nick Magliato, Trust Digital’s chief executive.

In many cases, this was data that the owners erased.

A popular practice among sellers, resetting the phone, often means sensitive information appears to have been erased. But it can be resurrected using specialized yet inexpensive software found on the Internet.
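The “appears to have been erased” behavior is easy to illustrate with a toy model (entirely hypothetical, not any real phone’s filesystem): a quick reset that clears only the file index leaves the underlying storage blocks intact for anyone who scans them directly.

```python
# Toy model of flash storage: a "factory reset" that clears only the
# file index leaves the raw data blocks untouched and trivially
# scannable -- which is roughly what cheap recovery software exploits.

class ToyPhone:
    def __init__(self) -> None:
        self.blocks: list[str] = []      # raw storage
        self.index: dict[str, int] = {}  # filename -> block number

    def save(self, name: str, data: str) -> None:
        self.index[name] = len(self.blocks)
        self.blocks.append(data)

    def quick_reset(self) -> None:
        # Clears the directory listing; the blocks themselves are untouched.
        self.index.clear()

    def scan_raw_blocks(self) -> list[str]:
        # What inexpensive recovery software effectively does.
        return list(self.blocks)

phone = ToyPhone()
phone.save("banking.txt", "account 1234, password hunter2")
phone.quick_reset()

print(phone.index)              # {} -- the phone *looks* empty
print(phone.scan_raw_blocks())  # the "erased" data is still there
```

Truly destroying the data requires overwriting the blocks themselves, not just forgetting where they are.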

More and more, our data is not really under our control. We store it on devices and third-party websites, or on our own computer. We try to erase it, but we really can’t. We try to control its dissemination, but it’s harder and harder.

Posted on September 5, 2006 at 9:38 AM

Ignoring the "Great Firewall of China"

Richard Clayton is presenting a paper (blog post here) that discusses how to defeat China’s national firewall:

…the keyword detection is not actually being done in large routers on the borders of the Chinese networks, but in nearby subsidiary machines. When these machines detect the keyword, they do not actually prevent the packet containing the keyword from passing through the main router (this would be horribly complicated to achieve and still allow the router to run at the necessary speed). Instead, these subsidiary machines generate a series of TCP reset packets, which are sent to each end of the connection. When the resets arrive, the end-points assume they are genuine requests from the other end to close the connection — and obey. Hence the censorship occurs.

However, because the original packets are passed through the firewall unscathed, if both of the endpoints were to completely ignore the firewall’s reset packets, then the connection will proceed unhindered! We’ve done some real experiments on this — and it works just fine!! Think of it as the Harry Potter approach to the Great Firewall — just shut your eyes and walk onto Platform 9¾.

Ignoring resets is trivial to achieve by applying simple firewall rules… and has no significant effect on ordinary working. If you want to be a little more clever you can examine the hop count (TTL) in the reset packets and determine whether the values are consistent with them arriving from the far end, or if the value indicates they have come from the intervening censorship device. We would argue that there is much to commend examining TTL values when considering defences against denial-of-service attacks using reset packets. Having operating system vendors provide this new functionality as standard would also be of practical use because Chinese citizens would not need to run special firewall-busting code (which the authorities might attempt to outlaw) but just off-the-shelf software (which they would necessarily tolerate).

Posted on June 27, 2006 at 1:13 PM

When "Off" Doesn't Mean Off

According to the specs of the new Nintendo Wii (their new game machine), “Wii can communicate with the Internet even when the power is turned off.” Nintendo accentuates the positive: “This WiiConnect24 service delivers a new surprise or game update, even if users do not play with Wii,” while ignoring the possibility that Nintendo can deactivate a game if they choose to do so, or that someone else can deliver a different — not so wanted — surprise.

We all know that, but what’s interesting here is that Nintendo is changing the meaning of the word “off.” We are all conditioned to believe that “off” means off, and therefore safe. But in Nintendo’s case, “off” really means something like “on standby.” If users expect the Nintendo Wii to be truly off, they need to pull the power plug — assuming there isn’t a battery foiling that tactic. Maybe they need to pull both the power plug and the Ethernet cable. Unless they have a wireless network at home.

Maybe there is no way to turn the Nintendo Wii off.

There’s a serious security problem here, made worse by a bad user interface. “Off” should mean off.

Posted on May 10, 2006 at 6:45 AM

Who Owns Your Computer?

When technology serves its owners, it is liberating. When it is designed to serve others, over the owner’s objection, it is oppressive. There’s a battle raging on your computer right now — one that pits you against worms and viruses, Trojans, spyware, automatic update features and digital rights management technologies. It’s the battle to determine who owns your computer.

You own your computer, of course. You bought it. You paid for it. But how much control do you really have over what happens on your machine? Technically you might have bought the hardware and software, but you have less control over what it’s doing behind the scenes.

Using the hacker sense of the term, your computer is “owned” by other people.

It used to be that only malicious hackers were trying to own your computers. Whether through worms, viruses, Trojans or other means, they would try to install some kind of remote-control program onto your system. Then they’d use your computers to sniff passwords, make fraudulent bank transactions, send spam, initiate phishing attacks and so on. Estimates are that somewhere between hundreds of thousands and millions of computers are members of remotely controlled “bot” networks. Owned.

Now, things are not so simple. There are all sorts of interests vying for control of your computer. There are media companies that want to control what you can do with the music and videos they sell you. There are companies that use software as a conduit to collect marketing information, deliver advertising or do whatever it is their real owners require. And there are software companies that are trying to make money by pleasing not only their customers, but other companies they ally themselves with. All these companies want to own your computer.

Some examples:

  • Entertainment software: In October 2005, it emerged that Sony had distributed a rootkit with several music CDs — the same kind of software that crackers use to own people’s computers. This rootkit secretly installed itself when the music CD was played on a computer. Its purpose was to prevent people from doing things with the music that Sony didn’t approve of: It was a DRM system. If the exact same piece of software had been installed secretly by a hacker, this would have been an illegal act. But Sony believed that it had legitimate reasons for wanting to own its customers’ machines.
  • Antivirus: You might have expected your antivirus software to detect Sony’s rootkit. After all, that’s why you bought it. But initially, the security programs sold by Symantec and others did not detect it, because Sony had asked them not to. You might have thought that the software you bought was working for you, but you would have been wrong.
  • Internet services: Hotmail allows you to blacklist certain e-mail addresses, so that mail from them automatically goes into your spam trap. Have you ever tried blocking all that incessant marketing e-mail from Microsoft? You can’t.
  • Application software: Internet Explorer users might have expected the program to incorporate easy-to-use cookie handling and pop-up blockers. After all, other browsers do, and users have found them useful in defending against Internet annoyances. But Microsoft isn’t just selling software to you; it sells Internet advertising as well. It isn’t in the company’s best interest to offer users features that would adversely affect its business partners.
  • Spyware: Spyware is nothing but someone else trying to own your computer. These programs eavesdrop on your behavior and report back to their real owners — sometimes without your knowledge or consent — about your behavior.
  • Internet security: It recently came out that the firewall in Microsoft Vista will ship with half its protections turned off. Microsoft claims that large enterprise users demanded this default configuration, but that makes no sense. It’s far more likely that Microsoft just doesn’t want adware — and DRM spyware — blocked by default.
  • Update: Automatic update features are another way software companies try to own your computer. While they can be useful for improving security, they also require you to trust your software vendor not to disable your computer for nonpayment, breach of contract or other presumed infractions.

Adware, software-as-a-service and Google Desktop search are all examples of some other company trying to own your computer. And Trusted Computing will only make the problem worse.

There is an inherent insecurity to technologies that try to own people’s computers: They allow individuals other than the computers’ legitimate owners to enforce policy on those machines. These systems invite attackers to assume the role of the third party and turn a user’s device against him.

Remember the Sony story: The most insecure feature in that DRM system was a cloaking mechanism that gave the rootkit control over whether you could see it executing or spot its files on your hard disk. By taking ownership away from you, it reduced your security.

If left to grow, these external control systems will fundamentally change your relationship with your computer. They will make your computer much less useful by letting corporations limit what you can do with it. They will make your computer much less reliable because you will no longer have control of what is running on your machine, what it does, and how the various software components interact. At the extreme, they will transform your computer into a glorified boob tube.

You can fight back against this trend by only using software that respects your boundaries. Boycott companies that don’t honestly serve their customers, that don’t disclose their alliances, that treat users like marketing assets. Use open-source software — software created and owned by users, with no hidden agendas, no secret alliances and no back-room marketing deals.

Just because computers were a liberating force in the past doesn’t mean they will be in the future. There is enormous political and economic power behind the idea that you shouldn’t truly own your computer or your software, despite having paid for it.

This essay originally appeared on Wired.com.

EDITED TO ADD (5/5): Commentary. It seems that some of my examples were not very good. I’ll come up with other ones for the Crypto-Gram version.

Posted on May 4, 2006 at 7:13 AM

Liberty Increases Security

From the Scientific American essay “Murdercide: Science unravels the myth of suicide bombers”:

Another method [of reducing terrorism], says Princeton University economist Alan B. Krueger, is to increase the civil liberties of the countries that breed terrorist groups. In an analysis of State Department data on terrorism, Krueger discovered that “countries like Saudi Arabia and Bahrain, which have spawned relatively many terrorists, are economically well off yet lacking in civil liberties. Poor countries with a tradition of protecting civil liberties are unlikely to spawn suicide terrorists. Evidently, the freedom to assemble and protest peacefully without interference from the government goes a long way to providing an alternative to terrorism.” Let freedom ring.

This seems obvious to me.

Found on John Quarterman’s blog.

Posted on January 18, 2006 at 1:33 PM

A U.S. National Firewall

This seems like a really bad idea:

Government has the right — even the responsibility — to see that its laws and regulations are enforced. The Internet is no exception. When the Internet is being used on American soil, it should comply with American law. And if it doesn’t, then the government should be able to step in and filter the illegal sites and activities.

Posted on September 7, 2005 at 3:53 PM
