Entries Tagged "control"


Wiretapping the Internet

On Monday, The New York Times reported that President Obama will seek sweeping laws enabling law enforcement to more easily eavesdrop on the internet. Technologies are changing, the administration argues, and modern digital systems aren’t as easy to monitor as traditional telephones.

The government wants to force companies to redesign their communications systems and information networks to facilitate surveillance, and to provide law enforcement with back doors that enable them to bypass any security measures.

The proposal may seem extreme, but—unfortunately—it’s not unique. Just a few months ago, the governments of the United Arab Emirates, Saudi Arabia and India threatened to ban BlackBerry devices unless the company made eavesdropping easier. China has already built a massive internet surveillance system to better control its citizens.

Formerly reserved for totalitarian countries, this wholesale surveillance of citizens has moved into the democratic world as well. Countries like Sweden, Canada and the United Kingdom are debating or passing laws giving their police new powers of internet surveillance, in many cases requiring communications system providers to redesign the products and services they sell. More are passing data retention laws, forcing companies to retain customer data in case it is needed for future investigations.

Obama isn’t the first U.S. president to seek expanded digital eavesdropping. The 1994 CALEA law required phone companies to build eavesdropping capabilities into their digital phone switches to facilitate FBI wiretaps. Since 2001, the National Security Agency has built substantial eavesdropping systems within the United States.

These laws are dangerous, both for citizens of countries like China and citizens of Western democracies. Forcing companies to redesign their communications products and services to facilitate government eavesdropping reduces privacy and liberty; that’s obvious. But the laws also make us less safe. Communications systems that have no inherent eavesdropping capabilities are more secure than systems with those capabilities built in.

Any surveillance system invites both criminal appropriation and government abuse. Function creep is the most obvious abuse: New police powers, enacted to fight terrorism, are already being used to investigate ordinary, nonterrorist crimes. Internet surveillance and control will be no different.

Official misuses are bad enough, but the unofficial uses are far more worrisome. An infrastructure conducive to surveillance and control invites surveillance and control, both by the people you expect and the people you don’t. Any surveillance and control system must itself be secured, and we’re not very good at that. Why does anyone think that only authorized law enforcement will mine collected internet data or eavesdrop on Skype and IM conversations?

These risks are not theoretical. After 9/11, the National Security Agency built a surveillance infrastructure to eavesdrop on telephone calls and e-mails within the United States. Although procedural rules stated that only non-Americans and international phone calls were to be listened to, actual practice didn’t always match those rules. NSA analysts collected more data than they were authorized to and used the system to spy on wives, girlfriends and famous people like former President Bill Clinton.

The most serious known misuse of a telecommunications surveillance infrastructure took place in Greece. Between June 2004 and March 2005, someone wiretapped more than 100 cell phones belonging to members of the Greek government—the prime minister and the ministers of defense, foreign affairs and justice—and other prominent people. Ericsson built this wiretapping capability into Vodafone’s products, but enabled it only for governments that requested it. Greece wasn’t one of those governments, but some still unknown party—a rival political group? organized crime?—figured out how to surreptitiously turn the feature on.

Surveillance infrastructure is easy to export. Once surveillance capabilities are built into Skype or Gmail or your BlackBerry, it’s easy for more totalitarian countries to demand the same access; after all, the technical work has already been done.

Western companies such as Siemens, Nokia and Secure Computing built Iran’s surveillance infrastructure, and U.S. companies like L-1 Identity Solutions helped build China’s electronic police state. The next generation of worldwide citizen control will be paid for by countries like the United States.

We should be embarrassed to export eavesdropping capabilities. Secure, surveillance-free systems protect the lives of people in totalitarian countries around the world. They allow people to exchange ideas even when the government wants to limit free exchange. They power citizen journalism, political movements and social change. For example, Twitter’s anonymity saved the lives of Iranian dissidents—anonymity that many governments want to eliminate.

Yes, communications technologies are used by both the good guys and the bad guys. But the good guys far outnumber the bad guys, and it’s far more valuable to make sure they’re secure than it is to cripple them on the off chance it might help catch a bad guy. It’s like the FBI demanding that no automobiles drive above 50 mph, so they can more easily pursue getaway cars. It might or might not work—but, regardless, the cost to society of the resulting slowdown would be enormous.

It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state. No matter what the eavesdroppers say, these systems cost too much and put us all at greater risk.

This essay previously appeared on CNN.com, and was a rewrite of a 2009 op-ed on MPR News Q—which itself was based in part on a 2007 Washington Post op-ed by Susan Landau.

Three more articles.

Posted on September 30, 2010 at 6:02 AM

Consumerization and Corporate IT Security

If you’re a typical wired American, you’ve got a bunch of tech tools you like and a bunch more you covet. You have a cell phone that can easily text. You’ve got a laptop configured just the way you want it. Maybe you have a Kindle for reading, or an iPad. And when the next new thing comes along, some of you will line up on the first day it’s available.

So why can’t work keep up? Why are you forced to use an unfamiliar, and sometimes outdated, operating system? Why do you need a second laptop, maybe an older and clunkier one? Why do you need a second cell phone with a new interface, or a BlackBerry, when your phone already does e-mail? Or a second BlackBerry tied to corporate e-mail? Why can’t you use the cool stuff you already have?

More and more companies are letting you. They’re giving you an allowance and letting you buy whatever laptop you want and connect to the corporate network with whatever device you choose. They’re allowing you to use whatever cell phone you have, whatever portable e-mail device you have, whatever you personally need to get your job done. And the security office is freaking out.

You can’t blame them, really. Security is hard enough when you have control of the hardware, operating system and software. Lose control of any of those things, and the difficulty goes through the roof. How do you ensure that the employee devices are secure, and have up-to-date security patches? How do you control what goes on them? How do you deal with the tech support issues when they fail? How do you even begin to manage this logistical nightmare? Better to dig your heels in and say “no.”

But security is on the losing end of this argument, and the sooner it realizes that, the better.

The meta-trend here is consumerization: cool technologies show up for the consumer market before they’re available to the business market. Every corporation is under pressure from its employees to allow them to use these new technologies at work, and that pressure is only getting stronger. Younger employees simply aren’t going to stand for using last year’s stuff, and they’re not going to carry around a second laptop. They’re either going to figure out ways around the corporate security rules, or they’re going to take another job with a more trendy company. Either way, senior management is going to tell security to get out of the way. It might even be the CEO, who wants to get to the company’s databases from his brand new iPad, driving the change. Either way, it’s going to be harder and harder to say no.

At the same time, cloud computing makes this easier. More and more, employee computing devices are nothing more than dumb terminals with a browser interface. When corporate e-mail is all webmail, corporate documents are all on Google Docs, and all the specialized applications have a web interface, it’s easier to allow employees to use any up-to-date browser. It’s what companies are already doing with their partners, suppliers, and customers.

Also on the plus side, technology companies have woken up to this trend and—from Microsoft and Cisco on down to the startups—are trying to offer security solutions. Like everything else, it’s a mixed bag: some of them will work and some of them won’t, most of them will need careful configuration to work well, and few of them will get it right. The result is that we’ll muddle through, as usual.

Security is always a tradeoff, and security decisions are often made for non-security reasons. In this case, the right decision is to sacrifice security for convenience and flexibility. Corporations want their employees to be able to work from anywhere, and they’re going to have to loosen control over the tools they allow in order to get it.

This essay first appeared as the second half of a point/counterpoint with Marcus Ranum in Information Security Magazine. You can read Marcus’s half here.

Posted on September 7, 2010 at 7:25 AM

UAE to Ban BlackBerrys

The United Arab Emirates—Dubai, etc.—is threatening to ban BlackBerrys because it can’t eavesdrop on them.

At the heart of the battle is access to the data transmitted by BlackBerrys. RIM processes the information through a handful of secure Network Operations Centers around the world, meaning that most governments can’t access the data easily on their own. The U.A.E. worries that because of jurisdictional issues, its courts couldn’t compel RIM to turn over secure data from its servers, which are outside the U.A.E. even in a national-security situation, a person familiar with the situation said.

This is a weird story for several reasons:

1. The UAE can’t eavesdrop on BlackBerry traffic because it is encrypted between RIM’s servers and the phones. That makes sense, but conventional e-mail services are no different. Gmail, for example, is encrypted between Google’s servers and the users’ computers. So are most other webmail services. Is the mobile nature of BlackBerrys really that different? Is it really not a problem that any smart phone can access webmail through an encrypted SSL tunnel?
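As a concrete illustration of that tunnel, here is a short Python sketch that opens a connection to a webmail host and prints the negotiated TLS/SSL parameters; the host name is just an example, and any HTTPS webmail service would show the same kind of transport encryption:

```python
import socket
import ssl

# Connect to a webmail host and show that the session is protected by
# TLS/SSL transport encryption (example host; any HTTPS webmail works).
host = "mail.google.com"
context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # negotiated protocol, e.g. "TLSv1.2"
        print(tls.cipher())   # negotiated cipher suite
```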

2. This is one move in a complicated, ongoing negotiation between the UAE and RIM.

The U.A.E. ban, due to start Oct. 11, was the result of the “failure of ongoing attempts, dating back to 2007, to bring BlackBerry services in the U.A.E. in line with U.A.E. telecommunications regulations,” the country’s Telecommunications Regulatory Authority said Sunday. The ban doesn’t affect telephone and text-messaging services.

And:

The U.A.E. wanted RIM to locate servers in the country, where it had legal jurisdiction over them; RIM had offered access to the data of 3,000 clients instead, the person said.

There’s no reason to announce the ban over a month before it goes into effect, other than to prod RIM to respond in some way.

3. It’s not obvious who will blink first. RIM has about 500,000 users in the UAE. RIM doesn’t want to lose those subscribers, but the UAE doesn’t want to piss those people off, either. The UAE needs them to work and do business in their country, especially as real estate prices continue to collapse.

4. India, China, and Russia threatened to kick BlackBerrys out for this reason, but relented when RIM agreed to “address concerns,” which is code for “allowed them to eavesdrop.”

Most countries have negotiated agreements with RIM that enable their security agencies to monitor and decipher this traffic. For example, Russia’s two main mobile phone providers, MTS and Vimpelcom, began selling BlackBerrys after they agreed to provide access to the federal security service. “We resolved this question,” Vimpelcom says. “We provided access.”

The launch of BlackBerry service by China Mobile was delayed until RIM negotiated an agreement that enables China to monitor traffic.

Similarly, last week India lifted a threat to ban BlackBerry services after RIM agreed to address concerns.

[…]

Nevertheless, while RIM has declined to comment on the details of its arrangements with any government, it issued an opaque statement on Monday: “RIM respects both the regulatory requirements of government and the security and privacy needs of corporations and consumers.”

How did they do that? Did they put RIM servers in those countries, and allow the government access to the traffic? Did they pipe the raw traffic back to those countries from their servers elsewhere? Did they just promise to turn over any data when asked?

RIM makes a big deal about how secure its users’ data is, but I don’t know how much of that to believe:

RIM said the BlackBerry network was set up so that “no one, including RIM, could access” customer data, which is encrypted from the time it leaves the device. It added that RIM would “simply be unable to accommodate any request” for a key to decrypt the data, since the company doesn’t have the key.

The BlackBerry network is designed “to exclude the capability for RIM or any third party to read encrypted information under any circumstances,” RIM’s statement said. Moreover, the location of BlackBerry’s servers doesn’t matter, the company said, because the data on them can’t be deciphered without a decryption key.

Am I missing something here? RIM isn’t providing a file storage service, where user-encrypted data is stored on its servers. RIM is providing a communications service. While the data is encrypted between RIM’s servers and the BlackBerrys, it has to be encrypted by RIM—so RIM has access to the plaintext.
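Here is a minimal sketch of that point (not RIM’s actual protocol, and using Python’s third-party cryptography package): with hop-by-hop transport encryption, the provider must decrypt traffic in order to relay it, so it necessarily handles the plaintext.

```python
from cryptography.fernet import Fernet

# Hop-by-hop design: the provider shares a separate link key with each device,
# so it must decrypt traffic in order to relay it -- and thus sees plaintext.
link_key_sender = Fernet(Fernet.generate_key())     # shared: sender <-> provider
link_key_recipient = Fernet(Fernet.generate_key())  # shared: provider <-> recipient

ciphertext_in = link_key_sender.encrypt(b"meet at noon")        # encrypted on the wire
plaintext_at_provider = link_key_sender.decrypt(ciphertext_in)  # provider reads it
ciphertext_out = link_key_recipient.encrypt(plaintext_at_provider)

# In a true end-to-end design, only the two devices would hold the key,
# and the provider could relay nothing but ciphertext it cannot read.
```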

In any case, RIM has already demonstrated that it has the technical ability to address the UAE’s concerns. Like the apocryphal story about Churchill and Lady Astor, all that’s left is to agree on a price.

5. For the record, I have absolutely no idea what this quote of mine from the Reuters story really means:

“If you want to eavesdrop on your people, then you ban whatever they’re using,” said Bruce Schneier, chief security technology officer at BT. “The basic problem is there’s encryption between the BlackBerries and the servers. We find this issue all around about encryption.”

I hope I wasn’t that incoherent during the phone interview.

EDITED TO ADD (8/5): I might have gotten a do-over with Reuters. On a phone interview yesterday, I said: “RIM’s carefully worded statements about BlackBerry security are designed to make their customers feel better, while giving the company ample room to screw them.” Jonathan Zittrain picks apart one of those statements.

Posted on August 3, 2010 at 11:08 AM

The Threat of Cyberwar Has Been Grossly Exaggerated

There’s a power struggle going on in the U.S. government right now.

It’s about who is in charge of cyber security, and how much control the government will exert over civilian networks. And by beating the drums of war, the military is coming out on top.

“The United States is fighting a cyberwar today, and we are losing,” said former NSA director—and current cyberwar contractor—Mike McConnell. “Cyber 9/11 has happened over the last ten years, but it happened slowly so we don’t see it,” said former National Cyber Security Division director Amit Yoran. Richard Clarke, whom Yoran replaced, wrote an entire book hyping the threat of cyberwar.

General Keith Alexander, the current commander of the U.S. Cyber Command, hypes it every chance he gets. This isn’t just rhetoric of a few over-eager government officials and headline writers; the entire national debate on cyberwar is plagued with exaggerations and hyperbole.

Googling those names and terms—as well as “cyber Pearl Harbor,” “cyber Katrina,” and even “cyber Armageddon”—gives some idea how pervasive these memes are. Prefix “cyber” to something scary, and you end up with something really scary.

Cyberspace has all sorts of threats, day in and day out. Cybercrime is by far the largest: fraud, through identity theft and other means, extortion, and so on. Cyber-espionage is another, both government- and corporate-sponsored. Traditional hacking, without a profit motive, is still a threat. So is cyber-activism: people, most often kids, playing politics by attacking government and corporate websites and networks.

These threats cover a wide variety of perpetrators, motivations, tactics, and goals. You can see this variety in what the media has mislabeled as “cyberwar.” The attacks against Estonian websites in 2007 were simple hacking attacks by ethnic Russians angry at anti-Russian policies; these were denial-of-service attacks, a normal risk in cyberspace and hardly unprecedented.

A real-world comparison might be if an army invaded a country, then all got in line in front of people at the DMV so they couldn’t renew their licenses. If that’s what war looks like in the 21st century, we have little to fear.

Similar attacks against Georgia, which accompanied an actual Russian invasion, were also probably the responsibility of citizen activists or organized crime. A series of power blackouts in Brazil was caused by criminal extortionists—or was it sooty insulators? China is engaging in espionage, not war, in cyberspace. And so on.

One problem is that there’s no clear definition of “cyberwar.” What does it look like? How does it start? When is it over? Even cybersecurity experts don’t know the answers to these questions, and it’s dangerous to broadly apply the term “war” unless we know a war is going on.

Yet recent news articles have claimed that China declared cyberwar on Google, that Germany attacked China, and that a group of young hackers declared cyberwar on Australia. (Yes, cyberwar is so easy that even kids can do it.) Clearly we’re not talking about real war here, but a rhetorical war: like the war on terror.

We have a variety of institutions that can defend us when attacked: the police, the military, the Department of Homeland Security, various commercial products and services, and our own personal or corporate lawyers. The legal framework for any particular attack depends on two things: the attacker and the motive. Those are precisely the two things you don’t know when you’re being attacked on the Internet. We saw this on July 4 last year, when U.S. and South Korean websites were attacked by unknown perpetrators from North Korea—or perhaps England. Or was it Florida?

We surely need to improve our cybersecurity. But words have meaning, and metaphors matter. There’s a power struggle going on for control of our nation’s cybersecurity strategy, and the NSA and DoD are winning. If we frame the debate in terms of war, if we accept the military’s expansive cyberspace definition of “war,” we feed our fears.

We reinforce the notion that we’re helpless—what person or organization can defend itself in a war?—and that others need to protect us. We invite the military to take over security, and to ignore the limits on power that often get jettisoned during wartime.

If, on the other hand, we use the more measured language of cybercrime, we change the debate. Crime fighting requires both resolve and resources, but it’s done within the context of normal life. We willingly give our police extraordinary powers of investigation and arrest, but we temper these powers with a judicial system and legal protections for citizens.

We need to be prepared for war, and a Cyber Command is just as vital as an Army or a Strategic Air Command. And because kid hackers and cyber-warriors use the same tactics, the defenses we build against crime and espionage will also protect us from more concerted attacks. But we’re not fighting a cyberwar now, and the risks of a cyberwar are no greater than the risks of a ground invasion. We need peacetime cyber-security, administered within the myriad structure of public and private security institutions we already have.

This essay previously appeared on CNN.com.

EDITED TO ADD (7/7): Earlier this month, I participated in a debate: “The Cyberwar Threat has been Grossly Exaggerated.” (Transcript here, video here.) Marc Rotenberg of EPIC and I were for the motion; Mike McConnell and Jonathan Zittrain were against. We lost.

We lost fair and square, for a bunch of reasons—we didn’t present our case very well, Jonathan Zittrain is a way better debater than we were—but basically the vote came down to the definition of “cyberwar.” If you believed in an expansive definition of cyberwar, one that encompassed a lot more types of attacks than traditional war, then you voted against the motion. If you believed in a limited definition of cyberwar, one that is a subset of traditional war, then you voted for it.

This continues to be an important debate.

EDITED TO ADD (7/7): Last month the Senate Homeland Security Committee held hearings on “Protecting Cyberspace as a National Asset: Comprehensive Legislation for the 21st Century.” Unfortunately, the DHS is getting hammered at these hearings, and the NSA is consolidating its power.

EDITED TO ADD (7/7): North Korea was probably not responsible for last year’s cyberattacks. Good thing we didn’t retaliate.

Posted on July 7, 2010 at 12:58 PM

Privacy and Control

In January, Facebook Chief Executive Mark Zuckerberg declared the age of privacy to be over. A month earlier, Google Chief Eric Schmidt expressed a similar sentiment. Add Scott McNealy’s and Larry Ellison’s comments from a few years earlier, and you’ve got a whole lot of tech CEOs proclaiming the death of privacy—especially when it comes to young people.

It’s just not true. People, including the younger generation, still care about privacy. Yes, they’re far more public on the Internet than their parents: writing personal details on Facebook, posting embarrassing photos on Flickr and having intimate conversations on Twitter. But they take steps to protect their privacy and vociferously complain when they feel it’s been violated. They’re not technically sophisticated about privacy and make mistakes all the time, but that’s mostly the fault of companies and Web sites that try to manipulate them for financial gain.

To the older generation, privacy is about secrecy. And, as the Supreme Court said, once something is no longer secret, it’s no longer private. But that’s not how privacy works, and it’s not how the younger generation thinks about it. Privacy is about control. When your health records are sold to a pharmaceutical company without your permission; when a social-networking site changes your privacy settings to make what used to be visible only to your friends visible to everyone; when the NSA eavesdrops on everyone’s e-mail conversations—your loss of control over that information is the issue. We may not mind sharing our personal lives and thoughts, but we want to control how, where and with whom. A privacy failure is a control failure.

People’s relationship with privacy is socially complicated. Salience matters: People are more likely to protect their privacy if they’re thinking about it, and less likely to if they’re thinking about something else. Social-networking sites know this, constantly reminding people about how much fun it is to share photos and comments and conversations while downplaying the privacy risks. Some sites go even further, deliberately hiding information about how little control—and privacy—users have over their data. We all give up our privacy when we’re not thinking about it.

Group behavior matters; we’re more likely to expose personal information when our peers are doing it. We object more to losing privacy than we value its return once it’s gone. Even if we don’t have control over our data, an illusion of control reassures us. And we are poor judges of risk. All sorts of academic research backs up these findings.

Here’s the problem: The very companies whose CEOs eulogize privacy make their money by controlling vast amounts of their users’ information. Whether through targeted advertising, cross-selling or simply convincing their users to spend more time on their site and sign up their friends, more information, shared in more ways and more publicly, means more profits. This means these companies are motivated to continually ratchet down the privacy of their services, while at the same time pronouncing privacy erosions as inevitable and giving users the illusion of control.

You can see these forces in play with Google‘s launch of Buzz. Buzz is a Twitter-like chatting service, and when Google launched it in February, the defaults were set so people would follow the people they corresponded with frequently in Gmail, with the list publicly available. Yes, users could change these options, but—and Google knew this—changing options is hard and most people accept the defaults, especially when they’re trying out something new. People were upset that their previously private e-mail contacts list was suddenly public. A Federal Trade Commission commissioner even threatened penalties. And though Google changed its defaults, resentment remained.

Facebook tried a similar control grab when it changed people’s default privacy settings last December to make them more public. While users could, in theory, keep their previous settings, it took an effort. Many people just wanted to chat with their friends and clicked through the new defaults without realizing it.

Facebook has a history of this sort of thing. In 2006 it introduced News Feeds, which changed the way people viewed information about their friends. There was no real change in privacy—users couldn’t see any more information than before; the change was in control, or arguably just in the illusion of control. Still, there was a large uproar. And Facebook is doing it again; last month, the company announced new privacy changes that will make it easier for it to collect location data on users and sell that data to third parties.

With all this privacy erosion, those CEOs may actually be right—but only because they’re working to kill privacy. On the Internet, our privacy options are limited to the options those companies give us and how easy they are to find. We have Gmail and Facebook accounts because that’s where we socialize these days, and it’s hard—especially for the younger generation—to opt out. As long as privacy isn’t salient, and as long as these companies are allowed to forcibly change social norms by limiting options, people will increasingly get used to less and less privacy. There’s no malice on anyone’s part here; it’s just market forces in action. If we believe privacy is a social good, something necessary for democracy, liberty and human dignity, then we can’t rely on market forces to maintain it. Broad legislation protecting personal privacy by giving people control over their personal data is the only solution.

This essay originally appeared on Forbes.com.

EDITED TO ADD (4/13): Google responds. And another essay on the topic.

Posted on April 6, 2010 at 7:47 AM

My Reaction to Eric Schmidt

Schmidt said:

I think judgment matters. If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place. If you really need that kind of privacy, the reality is that search engines—including Google—do retain this information for some time and it’s important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities.

This, from 2006, is my response:

Privacy protects us from abuses by those in power, even if we’re doing nothing wrong at the time of surveillance.

We do nothing wrong when we make love or go to the bathroom. We are not deliberately hiding anything when we seek out private places for reflection or conversation. We keep private journals, sing in the privacy of the shower, and write letters to secret lovers and then burn them. Privacy is a basic human need.

[…]

For if we are observed in all matters, we are constantly under threat of correction, judgment, criticism, even plagiarism of our own uniqueness. We become children, fettered under watchful eyes, constantly fearful that—either now or in the uncertain future—patterns we leave behind will be brought back to implicate us, by whatever authority has now become focused upon our once-private and innocent acts. We lose our individuality, because everything we do is observable and recordable.

[…]

This is the loss of freedom we face when our privacy is taken from us. This is life in former East Germany, or life in Saddam Hussein’s Iraq. And it’s our future as we allow an ever-intrusive eye into our personal, private lives.

Too many wrongly characterize the debate as “security versus privacy.” The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of a police state. And that’s why we should champion privacy even when we have nothing to hide.

EDITED TO ADD: See also Daniel Solove’s “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy.”

Posted on December 9, 2009 at 12:22 PM

A Taxonomy of Social Networking Data

At the Internet Governance Forum in Sharm El Sheikh this week, there was a conversation on social networking data. Someone made the point that there are several different types of data, and it would be useful to separate them. This is my taxonomy of social networking data.

  1. Service data. Service data is the data you need to give to a social networking site in order to use it. It might include your legal name, your age, and your credit card number.
  2. Disclosed data. This is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  3. Entrusted data. This is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data—someone else does.
  4. Incidental data. Incidental data is data the other people post about you. Again, it’s basically the same stuff as disclosed data, but the difference is that 1) you don’t have control over it, and 2) you didn’t create it in the first place.
  5. Behavioral data. This is data that the site collects about your habits by recording what you do and who you do it with.

Different social networking sites give users different rights for each data type. Some are always private, some can be made private, and some are always public. Some can be edited or deleted—I know one site that allows entrusted data to be edited or deleted within a 24-hour period—and some cannot. Some can be viewed and some cannot.

And people should have different rights with respect to each data type. It’s clear that people should be allowed to change and delete their disclosed data. It’s less clear what rights they have for their entrusted data. And far less clear for their incidental data. If you post pictures of a party with me in them, can I demand you remove those pictures—or at least blur out my face? And what about behavioral data? It’s often a critical part of a social networking site’s business model. We often don’t mind if they use it to target advertisements, but are probably less sanguine about them selling it to third parties.
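One way to make the taxonomy and the per-type rights concrete is to model them as a small data structure. This is a hypothetical sketch in Python; the rights shown are illustrative, not any real site’s policy:

```python
from dataclasses import dataclass
from enum import Enum, auto

class DataType(Enum):
    SERVICE = auto()     # data you give the site in order to use it
    DISCLOSED = auto()   # what you post on your own pages
    ENTRUSTED = auto()   # what you post on other people's pages
    INCIDENTAL = auto()  # what other people post about you
    BEHAVIORAL = auto()  # what the site records about your activity

@dataclass
class Rights:
    can_view: bool
    can_edit: bool
    can_delete: bool

# One hypothetical site's policy, keyed by data type.
policy = {
    DataType.SERVICE:    Rights(can_view=True,  can_edit=True,  can_delete=False),
    DataType.DISCLOSED:  Rights(can_view=True,  can_edit=True,  can_delete=True),
    DataType.ENTRUSTED:  Rights(can_view=True,  can_edit=False, can_delete=False),
    DataType.INCIDENTAL: Rights(can_view=True,  can_edit=False, can_delete=False),
    DataType.BEHAVIORAL: Rights(can_view=False, can_edit=False, can_delete=False),
}
```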

As we continue our conversations about what sorts of fundamental rights people have with respect to their data, this taxonomy will be useful.

EDITED TO ADD (12/12): Another categorization centered on destination instead of trust level.

Posted on November 19, 2009 at 12:51 PM

File Deletion

File deletion is all about control. This used to not be an issue. Your data was on your computer, and you decided when and how to delete a file. You could use the delete function if you didn’t care about whether the file could be recovered or not, and a file erase program—I use BCWipe for Windows—if you wanted to ensure no one could ever recover the file.
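As a rough sketch of what a file-erase tool does on a machine you control (this is conceptual, not BCWipe’s actual implementation, and overwriting in place is not a guarantee on SSDs or journaling file systems):

```python
import os

def wipe_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes, then unlink it."""
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))
            f.flush()
            os.fsync(f.fileno())  # push the overwrites to disk
    os.remove(path)
```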

As we move more of our data onto cloud computing platforms such as Gmail and Facebook, and closed proprietary platforms such as the Kindle and the iPhone, deleting data is much harder.

You have to trust that these companies will delete your data when you ask them to, but they’re generally not interested in doing so. Sites like these are more likely to make your data inaccessible than they are to physically delete it. Facebook is a known culprit: actually deleting your data from its servers requires a complicated procedure that may or may not work. And even if you do manage to delete your data, copies are certain to remain in the companies’ backup systems. Gmail explicitly says this in its privacy notice.

Online backups, SMS messages, photos on photo sharing sites, smartphone applications that store your data in the network: you have no idea what really happens when you delete pieces of data or your entire account, because you’re not in control of the computers that are storing the data.

This notion of control also explains how Amazon was able to delete a book that people had previously purchased on their Kindle e-book readers. The legalities are debatable, but Amazon had the technical ability to delete the file because it controls all Kindles. It has designed the Kindle so that it determines when to update the software, whether people are allowed to buy Kindle books, and when to turn off people’s Kindles entirely.

Vanish is a research project by Roxana Geambasu and colleagues at the University of Washington. They designed a prototype system that automatically deletes data after a set time interval. So you can send an email, create a Google Doc, post an update to Facebook, or upload a photo to Flickr, all designed to disappear after a set period of time. And after it disappears, no one—not anyone who downloaded the data, not the site that hosted the data, not anyone who intercepted the data in transit, not even you—will be able to read it. If the police arrive at Facebook or Google or Flickr with a warrant, they won’t be able to read it.

The details are complicated, but Vanish breaks the data’s decryption key into a bunch of pieces and scatters them around the web using a peer-to-peer network. Then it uses the natural turnover in these networks—machines constantly join and leave—to make the data disappear. Unlike previous programs that supported file deletion, this one doesn’t require you to trust any company, organisation, or website. It just happens.
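A toy sketch of the key-splitting idea in Python (Vanish itself uses Shamir threshold secret sharing and stores the shares in a peer-to-peer DHT; this simplified n-of-n XOR split just shows why losing shares makes the key, and therefore the data, unrecoverable):

```python
import os

def xor_all(blocks: list[bytes]) -> bytes:
    """XOR a list of equal-length byte strings together."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

def split_key(key: bytes, n: int) -> list[bytes]:
    """n-of-n split: every share is needed to rebuild the key."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(xor_all([key] + shares))
    return shares

key = os.urandom(32)           # key used to encrypt the message locally
shares = split_key(key, 8)     # scatter these shares across the network
assert xor_all(shares) == key  # all shares present: key recoverable
# Once churn loses even one share, the key -- and the data -- is gone for good.
```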

Of course, Vanish doesn’t prevent the recipient of an email or the reader of a Facebook page from copying the data and pasting it into another file, just as Kindle’s deletion feature doesn’t prevent people from copying a book’s files and saving them on their computers. Vanish is just a prototype at this point, and it only works if all the people who read your Facebook entries or view your Flickr pictures have it installed on their computers as well; but it’s a good demonstration of how control affects file deletion. And while it’s a step in the right direction, it’s also new and therefore deserves further security analysis before being adopted on a wide scale.

We’ve lost control of data on some of the computers we own, and we’ve lost control of our data in the cloud. We’re not going to stop using Facebook and Twitter just because they’re not going to delete our data when we ask them to, and we’re not going to stop using Kindles and iPhones because they may delete our data when we don’t want them to. But we need to take back control of data in the cloud, and projects like Vanish show us how we can.

Now we need something that will protect our data when a large corporation decides to delete it.

This essay originally appeared in The Guardian.

EDITED TO ADD (9/30): Vanish has been broken, paper here.

Posted on September 10, 2009 at 6:08 AM
