Entries Tagged "Internet and society"

Page 6 of 6

The Death of Ephemeral Conversation

The political firestorm over former U.S. Rep. Mark Foley’s salacious instant messages hides another issue, one about privacy. We are rapidly turning into a society where our intimate conversations can be saved and made public later. This represents an enormous loss of freedom and liberty, and the only way to solve the problem is through legislation.

Everyday conversation used to be ephemeral. Whether face-to-face or by phone, we could be reasonably sure that what we said disappeared as soon as we said it. Of course, organized crime bosses worried about phone taps and room bugs, but that was the exception. Privacy was the default assumption.

This has changed. We now type our casual conversations. We chat in e-mail, with instant messages on our computer and SMS messages on our cellphones, and in comments on social networking Web sites like Friendster, LiveJournal, and MySpace. These conversations—with friends, lovers, colleagues, fellow employees—are not ephemeral; they leave their own electronic trails.

We know this intellectually, but we haven’t truly internalized it. We type on, engrossed in conversation, forgetting that we’re being recorded.

Foley’s instant messages were saved by the young men he talked to, but they could have also been saved by the instant messaging service. There are tools that allow both businesses and government agencies to monitor and log IM conversations. E-mail can be saved by your ISP or by the IT department in your corporation. Gmail, for example, saves everything, even if you delete it.

And these conversations can come back to haunt people—in criminal prosecutions, divorce proceedings or simply as embarrassing disclosures. During the 1998 Microsoft antitrust trial, the prosecution pored over masses of e-mail, looking for a smoking gun. Of course they found things; everyone says things in conversation that, taken out of context, can prove anything.

The moral is clear: If you type it and send it, prepare to explain it in public later.

And voice is no longer a refuge. Face-to-face conversations are still safe, but we know that the National Security Agency is monitoring everyone’s international phone calls. (They said nothing about SMS messages, but one can assume they were monitoring those too.) Routine recording of phone conversations is still rare—certainly the NSA has the capability—but will become more common as telephone calls continue migrating to the IP network.

If you find this disturbing, you should. Fewer conversations are ephemeral, and we’re losing control over the data. We trust our ISPs, employers and cellphone companies with our privacy, but again and again they’ve proven they can’t be trusted. Identity thieves routinely gain access to these repositories of our information. Paris Hilton and other celebrities have been the victims of hackers breaking into their cellphone providers’ networks. Google reads our Gmail and inserts context-dependent ads.

Even worse, normal constitutional protections don’t apply to much of this. The police need a court-issued warrant to search our papers or eavesdrop on our communications, but can simply issue a subpoena—or ask nicely or threateningly—for data of ours that is held by a third party, including stored copies of our communications.

The Justice Department wants to make this problem even worse, by forcing ISPs and others to save our communications—just in case we’re someday the target of an investigation. This is not only bad privacy and security, it’s a blow to our liberty as well. A world without ephemeral conversation is a world without freedom.

We can’t turn back technology; electronic communications are here to stay. But as technology makes our conversations less ephemeral, we need laws to step in and safeguard our privacy. We need a comprehensive data privacy law, protecting our data and communications regardless of where it is stored or how it is processed. We need laws forcing companies to keep it private and to delete it as soon as it is no longer needed.

And we need to remember, whenever we type and send, we’re being watched.

Foley is an anomaly. Most of us do not send instant messages in order to solicit sex with minors. Law enforcement might have a legitimate need to access Foley’s IMs, e-mails and cellphone calling logs, but that’s why there are warrants supported by probable cause—they help ensure that investigations are properly focused on suspected pedophiles, terrorists and other criminals. We saw this in the recent UK terrorist arrests; focused investigations on suspected terrorists foiled the plot, not broad surveillance of everyone without probable cause.

Without legal privacy protections, the world becomes one giant airport security area, where the slightest joke—or comment made years before—lands you in hot water. The world becomes one giant market-research study, where we are all life-long subjects. The world becomes a police state, where we all are assumed to be Foleys and terrorists in the eyes of the government.

This essay originally appeared on Forbes.com.

Posted on October 18, 2006 at 3:30 PM

Facebook and Data Control

Earlier this month, the popular social networking site Facebook learned a hard lesson in privacy. It introduced a new feature called “News Feeds” that shows an aggregation of everything members do on the site: added and deleted friends, a change in relationship status, a new favorite song, a new interest, etc. Instead of a member’s friends having to go to his page to view any changes, these changes are all presented to them automatically.

The outrage was enormous. One group, Students Against Facebook News Feeds, amassed over 700,000 members. Members planned to protest at the company’s headquarters. Facebook’s founder was completely stunned, and the company scrambled to add some privacy options.

Welcome to the complicated and confusing world of privacy in the information age. Facebook didn’t think there would be any problem; all it did was take available data and aggregate it in a novel way for what it perceived was its customers’ benefit. Facebook members instinctively understood that making this information easier to display was an enormous difference, and that privacy is more about control than about secrecy.

But on the other hand, Facebook members are just fooling themselves if they think they can control information they give to third parties.

Privacy used to be about secrecy. Someone defending himself in court against the charge of revealing someone else’s personal information could use as a defense the fact that it was not secret. But clearly, privacy is more complicated than that. Just because you tell your insurance company something doesn’t mean you don’t feel violated when that information is sold to a data broker. Just because you tell your friend a secret doesn’t mean you’re happy when he tells others. Same with your employer, your bank, or any company you do business with.

But as the Facebook example illustrates, privacy is much more complex. It’s about who you choose to disclose information to, how, and for what purpose. And the key word there is “choose.” People are willing to share all sorts of information, as long as they are in control.

When Facebook unilaterally changed the rules about how personal information was revealed, it reminded people that they weren’t in control. Its eight million members put their personal information on the site based on a set of rules about how that information would be used. It’s no wonder those members—high school and college kids who traditionally don’t care much about their own privacy—felt violated when Facebook changed the rules.

Unfortunately, Facebook can change the rules whenever it wants. Its Privacy Policy is 2,800 words long, and ends with a notice that it can change at any time. How many members ever read that policy, let alone read it regularly and check for changes? Not that a Privacy Policy is the same as a contract. Legally, Facebook owns all data members upload to the site. It can sell the data to advertisers, marketers, and data brokers. (Note: there is no evidence that Facebook does any of this.) It can allow the police to search its databases upon request. It can add new features that change who can access what personal data, and how.

But public perception is important. The lesson here for Facebook and other companies—for Google and MySpace and AOL and everyone else who hosts our e-mails and webpages and chat sessions—is that people believe they own their data. Even though the user agreement might technically give companies the right to sell the data, change the access rules to that data, or otherwise own that data, we—the users—believe otherwise. And when we who are affected by those actions start expressing our views—watch out.

What Facebook should have done was add the feature as an option, and allow members to opt in if they wanted to. Then, members who wanted to share their information via News Feeds could do so, and everyone else wouldn’t have felt that they had no say in the matter. This is definitely a gray area, and it’s hard to know beforehand which changes need to be implemented slowly and which won’t matter. Facebook, and other companies like it, need to talk to their members openly about new features. Remember: members want control.

The lesson for Facebook members might be even more jarring: if they think they have control over their data, they’re only deluding themselves. They can rebel against Facebook for changing the rules, but the rules have changed, regardless of what the company does.

Whenever you put data on a computer, you lose some control over it. And when you put it on the internet, you lose a lot of control over it. News Feeds brought Facebook members face to face with the full implications of putting their personal information on Facebook. It had just been an accident of the user interface that it was difficult to aggregate the data from multiple friends into a single place. And even if Facebook eliminates News Feeds entirely, a third party could easily write a program that does the same thing. Facebook could try to block the program, but would lose that technical battle in the end.

We’re all still wrestling with the privacy implications of the Internet, but the balance has tipped in favor of more openness. Digital data is just too easy to move, copy, aggregate, and display. Companies like Facebook need to respect the social rules of their sites, to think carefully about their default settings—they have an enormous impact on the privacy mores of the online world—and to give users as much control over their personal information as they can.

But we all need to remember that much of that control is illusory.

This essay originally appeared on Wired.com.

Posted on September 21, 2006 at 5:57 AM

Privacy Risks of Public Mentions

Interesting paper: “You are what you say: privacy risks of public mentions,” Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2006.

Abstract:

In today’s data-rich networked world, people express many aspects of their lives online. It is common to segregate different aspects in different places: you might write opinionated rants about movies in your blog under a pseudonym while participating in a forum or web site for scholarly discussion of medical ethics under your real name. However, it may be possible to link these separate identities, because the movies, journal articles, or authors you mention are from a sparse relation space whose properties (e.g., many items related to by only a few users) allow re-identification. This re-identification violates people’s intentions to separate aspects of their life and can have negative consequences; it also may allow other privacy violations, such as obtaining a stronger identifier like name and address.

This paper examines this general problem in a specific setting: re-identification of users from a public web movie forum in a private movie ratings dataset. We present three major results. First, we develop algorithms that can re-identify a large proportion of public users in a sparse relation space. Second, we evaluate whether private dataset owners can protect user privacy by hiding data; we show that this requires extensive and undesirable changes to the dataset, making it impractical. Third, we evaluate two methods for users in a public forum to protect their own privacy, suppression and misdirection. Suppression doesn’t work here either. However, we show that a simple misdirection strategy works well: mention a few popular items that you haven’t rated.
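The core re-identification idea can be sketched in a few lines: score each profile in the private dataset by the items it shares with a pseudonym’s public mentions, weighting rarely rated items more heavily, and take the best-scoring profile as the match. The data, names, and weighting below are hypothetical and only illustrate the general idea; they are not the authors’ algorithms.

    # Toy sketch of sparse-relation re-identification (hypothetical data, not the paper's method).
    from math import log

    # Items a pseudonymous forum user has publicly mentioned.
    public_mentions = {"movie_a", "movie_b", "obscure_movie_x"}

    # A "private" ratings dataset: user -> set of rated items.
    private_profiles = {
        "user_17": {"movie_a", "movie_b", "obscure_movie_x", "movie_c"},
        "user_42": {"movie_a", "movie_b", "movie_d"},
    }

    # Popularity = how many private users rated each item; rare items carry more signal.
    popularity = {}
    for items in private_profiles.values():
        for item in items:
            popularity[item] = popularity.get(item, 0) + 1

    def match_score(mentions, rated):
        """Weight each shared item by its rarity, so obscure items dominate the score."""
        n = len(private_profiles)
        return sum(log(n / popularity[item]) + 1.0 for item in mentions & rated)

    best = max(private_profiles, key=lambda u: match_score(public_mentions, private_profiles[u]))
    print(best)  # "user_17": the profile sharing the rarely rated item scores highest

In these terms, the suppression and misdirection defenses the abstract evaluates amount to removing items from, or adding popular unrated items to, the public mention set.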

Unfortunately, the paper is only available to ACM members.

EDITED TO ADD (8/24): Paper is here.

Posted on August 23, 2006 at 2:11 PM

MySpace Used as Forensics Tool

From CNN:

Detectives used profiles posted on the MySpace social networking Web site to identify six suspects in a rape and robbery….

[…]

She knew only their first names but their pictures were posted on MySpace.

“Primarily, we pulled up her friends list. It helped us identify some of the players,” said Bartley.

Posted on March 28, 2006 at 1:19 PM

Dog Poop Girl

Here’s the basic story: A woman and her dog are riding the Seoul subway. The dog poops on the floor. The woman refuses to clean it up, despite being told to by other passengers. Someone takes a picture of her, posts it on the Internet, and she is publicly shamed—and the story will live on the Internet forever. Then the blogosphere debates the notion of the Internet as a social enforcement tool.

The Internet is changing our notions of personal privacy, and how the public enforces social norms.

Daniel Solove writes:

The dog-shit-girl case involves a norm that most people would seemingly agree to—clean up after your dog. Who could argue with that one? But what about when norm enforcement becomes too extreme? Most norm enforcement involves angry scowls or just telling a person off. But having a permanent record of one’s norm violations is upping the sanction to a whole new level. The blogosphere can be a very powerful norm-enforcing tool, allowing bloggers to act as a cyber-posse, tracking down norm violators and branding them with digital scarlet letters.

And that is why the law might be necessary—to modulate the harmful effects when the norm enforcement system gets out of whack. In the United States, privacy law is often the legal tool called in to address the situation. Suppose the dog poop incident occurred in the United States. Should the woman have legal redress under the privacy torts?

If this incident is any guide, then anyone acting outside the accepted norms of whatever segment of humanity surrounds him had better tread lightly. The question we need to answer is: is this the sort of society we want to live in? And if not, what technological or legal controls do we need to put in place to ensure that we don’t?

Solove again:

I believe that, as complicated as it might be, the law must play a role here. The stakes are too important. While entering law into the picture could indeed stifle freedom of discussion on the Internet, allowing excessive norm enforcement can be stifling to freedom as well.

All the more reason why we need to rethink old notions of privacy. Under existing notions, privacy is often thought of in a binary way—something either is private or public. According to the general rule, if something occurs in a public place, it is not private. But a more nuanced view of privacy would suggest that this case involved taking an event that occurred in one context and significantly altering its nature—by making it permanent and widespread. The dog-shit-girl would have been just a vague image in a few people’s memory if it hadn’t been for the photo entering cyberspace and spreading around faster than an epidemic. Despite the fact that the event occurred in public, there was no need for her image and identity to be spread across the Internet.

Could the law provide redress? This is a complicated question; certainly under existing doctrine, making a case would have many hurdles. And some will point to practical problems. Bloggers often don’t have deep pockets. But perhaps the possibility of lawsuits might help shape the norms of the Internet. In the end, I strongly doubt that the law alone can address this problem; but its greatest contribution might be to help along the development of blogging norms that will hopefully prevent more cases such as this one from having crappy endings.

Posted on July 29, 2005 at 4:21 PM

