Entries Tagged "Internet and society"


Helpful Hint for Fugitives: Don't Update Your Location on Facebook

Fugitive caught after updating his status on Facebook:

Investigators scoured social networking sites such as Facebook and MySpace but initially could find no trace of him and were unable to pin down his location in Mexico.

Several months later, a secret service agent, Seth Reeg, checked Facebook again and up popped Maxi Sopo. His photo showed him partying in front of a backdrop featuring logos of BMW and Courvoisier cognac, sporting a black jacket adorned with a not-so-subtle white lion.

Although Sopo’s profile was set to private, his list of friends was not. Scoville started combing through it and was surprised to see that one friend listed an affiliation with the justice department. He sent a message requesting a phone call.

“We figured this was a person we could probably trust to keep our inquiry discreet,” Scoville said.

Proving the 2.0 adage that a friend on Facebook is rarely a friend indeed, the former official said he had met Sopo in Cancun’s nightclubs a few times, but did not really know him and had no idea he was a fugitive. The official learned where Sopo was living and passed that information back to Scoville, who provided it to Mexican authorities. They arrested Sopo last month.

It’s easy to say “so dumb,” and it would be true, but what’s interesting is how people just don’t think through the privacy implications of putting their information on the Internet. Facebook is how we interact with friends, and we think of it in the frame of interacting with friends. We don’t think that our employers might be looking — they’re not our friends! — that the information will be around forever, or that it might be abused. Privacy isn’t salient; chatting with friends is.

Posted on October 19, 2009 at 7:55 AM

Second SHB Workshop Liveblogging (8)

The penultimate session of the conference was “Privacy,” moderated by Tyler Moore.

Alessandro Acquisti, Carnegie Mellon University (suggested reading: What Can Behavioral Economics Teach Us About Privacy?; Privacy in Electronic Commerce and the Economics of Immediate Gratification), presented research on how people value their privacy. He started by listing a variety of cognitive biases that affect privacy decisions: illusion of control, overconfidence, optimism bias, endowment effect, and so on. He discussed two experiments. The first demonstrated a “herding effect”: if a subject believes that others reveal sensitive behavior, the subject is more likely to also reveal sensitive behavior. The second examined the “frog effect”: do privacy intrusions alert or desensitize people to revealing personal information? What he found is that people tend to set their privacy level at the beginning of a survey, and don’t respond well to being asked easy questions at first and then sensitive questions at the end. In the discussion, Joe Bonneau asked him about the notion that people’s privacy protections tend to ratchet up over time; he didn’t have conclusive evidence, but gave several possible explanations for the phenomenon.

Adam Joinson, University of Bath (suggested reading: Privacy, Trust and Self-Disclosure Online; Privacy concerns and privacy actions), also studies how people value their privacy. He talked about expressive privacy — privacy that allows people to express themselves and form interpersonal relationships. His research showed that differences between how people use Facebook in different countries depend on how much people trust Facebook as a company, rather than how much people trust other Facebook users. Another study looked at posts from Secret Tweet and Twitter. He found 16 markers that allowed him to automatically determine which tweets contain sensitive personal information and which do not, with high probability. Then he tried to determine if people with large Twitter followings post fewer secrets than people who are only twittering to a few people. He found absolutely no difference.

Peter Neumann, SRI (suggested reading: Holistic systems; Risks; Identity and Trust in Context), talked about lack of medical privacy (too many people have access to your data), about voting (the privacy problem makes the voting problem a lot harder, and the end-to-end voting security/privacy problem is much harder than just securing voting machines), and privacy in China (the government is requiring all computers sold in China to be sold with software allowing them to eavesdrop on the users). Any would-be solution needs to reflect the ubiquity of the threat. When we design systems, we need to anticipate what the privacy problems will be. Privacy problems are everywhere you look, and ordinary people have no idea of the depth of the problem.

Eric Johnson, Dartmouth College (suggested reading: Access Flexibility with Escalation and Audit; Security through Information Risk Management), studies the information access problem from a business perspective. He’s been doing field studies in companies like retail banks and investment banks, and found that role-based access control fails because companies can’t determine who has what role. Even worse, roles change quickly, especially in large complex organizations. For example, one business group of 3000 people experienced 1000 role changes within three months. The result is that organizations do access control badly, either over-entitling or under-entitling people. But since getting the job done is the most important thing, organizations tend to over-entitle: give people more access than they need. His current work is to find the right set of incentives and controls to set access more properly. The challenge is to do this without making people risk averse. In the discussion, he agreed that a perfect access control system is not possible, and that organizations should probably allow a certain number of access control violations — similar to the idea of posting a 55 mph speed limit but not ticketing people unless they go over 70 mph.

Christine Jolls, Yale Law School (suggested reading: Rationality and Consent in Privacy Law, Employee Privacy), made the point that people regularly share their most private information with their intimates — so privacy is not about secrecy, it’s more about control. There are moments when people make pretty big privacy decisions. For example, they grant employers the rights to monitor their e-mail, or test their urine without notice. In general, courts hold that blanket waivers of privacy rights — “you can test my urine on any day in the future” — are not valid, but immediate waivers of privacy rights — “you can test my urine today” — are. Jolls believes that this is reasonable for several reasons, such as optimism bias and an overfocus on the present at the expense of the future. Without realizing it, the courts have implemented the system that behavioral economics would find optimal. During the discussion, she talked about how coercion figures into this; the U.S. legal system tends not to be concerned with it.

Andrew Adams, University of Reading (suggested reading: Regulating CCTV), also looks at attitudes toward privacy on social networking services. His results are preliminary, based on interviews with university students in Canada, Japan, and the UK, and are very concordant with what danah boyd and Joe Bonneau said earlier. From the UK: People join social networking sites to increase their level of interaction with people they already know in real life. Revealing personal information is okay, but revealing too much is bad. Even more interestingly, it’s not okay to reveal more about others than they reveal themselves. From Japan: People are more open to making friends online. There’s more anonymity. It’s not okay to reveal information about others, but “the fault of this lies as much with the person whose data was revealed in not choosing friends wisely.” This victim responsibility is a common theme with other privacy and security elements in Japan. Data from Canada is still being compiled.

Great phrase: the “laundry belt” — close enough for students to go home on weekends with their laundry, but far enough away so they don’t feel as if their parents are looking over their shoulder — typically two hours by public transportation (in the UK).

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 3:01 PM

Second SHB Workshop Liveblogging (6)

The first session of the morning was “Foundations,” which is kind of a catch-all for a variety of things that didn’t really fit anywhere else. Rachel Greenstadt moderated.

Terence Taylor, International Council for the Life Sciences (suggested video to watch: Darwinian Security; Natural Security), talked about the lessons evolution teaches about living with risk. Successful species didn’t survive by eliminating the risks of their environment; they survived by adaptation. Adaptation isn’t always what you think. For example, you could view the collapse of the Soviet Union as a failure to adapt, but you could also view it as successful adaptation. Risk is good. Risk is essential for the survival of a society, because risk-takers are the drivers of change. In the discussion phase, John Mueller pointed out a key difference between human and biological systems: humans tend to respond dramatically to anomalous events (the anthrax attacks), while biological systems respond to sustained change. And David Livingstone Smith asked about the difference between biological adaptation, which serves the reproductive success of an organism’s genes even at the expense of the organism itself, and security adaptation. (I recommend the book he edited: Natural Security: A Darwinian Approach to a Dangerous World.)

Andrew Odlyzko, University of Minnesota (suggested reading: Network Neutrality, Search Neutrality, and the Never-Ending Conflict between Efficiency and Fairness in Markets, Economics, Psychology, and Sociology of Security), discussed human space vs. cyberspace. People cannot build secure systems — we know that — but people also cannot live with secure systems. We require a certain amount of flexibility in our systems. And finally, people don’t need secure systems. We survive with an astounding amount of insecurity in our world. The problem with cyberspace is that it was originally conceived as separate from the physical world, something that could correct for the inadequacies of the physical world. Really, the two are intertwined, and human space more often corrects for the inadequacies of cyberspace. Lessons: build messy systems, not clean ones; create a web of ties to other systems; create permanent records.

danah boyd, Microsoft Research (suggested reading: Taken Out of Context — American Teen Sociality in Networked Publics), does ethnographic studies of teens in cyberspace. Teens tend not to lie to their friends in cyberspace, but they lie to the system. From an early age, they’ve been taught that they need to lie online to be safe. Teens regularly share their passwords: with their parents when forced, or with their best friend or significant other. This is a way of demonstrating trust. It’s part of the social protocol for this generation. In general, teens don’t use social media in the same way as adults do. And when they grow up, they won’t use social media in the same way as today’s adults do. Teens view privacy in terms of control, and take their cues about privacy from celebrities and how they use social media. And their sense of privacy is much more nuanced and complicated. In the discussion phase, danah wasn’t sure whether the younger generation would be more or less susceptible to Internet scams than the rest of us — they’re not nearly as technically savvy as we might think they are. “The only thing that saves teenagers is fear of their parents”; they try to lock their parents out, and lock others out in the process. Socio-economic status matters a lot, in ways that she is still trying to figure out. There are three different types of social networks (personal networks, articulated networks, and behavioral networks), and they’re different.

Mark Levine, Lancaster University (suggested reading: The Kindness of Crowds; Intra-group Regulation of Violence: Bystanders and the (De)-escalation of Violence), does social psychology. He argued against the common belief that groups are bad (mob violence, mass hysteria, peer group pressure). He collects data from UK CCTV cameras, searches it for aggressive behavior, and studies when and how bystanders either help escalate or de-escalate those situations. Results: as groups get bigger, there is no increase in anti-social acts and a significant increase in pro-social acts. He has much more analysis and results, too complicated to summarize here. One key finding: when a third party intervenes in an aggressive interaction, it is much more likely to de-escalate. Basically, groups can act against violence. “When it comes to violence (and security), group processes are part of the solution — not part of the problem?”

Jeff MacKie-Mason, University of Michigan (suggested reading: Humans are smart devices, but not programmable; Security when people matter; A Social Mechanism for Supporting Home Computer Security), is an economist: “Security problems are incentive problems.” He discussed motivation, and how to design systems to take motivation into account. Humans are smart devices; they can’t be programmed, but they can be influenced through the sciences of motivational behavior: microeconomics, game theory, social psychology, psychodynamics, and personality psychology. He gave a couple of general examples of how these theories can inform security system design.

Joe Bonneau, Cambridge University, talked about social networks like Facebook, and privacy. People misunderstand why privacy and security are important on social networking sites like Facebook. People underestimate what Facebook really is; it really is a reimplementation of the entire Internet. “Everything on the Internet is becoming social,” and that makes security different. Phishing is different, 419-style scams are different. Social context makes some scams easier; social networks are fun, noisy, and unpredictable. “People use social networking systems with their brain turned off.” But social context can be used to spot frauds and anomalies, and can be used to establish trust.

Three more sessions to go. (I am enjoying liveblogging the event. It’s helping me focus and pay closer attention.)

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 9:54 AM

Second SHB Workshop Liveblogging (2)

The first session was about deception, moderated by David Clark.

Frank Stajano, Cambridge University (suggested reading: Understanding victims: Six principles for systems security), presented research with Paul Wilson, who films actual scams for “The Real Hustle.” His point is that we build security systems based on our “logic,” but users don’t always follow our logic. It’s fraudsters who really understand what people do, so we need to understand what the fraudsters understand. Things like distraction, greed, unknown accomplices, and social compliance are important.

David Livingstone Smith, University of New England (suggested reading: Less than human: self-deception in the imagining of others; Talk on Lying at La Ciudad de Las Ideas; a subsequent discussion; Why War?), is a philosopher by training, and goes back to basics: “What are we talking about?” A theoretical definition of deception — “that which something has to have to fall under a term” — is difficult to pin down. “Cause to have a false belief,” from the Oxford English Dictionary, is inadequate. “To deceive is to intentionally cause someone to have a false belief” also doesn’t work. “Intentionally causing someone to have a false belief that the speaker knows to be false” still isn’t good enough. The fundamental problem is that these are anthropocentric definitions. Deception is not unique to humans; it gives organisms an evolutionary edge. For example, the mirror orchid fools a wasp into landing on it by looking like, and giving off chemicals that mimic, the female wasp. This example shows that we need a broader definition of “purpose.” His formal definition: “For systems A and B, A deceives B iff A possesses some character C with proper function F, and B possesses a mechanism C* with the proper function F* of producing representations, such that the proper function of C is to cause C* to fail to perform F* by causing C* to form false representations, and C does so in virtue of performing F, and B’s falsely representing enables some feature of A to perform its proper function.”

I spoke next, about the psychology of Conficker, how the human brain buys security, and why science fiction writers shouldn’t be hired to think about terrorism risks (to be published on Wired.com next week).

Dominic Johnson, University of Edinburgh (suggested reading: Paradigm Shifts in Security Strategy; Perceptions of victory and defeat), talked about his chapter in the book Natural Security: A Darwinian Approach to a Dangerous World. Life has 3.5 billion years of experience in security innovation; let’s look at how biology approaches security. Biomimicry, ecology, paleontology, animal behavior, evolutionary psychology, immunology, epidemiology, selection, and adaptation are all relevant. Redundancy is a very important survival tool for species. Here’s an adaptation example: the 9/11 threat was real and we knew about it, but we didn’t do anything. His thesis: adaptation to novel security threats tends to occur after major disasters. There are many historical examples of this: Pearl Harbor, for example. Causes include sensory biases, psychological biases, leadership biases, organizational biases, and political biases — all pushing us towards maintaining the status quo. So it’s natural for us to adapt poorly to security threats in the modern world. A questioner from the audience asked whether control theory had any relevance to this model.

Jeff Hancock, Cornell University (suggested reading: On Lying and Being Lied To: A Linguistic Analysis of Deception in Computer-Mediated Communication; Separating Fact From Fiction: An Examination of Deceptive Self-Presentation in Online Dating Profiles), studies interpersonal deception: how the way we lie to each other intersects with communications technologies, how technologies change the way we lie, and whether technology can be used to detect lying. Despite new technology, people lie for traditional reasons. For example: on dating sites, men tend to lie about their height and women tend to lie about their weight. The recordability of the Internet also changes how we lie. The use of the first person singular tends to go down the more people lie. He verified this in many spheres, such as how people describe themselves in chat rooms, and true versus false statements that the Bush administration made about 9/11 and Iraq. The effect was more pronounced when administration officials were answering questions than when they were reading prepared remarks.
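
To make that last marker concrete, here is a minimal sketch (my illustration only, not Hancock’s actual feature set or model) that computes the rate of first-person singular pronouns in a piece of text:

# Rough illustration of the first-person-singular rate as a text feature.
# The pronoun list and tokenizer are assumptions, not the published method.
import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text):
    # Tokenize into lowercase words and return the fraction that are
    # first-person singular pronouns.
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in FIRST_PERSON_SINGULAR for w in words) / len(words)

print(first_person_rate("I swear my account of the meeting was accurate."))  # higher rate
print(first_person_rate("The account of the meeting was accurate."))         # lower rate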

EDITED TO ADD (6/11): Adam Shostack liveblogged this session, too. And Ross’s liveblogging is in his blog post’s comments.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 9:37 AM

The Future of Ephemeral Conversation

When he becomes president, Barack Obama will have to give up his BlackBerry. Aides are concerned that his unofficial conversations would become part of the presidential record, subject to subpoena and eventually made public as part of the country’s historical record.

This reality of the information age might be particularly stark for the president, but it’s no less true for all of us. Conversation used to be ephemeral. Whether face-to-face or by phone, we could be reasonably sure that what we said disappeared as soon as we said it. Organized crime bosses worried about phone taps and room bugs, but that was the exception. Privacy was just assumed.

This has changed. We chat in e-mail, over SMS and IM, and on social networking websites like Facebook, MySpace, and LiveJournal. We blog and we Twitter. These conversations — with friends, lovers, colleagues, members of our cabinet — are not ephemeral; they leave their own electronic trails.

We know this intellectually, but we haven’t truly internalized it. We type on, engrossed in conversation, forgetting that we’re being recorded and that those recordings might come back to haunt us later.

Oliver North learned this, way back in 1987, when messages he thought he had deleted were saved by the White House PROFS system, and then subpoenaed in the Iran-Contra affair. Bill Gates learned this in 1998 when his conversational e-mails were provided to opposing counsel as part of the antitrust litigation discovery process. Mark Foley learned this in 2006 when his instant messages were saved and made public by the underage men he talked to. Paris Hilton learned this in 2005 when her cell phone account was hacked, and Sarah Palin learned it earlier this year when her Yahoo e-mail account was hacked. Someone in George W. Bush’s administration learned this, and millions of e-mails went mysteriously and conveniently missing.

Ephemeral conversation is dying.

Cardinal Richelieu famously said, “If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged.” When all our ephemeral conversations can be saved for later examination, different rules have to apply. Conversation is not the same thing as correspondence. Words uttered in haste over morning coffee, whether spoken in a coffee shop or thumbed on a BlackBerry, are not official pronouncements. Discussions in a meeting, whether held in a boardroom or a chat room, are not the same as answers at a press conference. And privacy isn’t just about having something to hide; it has enormous value to democracy, liberty, and our basic humanity.

We can’t turn back technology; electronic communications are here to stay and even our voice conversations are threatened. But as technology makes our conversations less ephemeral, we need laws to step in and safeguard ephemeral conversation. We need a comprehensive data privacy law, protecting our data and communications regardless of where it is stored or how it is processed. We need laws forcing companies to keep it private and delete it as soon as it is no longer needed. Laws requiring ISPs to store e-mails and other personal communications are exactly what we don’t need.

Rules pertaining to government need to be different, because of the power differential. Subjecting the president’s communications to eventual public review increases liberty because it reduces the government’s power with respect to the people. Subjecting our communications to government review decreases liberty because it reduces our power with respect to the government. The president, as well as other members of government, need some ability to converse ephemerally — just as they’re allowed to have unrecorded meetings and phone calls — but more of their actions need to be subject to public scrutiny.

But laws can only go so far. Law or no law, when something is made public it’s too late. And many of us like having complete records of all our e-mail at our fingertips; it’s like our offline brains.

In the end, this is cultural.

The Internet is the greatest generation gap since rock and roll. We’re now witnessing one aspect of that generation gap: the younger generation chats digitally, and the older generation treats those chats as written correspondence. Until our CEOs blog, our congressmen Twitter, and our world leaders send each other LOLcats — until we have a presidential election where both candidates have a complete history on social networking sites from before they were teenagers — we aren’t fully an information age society.

When everyone leaves a public digital trail of their personal thoughts since birth, no one will think twice about it being there. Obama might be on the younger side of the generation gap, but the rules he’s operating under were written by the older side. It will take another generation before society’s tolerance for digital ephemera changes.

This essay previously appeared on The Wall Street Journal website (not the print newspaper), and is an update of something I wrote previously.

Posted on November 24, 2008 at 2:06 PM

Hacking Reputation in MySpace and Facebook

I’ll be the first to admit it: I know next to nothing about MySpace or Facebook. I do know that they’re social networking sites, and that — at least to some extent — your reputation is based on who your “friends” are and what they say about you.

Which means that this follows, like day follows night. “Fake Your Space” is a site where you can hire fake friends to leave their pictures and personalized comments on your page. Now you can pretend that you’re more popular than you actually are:

FakeYourSpace is an exciting new service that enables normal everyday people like me and you to have Hot friends on popular social networking sites such as MySpace and FaceBook. Not only will you be able to see these Gorgeous friends on your friends list, but FakeYourSpace enables you to create customized messages and comments for our Models to leave you on your comment wall. FakeYourSpace makes it easy for any regular person to make it seem like they have a Model for a friend. It doesn’t stop there however. Maybe you want to appear as if you have a Model for a lover. FakeYourSpace can make this happen!

What’s next? Services that verify friends on your friends’ MySpace pages? Services that block friend verification services? Where will this all end up?

Posted on December 7, 2006 at 7:29 AM

The Death of Ephemeral Conversation

The political firestorm over former U.S. Rep. Mark Foley’s salacious instant messages hides another issue, one about privacy. We are rapidly turning into a society where our intimate conversations can be saved and made public later. This represents an enormous loss of freedom and liberty, and the only way to solve the problem is through legislation.

Everyday conversation used to be ephemeral. Whether face-to-face or by phone, we could be reasonably sure that what we said disappeared as soon as we said it. Of course, organized crime bosses worried about phone taps and room bugs, but that was the exception. Privacy was the default assumption.

This has changed. We now type our casual conversations. We chat in e-mail, with instant messages on our computer and SMS messages on our cellphones, and in comments on social networking Web sites like Friendster, LiveJournal, and MySpace. These conversations — with friends, lovers, colleagues, fellow employees — are not ephemeral; they leave their own electronic trails.

We know this intellectually, but we haven’t truly internalized it. We type on, engrossed in conversation, forgetting that we’re being recorded.

Foley’s instant messages were saved by the young men he talked to, but they could have also been saved by the instant messaging service. There are tools that allow both businesses and government agencies to monitor and log IM conversations. E-mail can be saved by your ISP or by the IT department in your corporation. Gmail, for example, saves everything, even if you delete it.

And these conversations can come back to haunt people — in criminal prosecutions, divorce proceedings or simply as embarrassing disclosures. During the 1998 Microsoft anti-trust trial, the prosecution pored over masses of e-mail, looking for a smoking gun. Of course they found things; everyone says things in conversation that, taken out of context, can prove anything.

The moral is clear: If you type it and send it, prepare to explain it in public later.

And voice is no longer a refuge. Face-to-face conversations are still safe, but we know that the National Security Agency is monitoring everyone’s international phone calls. (They said nothing about SMS messages, but one can assume they were monitoring those too.) Routine recording of phone conversations is still rare — certainly the NSA has the capability — but will become more common as telephone calls continue migrating to the IP network.

If you find this disturbing, you should. Fewer conversations are ephemeral, and we’re losing control over the data. We trust our ISPs, employers and cellphone companies with our privacy, but again and again they’ve proven they can’t be trusted. Identity thieves routinely gain access to these repositories of our information. Paris Hilton and other celebrities have been the victims of hackers breaking into their cellphone providers’ networks. Google reads our Gmail and inserts context-dependent ads.

Even worse, normal constitutional protections don’t apply to much of this. The police need a court-issued warrant to search our papers or eavesdrop on our communications, but can simply issue a subpoena — or ask nicely or threateningly — for data of ours that is held by a third party, including stored copies of our communications.

The Justice Department wants to make this problem even worse, by forcing ISPs and others to save our communications — just in case we’re someday the target of an investigation. This is not only bad privacy and security, it’s a blow to our liberty as well. A world without ephemeral conversation is a world without freedom.

We can’t turn back technology; electronic communications are here to stay. But as technology makes our conversations less ephemeral, we need laws to step in and safeguard our privacy. We need a comprehensive data privacy law, protecting our data and communications regardless of where it is stored or how it is processed. We need laws forcing companies to keep it private and to delete it as soon as it is no longer needed.

And we need to remember, whenever we type and send, we’re being watched.

Foley is an anomaly. Most of us do not send instant messages in order to solicit sex with minors. Law enforcement might have a legitimate need to access Foley’s IMs, e-mails and cellphone calling logs, but that’s why there are warrants supported by probable cause: they help ensure that investigations are properly focused on suspected pedophiles, terrorists and other criminals. We saw this in the recent UK terrorist arrests; focused investigations of suspected terrorists foiled the plot, not broad surveillance of everyone without probable cause.

Without legal privacy protections, the world becomes one giant airport security area, where the slightest joke — or comment made years before — lands you in hot water. The world becomes one giant market-research study, where we are all life-long subjects. The world becomes a police state, where we all are assumed to be Foleys and terrorists in the eyes of the government.

This essay originally appeared on Forbes.com.

Posted on October 18, 2006 at 3:30 PM

Facebook and Data Control

Earlier this month, the popular social networking site Facebook learned a hard lesson in privacy. It introduced a new feature called “News Feeds” that shows an aggregation of everything members do on the site: added and deleted friends, a change in relationship status, a new favorite song, a new interest, etc. Instead of a member’s friends having to go to his page to view any changes, these changes are all presented to them automatically.

The outrage was enormous. One group, Students Against Facebook News Feeds, amassed over 700,000 members. Members planned to protest at the company’s headquarters. Facebook’s founder was completely stunned, and the company scrambled to add some privacy options.

Welcome to the complicated and confusing world of privacy in the information age. Facebook didn’t think there would be any problem; all it did was take available data and aggregate it in a novel way for what it perceived was its customers’ benefit. Facebook members instinctively understood that making this information easier to display was an enormous difference, and that privacy is more about control than about secrecy.

But on the other hand, Facebook members are just fooling themselves if they think they can control information they give to third parties.

Privacy used to be about secrecy. Someone defending himself in court against the charge of revealing someone else’s personal information could use as a defense the fact that it was not secret. But clearly, privacy is more complicated than that. Just because you tell your insurance company something doesn’t mean you don’t feel violated when that information is sold to a data broker. Just because you tell your friend a secret doesn’t mean you’re happy when he tells others. Same with your employer, your bank, or any company you do business with.

But as the Facebook example illustrates, privacy is much more complex. It’s about who you choose to disclose information to, how, and for what purpose. And the key word there is “choose.” People are willing to share all sorts of information, as long as they are in control.

When Facebook unilaterally changed the rules about how personal information was revealed, it reminded people that they weren’t in control. Its eight million members put their personal information on the site based on a set of rules about how that information would be used. It’s no wonder those members — high school and college kids who traditionally don’t care much about their own privacy — felt violated when Facebook changed the rules.

Unfortunately, Facebook can change the rules whenever it wants. Its Privacy Policy is 2,800 words long, and ends with a notice that it can change at any time. How many members ever read that policy, let alone read it regularly and check for changes? Not that a Privacy Policy is the same as a contract. Legally, Facebook owns all data members upload to the site. It can sell the data to advertisers, marketers, and data brokers. (Note: there is no evidence that Facebook does any of this.) It can allow the police to search its databases upon request. It can add new features that change who can access what personal data, and how.

But public perception is important. The lesson here for Facebook and other companies — for Google and MySpace and AOL and everyone else who hosts our e-mails and webpages and chat sessions — is that people believe they own their data. Even though the user agreement might technically give companies the right to sell the data, change the access rules to that data, or otherwise own that data, we — the users — believe otherwise. And when we who are affected by those actions start expressing our views — watch out.

What Facebook should have done was add the feature as an option, and allow members to opt in if they wanted to. Then, members who wanted to share their information via News Feeds could do so, and everyone else wouldn’t have felt that they had no say in the matter. This is definitely a gray area, and it’s hard to know beforehand which changes need to be implemented slowly and which won’t matter. Facebook, and others, need to talk to their members openly about new features. Remember: members want control.

The lesson for Facebook members might be even more jarring: if they think they have control over their data, they’re only deluding themselves. They can rebel against Facebook for changing the rules, but the rules have changed, regardless of what the company does.

Whenever you put data on a computer, you lose some control over it. And when you put it on the internet, you lose a lot of control over it. News Feeds brought Facebook members face to face with the full implications of putting their personal information on Facebook. It had just been an accident of the user interface that it was difficult to aggregate the data from multiple friends into a single place. And even if Facebook eliminates News Feeds entirely, a third party could easily write a program that does the same thing. Facebook could try to block the program, but would lose that technical battle in the end.

We’re all still wrestling with the privacy implications of the Internet, but the balance has tipped in favor of more openness. Digital data is just too easy to move, copy, aggregate, and display. Companies like Facebook need to respect the social rules of their sites, to think carefully about their default settings — they have an enormous impact on the privacy mores of the online world — and to give users as much control over their personal information as they can.

But we all need to remember that much of that control is illusory.

This essay originally appeared on Wired.com.

Posted on September 21, 2006 at 5:57 AM

Privacy Risks of Public Mentions

Interesting paper: “You are what you say: privacy risks of public mentions,” Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2006.

Abstract:

In today’s data-rich networked world, people express many aspects of their lives online. It is common to segregate different aspects in different places: you might write opinionated rants about movies in your blog under a pseudonym while participating in a forum or web site for scholarly discussion of medical ethics under your real name. However, it may be possible to link these separate identities, because the movies, journal articles, or authors you mention are from a sparse relation space whose properties (e.g., many items related to by only a few users) allow re-identification. This re-identification violates people’s intentions to separate aspects of their life and can have negative consequences; it also may allow other privacy violations, such as obtaining a stronger identifier like name and address. This paper examines this general problem in a specific setting: re-identification of users from a public web movie forum in a private movie ratings dataset. We present three major results. First, we develop algorithms that can re-identify a large proportion of public users in a sparse relation space. Second, we evaluate whether private dataset owners can protect user privacy by hiding data; we show that this requires extensive and undesirable changes to the dataset, making it impractical. Third, we evaluate two methods for users in a public forum to protect their own privacy, suppression and misdirection. Suppression doesn’t work here either. However, we show that a simple misdirection strategy works well: mention a few popular items that you haven’t rated.
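
To make the linkage idea concrete, here is a minimal sketch (an illustration of the general approach, not the paper’s actual algorithms) that scores private rating profiles against the items a public forum user has mentioned, weighting rare items more heavily because items related to by only a few users carry the most identifying information:

# Illustrative sketch only: rank private profiles by how well they match a
# public user's mentioned items, giving rare items more weight. The names and
# the scoring function are my assumptions, not the paper's method.
from collections import defaultdict
from math import log

def rank_candidates(public_mentions, private_ratings):
    # public_mentions: set of item ids mentioned by one public forum user.
    # private_ratings: dict mapping private user id -> set of rated item ids.
    popularity = defaultdict(int)
    for items in private_ratings.values():
        for item in items:
            popularity[item] += 1
    n = len(private_ratings)
    scores = {}
    for user, items in private_ratings.items():
        shared = public_mentions & items
        # Rare items contribute more evidence than popular ones.
        scores[user] = sum(log(n / popularity[item]) for item in shared)
    return sorted(scores, key=scores.get, reverse=True)

ratings = {"u1": {"titanic", "the_matrix"},
           "u2": {"obscure_documentary", "rare_short_film", "the_matrix"},
           "u3": {"titanic", "rare_short_film"}}
print(rank_candidates({"obscure_documentary", "rare_short_film"}, ratings))
# ['u2', 'u3', 'u1'] -- the profile sharing the rare items ranks first

In this toy scoring, the misdirection strategy corresponds to adding popular items to the public mentions: each contributes only a small weight, spread across many candidates, so it blurs the ranking without requiring the user to suppress the rare items they actually want to discuss.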

Unfortunately, the paper is only available to ACM members.

EDITED TO ADD (8/24): Paper is here.

Posted on August 23, 2006 at 2:11 PM

MySpace Used as Forensics Tool

From CNN:

Detectives used profiles posted on the MySpace social networking Web site to identify six suspects in a rape and robbery….

[…]

She knew only their first names but their pictures were posted on MySpace.

“Primarily, we pulled up her friends list. It helped us identify some of the players,” said Bartley.

Posted on March 28, 2006 at 1:19 PM
