Entries Tagged "generations"


Password Sharing Among American Teenagers

Interesting article from the New York Times on password sharing as a show of affection.

“It’s a sign of trust,” Tiffany Carandang, a high school senior in San Francisco, said of the decision she and her boyfriend made several months ago to share passwords for e-mail and Facebook. “I have nothing to hide from him, and he has nothing to hide from me.”

“That is so cute,” said Cherry Ng, 16, listening in to her friend’s comments to a reporter outside school. “They really trust each other.”

“We do,” said Ms. Carandang, 17. “I know he’d never do anything to hurt my reputation,” she added.

It doesn’t always end so well, of course. Changing a password is simple, but students, counselors and parents say that damage is often done before a password is changed, or that the sharing of online lives can be the reason a relationship falters.

Ethnographer danah boyd discusses what’s happening:

For Meixing, sharing her password with her boyfriend is a way of being connected. But it’s precisely these kinds of narratives that have prompted all sorts of horror by adults over the last week since that NYTimes article came out. I can’t count the number of people who have gasped “How could they!?!” at me. For this reason, I feel the need to pick up on an issue that the NYTimes left out.

The idea of teens sharing passwords didn’t come out of thin air. In fact, it was normalized by adults. And not just any adult. This practice is the product of parental online safety norms. In most households, it’s quite common for young children to give their parents their passwords. With elementary and middle school youth, this is often a practical matter: children lose their passwords pretty quickly. Furthermore, most parents reasonably believe that young children should be supervised online. As tweens turn into teens, the narrative shifts. Some parents continue to require passwords be forked over, using explanations like “because I’m your mother.” But many parents use the language of “trust” to explain why teens should share their passwords with them.

Much more in her post.

Related: a profile of danah boyd.

Posted on January 27, 2012 at 6:39 AM

Late Teens and Facebook Privacy

“Facebook Privacy Settings: Who Cares?” by danah boyd and Eszter Hargittai.

Abstract: With over 500 million users, the decisions that Facebook makes about its privacy settings have the potential to influence many people. While its changes in this domain have often prompted privacy advocates and news media to critique the company, Facebook has continued to attract more users to its service. This raises a question about whether or not Facebook’s changes in privacy approaches matter and, if so, to whom. This paper examines the attitudes and practices of a cohort of 18– and 19–year–olds surveyed in 2009 and again in 2010 about Facebook’s privacy settings. Our results challenge widespread assumptions that youth do not care about and are not engaged with navigating privacy. We find that, while not universal, modifications to privacy settings have increased during a year in which Facebook’s approach to privacy was hotly contested. We also find that both frequency and type of Facebook use as well as Internet skill are correlated with making modifications to privacy settings. In contrast, we observe few gender differences in how young adults approach their Facebook privacy settings, which is notable given that gender differences exist in so many other domains online. We discuss the possible reasons for our findings and their implications.

Posted on August 11, 2010 at 6:00 AM

Young People, Privacy, and the Internet

There’s a lot out there on this topic. I’ve already linked to danah boyd’s excellent SXSW talk (and her work in general), my essay on privacy and control, and my talk — “Security, Privacy, and the Generation Gap” — which I’ve given four times in the past two months.

Last week, two new papers were published on the topic.

“Youth, Privacy, and Reputation” is a literature review published by Harvard’s Berkman Center. It’s long, but an excellent summary of what’s out there on the topic:

Conclusions: The prevailing discourse around youth and privacy assumes that young people don’t care about their privacy because they post so much personal information online. The implication is that posting personal information online puts them at risk from marketers, pedophiles, future employers, and so on. Thus, policy and technical solutions are proposed that presume that young people would not put personal information online if they understood the consequences. However, our review of the literature suggests that young people care deeply about privacy, particularly with regard to parents and teachers viewing personal information. Young people are heavily monitored at home, at school, and in public by a variety of surveillance technologies. Children and teenagers want private spaces for socialization, exploration, and experimentation, away from adult eyes. Posting personal information online is a way for youth to express themselves, connect with peers, increase popularity, and bond with friends and members of peer groups. Subsequently, young people want to be able to restrict information provided online in a nuanced and granular way.

Much popular writing (and some research) discusses young people, online technologies, and privacy in ways that do not reflect the realities of most children and teenagers’ lives. However, this provides rich opportunities for future research in this area. For instance, there are no studies of the impact of surveillance on young people – at school, at home, or in public. Although we have cited several qualitative and ethnographic studies of young people’s privacy practices and attitudes, more work in this area is needed to fully understand similarities and differences in this age group, particularly within age cohorts, across socioeconomic classes, between genders, and so forth. Finally, given that the frequently-cited comparative surveys of young people and adult privacy practices and attitudes are quite old, new research would be invaluable. We look forward to new directions in research in this area.

“How Different Are Young Adults from Older Adults When it Comes to Information Privacy Attitudes & Policy?” from the University of California, Berkeley, describes the results of a broad survey on privacy attitudes.

Conclusion: In policy circles, it has become almost a cliché to claim that young people do not care about privacy. Certainly there are many troubling anecdotes surrounding young individuals’ use of the internet, and of social networking sites in particular. Nevertheless, we found that in large proportions young adults do care about privacy. The data show that they and older adults are more alike on many privacy topics than they are different. We suggest, then, that young-adult Americans have an aspiration for increased privacy even while they participate in an online reality that is optimized to increase their revelation of personal data.

Public policy agendas should therefore not start with the proposition that young adults do not care about privacy and thus do not need regulations and other safeguards. Rather, policy discussions should acknowledge that the current business environment along with other factors sometimes encourages young adults to release personal data in order to enjoy social inclusion even while in their most rational moments they may espouse more conservative norms. Education may be useful. Although many young adults are exposed to educational programs about the internet, the focus of these programs is on personal safety from online predators and cyberbullying with little emphasis on information security and privacy. Young adults certainly are different from older adults when it comes to knowledge of privacy law. They are more likely to believe that the law protects them both online and off. This lack of knowledge in a tempting environment, rather than a cavalier lack of concern regarding privacy, may be an important reason large numbers of them engage with the digital world in a seemingly unconcerned manner.

But education alone is probably not enough for young adults to reach aspirational levels of privacy. They likely need multiple forms of help from various quarters of society, including perhaps the regulatory arena, to cope with the complex online currents that aim to contradict their best privacy instincts.

They’re both worth reading for anyone interested in this topic.

Posted on April 20, 2010 at 1:50 PM

Schneier on "Security, Privacy, and the Generation Gap"

Last month at the RSA Conference, I gave a talk titled “Security, Privacy, and the Generation Gap.” It was pretty good, but it was the first time I gave that talk in front of a large audience — and its newness showed.

Last week, I gave the same talk again, at the CACR Higher Education Security Summit at Indiana University. It was much, much better the second time around, and there’s a video available.

Posted on April 9, 2010 at 12:55 PM

Privacy and Control

In January, Facebook Chief Executive Mark Zuckerberg declared the age of privacy to be over. A month earlier, Google Chief Eric Schmidt expressed a similar sentiment. Add Scott McNealy’s and Larry Ellison’s comments from a few years earlier, and you’ve got a whole lot of tech CEOs proclaiming the death of privacy–especially when it comes to young people.

It’s just not true. People, including the younger generation, still care about privacy. Yes, they’re far more public on the Internet than their parents: writing personal details on Facebook, posting embarrassing photos on Flickr and having intimate conversations on Twitter. But they take steps to protect their privacy and vociferously complain when they feel it has been violated. They’re not technically sophisticated about privacy and make mistakes all the time, but that’s mostly the fault of companies and Web sites that try to manipulate them for financial gain.

To the older generation, privacy is about secrecy. And, as the Supreme Court said, once something is no longer secret, it’s no longer private. But that’s not how privacy works, and it’s not how the younger generation thinks about it. Privacy is about control. When your health records are sold to a pharmaceutical company without your permission; when a social-networking site changes your privacy settings to make what used to be visible only to your friends visible to everyone; when the NSA eavesdrops on everyone’s e-mail conversations–your loss of control over that information is the issue. We may not mind sharing our personal lives and thoughts, but we want to control how, where and with whom. A privacy failure is a control failure.

People’s relationship with privacy is socially complicated. Salience matters: People are more likely to protect their privacy if they’re thinking about it, and less likely to if they’re thinking about something else. Social-networking sites know this, constantly reminding people about how much fun it is to share photos and comments and conversations while downplaying the privacy risks. Some sites go even further, deliberately hiding information about how little control–and privacy–users have over their data. We all give up our privacy when we’re not thinking about it.

Group behavior matters; we’re more likely to expose personal information when our peers are doing it. We object more to losing privacy than we value its return once it’s gone. Even if we don’t have control over our data, an illusion of control reassures us. And we are poor judges of risk. All sorts of academic research backs up these findings.

Here’s the problem: The very companies whose CEOs eulogize privacy make their money by controlling vast amounts of their users’ information. Whether through targeted advertising, cross-selling or simply convincing their users to spend more time on their site and sign up their friends, more information shared in more ways, more publicly means more profits. This means these companies are motivated to continually ratchet down the privacy of their services, while at the same time pronouncing privacy erosions as inevitable and giving users the illusion of control.

You can see these forces in play with Google‘s launch of Buzz. Buzz is a Twitter-like chatting service, and when Google launched it in February, the defaults were set so people would follow the people they corresponded with frequently in Gmail, with the list publicly available. Yes, users could change these options, but–and Google knew this–changing options is hard and most people accept the defaults, especially when they’re trying out something new. People were upset that their previously private e-mail contacts list was suddenly public. A Federal Trade Commission commissioner even threatened penalties. And though Google changed its defaults, resentment remained.

Facebook tried a similar control grab when it changed people’s default privacy settings last December to make them more public. While users could, in theory, keep their previous settings, it took an effort. Many people just wanted to chat with their friends and clicked through the new defaults without realizing it.

Facebook has a history of this sort of thing. In 2006 it introduced News Feeds, which changed the way people viewed information about their friends. There was no true privacy change in that users could not see more information than before; the change was in control–or arguably, just in the illusion of control. Still, there was a large uproar. And Facebook is doing it again; last month, the company announced new privacy changes that will make it easier for it to collect location data on users and sell that data to third parties.

With all this privacy erosion, those CEOs may actually be right–but only because they’re working to kill privacy. On the Internet, our privacy options are limited to the options those companies give us and how easy they are to find. We have Gmail and Facebook accounts because that’s where we socialize these days, and it’s hard–especially for the younger generation–to opt out. As long as privacy isn’t salient, and as long as these companies are allowed to forcibly change social norms by limiting options, people will increasingly get used to less and less privacy. There’s no malice on anyone’s part here; it’s just market forces in action. If we believe privacy is a social good, something necessary for democracy, liberty and human dignity, then we can’t rely on market forces to maintain it. Broad legislation protecting personal privacy by giving people control over their personal data is the only solution.

This essay originally appeared on Forbes.com.

EDITED TO ADD (4/13): Google responds. And another essay on the topic.

Posted on April 6, 2010 at 7:47 AM

Second SHB Workshop Liveblogging (6)

The first session of the morning was “Foundations,” which is kind of a catch-all for a variety of things that didn’t really fit anywhere else. Rachel Greenstadt moderated.

Terence Taylor, International Council for the Life Sciences (suggested video to watch: Darwinian Security; Natural Security), talked about the lessons evolution teaches about living with risk. Successful species didn’t survive by eliminating the risks of their environment; they survived by adaptation. Adaptation isn’t always what you think. For example, you could view the collapse of the Soviet Union as a failure to adapt, but you could also view it as successful adaptation. Risk is good. Risk is essential for the survival of a society, because risk-takers are the drivers of change. In the discussion phase, John Mueller pointed out a key difference between human and biological systems: humans tend to respond dramatically to anomalous events (the anthrax attacks), while biological systems respond to sustained change. And David Livingstone Smith asked how biological adaptation, which affects the reproductive success of an organism’s genes even at the expense of the organism itself, differs from security adaptation. (I recommend the book he edited: Natural Security: A Darwinian Approach to a Dangerous World.)

Andrew Odlyzko, University of Minnesota (suggested reading: Network Neutrality, Search Neutrality, and the Never-Ending Conflict between Efficiency and Fairness in Markets, Economics, Psychology, and Sociology of Security), discussed human-space vs. cyberspace. People cannot build secure systems — we know that — but people also cannot live with secure systems. We require a certain amount of flexibility in our systems. And finally, people don’t need secure systems. We survive with an astounding amount of insecurity in our world. The problem with cyberspace is that it was originally conceived of as separate from the physical world, as something that could correct for the inadequacies of the physical world. Really, the two are intertwined, and human space more often corrects for the inadequacies of cyberspace. Lessons: build messy systems, not clean ones; create a web of ties to other systems; create permanent records.

danah boyd, Microsoft Research (suggested reading: Taken Out of Context — American Teen Sociality in Networked Publics), does ethnographic studies of teens in cyberspace. Teens tend not to lie to their friends in cyberspace, but they lie to the system. From an early age, they’ve been taught that they need to lie online to be safe. Teens regularly share their passwords: with their parents when forced, or with their best friend or significant other. This is a way of demonstrating trust. It’s part of the social protocol for this generation. In general, teens don’t use social media in the same way as adults do. And when they grow up, they won’t use social media in the same way as today’s adults do. Teens view privacy in terms of control, and take their cues about privacy from celebrities and how they use social media. And their sense of privacy is much more nuanced and complicated. In the discussion phase, danah wasn’t sure whether the younger generation would be more or less susceptible to Internet scams than the rest of us — they’re not nearly as technically savvy as we might think they are. "The only thing that saves teenagers is fear of their parents"; teens try to lock their parents out, and lock others out in the process. Socio-economic status matters a lot, in ways that she is still trying to figure out. There are three different types of social networks: personal networks, articulated networks, and behavioral networks; they should not be conflated.

Mark Levine, Lancaster University (suggested reading: The Kindness of Crowds; Intra-group Regulation of Violence: Bystanders and the (De)-escalation of Violence), does social psychology. He argued against the common belief that groups are bad (mob violence, mass hysteria, peer group pressure). He collects data from UK CCTV cameras, searches it for aggressive behavior, and studies when and how bystanders either help escalate or de-escalate the situations. Results: as groups get bigger, there is no increase in anti-social acts and a significant increase in pro-social acts. He has much more analysis and results, too complicated to summarize here. One key finding: when a third party intervenes in an aggressive interaction, it is much more likely to de-escalate. Basically, groups can act against violence. "When it comes to violence (and security), group processes are part of the solution — not part of the problem?"

Jeff MacKie-Mason, University of Michigan (suggested reading: Humans are smart devices, but not programmable; Security when people matter; A Social Mechanism for Supporting Home Computer Security), is an economist: “Security problems are incentive problems.” He discussed motivation, and how to design systems to take motivation into account. Humans are smart devices; they can’t be programmed, but they can be influenced through the sciences of motivational behavior: microeconomics, game theory, social psychology, psychodynamics, and personality psychology. He gave a couple of general examples of how these theories can inform security system design.

Joe Bonneau, Cambridge University, talked about social networks like Facebook, and privacy. People misunderstand why privacy and security are important in social networking sites like Facebook. People underestimate what Facebook really is; it really is a reimplementation of the entire Internet. "Everything on the Internet is becoming social," and that makes security different. Phishing is different; 419-style scams are different. Social context makes some scams easier; social networks are fun, noisy, and unpredictable. "People use social networking systems with their brain turned off." But social context can be used to spot frauds and anomalies, and can be used to establish trust.

Three more sessions to go. (I am enjoying liveblogging the event. It’s helping me focus and pay closer attention.)

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 9:54 AM

The Future of Ephemeral Conversation

When he becomes president, Barack Obama will have to give up his BlackBerry. Aides are concerned that his unofficial conversations would become part of the presidential record, subject to subpoena and eventually made public as part of the country’s historical record.

This reality of the information age might be particularly stark for the president, but it’s no less true for all of us. Conversation used to be ephemeral. Whether face-to-face or by phone, we could be reasonably sure that what we said disappeared as soon as we said it. Organized crime bosses worried about phone taps and room bugs, but that was the exception. Privacy was just assumed.

This has changed. We chat in e-mail, over SMS and IM, and on social networking websites like Facebook, MySpace, and LiveJournal. We blog and we Twitter. These conversations — with friends, lovers, colleagues, members of our cabinet — are not ephemeral; they leave their own electronic trails.

We know this intellectually, but we haven’t truly internalized it. We type on, engrossed in conversation, forgetting we’re being recorded and those recordings might come back to haunt us later.

Oliver North learned this, way back in 1987, when messages he thought he had deleted were saved by the White House PROFS system, and then subpoenaed in the Iran-Contra affair. Bill Gates learned this in 1998 when his conversational e-mails were provided to opposing counsel as part of the antitrust litigation discovery process. Mark Foley learned this in 2006 when his instant messages were saved and made public by the underage men he talked to. Paris Hilton learned this in 2005 when her cell phone account was hacked, and Sarah Palin learned it earlier this year when her Yahoo e-mail account was hacked. Someone in George W. Bush’s administration learned this, and millions of e-mails went mysteriously and conveniently missing.

Ephemeral conversation is dying.

Cardinal Richelieu famously said, “If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged.” When all our ephemeral conversations can be saved for later examination, different rules have to apply. Conversation is not the same thing as correspondence. Words uttered in haste over morning coffee, whether spoken in a coffee shop or thumbed on a BlackBerry, are not official pronouncements. Discussions in a meeting, whether held in a boardroom or a chat room, are not the same as answers at a press conference. And privacy isn’t just about having something to hide; it has enormous value to democracy, liberty, and our basic humanity.

We can’t turn back technology; electronic communications are here to stay and even our voice conversations are threatened. But as technology makes our conversations less ephemeral, we need laws to step in and safeguard ephemeral conversation. We need a comprehensive data privacy law, protecting our data and communications regardless of where they are stored or how they are processed. We need laws forcing companies to keep them private and delete them as soon as they are no longer needed. Laws requiring ISPs to store e-mails and other personal communications are exactly what we don’t need.

Rules pertaining to government need to be different, because of the power differential. Subjecting the president’s communications to eventual public review increases liberty because it reduces the government’s power with respect to the people. Subjecting our communications to government review decreases liberty because it reduces our power with respect to the government. The president and other members of government need some ability to converse ephemerally — just as they’re allowed to have unrecorded meetings and phone calls — but more of their actions need to be subject to public scrutiny.

But laws can only go so far. Law or no law, when something is made public it’s too late. And many of us like having complete records of all our e-mail at our fingertips; it’s like our offline brains.

In the end, this is cultural.

The Internet is the greatest generation gap since rock and roll. We’re now witnessing one aspect of that generation gap: the younger generation chats digitally, and the older generation treats those chats as written correspondence. Until our CEOs blog, our Congressmen Twitter, and our world leaders send each other LOLcats – until we have a Presidential election where both candidates have a complete history on social networking sites from before they were teenagers – we aren’t fully an information age society.

When everyone leaves a public digital trail of their personal thoughts since birth, no one will think twice about it being there. Obama might be on the younger side of the generation gap, but the rules he’s operating under were written by the older side. It will take another generation before society’s tolerance for digital ephemera changes.

This essay previously appeared on The Wall Street Journal website (not the print newspaper), and is an update of something I wrote previously.

Posted on November 24, 2008 at 2:06 PM

Teenagers and Risk Assessment

In an article on auto-asphyxiation, there’s commentary on teens and risk:

But the new debate also coincides with a reassessment of how teenagers think about risk. Conventional wisdom said adolescents often flirted with the edges of danger because they felt invulnerable.

Newer studies have dismissed that notion. They say that most teenagers are quite cool-headed in assessing risk and reward — and that is what sometimes gets them in trouble. Adults, by contrast, are more likely to rely on experience or gut feelings than rational calculation.

Asked whether it would ever make sense to play Russian roulette for a million dollars, for example, most adults immediately say no, said Valerie F. Reyna, a professor of human development and psychology at Cornell University.

But when Professor Reyna asks teenagers the same question in intervention sessions to teach smarter risk-taking behavior, they often stop to calculate or debate, she said — what exactly would the odds be of getting the chamber with the bullet?

“I use the example to try to get them to see that thinking rationally like that doesn’t always lead to rational choices,” she said.

Of course, reality is always more complicated. We can invent fictional scenarios where it makes sense to play that game of Russian roulette. Imagine you have terminal cancer, and that million dollars would make a huge difference to your survivors. You might very well take the risk.
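The arithmetic the teenagers are doing is just an expected-value calculation, and it is easy to see how the answer flips depending on what you plug in. Here is a minimal sketch in Python; the 1-in-6 odds come from the example above, but every dollar figure and life valuation below is hypothetical, chosen only to illustrate the two scenarios.

```python
# A hedged illustration of the expected-value reasoning behind the
# Russian-roulette example. All valuations below are hypothetical.

P_DEATH = 1 / 6      # one bullet in a six-chamber revolver
PRIZE = 1_000_000    # the offered payout for surviving one pull

def expected_value(value_of_remaining_life: float) -> float:
    """Expected dollar value of playing once, given how much the player
    (hypothetically) values the life they stand to lose."""
    return (1 - P_DEATH) * PRIZE - P_DEATH * value_of_remaining_life

# A healthy person who values their remaining decades at, say, $50 million
# gets a strongly negative expected value (about -$7.5 million):
print(expected_value(50_000_000))

# Someone with a terminal illness who weighs their remaining weeks far below
# the money's benefit to their survivors gets a positive one (about +$817,000):
print(expected_value(100_000))
```

This sketch isn't from the article; it only shows why a cool-headed calculation can point in either direction depending on the inputs, which is exactly the point of the terminal-cancer scenario above.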

Posted on March 29, 2007 at 6:48 AM

Changing Generational Notions of Privacy

Interesting article.

And after all, there is another way to look at this shift. Younger people, one could point out, are the only ones for whom it seems to have sunk in that the idea of a truly private life is already an illusion. Every street in New York has a surveillance camera. Each time you swipe your debit card at Duane Reade or use your MetroCard, that transaction is tracked. Your employer owns your e-mails. The NSA owns your phone calls. Your life is being lived in public whether you choose to acknowledge it or not.

So it may be time to consider the possibility that young people who behave as if privacy doesn’t exist are actually the sane people, not the insane ones. For someone like me, who grew up sealing my diary with a literal lock, this may be tough to accept. But under current circumstances, a defiant belief in holding things close to your chest might not be high-minded. It might be an artifact—quaint and naïve, like a determined faith that virginity keeps ladies pure. Or at least that might be true for someone who has grown up “putting themselves out there” and found that the benefits of being transparent make the risks worth it.

Shirky describes this generational shift in terms of pidgin versus Creole. “Do you know that distinction? Pidgin is what gets spoken when people patch things together from different languages, so it serves well enough to communicate. But Creole is what the children speak, the children of pidgin speakers. They impose rules and structure, which makes the Creole language completely coherent and expressive, on par with any language. What we are witnessing is the Creolization of media.”

Posted on March 9, 2007 at 7:28 AM
