Entries Tagged "security awareness"


New Report on Teens, Social Media, and Privacy

Interesting report from the Pew Internet and American Life Project:

Teens are sharing more information about themselves on their social media profiles than they did when we last surveyed in 2006:

  • 91% post a photo of themselves, up from 79% in 2006.
  • 71% post their school name, up from 49%.
  • 71% post the city or town where they live, up from 61%.
  • 53% post their email address, up from 29%.
  • 20% post their cell phone number, up from 2%.

60% of teen Facebook users set their Facebook profiles to private (friends only), and most report high levels of confidence in their ability to manage their settings.

danah boyd points out something interesting in the data:

My favorite finding of Pew’s is that 58% of teens cloak their messages either through inside jokes or other obscure references, with more older teens (62%) engaging in this practice than younger teens (46%)….

While adults are often anxious about shared data that might be used by government agencies, advertisers, or evil older men, teens are much more attentive to those who hold immediate power over them — parents, teachers, college admissions officers, army recruiters, etc. To adults, services like Facebook may seem “private” because you can use privacy tools, but they don’t feel that way to youth who feel like their privacy is invaded on a daily basis. (This, btw, is part of why teens feel like Twitter is more intimate than Facebook. And why you see data like Pew’s that show that teens on Facebook have, on average, 300 friends while, on Twitter, they have 79 friends.) Most teens aren’t worried about strangers; they’re worried about getting in trouble.

Over the last few years, I’ve watched as teens have given up on controlling access to content. It’s too hard, too frustrating, and technology simply can’t fix the power issues. Instead, what they’ve been doing is focusing on controlling access to meaning. A comment might look like it means one thing, when in fact it means something quite different. By cloaking their accessible content, teens reclaim power over those who they know are surveilling them. This practice is still only really emerging en masse, so I was delighted that Pew could put numbers to it. I should note that, as Instagram grows, I’m seeing more and more of this. A picture of a donut may not be about a donut. While adults worry about how teens’ demographic data might be used, teens are becoming much more savvy at finding ways to encode their content and achieve privacy in public.

Posted on May 24, 2013 at 8:40 AM

Risk Reduction Strategies on Social Networking Sites

Two strategies used by teenagers:

Mikalah uses Facebook but when she goes to log out, she deactivates her Facebook account. She knows that this doesn’t delete the account; that’s the point. She knows that when she logs back in, she’ll be able to reactivate the account and have all of her friend connections back. But when she’s not logged in, no one can post messages on her wall or send her messages privately or browse her content. But when she’s logged in, they can do all of that. And she can delete anything that she doesn’t like. Michael Ducker called this practice “super-logoff” when he noticed a group of gay male adults doing the exact same thing.

And:

Shamika doesn’t deactivate her Facebook profile but she does delete every wall message, status update, and Like shortly after it’s posted. She’ll post a status update and leave it there until she’s ready to post the next one or until she’s done with it. Then she’ll delete it from her profile. When she’s done reading a friend’s comment on her page, she’ll delete it. She’ll leave a Like up for a few days for her friends to see and then delete it.

I’ve heard this practice called wall scrubbing.

In any reasonably competitive market economy, sites would offer these as options to better serve their customers. But in the give-it-away user-as-product economy we so often have on the Internet, the social networking sites have a different agenda.

Posted on December 1, 2010 at 1:27 PM

Seat Belt Use and Lessons for Security Awareness

From Lance Spitzner:

In January of this year the National Highway Traffic Safety Administration released a report called “Analyzing the First Years of the Click It or Ticket Mobilizations”… While the report is focused on the use of seat belts, it has fascinating applications to the world of security awareness. The report focuses on 2000–2006, when most states in the United States began campaigns (called Click It or Ticket) promoting and requiring the use of seat belts. Just like security awareness, the goal of the campaign was to change behaviors, specifically to get people to wear their seat belts when driving… The campaigns were very successful, resulting in a 20-23% increase in seat belt use regardless of which statistics they used. The key finding of the report was that enforcement, not money spent on media, was key to results. The states that had the strongest enforcement had the most people using seat belts. The states with the weakest enforcement had the lowest seat belt usage.

[...]

I feel the key lesson here is that not only must an awareness program communicate effectively, but to truly change behaviors, what you communicate has to be enforced. An information security awareness campaign communicates what is enforced (your policies), and in addition it should communicate why. Then follow up that campaign with strong, visible enforcement.

Posted on April 28, 2010 at 7:39 AM

Young People, Privacy, and the Internet

There’s a lot out there on this topic. I’ve already linked to danah boyd’s excellent SXSW talk (and her work in general), my essay on privacy and control, and my talk — “Security, Privacy, and the Generation Gap” — which I’ve given four times in the past two months.

Last week, two new papers were published on the topic.

“Youth, Privacy, and Reputation” is a literature review published by Harvard’s Berkman Center. It’s long, but an excellent summary of what’s out there on the topic:

Conclusions: The prevailing discourse around youth and privacy assumes that young people don’t care about their privacy because they post so much personal information online. The implication is that posting personal information online puts them at risk from marketers, pedophiles, future employers, and so on. Thus, policy and technical solutions are proposed that presume that young people would not put personal information online if they understood the consequences. However, our review of the literature suggests that young people care deeply about privacy, particularly with regard to parents and teachers viewing personal information. Young people are heavily monitored at home, at school, and in public by a variety of surveillance technologies. Children and teenagers want private spaces for socialization, exploration, and experimentation, away from adult eyes. Posting personal information online is a way for youth to express themselves, connect with peers, increase popularity, and bond with friends and members of peer groups. Subsequently, young people want to be able to restrict information provided online in a nuanced and granular way.

Much popular writing (and some research) discusses young people, online technologies, and privacy in ways that do not reflect the realities of most children and teenagers’ lives. However, this provides rich opportunities for future research in this area. For instance, there are no studies of the impact of surveillance on young people — at school, at home, or in public. Although we have cited several qualitative and ethnographic studies of young people’s privacy practices and attitudes, more work in this area is needed to fully understand similarities and differences in this age group, particularly within age cohorts, across socioeconomic classes, between genders, and so forth. Finally, given that the frequently-cited comparative surveys of young people and adult privacy practices and attitudes are quite old, new research would be invaluable. We look forward to new directions in research in this area.

“How Different Are Young Adults from Older Adults When it Comes to Information Privacy Attitudes & Policy?”, from the University of California, Berkeley, describes the results of a broad survey on privacy attitudes.

Conclusion: In policy circles, it has become almost a cliché to claim that young people do not care about privacy. Certainly there are many troubling anecdotes surrounding young individuals’ use of the internet, and of social networking sites in particular. Nevertheless, we found that in large proportions young adults do care about privacy. The data show that they and older adults are more alike on many privacy topics than they are different. We suggest, then, that young-adult Americans have an aspiration for increased privacy even while they participate in an online reality that is optimized to increase their revelation of personal data.

Public policy agendas should therefore not start with the proposition that young adults do not care about privacy and thus do not need regulations and other safeguards. Rather, policy discussions should acknowledge that the current business environment along with other factors sometimes encourages young adults to release personal data in order to enjoy social inclusion even while in their most rational moments they may espouse more conservative norms. Education may be useful. Although many young adults are exposed to educational programs about the internet, the focus of these programs is on personal safety from online predators and cyberbullying with little emphasis on information security and privacy. Young adults certainly are different from older adults when it comes to knowledge of privacy law. They are more likely to believe that the law protects them both online and off. This lack of knowledge in a tempting environment, rather than a cavalier lack of concern regarding privacy, may be an important reason large numbers of them engage with the digital world in a seemingly unconcerned manner.

But education alone is probably not enough for young adults to reach aspirational levels of privacy. They likely need multiple forms of help from various quarters of society, including perhaps the regulatory arena, to cope with the complex online currents that aim to contradict their best privacy instincts.

They’re both worth reading for anyone interested in this topic.

Posted on April 20, 2010 at 1:50 PM

DHS Cybersecurity Awareness Campaign Challenge

This is a little hokey, but better them than the NSA:

The National Cybersecurity Awareness Campaign Challenge Competition is designed to solicit ideas from industry and individuals alike on how best we can clearly and comprehensively discuss cybersecurity with the American public.

Key areas that should be factored into the competition are the following:

  • Teamwork
  • Ability to quantify the distribution method
  • Ability to quantify the receipt of message
  • Solution may under no circumstance create spam
  • Use of Web 2.0 Technology
  • Feedback mechanism
  • List building
  • Privacy protection
  • Repeatability
  • Transparency
  • Message

It should engage the Private Sector and Industry leaders to develop their own campaign strategy and metrics to track how to get a unified cyber security message out to the American public.

Deadline is end of April, if you want to submit something. “Winners of the Challenge will be invited to an event in Washington D.C. in late May or early June.” I wonder what kind of event.

Posted on April 2, 2010 at 6:14 AM

Users Rationally Rejecting Security Advice

This paper, by Cormac Herley at Microsoft Research, sounds like me:

Abstract: It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.
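
Herley’s point is, at bottom, arithmetic. Here’s a rough back-of-the-envelope sketch in Python; every number in it is an illustrative assumption of mine, not a figure from the paper:

```python
# Back-of-the-envelope version of Herley's cost-benefit argument.
# All inputs are illustrative assumptions, not figures from the paper.
users = 200e6            # assumed online adult population
wage_per_hour = 15.0     # assumed value of a user's time, in $/hour
minutes_per_day = 1.0    # time spent scrutinizing URLs, per the abstract
phishing_losses = 60e6   # assumed total annual phishing losses, in $

# Cost of the advice: everyone pays it, every day of the year.
hours_per_year = minutes_per_day / 60 * 365
advice_cost = users * hours_per_year * wage_per_hour

print(f"Cost of following the advice: ${advice_cost:,.0f}/year")
print(f"Upper bound on the benefit:   ${phishing_losses:,.0f}/year")
print(f"Cost exceeds benefit by roughly {advice_cost / phishing_losses:,.0f}x")
```

The exact inputs barely matter: the cost side scales with the entire population every day, while the benefit side is bounded by the annual losses of the small fraction who actually become victims.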


EDITED TO ADD (12/12): Related article on usable security.

Posted on November 24, 2009 at 12:40 PM

Password Advice

Here’s some complicated advice on securing passwords that — I’ll bet — no one follows.

  • DO use a password manager such as those reviewed by Scott Dunn in his Sept. 18, 2008, Insider Tips column. Although Scott focused on free programs, I really like CallPod’s Keeper, a $15 utility that comes in Windows, Mac, and iPhone versions and allows you to keep all your passwords in sync. Find more information about the program and a download link for the 15-day free-trial version on the vendor’s site.

  • DO change passwords frequently. I change mine every six months or whenever I sign in to a site I haven’t visited in a long time. Don’t reuse old passwords. Password managers can assign expiration dates to your passwords and remind you when the passwords are about to expire.
  • DO keep your passwords secret. Putting them into a file on your computer, e-mailing them to others, or writing them on a piece of paper in your desk is tantamount to giving them away. If you must allow someone else access to an account, create a temporary password just for them and then change it back immediately afterward.

    No matter how much you may trust your friends or colleagues, you can’t trust their computers. If they need ongoing access, consider creating a separate account with limited privileges for them to use.

  • DON’T use passwords composed of dictionary words, birthdays, family and pet names, addresses, or any other personal information. Don’t use repeated characters such as 111 or sequences like abc, qwerty, or 123 in any part of your password.
  • DON’T use the same password for different sites. Otherwise, someone who culls your Facebook or Twitter password in a phishing exploit could, for example, access your bank account.
  • DON’T allow your computer to sign you in automatically on boot-up, and don’t use any automatic e-mail, chat, or browser sign-ins. Avoid using the same Windows sign-in password on two different computers.

  • DON’T use the “remember me” or automatic sign-in option available on many Web sites. Keep sign-ins under the control of your password manager instead.

  • DON’T enter passwords on a computer you don’t control — such as a friend’s computer — because you don’t know what spyware or keyloggers might be on that machine.

  • DON’T access password-protected accounts over open Wi-Fi networks — or any other network you don’t trust — unless the site is secured via https. Use a VPN if you travel a lot. (See Ian “Gizmo” Richards’ Dec. 11, 2008, Best Software column, “Connect safely over open Wi-Fi networks,” for Wi-Fi security tips.)
  • DON’T enter a password or even your account name in any Web page you access via an e-mail link. These are most likely phishing scams. Instead, enter the normal URL for that site directly into your browser, and proceed to the page in question from there.

I regularly break seven of those rules. How about you? (Here’s my advice on choosing secure passwords.)
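
The one rule on that list that a machine can follow better than you can is password generation: let the password manager pick something random. Here’s a minimal sketch using only Python’s standard secrets module; the length and character set are arbitrary illustrative choices, not a recommendation from the column above:

```python
# Minimal password-generation sketch using only Python's standard library.
# The length and character set are arbitrary illustrative choices.
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site, per the don't-reuse rule above.
for site in ("example-bank", "example-mail", "example-social"):
    print(site, generate_password())
```

The reason to use secrets rather than random is that it draws from the operating system’s cryptographically secure generator, which is what you want for anything an attacker might try to guess.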

Posted on August 10, 2009 at 6:57 AM

Risk Intuition

People have a natural intuition about risk, and in many ways it’s very good. It fails at times due to a variety of cognitive biases, but for normal risks that people regularly encounter, it works surprisingly well: often better than we give it credit for.

This struck me as I listened to yet another conference presenter complaining about security awareness training. He was talking about the difficulty of getting employees at his company to actually follow his security policies: encrypting data on memory sticks, not sharing passwords, not logging in from untrusted wireless networks. “We have to make people understand the risks,” he said.

It seems to me that his co-workers understand the risks better than he does. They know what the real risks are at work, and that they all revolve around not getting the job done. Those risks are real and tangible, and employees feel them all the time. The risks of not following security procedures are much less real. Maybe the employee will get caught, but probably not. And even if he does get caught, the penalties aren’t serious.

Given this accurate risk analysis, any rational employee will regularly circumvent security to get his or her job done. That’s what the company rewards, and that’s what the company actually wants.

“Fire someone who breaks security procedure, quickly and publicly,” I suggested to the presenter. “That’ll increase security awareness faster than any of your posters or lectures or newsletters.” If the risks are real, people will get it.

You see the same sort of risk intuition on motorways. People are less careful about posted speed limits than they are about the actual speeds police issue tickets for. It’s also true on the streets: people respond to real crime rates, not public officials proclaiming that a neighbourhood is safe.

The warning stickers on ladders might make you think the things are considerably riskier than they are, but people have a good intuition about ladders and ignore most of the warnings. (This isn’t to say that some people don’t do stupid things around ladders, but for the most part they’re safe. The warnings are more about the risk of lawsuits to ladder manufacturers than risks to people who climb ladders.)

As a species, we are naturally tuned in to the risks inherent in our environment. Throughout our evolution, our survival depended on making reasonably accurate risk management decisions intuitively, and we’re so good at it, we don’t even realise we’re doing it.

Parents know this. Children have surprisingly perceptive risk intuition. They know when parents are serious about a threat and when their threats are empty. And they respond to the real risks of parental punishment, not the inflated risks based on parental rhetoric. Again, awareness training lectures don’t work; there have to be real consequences.

It gets even weirder. The University College London professor John Adams popularised the metaphor of a mental risk thermostat. We tend to seek some natural level of risk, and if something becomes less risky, we tend to make it more risky. Motorcycle riders who wear helmets drive faster than riders who don’t.

Our risk thermostats aren’t perfect (that newly helmeted motorcycle rider will still decrease his overall risk) and will tend to remain within the same domain (he might drive faster, but he won’t increase his risk by taking up smoking), but in general, people demonstrate an innate and finely tuned ability to understand and respond to risks.

Of course, our risk intuition fails spectacularly and often, with regard to rare risks, unknown risks, voluntary risks, and so on. But when it comes to the common risks we face every day—the kinds of risks our evolutionary survival depended on—we’re pretty good.

So whenever you see someone in a situation who you think doesn’t understand the risks, stop first and make sure you understand the risks. You might be surprised.

This essay previously appeared in The Guardian.

EDITED TO ADD (8/12): Commentary on risk thermostat.

Posted on August 6, 2009 at 5:08 AM
