Entries Tagged "Facebook"


Hacking HTTP Status Codes

One website can learn if you’re logged into other websites.

When you visit my website, I can automatically and silently determine if you’re logged into Facebook, Twitter, Gmail and Digg. There are almost certainly thousands of other sites with this issue too, but I picked a few vulnerable well known ones to get your attention. You may not care that I can tell you’re logged into Gmail, but would you care if I could tell you’re logged into one or more porn or warez sites? Perhaps http://oppressive-regime.example.org/ would like to collect a list of their users who are logged into http://controversial-website.example.com/?
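The trick behind this relies on how browsers load cross-site resources: an attacker's page embeds a login-protected resource (say, an image) from the target site, the victim's browser attaches its cookies, and the load succeeds or fails depending on login state. As a rough sketch of the inference logic (the endpoint behavior and status codes here are illustrative assumptions, not any specific site's actual responses):

```python
# Hypothetical sketch of the probe logic behind this attack. A real
# attacker's page never sees status codes directly; it infers them from
# whether an <img> or <script> tag pointing at a login-protected
# resource fires onload (load succeeded) or onerror (load failed,
# e.g. because of a redirect to a login page).

def classify_login_state(status_code: int) -> str:
    """Interpret the status code a login-protected resource returns."""
    if status_code == 200:
        return "logged in"       # resource served: the session cookie was valid
    if status_code in (302, 303, 307):
        return "logged out"      # redirected to a login page instead
    return "unknown"
```

The attacker just needs one URL per target site that behaves differently for logged-in and logged-out visitors; the status codes themselves leak the answer.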

Posted on February 2, 2011 at 2:26 PM

Whitelisting vs. Blacklisting

The whitelist/blacklist debate is far older than computers, and it’s instructive to recall what works where. Physical security works generally on a whitelist model: if you have a key, you can open the door; if you know the combination, you can open the lock. We do it this way not because it’s easier—although it is generally much easier to make a list of people who should be allowed through your office door than a list of people who shouldn’t—but because it’s a security system that can be implemented automatically, without people.

To find blacklists in the real world, you have to start looking at environments where almost everyone is allowed. Casinos are a good example: everyone can come in and gamble except those few specifically listed in the casino’s black book or the more general Griffin book. Some retail stores have the same model—a Google search on “banned from Wal-Mart” results in 1.5 million hits, including Megan Fox—although you have to wonder about enforcement. Does Wal-Mart have the same sort of security manpower as casinos?

National borders certainly have that kind of manpower, and Marcus is correct to point to passport control as a system with both a whitelist and a blacklist. There are people who are allowed in with minimal fuss, people who are summarily arrested with as minimal a fuss as possible, and people in the middle who receive some amount of fussing. Airport security works the same way: the no-fly list is a blacklist, and people with redress numbers are on the whitelist.

Computer networks share characteristics with your office and Wal-Mart: sometimes you only want a few people to have access, and sometimes you want almost everybody to have access. And you see whitelists and blacklists at work in computer networks. Access control is whitelisting: if you know the password, or have the token or biometric, you get access. Antivirus is blacklisting: everything coming into your computer from the Internet is assumed to be safe unless it appears on a list of bad stuff. On computers, unlike the real world, it takes no extra manpower to implement a blacklist—the software can do it largely for free.
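The two models reduce to a single inverted default. A minimal sketch, with invented names and lists purely for illustration:

```python
# Whitelist vs. blacklist as code: the only difference is the default.

ALLOWED_USERS = {"alice", "bob"}          # whitelist: deny by default
KNOWN_MALWARE = {"evil.exe", "worm.bin"}  # blacklist: allow by default

def door_allows(user: str) -> bool:
    """Whitelist model (the office door): only listed users get in."""
    return user in ALLOWED_USERS

def antivirus_allows(filename: str) -> bool:
    """Blacklist model (antivirus): everything passes unless listed as bad."""
    return filename not in KNOWN_MALWARE
```

Which default is right depends entirely on the environment: the office door fails safe by denying strangers, while the antivirus fails open by admitting anything it hasn't seen before.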

Traditionally, execution control has been based on a blacklist. Computers are so complicated and applications so varied that it just doesn’t make sense to limit users to a specific set of applications. The exception is constrained environments, such as computers in hotel lobbies and airline club lounges. On those, you’re often limited to an Internet browser and a few common business applications.

Lately, we’re seeing more whitelisting on closed computing platforms. The iPhone works on a whitelist: if you want a program to run on the phone, you need to get it approved by Apple and put in the iPhone store. Your Wii game machine works the same way. This is done primarily because the manufacturers want to control the economic environment, but it’s being sold partly as a security measure. But in this case, more security equals less liberty; do you really want your computing options limited by Apple, Microsoft, Google, Facebook, or whoever controls the particular system you’re using?

Turns out that many people do. Apple’s control over its apps hasn’t seemed to hurt iPhone sales, and Facebook’s control over its apps hasn’t seemed to affect Facebook’s user numbers. And honestly, quite a few of us would have had an easier time over the Christmas holidays if we could have implemented a whitelist on the computers of our less-technical relatives.

For these two reasons, I think the whitelist model will continue to make inroads into our general purpose computers. And those of us who want control over our own environments will fight back—perhaps with a whitelist we maintain personally, but more probably with a blacklist.

This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. You can read Marcus’s half there as well.

Posted on January 28, 2011 at 5:02 AM

Risk Reduction Strategies on Social Networking Sites

By two teenagers:

Mikalah uses Facebook but when she goes to log out, she deactivates her Facebook account. She knows that this doesn’t delete the account—that’s the point. She knows that when she logs back in, she’ll be able to reactivate the account and have all of her friend connections back. But when she’s not logged in, no one can post messages on her wall or send her messages privately or browse her content. But when she’s logged in, they can do all of that. And she can delete anything that she doesn’t like. Michael Ducker calls this practice “super-logoff” when he noticed a group of gay male adults doing the exact same thing.

And:

Shamika doesn’t deactivate her Facebook profile but she does delete every wall message, status update, and Like shortly after it’s posted. She’ll post a status update and leave it there until she’s ready to post the next one or until she’s done with it. Then she’ll delete it from her profile. When she’s done reading a friend’s comment on her page, she’ll delete it. She’ll leave a Like up for a few days for her friends to see and then delete it.

I’ve heard this practice called wall scrubbing.

In any reasonably competitive market economy, sites would offer these as options to better serve their customers. But in the give-it-away user-as-product economy we so often have on the Internet, the social networking sites have a different agenda.

Posted on December 1, 2010 at 1:27 PM

Firesheep

Firesheep is a new Firefox plugin that makes it easy for you to hijack other people’s social network connections. Basically, Facebook authenticates clients with cookies. If someone is using a public WiFi connection, the cookies are sniffable. Firesheep uses WinPcap to capture and display the authentication information for accounts it sees, allowing you to hijack the connection.

Slides from the Toorcon talk.

Protect yourself by forcing the authentication to happen over TLS. Or stop logging in to Facebook from public networks.

EDITED TO ADD (10/27): To protect against this attack, you have to encrypt the entire session—not just the initial authentication.
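On the server side, the countermeasure comes down to two things: mark the session cookie so it is never sent over plaintext HTTP, and tell browsers to use TLS for the whole session. A rough sketch of the relevant headers (the values here are illustrative, not any particular site's configuration):

```python
# Sketch of the server-side defenses against cookie sniffing.

def session_cookie_header(session_id: str) -> str:
    # Secure: the cookie is only ever sent over TLS, so a sniffer on
    # open WiFi never sees it. HttpOnly: page JavaScript can't read it.
    return f"Set-Cookie: session={session_id}; Secure; HttpOnly"

def hsts_header(max_age: int = 31536000) -> str:
    # HTTP Strict Transport Security: instructs the browser to use
    # HTTPS for every request to this site for the next max_age seconds.
    return f"Strict-Transport-Security: max-age={max_age}"
```

Note that the Secure flag alone is not enough if the site still serves any part of the session over plain HTTP; the whole session has to be encrypted, which is what the HSTS header enforces.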

EDITED TO ADD (11/4): Foiling Firesheep.

EDITED TO ADD (11/10): More info.

EDITED TO ADD (11/17): Blacksheep detects Firesheep.

Posted on October 27, 2010 at 7:53 AM

Monitoring Employees' Online Behavior

Not their online behavior at work, but their online behavior in life.

Using automation software that slogs through Facebook, Twitter, Flickr, YouTube, LinkedIn, blogs, and “thousands of other sources,” the company develops a report on the “real you”—not the carefully crafted you in your resume. The service is called Social Intelligence Hiring. The company promises a 48-hour turn-around.

[…]

The reports feature a visual snapshot of what kind of person you are, evaluating you in categories like “Poor Judgment,” “Gangs,” “Drugs and Drug Lingo” and “Demonstrating Potentially Violent Behavior.” The company mines for rich nuggets of raw sewage in the form of racy photos, unguarded commentary about drugs and alcohol and much more.

The company also offers a separate Social Intelligence Monitoring service to watch the personal activity of existing employees on an ongoing basis…. The service provides real-time notification alerts, so presumably the moment your old college buddy tags an old photo of you naked, drunk and armed on Facebook, the boss gets a text message with a link.

This is being sold using fear:

…company spokespeople emphasize liability. What happens if one of your employees freaks out, comes to work and starts threatening coworkers with a samurai sword? You’ll be held responsible because all of the signs of such behavior were clear for all to see on public Facebook pages. That’s why you should scan every prospective hire and run continued scans on every existing employee.

In other words, they make the case that now that people use social networks, companies will be expected (by shareholders, etc.) to monitor those services and protect the company from lawsuits, damage to reputation, and other harm.

Posted on October 4, 2010 at 6:31 AM

Social Steganography

From danah boyd:

Carmen is engaging in social steganography. She’s hiding information in plain sight, creating a message that can be read in one way by those who aren’t in the know and read differently by those who are. She’s communicating to different audiences simultaneously, relying on specific cultural awareness to provide the right interpretive lens. While she’s focused primarily on separating her mother from her friends, her message is also meaningless to broader audiences who have no idea that she had just broken up with her boyfriend.

Posted on August 25, 2010 at 6:20 AM

Late Teens and Facebook Privacy

“Facebook Privacy Settings: Who Cares?” by danah boyd and Eszter Hargittai.

Abstract: With over 500 million users, the decisions that Facebook makes about its privacy settings have the potential to influence many people. While its changes in this domain have often prompted privacy advocates and news media to critique the company, Facebook has continued to attract more users to its service. This raises a question about whether or not Facebook’s changes in privacy approaches matter and, if so, to whom. This paper examines the attitudes and practices of a cohort of 18- and 19-year-olds surveyed in 2009 and again in 2010 about Facebook’s privacy settings. Our results challenge widespread assumptions that youth do not care about and are not engaged with navigating privacy. We find that, while not universal, modifications to privacy settings have increased during a year in which Facebook’s approach to privacy was hotly contested. We also find that both frequency and type of Facebook use as well as Internet skill are correlated with making modifications to privacy settings. In contrast, we observe few gender differences in how young adults approach their Facebook privacy settings, which is notable given that gender differences exist in so many other domains online. We discuss the possible reasons for our findings and their implications.

Posted on August 11, 2010 at 6:00 AM

Reading Me

The number of different ways to read my essays, commentaries, and links has grown recently. Here’s the rundown:

I think that about covers it for useful distribution formats right now.

EDITED TO ADD (6/20): One more; there’s a Crypto-Gram podcast.

Posted on June 15, 2010 at 1:05 PM

Applications Disclosing Required Authority

This is an interesting piece of research evaluating different user interface designs by which applications disclose to users what sort of authority they need to install themselves. Given all the recent concerns about third-party access to user data on social networking sites (particularly Facebook), this is particularly timely research.

We have provided evidence of a growing trend among application platforms to disclose, via application installation consent dialogs, the resources and actions that applications will be authorized to perform if installed. To improve the design of these disclosures, we have taken an important first step of testing key design elements. We hope these findings will assist future researchers in creating experiences that leave users feeling better informed and more confident in their installation decisions.

Within the admittedly constrained context of our laboratory study, disclosure design had surprisingly little effect on participants’ ability to absorb and search information. However, the great majority of participants preferred designs that used images or icons to represent resources. This great majority of participants also disliked designs that used paragraphs, the central design element of Facebook’s disclosures, and outlines, the central design element of Android’s disclosures.

Posted on May 21, 2010 at 1:17 PM
