A Revised Taxonomy of Social Networking Data

Lately I’ve been reading about user security and privacy—control, really—on social networking sites. The issues are hard and the solutions harder, but I’m seeing a lot of confusion in even forming the questions. Social networking sites deal with several different types of user data, and it’s essential to separate them.

Below is my taxonomy of social networking data, which I first presented at the Internet Governance Forum meeting last November, and again—revised—at an OECD workshop on the role of Internet intermediaries in June.

  • Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
  • Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  • Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it—another user does.
  • Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.
  • Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
  • Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.
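
The distinctions in this taxonomy mostly come down to who creates each kind of data and who controls it afterward. As a rough illustration (the field names and labels are mine, not anything a real site exposes), the six types could be modeled like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataType:
    """One category of social networking data."""
    name: str
    created_by: str     # who produces the data
    controlled_by: str  # who can edit or delete it after the fact

TAXONOMY = [
    DataType("service",    created_by="you",          controlled_by="the site"),
    DataType("disclosed",  created_by="you",          controlled_by="you"),
    DataType("entrusted",  created_by="you",          controlled_by="another user"),
    DataType("incidental", created_by="another user", controlled_by="another user"),
    DataType("behavioral", created_by="the site",     controlled_by="the site"),
    DataType("derived",    created_by="the site",     controlled_by="the site"),
]
```

Laid out this way, the pattern is visible at a glance: your rights are weakest exactly where both columns point away from you.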

There are other ways to look at user data. Some of it you give to the social networking site in confidence, expecting the site to safeguard the data. Some of it you publish openly and others use it to find you. And some of it you share only within an enumerated circle of other users. At the receiving end, social networking sites can monetize all of it: generally by selling targeted advertising.

Different social networking sites give users different rights for each data type. Some are always private, some can be made private, and some are always public. Some can be edited or deleted—I know one site that allows entrusted data to be edited or deleted within a 24-hour period—and some cannot. Some can be viewed and some cannot.

It’s also clear that users should have different rights with respect to each data type. We should be allowed to export, change, and delete disclosed data, even if the social networking sites don’t want us to. It’s less clear what rights we have for entrusted data—and far less clear for incidental data. If you post pictures from a party with me in them, can I demand you remove those pictures—or at least blur out my face? (Go look up the conviction of three Google executives in an Italian court over a YouTube video.) And what about behavioral data? It’s frequently a critical part of a social networking site’s business model. We often don’t mind if a site uses it to target advertisements, but we’re less sanguine when it sells that data to third parties.

As we continue our conversations about what sorts of fundamental rights people have with respect to their data, and more countries contemplate regulation on social networking sites and user data, it will be important to keep this taxonomy in mind. The sorts of things that would be suitable for one type of data might be completely unworkable and inappropriate for another.

This essay previously appeared in IEEE Security & Privacy.

EDITED TO ADD: This post has been translated into Portuguese.

Posted on August 10, 2010 at 6:51 AM

In Praise of Security Theater

While visiting some friends and their new baby in the hospital last week, I noticed an interesting bit of security. To prevent infant abduction, all babies had RFID tags attached to their ankles by a bracelet. There are sensors on the doors to the maternity ward, and if a baby passes through, an alarm goes off.

Infant abduction is rare, but still a risk. In the last 22 years, about 233 such abductions have occurred in the United States. About 4 million babies are born each year, which means that a baby has a 1-in-375,000 chance of being abducted. Compare this with the infant mortality rate in the U.S.—one in 145—and it becomes clear where the real risks are.
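
The arithmetic behind that figure is worth making explicit (a sketch; the only inputs are the numbers quoted above):

```python
# Back-of-the-envelope risk arithmetic from the essay's figures.
abductions = 233          # reported U.S. infant abductions over 22 years
years = 22
births_per_year = 4_000_000

abductions_per_year = abductions / years        # ~10.6 per year
odds = births_per_year / abductions_per_year    # 1-in-N chance per birth
print(round(odds))                              # 377682, i.e. roughly 1 in 375,000

# The essay's infant-mortality figure, for comparison: 1 in 145.
```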

And the 1-in-375,000 chance is not today’s risk. Infant abduction rates have plummeted in recent years, mostly due to education programs at hospitals.

So why are hospitals bothering with RFID bracelets? I think they’re primarily to reassure the mothers. Many times during my friends’ stay at the hospital the doctors had to take the baby away for this or that test. Millions of years of evolution have forged a strong bond between new parents and new baby; the RFID bracelets are a low-cost way to ensure that parents are more relaxed while their baby is out of their sight.

Security is both a reality and a feeling. The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures. We know the infant abduction rates and how well the bracelets reduce those rates. We also know the cost of the bracelets, and can thus calculate whether they’re a cost-effective security measure or not. But security is also a feeling, based on individual psychological reactions to both the risks and the countermeasures. And the two things are different: You can be secure even though you don’t feel secure, and you can feel secure even though you’re not really secure.

The RFID bracelets are what I’ve come to call security theater: security primarily designed to make you feel more secure. I’ve regularly maligned security theater as a waste, but it’s not always, and not entirely, so.

It’s only a waste if you consider the reality of security exclusively. There are times when people feel less secure than they actually are. In those cases—like with mothers and the threat of baby abduction—a palliative countermeasure that primarily increases the feeling of security is just what the doctor ordered.

Tamper-resistant packaging for over-the-counter drugs started to appear in the 1980s, in response to some highly publicized poisonings. As a countermeasure, it’s largely security theater. It’s easy to poison many foods and over-the-counter medicines right through the seal—with a syringe, for example—or to open and replace the seal well enough that an unwary consumer won’t detect it. But in the 1980s, there was a widespread fear of random poisonings in over-the-counter medicines, and tamper-resistant packaging brought people’s perceptions of the risk more in line with the actual risk: minimal.

Much of the post-9/11 security can be explained by this as well. I’ve often talked about the National Guard troops in airports right after the terrorist attacks, and the fact that they had no bullets in their guns. As a security countermeasure, it made little sense for them to be there. They didn’t have the training necessary to improve security at the checkpoints, or even to be another useful pair of eyes. But to reassure a jittery public that it’s OK to fly, it was probably the right thing to do.

Security theater also addresses the ancillary risk of lawsuits. Lawsuits are ultimately decided by juries, or settled because of the threat of jury trial, and juries are going to decide cases based on their feelings as well as the facts. It’s not enough for a hospital to point to infant abduction rates and rightly claim that RFID bracelets aren’t worth it; the other side is going to put a weeping mother on the stand and make an emotional argument. In these cases, security theater provides real security against the legal threat.

Like real security, security theater has a cost. It can cost money, time, concentration, freedoms and so on. It can come at the cost of reducing the things we can do. Most of the time security theater is a bad trade-off, because the costs far outweigh the benefits. But there are instances when a little bit of security theater makes sense.

We make smart security trade-offs—and by this I mean trade-offs for genuine security—when our feeling of security closely matches the reality. When the two are out of alignment, we get security wrong. Security theater is no substitute for security reality, but, used correctly, security theater can be a way of raising our feeling of security so that it more closely matches the reality of security. It makes us feel more secure handing our babies off to doctors and nurses, buying over-the-counter medicines, and flying on airplanes—closer to how secure we should feel if we had all the facts and did the math correctly.

Of course, too much security theater and our feeling of security becomes greater than the reality, which is also bad. And others—politicians, corporations and so on—can use security theater to make us feel more secure without doing the hard work of actually making us secure. That’s the usual way security theater is used, and why I so often malign it.

But to write off security theater completely is to ignore the feeling of security. And as long as people are involved with security trade-offs, that’s never going to work.

This essay appeared on Wired.com, and is dedicated to my new godson, Nicholas Quillen Perry.

EDITED TO ADD: This essay has been translated into Portuguese.

Posted on January 25, 2007 at 5:50 AM

An Impressive Car Theft

The armored Mercedes belonging to the CEO of DaimlerChrysler has been stolen:

The black company car, which is worth about 800,000 euros ($1 million), disappeared on the night of Oct. 26, police spokesman Klaus-Peter Arand said in a telephone interview. The limousine, which sports a 12-cylinder engine and is equipped with a broadcasting device to help retrieve the car, hasn’t yet been found, the police said.

There are two types of thieves, whether they be car thieves or otherwise. First, there are the thieves that want a car, any car. And second, there are the thieves that want one particular car. Against the first type, any security measure that makes your car harder to steal than the car next to it is good enough. Against the second type, even a sophisticated GPS tracking system might not be enough.

Posted on December 1, 2004 at 11:01 AM
