Entries Tagged "de-anonymization"

Flash Cookies

Flash has the equivalent of cookies, and they’re hard to delete:

Unlike traditional browser cookies, Flash cookies are relatively unknown to web users, and they are not controlled through the cookie privacy controls in a browser. That means even if a user thinks they have cleared their computer of tracking objects, they most likely have not.

What’s even sneakier?

Several services even use the surreptitious data storage to reinstate traditional cookies that a user deleted, which is called ‘re-spawning’ in homage to video games where zombies come back to life even after being “killed,” the report found. So even if a user gets rid of a website’s tracking cookie, that cookie’s unique ID will be assigned back to a new cookie again using the Flash data as the “backup.”
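To make the mechanism concrete, here's a rough Python sketch of the logic a re-spawning tracker effectively implements. The storage names and IDs are hypothetical stand-ins; real trackers do this with JavaScript and ActionScript inside the browser, not server-side Python.

```python
import uuid

# Two independent stores the tracker controls: the ordinary browser
# cookie jar (easy for the user to clear) and the Flash Local Shared
# Object store (not touched by the browser's cookie controls).
browser_cookies = {}   # hypothetical stand-in for document.cookie
flash_lso_store = {}   # hypothetical stand-in for a Flash LSO

def get_tracking_id(site: str) -> str:
    """Return a stable tracking ID for `site`, re-spawning it if needed."""
    cookie_id = browser_cookies.get(site)
    backup_id = flash_lso_store.get(site)

    if cookie_id is None and backup_id is not None:
        # The user deleted the browser cookie, but the Flash "backup"
        # survived -- reinstate the old ID. This is the "re-spawning."
        browser_cookies[site] = backup_id
        return backup_id

    if cookie_id is None and backup_id is None:
        # First visit: mint a new ID and write it to both stores.
        new_id = str(uuid.uuid4())
        browser_cookies[site] = new_id
        flash_lso_store[site] = new_id
        return new_id

    # Keep the two copies in sync so either one can restore the other.
    flash_lso_store[site] = cookie_id
    return cookie_id

# Demo: clearing the cookie jar does not change the tracking ID.
first = get_tracking_id("example-ads.com")
browser_cookies.clear()               # user "deletes all cookies"
assert get_tracking_id("example-ads.com") == first
```

The point is that the browser's cookie controls only clear one of the two copies, so the old ID simply comes back.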

Posted on August 17, 2009 at 6:36 AM

On the Anonymity of Home/Work Location Pairs

Interesting:

Philippe Golle and Kurt Partridge of PARC have a cute paper on the anonymity of geo-location data. They analyze data from the U.S. Census and show that for the average person, knowing their approximate home and work locations—to a block level—identifies them uniquely.

Even if we look at the much coarser granularity of a census tract—tracts correspond roughly to ZIP codes; there are on average 1,500 people per census tract—for the average person, there are only around 20 other people who share the same home and work location. There’s more: 5% of people are uniquely identified by their home and work locations even if it is known only at the census tract level. One reason for this is that people who live and work in very different areas (say, different counties) are much more easily identifiable, as one might expect.

“On the Anonymity of Home/Work Location Pairs,” by Philippe Golle and Kurt Partridge:

Abstract:

Many applications benefit from user location data, but location data raises privacy concerns. Anonymization can protect privacy, but identities can sometimes be inferred from supposedly anonymous data. This paper studies a new attack on the anonymity of location data. We show that if the approximate locations of an individual’s home and workplace can both be deduced from a location trace, then the median size of the individual’s anonymity set in the U.S. working population is 1, 21 and 34,980, for locations known at the granularity of a census block, census tract and county respectively. The location data of people who live and work in different regions can be re-identified even more easily. Our results show that the threat of re-identification for location data is much greater when the individual’s home and work locations can both be deduced from the data. To preserve anonymity, we offer guidance for obfuscating location traces before they are disclosed.
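Here is a minimal Python sketch of the anonymity-set measurement the paper describes, run on synthetic data. The population size, number of locations and uniform distribution below are made up for illustration; the paper works from actual census figures, where home/work pairs are far from uniformly distributed.

```python
import random
from collections import Counter
from statistics import median

random.seed(0)

# Synthetic working population: each person gets a home and a work
# location at some fixed granularity (think "census tracts"). These
# numbers are illustrative only, not the census figures in the paper.
num_people = 100_000
num_locations = 2_000

pairs = [(random.randrange(num_locations), random.randrange(num_locations))
         for _ in range(num_people)]

# A person's anonymity set is everyone who shares the same
# (home, work) pair; its size measures how well the pair hides them.
counts = Counter(pairs)
anonymity_set_sizes = [counts[p] for p in pairs]

print("median anonymity set size:", median(anonymity_set_sizes))
print("fraction uniquely identified:",
      sum(1 for s in anonymity_set_sizes if s == 1) / num_people)
```

Coarsening the locations (fewer, larger regions) grows the anonymity sets; refining them to block level shrinks the median toward 1, which is exactly the paper's result.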

This is all very troubling, given the number of location-based services springing up and the number of databases that are collecting location data.

Posted on May 21, 2009 at 6:15 AM

Identifying People using Anonymous Social Networking Data

Interesting:

Computer scientists Arvind Narayanan and Dr Vitaly Shmatikov, from the University of Texas at Austin, developed the algorithm which turned the anonymous data back into names and addresses.

The data sets are usually stripped of personally identifiable information, such as names, before they are sold to marketing companies or researchers keen to plumb them for useful information.

Before now, it was thought sufficient to remove this data to make sure that the true identities of subjects could not be reconstructed.

The algorithm developed by the pair looks at relationships between all the members of a social network—not just the immediate friends that members of these sites connect to.

Social graphs from Twitter, Flickr and Live Journal were used in the research.

The pair found that one third of those who are on both Flickr and Twitter can be identified from the completely anonymous Twitter graph. This is despite the fact that the overlap of members between the two services is thought to be about 15%.

The researchers suggest that as social network sites become more heavily used, then people will find it increasingly difficult to maintain a veil of anonymity.

More details:

In “De-anonymizing social networks,” Narayanan and Shmatikov take an anonymous graph of the social relationships established through Twitter and find that they can actually identify many Twitter accounts based on an entirely different data source—in this case, Flickr.

One-third of users with accounts on both services could be identified on Twitter based on their Flickr connections, even when the Twitter social graph being used was completely anonymous. The point, say the authors, is that “anonymity is not sufficient for privacy when dealing with social networks,” since their scheme relies only on a social network’s topology to make the identification.
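Here is a heavily simplified Python sketch of the topology-only matching idea: start from a few "seed" users known in both networks, then repeatedly map nodes whose already-mapped neighbors overlap the most. The actual Narayanan-Shmatikov algorithm uses more careful scoring and an "eccentricity" test to reject weak matches; this sketch only illustrates that nothing beyond graph structure is needed.

```python
# Toy seed-and-extend propagation. Graphs are dicts: node -> set of neighbors.
# `anon` is the anonymized graph (e.g. Twitter), `aux` is the named auxiliary
# graph (e.g. Flickr), and `mapping` (anon node -> aux node) starts with a
# handful of known seed pairs.

def propagate(anon, aux, mapping):
    changed = True
    while changed:
        changed = False
        for a_node in anon:
            if a_node in mapping:
                continue
            # Neighbors of a_node already mapped into the auxiliary graph.
            mapped_neighbors = {mapping[n] for n in anon[a_node] if n in mapping}
            if not mapped_neighbors:
                continue
            used = set(mapping.values())
            # Score unused aux nodes by how many mapped neighbors they share.
            scores = {x: len(aux[x] & mapped_neighbors)
                      for x in aux if x not in used}
            if not scores:
                continue
            best = max(scores, key=scores.get)
            if scores[best] >= 2:          # crude confidence threshold
                mapping[a_node] = best
                changed = True
    return mapping

# Tiny example: two structurally identical graphs and two seed pairs.
anon = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
aux = {"alice": {"bob", "carol"}, "bob": {"alice", "carol", "dave"},
       "carol": {"alice", "bob", "dave"}, "dave": {"bob", "carol"}}
print(propagate(anon, aux, {1: "alice", 2: "bob"}))
# -> {1: 'alice', 2: 'bob', 3: 'carol', 4: 'dave'}
```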

The issue is of more than academic interest, as social networks now routinely release such anonymous social graphs to advertisers and third-party apps, and government and academic researchers ask for such data to conduct research. But the data isn’t nearly as “anonymous” as those releasing it appear to think it is, and it can easily be cross-referenced to other data sets to expose user identities.

It’s not just about Twitter, either. Twitter was a proof of concept, but the idea extends to any sort of social network: phone call records, healthcare records, academic sociological datasets, etc.

Here’s the paper.

Posted on April 6, 2009 at 6:51 AM

Fingerprinting Paper

Interesting paper:

Fingerprinting Blank Paper Using Commodity Scanners

Will Clarkson, Tim Weyrich, Adam Finkelstein, Nadia Heninger, Alex Halderman, and Edward W. Felten

Abstract: This paper presents a novel technique for authenticating physical documents based on random, naturally occurring imperfections in paper texture. We introduce a new method for measuring the three-dimensional surface of a page using only a commodity scanner and without modifying the document in any way. From this physical feature, we generate a concise fingerprint that uniquely identifies the document. Our technique is secure against counterfeiting and robust to harsh handling; it can be used even before any content is printed on a page. It has a wide range of applications, including detecting forged currency and tickets, authenticating passports, and halting counterfeit goods. Document identification could also be applied maliciously to de-anonymize printed surveys and to compromise the secrecy of paper ballots.
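Here is a rough Python sketch of the matching idea: reduce each scan to a fixed-length binary fingerprint and accept a match when the Hamming distance is below a threshold chosen to tolerate wear and rescanning noise. The feature extraction and threshold are invented placeholders; the authors' actual pipeline (multiple scan orientations, surface-normal estimation, error-correcting sketches) is considerably more sophisticated.

```python
import numpy as np

FINGERPRINT_BITS = 1024
MATCH_THRESHOLD = 0.32   # hypothetical: max fraction of differing bits

def fingerprint(texture_patch: np.ndarray) -> np.ndarray:
    """Reduce a grayscale texture patch to a binary fingerprint.

    Placeholder extractor: threshold each coarse cell against the patch
    mean. The real technique estimates the 3-D paper surface from
    multiple scan orientations instead.
    """
    mean = texture_patch.mean()
    cells = np.array_split(texture_patch.ravel(), FINGERPRINT_BITS)
    return np.array([cell.mean() > mean for cell in cells])

def same_page(fp_a: np.ndarray, fp_b: np.ndarray) -> bool:
    # Hamming distance as a fraction of the fingerprint length.
    return np.mean(fp_a != fp_b) < MATCH_THRESHOLD

# Demo with synthetic "paper texture": the same page rescanned with
# noise still matches; a different page does not.
rng = np.random.default_rng(0)
page = rng.normal(size=(256, 256))
rescan = page + rng.normal(scale=0.3, size=page.shape)   # wear, scanner noise
other_page = rng.normal(size=(256, 256))

print(same_page(fingerprint(page), fingerprint(rescan)))      # True
print(same_page(fingerprint(page), fingerprint(other_page)))  # False
```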

Posted on March 19, 2009 at 6:07 AM

Defeating Caller ID Blocking

TrapCall is a new service that reveals the caller ID on anonymous or blocked calls:

TrapCall instructs new customers to reprogram their cellphones to send all rejected, missed and unanswered calls to TrapCall’s own toll-free number. If the user sees an incoming call with Caller ID blocked, he just presses the button on the phone that would normally send it to voicemail. The call invisibly loops through TelTech’s system, then back to the user’s phone, this time with the caller’s number displayed as the Caller ID.

There’s more:

In addition to the free service, branded Fly Trap, a $10-per-month upgrade called Mouse Trap provides human-created transcripts of voicemail messages, and in some cases uses text messaging to send you the name of the caller—information not normally available to wireless customers. Mouse Trap will also send you text messages with the numbers of people who call while your phone is powered off, even if they don’t leave a message.

With the $25-a-month Bear Trap upgrade, you can also automatically record your incoming calls, and get text messages with the billing name and street address of some of your callers, which TelTech says is derived from commercial databases.

Posted on February 26, 2009 at 12:53 PM

The NSA Teams Up with the Chinese Government to Limit Internet Anonymity

Definitely strange bedfellows:

A United Nations agency is quietly drafting technical standards, proposed by the Chinese government, to define methods of tracing the original source of Internet communications and potentially curbing the ability of users to remain anonymous.

The U.S. National Security Agency is also participating in the “IP Traceback” drafting group, named Q6/17, which is meeting next week in Geneva to work on the traceback proposal. Members of Q6/17 have declined to release key documents, and meetings are closed to the public.

[…]

A second, apparently leaked ITU document offers surveillance and monitoring justifications that seem well-suited to repressive regimes:

A political opponent to a government publishes articles putting the government in an unfavorable light. The government, having a law against any opposition, tries to identify the source of the negative articles but the articles having been published via a proxy server, is unable to do so protecting the anonymity of the author.

This is being sold as a way to go after the bad guys, but it won’t help. Here’s Steve Bellovin on that issue:

First, very few attacks these days use spoofed source addresses; the real IP address already tells you where the attack is coming from. Second, in case of a DDoS attack, there are too many sources; you can’t do anything with the information. Third, the machine attacking you is almost certainly someone else’s hacked machine and tracking them down (and getting them to clean it up) is itself time-consuming.

Traceback is most useful in monitoring the activities of large masses of people. But of course, that’s why the Chinese and the NSA are so interested in this proposal in the first place.

It’s hard to figure out what the endgame is; the U.N. doesn’t have the authority to impose Internet standards on anyone. In any case, this idea is counter to the U.N. Universal Declaration of Human Rights, Article 19: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” In the U.S., it’s counter to the First Amendment, which has long permitted anonymous speech. On the other hand, basic human and constitutional rights have been jettisoned left and right in the years after 9/11; why should this be any different?

But when the Chinese government and the NSA get together to enhance their ability to spy on us all, you have to wonder what’s gone wrong with the world.

Posted on September 18, 2008 at 6:34 AM

Anonymity and the Netflix Dataset

Last year, Netflix published 10 million movie rankings by 500,000 customers, as part of a challenge for people to come up with better recommendation systems than the one the company was using. The data was anonymized by removing personal details and replacing names with random numbers, to protect the privacy of the recommenders.

Arvind Narayanan and Vitaly Shmatikov, researchers at the University of Texas at Austin, de-anonymized some of the Netflix data by comparing rankings and timestamps with public information in the Internet Movie Database, or IMDb.

Their research (.pdf) illustrates some inherent security problems with anonymous data, but first it’s important to explain what they did and did not do.

They did not reverse the anonymity of the entire Netflix dataset. What they did was reverse the anonymity of the Netflix dataset for those sampled users who also entered some movie rankings, under their own names, in the IMDb. (While IMDb’s records are public, crawling the site to get them is against the IMDb’s terms of service, so the researchers used a representative few to prove their algorithm.)

The point of the research was to demonstrate how little information is required to de-anonymize information in the Netflix dataset.

On one hand, isn’t that sort of obvious? The risks of anonymous databases have been written about before, such as in this 2001 paper published in an IEEE journal. The researchers working with the anonymous Netflix data didn’t painstakingly figure out people’s identities—as others did with the AOL search database last year—they just compared it with an already identified subset of similar data: a standard data-mining technique.

But as opportunities for this kind of analysis pop up more frequently, lots of anonymous data could end up at risk.

Someone with access to an anonymous dataset of telephone records, for example, might partially de-anonymize it by correlating it with a catalog merchant’s telephone order database. Or Amazon’s online book reviews could be the key to partially de-anonymizing a public database of credit card purchases, or a larger database of anonymous book reviews.

Google, with its database of users’ internet searches, could easily de-anonymize a public database of internet purchases, or zero in on searches of medical terms to de-anonymize a public health database. Merchants who maintain detailed customer and purchase information could use their data to partially de-anonymize any large search engine’s data, if it were released in an anonymized form. A data broker holding databases of several companies might be able to de-anonymize most of the records in those databases.

What the University of Texas researchers demonstrate is that this process isn’t hard, and doesn’t require a lot of data. It turns out that if you eliminate the top 100 movies everyone watches, our movie-watching habits are all pretty individual. This would certainly hold true for our book reading habits, our internet shopping habits, our telephone habits and our web searching habits.

The obvious countermeasures for this are, sadly, inadequate. Netflix could have randomized its dataset by removing a subset of the data, changing the timestamps or adding deliberate errors into the unique ID numbers it used to replace the names. It turns out, though, that this only makes the problem slightly harder. Narayanan and Shmatikov’s de-anonymization algorithm is surprisingly robust, and works with partial data, data that has been perturbed, even data with errors in it.

With only eight movie ratings (of which two may be completely wrong), and dates that may be up to two weeks in error, they can uniquely identify 99 percent of the records in the dataset. After that, all they need is a little bit of identifiable data: from the IMDb, from your blog, from anywhere. The moral is that it takes only a small named database for someone to pry the anonymity off a much larger anonymous database.
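Here is a toy Python version of that kind of scoring: count how many of a candidate's (movie, rating, date) entries approximately agree with an anonymous record, allowing a rating to be off and a date to be up to two weeks wrong, and accept the best candidate only if it clearly beats the runner-up. The tolerances come from the figures above; the data layout and the stand-out test are simplifications of the paper's actual algorithm.

```python
from datetime import date, timedelta

RATING_TOLERANCE = 1                  # hypothetical: ratings within 1 star "agree"
DATE_TOLERANCE = timedelta(days=14)   # "dates that may be up to two weeks in error"

def score(anon_record: dict, aux_profile: dict) -> int:
    """Count approximately matching movie -> (rating, date) entries."""
    hits = 0
    for movie, (rating, when) in aux_profile.items():
        if movie not in anon_record:
            continue
        a_rating, a_when = anon_record[movie]
        if (abs(a_rating - rating) <= RATING_TOLERANCE
                and abs(a_when - when) <= DATE_TOLERANCE):
            hits += 1
    return hits

def best_match(anon_record, aux_profiles):
    """Return the named profile that clearly stands out, or None."""
    scored = sorted(((score(anon_record, prof), name)
                     for name, prof in aux_profiles.items()), reverse=True)
    (top, name), (runner_up, _) = scored[0], scored[1]
    # Require the best score to beat the runner-up decisively ("eccentricity").
    return name if top >= 2 and top >= runner_up + 2 else None

# Tiny demo: an "anonymous" Netflix-style record vs. two IMDb-style profiles.
anon = {"Brazil": (5, date(2005, 3, 1)), "Heat": (4, date(2005, 6, 10)),
        "Alien": (2, date(2005, 9, 3))}
aux = {"alice": {"Brazil": (5, date(2005, 3, 5)), "Heat": (3, date(2005, 6, 20)),
                 "Alien": (1, date(2005, 9, 1))},
       "bob": {"Brazil": (1, date(2006, 1, 1)), "Heat": (5, date(2004, 2, 2))}}
print(best_match(anon, aux))   # -> alice
```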

Other research reaches the same conclusion. Using public anonymous data from the 1990 census, Latanya Sweeney found that 87 percent of the population in the United States, 216 million of 248 million, could likely be uniquely identified by their five-digit ZIP code, combined with their gender and date of birth. About half of the U.S. population is likely identifiable by gender, date of birth and the city, town or municipality in which the person resides. Expanding the geographic scope to an entire county reduces that to a still-significant 18 percent. “In general,” she wrote, “few characteristics are needed to uniquely identify a person.”

Stanford University researchers reported similar results using 2000 census data. It turns out that date of birth, which (unlike birthday month and day alone) sorts people into thousands of different buckets, is incredibly valuable in disambiguating people.
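The arithmetic behind these results is easy to check. A back-of-the-envelope Python sketch, with round order-of-magnitude figures and the (unrealistic) simplifying assumption that people are spread evenly across buckets:

```python
# Rough quasi-identifier arithmetic; illustrative figures, not census data.
population = 248_000_000       # roughly the 1990 U.S. population
zip_codes = 42_000             # roughly the number of 5-digit ZIP codes
genders = 2
birth_dates = 365 * 80         # about 80 plausible birth years

buckets = zip_codes * genders * birth_dates
print(f"{buckets:,} buckets, {population / buckets:.2f} people per bucket on average")

# Collapsing exact birth date to birth year shrinks the bucket count by
# a factor of ~365, and anonymity improves accordingly.
coarse_buckets = zip_codes * genders * 80
print(f"{population / coarse_buckets:.0f} people per (ZIP, gender, birth year) bucket")
```

With more than two billion buckets for 248 million people, most occupied buckets hold exactly one person, which is why ZIP code plus gender plus date of birth re-identifies so much of the population.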

This has profound implications for releasing anonymous data. On one hand, anonymous data is an enormous boon for researchers—AOL did a good thing when it released its anonymous dataset for research purposes, and it’s sad that the CTO resigned and an entire research team was fired after the public outcry. Large anonymous databases of medical data are enormously valuable to society: for large-scale pharmacology studies, long-term follow-up studies and so on. Even anonymous telephone data makes for fascinating research.

On the other hand, in the age of wholesale surveillance, where everyone collects data on us all the time, anonymization is very fragile and riskier than it initially seems.

Like everything else in security, anonymity systems shouldn’t be fielded before being subjected to adversarial attacks. We all know that it’s folly to implement a cryptographic system before it’s rigorously attacked; why should we expect anonymity systems to be any different? And, like everything else in security, anonymity is a trade-off. There are benefits, and there are corresponding risks.

Narayanan and Shmatikov are currently working on developing algorithms and techniques that enable the secure release of anonymous datasets like Netflix’s. That’s a research result we can all benefit from.

This essay originally appeared on Wired.com.

Posted on December 18, 2007 at 5:53 AM

Dan Egerstad Arrested

I previously wrote about Dan Egerstad, a security researcher who ran Tor exit nodes and was able to sniff some pretty impressive usernames and passwords.

Swedish police arrested him:

About 9am Egerstad walked downstairs to move his car when he was accosted by the officers in a scene “taken out of a bad movie”, he said in an email interview.

“I got a couple of police IDs in my face while told that they are taking me in for questioning,” he said.

But not before the agents, who had staked out his house in undercover blue and grey Saabs (“something that screams cop to every person in Sweden from miles away”), searched his apartment and confiscated computers, CDs and portable hard drives.

“They broke my wardrobe, short cutted my electricity, pulled out my speakers, phone and other cables having nothing to do with this and been touching my bookkeeping, which they have no right to do,” he said.

While questioning Egerstad at the station, the police “played every trick in the book, good cop, bad cop and crazy mysterious guy in the corner not wanting to tell his name and just staring at me”.

“Well, if they want to try to manipulate, I can play that game too. [I] gave every known body signal there is telling of lies … covered my mouth, scratched my elbow, looked away and so on.”

No charges have been filed. I’m not sure there’s anything wrong with what he did.

Here’s a good article on what he did; it was published just before the arrest.

Posted on November 16, 2007 at 2:27 PM

Anonymity and the Tor Network

As the name implies, Alcoholics Anonymous meetings are anonymous. You don’t have to sign anything, show ID or even reveal your real name. But the meetings are not private. Anyone is free to attend. And anyone is free to recognize you: by your face, by your voice, by the stories you tell. Anonymity is not the same as privacy.

That’s obvious and uninteresting, but many of us seem to forget it when we’re on a computer. We think “it’s secure,” and forget that secure can mean many different things.

Tor is a free tool that allows people to use the internet anonymously. Basically, by joining Tor you join a network of computers around the world that pass internet traffic randomly amongst each other before sending it out to wherever it is going. Imagine a tight huddle of people passing letters around. Once in a while a letter leaves the huddle, sent off to some destination. If you can’t see what’s going on inside the huddle, you can’t tell who sent what letter based on watching letters leave the huddle.

I’ve left out a lot of details, but that’s basically how Tor works. It’s called “onion routing,” and it was first developed at the Naval Research Laboratory. The communications between Tor nodes are encrypted in a layered protocol—hence the onion analogy—but the traffic that leaves the Tor network is in the clear. It has to be.

If you want your Tor traffic to be private, you need to encrypt it. If you want it to be authenticated, you need to sign it as well. The Tor website even says:

Yes, the guy running the exit node can read the bytes that come in and out there. Tor anonymizes the origin of your traffic, and it makes sure to encrypt everything inside the Tor network, but it does not magically encrypt all traffic throughout the internet.

Tor anonymizes, nothing more.
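Here is a minimal Python sketch of the layering idea, using the third-party cryptography package's Fernet for each layer: the sender wraps the payload once per relay, each relay peels exactly one layer, and whatever the sender put inside the innermost layer, encrypted or not, is what the exit node sees. This illustrates onion routing in general, not Tor's actual circuit protocol.

```python
from cryptography.fernet import Fernet

# Three relays, each with its own key. (Tor negotiates these per circuit;
# here they are just generated locally for illustration.)
relay_keys = [Fernet.generate_key() for _ in range(3)]

def build_onion(payload: bytes, keys) -> bytes:
    """Wrap the payload in one encryption layer per relay, innermost last."""
    onion = payload
    for key in reversed(keys):        # the last relay's layer goes on first
        onion = Fernet(key).encrypt(onion)
    return onion

def peel(onion: bytes, key: bytes) -> bytes:
    """A relay removes exactly one layer; it cannot see deeper layers."""
    return Fernet(key).decrypt(onion)

# Nothing below encrypts the payload itself: if it is plaintext, the
# exit relay (and anyone watching it) sees plaintext.
payload = b"GET /search?q=something-sensitive HTTP/1.1"

onion = build_onion(payload, relay_keys)
for key in relay_keys:                # traffic passes relays 0, 1, 2 in order
    onion = peel(onion, key)

print(onion == payload)               # True: the exit node sees the raw payload
```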

Dan Egerstad is a Swedish security researcher; he ran five Tor nodes. Last month, he posted a list of 100 e-mail credentials—server IP addresses, e-mail accounts and the corresponding passwords—for embassies and government ministries around the globe, all obtained by sniffing exit traffic for usernames and passwords of e-mail servers.

The list contains mostly third-world embassies: Kazakhstan, Uzbekistan, Tajikistan, India, Iran, Mongolia—but there’s a Japanese embassy on the list, as well as the UK Visa Application Center in Nepal, the Russian Embassy in Sweden, the Office of the Dalai Lama and several Hong Kong Human Rights Groups. And this is just the tip of the iceberg; Egerstad sniffed more than 1,000 corporate accounts this way, too. Scary stuff, indeed.

Presumably, most of these organizations are using Tor to hide their network traffic from their host countries’ spies. But because anyone can join the Tor network, Tor users necessarily pass their traffic to organizations they might not trust: various intelligence agencies, hacker groups, criminal organizations and so on.

It’s simply inconceivable that Egerstad is the first person to do this sort of eavesdropping; Len Sassaman published a paper on this attack earlier this year. The price you pay for anonymity is exposing your traffic to shady people.

We don’t really know whether the Tor users were the accounts’ legitimate owners, or if they were hackers who had broken into the accounts by other means and were now using Tor to avoid being caught. But certainly most of these users didn’t realize that anonymity doesn’t mean privacy. The fact that most of the accounts listed by Egerstad were from small nations is no surprise; that’s where you’d expect weaker security practices.

True anonymity is hard. Just as you could be recognized at an AA meeting, you can be recognized on the internet as well. There’s a lot of research on breaking anonymity in general—and Tor specifically—but sometimes it doesn’t even take much. Last year, AOL made 20,000 anonymous search queries public as a research tool. It wasn’t very hard to identify people from the data.

A research project called Dark Web, funded by the National Science Foundation, even tried to identify anonymous writers by their style:

One of the tools developed by Dark Web is a technique called Writeprint, which automatically extracts thousands of multilingual, structural, and semantic features to determine who is creating “anonymous” content online. Writeprint can look at a posting on an online bulletin board, for example, and compare it with writings found elsewhere on the Internet. By analyzing these certain features, it can determine with more than 95 percent accuracy if the author has produced other content in the past.

And if your name or other identifying information is in just one of those writings, you can be identified.
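Writeprint itself isn't public, but the basic stylometric idea is simple. Here is a minimal Python sketch using a handful of crude style features and cosine similarity; the real system extracts thousands of structural and semantic features and uses trained classifiers, but the shape of the attack is the same.

```python
import math
import re
from collections import Counter

FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "is", "it", "for"}

def features(text: str) -> Counter:
    """Crude style features: function-word counts, word lengths, punctuation."""
    feats = Counter()
    for word in re.findall(r"[a-z']+", text.lower()):
        if word in FUNCTION_WORDS:
            feats["fw:" + word] += 1
        feats[f"len:{min(len(word), 12)}"] += 1
    for ch in ",.;:!?-":
        feats["punct:" + ch] = text.count(ch)
    return feats

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_similar_author(anonymous_text: str, known_texts: dict) -> str:
    """Attribute an anonymous posting to the most stylistically similar author."""
    anon = features(anonymous_text)
    return max(known_texts, key=lambda a: cosine(anon, features(known_texts[a])))

# Toy demo with invented postings.
known = {
    "verbose_poster": "Well, I think that the point, in the end, is this; "
                      "one cannot simply say that it is so; one must argue it.",
    "terse_poster": "No. Wrong. Read the spec. It says so. Done.",
}
anonymous = "I think that the point is this; it is not enough to say it; one must show it."
print(most_similar_author(anonymous, known))   # -> verbose_poster
```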

Like all security tools, Tor is used by both good guys and bad guys. And perversely, the very fact that something is on the Tor network means that someone—for some reason—wants to hide the fact he’s doing it.

As long as Tor is a magnet for “interesting” traffic, Tor will also be a magnet for those who want to eavesdrop on that traffic—especially because more than 90 percent of Tor users don’t encrypt.

This essay previously appeared on Wired.com.

Posted on September 20, 2007 at 5:38 AM

Another E-Voting Problem: Not-Secret Ballots

Uh-oh:

Ohio law permits anyone to walk into a county election office and obtain two crucial documents: a list of voters in the order they voted, and a time-stamped list of the actual votes. “We simply take the two pieces of paper together, merge them, and then we have which voter voted and in which way,” said James Moyer, a longtime privacy activist and poll worker who lives in Columbus, Ohio.
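The "merge" really is that simple: sort one record by timestamp and line it up with the other. Here is what it looks like in Python, with invented field values:

```python
# Two public records, per the article: voters in the order they voted,
# and a time-stamped list of the votes cast. (All values are invented.)
voters_in_order = ["Alice Smith", "Bob Jones", "Carol Wu"]
timestamped_votes = [
    ("2006-11-07 08:01", "Issue 1: YES"),
    ("2006-11-07 08:04", "Issue 1: NO"),
    ("2006-11-07 08:09", "Issue 1: YES"),
]

# Sorting the votes by timestamp puts them in the same order in which the
# voters appear, so a simple zip links each voter to a ballot.
for voter, (when, ballot) in zip(voters_in_order, sorted(timestamped_votes)):
    print(f"{when}  {voter}: {ballot}")
```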

EDITED TO ADD (9/13): Commentary by Ed Felten.

Posted on August 21, 2007 at 7:01 AM
