Entries Tagged "Twitter"


File Deletion

File deletion is all about control. This wasn’t always an issue: your data was on your computer, and you decided when and how to delete a file. You could use the delete function if you didn’t care whether the file could be recovered, and a file erase program — I use BCWipe for Windows — if you wanted to ensure no one could ever recover it.
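The distinction is easy to see in code. Here’s a minimal sketch of a file eraser (my own illustration, not how BCWipe actually works): unlike a plain delete, it overwrites the file’s contents before unlinking it. Note that on journaling filesystems and SSDs, an overwrite like this still doesn’t guarantee the old data is gone.

```python
import os

def erase(path, passes=3):
    """Overwrite a file's contents with random bytes, then unlink it.

    A plain os.remove() only drops the directory entry; the data
    blocks remain on disk until reused. This overwrites them first.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the disk
    os.remove(path)
```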

As we move more of our data onto cloud computing platforms such as Gmail and Facebook, and closed proprietary platforms such as the Kindle and the iPhone, deleting data is much harder.

You have to trust that these companies will delete your data when you ask them to, but they’re generally not interested in doing so. Sites like these are more likely to make your data inaccessible than they are to physically delete it. Facebook is a known culprit: actually deleting your data from its servers requires a complicated procedure that may or may not work. And even if you do manage to delete your data, copies are certain to remain in the companies’ backup systems. Gmail explicitly says this in its privacy notice.

Online backups, SMS messages, photos on photo sharing sites, smartphone applications that store your data in the network: you have no idea what really happens when you delete pieces of data or your entire account, because you’re not in control of the computers that are storing the data.

This notion of control also explains how Amazon was able to delete a book that people had previously purchased on their Kindle e-book readers. The legalities are debatable, but Amazon had the technical ability to delete the file because it controls all Kindles. It has designed the Kindle so that it determines when to update the software, whether people are allowed to buy Kindle books, and when to turn off people’s Kindles entirely.

Vanish is a research project by Roxana Geambasu and colleagues at the University of Washington. They designed a prototype system that automatically deletes data after a set time interval. So you can send an email, create a Google Doc, post an update to Facebook, or upload a photo to Flickr, all designed to disappear after a set period of time. And after it disappears, no one — not anyone who downloaded the data, not the site that hosted the data, not anyone who intercepted the data in transit, not even you — will be able to read it. If the police arrive at Facebook or Google or Flickr with a warrant, they won’t be able to read it.

The details are complicated, but Vanish breaks the data’s decryption key into a bunch of pieces and scatters them around the web using a peer-to-peer network. Then it uses the natural turnover in these networks — machines constantly join and leave — to make the data disappear. Unlike previous programs that supported file deletion, this one doesn’t require you to trust any company, organisation, or website. It just happens.
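For readers who want the flavor of the mechanism: splitting a key into pieces so that a quorum can reconstruct it is threshold secret sharing. The toy sketch below (my own illustration, not the Vanish code; the function names and parameters are mine) splits a key into ten shares such that any seven recover it — once churn destroys more than three shares, the key, and hence the data, is unrecoverable.

```python
# Toy Shamir-style threshold secret sharing over a prime field.
import random

PRIME = 2**127 - 1  # a Mersenne prime, big enough for a 126-bit key

def split_secret(secret, n_shares, threshold):
    """Split `secret` into n_shares points on a random polynomial;
    any `threshold` of them determine the polynomial (and the secret)."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 123456789
shares = split_secret(key, n_shares=10, threshold=7)
assert recover_secret(shares[:7]) == key  # any 7 shares suffice
# with fewer than 7 shares left, the key cannot be reconstructed
```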

Of course, Vanish doesn’t prevent the recipient of an email or the reader of a Facebook page from copying the data and pasting it into another file, just as Kindle’s deletion feature doesn’t prevent people from copying a book’s files and saving them on their computers. Vanish is just a prototype at this point, and it only works if all the people who read your Facebook entries or view your Flickr pictures have it installed on their computers as well; but it’s a good demonstration of how control affects file deletion. And while it’s a step in the right direction, it’s also new and therefore deserves further security analysis before being adopted on a wide scale.

We’ve lost the control of data on some of the computers we own, and we’ve lost control of our data in the cloud. We’re not going to stop using Facebook and Twitter just because they’re not going to delete our data when we ask them to, and we’re not going to stop using Kindles and iPhones because they may delete our data when we don’t want them to. But we need to take back control of data in the cloud, and projects like Vanish show us how we can.

Now we need something that will protect our data when a large corporation decides to delete it.

This essay originally appeared in The Guardian.

EDITED TO ADD (9/30): Vanish has been broken, paper here.

Posted on September 10, 2009 at 6:08 AM

Risks of Cloud Computing

Excellent essay by Jonathan Zittrain on the risks of cloud computing:

The cloud, however, comes with real dangers.

Some are in plain view. If you entrust your data to others, they can let you down or outright betray you. For example, if your favorite music is rented or authorized from an online subscription service rather than freely in your custody as a compact disc or an MP3 file on your hard drive, you can lose your music if you fall behind on your payments — or if the vendor goes bankrupt or loses interest in the service. Last week Amazon apparently conveyed a publisher’s change-of-heart to owners of its Kindle e-book reader: some purchasers of Orwell’s “1984” found it removed from their devices, with nothing to show for their purchase other than a refund. (Orwell would be amused.)

Worse, data stored online has less privacy protection both in practice and under the law. A hacker recently guessed the password to the personal e-mail account of a Twitter employee, and was thus able to extract the employee’s Google password. That in turn compromised a trove of Twitter’s corporate documents stored too conveniently in the cloud. Before, the bad guys usually needed to get their hands on people’s computers to see their secrets; in today’s cloud all you need is a password.

Thanks in part to the Patriot Act, the federal government has been able to demand some details of your online activities from service providers — and not to tell you about it. There have been thousands of such requests lodged since the law was passed, and the F.B.I.’s own audits have shown that there can be plenty of overreach — perhaps wholly inadvertent — in requests like these.

Here’s me on cloud computing.

Posted on July 30, 2009 at 7:06 AM

Did a Public Twitter Post Lead to a Burglary?

No evidence one way or the other:

Like a lot of people who use social media, Israel Hyman and his wife Noell went on Twitter to share real-time details of a recent trip. Their posts said they were “preparing to head out of town,” that they had “another 10 hours of driving ahead,” and that they “made it to Kansas City.”

While they were on the road, their home in Mesa, Ariz., was burglarized. Hyman has an online video business called IzzyVideo.com, with 2,000 followers on Twitter. He thinks his Twitter updates tipped the burglars off.

“My wife thinks it could be a random thing, but I just have my suspicions,” he said. “They didn’t take any of our normal consumer electronics.” They took his video editing equipment.

I’m not saying that there isn’t a connection, but people have a propensity for seeing these sorts of connections.

Posted on June 15, 2009 at 2:26 PM

Second SHB Workshop Liveblogging (8)

The penultimate session of the conference was “Privacy,” moderated by Tyler Moore.

Alessandro Acquisti, Carnegie Mellon University (suggested reading: What Can Behavioral Economics Teach Us About Privacy?; Privacy in Electronic Commerce and the Economics of Immediate Gratification), presented research on how people value their privacy. He started by listing a variety of cognitive biases that affect privacy decisions: illusion of control, overconfidence, optimism bias, endowment effect, and so on. He discussed two experiments. The first demonstrated a “herding effect”: if a subject believes that others reveal sensitive behavior, the subject is more likely to also reveal sensitive behavior. The second examined the “frog effect”: do privacy intrusions alert or desensitize people to revealing personal information? What he found is that people tend to set their privacy level at the beginning of a survey, and don’t respond well to being asked easy questions at first and then sensitive questions at the end. In the discussion, Joe Bonneau asked him about the notion that people’s privacy protections tend to ratchet up over time; he didn’t have conclusive evidence, but gave several possible explanations for the phenomenon.

Adam Joinson, University of Bath (suggested reading: Privacy, Trust and Self-Disclosure Online; Privacy concerns and privacy actions), also studies how people value their privacy. He talked about expressive privacy — privacy that allows people to express themselves and form interpersonal relationships. His research showed that differences between how people use Facebook in different countries depend on how much people trust Facebook as a company, rather than how much people trust other Facebook users. Another study looked at posts from Secret Tweet and Twitter. He found 16 markers that allowed him to automatically determine which tweets contain sensitive personal information and which do not, with high probability. Then he tried to determine if people with large Twitter followings post fewer secrets than people who are only twittering to a few people. He found absolutely no difference.

Peter Neumann, SRI (suggested reading: Holistic systems; Risks; Identity and Trust in Context), talked about lack of medical privacy (too many people have access to your data), about voting (the privacy problem makes the voting problem a lot harder, and the end-to-end voting security/privacy problem is much harder than just securing voting machines), and privacy in China (the government is requiring all computers sold in China to be sold with software allowing them to eavesdrop on the users). Any would-be solution needs to reflect the ubiquity of the threat. When we design systems, we need to anticipate what the privacy problems will be. Privacy problems are everywhere you look, and ordinary people have no idea of the depth of the problem.

Eric Johnson, Dartmouth College (suggested reading: Access Flexibility with Escalation and Audit; Security through Information Risk Management), studies the information access problem from a business perspective. He’s been doing field studies in companies like retail banks and investment banks, and found that role-based access control fails because companies can’t determine who has what role. Even worse, roles change quickly, especially in large complex organizations. For example, one business group of 3000 people experienced 1000 role changes within three months. The result is that organizations do access control badly, either over-entitling or under-entitling people. But since getting the job done is the most important thing, organizations tend to over-entitle: give people more access than they need. His current work is to find the right set of incentives and controls to set access more properly. The challenge is to do this without making people risk averse. In the discussion, he agreed that a perfect access control system is not possible, and that organizations should probably allow a certain amount of access control violations — similar to the idea of posting a 55 mph speed limit but not ticketing people unless they go over 70 mph.

Christine Jolls, Yale Law School (suggested reading: Rationality and Consent in Privacy Law, Employee Privacy), made the point that people regularly share their most private information with their intimates — so privacy is not about secrecy, it’s more about control. There are moments when people make pretty big privacy decisions. For example, they grant employers the rights to monitor their e-mail, or test their urine without notice. In general, courts hold that blanket signing away of privacy rights — “you can test my urine on any day in the future” — is not valid, but immediate signing away of privacy rights — “you can test my urine today” — is. Jolls believes that this is reasonable for several reasons, such as optimism bias and an overfocus on the present at the expense of the future. Without realizing it, the courts have implemented the system that behavioral economics would find optimal. During the discussion, she talked about how coercion figures into this; the U.S. legal system tends not to be concerned with it.

Andrew Adams, University of Reading (suggested reading: Regulating CCTV), also looks at attitudes of privacy on social networking services. His results are preliminary, and based on interviews with university students in Canada, Japan, and the UK, and are very concordant with what danah boyd and Joe Bonneau said earlier. From the UK: People join social networking sites to increase their level of interaction with people they already know in real life. Revealing personal information is okay, but revealing too much is bad. Even more interestingly, it’s not okay to reveal more about others than they reveal themselves. From Japan: People are more open to making friends online. There’s more anonymity. It’s not okay to reveal information about others, but “the fault of this lies as much with the person whose data was revealed in not choosing friends wisely.” This victim responsibility is a common theme with other privacy and security elements in Japan. Data from Canada is still being compiled.

Great phrase: the “laundry belt” — close enough for students to go home on weekends with their laundry, but far enough away so they don’t feel as if their parents are looking over their shoulder — typically two hours by public transportation (in the UK).

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 3:01 PM

Fake Facts on Twitter

Clever hack:

Back during the debate for HR 1, I was amazed at how easily conservatives were willing to accept and repeat lies about spending in the stimulus package, even after those provisions had been debunked as fabrications. The $30 million for the salt marsh mouse is a perfect example, and Kagro X documented well over a dozen congressmen repeating the lie.

To test the limits of this phenomenon, I started a parody Twitter account last Thursday, which I called “InTheStimulus”, where all the tweets took the format “InTheStimulus is $x million for ______”. I went through the followers of Republican Twitter feeds and in turn followed them, all the way up to the limit of 2000. From people following me back, I was able to get 500 followers in less than a day, and 1000 by Sunday morning.

You can read through all the retweets and responses by looking at the Twitter search for “InTheStimulus”. For the most part, my first couple days of posts were believable, but unsourced lies:

  • $3 million for replacement tires for 1992-1995 Geo Metros.
  • $750,000 for an underground tunnel connecting a middle school and high school in North Carolina.
  • $4.7 million for a program supplying public television to K-8 classrooms.
  • $2.3 million for a museum dedicated to the electric bass guitar.

The Twitter InTheStimulus site appears to have been taken down.

There are several things going on here. First is confirmation bias, which is the tendency of people to believe things that reinforce their prior beliefs. But the second is the limited bandwidth of Twitter — 140-character messages — which makes it very difficult to authenticate anything. Twitter is an ideal medium to inject fake facts into society for precisely this reason.

EDITED TO ADD (5/14): False Twitter rumors about Swine Flu.

Posted on April 24, 2009 at 6:29 AM

Identifying People using Anonymous Social Networking Data

Interesting:

Computer scientists Arvind Narayanan and Dr Vitaly Shmatikov, from the University of Texas at Austin, developed the algorithm which turned the anonymous data back into names and addresses.

The data sets are usually stripped of personally identifiable information, such as names, before they are sold to marketing companies or researchers keen to plumb them for useful information.

Before now, it was thought sufficient to remove this data to make sure that the true identities of subjects could not be reconstructed.

The algorithm developed by the pair looks at relationships between all the members of a social network — not just the immediate friends that members of these sites connect to.

Social graphs from Twitter, Flickr and Live Journal were used in the research.

The pair found that one third of those who are on both Flickr and Twitter can be identified from the completely anonymous Twitter graph. This is despite the fact that the overlap of members between the two services is thought to be about 15%.

The researchers suggest that as social network sites become more heavily used, then people will find it increasingly difficult to maintain a veil of anonymity.

More details:

In “De-anonymizing social networks,” Narayanan and Shmatikov take an anonymous graph of the social relationships established through Twitter and find that they can actually identify many Twitter accounts based on an entirely different data source—in this case, Flickr.

One-third of users with accounts on both services could be identified on Twitter based on their Flickr connections, even when the Twitter social graph being used was completely anonymous. The point, say the authors, is that “anonymity is not sufficient for privacy when dealing with social networks,” since their scheme relies only on a social network’s topology to make the identification.

The issue is of more than academic interest, as social networks now routinely release such anonymous social graphs to advertisers and third-party apps, and government and academic researchers ask for such data to conduct research. But the data isn’t nearly as “anonymous” as those releasing it appear to think it is, and it can easily be cross-referenced to other data sets to expose user identities.

It’s not just about Twitter, either. Twitter was a proof of concept, but the idea extends to any sort of social network: phone call records, healthcare records, academic sociological datasets, etc.
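The core trick is easy to demonstrate. Here’s a toy sketch of seed-and-extend graph matching (my own illustration; the graphs, names, and scoring rule are made up, and the actual Narayanan–Shmatikov algorithm is considerably more robust): starting from a couple of known identities, each anonymous node is matched to the auxiliary-graph node that shares the most already-identified neighbors.

```python
# Auxiliary graph with real identities (think: Flickr contacts).
flickr = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "carol", "dave"},
    "carol": {"alice", "bob"},
    "dave":  {"bob"},
}
# Anonymized target graph with the same structure (think: a released
# Twitter social graph with the names stripped out).
twitter = {
    1: {2, 3},
    2: {1, 3, 4},
    3: {1, 2},
    4: {2},
}

def deanonymize(aux, anon, seeds):
    """Extend a few known seed mappings: repeatedly match each unmapped
    anonymous node to the aux node sharing the most mapped neighbors."""
    mapping = dict(seeds)  # anon id -> real name
    changed = True
    while changed:
        changed = False
        for node in anon:
            if node in mapping:
                continue
            # this node's neighbors that have already been identified
            known = {mapping[n] for n in anon[node] if n in mapping}
            if not known:
                continue
            # pick the unclaimed aux node whose neighborhood best overlaps
            best = max(
                (name for name in aux if name not in mapping.values()),
                key=lambda name: len(aux[name] & known),
                default=None,
            )
            if best is not None and aux[best] & known:
                mapping[node] = best
                changed = True
    return mapping

# Two seeds are enough to unravel this whole toy graph.
print(deanonymize(flickr, twitter, {1: "alice", 4: "dave"}))
```

The real attack scales the same idea to millions of nodes, with noisy, partially overlapping graphs — which is why stripping names alone is not anonymization.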

Here’s the paper.

Posted on April 6, 2009 at 6:51 AM

Helping the Terrorists

It regularly comes as a surprise to people that our own infrastructure can be used against us. And in the wake of terrorist attacks or plots, there are fear-induced calls to ban, disrupt or control that infrastructure. According to officials investigating the Mumbai attacks, the terrorists used images from Google Earth to help learn their way around. This isn’t the first time Google Earth has been charged with helping terrorists: in 2007, Google Earth images of British military bases were found in the homes of Iraqi insurgents. Incidents such as these have led many governments to demand that Google remove or blur images of sensitive locations: military bases, nuclear reactors, government buildings, and so on. An Indian court has been asked to ban Google Earth entirely.

This isn’t the only way our information technology helps terrorists. Last year, a US army intelligence report worried that terrorists could plan their attacks using Twitter, and there are unconfirmed reports that the Mumbai terrorists read the Twitter feeds about their attacks to get real-time information they could use. British intelligence is worried that terrorists might use voice over IP services such as Skype to communicate. Terrorists may train on Second Life and World of Warcraft. We already know they use websites to spread their message and possibly even to recruit.

Of course, all of this is exacerbated by open wireless access, which has been repeatedly labelled a terrorist tool and which has been the object of attempted bans.

Mobile phone networks help terrorists, too. The Mumbai terrorists used them to communicate with each other. This has led some cities, including New York and London, to propose turning off mobile phone coverage in the event of a terrorist attack.

Let’s all stop and take a deep breath. By its very nature, communications infrastructure is general. It can be used to plan both legal and illegal activities, and it’s generally impossible to tell which is which. When I send and receive email, it looks exactly the same as a terrorist doing the same thing. To the mobile phone network, a call from one terrorist to another looks exactly the same as a mobile phone call from one victim to another. Any attempt to ban or limit infrastructure affects everybody. If India bans Google Earth, a future terrorist won’t be able to use it to plan; nor will anybody else. Open Wi-Fi networks are useful for many reasons, the large majority of them positive, and closing them down affects all those reasons. Terrorist attacks are very rare, and it is almost always a bad trade-off to deny society the benefits of a communications technology just because the bad guys might use it too.

Communications infrastructure is especially valuable during a terrorist attack. Twitter was the best way for people to get real-time information about the attacks in Mumbai. If the Indian government shut Twitter down – or London blocked mobile phone coverage – during a terrorist attack, the lack of communications for everyone, not just the terrorists, would increase the level of terror and could even increase the body count. Information lessens fear and makes people safer.

None of this is new. Criminals have used telephones and mobile phones since they were invented. Drug smugglers use airplanes and boats, radios and satellite phones. Bank robbers have long used cars and motorcycles as getaway vehicles, and horses before then. I haven’t seen it talked about yet, but the Mumbai terrorists used boats as well. They also wore boots. They ate lunch at restaurants, drank bottled water, and breathed the air. Society survives all of this because the good uses of infrastructure far outweigh the bad uses, even though the good uses are – by and large – small and pedestrian and the bad uses are rare and spectacular. And while terrorism turns society’s very infrastructure against itself, we only harm ourselves by dismantling that infrastructure in response – just as we would if we banned cars because bank robbers used them too.

This essay originally appeared in The Guardian.

EDITED TO ADD (1/29): Other ways we help the terrorists: we put computers in our libraries, we allow anonymous chat rooms, we permit commercial databases and we engage in biomedical research. Grocery stores, too, sell food to just anyone who walks in.

EDITED TO ADD (2/3): Washington DC wants to jam cell phones too.

EDITED TO ADD (2/9): Another thing that will help the terrorists: in-flight Internet.

Posted on January 29, 2009 at 6:00 AM

Shaping the Obama Administration's Counterterrorism Strategy

I’m at a two-day conference: Shaping the Obama Administration’s Counterterrorism Strategy, sponsored by the Cato Institute in Washington, DC. It’s sold out, but you can watch or listen to the event live on the Internet. I’ll be on a panel tomorrow at 9:00 AM.

I’ve been told that there’s a lively conversation about the conference on Twitter, but — as I have previously said — I don’t Twitter.

Posted on January 12, 2009 at 12:44 PM
