Entries Tagged "Twitter"


Did a Public Twitter Post Lead to a Burglary?

No evidence one way or the other:

Like a lot of people who use social media, Israel Hyman and his wife Noell went on Twitter to share real-time details of a recent trip. Their posts said they were “preparing to head out of town,” that they had “another 10 hours of driving ahead,” and that they “made it to Kansas City.”

While they were on the road, their home in Mesa, Ariz., was burglarized. Hyman has an online video business called IzzyVideo.com, with 2,000 followers on Twitter. He thinks his Twitter updates tipped the burglars off.

“My wife thinks it could be a random thing, but I just have my suspicions,” he said. “They didn’t take any of our normal consumer electronics.” They took his video editing equipment.

I’m not saying that there isn’t a connection, but people have a propensity for seeing these sorts of connections.

Posted on June 15, 2009 at 2:26 PM

Second SHB Workshop Liveblogging (8)

The penultimate session of the conference was “Privacy,” moderated by Tyler Moore.

Alessandro Acquisti, Carnegie Mellon University (suggested reading: What Can Behavioral Economics Teach Us About Privacy?; Privacy in Electronic Commerce and the Economics of Immediate Gratification), presented research on how people value their privacy. He started by listing a variety of cognitive biases that affect privacy decisions: illusion of control, overconfidence, optimism bias, endowment effect, and so on. He discussed two experiments. The first demonstrated a “herding effect”: if a subject believes that others reveal sensitive behavior, the subject is more likely to also reveal sensitive behavior. The second examined the “frog effect”: do privacy intrusions alert or desensitize people to revealing personal information? What he found is that people tend to set their privacy level at the beginning of a survey, and don’t respond well to being asked easy questions at first and then sensitive questions at the end. In the discussion, Joe Bonneau asked him about the notion that people’s privacy protections tend to ratchet up over time; he didn’t have conclusive evidence, but gave several possible explanations for the phenomenon.

Adam Joinson, University of Bath (suggested reading: Privacy, Trust and Self-Disclosure Online; Privacy concerns and privacy actions), also studies how people value their privacy. He talked about expressive privacy—privacy that allows people to express themselves and form interpersonal relationships. His research showed that differences between how people use Facebook in different countries depend on how much people trust Facebook as a company, rather than how much people trust other Facebook users. Another study looked at posts from Secret Tweet and Twitter. He found 16 markers that allowed him to automatically determine which tweets contain sensitive personal information and which do not, with high probability. Then he tried to determine if people with large Twitter followings post fewer secrets than people who are only twittering to a few people. He found absolutely no difference.
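Joinson's automatic detection of sensitive tweets can be caricatured in a few lines. This is only an illustrative sketch: the marker phrases below are invented for this example (his actual 16 markers are not listed in the talk summary), and his method presumably involved more than simple substring matching.

```python
# Hypothetical sketch of marker-based classification: score a tweet
# against a list of textual markers and flag it as sensitive if enough
# markers match. The markers below are invented for illustration.

SENSITIVE_MARKERS = [
    "i've never told", "don't tell anyone", "my secret",
    "i'm ashamed", "confession", "between us",
]

def is_sensitive(tweet: str, threshold: int = 1) -> bool:
    """Return True if the tweet matches at least `threshold` markers."""
    text = tweet.lower()
    hits = sum(1 for marker in SENSITIVE_MARKERS if marker in text)
    return hits >= threshold

print(is_sensitive("My secret: I've never told anyone this"))          # True
print(is_sensitive("Made it to Kansas City, 10 hours of driving left"))  # False
```

With markers in hand, checking whether heavily-followed accounts post fewer secrets reduces to running a classifier like this over each account's tweets and comparing rates across follower counts.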

Peter Neumann, SRI (suggested reading: Holistic systems; Risks; Identity and Trust in Context), talked about lack of medical privacy (too many people have access to your data), about voting (the privacy problem makes the voting problem a lot harder, and the end-to-end voting security/privacy problem is much harder than just securing voting machines), and privacy in China (the government is requiring all computers sold in China to be sold with software allowing them to eavesdrop on the users). Any would-be solution needs to reflect the ubiquity of the threat. When we design systems, we need to anticipate what the privacy problems will be. Privacy problems are everywhere you look, and ordinary people have no idea of the depth of the problem.

Eric Johnson, Dartmouth College (suggested reading: Access Flexibility with Escalation and Audit; Security through Information Risk Management), studies the information access problem from a business perspective. He’s been doing field studies in companies like retail banks and investment banks, and found that role-based access control fails because companies can’t determine who has what role. Even worse, roles change quickly, especially in large complex organizations. For example, one business group of 3000 people experiences 1000 role changes within three months. The result is that organizations do access control badly, either over-entitling or under-entitling people. But since getting the job done is the most important thing, organizations tend to over-entitle: give people more access than they need. His current work is to find the right set of incentives and controls to set access more properly. The challenge is to do this without making people risk-averse. In the discussion, he agreed that a perfect access control system is not possible, and that organizations should probably allow a certain amount of access control violations—similar to the idea of posting a 55 mph speed limit but not ticketing people unless they go over 70 mph.
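The 55-versus-70-mph idea maps naturally onto an "allow but audit" band in an access-control check. The sketch below is my own assumption about how such a policy might look in code, not Johnson's actual system: requests slightly beyond a user's formal entitlement are allowed but logged for later review, rather than hard-denied.

```python
# Sketch of access control with escalation and audit: three bands
# instead of a binary allow/deny. Entitled resources are silently
# allowed (under 55), tolerated ones are allowed but audited
# (between 55 and 70), and everything else is denied (over 70).

from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    entitled: set                # resources the role formally grants
    tolerated: set               # extra resources allowed but audited
    audit_log: list = field(default_factory=list)

    def request(self, user: str, resource: str) -> bool:
        if resource in self.entitled:
            return True                              # silent allow
        if resource in self.tolerated:
            self.audit_log.append((user, resource))  # allow, but log it
            return True
        return False                                 # hard deny

policy = AccessPolicy(entitled={"crm"}, tolerated={"billing"})
print(policy.request("alice", "crm"))      # True, no audit entry
print(policy.request("alice", "billing"))  # True, audited
print(policy.request("alice", "payroll"))  # False
print(policy.audit_log)                    # [('alice', 'billing')]
```

The audit log, not the deny, is where the incentives live: people get their work done, and reviewers can later tighten entitlements for access patterns that turn out to be illegitimate.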

Christine Jolls, Yale Law School (suggested reading: Rationality and Consent in Privacy Law, Employee Privacy), made the point that people regularly share their most private information with their intimates—so privacy is not about secrecy, it’s more about control. There are moments when people make pretty big privacy decisions. For example, they grant employers the rights to monitor their e-mail, or test their urine without notice. In general, courts hold that blanket signing away of privacy rights—”you can test my urine on any day in the future”—is not valid, but immediate signing away of privacy rights—”you can test my urine today”—is. Jolls believes that this is reasonable for several reasons, such as optimism bias and an overfocus on the present at the expense of the future. Without realizing it, the courts have implemented the system that behavioral economics would find optimal. During the discussion, she talked about how coercion figures into this; the U.S. legal system tends not to be concerned with it.

Andrew Adams, University of Reading (suggested reading: Regulating CCTV), also looks at attitudes toward privacy on social networking services. His results are preliminary, based on interviews with university students in Canada, Japan, and the UK, and are very concordant with what danah boyd and Joe Bonneau said earlier. From the UK: People join social networking sites to increase their level of interaction with people they already know in real life. Revealing personal information is okay, but revealing too much is bad. Even more interestingly, it’s not okay to reveal more about others than they reveal themselves. From Japan: People are more open to making friends online. There’s more anonymity. It’s not okay to reveal information about others, but “the fault of this lies as much with the person whose data was revealed in not choosing friends wisely.” This victim responsibility is a common theme with other privacy and security elements in Japan. Data from Canada is still being compiled.

Great phrase: the “laundry belt”—close enough for students to go home on weekends with their laundry, but far enough away so they don’t feel as if their parents are looking over their shoulder—typically two hours by public transportation (in the UK).

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 3:01 PM

Fake Facts on Twitter

Clever hack:

Back during the debate for HR 1, I was amazed at how easily conservatives were willing to accept and repeat lies about spending in the stimulus package, even after those provisions had been debunked as fabrications. The $30 million for the salt marsh mouse is a perfect example, and Kagro X documented well over a dozen congressmen repeating the lie.

To test the limits of this phenomenon, I started a parody Twitter account last Thursday, which I called “InTheStimulus”, where all the tweets took the format “InTheStimulus is $x million for ______”. I went through the followers of Republican Twitter feeds and in turn followed them, all the way up to the limit of 2000. From people following me back, I was able to get 500 followers in less than a day, and 1000 by Sunday morning.

You can read through all the retweets and responses by looking at the Twitter search for “InTheStimulus”. For the most part, my first couple days of posts were believable, but unsourced lies:

  • $3 million for replacement tires for 1992-1995 Geo Metros.
  • $750,000 for an underground tunnel connecting a middle school and high school in North Carolina.
  • $4.7 million for a program supplying public television to K-8 classrooms.
  • $2.3 million for a museum dedicated to the electric bass guitar.

The Twitter InTheStimulus site appears to have been taken down.

There are several things going on here. First is confirmation bias, which is the tendency of people to believe things that reinforce their prior beliefs. But the second is the limited bandwidth of Twitter—140-character messages—that makes it very difficult to authenticate anything. Twitter is an ideal medium to inject fake facts into society for precisely this reason.

EDITED TO ADD (5/14): False Twitter rumors about Swine Flu.

Posted on April 24, 2009 at 6:29 AM

Identifying People using Anonymous Social Networking Data

Interesting:

Computer scientists Arvind Narayanan and Dr Vitaly Shmatikov, from the University of Texas at Austin, developed the algorithm which turned the anonymous data back into names and addresses.

The data sets are usually stripped of personally identifiable information, such as names, before they are sold to marketing companies or researchers keen to plumb them for useful information.

Before now, it was thought sufficient to remove this data to make sure that the true identities of subjects could not be reconstructed.

The algorithm developed by the pair looks at relationships between all the members of a social network—not just the immediate friends that members of these sites connect to.

Social graphs from Twitter, Flickr and Live Journal were used in the research.

The pair found that one third of those who are on both Flickr and Twitter can be identified from the completely anonymous Twitter graph. This is despite the fact that the overlap of members between the two services is thought to be about 15%.

The researchers suggest that as social network sites become more heavily used, then people will find it increasingly difficult to maintain a veil of anonymity.

More details:

In “De-anonymizing social networks,” Narayanan and Shmatikov take an anonymous graph of the social relationships established through Twitter and find that they can actually identify many Twitter accounts based on an entirely different data source—in this case, Flickr.

One-third of users with accounts on both services could be identified on Twitter based on their Flickr connections, even when the Twitter social graph being used was completely anonymous. The point, say the authors, is that “anonymity is not sufficient for privacy when dealing with social networks,” since their scheme relies only on a social network’s topology to make the identification.

The issue is of more than academic interest, as social networks now routinely release such anonymous social graphs to advertisers and third-party apps, and government and academic researchers ask for such data to conduct research. But the data isn’t nearly as “anonymous” as those releasing it appear to think it is, and it can easily be cross-referenced to other data sets to expose user identities.

It’s not just about Twitter, either. Twitter was a proof of concept, but the idea extends to any sort of social network: phone call records, healthcare records, academic sociological datasets, etc.
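In rough terms, this style of attack can be pictured as seed-and-propagate graph matching. The toy sketch below is my own illustration of the general idea, not the paper's actual algorithm: it assumes a handful of seed users already identified in both graphs, exact edge overlap, and no noise, whereas the real attack tolerates all three.

```python
# Toy topology-only re-identification: starting from "seed" users
# known in both graphs, repeatedly identify anonymous nodes whose
# degree and set of already-identified neighbors match exactly one
# node in the labeled auxiliary graph.

def propagate(anon_graph, aux_graph, seeds):
    """anon_graph, aux_graph: dict mapping node -> set of neighbors.
    seeds: dict mapping anonymous node -> real identity.
    Returns the mapping extended as far as it will go."""
    mapping = dict(seeds)
    changed = True
    while changed:
        changed = False
        for a_node, a_nbrs in anon_graph.items():
            if a_node in mapping:
                continue
            # Fingerprint: identities of already-mapped neighbors.
            fp = {mapping[n] for n in a_nbrs if n in mapping}
            if not fp:
                continue
            # Candidates: unmapped aux nodes with matching degree
            # whose neighborhood contains the fingerprint.
            candidates = [x for x, nbrs in aux_graph.items()
                          if fp <= nbrs and len(nbrs) == len(a_nbrs)
                          and x not in mapping.values()]
            if len(candidates) == 1:          # unambiguous match
                mapping[a_node] = candidates[0]
                changed = True
    return mapping

# Anonymous Twitter-like graph (numeric ids) vs. labeled Flickr-like graph.
anon = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
aux  = {"ann": {"bob", "cat"}, "bob": {"ann", "cat"},
        "cat": {"ann", "bob", "dan"}, "dan": {"cat"}}
print(propagate(anon, aux, {1: "ann"}))  # {1: 'ann', 2: 'bob', 3: 'cat', 4: 'dan'}
```

The point the sketch makes concrete is that no attribute data is consulted at all: every identification comes from who is connected to whom, which is exactly what an "anonymized" social graph still contains.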

Here’s the paper.

Posted on April 6, 2009 at 6:51 AM

Helping the Terrorists

It regularly comes as a surprise to people that our own infrastructure can be used against us. And in the wake of terrorist attacks or plots, there are fear-induced calls to ban, disrupt or control that infrastructure. According to officials investigating the Mumbai attacks, the terrorists used images from Google Earth to help learn their way around. This isn’t the first time Google Earth has been charged with helping terrorists: in 2007, Google Earth images of British military bases were found in the homes of Iraqi insurgents. Incidents such as these have led many governments to demand that Google remove or blur images of sensitive locations: military bases, nuclear reactors, government buildings, and so on. An Indian court has been asked to ban Google Earth entirely.

This isn’t the only way our information technology helps terrorists. Last year, a US army intelligence report worried that terrorists could plan their attacks using Twitter, and there are unconfirmed reports that the Mumbai terrorists read the Twitter feeds about their attacks to get real-time information they could use. British intelligence is worried that terrorists might use voice over IP services such as Skype to communicate. Terrorists may train on Second Life and World of Warcraft. We already know they use websites to spread their message and possibly even to recruit.

Of course, all of this is exacerbated by open-wireless access, which has been repeatedly labelled a terrorist tool and which has been the object of attempted bans.

Mobile phone networks help terrorists, too. The Mumbai terrorists used them to communicate with each other. This has led some cities, including New York and London, to propose turning off mobile phone coverage in the event of a terrorist attack.

Let’s all stop and take a deep breath. By its very nature, communications infrastructure is general. It can be used to plan both legal and illegal activities, and it’s generally impossible to tell which is which. When I send and receive email, it looks exactly the same as a terrorist doing the same thing. To the mobile phone network, a call from one terrorist to another looks exactly the same as a mobile phone call from one victim to another. Any attempt to ban or limit infrastructure affects everybody. If India bans Google Earth, a future terrorist won’t be able to use it to plan; nor will anybody else. Open Wi-Fi networks are useful for many reasons, the large majority of them positive, and closing them down affects all those reasons. Terrorist attacks are very rare, and it is almost always a bad trade-off to deny society the benefits of a communications technology just because the bad guys might use it too.

Communications infrastructure is especially valuable during a terrorist attack. Twitter was the best way for people to get real-time information about the attacks in Mumbai. If the Indian government shut Twitter down – or London blocked mobile phone coverage – during a terrorist attack, the lack of communications for everyone, not just the terrorists, would increase the level of terror and could even increase the body count. Information lessens fear and makes people safer.

None of this is new. Criminals have used telephones and mobile phones since they were invented. Drug smugglers use airplanes and boats, radios and satellite phones. Bank robbers have long used cars and motorcycles as getaway vehicles, and horses before then. I haven’t seen it talked about yet, but the Mumbai terrorists used boats as well. They also wore boots. They ate lunch at restaurants, drank bottled water, and breathed the air. Society survives all of this because the good uses of infrastructure far outweigh the bad uses, even though the good uses are – by and large – small and pedestrian and the bad uses are rare and spectacular. And while terrorism turns society’s very infrastructure against itself, we only harm ourselves by dismantling that infrastructure in response – just as we would if we banned cars because bank robbers used them too.

This essay originally appeared in The Guardian.

EDITED TO ADD (1/29): Other ways we help the terrorists: we put computers in our libraries, we allow anonymous chat rooms, we permit commercial databases and we engage in biomedical research. Grocery stores, too, sell food to just anyone who walks in.

EDITED TO ADD (2/3): Washington DC wants to jam cell phones too.

EDITED TO ADD (2/9): Another thing that will help the terrorists: in-flight Internet.

Posted on January 29, 2009 at 6:00 AM

Shaping the Obama Administration's Counterterrorism Strategy

I’m at a two-day conference: Shaping the Obama Administration’s Counterterrorism Strategy, sponsored by the Cato Institute in Washington, DC. It’s sold out, but you can watch or listen to the event live on the Internet. I’ll be on a panel tomorrow at 9:00 AM.

I’ve been told that there’s a lively conversation about the conference on Twitter, but—as I have previously said—I don’t Twitter.

Posted on January 12, 2009 at 12:44 PM

Communications During Terrorist Attacks are Not Bad

Twitter was a vital source of information in Mumbai:

News on the Bombay attacks is breaking fast on Twitter with hundreds of people using the site to update others with first-hand accounts of the carnage.

The website has a stream of comments on the attacks which is being updated by the second, often by eye-witnesses and people in the city. Although the chatter cannot be verified immediately and often reflects the chaos on the streets, it is becoming the fastest source of information for those seeking unfiltered news from the scene.

But we simply have to be smarter than this:

In the past hour, people using Twitter reported that bombings and attacks were continuing, but none of these could be confirmed. Others gave details on different locations in which hostages were being held.

And this morning, Twitter users said that Indian authorities were asking users to stop updating the site for security reasons.

One person wrote: “Police reckon tweeters giving away strategic info to terrorists via Twitter”.

Another link:

I can’t stress enough: people can and will use these devices and apps in a terrorist attack, so it is imperative that officials start telling us what kind of information would be relevant from Twitter, Flickr, etc. (and, BTW, what shouldn’t be spread: one Twitter user in Mumbai tweeted me that people were sending the exact location of people still in the hotels, and could tip off the terrorists) and that they begin to monitor these networks in disasters, terrorist attacks, etc.

This fear is exactly backwards. During a terrorist attack—during any crisis situation, actually—the one thing people can do is exchange information. It helps people, calms people, and actually reduces the thing the terrorists are trying to achieve: terror. Yes, there are specific movie-plot scenarios where certain public pronouncements might help the terrorists, but those are rare. I would much rather err on the side of more information, more openness, and more communication.

Posted on December 1, 2008 at 12:02 PM
