Entries Tagged "data mining"


Notice and Consent

New Research: Rebecca Lipman, “Online Privacy and the Invisible Market for Our Data.” The paper argues that notice and consent doesn’t work, and suggests how it could be made to work.

Abstract: Consumers constantly enter into blind bargains online. We trade our personal information for free websites and apps, without knowing exactly what will be done with our data. There is nominally a notice and choice regime in place via lengthy privacy policies. However, virtually no one reads them. In this ill-informed environment, companies can gather and exploit as much data as technologically possible, with very few legal boundaries. The consequences for consumers are often far-removed from their actions, or entirely invisible to them. Americans deserve a rigorous notice and choice regime. Such a regime would allow consumers to make informed decisions and regain some measure of control over their personal information. This article explores the problems with the current marketplace for our digital data, and explains how we can make a robust notice and choice regime work for consumers.

Posted on February 26, 2016 at 12:22 PM

Fugitive Located by Spotify

The latest in identification by data:

Webber said a tipster had spotted recent activity from Nunn on the Spotify streaming service and alerted law enforcement. He scoured the Internet for other evidence of Nunn and Barr’s movements, eventually filling out 12 search warrants for records at different technology companies. Those searches led him to an IP address that traced Nunn to Cabo San Lucas, Webber said.

Nunn, he said, had been avidly streaming television shows and children’s programs on various online services, giving the sheriff’s department a hint to the couple’s location.
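The article doesn't say which tools the sheriff's department used, but the last step it describes, turning a subpoenaed IP address into a rough physical location, is routine. A minimal sketch using the geoip2 library and a MaxMind GeoLite2 database; the database path and documentation-range IP address below are placeholders, not details from the case:

```python
import geoip2.database
from geoip2.errors import AddressNotFoundError

# Placeholder database path and documentation-range IP address; a real
# investigation would use the subpoenaed address and a current GeoLite2 file.
DB_PATH = "GeoLite2-City.mmdb"
IP_ADDRESS = "203.0.113.7"

with geoip2.database.Reader(DB_PATH) as reader:
    try:
        r = reader.city(IP_ADDRESS)
        print(r.country.name, r.city.name,
              r.location.latitude, r.location.longitude)
    except AddressNotFoundError:
        print("No location data for", IP_ADDRESS)
```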

Posted on July 29, 2015 at 1:43 PM

Research on the Trade-off Between Free Services and Personal Data

New report: “The Tradeoff Fallacy: How marketers are misrepresenting American consumers and opening them up to exploitation.”

New Annenberg survey results indicate that marketers are misrepresenting a large majority of Americans by claiming that Americans give out information about themselves as a tradeoff for benefits they receive. To the contrary, the survey reveals most Americans do not believe that ‘data for discounts’ is a square deal.

The findings also suggest, in contrast to other academics’ claims, that Americans’ willingness to provide personal information to marketers cannot be explained by the public’s poor knowledge of the ins and outs of digital commerce. In fact, people who know more about ways marketers can use their personal information are more likely rather than less likely to accept discounts in exchange for data when presented with a real-life scenario.

Our findings, instead, support a new explanation: a majority of Americans are resigned to giving up their data—and that is why many appear to be engaging in tradeoffs. Resignation occurs when a person believes an undesirable outcome is inevitable and feels powerless to stop it. Rather than feeling able to make choices, Americans believe it is futile to manage what companies can learn about them. Our study reveals that more than half do not want to lose control over their information but also believe this loss of control has already happened.

By misrepresenting the American people and championing the tradeoff argument, marketers give policymakers false justifications for allowing the collection and use of all kinds of consumer data often in ways that the public find objectionable. Moreover, the futility we found, combined with a broad public fear about what companies can do with the data, portends serious difficulties not just for individuals but also—over time—for the institution of consumer commerce.

Some news articles.

Posted on June 17, 2015 at 6:44 AM

Corporations Misusing Our Data

In the Internet age, we have no choice but to entrust our data to private companies: e-mail providers, service providers, retailers, and so on.

We realize that this data is at risk from hackers. But there’s another risk as well: the employees of the companies who are holding our data for us.

In the early years of Facebook, employees had a master password that enabled them to view anything they wanted in any account. NSA employees occasionally snoop on their friends and partners. The agency even has a name for it: LOVEINT. And well before the Internet, people with access to police or medical records occasionally used that power to look up either famous people or people they knew.

The latest company accused of allowing this sort of thing is Uber, the Internet car-ride service. The company is under investigation for spying on riders without their permission. Some Uber employees have access to a “god view” that shows who is using the service and where they’re going, and at least once, in 2011, it was used as a party trick to show off the service. A senior executive also suggested the company should hire people to dig up dirt on its critics, making its database of people’s rides even more “useful.”

None of us wants to be stalked—whether it’s from looking at our location data, our medical data, our emails and texts, or anything else—by friends or strangers who have access due to their jobs. Unfortunately, there are few rules protecting us.

Government employees are prohibited from looking at our data, although none of the NSA LOVEINT creeps were ever prosecuted. The HIPAA law protects the privacy of our medical records, but we have nothing to protect most of our other information.

Your Facebook and Uber data are protected only by company culture. Nothing in the license agreements that you clicked “agree” to but didn’t read prevents those companies from violating your privacy.

This needs to change. Corporate databases containing our data should be secured from everyone who doesn’t need access for their work. Voyeurs who peek at our data without a legitimate reason should be punished.

There are audit technologies that can detect this sort of thing, and they should be required. As long as we have to give our data to companies and government agencies, we need assurances that our privacy will be protected.
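The essay doesn't name specific products, but one simple form such auditing can take is checking every employee lookup against a recorded business justification and flagging the lookups that have none. A toy sketch; the log format, ticket mapping, and identifiers are invented for illustration:

```python
from collections import defaultdict

# Hypothetical access-log records: (employee_id, customer_id, reason_ticket)
access_log = [
    ("emp_17", "cust_901", "ticket_4411"),
    ("emp_17", "cust_233", None),          # no work reason recorded
    ("emp_42", "cust_555", "ticket_9001"),
]

# Hypothetical mapping of open support tickets to the customers they concern
ticket_to_customer = {"ticket_4411": "cust_901", "ticket_9001": "cust_555"}

def flag_unjustified_access(log, tickets):
    """Flag lookups that aren't tied to a ticket for that same customer."""
    flagged = defaultdict(list)
    for employee, customer, ticket in log:
        if ticket is None or tickets.get(ticket) != customer:
            flagged[employee].append(customer)
    return dict(flagged)

print(flag_unjustified_access(access_log, ticket_to_customer))
# {'emp_17': ['cust_233']} -- a lookup with no recorded work justification
```

In practice the same idea runs over database query logs rather than a hand-built list, but the principle is the same: every access needs a reason, and accesses without one get reviewed.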

This essay previously appeared on CNN.com.

Posted on December 5, 2014 at 6:45 AM

Is Google Too Big to Trust?

Interesting essay about how Google’s lack of transparency is eroding users’ trust:

The reality is that Google’s business is and has always been about mining as much data as possible to be able to present information to users. After all, it can’t display what it doesn’t know. Google Search has always been an ad-supported service, so it needs a way to sell those users to advertisers—that’s how the industry works. Its Google Now voice-based service is simply a form of Google Search, so it too serves advertisers’ needs.

In the digital world, advertisers want to know more than that 100,000 people might be interested in buying a new car. They now want to know who those people are, so they can reach out to them with custom messages that are more likely to be effective. They may not know you personally, but they know your digital persona—basically, you. Google needs to know about you to satisfy its advertisers’ demands.

Once you understand that, you understand why Google does what it does. That’s simply its business. Nothing is free, so if you won’t pay cash, you’ll have to pay with personal information. That business model has been around for decades; Google didn’t invent that business model, but Google did figure out how to make it work globally, pervasively, appealingly, and nearly instantaneously.

I don’t blame Google for doing that, but I blame it for being nontransparent. Putting unmarked sponsored ads in the “regular” search results section is misleading, because people have been trained by Google to see that section of the search results as neutral. They are in fact not. Once you know that, you never quite trust Google search results again. (Yes, Bing’s results are similarly tainted. But Microsoft never promised to do no evil, and most people use Google.)

Posted on April 24, 2014 at 6:45 AM

Big Data Surveillance Results in Bad Policy

Evgeny Morozov makes a point about surveillance and big data: it just looks for useful correlations without worrying about causes, and leads people to implement “fixes” based simply on those correlations—rather than understanding and correcting the underlying causes.

As the media academic Mark Andrejevic points out in Infoglut, his new book on the political implications of information overload, there is an immense—but mostly invisible—cost to the embrace of Big Data by the intelligence community (and by just about everyone else in both the public and private sectors). That cost is the devaluation of individual and institutional comprehension, epitomized by our reluctance to investigate the causes of actions and jump straight to dealing with their consequences. But, argues Andrejevic, while Google can afford to be ignorant, public institutions cannot.

“If the imperative of data mining is to continue to gather more data about everything,” he writes, “its promise is to put this data to work, not necessarily to make sense of it. Indeed, the goal of both data mining and predictive analytics is to generate useful patterns that are far beyond the ability of the human mind to detect or even explain.” In other words, we don’t need to inquire why things are the way they are as long as we can affect them to be the way we want them to be. This is rather unfortunate. The abandonment of comprehension as a useful public policy goal would make serious political reforms impossible.

Forget terrorism for a moment. Take more mundane crime. Why does crime happen? Well, you might say that it’s because youths don’t have jobs. Or you might say that’s because the doors of our buildings are not fortified enough. Given some limited funds to spend, you can either create yet another national employment program or you can equip houses with even better cameras, sensors, and locks. What should you do?

If you’re a technocratic manager, the answer is easy: Embrace the cheapest option. But what if you are that rare breed, a responsible politician? Just because some crimes have now become harder to commit doesn’t mean that the previously unemployed youths have finally found employment. Surveillance cameras might reduce crime—even though the evidence here is mixed—but no studies show that they result in greater happiness for everyone involved. The unemployed youths are still as stuck as they were before—only that now, perhaps, they displace anger onto one another. On this reading, fortifying our streets without inquiring into the root causes of crime is a self-defeating strategy, at least in the long run.

Big Data is very much like the surveillance camera in this analogy: Yes, it can help us avoid occasional jolts and disturbances and, perhaps, even stop the bad guys. But it can also blind us to the fact that the problem at hand requires a more radical approach. Big Data buys us time, but it also gives us a false illusion of mastery.
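Morozov's "patterns without causes" point is easy to demonstrate: scan enough unrelated variables against an outcome and a seemingly strong correlation will turn up by chance. A small synthetic illustration; all of the data below is random, none of it comes from a real dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# A purely random "outcome" (say, weekly incident counts) and 1,000
# equally random, unrelated candidate "features"
outcome = rng.normal(size=52)
features = rng.normal(size=(1000, 52))

# Correlate every feature with the outcome and keep the strongest one
correlations = np.array([np.corrcoef(f, outcome)[0, 1] for f in features])
best = int(np.argmax(np.abs(correlations)))
print(f"feature {best} correlates at r = {correlations[best]:.2f} with the outcome")
# With this many candidate features, a fairly strong correlation appears
# by chance alone: a "useful pattern" with no cause behind it.
```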

Posted on July 8, 2013 at 11:50 AM

CIA Invests in Social-Network Datamining

From Wired:

In-Q-Tel, the investment arm of the CIA and the wider intelligence community, is putting cash into Visible Technologies, a software firm that specializes in monitoring social media. It’s part of a larger movement within the spy services to get better at using “open source intelligence”—information that’s publicly available, but often hidden in the flood of TV shows, newspaper articles, blog posts, online videos and radio reports generated every day.

Here’s the Visible Technologies press release on the funding.

Posted on October 26, 2009 at 6:53 AM
