Entries Tagged "Google"


Cory Doctorow Gets Phished

It can happen to anyone:

Here’s how I got fooled. On Monday, I unlocked my Nexus One phone, installing a new and more powerful version of the Android operating system that allowed me to do some neat tricks, like using the phone as a wireless modem on my laptop. In the process of reinstallation, I deleted all my stored passwords from the phone. I also had a couple of editorials come out that day, and did a couple of interviews, and generally emitted a pretty fair whack of information.

The next day, Tuesday, we were ten minutes late getting out of the house. My wife and I dropped my daughter off at the daycare, then hurried to our regular coffee shop to get take-outs before parting ways to go to our respective offices. Because we were a little late arriving, the line was longer than usual. My wife went off to read the free newspapers; I stood in line. Bored, I opened up my phone, fired up my freshly reinstalled Twitter client, and saw that I had a direct message from an old friend in Seattle, someone I know through fandom. The message read “Is this you????” and was followed by one of those ubiquitous shortened URLs that consist of a domain and a short code, like this: http://owl.ly/iuefuew.

The whole story is worth reading.
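Part of what makes this phish work is that a shortened URL hides its real destination. As a minimal, hypothetical sketch (the shortener list below is my assumption and far from exhaustive), a client could at least flag such links for extra scrutiny before following them:

```python
from urllib.parse import urlparse

# Illustrative list of well-known URL-shortener hosts (an assumption;
# real shortening services number in the hundreds).
KNOWN_SHORTENERS = {"bit.ly", "ow.ly", "t.co", "goo.gl", "tinyurl.com", "is.gd"}

def looks_shortened(url: str) -> bool:
    """Return True if the URL's host matches a known link shortener,
    meaning the true destination is hidden from the reader."""
    host = urlparse(url).netloc.lower()
    return host in KNOWN_SHORTENERS
```

A safer client could go further and expand such links (for example, with an HTTP HEAD request) to show the final destination before the user taps through.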

Posted on May 7, 2010 at 6:56 AM

Privacy and Control

In January, Facebook Chief Executive Mark Zuckerberg declared the age of privacy to be over. A month earlier, Google Chief Eric Schmidt expressed a similar sentiment. Add Scott McNealy’s and Larry Ellison’s comments from a few years earlier, and you’ve got a whole lot of tech CEOs proclaiming the death of privacy—especially when it comes to young people.

It’s just not true. People, including the younger generation, still care about privacy. Yes, they’re far more public on the Internet than their parents: writing personal details on Facebook, posting embarrassing photos on Flickr and having intimate conversations on Twitter. But they take steps to protect their privacy and vociferously complain when they feel it has been violated. They’re not technically sophisticated about privacy and make mistakes all the time, but that’s mostly the fault of companies and Web sites that try to manipulate them for financial gain.

To the older generation, privacy is about secrecy. And, as the Supreme Court said, once something is no longer secret, it’s no longer private. But that’s not how privacy works, and it’s not how the younger generation thinks about it. Privacy is about control. When your health records are sold to a pharmaceutical company without your permission; when a social-networking site changes your privacy settings to make what used to be visible only to your friends visible to everyone; when the NSA eavesdrops on everyone’s e-mail conversations—your loss of control over that information is the issue. We may not mind sharing our personal lives and thoughts, but we want to control how, where and with whom. A privacy failure is a control failure.

People’s relationship with privacy is socially complicated. Salience matters: People are more likely to protect their privacy if they’re thinking about it, and less likely to if they’re thinking about something else. Social-networking sites know this, constantly reminding people about how much fun it is to share photos and comments and conversations while downplaying the privacy risks. Some sites go even further, deliberately hiding information about how little control—and privacy—users have over their data. We all give up our privacy when we’re not thinking about it.

Group behavior matters; we’re more likely to expose personal information when our peers are doing it. We object more to losing privacy than we value its return once it’s gone. Even if we don’t have control over our data, an illusion of control reassures us. And we are poor judges of risk. All sorts of academic research backs up these findings.

Here’s the problem: The very companies whose CEOs eulogize privacy make their money by controlling vast amounts of their users’ information. Whether through targeted advertising, cross-selling or simply convincing their users to spend more time on their site and sign up their friends, more information shared in more ways, more publicly means more profits. This means these companies are motivated to continually ratchet down the privacy of their services, while at the same time pronouncing privacy erosions as inevitable and giving users the illusion of control.

You can see these forces in play with Google’s launch of Buzz. Buzz is a Twitter-like chatting service, and when Google launched it in February, the defaults were set so people would follow the people they corresponded with frequently in Gmail, with the list publicly available. Yes, users could change these options, but—and Google knew this—changing options is hard and most people accept the defaults, especially when they’re trying out something new. People were upset that their previously private e-mail contacts list was suddenly public. A Federal Trade Commission commissioner even threatened penalties. And though Google changed its defaults, resentment remained.

Facebook tried a similar control grab when it changed people’s default privacy settings last December to make them more public. While users could, in theory, keep their previous settings, it took an effort. Many people just wanted to chat with their friends and clicked through the new defaults without realizing it.

Facebook has a history of this sort of thing. In 2006 it introduced News Feeds, which changed the way people viewed information about their friends. There was no true privacy change, in that users could see no more information than before; the change was in control—or arguably, just in the illusion of control. Still, there was a large uproar. And Facebook is doing it again; last month, the company announced new privacy changes that will make it easier for it to collect location data on users and sell that data to third parties.

With all this privacy erosion, those CEOs may actually be right—but only because they’re working to kill privacy. On the Internet, our privacy options are limited to the options those companies give us and how easy they are to find. We have Gmail and Facebook accounts because that’s where we socialize these days, and it’s hard—especially for the younger generation—to opt out. As long as privacy isn’t salient, and as long as these companies are allowed to forcibly change social norms by limiting options, people will increasingly get used to less and less privacy. There’s no malice on anyone’s part here; it’s just market forces in action. If we believe privacy is a social good, something necessary for democracy, liberty and human dignity, then we can’t rely on market forces to maintain it. Broad legislation protecting personal privacy by giving people control over their personal data is the only solution.

This essay originally appeared on Forbes.com.

EDITED TO ADD (4/13): Google responds. And another essay on the topic.

Posted on April 6, 2010 at 7:47 AM

Marc Rotenberg on Google's Italian Privacy Case

Interesting commentary:

I don’t think this is really a case about ISP liability at all. It is a case about the use of a person’s image, without their consent, that generates commercial value for someone else. That is the essence of the Italian law at issue in this case. It is also how the right of privacy was first established in the United States.

The video at the center of this case was very popular in Italy and drove lots of users to the Google Video site. This boosted advertising and support for other Google services. As a consequence, Google actually had an incentive not to respond to the many requests it received before it actually took down the video.

Back in the U.S., here is the relevant history: after Brandeis and Warren published their famous article on the right to privacy in 1890, state courts struggled with its application. In a New York state case in 1902, a court rejected the newly proposed right. In a second case, a Georgia state court in 1905 endorsed it.

What is striking is that both cases involved the use of a person’s image without their consent. In New York, it was a young girl, whose image was drawn and placed on an oatmeal box for advertising purposes. In Georgia, a man’s image was placed in a newspaper, without his consent, to sell insurance.

Also important is the fact that the New York judge who rejected the privacy claim suggested that the state assembly could simply pass a law to create the right. The New York legislature did exactly that: in 1903, New York enacted the first privacy law in the United States, protecting a person’s “name or likeness” against unauthorized commercial use.

The whole thing is worth reading.

EDITED TO ADD (3/18): A rebuttal.

Posted on March 9, 2010 at 12:36 PM

Google in The Onion

Funny:

MOUNTAIN VIEW, CA—Responding to recent public outcries over its handling of private data, search giant Google offered a wide-ranging and eerily well-informed apology to its millions of users Monday.

“We would like to extend our deepest apologies to each and every one of you,” announced CEO Eric Schmidt, speaking from the company’s Googleplex headquarters. “Clearly there have been some privacy concerns as of late, and judging by some of the search terms we’ve seen, along with the tens of thousands of personal e-mail exchanges and Google Chat conversations we’ve carefully examined, it looks as though it might be a while before we regain your trust.”

Google expressed regret to some of its third-generation Irish-American users on Smithwood between Barlow and Lake.

Added Schmidt, “Whether you’re Michael Paulson who lives at 3425 Longview Terrace and makes $86,400 a year, or Jessica Goldblatt from Lynnwood, WA, who already has well-established trust issues, we at Google would just like to say how very, truly sorry we are.”

Posted on March 8, 2010 at 2:24 PM

More Details on the Chinese Attack Against Google

Three weeks ago, Google announced a sophisticated attack against them from China. There have been some interesting technical details since then. And the NSA is helping Google analyze the attack.

The rumor that China used a system Google put in place to enable lawful intercepts, which I used as a news hook for this essay, has not been confirmed. At this point, I doubt that it’s true.

EDITED TO ADD (2/12): Good article.

Posted on February 8, 2010 at 6:03 AM

World's Largest Data Collector Teams Up With World's Largest Data Collector

Does anyone think this is a good idea?

Under an agreement that is still being finalized, the National Security Agency would help Google analyze a major corporate espionage attack that the firm said originated in China and targeted its computer networks, according to cybersecurity experts familiar with the matter. The objective is to better defend Google—and its users—from future attack.

EPIC has filed a Freedom of Information Act Request, asking for records pertaining to the partnership. That would certainly help, because otherwise we have no idea what’s actually going on.

I’ve already written about why the NSA should not be in charge of our nation’s cyber security.

Posted on February 5, 2010 at 6:02 AM

Google vs. China

I’m not sure what I can add to this: politically motivated attacks against Gmail from China. I’ve previously written about hacking from China. Shishir Nagaraja and Ross Anderson wrote a report specifically describing how the Chinese have been hacking groups that are politically opposed to them. I’ve previously written about censorship, Chinese and otherwise. I’ve previously written about broad government eavesdropping on the Internet, Chinese and otherwise. Seems that the Chinese got in through back doors installed to facilitate government eavesdropping, which I even talked about in my essay on eavesdropping. This new attack seems to be highly sophisticated, which is no surprise.

This isn’t a new story, and I wouldn’t have mentioned it at all if it weren’t for the surreal sentence at the bottom of this paragraph:

The Google-China flap has already reignited the debate over global censorship, reinvigorating human rights groups drawing attention to abuses in the country and prompting U.S. politicians to take a hard look at trade relations. The Obama administration issued statements of support for Google, and members of Congress are pushing to revive a bill banning U.S. tech companies from working with governments that digitally spy on their citizens.

Of course, the bill won’t go anywhere, but shouldn’t someone inform those members of Congress about what’s been going on in the United States for the past eight years?

In related news, Google has enabled https by default for Gmail users. In June 2009, I cosigned a letter to the CEO of Google asking for this change. It’s a good thing.

EDITED TO ADD (1/19): Commentary on Google’s bargaining position.

Posted on January 19, 2010 at 12:45 PM

My Reaction to Eric Schmidt

Schmidt said:

I think judgment matters. If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place. If you really need that kind of privacy, the reality is that search engines—including Google—do retain this information for some time and it’s important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities.

This, from 2006, is my response:

Privacy protects us from abuses by those in power, even if we’re doing nothing wrong at the time of surveillance.

We do nothing wrong when we make love or go to the bathroom. We are not deliberately hiding anything when we seek out private places for reflection or conversation. We keep private journals, sing in the privacy of the shower, and write letters to secret lovers and then burn them. Privacy is a basic human need.

[…]

For if we are observed in all matters, we are constantly under threat of correction, judgment, criticism, even plagiarism of our own uniqueness. We become children, fettered under watchful eyes, constantly fearful that—either now or in the uncertain future—patterns we leave behind will be brought back to implicate us, by whatever authority has now become focused upon our once-private and innocent acts. We lose our individuality, because everything we do is observable and recordable.

[…]

This is the loss of freedom we face when our privacy is taken from us. This is life in former East Germany, or life in Saddam Hussein’s Iraq. And it’s our future as we allow an ever-intrusive eye into our personal, private lives.

Too many wrongly characterize the debate as “security versus privacy.” The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of a police state. And that’s why we should champion privacy even when we have nothing to hide.

EDITED TO ADD: See also Daniel Solove’s “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy.”

Posted on December 9, 2009 at 12:22 PM

The Commercial Speech Arms Race

A few years ago, a company began to sell a liquid with identification codes suspended in it. The idea was that you would paint it on your stuff as proof of ownership. I commented that I would paint it on someone else’s stuff, then call the police.

I was reminded of this recently when a group of Israeli scientists demonstrated that it’s possible to fabricate DNA evidence. So now, instead of leaving your own DNA at a crime scene, you can leave fabricated DNA. And it isn’t even necessary to fabricate. In Charlie Stross’s novel Halting State, the bad guys foul a crime scene by blowing around the contents of a vacuum cleaner bag, containing the DNA of dozens, if not hundreds, of people.

This kind of thing has been going on forever. It’s an arms race, and when technology changes, the balance between attacker and defender changes. But when automated systems do the detecting, the results are different. Face recognition software can be fooled by cosmetic surgery, or sometimes even just a photograph. And when fooling them becomes harder, the bad guys fool them on a different level. Computer-based detection gives the defender economies of scale, but the attacker can use those same economies of scale to defeat the detection system.

Google, for example, has anti-fraud systems that detect—and shut down—advertisers who try to inflate their revenue by repeatedly clicking on their own AdSense ads. So people built bots to repeatedly click on the AdSense ads of their competitors, trying to convince Google to kick them out of the system.
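As a toy illustration of why such detection cuts both ways: a simple rate-based rule like the hypothetical one below (the threshold and log format are my assumptions, not Google’s actual system) flags whichever account receives the suspicious clicks, which is exactly why an attacker can aim bot clicks at a competitor’s ads and get the competitor banned.

```python
from collections import Counter

# Hypothetical rule: more than this many clicks on one account's ads
# from a single source IP looks like self-clicking fraud.
MAX_CLICKS_PER_IP = 10

def flag_suspicious_accounts(click_log):
    """click_log: iterable of (source_ip, account_id) pairs.
    Returns the set of accounts that any single IP clicked too often.
    Note the asymmetry: the rule cannot tell whether the account
    owner or a hostile competitor generated the clicks."""
    counts = Counter(click_log)
    return {account for (ip, account), n in counts.items()
            if n > MAX_CLICKS_PER_IP}
```

Run against a log where a bot hammered a rival’s ads, the rule punishes the rival, not the bot’s operator; that misattribution is the frame-up the essay describes.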

Similarly, when Google started penalizing a site’s search engine rankings for having “bad neighbors”—backlinks from link farms, adult or gambling sites, or blog spam—people engaged in sabotage: they built link farms and left blog comment spam linking to their competitors’ sites.

The same sort of thing is happening on Yahoo Answers. Initially, companies would leave answers pushing their products, but Yahoo started policing this. So people have written bots to report abuse on all their competitors. There are Facebook bots doing the same sort of thing.

Last month, Google introduced Sidewiki, a browser feature that lets you read and post comments on virtually any webpage. People and industries are already worried about the effects unrestrained commentary might have on their businesses, and how they might control the comments. I’m sure Google has sophisticated systems ready to detect commercial interests that try to take advantage of the system, but are they ready to deal with commercial interests that try to frame their competitors? And do we want to give one company the power to decide which comments should rise to the top and which get deleted?

Whenever you build a security system that relies on detection and identification, you invite the bad guys to subvert the system so it detects and identifies someone else. Sometimes this is hard—leaving someone else’s fingerprints at a crime scene is hard, as is using a mask of someone else’s face to fool a guard watching a security camera—and sometimes it’s easy. But when automated systems are involved, it’s often very easy. It’s not just hardened criminals who try to frame each other; mainstream commercial interests do it, too.

With systems that police internet comments and links, there’s money involved in commercial messages—so you can be sure some people will take advantage of it. This is the arms race. Build a detection system, and the bad guys try to frame someone else. Build a detection system to detect framing, and the bad guys try to frame someone else framing someone else. Build a detection system to detect framing of framing, and, well, there’s no end, really. Commercial speech is on the internet to stay; we can only hope that it doesn’t pollute the social systems we use so badly that they’re no longer useful.

This essay originally appeared in The Guardian.

Posted on October 16, 2009 at 8:56 AM
