Entries Tagged "Google"


Marc Rotenberg on Google's Italian Privacy Case

Interesting commentary:

I don’t think this is really a case about ISP liability at all. It is a case about the use of a person’s image, without their consent, that generates commercial value for someone else. That is the essence of the Italian law at issue in this case. It is also how the right of privacy was first established in the United States.

The video at the center of this case was very popular in Italy and drove lots of users to the Google Video site. This boosted advertising and support for other Google services. As a consequence, Google actually had an incentive not to respond to the many requests it received before it actually took down the video.

Back in the U.S., here is the relevant history: after Brandeis and Warren published their famous article on the right to privacy in 1890, state courts struggled with its application. In a New York state case in 1902, a court rejected the newly proposed right. In a second case, a Georgia state court in 1905 endorsed it.

What is striking is that both cases involved the use of a person’s image without their consent. In New York, it was a young girl, whose image was drawn and placed on an oatmeal box for advertising purposes. In Georgia, a man’s image was placed in a newspaper, without his consent, to sell insurance.

Also important is the fact that the New York judge who rejected the privacy claim suggested that the state assembly could simply pass a law to create the right. The New York legislature did exactly that, and in 1903 New York enacted the first privacy law in the United States, protecting a person’s “name or likeness” from unauthorized commercial use.

The whole thing is worth reading.

EDITED TO ADD (3/18): A rebuttal.

Posted on March 9, 2010 at 12:36 PM

Google in The Onion

Funny:

MOUNTAIN VIEW, CA—Responding to recent public outcries over its handling of private data, search giant Google offered a wide-ranging and eerily well-informed apology to its millions of users Monday.

“We would like to extend our deepest apologies to each and every one of you,” announced CEO Eric Schmidt, speaking from the company’s Googleplex headquarters. “Clearly there have been some privacy concerns as of late, and judging by some of the search terms we’ve seen, along with the tens of thousands of personal e-mail exchanges and Google Chat conversations we’ve carefully examined, it looks as though it might be a while before we regain your trust.”

Google expressed regret to some of its third-generation Irish-American users on Smithwood between Barlow and Lake.

Added Schmidt, “Whether you’re Michael Paulson who lives at 3425 Longview Terrace and makes $86,400 a year, or Jessica Goldblatt from Lynnwood, WA, who already has well-established trust issues, we at Google would just like to say how very, truly sorry we are.”

Posted on March 8, 2010 at 2:24 PM

More Details on the Chinese Attack Against Google

Three weeks ago, Google announced a sophisticated attack against them from China. Some interesting technical details have emerged since then. And the NSA is helping Google analyze the attack.

The rumor that China used a system Google put in place to enable lawful intercepts, which I used as a news hook for this essay, has not been confirmed. At this point, I doubt that it’s true.

EDITED TO ADD (2/12): Good article.

Posted on February 8, 2010 at 6:03 AM

World's Largest Data Collector Teams Up With World's Largest Data Collector

Does anyone think this is a good idea?

Under an agreement that is still being finalized, the National Security Agency would help Google analyze a major corporate espionage attack that the firm said originated in China and targeted its computer networks, according to cybersecurity experts familiar with the matter. The objective is to better defend Google—and its users—from future attack.

EPIC has filed a Freedom of Information Act Request, asking for records pertaining to the partnership. That would certainly help, because otherwise we have no idea what’s actually going on.

I’ve already written about why the NSA should not be in charge of our nation’s cyber security.

Posted on February 5, 2010 at 6:02 AM

Google vs. China

I’m not sure what I can add to this: politically motivated attacks against Gmail from China. I’ve previously written about hacking from China. Shishir Nagaraja and Ross Anderson wrote a report specifically describing how the Chinese have been hacking groups that are politically opposed to them. I’ve previously written about censorship, Chinese and otherwise. I’ve previously written about broad government eavesdropping on the Internet, Chinese and otherwise. Seems that the Chinese got in through back doors installed to facilitate government eavesdropping, which I even talked about in my essay on eavesdropping. This new attack seems to be highly sophisticated, which is no surprise.

This isn’t a new story, and I wouldn’t have mentioned it at all if it weren’t for the surreal sentence at the bottom of this paragraph:

The Google-China flap has already reignited the debate over global censorship, reinvigorating human rights groups drawing attention to abuses in the country and prompting U.S. politicians to take a hard look at trade relations. The Obama administration issued statements of support for Google, and members of Congress are pushing to revive a bill banning U.S. tech companies from working with governments that digitally spy on their citizens.

Of course, the bill won’t go anywhere, but shouldn’t someone inform those members of Congress about what’s been going on in the United States for the past eight years?

In related news, Google has enabled https by default for Gmail users. In June 2009, I cosigned a letter to the CEO of Google asking for this change. It’s a good thing.

EDITED TO ADD (1/19): Commentary on Google’s bargaining position.

Posted on January 19, 2010 at 12:45 PM

My Reaction to Eric Schmidt

Schmidt said:

I think judgment matters. If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place. If you really need that kind of privacy, the reality is that search engines—including Google—do retain this information for some time and it’s important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities.

This, from 2006, is my response:

Privacy protects us from abuses by those in power, even if we’re doing nothing wrong at the time of surveillance.

We do nothing wrong when we make love or go to the bathroom. We are not deliberately hiding anything when we seek out private places for reflection or conversation. We keep private journals, sing in the privacy of the shower, and write letters to secret lovers and then burn them. Privacy is a basic human need.

[…]

For if we are observed in all matters, we are constantly under threat of correction, judgment, criticism, even plagiarism of our own uniqueness. We become children, fettered under watchful eyes, constantly fearful that—either now or in the uncertain future—patterns we leave behind will be brought back to implicate us, by whatever authority has now become focused upon our once-private and innocent acts. We lose our individuality, because everything we do is observable and recordable.

[…]

This is the loss of freedom we face when our privacy is taken from us. This is life in former East Germany, or life in Saddam Hussein’s Iraq. And it’s our future as we allow an ever-intrusive eye into our personal, private lives.

Too many wrongly characterize the debate as “security versus privacy.” The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of a police state. And that’s why we should champion privacy even when we have nothing to hide.

EDITED TO ADD: See also Daniel Solove’s “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy.”

Posted on December 9, 2009 at 12:22 PM

The Commercial Speech Arms Race

A few years ago, a company began to sell a liquid with identification codes suspended in it. The idea was that you would paint it on your stuff as proof of ownership. I commented that I would paint it on someone else’s stuff, then call the police.

I was reminded of this recently when a group of Israeli scientists demonstrated that it’s possible to fabricate DNA evidence. So now, instead of leaving your own DNA at a crime scene, you can leave fabricated DNA. And it isn’t even necessary to fabricate. In Charlie Stross’s novel Halting State, the bad guys foul a crime scene by blowing around the contents of a vacuum cleaner bag, containing the DNA of dozens, if not hundreds, of people.

This kind of thing has been going on forever. It’s an arms race, and when technology changes, the balance between attacker and defender changes. But when automated systems do the detecting, the results are different. Face recognition software can be fooled by cosmetic surgery, or sometimes even just a photograph. And when fooling them becomes harder, the bad guys fool them on a different level. Computer-based detection gives the defender economies of scale, but the attacker can use those same economies of scale to defeat the detection system.

Google, for example, has anti-fraud systems that detect, and shut down, advertisers who try to inflate their revenue by repeatedly clicking on their own AdSense ads. So people built bots to repeatedly click on the AdSense ads of their competitors, trying to convince Google to kick them out of the system.
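
The dynamic can be sketched with a toy detector. The rule, threshold, and function names here are invented for illustration and bear no relation to Google’s actual system; the point is only that any rule keyed on observable behavior can be triggered against a victim:

```python
from collections import Counter

def flag_self_clickers(click_log, threshold=100):
    """Toy fraud rule: flag any ad receiving more than `threshold`
    clicks from a single IP address (illustrative numbers only)."""
    per_ad_ip = Counter((ad, ip) for ad, ip in click_log)
    return {ad for (ad, ip), n in per_ad_ip.items() if n > threshold}

# An attacker can trigger the same rule against a competitor's ad:
log = [("rival-ad", "10.0.0.9")] * 150   # bot clicks from one address
assert flag_self_clickers(log) == {"rival-ad"}
```

The detector cannot distinguish an advertiser inflating their own clicks from a bot framing a rival: both produce the identical click log.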

Similarly, when Google started penalizing a site’s search engine rankings for having “bad neighbors”—backlinks from link farms, adult or gambling sites, or blog spam—people engaged in sabotage: they built link farms and left blog comment spam linking to their competitors’ sites.

The same sort of thing is happening on Yahoo Answers. Initially, companies would leave answers pushing their products, but Yahoo started policing this. So people have written bots to report abuse on all their competitors. There are Facebook bots doing the same sort of thing.

Last month, Google introduced Sidewiki, a browser feature that lets you read and post comments on virtually any webpage. People and industries are already worried about the effects unrestrained commentary might have on their businesses, and how they might control the comments. I’m sure Google has sophisticated systems ready to detect commercial interests that try to take advantage of the system, but are they ready to deal with commercial interests that try to frame their competitors? And do we want to give one company the power to decide which comments should rise to the top and which get deleted?

Whenever you build a security system that relies on detection and identification, you invite the bad guys to subvert the system so it detects and identifies someone else. Sometimes this is hard: leaving someone else’s fingerprints at a crime scene is hard, as is using a mask of someone else’s face to fool a guard watching a security camera. And sometimes it’s easy. But when automated systems are involved, it’s often very easy. It’s not just hardened criminals who try to frame each other; it’s mainstream commercial interests.

With systems that police internet comments and links, there’s money involved in commercial messages, so you can be sure some will take advantage of it. This is the arms race. Build a detection system, and the bad guys try to frame someone else. Build a detection system to detect framing, and the bad guys try to frame someone else framing someone else. Build a detection system to detect framing of framing, and, well, there’s no end, really. Commercial speech is on the internet to stay; we can only hope that its purveyors don’t pollute the social systems we use so badly that they’re no longer useful.

This essay originally appeared in The Guardian.

Posted on October 16, 2009 at 8:56 AM

File Deletion

File deletion is all about control. This used not to be an issue. Your data was on your computer, and you decided when and how to delete a file. You could use the delete function if you didn’t care whether the file could be recovered, and a file-erase program—I use BCWipe for Windows—if you wanted to ensure no one could ever recover the file.
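
The difference between the two can be sketched as follows. This is a minimal illustration of what an erase tool does, not how BCWipe itself works; `wipe_file` is a hypothetical helper, and real tools must also contend with SSD wear-leveling, journaling file systems, and file metadata, which a simple overwrite does not reach:

```python
import os
import secrets

def wipe_file(path, passes=3):
    """Overwrite a file's contents with random bytes several times,
    forcing each pass to disk, before unlinking it. A plain delete
    would only remove the directory entry, leaving the data blocks
    recoverable."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())   # push the overwrite past OS caches
    os.remove(path)
```

The cloud problem in the paragraphs that follow is precisely that you cannot run anything like this on someone else’s servers.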

As we move more of our data onto cloud computing platforms such as Gmail and Facebook, and closed proprietary platforms such as the Kindle and the iPhone, deleting data is much harder.

You have to trust that these companies will delete your data when you ask them to, but they’re generally not interested in doing so. Sites like these are more likely to make your data inaccessible than they are to physically delete it. Facebook is a known culprit: actually deleting your data from its servers requires a complicated procedure that may or may not work. And even if you do manage to delete your data, copies are certain to remain in the companies’ backup systems. Gmail explicitly says this in its privacy notice.

Online backups, SMS messages, photos on photo sharing sites, smartphone applications that store your data in the network: you have no idea what really happens when you delete pieces of data or your entire account, because you’re not in control of the computers that are storing the data.

This notion of control also explains how Amazon was able to delete a book that people had previously purchased on their Kindle e-book readers. The legalities are debatable, but Amazon had the technical ability to delete the file because it controls all Kindles. It has designed the Kindle so that it determines when to update the software, whether people are allowed to buy Kindle books, and when to turn off people’s Kindles entirely.

Vanish is a research project by Roxana Geambasu and colleagues at the University of Washington. They designed a prototype system that automatically deletes data after a set time interval. So you can send an email, create a Google Doc, post an update to Facebook, or upload a photo to Flickr, all designed to disappear after a set period of time. And after it disappears, no one—not anyone who downloaded the data, not the site that hosted the data, not anyone who intercepted the data in transit, not even you—will be able to read it. If the police arrive at Facebook or Google or Flickr with a warrant, they won’t be able to read it.

The details are complicated, but Vanish breaks the data’s decryption key into a bunch of pieces and scatters them around the web using a peer-to-peer network. Then it uses the natural turnover in these networks—machines constantly join and leave—to make the data disappear. Unlike previous programs that supported file deletion, this one doesn’t require you to trust any company, organisation, or website. It just happens.
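
The key-splitting step can be sketched with an all-or-nothing XOR split. Vanish itself uses Shamir threshold secret sharing, which tolerates losing some shares to network churn before the key becomes unrecoverable; this simpler scheme, with illustrative function names, just shows the core idea:

```python
import secrets
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key, n):
    """Split `key` into n XOR shares; every share is needed to
    rebuild it, and any n-1 shares reveal nothing about the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))
    return shares

def combine(shares):
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(16)
shares = split_key(key, 8)       # scatter these across the P2P network
assert combine(shares) == key    # all shares present: key recovered
# once churn erases even one share, the ciphertext is unreadable forever
```

Scattering the shares across a peer-to-peer network then makes node turnover, rather than any cooperating party, the deletion mechanism.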

Of course, Vanish doesn’t prevent the recipient of an email or the reader of a Facebook page from copying the data and pasting it into another file, just as Kindle’s deletion feature doesn’t prevent people from copying a book’s files and saving them on their computers. Vanish is just a prototype at this point, and it only works if all the people who read your Facebook entries or view your Flickr pictures have it installed on their computers as well; but it’s a good demonstration of how control affects file deletion. And while it’s a step in the right direction, it’s also new and therefore deserves further security analysis before being adopted on a wide scale.

We’ve lost the control of data on some of the computers we own, and we’ve lost control of our data in the cloud. We’re not going to stop using Facebook and Twitter just because they’re not going to delete our data when we ask them to, and we’re not going to stop using Kindles and iPhones because they may delete our data when we don’t want them to. But we need to take back control of data in the cloud, and projects like Vanish show us how we can.

Now we need something that will protect our data when a large corporation decides to delete it.

This essay originally appeared in The Guardian.

EDITED TO ADD (9/30): Vanish has been broken, paper here.

Posted on September 10, 2009 at 6:08 AM

Making an Operating System Virus Free

Commenting on Google’s claim that Chrome was designed to be virus-free, I said:

Bruce Schneier, the chief security technology officer at BT, scoffed at Google’s promise. “It’s an idiotic claim,” Schneier wrote in an e-mail. “It was mathematically proved decades ago that it is impossible—not an engineering impossibility, not technologically impossible, but the 2+2=3 kind of impossible—to create an operating system that is immune to viruses.”

What I was referring to, although I couldn’t think of his name at the time, was Fred Cohen’s 1986 Ph.D. thesis where he proved that it was impossible to create a virus-checking program that was perfect. That is, it is always possible to write a virus that any virus-checking program will not detect.
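
Cohen’s diagonalization argument can be illustrated with a toy model in which programs are Python functions and a detector is any function over them. This is a loose sketch of the shape of the proof, not the proof itself, which works over program text:

```python
def build_contrarian(detect):
    """Given any claimed-perfect detector, build a program that
    misbehaves exactly when the detector declares it clean."""
    def contrarian():
        if detect(contrarian):
            return "harmless"    # detector flagged us: do nothing bad
        return "spread!"         # detector cleared us: act like a virus
    return contrarian

def naive_detect(program):
    return False                 # any fixed verdict is wrong here

p = build_contrarian(naive_detect)
# The detector calls p clean, yet running p "spreads" -- so the
# detector is wrong. Flipping its verdict makes it wrong the other way.
assert naive_detect(p) is False and p() == "spread!"
```

Whatever the detector answers about the contrarian program, the program does the opposite, so no detector classifies every program correctly.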

This reaction to my comment is accurate:

That seems to us like he’s picking on the semantics of Google’s statement just a bit. Google says that users “won’t have to deal with viruses,” and Schneier is noting that it’s simply not possible to create an OS that can’t be taken down by malware. While that may be the case, it’s likely that Chrome OS is going to be arguably more secure than the other consumer operating systems currently in use today. In fact, we didn’t take Google’s statement to mean that Chrome OS couldn’t get a virus EVER; we just figured they meant it was a lot harder to get one on their new OS – didn’t you?

When I said that, I had not seen Google’s statement; I was responding to what the reporter was telling me on the phone. So yes, I jumped on the reporter’s claim about Google’s claim. I did try to temper my comment:

Redesigning an operating system from scratch, “[taking] security into account all the way up and down,” could make for a more secure OS than ones that have been developed so far, Schneier said. But that’s different from Google’s promise that users won’t have to deal with viruses or malware, he added.

To summarize, there is a lot that can be done in an OS to reduce the threat of viruses and other malware. If the Chrome team started from scratch and took security seriously throughout the design and development process, they have the potential to develop something really secure. But I don’t know if they did.

Posted on July 10, 2009 at 9:44 AM
