Entries Tagged "Facebook"

Mass Surveillance Silences Minority Opinions

Research paper: Elizabeth Stoycheff, “Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring”:

Abstract: Since Edward Snowden exposed the National Security Agency’s use of controversial online surveillance programs in 2013, there has been widespread speculation about the potentially deleterious effects of online government monitoring. This study explores how perceptions and justification of surveillance practices may create a chilling effect on democratic discourse by stifling the expression of minority political views. Using a spiral of silence theoretical framework, knowing one is subject to surveillance and accepting such surveillance as necessary act as moderating agents in the relationship between one’s perceived climate of opinion and willingness to voice opinions online. Theoretical and normative implications are discussed.

No surprise, and something I wrote about in Data and Goliath:

Across the US, states are on the verge of reversing decades-old laws about homosexual relationships and marijuana use. If the old laws could have been perfectly enforced through surveillance, society would never have reached the point where the majority of citizens thought those things were okay. There has to be a period where they are still illegal yet increasingly tolerated, so that people can look around and say, “You know, that wasn’t so bad.” Yes, the process takes decades, but it’s a process that can’t happen without lawbreaking. Frank Zappa said something similar in 1971: “Without deviation from the norm, progress is not possible.”

The perfect enforcement that comes with ubiquitous government surveillance chills this process. We need imperfect security—systems that free people to try new things, much the way off-the-record brainstorming sessions loosen inhibitions and foster creativity. If we don’t have that, we can’t slowly move from a thing’s being illegal and not okay, to illegal and not sure, to illegal and probably okay, and finally to legal.

This is an important point. Freedoms we now take for granted were often at one time viewed as threatening or even criminal by the power structure of the day. Those changes might never have happened if the authorities had been able to achieve social control through surveillance.

This is one of the main reasons all of us should care about the emerging architecture of surveillance, even if we are not personally chilled by its existence. We suffer the effects because people around us will be less likely to proclaim new political or social ideas, or act out of the ordinary. If J. Edgar Hoover’s surveillance of Martin Luther King Jr. had been successful in silencing him, it would have affected far more people than King and his family.

Slashdot thread.

EDITED TO ADD (4/6): News article.

Posted on March 29, 2016 at 12:58 PM

Possible Government Demand for WhatsApp Backdoor

The New York Times is reporting that WhatsApp, and its parent company Facebook, may be headed to court over encrypted chat data that the FBI can’t decrypt.

This case is fundamentally different from the Apple iPhone case. In that case, the FBI is demanding that Apple create a hacking tool to exploit an already existing vulnerability in the iPhone 5c, because they want to get at stored data on a phone that they have in their possession. In the WhatsApp case, chat data is end-to-end encrypted, and there is nothing the company can do to assist the FBI in reading already encrypted messages. This case would be about forcing WhatsApp to make an engineering change in the security of its software to create a new vulnerability—one that they would be forced to push onto the user’s device to allow the FBI to eavesdrop on future communications. This is a much further reach for the FBI, but potentially a reasonable additional step if they win the Apple case.
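To see why the company has so little to hand over, it helps to sketch the end-to-end model. The following is a minimal illustration using the PyNaCl library, with made-up keys and messages; it is not WhatsApp’s actual Signal-protocol implementation, only the underlying principle: private keys live on the endpoints, and the relaying server only ever sees ciphertext.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustrative only; WhatsApp actually uses the Signal protocol.
from nacl.public import PrivateKey, Box

# Each device generates its own key pair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only public keys are exchanged (e.g., through the service's key directory).
sender_box = Box(alice_private, bob_private.public_key)

# The sender encrypts on the device; the service merely relays this ciphertext.
ciphertext = sender_box.encrypt(b"meet at noon")

# Only Bob, holding bob_private, can recover the plaintext.
receiver_box = Box(bob_private, alice_private.public_key)
print(receiver_box.decrypt(ciphertext))  # b'meet at noon'
```

Because the server never holds the private keys, giving the FBI access to future messages would mean shipping modified client software to the target's device, which is exactly the engineering change described above.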

And once the US demands this, other countries will demand it as well. Note that the government of Brazil has arrested a Facebook employee because WhatsApp is secure.

We live in scary times when our governments want us to reduce our own security.

EDITED TO ADD (3/15): More commentary.

Posted on March 15, 2016 at 6:17 AM

DEA Sets Up Fake Facebook Page in Woman's Name

This is a creepy story. A woman has her phone seized by the Drug Enforcement Administration and gives them permission to look at it. Without her knowledge or consent, they steal photos off the phone (the article says they were “racy”) and use them to set up a fake Facebook page in her name.

The woman sued the government over this. Extra creepy was the government’s defense in court: “Defendants admit that Plaintiff did not give express permission for the use of photographs contained on her phone on an undercover Facebook page, but state the Plaintiff implicitly consented by granting access to the information stored in her cell phone and by consenting to the use of that information to aid in an ongoing criminal investigations [sic].”

The article was edited to say: “Update: Facebook has removed the page and the Justice Department said it is reviewing the incident.” So maybe this is just an overzealous agent and not official DEA policy.

But as Marcy Wheeler said, this is a good reason to encrypt your cell phone.

Posted on October 15, 2014 at 7:06 AM

The Continuing Public/Private Surveillance Partnership

If you’ve been reading the news recently, you might think that corporate America is doing its best to thwart NSA surveillance.

Google just announced that it is encrypting Gmail when you access it from your computer or phone, and between data centers. Last week, Mark Zuckerberg personally called President Obama to complain about the NSA using Facebook as a means to hack computers, and Facebook’s Chief Security Officer explained to reporters that the attack technique has not worked since last summer. Yahoo, Google, Microsoft, and others are now regularly publishing “transparency reports,” listing approximately how many government data requests the companies have received and complied with.

On the government side, last week the NSA’s General Counsel Rajesh De seemed to have thrown those companies under a bus by stating that—despite their denials—they knew all about the NSA’s collection of data under both the PRISM program and some unnamed “upstream” collections on the communications links.

Yes, it may seem like the public/private surveillance partnership has frayed—but, unfortunately, it is alive and well. The main interests of massive Internet companies and government agencies still largely align: to keep us all under constant surveillance. When they bicker, it’s mostly role-playing designed to keep us blasé about what’s really going on.

The U.S. intelligence community is still playing word games with us. The NSA collects our data based on four different legal authorities: the Foreign Intelligence Surveillance Act (FISA) of 1978, Executive Order 12333 of 1981 (amended in 2004 and 2008), Section 215 of the Patriot Act of 2001, and Section 702 of the FISA Amendments Act (FAA) of 2008. Be careful when someone from the intelligence community uses the caveat “not under this program” or “not under this authority”; almost certainly it means that whatever it is they’re denying is done under some other program or authority. So when De said that companies knew about NSA collection under Section 702, it doesn’t mean they knew about the other collection programs.

The big Internet companies know of PRISM—although not under that code name—because that’s how the program works; the NSA serves them with FISA orders. Those same companies did not know about any of the other surveillance against their users conducted under the far more permissive EO 12333. Google and Yahoo did not know about MUSCULAR, the NSA’s secret program to eavesdrop on their trunk connections between data centers. Facebook did not know about QUANTUMHAND, the NSA’s secret program to attack Facebook users. And none of the target companies knew that the NSA was harvesting their users’ address books and buddy lists.

These companies are certainly pissed that the publicity surrounding the NSA’s actions is undermining their users’ trust in their services, and they’re losing money because of it. Cisco, IBM, cloud service providers, and others have announced that they’re losing billions, mostly in foreign sales.

These companies are doing their best to convince users that their data is secure. But they’re relying on their users not understanding what real security looks like. IBM’s letter to its clients last week is an excellent example. The letter lists five "simple facts" that it hopes will mollify its customers, but the items are so qualified with caveats that they do the exact opposite to anyone who understands the full extent of NSA surveillance. And IBM’s spending $1.2B on data centers outside the U.S. will only reassure customers who don’t realize that National Security Letters require a company to turn over data, regardless of where in the world it is stored.

Google’s recent actions, and similar actions by many Internet companies, will definitely improve its users’ security against surreptitious government collection programs—both the NSA’s and other governments’—but these assurances deliberately ignore the massive security vulnerability built into these services by design. Google, and by extension, the U.S. government, still has access to your communications on Google’s servers.

Google could change that. It could encrypt your e-mail so only you could decrypt and read it. It could provide for secure voice and video so no one outside the conversations could eavesdrop.

It doesn’t. And neither does Microsoft, Facebook, Yahoo, Apple, or any of the others.

Why not? In part, because they want to keep the ability to eavesdrop on your conversations. Surveillance is still the business model of the Internet, and every one of those companies wants access to your communications and your metadata. Your private thoughts and conversations are the product they sell to their customers. We have also learned that they read your e-mail for their own internal investigations.

But even if this were not true, even if—for example—Google were willing to forgo data mining your e-mail and video conversations in exchange for the marketing advantage it would give it over Microsoft, it still won’t offer you real security. It can’t.

The biggest Internet companies don’t offer real security because the U.S. government won’t permit it.

This isn’t paranoia. We know that the U.S. government ordered the secure e-mail provider Lavabit to turn over its master keys and compromise every one of its users. We know that the U.S. government convinced Microsoft—either through bribery, coercion, threat, or legal compulsion—to make changes in how Skype operates, to make eavesdropping easier.

We don’t know what sort of pressure the U.S. government has put on Google and the others. We don’t know what secret agreements those companies have reached with the NSA. We do know the NSA’s BULLRUN program to subvert Internet cryptography was successful against many common protocols. Did the NSA demand Google’s keys, as it did with Lavabit? Did its Tailored Access Operations group break into Google’s servers and steal the keys?

We just don’t know.

The best we have are caveat-laden pseudo-assurances. At SXSW earlier this month, Google executive chairman Eric Schmidt tried to reassure the audience by saying that he was “pretty sure that information within Google is now safe from any government’s prying eyes.” A more accurate statement might be, “Your data is safe from governments, except for the ways we don’t know about and the ways we cannot tell you about. And, of course, we still have complete access to it all, and can sell it at will to whomever we want.” That’s a lousy marketing pitch, but as long as the NSA is allowed to operate using secret court orders based on secret interpretations of secret law, it’ll never be any different.

Google, Facebook, Microsoft, and the others are already on the record as supporting legislative changes that would rein in this kind of surveillance. It would be better if they openly acknowledged their users’ insecurity and increased their pressure on the government to change, rather than trying to fool their users and customers.

This essay previously appeared on TheAtlantic.com.

Posted on March 31, 2014 at 9:18 AM

Automatic Face-Recognition Software Getting Better

Facebook has developed a face-recognition system that works almost as well as the human brain:

Asked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook can score 97.25 percent on the same challenge, regardless of variations in lighting or whether the person in the picture is directly facing the camera.

Human brains are optimized for facial recognition, which makes this even more impressive.

This kind of technology will change video surveillance. Right now, cameras are general-purpose, and identifying the people they record is largely a forensic, after-the-fact activity. Software like this will make cameras part of an automated process for identifying people.
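As a rough illustration of what automated face matching looks like in practice, here is a sketch using the open-source face_recognition Python library, not Facebook’s DeepFace system; the image file names are placeholders.

```python
# Sketch: automated "same person?" check with the open-source face_recognition
# library (pip install face_recognition). Not Facebook's DeepFace model.
import face_recognition

# Placeholder photos of two faces to compare.
known_image = face_recognition.load_image_file("person_a.jpg")
unknown_image = face_recognition.load_image_file("person_b.jpg")

# Each detected face is reduced to a 128-dimensional encoding.
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Faces "match" if their encodings are close enough (default tolerance 0.6).
match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
print(f"same person: {match} (distance {distance:.2f})")
```

Run the same comparison against every face in a video feed and a database of stored encodings, and it becomes the automated identification pipeline described above.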

Posted on March 20, 2014 at 7:12 AM

Building an Online Lie Detector

There’s an interesting project to detect false rumors on the Internet.

The EU-funded project aims to classify online rumours into four types: speculation—such as whether interest rates might rise; controversy—as over the MMR vaccine; misinformation, where something untrue is spread unwittingly; and disinformation, where it’s done with malicious intent.

The system will also automatically categorise sources to assess their authority, such as news outlets, individual journalists, experts, potential eye witnesses, members of the public or automated ‘bots’. It will also look for a history and background, to help spot where Twitter accounts have been created purely to spread false information.

It will search for sources that corroborate or deny the information, and plot how the conversations on social networks evolve, using all of this information to assess whether it is true or false. The results will be displayed to the user in a visual dashboard, to enable them to easily see whether a rumour is taking hold.

I have no idea how well it will work, or even whether it will work, but I like research in this direction. Of the three primary Internet mechanisms for social control, surveillance and censorship have received a lot more attention than propaganda. Anything that can potentially detect propaganda is a good thing.

Three news articles.

Posted on February 21, 2014 at 8:34 AM

Surveillance as a Business Model

Google recently announced that it would start including individual users’ names and photos in some ads. This means that if you rate some product positively, your friends may see ads for that product with your name and photo attached—without your knowledge or consent. Meanwhile, Facebook is eliminating a feature that allowed people to retain some portions of their anonymity on its website.

These changes come on the heels of Google’s move to explore replacing tracking cookies with something that users have even less control over. Microsoft is doing something similar by developing its own tracking technology.

More generally, lots of companies are evading the “Do Not Track” rules, meant to give users a say in whether companies track them. Turns out the whole “Do Not Track” legislation has been a sham.

It shouldn’t come as a surprise that big technology companies are tracking us on the Internet even more aggressively than before.

If these features don’t sound particularly beneficial to you, it’s because you’re not the customer of any of these companies. You’re the product, and you’re being improved for their actual customers: their advertisers.

This is nothing new. For years, these sites and others have systematically improved their “product” by reducing user privacy. This excellent infographic, for example, illustrates how Facebook has done so over the years.

The “Do Not Track” law serves as a sterling example of how bad things are. When it was proposed, it was supposed to give users the right to demand that Internet companies not track them. Internet companies fought hard against the law, and when it was passed, they fought to ensure that it didn’t have any benefit to users. Right now, complying is entirely voluntary, meaning that no Internet company has to follow the law. If a company does, because it wants the PR benefit of seeming to take user privacy seriously, it can still track its users.

Really: if you tell a “Do Not Track”-enabled company that you don’t want to be tracked, it will stop showing you personalized ads. But your activity will be tracked—and your personal information collected, sold and used—just like everyone else’s. It’s best to think of it as a “track me in secret” law.
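Mechanically, "Do Not Track" is nothing more than an HTTP request header; whether a site honors it is entirely up to the site. A minimal sketch (the URL is a placeholder):

```python
# "Do Not Track" is just a request header; honoring it is voluntary.
import requests

# The client politely asks not to be tracked...
response = requests.get("https://example.com/", headers={"DNT": "1"})

# ...but nothing in the protocol stops the server from logging the request,
# setting tracking cookies, or selling the data anyway.
print(response.status_code, response.headers.get("Set-Cookie"))
```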

Of course, people don’t think of it that way. Most people aren’t fully aware of how much of their data is collected by these sites. And, as the “Do Not Track” story illustrates, Internet companies are doing their best to keep it that way.

The result is a world where our most intimate personal details are collected and stored. I used to say that Google has a more intimate picture of what I’m thinking of than my wife does. But that’s not far enough: Google has a more intimate picture than I do. The company knows exactly what I am thinking about, how much I am thinking about it, and when I stop thinking about it: all from my Google searches. And it remembers all of that forever.

As the Edward Snowden revelations continue to expose the full extent of the National Security Agency’s eavesdropping on the Internet, it has become increasingly obvious how much of that has been enabled by the corporate world’s existing eavesdropping on the Internet.

The public/private surveillance partnership is fraying, but it’s largely alive and well. The NSA didn’t build its eavesdropping system from scratch; it got itself a copy of what the corporate world was already collecting.

There are a lot of reasons why Internet surveillance is so prevalent and pervasive.

One, users like free things, and don’t realize how much value they’re giving away to get them. We know that “free” is a special price that confuses people’s thinking.

Google’s 2013 third quarter profits were nearly $3 billion; that profit is the difference between how much our privacy is worth and the cost of the services we receive in exchange for it.

Two, Internet companies deliberately make privacy not salient. When you log onto Facebook, you don’t think about how much personal information you’re revealing to the company; you’re chatting with your friends. When you wake up in the morning, you don’t think about how you’re going to allow a bunch of companies to track you throughout the day; you just put your cell phone in your pocket.

And three, the Internet’s winner-takes-all market means that privacy-preserving alternatives have trouble getting off the ground. How many of you know that there is a Google alternative called DuckDuckGo that doesn’t track you? Or that you can use cut-out sites to anonymize your Google queries? I have opted out of Facebook, and I know it affects my social life.

There are two types of changes that need to happen in order to fix this. First, there’s the market change. We need to become actual customers of these sites so we can use purchasing power to force them to take our privacy seriously. But that’s not enough. Because of the market failures surrounding privacy, a second change is needed. We need government regulations that protect our privacy by limiting what these sites can do with our data.

Surveillance is the business model of the Internet—Al Gore recently called it a “stalker economy.” All major websites run on advertising, and the more personal and targeted that advertising is, the more revenue the site gets for it. As long as we users remain the product, there is minimal incentive for these companies to provide any real privacy.

This essay previously appeared on CNN.com.

Posted on November 25, 2013 at 6:53 AM
