Entries Tagged "Internet and society"

The NSA is Commandeering the Internet

It turns out that the NSA’s domestic and world-wide surveillance apparatus is even more extensive than we thought. Bluntly: The government has commandeered the Internet. Most of the largest Internet companies provide information to the NSA, betraying their users. Some, as we’ve learned, fight and lose. Others cooperate, either out of patriotism or because they believe it’s easier that way.

I have one message to the executives of those companies: fight.

Do you remember those old spy movies, when the higher ups in government decide that the mission is more important than the spy’s life? It’s going to be the same way with you. You might think that your friendly relationship with the government means that they’re going to protect you, but they won’t. The NSA doesn’t care about you or your customers, and will burn you the moment it’s convenient to do so.

We’re already starting to see that. Google, Yahoo, Microsoft and others are pleading with the government to allow them to explain details of what information they provided in response to National Security Letters and other government demands. They’ve lost the trust of their customers, and explaining what they do—and don’t do—is how to get it back. The government has refused; they don’t care.

It will be the same with you. There are lots more high-tech companies who have cooperated with the government. Most of those company names are somewhere in the thousands of documents that Edward Snowden took with him, and sooner or later they’ll be released to the public. The NSA probably told you that your cooperation would forever remain secret, but they’re sloppy. They’ll put your company name on presentations delivered to thousands of people: government employees, contractors, probably even foreign nationals. If Snowden doesn’t have a copy, the next whistleblower will.

This is why you have to fight. When it becomes public that the NSA has been hoovering up all of your users’ communications and personal files, what’s going to save you in the eyes of those users is whether or not you fought. Fighting will cost you money in the short term, but capitulating will cost you more in the long term.

Already companies are taking their data and communications out of the US.

The extreme case of fighting is shutting down entirely. The secure e-mail service Lavabit did that last week, abruptly. Ladar Levison, that site’s owner, wrote on his homepage: “I have been forced to make a difficult decision: to become complicit in crimes against the American people or walk away from nearly ten years of hard work by shutting down Lavabit. After significant soul searching, I have decided to suspend operations. I wish that I could legally share with you the events that led to my decision.”

The same day, Silent Circle followed suit, shutting down their e-mail service in advance of any government strong-arm tactics: “We see the writing on the wall, and we have decided that it is best for us to shut down Silent Mail now. We have not received subpoenas, warrants, security letters, or anything else by any government, and this is why we are acting now.” I realize that this is extreme. Both of those companies can do it because they’re small. Google or Facebook couldn’t possibly shut themselves off rather than cooperate with the government. They’re too large; they’re public. They have to do what’s economically rational, not what’s moral.

But they can fight. You, an executive in one of those companies, can fight. You’ll probably lose, but you need to take the stand. And you might win. It’s time we called the government’s actions what they really are: commandeering. Commandeering is a practice we’re used to in wartime, where commercial ships are taken for military use, or production lines are converted to military production. But now it’s happening in peacetime. Vast swaths of the Internet are being commandeered to support this surveillance state.

If this is happening to your company, do what you can to isolate the actions. Do you have employees with security clearances who can’t tell you what they’re doing? Cut off all automatic lines of communication with them, and make sure that only specific, required, authorized acts are being taken on behalf of government. Only then can you look your customers and the public in the face and say that you don’t know what is going on—that your company has been commandeered.

Journalism professor Jeff Jarvis recently wrote in the Guardian: “Technology companies: now is the moment when you must answer for us, your users, whether you are collaborators in the US government’s efforts to ‘collect it all’—our every move on the internet—or whether you, too, are victims of its overreach.”

So while I’m sure it’s cool to have a secret White House meeting with President Obama—I’m talking to you, Google, Apple, AT&T, and whoever else was in the room—resist. Attend the meeting, but fight the secrecy. Whose side are you on?

The NSA isn’t going to remain above the law forever. Already public opinion is changing, against the government and their corporate collaborators. If you want to keep your users’ trust, demonstrate that you were on their side.

This essay originally appeared on TheAtlantic.com.

Slashdot thread. And a good interview with Lavabit’s founder.

Posted on August 15, 2013 at 6:10 AM

Another Perspective on the Value of Privacy

A philosophical perspective:

But while Descartes’s overall view has been rightly rejected, there is something profoundly right about the connection between privacy and the self, something that recent events should cause us to appreciate. What is right about it, in my view, is that to be an autonomous person is to be capable of having privileged access (in the two senses defined above) to information about your psychological profile: your hopes, dreams, beliefs and fears. A capacity for privacy is a necessary condition of autonomous personhood.

To get a sense of what I mean, imagine that I could telepathically read all your conscious and unconscious thoughts and feelings—I could know about them in as much detail as you know about them yourself—and further, that you could not, in any way, control my access. You don’t, in other words, share your thoughts with me; I take them. The power I would have over you would of course be immense. Not only could you not hide from me, I would know instantly a great amount about how the outside world affects you, what scares you, what makes you act in the ways you do. And that means I could not only know what you think, I could to a large extent control what you do.

That is the political worry about the loss of privacy: it threatens a loss of freedom. And the worry, of course, is not merely theoretical. Targeted ad programs, like Google’s, which track your Internet searches for the purpose of sending you ads that reflect your interests, can create deeply complex psychological profiles—especially when one conducts searches for emotional or personal advice information: Am I gay? What is terrorism? What is atheism? If the government or some entity should request the identity of the person making these searches for national security purposes, we’d be on the way to having a real-world version of our thought experiment.

But the loss of privacy doesn’t just threaten political freedom. Return for a moment to our thought experiment where I telepathically know all your thoughts whether you like it or not. From my perspective—the perspective of the knower—your existence as a distinct person would begin to shrink. Our relationship would be so lopsided that there might cease to be, at least to me, anything subjective about you. As I learn what reactions you will have to stimuli, why you do what you do, you will become like any other object to be manipulated. You would be, as we say, dehumanized.

Posted on July 9, 2013 at 6:24 AM

Pre-9/11 NSA Thinking

This quote is from the Spring 1997 issue of CRYPTOLOG, the internal NSA newsletter. The writer is William J. Black, Jr., the Director’s Special Assistant for Information Warfare.

Specifically, the focus is on the potential abuse of the Government’s applications of this new information technology that will result in an invasion of personal privacy. For us, this is difficult to understand. We are “the government,” and we have no interest in invading the personal privacy of U.S. citizens.

This is from a Seymour Hersh New Yorker interview with NSA Director General Michael Hayden in 1999:

When I asked Hayden about the agency’s capability for unwarranted spying on private citizens—in the unlikely event, of course, that the agency could somehow get the funding, the computer scientists, and the knowledge to begin making sense out of the Internet—his response was heated. “I’m a kid from Pittsburgh with two sons and a daughter who are closet libertarians,” he said. “I am not interested in doing anything that threatens the American people, and threatens the future of this agency. I can’t emphasize enough to you how careful we are. We have to be so careful—to make sure that America is never distrustful of the power and security we can provide.”

It’s easy to assume that both Black and Hayden were lying, but I believe them. I believe that, 15 years ago, the NSA was entirely focused on intercepting communications outside the US.

What changed? What caused the NSA to abandon its non-US charter and start spying on Americans? From what I’ve read, and from a bunch of informal conversations with NSA employees, it was the 9/11 terrorist attacks. That’s when everything changed, the gloves came off, and all the rules were thrown out the window. That the NSA’s interests coincided with the business model of the Internet is just a—lucky, in their view—coincidence.

Posted on June 27, 2013 at 11:49 AM

Finding Sociopaths on Facebook

On his blog, Scott Adams suggests that it might be possible to identify sociopaths based on their interactions on social media.

My hypothesis is that science will someday be able to identify sociopaths and terrorists by their patterns of Facebook and Internet use. I’ll bet normal people interact with Facebook in ways that sociopaths and terrorists couldn’t duplicate.

Anyone can post fake photos and acquire lots of friends who are actually acquaintances. But I’ll bet there are so many patterns and tendencies of “normal” use on Facebook that a terrorist wouldn’t be able to successfully fake it.

Okay, but so what? Imagine you had such an amazingly accurate test…then what? Do we investigate those who test positive, even though there’s no suspicion that they’ve actually done anything? Do we follow them around? Subject them to additional screening at airports? Throw them in jail because we know the streets will be safer because of it? Do we want to live in a Minority Report world?

The problem isn’t just that such a system is wrong, it’s that the mathematics of testing makes this sort of thing pretty ineffective in practice. It’s called the “base rate fallacy.” Suppose you have a test that’s 90% accurate in identifying both sociopaths and non-sociopaths. If you assume that 4% of people are sociopaths, then the chance of someone who tests positive actually being a sociopath is only about 27%. (For every thousand people tested, 90% of the 40 sociopaths will test positive, but so will 10% of the 960 non-sociopaths.) You have to postulate a test with an amazing 99% accuracy—only a 1% false positive rate—even to have an 80% chance of someone testing positive actually being a sociopath.
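The base-rate arithmetic is easy to check with a few lines of Python. This is a minimal sketch: the function name is mine, and “accuracy” is taken to mean both the true-positive rate (sensitivity) and the true-negative rate (specificity), as the essay's example does.

```python
# Base rate fallacy: a "90% accurate" test performs poorly
# when the condition it detects is rare.

def prob_positive_is_real(base_rate, sensitivity, specificity):
    """P(condition | positive test), via Bayes' theorem."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# 90%-accurate test, 4% of the population are sociopaths:
print(round(prob_positive_is_real(0.04, 0.90, 0.90), 2))  # → 0.27

# Even at 99% accuracy, a positive result is only ~80% reliable:
print(round(prob_positive_is_real(0.04, 0.99, 0.99), 2))  # → 0.8
```

Per thousand people, the 90% test flags 36 of the 40 sociopaths but also 96 of the 960 non-sociopaths, so only 36 of 132 positives are real.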

This fallacy isn’t new. It’s the same thinking that caused us to intern Japanese-Americans during World War II, stop people in their cars because they’re black, and frisk them at airports because they’re Muslim. It’s the same thinking behind massive NSA surveillance programs like PRISM. It’s one of the things that scares me about police DNA databases.

Many authors have written stories about thoughtcrime. Who has written about genecrime?

BTW, if you want to meet an actual sociopath, I recommend this book (review here) and this blog.

Posted on June 19, 2013 at 11:19 AM

Blowback from the NSA Surveillance

There’s one piece of blowback that isn’t being discussed—aside from the fact that Snowden has killed the chances of any liberal arts major getting a DoD job for at least a decade—and that’s how the massive NSA surveillance of the Internet affects the US’s role in Internet governance.

Ron Deibert makes this point:

But there are unintended consequences of the NSA scandal that will undermine U.S. foreign policy interests—in particular, the “Internet Freedom” agenda espoused by the U.S. State Department and its allies.

The revelations that have emerged will undoubtedly trigger a reaction abroad as policymakers and ordinary users realize the huge disadvantages of their dependence on U.S.-controlled networks in social media, cloud computing, and telecommunications, and of the formidable resources that are deployed by U.S. national security agencies to mine and monitor those networks.

Writing about the new Internet nationalism, I talked about the ITU meeting in Dubai last fall, and the attempt of some countries to wrest control of the Internet from the US. That movement just got a huge PR boost. Now, when countries like Russia and Iran say the US is simply too untrustworthy to manage the Internet, no one will be able to argue.

We can’t fight for Internet freedom around the world, then turn around and destroy it back home. Even if we don’t see the contradiction, the rest of the world does.

Posted on June 17, 2013 at 6:13 AM

New Report on Teens, Social Media, and Privacy

Interesting report from the Pew Internet and American Life Project:

Teens are sharing more information about themselves on their social media profiles than they did when we last surveyed in 2006:

  • 91% post a photo of themselves, up from 79% in 2006.
  • 71% post their school name, up from 49%.
  • 71% post the city or town where they live, up from 61%.
  • 53% post their email address, up from 29%.
  • 20% post their cell phone number, up from 2%.

60% of teen Facebook users set their Facebook profiles to private (friends only), and most report high levels of confidence in their ability to manage their settings.

danah boyd points out something interesting in the data:

My favorite finding of Pew’s is that 58% of teens cloak their messages either through inside jokes or other obscure references, with more older teens (62%) engaging in this practice than younger teens (46%)….

While adults are often anxious about shared data that might be used by government agencies, advertisers, or evil older men, teens are much more attentive to those who hold immediate power over them—parents, teachers, college admissions officers, army recruiters, etc. To adults, services like Facebook may seem “private” because you can use privacy tools, but they don’t feel that way to youth who feel like their privacy is invaded on a daily basis. (This, btw, is part of why teens feel like Twitter is more intimate than Facebook. And why you see data like Pew’s that show that teens on Facebook have, on average, 300 friends while, on Twitter, they have 79 friends.) Most teens aren’t worried about strangers; they’re worried about getting in trouble.

Over the last few years, I’ve watched as teens have given up on controlling access to content. It’s too hard, too frustrating, and technology simply can’t fix the power issues. Instead, what they’ve been doing is focusing on controlling access to meaning. A comment might look like it means one thing, when in fact it means something quite different. By cloaking their accessible content, teens reclaim power over those who they know are surveilling them. This practice is still only really emerging en masse, so I was delighted that Pew could put numbers to it. I should note that, as Instagram grows, I’m seeing more and more of this. A picture of a donut may not be about a donut. While adults worry about how teens’ demographic data might be used, teens are becoming much more savvy at finding ways to encode their content and achieve privacy in public.

Posted on May 24, 2013 at 8:40 AM

Surveillance and the Internet of Things

The Internet has turned into a massive surveillance tool. We’re constantly monitored on the Internet by hundreds of companies—both familiar and unfamiliar. Everything we do there is recorded, collected, and collated—sometimes by corporations wanting to sell us stuff and sometimes by governments wanting to keep an eye on us.

Ephemeral conversation is over. Wholesale surveillance is the norm. Maintaining privacy from these powerful entities is basically impossible, and any illusion of privacy we maintain is based either on ignorance or on our unwillingness to accept what’s really going on.

It’s about to get worse, though. Companies such as Google may know more about your personal interests than your spouse, but so far it’s been limited by the fact that these companies only see computer data. And even though your computer habits are increasingly being linked to your offline behavior, it’s still only behavior that involves computers.

The Internet of Things refers to a world where much more than our computers and cell phones is Internet-enabled. Soon there will be Internet-connected modules on our cars and home appliances. Internet-enabled medical devices will collect real-time health data about us. There’ll be Internet-connected tags on our clothing. In its extreme, everything can be connected to the Internet. It’s really just a matter of time, as these self-powered wireless-enabled computers become smaller and cheaper.

Lots has been written about the “Internet of Things” and how it will change society for the better. It’s true that it will make a lot of wonderful things possible, but the “Internet of Things” will also allow for an even greater amount of surveillance than there is today. The Internet of Things gives the governments and corporations that follow our every move something they don’t yet have: eyes and ears.

Soon everything we do, both online and offline, will be recorded and stored forever. The only question remaining is who will have access to all of this information, and under what rules.

We’re seeing an initial glimmer of this from how location sensors on your mobile phone are being used to track you. Of course your cell provider needs to know where you are; it can’t route your phone calls to your phone otherwise. But most of us broadcast our location information to many other companies whose apps we’ve installed on our phone. Google Maps certainly, but also a surprising number of app vendors who collect that information. It can be used to determine where you live, where you work, and who you spend time with.

Another early adopter was Nike, whose Nike+ shoes communicate with your iPod or iPhone and track your exercising. More generally, medical devices are starting to be Internet-enabled, collecting and reporting a variety of health data. Wiring appliances to the Internet is one of the pillars of the smart electric grid. Yes, there are huge potential savings associated with the smart grid, but it will also allow power companies—and anyone they decide to sell the data to—to monitor how people move about their house and how they spend their time.

Drones are another “thing” moving onto the Internet. As their price continues to drop and their capabilities increase, they will become a very powerful surveillance tool. Their cameras are powerful enough to see faces clearly, and there are enough tagged photographs on the Internet to identify many of us. We’re not yet up to a real-time Google Earth equivalent, but it’s not more than a few years away. And drones are just a specific application of CCTV cameras, which have been monitoring us for years, and will increasingly be networked.

Google’s Internet-enabled glasses—Google Glass—are another major step down this path of surveillance. Their ability to record both audio and video will bring ubiquitous surveillance to the next level. Once they’re common, you might never know when you’re being recorded in both audio and video. You might as well assume that everything you do and say will be recorded and saved forever.

In the near term, at least, the sheer volume of data will limit the sorts of conclusions that can be drawn. The invasiveness of these technologies depends on asking the right questions. For example, if a private investigator is watching you in the physical world, she or he might observe odd behavior and investigate further based on that. Such serendipitous observations are harder to achieve when you’re filtering databases based on pre-programmed queries. In other words, it’s easier to ask questions about what you purchased and where you were than to ask what you did with your purchases and why you went where you did. These analytical limitations also mean that companies like Google and Facebook will benefit more from the Internet of Things than individuals—not only because they have access to more data, but also because they have more sophisticated query technology. And as technology continues to improve, the ability to automatically analyze this massive data stream will improve.

In the longer term, the Internet of Things means ubiquitous surveillance. If an object “knows” you have purchased it, and communicates via either Wi-Fi or the mobile network, then whoever or whatever it is communicating with will know where you are. Your car will know who is in it, who is driving, and what traffic laws that driver is following or ignoring. No need to show ID; your identity will already be known. Store clerks could know your name, address, and income level as soon as you walk through the door. Billboards will tailor ads to you, and record how you respond to them. Fast food restaurants will know what you usually order, and exactly how to entice you to order more. Lots of companies will know whom you spend your days—and nights—with. Facebook will know about any new relationship status before you bother to change it on your profile. And all of this information will be saved, correlated, and studied. Even now, it feels a lot like science fiction.

Will you know any of this? Will your friends? It depends. Lots of these devices have, and will have, privacy settings. But these settings are remarkable not in how much privacy they afford, but in how much they deny. Access will likely be similar to your browsing habits, your files stored on Dropbox, your searches on Google, and your text messages from your phone. All of your data is saved by those companies—and many others—correlated, and then bought and sold without your knowledge or consent. You’d think that your privacy settings would keep random strangers from learning everything about you, but they only keep out random strangers who don’t pay for the privilege—or who don’t work for the government and have the ability to demand the data. Power is what matters here: you’ll be able to keep the powerless from invading your privacy, but you’ll have no ability to prevent the powerful from doing it again and again.

This essay originally appeared on the Guardian.

EDITED TO ADD (6/14): Another article on the subject.

Posted on May 21, 2013 at 6:15 AM

Power and the Internet

All disruptive technologies upset traditional power balances, and the Internet is no exception. The standard story is that it empowers the powerless, but that’s only half the story. The Internet empowers everyone. Powerful institutions might be slow to make use of that new power, but since they are powerful, they can use it more effectively. Governments and corporations have woken up to the fact that not only can they use the Internet, they can control it for their interests. Unless we start deliberately debating the future we want to live in, and the role of information technology in enabling that world, we will end up with an Internet that benefits existing power structures and not society in general.

We’ve all lived through the Internet’s disruptive history. Entire industries, like travel agencies and video rental stores, disappeared. Traditional publishing—books, newspapers, encyclopedias, music—lost power, while Amazon and others gained. Advertising-based companies like Google and Facebook gained a lot of power. Microsoft lost power (as hard as that is to believe).

The Internet changed political power as well. Some governments lost power as citizens organized online. Political movements became easier, helping to topple governments. The Obama campaign made revolutionary use of the Internet, both in 2008 and 2012.

And the Internet changed social power, as we collected hundreds of “friends” on Facebook, tweeted our way to fame, and found communities for the most obscure hobbies and interests. And some crimes became easier: impersonation fraud became identity theft, copyright violation became file sharing, and accessing censored materials—political, sexual, cultural—became trivially easy.

Now powerful interests are looking to deliberately steer this influence to their advantage. Some corporations are creating Internet environments that maximize their profitability: Facebook and Google, among many others. Some industries are lobbying for laws that make their particular business models more profitable: telecom carriers want to be able to discriminate between different types of Internet traffic, entertainment companies want to crack down on file sharing, advertisers want unfettered access to data about our habits and preferences.

On the government side, more countries censor the Internet—and do so more effectively—than ever before. Police forces around the world are using Internet data for surveillance, with less judicial oversight and sometimes in advance of any crime. Militaries are fomenting a cyberwar arms race. Internet surveillance—both governmental and commercial—is on the rise, not just in totalitarian states but in Western democracies as well. Both companies and governments rely more on propaganda to create false impressions of public opinion.

In 1996, cyber-libertarian John Perry Barlow issued his “Declaration of the Independence of Cyberspace.” He told governments: “You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear.” It was a utopian ideal, and many of us believed him. We believed that the Internet generation, those quick to embrace the social changes this new technology brought, would swiftly outmaneuver the more ponderous institutions of the previous era.

Reality turned out to be much more complicated. What we forgot is that technology magnifies power in both directions. When the powerless found the Internet, suddenly they had power. But while the unorganized and nimble were the first to make use of the new technologies, eventually the powerful behemoths woke up to the potential—and they have more power to magnify. And not only does the Internet change power balances, but the powerful can also change the Internet. Does anyone else remember how incompetent the FBI was at investigating Internet crimes in the early 1990s? Or how Internet users ran rings around China’s censors and Middle Eastern secret police? Or how digital cash was going to make government currencies obsolete, and Internet organizing was going to make political parties obsolete? Now all that feels like ancient history.

It’s not all one-sided. The masses can occasionally organize around a specific issue—SOPA/PIPA, the Arab Spring, and so on—and can block some actions by the powerful. But it doesn’t last. The unorganized go back to being unorganized, and powerful interests take back the reins.

Debates over the future of the Internet are morally and politically complex. How do we balance personal privacy against what law enforcement needs to prevent copyright violations? Or child pornography? Is it acceptable to be judged by invisible computer algorithms when being served search results? When being served news articles? When being selected for additional scrutiny by airport security? Do we have a right to correct data about us? To delete it? Do we want computer systems that forget things after some number of years? These are complicated issues that require meaningful debate, international cooperation, and iterative solutions. Does anyone believe we’re up to the task?

We’re not, and that’s the worry. Because if we’re not trying to understand how to shape the Internet so that its good effects outweigh the bad, powerful interests will do all the shaping. The Internet’s design isn’t fixed by natural laws. Its history is a fortuitous accident: an initial lack of commercial interests, governmental benign neglect, military requirements for survivability and resilience, and the natural inclination of computer engineers to build open systems that work simply and easily. This mix of forces that created yesterday’s Internet will not be trusted to create tomorrow’s. Battles over the future of the Internet are going on right now: in legislatures around the world, in international organizations like the International Telecommunications Union and the World Trade Organization, and in Internet standards bodies. The Internet is what we make it, and is constantly being recreated by organizations, companies, and countries with specific interests and agendas. Either we fight for a seat at the table, or the future of the Internet becomes something that is done to us.

This essay appeared as a response to Edge’s annual question, “What *Should* We Be Worried About?”

Posted on January 31, 2013 at 7:09 AM

Privacy Concerns Around "Social Reading"

Interesting paper: “The Perils of Social Reading,” by Neil M. Richards, from the Georgetown Law Journal.

Abstract: Our law currently treats records of our reading habits under two contradictory rules: rules mandating confidentiality, and rules permitting disclosure. Recently, the rise of the social Internet has created more of these records and more pressures on when and how they should be shared. Companies like Facebook, in collaboration with many newspapers, have ushered in the era of “social reading,” in which what we read may be “frictionlessly shared” with our friends and acquaintances. Disclosure and sharing are on the rise.

This Article sounds a cautionary note about social reading and frictionless sharing. Social reading can be good, but the ways in which we set up the defaults for sharing matter a great deal. Our reader records implicate our intellectual privacy: the protection of reading from surveillance and interference so that we can read freely, widely, and without inhibition. I argue that the choices we make about how to share have real consequences, and that “frictionless sharing” is not frictionless, nor is it really sharing. Although sharing is important, the sharing of our reading habits is special. Such sharing should be conscious and only occur after meaningful notice.

The stakes in this debate are immense. We are quite literally rewiring the public and private spheres for a new century. Choices we make now about the boundaries between our individual and social selves, between consumers and companies, between citizens and the state, will have unforeseeable ramifications for the societies our children and grandchildren inherit. We should make choices that preserve our intellectual privacy, not destroy it. This Article suggests practical ways to do just that.

Posted on May 23, 2012 at 7:25 AM
