Entries Tagged "privacy"


What I've Been Thinking About

I’m starting to think about my next book, which will be about power and the Internet—from the perspective of security. My objective will be to describe current trends, explain where those trends are leading us, and discuss alternatives for avoiding that outcome. Many of my recent essays have touched on various facets of this, although I’m still looking for synthesis. These facets include:

  1. The relationship between the Internet and power: how the Internet affects power, and how power affects the Internet. Increasingly, those in power are using information technology to increase their power.
  2. A feudal model of security that leaves users with little control over their data or computing platforms, forcing them to trust the companies that sell the hardware, software, and systems—and allowing those companies to abuse that trust.
  3. The rise of nationalism on the Internet and a cyberwar arms race, both of which play on our fears and which are resulting in increased military involvement in our information infrastructure.
  4. Ubiquitous surveillance for both government and corporate purposes—aided by cloud computing, social networking, and Internet-enabled everything—resulting in a world without any real privacy.
  5. The four tools of Internet oppression—surveillance, censorship, propaganda, and use control—have both government and corporate uses. And these are interrelated; often, building tools to fight one has the side effect of facilitating another.
  6. Ill-conceived laws and regulations on behalf of either government or corporate power, whether to prop up business models (copyright protections), fight crime (increased police access to data), or control our actions in cyberspace.
  7. The need for leaks: both whistleblowers and FOIA suits. So much of what the government does to us is shrouded in secrecy, and leaks are the only way we know what’s going on. This also applies to the corporate algorithms and systems that control much of our lives.

On the one hand, we need new regimes of trust in the information age. (I wrote about this extensively in my most recent book, Liars and Outliers.) On the other hand, the risks associated with increasing technology might mean that the fear of catastrophic attack will make us unable to create those new regimes.

I believe society is headed down a dangerous path, and that we—as members of society—need to make some hard choices about what sort of world we want to live in. If we maintain our current trajectory, the future does not look good. It’s not clear if we have the social or political will to address the intertwined issues of power, security, and technology, or even have the conversations necessary to understand the decisions we need to make. Writing about topics like this is what I do best, and I hope that a book on this topic will have a positive effect on the discourse.

The working title of the book is Power.com—although that might be too similar to the book Power, Inc. for the final title.

These thoughts are still in draft, and not yet part of a coherent whole. For me, the writing process is how I understand a topic, and the shape of this book will almost certainly change substantially as I write. I’m very interested in what people think about this, especially in terms of solutions. Please pass this around to interested people, and leave comments to this blog post.

Posted on April 1, 2013 at 6:07 AM

The Dangers of Surveillance

Interesting article, “The Dangers of Surveillance,” by Neil M. Richards, Harvard Law Review, 2013. From the abstract:

…We need a better account of the dangers of surveillance.

This article offers such an account. Drawing on law, history, literature, and the work of scholars in the emerging interdisciplinary field of “surveillance studies,” I explain what those harms are and why they matter. At the level of theory, I explain when surveillance is particularly dangerous, and when it is not. Surveillance is harmful because it can chill the exercise of our civil liberties, especially our intellectual privacy. It also gives the watcher power over the watched, creating the risk of a variety of other harms, such as discrimination, coercion, and the threat of selective enforcement, where critics of the government can be prosecuted or blackmailed for wrongdoing unrelated to the purpose of the surveillance.

At a practical level, I propose a set of four principles that should guide the future development of surveillance law, allowing for a more appropriate balance between the costs and benefits of government surveillance. First, we must recognize that surveillance transcends the public-private divide. Even if we are ultimately more concerned with government surveillance, any solution must grapple with the complex relationships between government and corporate watchers. Second, we must recognize that secret surveillance is illegitimate, and prohibit the creation of any domestic surveillance programs whose existence is secret. Third, we should recognize that total surveillance is illegitimate and reject the idea that it is acceptable for the government to record all Internet activity without authorization. Fourth, we must recognize that surveillance is harmful. Surveillance menaces intellectual privacy and increases the risk of blackmail, coercion, and discrimination; accordingly, we must recognize surveillance as a harm in constitutional standing doctrine.

EDITED TO ADD (4/12): Reply to the article.

Posted on March 29, 2013 at 12:25 PM

Identifying People from Mobile Phone Location Data

Turns out that it’s pretty easy:

Researchers at the Massachusetts Institute of Technology (MIT) and the Catholic University of Louvain studied 15 months’ worth of anonymised mobile phone records for 1.5 million individuals.

They found from the “mobility traces” – the evident paths of each mobile phone – that only four locations and times were enough to identify a particular user.

“In the 1930s, it was shown that you need 12 points to uniquely identify and characterise a fingerprint,” said the study’s lead author Yves-Alexandre de Montjoye of MIT.

“What we did here is the exact same thing but with mobility traces. The way we move and the behaviour is so unique that four points are enough to identify 95% of people,” he told BBC News.

Here’s the study.
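The uniqueness test the researchers describe is easy to sketch in a few lines of code. The following is a toy illustration of the idea (the dataset, parameters, and function names are mine, not the study’s): sample a handful of (location, time) points from one user’s trace and check whether any other trace also contains them all.

```python
import random

# Toy mobility traces: user -> set of (antenna_id, hour) points.
# The real study covered 1.5 million users over 15 months; this tiny
# synthetic stand-in only illustrates the uniqueness test.
random.seed(0)
traces = {
    user: {(random.randrange(20), random.randrange(24)) for _ in range(40)}
    for user in range(1000)
}

def is_unique(target, k):
    """True if k random points from target's trace match no other user."""
    points = random.sample(sorted(traces[target]), k)
    matches = [u for u, t in traces.items() if all(p in t for p in points)]
    return matches == [target]

# Fraction of users uniquely identified by k spatio-temporal points.
for k in (1, 2, 4):
    frac = sum(is_unique(u, k) for u in traces) / len(traces)
    print(f"{k} points -> {frac:.0%} unique")
```

Even with these made-up numbers, the pattern the study found shows up: one point matches many people, but a few points together pin down almost everyone.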

EFF maintains a good page on the issues surrounding location privacy.

Posted on March 26, 2013 at 6:38 AM

Our Internet Surveillance State

I’m going to start with three data points.

One: Some of the Chinese military hackers who were implicated in a broad set of attacks against the U.S. government and corporations were identified because they accessed Facebook from the same network infrastructure they used to carry out their attacks.

Two: Hector Monsegur, one of the leaders of the LulzSec hacker movement, was identified and arrested last year by the FBI. Although he practiced good computer security and used an anonymous relay service to protect his identity, he slipped up.

And three: Paula Broadwell, who had an affair with CIA director David Petraeus, similarly took extensive precautions to hide her identity. She never logged in to her anonymous e-mail service from her home network. Instead, she used hotel and other public networks when she e-mailed him. The FBI correlated hotel registration data from several different hotels—and hers was the common name.

The Internet is a surveillance state. Whether we admit it to ourselves or not, and whether we like it or not, we’re being tracked all the time. Google tracks us, both on its pages and on other pages it has access to. Facebook does the same; it even tracks non-Facebook users. Apple tracks us on our iPhones and iPads. One reporter used a tool called Collusion to track who was tracking him; 105 companies tracked his Internet use during one 36-hour period.

Increasingly, what we do on the Internet is being combined with other data about us. Unmasking Broadwell’s identity involved correlating her Internet activity with her hotel stays. Everything we do now involves computers, and computers produce data as a natural by-product. Everything is now being saved and correlated, and many big-data companies make money by building up intimate profiles of our lives from a variety of sources.

Facebook, for example, correlates your online behavior with your purchasing habits offline. And there’s more: there’s location data from your cell phone, and there’s a record of your movements from closed-circuit TVs.

This is ubiquitous surveillance: All of us being watched, all the time, and that data being stored forever. This is what a surveillance state looks like, and it’s efficient beyond the wildest dreams of George Orwell.

Sure, we can take measures to prevent this. We can limit what we search on Google from our iPhones, and instead use computer web browsers that allow us to delete cookies. We can use an alias on Facebook. We can turn our cell phones off and spend cash. But increasingly, none of it matters.

There are simply too many ways to be tracked. The Internet, e-mail, cell phones, web browsers, social networking sites, search engines: these have become necessities, and it’s fanciful to expect people to simply refuse to use them just because they don’t like the spying, especially since the full extent of such spying is deliberately hidden from us and there are few alternatives being marketed by companies that don’t spy.

This isn’t something the free market can fix. We consumers have no choice in the matter. All the major companies that provide us with Internet services are interested in tracking us. Visit a website and it will almost certainly know who you are; there are lots of ways to be tracked without cookies. Cell phone companies routinely undo the web’s privacy protection. One experiment at Carnegie Mellon took real-time videos of students on campus and was able to identify one-third of them by comparing their photos with publicly available tagged Facebook photos.

Maintaining privacy on the Internet is nearly impossible. If you forget even once to enable your protections, or click on the wrong link, or type the wrong thing, you’ve permanently attached your name to whatever anonymous service you’re using. Monsegur slipped up once, and the FBI got him. If the director of the CIA can’t maintain his privacy on the Internet, we’ve got no hope.

In today’s world, governments and corporations are working together to keep things that way. Governments are happy to use the data corporations collect—occasionally demanding that they collect more and save it longer—to spy on us. And corporations are happy to buy data from governments. Together the powerful spy on the powerless, and they’re not going to give up their positions of power, despite what the people want.

Fixing this requires strong government will, but governments are just as punch-drunk on data as the corporations. Slap-on-the-wrist fines notwithstanding, no one is agitating for better privacy laws.

So, we’re done. Welcome to a world where Google knows exactly what sort of porn you all like, and more about your interests than your spouse does. Welcome to a world where your cell phone company knows exactly where you are all the time. Welcome to the end of private conversations, because increasingly your conversations are conducted by e-mail, text, or social networking sites.

And welcome to a world where all of this, and everything else that you do or is done on a computer, is saved, correlated, studied, passed around from company to company without your knowledge or consent; and where the government accesses it at will without a warrant.

Welcome to an Internet without privacy, and we’ve ended up here with hardly a fight.

This essay previously appeared on CNN.com, where it got 23,000 Facebook likes and 2,500 tweets—by far the most widely distributed essay I’ve ever written.

Commentary.

EDITED TO ADD (3/26): More commentary.

EDITED TO ADD (3/28): This Communist commentary seems to be mostly semantic drivel, but parts of it are interesting. The author doesn’t seem to have a problem with State surveillance, but he thinks the incentives that cause businesses to use the same tools should be revisited. This seems just as wrong-headed as the Libertarians who have no problem with corporations using surveillance tools, but don’t want governments to use them.

EDITED TO ADD (5/28): This essay has been translated into Polish.

Posted on March 25, 2013 at 6:28 AM

Changes to the Blog

I have made a few changes to my blog that I’d like to talk about.

The first is the various buttons associated with each post: a Facebook Like button, a Retweet button, and so on. These buttons are ubiquitous on the Internet now. We publishers like them because they make it easier for our readers to share our content. I especially like them because I can obsessively watch the totals to see how my writings are spreading across the Internet.

The problem is that these buttons use images, scripts, and/or iframes hosted on the social media site’s own servers. This is partly for webmasters’ convenience; it makes adoption as easy as copy-and-pasting a few lines of code. But it also gives Facebook, Twitter, Google, and so on a way to track you—even if you don’t click on the button. Remember that: if you see sharing buttons on a webpage, that page is almost certainly being tracked by social media sites or a service like AddThis. Or both.

What I’m using instead is SocialSharePrivacy, which was created by the German website Heise Online and adapted by Mathias Panzenböck. The page shows a grayed-out mockup of a sharing button. You click once to activate it, then a second time to share the page. If you don’t click, nothing is loaded from the social media site, so it can’t track your visit. If you don’t care about the privacy issues, you can click on the Settings icon and enable the sharing buttons permanently.

It’s not a perfect solution—two clicks instead of one—but it’s much more privacy-friendly.
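The pattern is simple enough to model in a few lines. This is a sketch of the two-click idea only, not SocialSharePrivacy’s actual JavaScript; the class and method names are my own illustration:

```python
# Minimal model of the two-click pattern: nothing is fetched from the
# social network's servers until the user explicitly opts in.
class TwoClickShareButton:
    def __init__(self, network):
        self.network = network
        self.requests_made = []  # stand-in for outgoing network traffic
        self.activated = False

    def render(self):
        """Initial render: a local grayed-out image, no third-party load."""
        return f"<img src='/local/{self.network}-disabled.png'>"

    def click(self):
        if not self.activated:
            # First click: now, and only now, load the real widget.
            self.activated = True
            self.requests_made.append(f"https://{self.network}.example/widget.js")
            return "widget loaded"
        # Second click: actually share the page.
        self.requests_made.append(f"https://{self.network}.example/share")
        return "shared"

button = TwoClickShareButton("facebook")
button.render()
assert button.requests_made == []  # merely viewing the page leaks nothing
button.click()                     # opt in
button.click()                     # share
print(button.requests_made)
```

The design point is in the first assertion: rendering the page generates no third-party request at all, so the social network never learns about visitors who don’t click.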

(If you’re thinking of doing something similar on your own site, another option to consider is shareNice. ShareNice can be copied to your own webserver; but if you prefer, you can use their hosted version, which makes it as easy to install as AddThis. The difference is that shareNice doesn’t set cookies or even log IP addresses—though you’ll have to trust them on the logging part. The problem is that it can’t display the aggregate totals.)

The second change is the search function. I changed the site’s search engine from Google to DuckDuckGo, which doesn’t even store IP addresses. Again, you have to trust them on that, but I’m inclined to.

The third change is to the feed. Starting now, if you click the feed icon in the right-hand column of my blog, you’ll be subscribing to a feed that’s hosted locally on schneier.com, instead of one produced by Google’s Feedburner service. Again, this reduces the amount of data Google collects about you. Over the next couple of days, I will transition existing subscribers off of Feedburner, but since some of you are subscribed directly to a Feedburner URL, I recommend resubscribing to the new link to be sure. And if by chance you have trouble with the new feed, this legacy link will always point to the Feedburner version.

Fighting against the massive amount of surveillance data collected about us as we surf the Internet is hard, and possibly even fruitless. But I think it’s important to try.

Posted on March 22, 2013 at 3:46 PM

Text Message Retention Policies

The FBI wants cell phone carriers to store SMS messages for a long time, enabling them to conduct surveillance backwards in time. Nothing new there—data retention laws are being debated in many countries around the world—but this was something I did not know:

Wireless providers’ current SMS retention policies vary. An internal Justice Department document (PDF) that the ACLU obtained through the Freedom of Information Act shows that, as of 2010, AT&T, T-Mobile, and Sprint did not store the contents of text messages. Verizon did for up to five days, a change from its earlier no-logs-at-all position, and Virgin Mobile kept them for 90 days. The carriers generally kept metadata such as the phone numbers associated with the text for 90 days to 18 months; AT&T was an outlier, keeping it for as long as seven years.

An e-mail message from a detective in the Baltimore County Police Department, leaked by Antisec and reproduced in a 2011 Wired article, says that Verizon keeps “text message content on their servers for 3-5 days.” And: “Sprint stores their text message content going back 12 days and Nextel content for 7 days. AT&T/Cingular do not preserve content at all. Us Cellular: 3-5 days Boost Mobile LLC: 7 days”

That second set of data is from 2009.

Leaks seem to be the primary way we learn how our privacy is being violated these days—we need more of them.

EDITED TO ADD (4/12): Discussion of Canadian policy.

Posted on March 21, 2013 at 1:17 PM

When Technology Overtakes Security

A core, not side, effect of technology is its ability to magnify power and multiply force—for both attackers and defenders. One side creates ceramic handguns, laser-guided missiles, and new identity-theft techniques, while the other side creates anti-missile defense systems, fingerprint databases, and automatic facial recognition systems.

The problem is that it’s not balanced: Attackers generally benefit from new security technologies before defenders do. They have a first-mover advantage. They’re more nimble and adaptable than defensive institutions like police forces. They’re not limited by bureaucracy, laws, or ethics. They can evolve faster. And entropy is on their side—it’s easier to destroy something than it is to prevent, defend against, or recover from that destruction.

For the most part, though, society still wins. The bad guys simply can’t do enough damage to destroy the underlying social system. The question for us is: can society still maintain security as technology becomes more advanced?

I don’t think it can.

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious…and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.

This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.

As the destructive power of individual actors and fringe groups increases, so do the calls for—and society’s acceptance of—increased security.

Traditional security largely works “after the fact.” We tend not to ban or restrict the objects that can do harm; instead, we punish the people who do harm with objects. There are exceptions, of course, but they’re exactly that: exceptions. This system works as long as society can tolerate the destructive effects of those objects (for example, allowing people to own baseball bats and arresting them after they use them in a riot is only viable if society can tolerate the potential for riots).

When that isn’t enough, we resort to “before-the-fact” security measures. These come in two basic varieties: general surveillance of people in an effort to stop them before they do damage, and specific interdictions in an effort to stop people from using those technologies to do damage.

But these measures work better at keeping dangerous technologies out of the hands of amateurs than at keeping them out of the hands of professionals.

And in the global interconnected world we live in, they’re not anywhere close to foolproof. Still, a climate of fear causes governments to try. Lots of technologies are already restricted: entire classes of drugs, entire classes of munitions, explosive materials, biological agents. There are age restrictions on vehicles and training restrictions on complex systems like aircraft. We’re already almost entirely living in a surveillance state, though we don’t realize it or won’t admit it to ourselves. This will only get worse as technology advances… today’s Ph.D. theses are tomorrow’s high-school science-fair projects.

Increasingly, broad prohibitions on technologies, constant ubiquitous surveillance, and Minority Report-like preemptive security will become the norm. We can debate the effectiveness of various security measures in different circumstances. But the problem isn’t that these security measures won’t work—even as they shred our freedoms and liberties—it’s that no security is perfect.

Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We’ll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.

As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of someone in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn’t kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then, someone goes off and destroys us anyway?
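The claim that the odds approach certainty is just compounding probability. A toy calculation (the per-person probabilities here are made-up illustrative numbers, not a forecast) shows the shape of the problem:

```python
def prob_someone_attacks(p_individual, group_size):
    """Probability at least one member acts, assuming independence."""
    return 1 - (1 - p_individual) ** group_size

# Even a one-in-ten-million probability per person becomes a
# near-certainty once the "group" is everyone on the planet.
for n in (1_000, 1_000_000, 7_000_000_000):
    print(n, round(prob_someone_attacks(1e-7, n), 4))
```

The per-person probability barely matters at planetary scale; only reducing it toward zero, or surviving the attack, changes the outcome.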

If security won’t work in the end, what is the solution?

Resilience—building systems able to survive unexpected and devastating attacks—is the best answer we have right now. We need to recognize that large-scale attacks will happen, that society can survive more than we give it credit for, and that we can design systems to survive these sorts of attacks. Calling terrorism an existential threat is ridiculous in a country where more people die each month in car crashes than died in the 9/11 terrorist attacks.

If the U.S. can survive the destruction of an entire city—witness New Orleans after Hurricane Katrina or even New York after Sandy—we need to start acting like it, and planning for it. Still, it’s hard to see how resilience buys us anything but additional time. Technology will continue to advance, and right now we don’t know how to adapt any defenses—including resilience—fast enough.

We need a more flexible and rationally reactive approach to these problems and new regimes of trust for our information-interconnected world. We’re going to have to figure this out if we want to survive, and I’m not sure how many decades we have left.

This essay originally appeared on Wired.com.

Commentary.

Posted on March 21, 2013 at 7:02 AM

FinSpy

Twenty-five countries are using the FinSpy surveillance software package (also called FinFisher) to spy on their own citizens:

The list of countries with servers running FinSpy is now Australia, Bahrain, Bangladesh, Britain, Brunei, Canada, the Czech Republic, Estonia, Ethiopia, Germany, India, Indonesia, Japan, Latvia, Malaysia, Mexico, Mongolia, Netherlands, Qatar, Serbia, Singapore, Turkmenistan, the United Arab Emirates, the United States and Vietnam.

It’s sold by the British company Gamma Group.

Older news.

EDITED TO ADD (3/20): The report.

EDITED TO ADD (4/12): Some more links.

Posted on March 19, 2013 at 1:34 PM

Nationalism on the Internet

For technology that was supposed to ignore borders, bring the world closer together, and sidestep the influence of national governments, the Internet is fostering an awful lot of nationalism right now. We’ve started to see increased concern about the country of origin of IT products and services; U.S. companies are worried about hardware from China; European companies are worried about cloud services in the U.S.; no one is sure whether to trust hardware and software from Israel; Russia and China might each be building their own operating systems out of concern about using foreign ones.

I see this as an effect of all the cyberwar saber-rattling that’s going on right now. The major nations of the world are in the early years of a cyberwar arms race, and we’re all being hurt by the collateral damage.

A commentator on Al Jazeera makes a similar point.

Our nationalist worries have recently been fueled by a media frenzy surrounding attacks from China. These attacks aren’t new—cyber-security experts have been writing about them for at least a decade, and the popular media reported about similar attacks in 2009 and again in 2010—and the current allegations aren’t even very different from what came before. This isn’t to say that the Chinese attacks aren’t serious. The country’s espionage campaign is sophisticated, and ongoing. And because they’re in the news, people are understandably worried about them.

But it’s not just China. International espionage works in both directions, and I’m sure we are giving just as good as we’re getting. China is certainly worried about the U.S. Cyber Command’s recent announcement that it was expanding from 900 people to almost 5,000, and the NSA’s massive new data center in Utah. The U.S. even admits that it can spy on non-U.S. citizens freely.

The fact is that governments and militaries have discovered the Internet; everyone is spying on everyone else, and countries are ratcheting up offensive actions against other countries.

At the same time, many nations are demanding more control over the Internet within their own borders. They reserve the right to spy and censor, and to limit the ability of others to do the same. This idea is now being called the “cyber sovereignty movement,” and gained traction at the International Telecommunications Union meeting last December in Dubai. One analyst called that meeting the “Internet Yalta,” where the Internet split between liberal-democratic and authoritarian countries. I don’t think he’s exaggerating.

Not that this is new, either. Remember 2010, when the governments of the UAE, Saudi Arabia, and India demanded that RIM give them the ability to spy on BlackBerry PDAs within their borders? Or last year, when Syria used the Internet to surveil its dissidents? Information technology is a surprisingly powerful tool for oppression: not just surveillance, but censorship and propaganda as well. And countries are getting better at using that tool.

But remember: none of this is cyberwar. It’s all espionage, something that’s been going on between countries ever since countries were invented. What moves public opinion is less the facts and more the rhetoric, and the rhetoric of war is what we’re hearing.

The result of all this saber-rattling is a severe loss of trust, not just amongst nation-states but between people and nation-states. We know we’re nothing more than pawns in this game, and we figure we’ll be better off sticking with our own country.

Unfortunately, both the reality and the rhetoric play right into the hands of the military and corporate interests that are behind the cyberwar arms race in the first place. There is an enormous amount of power at stake here: not only power within governments and militaries, but power and profit amongst the corporations that supply the tools and infrastructure for cyber-attack and cyber-defense. The more we believe we are “at war” and believe the jingoistic rhetoric, the more willing we are to give up our privacy, freedoms, and control over how the Internet is run.

Arms races are fueled by two things: ignorance and fear. We don’t know the capabilities of the other side, and we fear that they are more capable than we are. So we spend more, just in case. The other side, of course, does the same. That spending will result in more cyber weapons for attack and more cyber-surveillance for defense. It will result in more government control over the protocols of the Internet, and less free-market innovation over the same. At its worst, we might be about to enter an information-age Cold War: one with more than two “superpowers.” Aside from this being a bad future for the Internet, this is inherently destabilizing. It’s just too easy for this amount of antagonistic power and advanced weaponry to get used: for a mistaken attribution to be reacted to with a counterattack, for a misunderstanding to become a cause for offensive action, or for a minor skirmish to escalate into a full-fledged cyberwar.

Nationalism is rife on the Internet, and it’s getting worse. We need to damp down the rhetoric and—more importantly—stop believing the propaganda from those who profit from this Internet nationalism. Those who are beating the drums of cyberwar don’t have the best interests of society, or the Internet, at heart.

This essay previously appeared at Technology Review.

Posted on March 14, 2013 at 6:11 AM

