Entries Tagged "control"


Building in Surveillance

China is the world’s most successful Internet censor. While the Great Firewall of China isn’t perfect, it effectively limits information flowing in and out of the country. But now the Chinese government is taking things one step further.

Under a requirement taking effect soon, every computer sold in China will have to contain the Green Dam Youth Escort software package. Ostensibly a pornography filter, it is government spyware that will watch every citizen on the Internet.

Green Dam has many uses. It can police a list of forbidden Web sites. It can monitor a user’s reading habits. It can even enlist the computer in some massive botnet attack, as part of a hypothetical future cyberwar.

China’s actions may be extreme, but they’re not unique. Democratic governments around the world—Sweden, Canada and the United Kingdom, for example—are rushing to pass laws giving their police new powers of Internet surveillance, in many cases requiring communications system providers to redesign products and services they sell.

Many are passing data retention laws, forcing companies to keep information on their customers. Just recently, the German government proposed giving itself the power to censor the Internet.

The United States is no exception. The 1994 CALEA law required phone companies to facilitate FBI eavesdropping, and since 2001, the NSA has built substantial eavesdropping systems in the United States. The government has repeatedly proposed Internet data retention laws, allowing surveillance into past activities as well as present.

Systems like this invite criminal appropriation and government abuse. New police powers, enacted to fight terrorism, are already used in situations of normal crime. Internet surveillance and control will be no different.

Official misuses are bad enough, but the unofficial uses worry me more. Any surveillance and control system must itself be secured. An infrastructure conducive to surveillance and control invites surveillance and control, both by the people you expect and by the people you don’t.

China’s government designed Green Dam for its own use, but it’s been subverted. Why does anyone think that criminals won’t be able to use it to steal bank account and credit card information, use it to launch other attacks, or turn it into a massive spam-sending botnet?

Why does anyone think that only authorized law enforcement will mine collected Internet data or eavesdrop on phone and IM conversations?

These risks are not theoretical. After 9/11, the National Security Agency built a surveillance infrastructure to eavesdrop on telephone calls and e-mails within the United States.

Although procedural rules stated that only non-Americans and international phone calls were to be listened to, actual practice didn’t always match those rules. NSA analysts collected more data than they were authorized to, and used the system to spy on wives, girlfriends, and famous people such as President Clinton.

But that’s not the most serious misuse of a telecommunications surveillance infrastructure. In Greece, between June 2004 and March 2005, someone wiretapped more than 100 cell phones belonging to members of the Greek government—the prime minister and the ministers of defense, foreign affairs and justice.

Ericsson built this wiretapping capability into Vodafone’s products, and enabled it only for governments that requested it. Greece wasn’t one of those governments, but someone still unknown—a rival political party? organized crime?—figured out how to surreptitiously turn the feature on.

Researchers have already found security flaws in Green Dam that would allow hackers to take over the computers. Of course there are additional flaws, and criminals are looking for them.

Surveillance infrastructure can be exported, which also aids totalitarianism around the world. Western companies like Siemens, Nokia, and Secure Computing built Iran’s surveillance infrastructure. U.S. companies helped build China’s electronic police state. Twitter’s anonymity saved the lives of Iranian dissidents—anonymity that many governments want to eliminate.

Every year brings more Internet censorship and control—not just in countries like China and Iran, but in the United States, the United Kingdom, Canada and other free countries.

The control movement is egged on both by law enforcement, trying to catch terrorists, child pornographers and other criminals, and by media companies, trying to stop file sharers.

It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state. No matter what the eavesdroppers and censors say, these systems put us all at greater risk. Communications systems that have no inherent eavesdropping capabilities are more secure than systems with those capabilities built in.

This essay previously appeared—albeit with fewer links—on the Minnesota Public Radio website.

Posted on August 3, 2009 at 6:43 AM

An Expectation of Online Privacy

If your data is online, it is not private. Oh, maybe it seems private. Certainly, only you have access to your e-mail. Well, you and your ISP. And the sender’s ISP. And any backbone provider who happens to route that mail from the sender to you. And, if you read your personal mail from work, your company. And, if they have taps at the correct points, the NSA and any other sufficiently well-funded government intelligence organization—domestic and international.

You could encrypt your mail, of course, but few of us do that. Most of us now use webmail. The general problem is that, for the most part, your online data is not under your control. Cloud computing and software as a service only exacerbate the problem.
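The difference between data encrypted before it leaves your machine and data stored in the clear by a provider can be made concrete with a toy sketch. This is a one-time pad in Python, for illustration only—real mail encryption uses a standard such as OpenPGP or S/MIME, and the message here is invented:

```python
import secrets

def encrypt(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with a key byte. The key must be
    # random, as long as the message, and never reused.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"Meet me at noon."
key = secrets.token_bytes(len(message))  # stays on your machine
ciphertext = encrypt(message, key)       # all the provider ever stores

assert encrypt(ciphertext, key) == message  # XOR is its own inverse
```

If only the ciphertext sits on the provider's servers, the ISPs, backbone taps, and subpoenas in the essay above get nothing readable; the catch, of course, is that almost nobody does this.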

Your webmail is less under your control than it would be if you downloaded your mail to your computer. If you use Salesforce.com, you’re relying on that company to keep your data private. If you use Google Docs, you’re relying on Google. This is why the Electronic Privacy Information Center recently filed a complaint with the Federal Trade Commission: many of us are relying on Google’s security, but we don’t know what it is.

This is new. Twenty years ago, if someone wanted to look through your correspondence, he had to break into your house. Now, he can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your office; now it’s on a computer owned by a telephone company. Your financial accounts are on remote websites protected only by passwords; your credit history is collected, stored, and sold by companies you don’t even know exist.

And more data is being generated. Lists of books you buy, as well as the books you look at, are stored in the computers of online booksellers. Your affinity card tells your supermarket what foods you like. What were cash transactions are now credit card transactions. What used to be an anonymous coin tossed into a toll booth is now an EZ Pass record of which highway you were on, and when. What used to be a face-to-face chat is now an e-mail, IM, or SMS conversation—or maybe a conversation inside Facebook.

Remember when Facebook changed its terms of service to take further control over your data? They can do that whenever they want, you know.

We have no choice but to trust these companies with our security and privacy, even though they have little incentive to protect them. Neither ChoicePoint, LexisNexis, Bank of America, nor T-Mobile bears the costs of privacy violations or any resultant identity theft.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as others hold that data. If the police want to read the e-mail on your computer, they need a warrant; but they don’t need one to read it from the backup tapes at your ISP.

This isn’t a technological problem; it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant—even though it occurred at the phone company switching office and not in the target’s home or office—the Supreme Court must recognize that reading personal e-mail at an ISP is no different.

This essay was originally published on the SearchSecurity.com website, as the second half of a point/counterpoint with Marcus Ranum.

Posted on May 5, 2009 at 6:06 AM

Censorship in Dubai

I was in Dubai last weekend for the World Economic Forum Summit on the Global Agenda. (I was on the “Future of the Internet” council; fellow council members Ethan Zuckerman and Jeff Jarvis have written about the event.)

As part of the United Arab Emirates, Dubai censors the Internet:

The government of the United Arab Emirates (UAE) pervasively filters Web sites that contain pornography or relate to alcohol and drug use, gay and lesbian issues, or online dating or gambling. Web-based applications and religious and political sites are also filtered, though less extensively. Additionally, legal controls limit free expression and behavior, restricting political discourse and dissent online.

More detail here.

What was interesting to me was how reasonable the execution of the policy was. Unlike some countries—China, for example—that simply block objectionable content, the UAE displays a screen indicating that the URL has been blocked and offers information about its appeals process.

Posted on November 12, 2008 at 12:56 PM

The NSA Teams Up with the Chinese Government to Limit Internet Anonymity

Definitely strange bedfellows:

A United Nations agency is quietly drafting technical standards, proposed by the Chinese government, to define methods of tracing the original source of Internet communications and potentially curbing the ability of users to remain anonymous.

The U.S. National Security Agency is also participating in the “IP Traceback” drafting group, named Q6/17, which is meeting next week in Geneva to work on the traceback proposal. Members of Q6/17 have declined to release key documents, and meetings are closed to the public.

[…]

A second, apparently leaked ITU document offers surveillance and monitoring justifications that seem well-suited to repressive regimes:

A political opponent to a government publishes articles putting the government in an unfavorable light. The government, having a law against any opposition, tries to identify the source of the negative articles but the articles having been published via a proxy server, is unable to do so protecting the anonymity of the author.

This is being sold as a way to go after the bad guys, but it won’t help. Here’s Steve Bellovin on that issue:

First, very few attacks these days use spoofed source addresses; the real IP address already tells you where the attack is coming from. Second, in case of a DDoS attack, there are too many sources; you can’t do anything with the information. Third, the machine attacking you is almost certainly someone else’s hacked machine and tracking them down (and getting them to clean it up) is itself time-consuming.

Traceback is most useful in monitoring the activities of large masses of people. But of course, that's why the Chinese government and the NSA are so interested in this proposal in the first place.

It’s hard to figure out what the endgame is; the U.N. doesn’t have the authority to impose Internet standards on anyone. In any case, this idea is counter to the U.N. Universal Declaration of Human Rights, Article 19: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” In the U.S., it’s counter to the First Amendment, which has long permitted anonymous speech. On the other hand, basic human and constitutional rights have been jettisoned left and right in the years after 9/11; why should this be any different?

But when the Chinese government and the NSA get together to enhance their ability to spy on us all, you have to wonder what’s gone wrong with the world.

Posted on September 18, 2008 at 6:34 AM

Kill Switches and Remote Control

It used to be that just the entertainment industries wanted to control your computers—and televisions and iPods and everything else—to ensure that you didn’t violate any copyright rules. But now everyone else wants to get their hooks into your gear.

OnStar will soon include the ability for the police to shut off your engine remotely. Buses are getting the same capability, in case terrorists want to re-enact the movie Speed. The Pentagon wants a kill switch installed on airplanes, and is worried about potential enemies installing kill switches on their own equipment.

Microsoft is doing some of the most creative thinking along these lines, with something it’s calling “Digital Manners Policies.” According to its patent application, DMP-enabled devices would accept broadcast “orders” limiting their capabilities. Cellphones could be remotely set to vibrate mode in restaurants and concert halls, and be turned off on airplanes and in hospitals. Cameras could be prohibited from taking pictures in locker rooms and museums, and recording equipment could be disabled in theaters. Professors finally could prevent students from texting one another during class.

The possibilities are endless, and very dangerous. Making this work involves building a nearly flawless hierarchical system of authority. That’s a difficult security problem even in its simplest form. Distributing that system among a variety of different devices—computers, phones, PDAs, cameras, recorders—with different firmware and manufacturers, is even more difficult. Not to mention delegating different levels of authority to various agencies, enterprises, industries and individuals, and then enforcing the necessary safeguards.

Once we go down this path—giving one device authority over other devices—the security problems start piling up. Who has the authority to limit functionality of my devices, and how do they get that authority? What prevents them from abusing that power? Do I get the ability to override their limitations? In what circumstances, and how? Can they override my override?

How do we prevent this from being abused? Can a burglar, for example, enforce a “no photography” rule and prevent security cameras from working? Can the police enforce the same rule to avoid another Rodney King incident? Do the police get “superuser” devices that cannot be limited, and do they get “supercontroller” devices that can limit anything? How do we ensure that only they get them, and what do we do when the devices inevitably fall into the wrong hands?

It’s comparatively easy to make this work in closed specialized systems—OnStar, airplane avionics, military hardware—but much more difficult in open-ended systems. If you think Microsoft’s vision could possibly be securely designed, all you have to do is look at the dismal effectiveness of the various copy-protection and digital-rights-management systems we’ve seen over the years. That’s a similar capabilities-enforcement mechanism, albeit simpler than these more general systems.

And that’s the key to understanding this system. Don’t be fooled by the scare stories of wireless devices on airplanes and in hospitals, or visions of a world where no one is yammering loudly on their cellphones in posh restaurants. This is really about media companies wanting to exert their control further over your electronics. They not only want to prevent you from surreptitiously recording movies and concerts, they want your new television to enforce good “manners” on your computer, and not allow it to record any programs. They want your iPod to politely refuse to copy music to a computer other than your own. They want to enforce their legislated definition of manners: to control what you do and when you do it, and to charge you repeatedly for the privilege whenever possible.

“Digital Manners Policies” is a marketing term. Let’s call this what it really is: Selective Device Jamming. It’s not polite; it’s dangerous. It won’t make anyone more secure—or more polite.

This essay originally appeared in Wired.com.

Posted on July 1, 2008 at 6:48 AM

New TSA ID Requirement

The TSA has a new photo ID requirement:

Beginning Saturday, June 21, 2008 passengers that willfully refuse to provide identification at security checkpoint will be denied access to the secure area of airports. This change will apply exclusively to individuals that simply refuse to provide any identification or assist transportation security officers in ascertaining their identity.

This new procedure will not affect passengers that may have misplaced, lost or otherwise do not have ID but are cooperative with officers. Cooperative passengers without ID may be subjected to additional screening protocols, including enhanced physical screening, enhanced carry-on and/or checked baggage screening, interviews with behavior detection or law enforcement officers and other measures.

That’s right; people who refuse to show ID on principle will not be allowed to fly, but people who claim to have lost their ID will. I feel well-protected against terrorists who can’t lie.

I don’t think any further proof is needed that the ID requirement has nothing to do with security, and everything to do with control.

EDITED TO ADD (6/11): Daniel Solove comments.

Posted on June 11, 2008 at 1:42 PM

Our Data, Ourselves

In the information age, we all have a data shadow.

We leave data everywhere we go. It’s not just our bank accounts and stock portfolios, or our itemized bills, listing every credit card purchase and telephone call we make. It’s automatic road-toll collection systems, supermarket affinity cards, ATMs and so on.

It’s also our lives. Our love letters and friendly chat. Our personal e-mails and SMS messages. Our business plans, strategies and offhand conversations. Our political leanings and positions. And this is just the data we interact with. We all have shadow selves living in the data banks of hundreds of corporations and information brokers—information about us that is both surprisingly personal and uncannily complete—except for the errors that we can neither see nor correct.

What happens to our data happens to ourselves.

This shadow self doesn’t just sit there: It’s constantly touched. It’s examined and judged. When we apply for a bank loan, it’s our data that determines whether or not we get it. When we try to board an airplane, it’s our data that determines how thoroughly we get searched—or whether we get to board at all. If the government wants to investigate us, they’re more likely to go through our data than they are to search our homes; for a lot of that data, they don’t even need a warrant.

Who controls our data controls our lives.

It’s true. Whoever controls our data can decide whether we get a bank loan, get on an airplane or get into a country. Or what sort of discount we get from a merchant, or even how we’re treated by customer support. A potential employer can, illegally in the U.S., examine our medical data and decide whether or not to offer us a job. The police can mine our data and decide whether or not we’re a terrorist risk. If a criminal can get hold of enough of our data, he can open credit cards in our names, siphon money out of our investment accounts, even sell our property. Identity theft is the ultimate proof that control of our data means control of our life.

We need to take back our data.

Our data is a part of us. It’s intimate and personal, and we have basic rights to it. It should be protected from unwanted touch.

We need a comprehensive data privacy law. This law should protect all information about us, and not be limited merely to financial or health information. It should limit others’ ability to buy and sell our information without our knowledge and consent. It should allow us to see information about us held by others, and correct any inaccuracies we find. It should prevent the government from going after our information without judicial oversight. It should enforce data deletion, and limit data collection, where necessary. And we need more than token penalties for deliberate violations.

This is a tall order, and it will take years for us to get there. It’s easy to do nothing and let the market take over. But as we see with things like grocery store club cards and click-through privacy policies on websites, most people either don’t realize the extent to which their privacy is being violated or don’t have any real choice. And businesses, of course, are more than happy to collect, buy, and sell our most intimate information. But the long-term effects of this on society are toxic; we give up control of ourselves.

This essay originally appeared on Wired.com.

EDITED TO ADD (5/21): A rebuttal.

Posted on May 20, 2008 at 1:10 PM

Our Inherent Capability for Evil

This is interesting:

What took place on a peaceful Californian university campus nearly four decades ago still has the power to disturb. Eager to explore the way that “situation” can impact on behaviour, the young psychologist enrolled students to spend two weeks in a simulated jail environment, where they would randomly be assigned roles as either prisoners or guards.

Zimbardo’s volunteers were bright, liberal young men of good character, brimming with opposition to the Vietnam war and authority in general. All expressed a preference to be prisoners, a role they could relate to better. Yet within days the strong, rebellious “prisoners” had become depressed and hopeless. Two broke down emotionally, crushed by the behaviour of the “guards”, who had embraced their authoritarian roles in full, some becoming ever-more sadistic, others passively accepting the abuses taking place in front of them.

Transcripts of the experiment, published in Zimbardo’s book The Lucifer Effect: Understanding How Good People Turn Evil, record in terrifying detail the way reality slipped away from the participants. On the first day, Sunday, it is all self-conscious play-acting between college buddies. On Monday the prisoners start a rebellion, and the guards clamp down, using solitary confinement, sleep deprivation and intimidation. One refers to “these dangerous prisoners”. They have to be prevented from using physical force.

Control techniques become more creative and sadistic. The prisoners are forced to repeat their numbers over and over at roll call, and to sing them. They are woken repeatedly in the night. Their blankets are rolled in dirt and they are ordered painstakingly to pick them clean of burrs. They are harangued and pitted against one another, forced to humiliate each other, pulled in and out of solitary confinement.

On day four, a priest visits. Prisoner 819 is in tears, his hands shaking. Rather than question the experiment, the priest tells him, “You’re going to have to get less emotional.” Later, a guard leads the inmates in chanting “Prisoner 819 did a bad thing!” and blaming him for their poor conditions.

Zimbardo finds 819 covering his ears, “a quivering mess, hysterical”, and says it is time to go home. But 819 refuses to leave until he has proved to his fellow prisoners that he isn’t “bad”. “Listen carefully to me, you’re not 819,” says Zimbardo. “You are Stewart and my name is Dr Zimbardo. I am a psychologist not a prison superintendent, and this is not a real prison.” 819 stops sobbing “and looks like a small child awakening from a nightmare”, according to Zimbardo. But it doesn’t seem to occur to him that things are going too far.

Guard Hellmann, leader of the night shift, plumbs new depths. He wakes up the prisoners to shout abuse in their faces. He forces them to play leapfrog dressed only in smocks, their genitals exposed. A new prisoner, 416, replaces 819, and brings fresh perspective. “I was terrified by each new shift of guards,” he says. “I knew by the first evening that I had done something foolish to volunteer for this study.”

The study is scheduled to run for two weeks. On the evening of Thursday, the fifth day, Zimbardo’s girlfriend, Christina Maslach, also a psychologist, comes to meet him for dinner. She is confronted by a line of prisoners en route to the lavatory, bags over their heads, chained together by the ankles. “What you’re doing to these boys is a terrible thing,” she tells Zimbardo. “Don’t you understand this is a crucible of human behaviour?” he asks. “We are seeing things no one has witnessed before in such a situation.” She tells him this has made her question their relationship, and the person he is.

Downstairs, Guard Hellmann is yelling at the prisoners. “See that hole in the ground? Now do 25 push-ups, fucking that hole. You hear me?” Three prisoners are forced to be “female camels”, bent over, their naked bottoms exposed. Others are told to “hump” them and they simulate sodomy. Zimbardo ends the experiment the following morning.

To read the transcripts or watch the footage is to follow a rapid and dramatic collapse of human decency, resilience and perspective. And so it should be, says Zimbardo. “Evil is a slippery slope,” he says. “Each day is a platform for the abuses of the next day. Each day is only slightly worse than the previous day. Once you don’t object to those first steps it is easy to say, ‘Well, it’s only a little worse than yesterday.’ And you become morally acclimatised to this kind of evil.”

EDITED TO ADD (5/13): The website is worth visiting, especially the section on resisting influence.

Posted on April 16, 2008 at 6:40 AM

Internet Censorship

A review of Access Denied, edited by Ronald Deibert, John Palfrey, Rafal Rohozinski and Jonathan Zittrain, MIT Press: 2008.

In 1993, Internet pioneer John Gilmore said “the net interprets censorship as damage and routes around it”, and we believed him. In 1996, cyberlibertarian John Perry Barlow issued his ‘Declaration of the Independence of Cyberspace’ at the World Economic Forum at Davos, Switzerland, and online. He told governments: “You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear.”

At the time, many shared Barlow’s sentiments. The Internet empowered people. It gave them access to information and couldn’t be stopped, blocked or filtered. Give someone access to the Internet, and they have access to everything. Governments that relied on censorship to control their citizens were doomed.

Today, things are very different. Internet censorship is flourishing. Organizations selectively block employees’ access to the Internet. At least 26 countries—mainly in the Middle East, North Africa, Asia, the Pacific and the former Soviet Union—selectively block their citizens’ Internet access. Even more countries legislate to control what can and cannot be said, downloaded or linked to. “You have no sovereignty where we gather,” said Barlow. Oh yes we do, the governments of the world have replied.

Access Denied is a survey of the practice of Internet filtering, and a sourcebook of details about the countries that engage in the practice. It is written by researchers of the OpenNet Initiative (ONI), an organization dedicated to documenting Internet filtering around the world.

The first half of the book comprises essays written by ONI researchers on the politics, practice, technology, legality and social effects of Internet filtering. There are three basic rationales for Internet censorship: politics and power; social norms, morals and religion; and security concerns.

Some countries, such as India, filter only a few sites; others, such as Iran, extensively filter the Internet. Saudi Arabia tries to block all pornography (social norms and morals). Syria blocks everything from the Israeli domain “.il” (politics and power). Some countries filter only at certain times. During the 2006 elections in Belarus, for example, the website of the main opposition candidate disappeared from the Internet.

The effectiveness of Internet filtering is mixed; it depends on the tools used and the granularity of filtering. It is much easier to block particular URLs or entire domains than it is to block information on a particular topic. Some countries block specific sites or URLs based on a predefined list, but new URLs with similar content appear all the time. Other countries—notably China—try to filter on the basis of keywords in the actual web pages. A halfway measure is to filter on the basis of URL keywords: names of dissidents or political parties, or sexual words.
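The two granularities can be sketched in a few lines of Python. The domains and keywords below are invented placeholders, not real blocklists, and a real censor filters at the network level rather than in application code; this only illustrates the trade-off:

```python
# Invented examples: an exact blocklist versus keyword matching.
BLOCKED_DOMAINS = {"example-dissident.org"}
BLOCKED_KEYWORDS = {"opposition", "protest"}

def blocked_by_list(url: str) -> bool:
    # Easy to implement, easy to evade: a mirror at a new domain passes.
    host = url.split("//", 1)[-1].split("/", 1)[0]
    return host in BLOCKED_DOMAINS

def blocked_by_keyword(url: str, page_text: str) -> bool:
    # Coarser but harder to evade: matches content rather than location,
    # at the cost of overblocking unrelated pages.
    text = (url + " " + page_text).lower()
    return any(word in text for word in BLOCKED_KEYWORDS)

assert blocked_by_list("http://example-dissident.org/news")
assert not blocked_by_list("http://mirror-of-dissident.net/news")
assert blocked_by_keyword("http://mirror-of-dissident.net/news",
                          "The opposition rally drew thousands.")
```

The blocklist misses the mirror site; the keyword filter catches it—and would just as happily block a news report about an opposition party abroad.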

Much of the technology has other applications. Software for filtering is a legitimate product category, purchased by schools to limit access by children to objectionable material and by corporations trying to prevent their employees from being distracted at work. One chapter discusses the ethical implications of companies selling products, services and technologies that enable Internet censorship.

Some censorship is legal, not technical. Countries have laws against publishing certain content, registration requirements that prevent anonymous Internet use, liability laws that force Internet service providers to filter themselves, or surveillance. Egypt does not engage in technical Internet filtering; instead, its laws discourage the publishing and reading of certain content—it has even jailed people for their online activities.

The second half of Access Denied consists of detailed descriptions of Internet use, regulations and censorship in eight regions of the world, and in each of 40 different countries. The ONI found evidence of censorship in 26 of those 40. For the other 14 countries, it summarizes the legal and regulatory framework surrounding Internet use, and the test results that indicated no censorship. This leads to 200 pages of rather dry reading, but it is vitally important to have this information well-documented and easily accessible. The book’s data are from 2006, but the authors promise frequent updates on the ONI website.

No set of Internet censorship measures is perfect. It is often easy to find the same information on uncensored URLs, and relatively easy to get around the filtering mechanisms and to view prohibited web pages if you know what you’re doing. But most people don’t have the computer skills to bypass controls, and in a country where doing so is punishable by jail—or worse—few take the risk. So even porous and ineffective attempts at censorship can become very effective socially and politically.

In 1996, Barlow said: “You are trying to ward off the virus of liberty by erecting guard posts at the frontiers of cyberspace. These may keep out the contagion for some time, but they will not work in a world that will soon be blanketed in bit-bearing media.”

Brave words, but premature. Certainly, there is much more information available to many more people today than there was in 1996. But the Internet is made up of physical computers and connections that exist within national boundaries. Today’s Internet still has borders and, increasingly, countries want to control what passes through them. In documenting this control, the ONI has performed an invaluable service.

This was originally published in Nature.

Posted on April 7, 2008 at 5:00 AM
