Entries Tagged "censorship"

Censorship in Dubai

I was in Dubai last weekend for the World Economic Forum Summit on the Global Agenda. (I was on the “Future of the Internet” council; fellow council members Ethan Zuckerman and Jeff Jarvis have written about the event.)

As part of the United Arab Emirates, Dubai censors the Internet:

The government of the United Arab Emirates (UAE) pervasively filters Web sites that contain pornography or relate to alcohol and drug use, gay and lesbian issues, or online dating or gambling. Web-based applications and religious and political sites are also filtered, though less extensively. Additionally, legal controls limit free expression and behavior, restricting political discourse and dissent online.

More detail here.

What was interesting to me was how reasonable the execution of the policy was. Unlike some countries—China, for example—that simply block objectionable content, the UAE displays a screen indicating that the URL has been blocked and offers information about its appeals process.

Posted on November 12, 2008 at 12:56 PM

Internet Censorship

A review of Access Denied, edited by Ronald Deibert, John Palfrey, Rafal Rohozinski and Jonathan Zittrain, MIT Press: 2008.

In 1993, Internet pioneer John Gilmore said “the net interprets censorship as damage and routes around it,” and we believed him. In 1996, cyberlibertarian John Perry Barlow issued his “Declaration of the Independence of Cyberspace” at the World Economic Forum at Davos, Switzerland, and online. He told governments: “You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear.”

At the time, many shared Barlow’s sentiments. The Internet empowered people. It gave them access to information and couldn’t be stopped, blocked or filtered. Give someone access to the Internet, and they have access to everything. Governments that relied on censorship to control their citizens were doomed.

Today, things are very different. Internet censorship is flourishing. Organizations selectively block employees’ access to the Internet. At least 26 countries—mainly in the Middle East, North Africa, Asia, the Pacific and the former Soviet Union—selectively block their citizens’ Internet access. Even more countries legislate to control what can and cannot be said, downloaded or linked to. “You have no sovereignty where we gather,” said Barlow. Oh yes we do, the governments of the world have replied.

Access Denied is a survey of the practice of Internet filtering, and a sourcebook of details about the countries that engage in the practice. It is written by researchers of the OpenNet Initiative (ONI), an organization dedicated to documenting Internet filtering around the world.

The first half of the book comprises essays written by ONI researchers on the politics, practice, technology, legality and social effects of Internet filtering. There are three basic rationales for Internet censorship: politics and power; social norms, morals and religion; and security concerns.

Some countries, such as India, filter only a few sites; others, such as Iran, extensively filter the Internet. Saudi Arabia tries to block all pornography (social norms and morals). Syria blocks everything from the Israeli domain “.il” (politics and power). Some countries filter only at certain times. During the 2006 elections in Belarus, for example, the website of the main opposition candidate disappeared from the Internet.

The effectiveness of Internet filtering is mixed; it depends on the tools used and the granularity of filtering. It is much easier to block particular URLs or entire domains than it is to block information on a particular topic. Some countries block specific sites or URLs based on a predefined list, but new URLs with similar content appear all the time. Other countries—notably China—try to filter on the basis of keywords in the actual web pages. A halfway measure is to filter on the basis of URL keywords: names of dissidents or political parties, or sexual words.
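As a rough sketch (my illustration, not from the book), the three granularities differ mainly in what the filter has to examine; every domain and keyword below is a placeholder:

    # A toy filter showing the three granularities described above.
    from urllib.parse import urlparse

    BLOCKED_DOMAINS = {"banned-site.example"}        # exact blocklist entry (placeholder)
    BLOCKED_TLDS = (".il",)                          # whole-domain rule, like Syria's ".il" block
    URL_KEYWORDS = ("dissident-name", "party-name")  # URL-keyword halfway measure (placeholders)
    PAGE_KEYWORDS = ("banned topic",)                # content keywords, as in China (placeholder)

    def blocked(url: str, page_text: str = "") -> bool:
        host = (urlparse(url).hostname or "").lower()
        if host in BLOCKED_DOMAINS or host.endswith(BLOCKED_TLDS):
            return True                              # cheapest: match the address alone
        if any(k in url.lower() for k in URL_KEYWORDS):
            return True                              # still cheap: match the URL string
        # Hardest: inspect the page body itself, which requires seeing the content.
        return any(k in page_text.lower() for k in PAGE_KEYWORDS)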

Much of the technology has other applications. Software for filtering is a legitimate product category, purchased by schools to limit access by children to objectionable material and by corporations trying to prevent their employees from being distracted at work. One chapter discusses the ethical implications of companies selling products, services and technologies that enable Internet censorship.

Some censorship is legal, not technical. Countries have laws against publishing certain content, registration requirements that prevent anonymous Internet use, liability laws that force Internet service providers to filter themselves, or surveillance. Egypt does not engage in technical Internet filtering; instead, its laws discourage the publishing and reading of certain content—it has even jailed people for their online activities.

The second half of Access Denied consists of detailed descriptions of Internet use, regulations and censorship in eight regions of the world, and in each of 40 different countries. The ONI found evidence of censorship in 26 of those 40. For the other 14 countries, it summarizes the legal and regulatory framework surrounding Internet use, and the test results that indicated no censorship. This leads to 200 pages of rather dry reading, but it is vitally important to have this information well documented and easily accessible. The book’s data are from 2006, but the authors promise frequent updates on the ONI website.

No set of Internet censorship measures is perfect. It is often easy to find the same information on uncensored URLs, and relatively easy to get around the filtering mechanisms and to view prohibited web pages if you know what you’re doing. But most people don’t have the computer skills to bypass controls, and in a country where doing so is punishable by jail—or worse—few take the risk. So even porous and ineffective attempts at censorship can become very effective socially and politically.

In 1996, Barlow said: “You are trying to ward off the virus of liberty by erecting guard posts at the frontiers of cyberspace. These may keep out the contagion for some time, but they will not work in a world that will soon be blanketed in bit-bearing media.”

Brave words, but premature. Certainly, there is much more information available to many more people today than there was in 1996. But the Internet is made up of physical computers and connections that exist within national boundaries. Today’s Internet still has borders and, increasingly, countries want to control what passes through them. In documenting this control, the ONI has performed an invaluable service.

This was originally published in Nature.

Posted on April 7, 2008 at 5:00 AM

Benevolent Worms

This is a stupid idea:

Milan Vojnovic and colleagues from Microsoft Research in Cambridge, UK, want to make useful pieces of information such as software updates behave more like computer worms: spreading between computers instead of being downloaded from central servers.

The research may also help defend against malicious types of worm, the researchers say.

Software worms spread by self-replicating. After infecting one computer they probe others to find new hosts. Most existing worms randomly probe computers when looking for new hosts to infect, but that is inefficient, says Vojnovic, because they waste time exploring groups or “subnets” of computers that contain few uninfected hosts.
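A toy simulation (my illustration, not the researchers’ model) makes the inefficiency concrete: when hosts cluster in a few dense subnets, a prober that reinvests in subnets where it recently found hosts needs far fewer probes than one that picks addresses uniformly.

    # Compare uniform random probing with subnet-biased probing in a toy network.
    import random

    random.seed(0)
    SUBNETS, ADDRS = 256, 256
    # Most subnets are nearly empty; a few are dense.
    density = [0.3 if random.random() < 0.05 else 0.005 for _ in range(SUBNETS)]
    hosts = {(s, a) for s in range(SUBNETS) for a in range(ADDRS)
             if random.random() < density[s]}

    def probes_to_find(n, pick_subnet):
        found, probes, hits = set(), 0, [1] * SUBNETS   # optimistic prior per subnet
        while len(found) < n:
            s = pick_subnet(hits)
            target = (s, random.randrange(ADDRS))
            probes += 1
            if target in hosts and target not in found:
                found.add(target)
                hits[s] += 1                            # reinforce productive subnets
        return probes

    uniform = lambda hits: random.randrange(SUBNETS)
    biased = lambda hits: random.choices(range(SUBNETS), weights=hits)[0]

    print("uniform probing:", probes_to_find(500, uniform), "probes")
    print("biased probing: ", probes_to_find(500, biased), "probes")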

This idea pops up every few years. This is what I wrote back in 2003, updating something I wrote in 2000:

This is tempting for several reasons. One, it’s poetic: turning a weapon against itself. Two, it lets ethical programmers share in the fun of designing worms. And three, it sounds like a promising technique to solve one of the nastiest online security problems: patching or repairing computers’ vulnerabilities.

Everyone knows that patching is in shambles. Users, especially home users, don’t do it. The best patching techniques involve a lot of negotiation, pleading, and manual labor…things that nobody enjoys very much. Beneficial worms look like a happy solution. You turn a Byzantine social problem into a fun technical problem. You don’t have to convince people to install patches and system updates; you use technology to force them to do what you want.

And that’s exactly why it’s a terrible idea. Patching other people’s machines without annoying them is good; patching other people’s machines without their consent is not. A worm is not “bad” or “good” depending on its payload. Viral propagation mechanisms are inherently bad, and giving them beneficial payloads doesn’t make things better. A worm is no tool for any rational network administrator, regardless of intent.

A good software distribution mechanism has the following characteristics:

  1. People can choose the options they want.
  2. Installation is adapted to the host it’s running on.
  3. It’s easy to stop an installation in progress, or uninstall the software.
  4. It’s easy to know what has been installed where.

A successful worm, on the other hand, runs without the consent of the user. It has a small amount of code, and once it starts to spread, it is self-propagating, and will keep going automatically until it’s halted.

These characteristics are simply incompatible. Giving the user more choice, making installation flexible and universal, allowing for uninstallation—all of these make worms harder to propagate. Designing a better software distribution mechanism makes it a worse worm, and vice versa. On the other hand, making the worm quieter and less obvious to the user, making it smaller and easier to propagate, and making it impossible to contain all make for bad software distribution.
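To make the contrast concrete, here is a minimal sketch (my construction, not from the essay) of a distribution mechanism with the four properties listed above; the manifest URL and installer are hypothetical stand-ins:

    # A consent-based updater: opt-in, host-aware, reversible, and audited.
    import json, platform, urllib.request

    MANIFEST_URL = "https://updates.example.com/manifest.json"  # hypothetical endpoint
    AUDIT_LOG = "installed.json"                                # property 4: record of installs

    def install(pkg):
        """Hypothetical installer stub; a real one is interruptible and reversible (property 3)."""
        print(f"installing {pkg} ...")

    def check_and_install():
        manifest = json.load(urllib.request.urlopen(MANIFEST_URL))
        # Property 1: explicit opt-in; nothing installs without the user's consent.
        if input(f"Install {manifest['name']} {manifest['version']}? [y/N] ").lower() != "y":
            return
        # Property 2: pick the artifact built for this particular host.
        pkg = manifest["artifacts"].get(platform.system())
        if pkg is None:
            return
        install(pkg)
        # Property 4: record what was installed where, so it can be audited or removed.
        with open(AUDIT_LOG, "a") as log:
            log.write(json.dumps({"host": platform.node(), "pkg": pkg}) + "\n")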

EDITED TO ADD (2/19): This is worth reading on the topic.

EDITED TO ADD (2/19): Microsoft is trying to dispel the rumor that it is working on this technology.

EDITED TO ADD (2/21): Using benevolent worms to test Internet censorship.

EDITED TO ADD (3/13): The benevolent W32.Welchia.Worm, intended to fix Blaster-infected systems, just created havoc.

Posted on February 19, 2008 at 6:57 AM

Chinese National Firewall Isn't All that Effective

Interesting research:

The study, carried out by graduate student Earl Barr and colleagues in the computer science department of UC Davis and the University of New Mexico, exploited the workings of the Chinese firewall to investigate its effectiveness.

Unlike many other nations, Chinese authorities do not simply block webpages that discuss banned subjects such as the Tiananmen Square massacre.

Instead, the technology deployed by the Chinese government scans data flowing across its section of the net for banned words or web addresses.

When the filtering system spots a banned term it sends instructions to the source server and destination PC to stop the flow of data.
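In other words, the filter acts as an on-path observer that injects TCP reset packets rather than a device that drops traffic in-line. A minimal sketch of that behavior (my illustration, written with scapy; the banned-term list is a placeholder) might look like:

    # Watch TCP traffic; when a payload contains a banned term, forge RST
    # packets toward both endpoints to tear the connection down.
    from scapy.all import IP, TCP, Raw, send, sniff

    BANNED = [b"falun", b"tiananmen"]  # illustrative placeholders

    def reset_if_banned(pkt):
        if not (pkt.haslayer(TCP) and pkt.haslayer(Raw)):
            return
        payload = bytes(pkt[Raw].load).lower()
        if not any(term in payload for term in BANNED):
            return
        ip, tcp = pkt[IP], pkt[TCP]
        # Forged RST to the destination, impersonating the source...
        send(IP(src=ip.src, dst=ip.dst) /
             TCP(sport=tcp.sport, dport=tcp.dport, flags="R",
                 seq=tcp.seq + len(payload)))
        # ...and one to the source, impersonating the destination.
        send(IP(src=ip.dst, dst=ip.src) /
             TCP(sport=tcp.dport, dport=tcp.sport, flags="R", seq=tcp.ack))

    sniff(filter="tcp", prn=reset_if_banned, store=False)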

Mr Barr and colleagues manipulated this to see how far inside China’s net messages containing banned terms could reach before the shutdown instructions were sent.

The team used words taken from the Chinese version of Wikipedia to load the data streams that were then dispatched into China’s network. If a data stream was stopped, a technique known as “latent semantic analysis” was used to find related words to see if they too were blocked.
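For readers unfamiliar with it, latent semantic analysis reduces a term-document matrix with a singular value decomposition so that words used in similar contexts land near each other; here is a minimal sketch (my illustration, not the researchers’ code) using scikit-learn:

    # Toy LSA pipeline: TF-IDF term-document matrix, truncated SVD, then
    # nearest-neighbor lookup of terms in the latent space.
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer

    corpus = ["..."]  # placeholder: the study drew its text from the Chinese Wikipedia

    vec = TfidfVectorizer()
    X = vec.fit_transform(corpus)         # documents x terms
    svd = TruncatedSVD(n_components=100)  # keep 100 latent dimensions
    svd.fit(X)
    term_vecs = svd.components_.T         # one latent vector per term

    def related_terms(seed, k=10):
        """Return the k terms whose latent vectors are closest to the seed's."""
        v = term_vecs[vec.vocabulary_[seed]]
        sims = term_vecs @ v / (np.linalg.norm(term_vecs, axis=1)
                                * np.linalg.norm(v) + 1e-12)
        terms = vec.get_feature_names_out()
        return [terms[i] for i in np.argsort(-sims)[1:k + 1]]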

The researchers found that the blocking did not happen at the edge of China’s network but was often done when the packets of loaded data had penetrated deep inside.

Blocked were terms related to the Falun Gong movement, Tiananmen Square protest groups, Nazi Germany and democracy.

On about 28% of the paths into China’s net tested by the researchers, blocking failed altogether, suggesting that web users could browse unencumbered at least some of the time.

Filtering and blocking was “particularly erratic” when lots of China’s web users were online, said the researchers.

Another article.

Posted on September 14, 2007 at 7:52 AM

Ubiquity of Communication

Read this essay by Randy Farmer, a pioneer of virtual online worlds, explaining something called Disney’s ToonTown.

Designers of online worlds for children wanted to severely restrict the communication that users could have with each other, lest somebody say something that’s inappropriate for children to hear.

Randy discusses various approaches to this problem that were tried over the years. The ToonTown solution was to restrict users to something called “Speedchat,” a menu of pre-constructed sentences, all innocuous. They also gave users the ability to conduct unrestricted conversations with each other, provided they both knew a secret code string. The designers presumed the code strings would be passed only to people a user knew in real life, perhaps on a school playground or among neighbors.

Users found ways to pass code strings to strangers anyway. This page describes several protocols, using gestures, canned sentences, or movement of objects in the game.

After you read the ways above to make secret friends, look here. Another way to make secret friends with toons you don’t know is to form letters and numbers with the picture frames in your house. You may see toons around who have a lot of picture frames at their toon estates; they are usually looking for secret friends. This is how to do it! So, let’s say you wanted to make secret friends with a toon named Lily. Your “pretend” secret friend code is 4yt 56s.

  • You: *Move frames around in house to form a 4.* “Okay.”
  • Her: “Okay.” She has now written the first character down on a piece of paper.
  • You: *Move frames around to form a y.* “Okay.”
  • Her: “Okay.” She has now written the second character down on paper.
  • You: *Move frames around in house to form a t.* “Okay.”
  • Her: “Okay.” She has now written the third character down on paper.
  • You: *Do nothing.* “Okay.” This shows that you have made a space.
  • Repeat the process.
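The exchange above is a classic covert channel: a secret leaks through a channel that only permits innocuous symbols. A toy model of it (my construction, not from the post):

    # A toy model of the picture-frame covert channel: each character of the
    # secret code is "drawn" with frames, a deliberate pause encodes a space,
    # and the only chat traffic is the canned, filter-approved phrase "Okay."
    def send_code(code):
        for ch in code:
            action = "idle" if ch == " " else f"frames form {ch!r}"
            yield (action, "Okay")          # every symbol is acknowledged in chat

    def receive_code(events):
        chars = []
        for action, ack in events:
            assert ack == "Okay"            # nothing but the canned phrase crosses the filter
            chars.append(" " if action == "idle" else action[-2])
        return "".join(chars)

    print(receive_code(send_code("4yt 56s")))   # prints: 4yt 56s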

Randy writes: “By hook, or by crook, customers will always find a way to connect with each other.”

Posted on June 20, 2007 at 12:48 PM
