Entries Tagged "filtering"


Natural Language Shellcode

Nice:

In this paper we revisit the assumption that shellcode need be fundamentally different in structure than non-executable data. Specifically, we elucidate how one can use natural language generation techniques to produce shellcode that is superficially similar to English prose. We argue that this new development poses significant challenges for inline payload-based inspection (and emulation) as a defensive measure, and also highlights the need for designing more efficient techniques for preventing shellcode injection attacks altogether.
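
Part of what makes this plausible is that x86 is a dense instruction set: many printable ASCII bytes already decode to valid instructions. Here is a quick illustration of that point (my own sketch, not the paper's technique; it assumes the capstone disassembler package is installed), which simply disassembles an ordinary English sentence as if it were 32-bit x86 code:

```python
# Illustration only: disassemble an English sentence as 32-bit x86 code.
# Requires the capstone disassembler ("pip install capstone").
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

sentence = b"The quick brown fox jumps over the lazy dog again and again"
md = Cs(CS_ARCH_X86, CS_MODE_32)

decoded = 0
for insn in md.disasm(sentence, 0x1000):
    print(f"{insn.address:#06x}  {insn.bytes.hex():<12} {insn.mnemonic} {insn.op_str}")
    decoded += insn.size

# Disassembly stops at the first byte sequence that is not a valid instruction,
# so `decoded` measures how much of the prose decodes cleanly.
print(f"{decoded} of {len(sentence)} bytes decoded as x86 instructions")
```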

Posted on March 25, 2010 at 7:16 AM

Building in Surveillance

China is the world’s most successful Internet censor. While the Great Firewall of China isn’t perfect, it effectively limits information flowing in and out of the country. But now the Chinese government is taking things one step further.

Under a requirement taking effect soon, every computer sold in China will have to contain the Green Dam Youth Escort software package. Ostensibly a pornography filter, it is government spyware that will watch every citizen on the Internet.

Green Dam has many uses. It can police a list of forbidden Web sites. It can monitor a user’s reading habits. It can even enlist the computer in some massive botnet attack, as part of a hypothetical future cyberwar.

China’s actions may be extreme, but they’re not unique. Democratic governments around the world — Sweden, Canada and the United Kingdom, for example — are rushing to pass laws giving their police new powers of Internet surveillance, in many cases requiring communications system providers to redesign products and services they sell.

Many are passing data retention laws, forcing companies to keep information on their customers. Just recently, the German government proposed giving itself the power to censor the Internet.

The United States is no exception. The 1994 CALEA law required phone companies to facilitate FBI eavesdropping, and since 2001, the NSA has built substantial eavesdropping systems in the United States. The government has repeatedly proposed Internet data retention laws, allowing surveillance into past activities as well as present.

Systems like this invite criminal appropriation and government abuse. New police powers, enacted to fight terrorism, are already used in situations of normal crime. Internet surveillance and control will be no different.

Official misuses are bad enough, but the unofficial uses worry me more. Any surveillance and control system must itself be secured. An infrastructure conducive to surveillance and control invites surveillance and control, both by the people you expect and by the people you don’t.

China’s government designed Green Dam for its own use, but it’s been subverted. Why does anyone think that criminals won’t be able to use it to steal bank account and credit card information, use it to launch other attacks, or turn it into a massive spam-sending botnet?

Why does anyone think that only authorized law enforcement will mine collected Internet data or eavesdrop on phone and IM conversations?

These risks are not theoretical. After 9/11, the National Security Agency built a surveillance infrastructure to eavesdrop on telephone calls and e-mails within the United States.

Although procedural rules stated that only non-Americans and international phone calls were to be listened to, actual practice didn’t always match those rules. NSA analysts collected more data than they were authorized to, and used the system to spy on wives, girlfriends, and famous people such as President Clinton.

But that’s not the most serious misuse of a telecommunications surveillance infrastructure. In Greece, between June 2004 and March 2005, someone wiretapped more than 100 cell phones belonging to members of the Greek government — the prime minister and the ministers of defense, foreign affairs and justice.

Ericsson built this wiretapping capability into Vodafone’s products, and enabled it only for governments that requested it. Greece wasn’t one of those governments, but someone still unknown — a rival political party? organized crime? — figured out how to surreptitiously turn the feature on.

Researchers have already found security flaws in Green Dam that would allow hackers to take over the computers. Of course there are additional flaws, and criminals are looking for them.

Surveillance infrastructure can be exported, which also aids totalitarianism around the world. Western companies like Siemens, Nokia, and Secure Computing built Iran’s surveillance infrastructure. U.S. companies helped build China’s electronic police state. Twitter’s anonymity saved the lives of Iranian dissidents — anonymity that many governments want to eliminate.

Every year brings more Internet censorship and control — not just in countries like China and Iran, but in the United States, the United Kingdom, Canada and other free countries.

The control movement is egged on by law enforcement, trying to catch terrorists, child pornographers and other criminals, and by media companies, trying to stop file sharers.

It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state. No matter what the eavesdroppers and censors say, these systems put us all at greater risk. Communications systems that have no inherent eavesdropping capabilities are more secure than systems with those capabilities built in.

This essay previously appeared — albeit with fewer links — on the Minnesota Public Radio website.

Posted on August 3, 2009 at 6:43 AM

Internet Censorship

A review of Access Denied, edited by Ronald Deibert, John Palfrey, Rafal Rohozinski and Jonathan Zittrain, MIT Press: 2008.

In 1993, Internet pioneer John Gilmore said “the net interprets censorship as damage and routes around it”, and we believed him. In 1996, cyberlibertarian John Perry Barlow issued his ‘Declaration of the Independence of Cyberspace’ at the World Economic Forum at Davos, Switzerland, and online. He told governments: “You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear.”

At the time, many shared Barlow’s sentiments. The Internet empowered people. It gave them access to information and couldn’t be stopped, blocked or filtered. Give someone access to the Internet, and they have access to everything. Governments that relied on censorship to control their citizens were doomed.

Today, things are very different. Internet censorship is flourishing. Organizations selectively block employees’ access to the Internet. At least 26 countries — mainly in the Middle East, North Africa, Asia, the Pacific and the former Soviet Union — selectively block their citizens’ Internet access. Even more countries legislate to control what can and cannot be said, downloaded or linked to. “You have no sovereignty where we gather,” said Barlow. Oh yes we do, the governments of the world have replied.

Access Denied is a survey of the practice of Internet filtering, and a sourcebook of details about the countries that engage in the practice. It is written by researchers of the OpenNet Initiative (ONI), an organization dedicated to documenting Internet filtering around the world.

The first half of the book comprises essays written by ONI researchers on the politics, practice, technology, legality and social effects of Internet filtering. There are three basic rationales for Internet censorship: politics and power; social norms, morals and religion; and security concerns.

Some countries, such as India, filter only a few sites; others, such as Iran, extensively filter the Internet. Saudi Arabia tries to block all pornography (social norms and morals). Syria blocks everything from the Israeli domain “.il” (politics and power). Some countries filter only at certain times. During the 2006 elections in Belarus, for example, the website of the main opposition candidate disappeared from the Internet.

The effectiveness of Internet filtering is mixed; it depends on the tools used and the granularity of filtering. It is much easier to block particular URLs or entire domains than it is to block information on a particular topic. Some countries block specific sites or URLs based on a predefined list, but new URLs with similar content appear all the time. Other countries — notably China — try to filter on the basis of keywords in the actual web pages. A halfway measure is to filter on the basis of URL keywords: names of dissidents or political parties, or sexual words.
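
Here is a minimal sketch of those three levels of granularity (my own illustration; the blocked domains, URLs, and keywords are hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical examples of the three filtering granularities described above.
BLOCKED_DOMAINS = {"example-banned.org"}           # whole-domain blocking
BLOCKED_URLS = {"http://example.com/banned-page"}  # specific-URL blocking
URL_KEYWORDS = {"dissident", "protest"}            # URL-keyword blocking

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    if host in BLOCKED_DOMAINS:
        return True
    if url in BLOCKED_URLS:
        return True
    # The "halfway measure": match keywords in the URL itself. Cheap, but a
    # mirror of the same content under a new name slips straight through.
    return any(kw in url.lower() for kw in URL_KEYWORDS)

print(is_blocked("http://example-banned.org/news"))       # True: domain match
print(is_blocked("http://mirror.example.net/protest-1"))  # True: URL keyword
print(is_blocked("http://mirror.example.net/page-7"))     # False: same content, new name
```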

Much of the technology has other applications. Software for filtering is a legitimate product category, purchased by schools to limit access by children to objectionable material and by corporations trying to prevent their employees from being distracted at work. One chapter discusses the ethical implications of companies selling products, services and technologies that enable Internet censorship.

Some censorship is legal, not technical. Countries have laws against publishing certain content, registration requirements that prevent anonymous Internet use, liability laws that force Internet service providers to filter themselves, or surveillance. Egypt does not engage in technical Internet filtering; instead, its laws discourage the publishing and reading of certain content — it has even jailed people for their online activities.

The second half of Access Denied consists of detailed descriptions of Internet use, regulations and censorship in eight regions of the world, and in each of 40 different countries. The ONI found evidence of censorship in 26 of those 40. For the other 14 countries, it summarizes the legal and regulatory framework surrounding Internet use, along with the test results that indicated no censorship. This leads to 200 pages of rather dry reading, but it is vitally important to have this information well-documented and easily accessible. The book’s data are from 2006, but the authors promise frequent updates on the ONI website.

No set of Internet censorship measures is perfect. It is often easy to find the same information on uncensored URLs, and relatively easy to get around the filtering mechanisms and to view prohibited web pages if you know what you’re doing. But most people don’t have the computer skills to bypass controls, and in a country where doing so is punishable by jail — or worse — few take the risk. So even porous and ineffective attempts at censorship can become very effective socially and politically.

In 1996, Barlow said: “You are trying to ward off the virus of liberty by erecting guard posts at the frontiers of cyberspace. These may keep out the contagion for some time, but they will not work in a world that will soon be blanketed in bit-bearing media.”

Brave words, but premature. Certainly, there is much more information available to many more people today than there was in 1996. But the Internet is made up of physical computers and connections that exist within national boundaries. Today’s Internet still has borders and, increasingly, countries want to control what passes through them. In documenting this control, the ONI has performed an invaluable service.

This was originally published in Nature.

Posted on April 7, 2008 at 5:00 AM

Chinese National Firewall Isn't All that Effective

Interesting research:

The study, carried out by graduate student Earl Barr and colleagues in the computer science department of UC Davis and the University of New Mexico, exploited the workings of the Chinese firewall to investigate its effectiveness.

Unlike many other nations, Chinese authorities do not simply block webpages that discuss banned subjects such as the Tiananmen Square massacre.

Instead the technology deployed by the Chinese government scans data flowing across its section of the net for banned words or web addresses.

When the filtering system spots a banned term it sends instructions to the source server and destination PC to stop the flow of data.

Mr Barr and colleagues manipulated this to see how far inside China’s net messages containing banned terms could reach before the shutdown instructions were sent.

The team used words taken from the Chinese version of Wikipedia to load the data streams that were then dispatched into China’s network. If a data stream was stopped, a technique known as “latent semantic analysis” was used to find related words to see if they too were blocked.

The researchers found that the blocking did not happen at the edge of China’s network but often was done when the packets of loaded data had penetrated deep inside.

Blocked were terms related to the Falun Gong movement, Tiananmen Square protest groups, Nazi Germany and democracy.

On about 28% of the paths into China’s net tested by the researchers, blocking failed altogether, suggesting that web users would browse unencumbered at least some of the time.

Filtering and blocking was “particularly erratic” when lots of China’s web users were online, said the researchers.
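
The probing technique is conceptually simple: send traffic containing a banned term toward a host inside the filtered network and watch for an injected reset. A rough sketch of the idea (my own illustration, not the researchers' actual tooling; the host and keyword are placeholders):

```python
import socket

def probe(host: str, keyword: str, port: int = 80, timeout: float = 5.0) -> str:
    """Send an HTTP request containing `keyword` and report what happens."""
    request = (f"GET /?q={keyword} HTTP/1.1\r\n"
               f"Host: {host}\r\nConnection: close\r\n\r\n").encode()
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(request)
            data = s.recv(4096)
            return "response received" if data else "connection closed quietly"
    except ConnectionResetError:
        # An injected TCP reset (the "stop the flow" instruction described
        # above) shows up to the client as a reset connection.
        return "connection reset (possible keyword filtering)"
    except socket.timeout:
        return "timed out"

# Placeholder host and keyword; a real study would repeat this along many paths.
print(probe("www.example.com", "some-banned-term"))
```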

Another article.

Posted on September 14, 2007 at 7:52 AM

Australian Porn Filter Cracked

The headline is all you need to know:

Teen cracks AU$84 million porn filter in 30 minutes

(AU$84 million is $69.5 million U.S.; that’s real money.)

Remember that the issue isn’t that one smart kid can circumvent the censorship software, it’s that one smart kid — maybe this one, maybe another one — can write a piece of shareware that allows everyone to circumvent the censorship software.

It’s the same with DRM; technical measures just aren’t going to work.

Posted on August 30, 2007 at 12:50 PM

The Kutztown 13

Thirteen Pennsylvania high-school kids — the Kutztown 13 — are being charged with felonies:

They’re being called the Kutztown 13 — a group of high schoolers charged with felonies for bypassing security with school-issued laptops, downloading forbidden internet goodies and using monitoring software to spy on district administrators.

The students, their families and outraged supporters say authorities are overreacting, punishing the kids not for any heinous behavior — no malicious acts are alleged — but rather because they outsmarted the district’s technology workers….

The trouble began last fall after the district issued some 600 Apple iBook laptops to every student at the high school about 50 miles northwest of Philadelphia. The computers were loaded with a filtering program that limited Internet access. They also had software that let administrators see what students were viewing on their screens.

But those barriers proved easily surmountable: The administrative password that allowed students to reconfigure computers and obtain unrestricted Internet access was easy to obtain. A shortened version of the school’s street address, the password was taped to the backs of the computers.

The password got passed around and students began downloading such forbidden programs as the popular iChat instant-messaging tool.

At least one student viewed pornography. Some students also turned off the remote monitoring function and turned the tables on their elders, using it to view administrators’ own computer screens.

There’s more to the story, though. Here’s some good commentary on the issue:

What the parents don’t mention — but the school did in a press release — is that it wasn’t as if the school came down with the Hammer of God out of nowhere.

These kids were caught and punished for doing this stuff, and their parents informed.

Over and over.

Quoth the release:

“Unfortunately, after repeated warnings and disciplinary actions, a few students continued to misuse the school-issued laptops to varying degrees. The disciplinary actions included detentions, in-school suspensions, loss of Internet access, and loss of computer privileges. After each disciplinary action, parents received either written notification or telephone calls.”

What was the parents’ reaction to those disciplinary actions? Some of them complained that — despite signing a document agreeing to the acceptable use policy — the kids should be able to do whatever they wanted to with the free machines.

“We signed it, but we didn’t mean it”?

Yes, the kids should be punished. No, a felony conviction is not the way to punish them.

The problem is that the punishment doesn’t fit the crime. Breaking the rules is what kids do. Society needs to deal with that, yes, but it needs to deal with that in a way that doesn’t ruin lives. Deterrence is critical if we are to ever have a lawful society on the internet, but deterrence has to come from rational prosecution. This simply isn’t rational.

EDITED TO ADD (2 Sep): It seems that charges have been dropped.

Posted on August 22, 2005 at 6:56 AM

Combating Spam

Spam is back in the news, and it has a new name. This time it’s voice-over-IP spam, and it has the clever name of “spit” (spam over Internet telephony). Spit has the potential to completely ruin VoIP. No one is going to install the system if they’re going to get dozens of calls a day from audio spammers. Or, at least, they’re only going to accept phone calls from a white list of previously known callers.

VoIP spam joins the ranks of e-mail spam, Usenet newsgroup spam, instant message spam, cell phone text message spam, and blog comment spam. And, if you think broadly enough, these computer-network spam delivery mechanisms join the ranks of computer telemarketing (phone spam), junk mail (paper spam), billboards (visual space spam), and cars driving through town with megaphones (audio spam). It’s all basically the same thing — unsolicited marketing messages — and only by understanding the problem at this level of generality can we discuss solutions.

In general, the goal of advertising is to influence people. Usually it’s to influence people to purchase a product, but it could just as easily be to influence people to support a particular political candidate or position. Advertising does this by implanting a marketing message into the brain of the recipient. The mechanism of implantation is simply a tactic.

Tactics for unsolicited marketing messages rise and fall in popularity based on their cost and benefit. If the benefit is significant, people are willing to spend more. If the benefit is small, people will only do it if it is cheap. A 30-second prime-time television ad costs 1.8 cents per adult viewer, a full-page color magazine ad about 0.9 cents per reader. A highway billboard costs 0.21 cents per car. Direct mail is the most expensive, at over 50 cents per third-class letter mailed. (That’s why targeted mailing lists are so valuable; they increase the per-piece benefit.)

Spam is such a common tactic not because it’s particularly effective; the response rates for spam are very low. It’s common because it’s ridiculously cheap. Typically, spammers charge less than a hundredth of a cent per e-mail. (And that number is just what spamming houses charge their customers to deliver spam; if you’re a clever hacker, you can build your own spam network for much less money.) If it is worth $10 for you to successfully influence one person — to buy your product, vote for your guy, whatever — then you only need a 1 in 100,000 success rate. You can market really marginal products with spam.
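
The break-even arithmetic, using the numbers above (a quick sketch):

```python
# Break-even success rate: cost per message divided by the value of one success.
cost_per_message = 0.0001    # a hundredth of a cent, in dollars
value_per_success = 10.00    # worth of influencing one person, in dollars

break_even = cost_per_message / value_per_success   # 0.00001
print(f"need roughly 1 success per {1 / break_even:,.0f} messages sent")  # 100,000
```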

So far, so good. But the cost/benefit calculation is missing a component: the “cost” of annoying people. Everyone who is not influenced by the marketing message is annoyed to some degree. The advertiser pays a partial cost for annoying people; they might boycott his product. But most of the time he does not, and the cost of the advertising is paid by the person: the beauty of the landscape is ruined by the billboard, dinner is disrupted by a telemarketer, spam costs money to ship around the Internet and time to wade through, etc. (Note that I am using “cost” very generally here, and not just monetarily. Time and happiness are both costs.)

This is why spam is so bad. For each e-mail, the spammer pays a cost and receives a benefit. But there is an additional cost, paid by the e-mail recipient. Because so much spam is unwanted, that additional cost is huge — and it’s a cost that the spammer never sees. If spammers could be made to bear the total cost of spam, then its level would be more along the lines of what society would find acceptable.

This economic analysis is important, because it’s the only way to understand how effective different solutions will be. This is an economic problem, and the solutions need to change the fundamental economics. (The analysis is largely the same for VoIP spam, Usenet newsgroup spam, blog comment spam, and so on.)

The best solutions raise the cost of spam. Spam filters raise the cost by increasing the amount of spam that someone needs to send before someone will read it. If 99% of all spam is filtered into trash, then sending spam becomes 100 times more expensive. This is also the idea behind white lists — lists of senders a user is willing to accept e-mail from — and blacklists: lists of senders a user is not willing to accept e-mail from.
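
A minimal sketch of how white lists and blacklists fit around a content filter (my own illustration; the addresses and threshold are hypothetical):

```python
# Hypothetical white list and blacklist wrapped around a content filter.
WHITELIST = {"friend@example.com"}          # always accept
BLACKLIST = {"bulk-sender@spam.example"}    # always reject

def accept(sender: str, spam_score: float) -> bool:
    """Decide whether to deliver a message; spam_score runs 0 (clean) to 1 (spam)."""
    if sender in WHITELIST:
        return True
    if sender in BLACKLIST:
        return False
    return spam_score < 0.5   # otherwise defer to the content filter

print(accept("friend@example.com", 0.9))        # True: whitelisted sender
print(accept("bulk-sender@spam.example", 0.1))  # False: blacklisted sender
print(accept("stranger@example.org", 0.8))      # False: content looks like spam
```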

Filtering doesn’t just have to be at the recipient’s e-mail. It can be implemented within the network to clean up spam, or at the sender. Several ISPs are already filtering outgoing e-mail for spam, and the trend will increase.

Anti-spam laws raise the cost of spam to an intolerable level; no one wants to go to jail for spamming. We’ve already seen some convictions in the U.S. Unfortunately, this only works when the spammer is within the reach of the law, and is less effective against criminals who are using spam as a mechanism to commit fraud.

Other proposed solutions try to impose direct costs on e-mail senders. I have seen proposals for e-mail “postage,” either for every e-mail sent or for every e-mail above a reasonable threshold. I have seen proposals where the sender of an e-mail posts a small bond, which the receiver can cash if the e-mail is spam. There are other proposals that involve “computational puzzles”: time-consuming tasks the sender’s computer must perform, unnoticeable to someone who is sending e-mail normally, but too much for someone sending e-mail in bulk. These solutions generally involve re-engineering the Internet, something that is not done lightly, and hence are in the discussion stages only.
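
The computational-puzzle idea is essentially proof of work: the sender must burn a little CPU time per message, which is negligible for ordinary correspondence but adds up quickly in bulk. A minimal sketch of one such puzzle (my own illustration; the difficulty parameter is arbitrary, and this is not any specific deployed proposal):

```python
import hashlib
from itertools import count

DIFFICULTY = 16   # leading zero bits required; arbitrary illustrative value

def solve_puzzle(message: bytes) -> int:
    """Find a nonce whose hash with the message has DIFFICULTY leading zero bits."""
    target = 1 << (256 - DIFFICULTY)
    for nonce in count():
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_puzzle(message: bytes, nonce: int) -> bool:
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY))

msg = b"From: alice@example.com\r\nTo: bob@example.com\r\n\r\nLunch tomorrow?"
nonce = solve_puzzle(msg)                # a moment's work for one message
print(nonce, verify_puzzle(msg, nonce))  # cheap to verify; costly to repeat millions of times
```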

All of these solutions work to a degree, and we end up with an arms race. Anti-spam products block a certain type of spam. Spammers invent a tactic that gets around those products. Then the products block that spam. Then the spammers invent yet another type of spam. And so on.

Blacklisting spammer sites forced the spammers to disguise the origin of spam e-mail. People recognizing e-mail from people they knew, and other anti-spam measures, forced spammers to hack into innocent machines and use them as launching pads. Scanning millions of e-mails looking for identical bulk spam forced spammers to individualize each spam message. Semantic spam detection forced spammers to design even more clever spam. And so on. Each defense is met with yet another attack, and each attack is met with yet another defense.

Remember that when you think about host identification, or postage, as an anti-spam measure. Spammers don’t care about tactics; they want to send their e-mail. Techniques like this will simply force spammers to rely more on hacked innocent machines. As long as the underlying computers are insecure, we can’t prevent spammers from sending.

This is the problem with another potential solution: re-engineering the Internet to prohibit the forging of e-mail headers. This would make it easier for spam detection software to detect spamming IP addresses, but spammers would just use hacked machines instead of their own computers.

Honestly, there’s no end in sight for the spam arms race. Even so, spam is one of computer security’s success stories. The current crop of anti-spam products works. I get almost no spam, and very few legitimate e-mails end up in my spam trap. I wish they would work better — Crypto-Gram is occasionally classified as spam by one service or another, for example — but they’re working pretty well. It’ll be a long time before spam stops clogging up the Internet, but at least we don’t have to look at it.

Posted on May 13, 2005 at 9:47 AM
