Entries Tagged "botnets"


Paying People to Infect Their Computers

Research paper: “It’s All About The Benjamins: An empirical study on incentivizing users to ignore security advice,” by Nicolas Christin, Serge Egelman, Timothy Vidas, and Jens Grossklags.

Abstract: We examine the cost for an attacker to pay users to execute arbitrary code — potentially malware. We asked users at home to download and run an executable we wrote without being told what it did and without any way of knowing it was harmless. Each week, we increased the payment amount. Our goal was to examine whether users would ignore common security advice — not to run untrusted executables — if there was a direct incentive, and how much this incentive would need to be. We observed that for payments as low as $0.01, 22% of the people who viewed the task ultimately ran our executable. Once increased to $1.00, this proportion increased to 43%. We show that as the price increased, more and more users who understood the risks ultimately ran the code. We conclude that users are generally unopposed to running programs of unknown provenance, so long as their incentives exceed their inconvenience.

The experiment was run on Mechanical Turk, which means we don’t know who these people were or even if they were sitting at computers they owned (as opposed to, say, computers at an Internet cafe somewhere). But if you want to build a fair-trade botnet, this is a reasonable way to go about it.

Two articles.

Posted on June 19, 2014 at 6:28 AM

Security Externalities and DDoS Attacks

Ed Felten has a really good blog post about the externalities that the recent Spamhaus DDoS attack exploited:

The attackers’ goal was to flood Spamhaus or its network providers with Internet traffic, to overwhelm their capacity to handle incoming network packets. The main technical problem faced by a DoS attacker is how to amplify the attacker’s traffic-sending capacity, so that the amount of traffic arriving at the target is much greater than the attacker can send himself. To do this, the attacker typically tries to induce many computers around the Internet to send large amounts of traffic to the target.

The first stage of the attack involved the use of a botnet, consisting of a large number of software agents surreptitiously installed on the computers of ordinary users. These bots were commanded to send attack traffic. Notice how this amplifies the attacker’s traffic-sending capability: by sending a few commands to the botnet, the attacker can induce the botnet to send large amounts of attack traffic. This step exploits our first externality: the owners of the bot-infected computers might have done more to prevent the infection, but the harm from this kind of attack activity falls onto strangers, so the computer owners had a reduced incentive to prevent it.

Rather than having the bots send traffic directly to Spamhaus, the attackers used another step to further amplify the volume of traffic. They had the bots send queries to DNS proxies across the Internet (which answer questions about how machine names like www.freedom-to-tinker.com relate to IP addresses like 209.20.73.44). This amplifies traffic because the bots can send a small query that elicits a large response message from the proxy.
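The amplification Felten describes is easy to quantify. A rough sketch of the arithmetic — the byte counts here are illustrative assumptions, not measurements from the Spamhaus attack:

```python
# Rough model of DNS reflection amplification.
# The packet sizes below are assumed illustrative values, not
# figures from the actual attack.

QUERY_BYTES = 64       # a small DNS query (assumed size)
RESPONSE_BYTES = 3000  # a large response, e.g. with DNSSEC records (assumed size)

def amplification_factor(query_bytes: int, response_bytes: int) -> float:
    """Bytes arriving at the victim per byte the attacker sends."""
    return response_bytes / query_bytes

def traffic_at_target(attacker_mbps: float, factor: float) -> float:
    """Traffic hitting the target (Mbps), given the attacker's send rate."""
    return attacker_mbps * factor

factor = amplification_factor(QUERY_BYTES, RESPONSE_BYTES)
print(f"amplification: {factor:.1f}x")
print(f"a 1 Gbps botnet becomes {traffic_at_target(1000, factor) / 1000:.1f} Gbps at the target")
```

The point of the model: the attacker's own bandwidth is multiplied by the response-to-query size ratio, which is why reflection attacks reach volumes far beyond what the botnet alone could send.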

Here is our second externality: the existence of open DNS proxies that will respond to requests from anywhere on the Internet. Many organizations run DNS proxies for use by their own people. A well-managed DNS proxy is supposed to check that requests are coming from within the same organization; but many proxies fail to check this — they’re “open” and will respond to requests from anywhere. This can lead to trouble, but the resulting harm falls mostly on people outside the organization (e.g. Spamhaus) so there isn’t much incentive to take even simple steps to prevent it.
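The check a well-managed DNS proxy should perform is simple. A minimal sketch using Python's `ipaddress` module — the netblocks are hypothetical example ranges, not any real organization's:

```python
import ipaddress

# The organization's own netblocks (hypothetical example ranges).
INTERNAL_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.0.2.0/24"),
]

def should_answer(src_ip: str) -> bool:
    """A non-open resolver answers only queries that originate
    from the organization's own networks; everything else is refused."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in INTERNAL_NETS)

print(should_answer("10.1.2.3"))     # internal client: answered
print(should_answer("203.0.113.9"))  # outside address: refused
```

A proxy that skips this one membership test is "open," and becomes a free amplifier for anyone on the Internet — which is exactly the externality: the test costs the operator almost nothing, but the harm from omitting it lands on strangers.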

To complete the attack, the DNS requests were sent with false return addresses, saying that the queries had come from Spamhaus — which causes the DNS proxies to direct their large response messages to Spamhaus.

Here is our third externality: the failure to detect packets with forged return addresses. When a packet with a false return address is injected, it’s fairly easy for the originating network to detect this: if a packet comes from inside your organization, but it has a return address that is outside your organization, then the return address must be forged and the packet should be discarded. But many networks fail to check this. This causes harm but — you guessed it — the harm falls outside the organization, so there isn’t much incentive to check. And indeed, this kind of packet filtering has long been considered a best practice but many networks still fail to do it.
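The egress check Felten describes (long standardized as BCP 38) is equally small. A sketch, with a hypothetical address block standing in for the organization's real prefix:

```python
import ipaddress

# The organization's own address space (hypothetical example prefix).
OUR_PREFIX = ipaddress.ip_network("198.51.100.0/24")

def egress_ok(src_ip: str) -> bool:
    """BCP 38-style egress filter: a packet leaving the network whose
    source address is not ours must be forged, so it should be dropped."""
    return ipaddress.ip_address(src_ip) in OUR_PREFIX

print(egress_ok("198.51.100.7"))  # legitimate internal source: forwarded
print(egress_ok("192.0.2.200"))   # forged source address: dropped
```

If every edge network applied this one-line test, spoofed-source reflection attacks like the one against Spamhaus would be impossible; the attack traffic could never leave its originating network.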

I’ve been writing about security externalities for years. They’re often much harder to solve than technical problems.

By the way, a lot of the hype surrounding this attack was media manipulation.

Posted on April 10, 2013 at 12:46 PM

Hijacking the Coreflood Botnet

Earlier this month, the FBI seized control of the Coreflood botnet and shut it down:

According to the filing, ISC, under law enforcement supervision, planned to replace the servers with servers that it controlled, then collect the IP addresses of all infected machines communicating with the criminal servers, and send a remote “stop” command to infected machines to disable the Coreflood malware operating on them.

This is a big deal; it’s the first time the FBI has done something like this. My guess is that we’re going to see a lot more of this sort of thing in the future; it’s the obvious solution for botnets.

Not that the approach is without risks:

“Even if we could absolutely be sure that all of the infected Coreflood botnet machines were running the exact code that we reverse-engineered and convinced ourselves that we understood,” said Chris Palmer, technology director for the Electronic Frontier Foundation, “this would still be an extremely sketchy action to take. It’s other people’s computers and you don’t know what’s going to happen for sure. You might blow up some important machine.”

I just don’t see this argument convincing very many people. Leaving Coreflood in place could blow up some important machine. And leaving Coreflood in place not only puts the infected computers at risk; it puts the whole Internet at risk. Minimizing the collateral damage is important, but this feels like a place where the interest of the Internet as a whole trumps the interest of those affected by shutting down Coreflood.

The problem as I see it is the slippery slope. Because next, the RIAA is going to want to remotely disable computers they feel are engaged in illegal file sharing. And the FBI is going to want to remotely disable computers they feel are encouraging terrorism. And so on. It’s important to have serious legal controls on this counterattack sort of defense.

Some more commentary.

Posted on May 2, 2011 at 6:52 AM

Folk Models in Home Computer Security

This is a really interesting paper: “Folk Models of Home Computer Security,” by Rick Wash. It was presented at SOUPS, the Symposium on Usable Privacy and Security, last year.

Abstract:

Home computer systems are frequently insecure because they are administered by untrained, unskilled users. The rise of botnets has amplified this problem; attackers can compromise these computers, aggregate them, and use the resulting network to attack third parties. Despite a large security industry that provides software and advice, home computer users remain vulnerable. I investigate how home computer users make security-relevant decisions about their computers. I identify eight ‘folk models’ of security threats that are used by home computer users to decide what security software to use, and which security advice to follow: four different conceptualizations of ‘viruses’ and other malware, and four different conceptualizations of ‘hackers’ that break into computers. I illustrate how these models are used to justify ignoring some security advice. Finally, I describe one reason why botnets are so difficult to eliminate: they have been cleverly designed to take advantage of gaps in these models so that many home computer users do not take steps to protect against them.

I’d list the models, but it’s more complicated than that. Read the paper.

Posted on March 22, 2011 at 7:12 AM

The Business of Botnets

It can be lucrative:

Avanesov allegedly rented and sold part of his botnet, a common business model for those who run the networks. Other cybercriminals can rent the hacked machines for a specific time for their own purposes, such as sending a spam run or mining the PCs for personal details and files, among other nefarious actions.

Dutch prosecutors believe that Avanesov made up to €100,000 ($139,000) a month from renting and selling his botnet just for spam, said Wim De Bruin, spokesman for the Public Prosecution Service in Rotterdam. Avanesov was able to sell parts of the botnet off “because it was very easy for him to extend the botnet again,” by infecting more PCs, he said.

EDITED TO ADD (11/11): Paper on the market price of bots.

Posted on November 4, 2010 at 7:04 AM

Building in Surveillance

China is the world’s most successful Internet censor. While the Great Firewall of China isn’t perfect, it effectively limits information flowing in and out of the country. But now the Chinese government is taking things one step further.

Under a requirement taking effect soon, every computer sold in China will have to contain the Green Dam Youth Escort software package. Ostensibly a pornography filter, it is government spyware that will watch every citizen on the Internet.

Green Dam has many uses. It can police a list of forbidden Web sites. It can monitor a user’s reading habits. It can even enlist the computer in some massive botnet attack, as part of a hypothetical future cyberwar.

China’s actions may be extreme, but they’re not unique. Democratic governments around the world — Sweden, Canada and the United Kingdom, for example — are rushing to pass laws giving their police new powers of Internet surveillance, in many cases requiring communications system providers to redesign products and services they sell.

Many are passing data retention laws, forcing companies to keep information on their customers. Just recently, the German government proposed giving itself the power to censor the Internet.

The United States is no exception. The 1994 CALEA law required phone companies to facilitate FBI eavesdropping, and since 2001, the NSA has built substantial eavesdropping systems in the United States. The government has repeatedly proposed Internet data retention laws, allowing surveillance into past activities as well as present.

Systems like this invite criminal appropriation and government abuse. New police powers, enacted to fight terrorism, are already used in situations of normal crime. Internet surveillance and control will be no different.

Official misuses are bad enough, but the unofficial uses worry me more. Any surveillance and control system must itself be secured. An infrastructure conducive to surveillance and control invites surveillance and control, both by the people you expect and by the people you don’t.

China’s government designed Green Dam for its own use, but it’s been subverted. Why does anyone think that criminals won’t be able to use it to steal bank account and credit card information, use it to launch other attacks, or turn it into a massive spam-sending botnet?

Why does anyone think that only authorized law enforcement will mine collected Internet data or eavesdrop on phone and IM conversations?

These risks are not theoretical. After 9/11, the National Security Agency built a surveillance infrastructure to eavesdrop on telephone calls and e-mails within the United States.

Although procedural rules stated that only non-Americans and international phone calls were to be listened to, actual practice didn’t always match those rules. NSA analysts collected more data than they were authorized to, and used the system to spy on wives, girlfriends, and famous people such as President Clinton.

But that’s not the most serious misuse of a telecommunications surveillance infrastructure. In Greece, between June 2004 and March 2005, someone wiretapped more than 100 cell phones belonging to members of the Greek government — the prime minister and the ministers of defense, foreign affairs and justice.

Ericsson built this wiretapping capability into Vodafone’s products, and enabled it only for governments that requested it. Greece wasn’t one of those governments, but someone still unknown — a rival political party? organized crime? — figured out how to surreptitiously turn the feature on.

Researchers have already found security flaws in Green Dam that would allow hackers to take over the computers. Of course there are additional flaws, and criminals are looking for them.

Surveillance infrastructure can be exported, which also aids totalitarianism around the world. Western companies like Siemens, Nokia, and Secure Computing built Iran’s surveillance infrastructure. U.S. companies helped build China’s electronic police state. Twitter’s anonymity saved the lives of Iranian dissidents — anonymity that many governments want to eliminate.

Every year brings more Internet censorship and control — not just in countries like China and Iran, but in the United States, the United Kingdom, Canada and other free countries.

The control movement is egged on by both law enforcement, trying to catch terrorists, child pornographers and other criminals, and by media companies, trying to stop file sharers.

It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state. No matter what the eavesdroppers and censors say, these systems put us all at greater risk. Communications systems that have no inherent eavesdropping capabilities are more secure than systems with those capabilities built in.

This essay previously appeared — albeit with fewer links — on the Minnesota Public Radio website.

Posted on August 3, 2009 at 6:43 AM
