Entries Tagged "botnets"


Hacker-Controlled Computers Hiding Better

If you have control of a network of computers—by infecting them with some sort of malware—the hard part is controlling that network. Traditionally, these computers (called zombies) are controlled via IRC. But IRC can be detected and blocked, so the hackers have adapted:

Instead of connecting to an IRC server, newly compromised PCs connect to one or more Web sites to check in with the hackers and get their commands. These Web sites are typically hosted on hacked servers or computers that have been online for a long time. Attackers upload the instructions for download by their bots.

As a result, protection mechanisms, such as blocking IRC traffic, will fail. This could mean that zombies, which so far have mostly been broadband-connected home computers, will be created using systems on business networks.

The trick here is to not let the computer’s legitimate owner know that someone else is controlling it. It’s an arms race between attacker and defender.
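The evasion described above can be illustrated with a toy egress filter (a hypothetical sketch, not any real firewall's rules): blocking IRC's default port cuts off the traditional command channel, but a web check-in rides over the same ports as ordinary browsing.

```python
# Toy port-based egress filter. Port numbers are the standard defaults;
# the filtering rule itself is a hypothetical illustration.
BLOCKED_PORTS = {6667}  # IRC's well-known port

def allowed(dst_port: int) -> bool:
    """Would a simple port-based filter pass a connection to this port?"""
    return dst_port not in BLOCKED_PORTS

print(allowed(6667))  # False: a traditional IRC-controlled zombie is cut off
print(allowed(80))    # True: an HTTP check-in looks like ordinary web browsing
print(allowed(443))   # True: and over HTTPS, the payload can't even be inspected
```

Blocking by port fails precisely because, at this level, the bot's check-in is indistinguishable from a user loading a web page.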

Posted on October 25, 2006 at 12:14 PM

Organized Cybercrime

Cybercrime is getting organized:

Cyberscams are increasingly being committed by organized crime syndicates out to profit from sophisticated ruses rather than hackers keen to make an online name for themselves, according to a top U.S. official.

Christopher Painter, deputy chief of the computer crimes and intellectual property section at the Department of Justice, said there had been a distinct shift in recent years in the type of cybercriminals that online detectives now encounter.

“There has been a change in the people who attack computer networks, away from the ‘bragging hacker’ toward those driven by monetary motives,” Painter told Reuters in an interview this week.

Although media reports often focus on stories about teenage hackers tracked down in their bedroom, the greater danger lies in the more anonymous virtual interlopers.

“There are still instances of these ‘lone-gunman’ hackers but more and more we are seeing organized criminal groups, groups that are often organized online targeting victims via the internet,” said Painter, in London for a cybercrime conference.

I’ve been saying this sort of thing for years, and have long complained that cyberterrorism gets all the press while cybercrime is the real threat. I don’t think this article is fear and hype; it’s a real problem.

Posted on September 19, 2006 at 7:16 AM

Bot Networks

What could you do if you controlled a network of thousands of computers—or, at least, could use the spare processor cycles on those machines? You could perform massively parallel computations: model nuclear explosions or global weather patterns, factor large numbers or find Mersenne primes, or break cryptographic codes.

All of these are legitimate applications. And you can visit distributed.net and download software that allows you to donate your spare computer cycles to some of these projects. (You can help search for Optimal Golomb Rulers—even if you have no idea what they are.) You’ve got a lot of cycles to spare. There’s no reason that your computer can’t help search for extraterrestrial life as it, for example, sits idly waiting for you to read this essay.

The reason these things work is that they are consensual; none of these projects download software onto your computer without your knowledge. None of these projects control your computer without your consent. But there are lots of software programs that do just that.

The term used for a computer remotely controlled by someone else is a “bot”. A group of computers—thousands or even millions—controlled by someone else is a bot network. Estimates are that millions of computers on the internet today are part of bot networks, and the largest bot networks have over 1.5 million machines.

Initially, bot networks were used for just one thing: denial-of-service attacks. Hackers would use them against each other, fighting hacker feuds in cyberspace by attacking each other’s computers. The first widely publicized use of a distributed intruder tool—technically not a botnet, but practically the same thing—was in February 2000, when Canadian hacker Mafiaboy directed an army of compromised computers to flood CNN.com, Amazon.com, eBay, Dell Computer and other sites with debilitating volumes of traffic. Every newspaper carried that story.

These days, bot networks are more likely to be controlled by criminals than by hackers. The important difference is the motive: profit. Networks are being used to send phishing e-mails and other spam. They’re being used for click fraud. They’re being used as an extortion tool: Pay up or we’ll DDoS you!

Mostly, they’re being used to collect personal data for fraud—commonly called “identity theft.” Modern bot software doesn’t just attack other computers; it attacks its hosts as well. The malware is packed with keystroke loggers to steal passwords and account numbers. In fact, many bots automatically hunt for financial information, and some botnets have been built solely for this purpose—to gather credit card numbers, online banking passwords, PayPal accounts, and so on, from compromised hosts.

Swindlers are also using bot networks for click fraud. Google’s anti-fraud systems are sophisticated enough to detect thousands of clicks by one computer; it’s much harder to determine if a single click by each of thousands of computers is fraud, or just popularity.
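That detection asymmetry can be sketched with a toy flagger (the per-IP rule and threshold are hypothetical, not Google's actual systems): many clicks from one source stand out immediately, while one click each from thousands of sources matches the statistical signature of genuine popularity.

```python
from collections import Counter

def suspicious_sources(clicks, threshold=100):
    """Flag source IPs whose click counts exceed a (hypothetical) threshold.

    clicks: iterable of source-IP strings, one entry per ad click.
    """
    counts = Counter(clicks)
    return {ip for ip, n in counts.items() if n > threshold}

# Thousands of clicks from one machine: trivial to flag.
one_machine = ["10.0.0.1"] * 5000
print(suspicious_sources(one_machine))  # {'10.0.0.1'}

# One click each from thousands of machines (a botnet's view of the ad):
# indistinguishable, by this rule, from a genuinely popular ad.
many_machines = [f"10.0.{i // 256}.{i % 256}" for i in range(5000)]
print(suspicious_sources(many_machines))  # set()
```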

And, of course, most bots constantly search for other computers that can be infected and added to the bot network. (A 1.5 million-node bot network was discovered in the Netherlands last year. The command-and-control system was dismantled, but some of the bots are still active, infecting other computers and adding them to this defunct network.)

Modern bot networks are remotely upgradeable, so the operators can add new functionality to the bots at any time, or switch from one bot program to another. Bot authors regularly upgrade their botnets during development, or to evade detection by anti-virus and malware cleanup tools.

One application of bot networks that we haven’t seen all that much of is to launch a fast-spreading worm. (Some believe the Witty worm spread this way.) Much has been written about “flash worms” that can saturate the internet in 15 minutes or less. The situation gets even worse if 10,000 bots synchronize their watches and release the worm at exactly the same time. Why haven’t we seen more of this? My guess is because there isn’t any profit in it.

There’s no real solution to the botnet problem, because there’s no single problem. There are many different bot networks, controlled in many different ways, consisting of computers infected through many different vulnerabilities. Really, a bot network is nothing more than an attacker taking advantage of 1) one or more software vulnerabilities, and 2) the economies of scale that computer networks bring. It’s the same thing as distributed.net or SETI@home, only the attacker doesn’t ask your permission first.

As long as networked computers have vulnerabilities—and that’ll be for the foreseeable future—there’ll be bot networks. It’s a natural side-effect of a computer network with bugs.

This essay originally appeared on Wired.com.

EDITED TO ADD (7/27): DDoS extortion is a bigger problem than you might think. Right now it’s primarily targeted against fringe industries—online gaming, online gambling, online porn—located offshore, but we’re seeing more and more of it against mainstream companies in the U.S. and Europe.

EDITED TO ADD (7/27): Seems that Witty was definitely not seeded from a bot network.

Posted on July 27, 2006 at 6:35 AM

Click Fraud and the Problem of Authenticating People

Google’s $6 billion-a-year advertising business is at risk because it can’t be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.

With network click fraud, you host Google AdSense advertisements on your own website. Google pays you every time someone clicks on its ad on your site. It’s fraud if you sit at the computer and repeatedly click on the ad or—better yet—write a computer program that repeatedly clicks on the ad. That kind of fraud is easy for Google to spot, so the clever network click fraudsters simulate different IP addresses, or install Trojan horses on other people’s computers to generate the fake clicks.

The other kind of click fraud is competitive. You notice your business competitor has bought an ad on Google, paying Google for each click. So you use the above techniques to repeatedly click on his ads, forcing him to spend money—sometimes a lot of money—on nothing. (Here’s a company that will commit click fraud for you.)

Click fraud has become a classic security arms race. Google improves its fraud-detection tools, so the fraudsters get increasingly clever … and the cycle continues. Meanwhile, Google is facing multiple lawsuits from those who claim the company isn’t doing enough. My guess is that everyone is right: It’s in Google’s interest both to solve and to downplay the importance of the problem.

But the overarching problem is both hard to solve and important: How do you tell if there’s an actual person sitting in front of a computer screen? How do you tell that the person is paying attention, hasn’t automated his responses, and isn’t being assisted by friends? Authentication systems are big business, whether based on something you know (passwords), something you have (tokens) or something you are (biometrics). But none of those systems can secure you against someone who walks away and lets another person sit down at the keyboard, or a computer that’s infected with a Trojan.

This problem manifests itself in other areas as well.

For years, online computer game companies have been battling players who use computer programs to assist their play: programs that allow them to shoot perfectly or see information they normally couldn’t see.

Playing is less fun if everyone else is computer-assisted, but unless there’s a cash prize on the line, the stakes are small. Not so with online poker sites, where computer-assisted players—or even computers playing without a real person at all—have the potential to drive all the human players away from the game.

Look around the internet, and you see this problem pop up again and again. The whole point of CAPTCHAs is to ensure that it’s a real person visiting a website, not just a bot on a computer. Standard testing doesn’t work online, because the tester can’t be sure that the test taker doesn’t have his book open, or a friend standing over his shoulder helping him. The solution in both cases is a proctor, of course, but that’s not always practical and negates the benefits of internet testing.

This problem has even come up in court cases. In one instance, the prosecution demonstrated that the defendant’s computer committed some hacking offense, but the defense argued that it wasn’t the defendant who did it—that someone else was controlling his computer. And in another case, a defendant charged with a child porn offense argued that, while it was true that illegal material was on his computer, his computer was in a common room of his house and he hosted a lot of parties—and it wasn’t him who’d downloaded the porn.

Years ago, talking about security, I complained about the link between computer and chair. The easy part is securing digital information: on the desktop computer, in transit from computer to computer or on massive servers. The hard part is securing information from the computer to the person. Likewise, authenticating a computer is much easier than authenticating a person sitting in front of the computer. And verifying the integrity of data is much easier than verifying the integrity of the person looking at it—in both senses of that word.

And it’s a problem that will get worse as computers get better at imitating people.

Google is testing a new advertising model to deal with click fraud: cost-per-action ads. Advertisers don’t pay unless the customer performs a certain action: buys a product, fills out a survey, whatever. It’s a hard model to make work—Google would become more of a partner in the final sale instead of an indifferent displayer of advertising—but it’s the right security response to click fraud: Change the rules of the game so that click fraud doesn’t matter.

That’s how to solve a security problem.

This essay appeared on Wired.com.

EDITED TO ADD (7/13): Click Monkeys is a hoax site.

EDITED TO ADD (7/25): An evaluation of Google’s anti-click-fraud efforts, as part of the Lane Gifts case. I’m not sure if this expert report was done for Google, for Lane Gifts, or for the judge.

Posted on July 13, 2006 at 5:22 AM

For-Profit Botnet

Interesting article about someone convicted for running a for-profit botnet:

November’s 52-page indictment, along with papers filed last week, offer an unusually detailed glimpse into a shadowy world where hackers, often not old enough to vote, brag in online chat groups about their prowess in taking over vast numbers of computers and herding them into large armies of junk mail robots and arsenals for so-called denial of service attacks on Web sites.

Ancheta one-upped his hacking peers by advertising his network of “bots,” short for robots, on Internet chat channels.

A Web site Ancheta maintained included a schedule of prices he charged people who wanted to rent out the machines, along with guidelines on how many bots were required to bring down a particular type of Web site.

In July 2004, he told one chat partner he had more than 40,000 machines available, “more than I can handle,” according to the indictment. A month later, Ancheta told another person he controlled at least 100,000 bots, and that his network had added another 10,000 machines in a week and a half.

In a three-month span starting in June 2004, Ancheta rented out or sold bots to at least 10 “different nefarious computer users,” according to the plea agreement. He pocketed $3,000 in the process by accepting payments through the online PayPal service, prosecutors said.

Starting in August 2004, Ancheta turned to a new, more lucrative method to profit from his botnets, prosecutors said. Working with a juvenile in Boca Raton, Fla., whom prosecutors identified by his Internet nickname “SoBe,” Ancheta infected more than 400,000 computers.

Ancheta and SoBe signed up as affiliates in programs maintained by online advertising companies that pay people each time they get a computer user to install software that displays ads and collects information about the sites a user visits.

Posted on February 2, 2006 at 6:06 AM

Dutch Botnet

Back in October, the Dutch police arrested three people who created a large botnet and used it to extort money from U.S. companies. When the trio was arrested, authorities said that the botnet consisted of about 100,000 computers. The actual number was 1.5 million computers.

And I’ve heard reports from reputable sources that the actual actual number was “significantly higher.”

And it may still be growing. The bots continually scan the network and try to infect other machines. They do this autonomously, even now that the command-and-control node has been shut down. Since most of those 1.5 million machines—or however many there are—still have the botnet software running on them, it’s reasonable to believe that the botnet is still growing.

Posted on December 22, 2005 at 8:18 AM

The Zotob Worm

If you’ll forgive the possible comparison to hurricanes, Internet epidemics are much like severe weather: they happen randomly, they affect some segments of the population more than others, and your previous preparation determines how effective your defense is.

Zotob was the first major worm outbreak since MyDoom in January 2004. It happened quickly—less than five days after Microsoft published a critical security bulletin (its 39th of the year). Zotob’s effects varied greatly from organization to organization: some networks were brought to their knees, while others didn’t even notice.

The worm started spreading on Sunday, 14 August. Honestly, it wasn’t much of a big deal, but it got a lot of play in the press because it hit several major news outlets, most notably CNN. If a news organization is personally affected by something, it’s much more likely to report extensively on it. But my company, Counterpane Internet Security, monitors more than 500 networks worldwide, and we didn’t think it was worth all the press coverage.

By the 17th, there were at least a dozen other worms that exploited the same vulnerability, both Zotob variants and others that were completely different. Most of them tried to recruit computers for bot networks, and some of the different variants warred against each other—stealing “owned” computers back and forth. If your network was infected, it was a mess.

Two weeks later, the 18-year-old who wrote the original Zotob worm was arrested, along with the 21-year-old who paid him to write it. It seems likely the person who funded the worm’s creation was not a hacker, but rather a criminal looking to profit.

The nature of worms has changed in the past few years. Previously, hackers looking for prestige or just wanting to cause damage were responsible for most worms. Today, they’re increasingly written or commissioned by criminals. By taking over computers, worms can send spam, launch denial-of-service extortion attacks, or search for credit-card numbers and other personal information.

What could you have done beforehand to protect yourself against Zotob and its kin? “Install the patch” is the obvious answer, but it’s not really a satisfactory one. There are simply too many patches. Although a single computer user can easily set up patches to automatically download and install—at least Microsoft Windows system patches—large corporate networks can’t. Far too often, patches cause other things to break.

It would be great to know which patches are actually important and which ones just sound important. Before that weekend in August, the patch that would have protected against Zotob was just another patch; by Monday morning, it was the most important thing a sysadmin could do to secure the network.

Microsoft had six new patches available on 9 August, three designated as critical (including the one that Zotob used), one important, and two moderate. Could you have guessed beforehand which one would have actually been critical? With the next patch release, will you know which ones you can put off and for which ones you need to drop everything, test, and install across your network?

Given that it’s impossible to know what’s coming beforehand, how you respond to an actual worm largely determines your defense’s effectiveness. You might need to respond quickly, and you most certainly need to respond accurately. Because it’s impossible to know beforehand what the necessary response should be, you need a process for that response. Employees come and go, so the only thing that ensures a continuity of effective security is a process. You need accurate and timely information to fuel this process. And finally, you need experts to decipher the information, determine what to do, and implement a solution.

The Zotob storm was both typical and unique. It started soon after the vulnerability was published, but I don’t think that made a difference. Even worms that use six-month-old vulnerabilities find huge swaths of the Internet unpatched. It was a surprise, but they all are.

This essay will appear in the November/December 2005 issue of IEEE Security & Privacy.

Posted on November 11, 2005 at 7:46 AM

Fearmongering About Bot Networks

Bot networks are a serious security problem, but this is ridiculous. From the Independent:

The PC in your home could be part of a complex international terrorist network. Without you realising it, your computer could be helping to launder millions of pounds, attacking companies’ websites or cracking confidential government codes.

This is not the stuff of science fiction or a conspiracy theory from a paranoid mind, but a warning from one of the world’s most-respected experts on computer crime. Dr Peter Tippett is chief technology officer at Cybertrust, a US computer security company, and a senior adviser on the issue to President George Bush. His warning is stark: criminals and terrorists are hijacking home PCs over the internet, creating “bot” computers to carry out illegal activities.

Yes, bot networks are bad. They’re used to send spam (both commercial and phishing), launch denial-of-service attacks (sometimes involving extortion), and stage attacks on other systems. Most bot networks are controlled by kids, but more and more criminals are getting into the act.

But your computer a part of an international terrorist network? Get real.

Once a criminal has gathered together what is known as a “herd” of bots, the combined computing power can be dangerous. “If you want to break the nuclear launch code then set a million computers to work on it. There is now a danger of nation state attacks,” says Dr Tippett. “The vast majority of terrorist organisations will use bots.”

I keep reading that last sentence, and wonder if “bots” is just a typo for “bombs.” And the line about bot networks being used to crack nuclear launch codes is nothing more than fearmongering.

Clearly I need to write an essay on bot networks.

Posted on May 17, 2005 at 3:33 PM

Combating Spam

Spam is back in the news, and it has a new name. This time it’s voice-over-IP spam, and it has the clever name of “spit” (spam over Internet telephony). Spit has the potential to completely ruin VoIP. No one is going to install the system if they’re going to get dozens of calls a day from audio spammers. Or, at least, they’re only going to accept phone calls from a white list of previously known callers.

VoIP spam joins the ranks of e-mail spam, Usenet newsgroup spam, instant message spam, cell phone text message spam, and blog comment spam. And, if you think broadly enough, these computer-network spam delivery mechanisms join the ranks of computer telemarketing (phone spam), junk mail (paper spam), billboards (visual space spam), and cars driving through town with megaphones (audio spam). It’s all basically the same thing—unsolicited marketing messages—and only by understanding the problem at this level of generality can we discuss solutions.

In general, the goal of advertising is to influence people. Usually it’s to influence people to purchase a product, but it could just as easily be to influence people to support a particular political candidate or position. Advertising does this by implanting a marketing message into the brain of the recipient. The mechanism of implantation is simply a tactic.

Tactics for unsolicited marketing messages rise and fall in popularity based on their cost and benefit. If the benefit is significant, people are willing to spend more. If the benefit is small, people will only do it if it is cheap. A 30-second prime-time television ad costs 1.8 cents per adult viewer, a full-page color magazine ad about 0.9 cents per reader. A highway billboard costs 0.21 cents per car. Direct mail is the most expensive, at over 50 cents per third-class letter mailed. (That’s why targeted mailing lists are so valuable; they increase the per-piece benefit.)

Spam is such a common tactic not because it’s particularly effective; the response rates for spam are very low. It’s common because it’s ridiculously cheap. Typically, spammers charge less than a hundredth of a cent per e-mail. (And that number is just what spamming houses charge their customers to deliver spam; if you’re a clever hacker, you can build your own spam network for much less money.) If it is worth $10 for you to successfully influence one person—to buy your product, vote for your guy, whatever—then you only need a 1 in 100,000 success rate. You can market really marginal products with spam.
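That break-even arithmetic can be made explicit, using the figures above (a hundredth of a cent per e-mail, $10 per success), with direct mail for comparison:

```python
def breakeven_response_rate(cost_per_message: float, value_per_success: float) -> float:
    """Minimum success rate at which an unsolicited message is worth sending."""
    return cost_per_message / value_per_success

# Spam: a hundredth of a cent ($0.0001) per e-mail, $10 per success.
spam_rate = breakeven_response_rate(0.0001, 10.0)
print(f"spam breaks even at 1 in {1 / spam_rate:,.0f}")  # 1 in 100,000

# Direct mail: over 50 cents per letter, same $10 per success.
mail_rate = breakeven_response_rate(0.50, 10.0)
print(f"direct mail breaks even at 1 in {1 / mail_rate:,.0f}")  # 1 in 20
```

A 1-in-20 response rate demands a well-targeted list; a 1-in-100,000 rate demands almost nothing, which is why spam can sell really marginal products.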

So far, so good. But the cost/benefit calculation is missing a component: the “cost” of annoying people. Everyone who is not influenced by the marketing message is annoyed to some degree. The advertiser pays a partial cost for annoying people; they might boycott his product. But most of the time he does not, and the cost of the advertising is paid by the person: the beauty of the landscape is ruined by the billboard, dinner is disrupted by a telemarketer, spam costs money to ship around the Internet and time to wade through, etc. (Note that I am using “cost” very generally here, and not just monetarily. Time and happiness are both costs.)

This is why spam is so bad. For each e-mail, the spammer pays a cost and receives a benefit. But there is an additional cost paid by the e-mail recipient, and because so much spam is unwanted, that additional cost is huge—and it’s a cost that the spammer never sees. If spammers could be made to bear the total cost of spam, then its level would be more along the lines of what society would find acceptable.

This economic analysis is important, because it’s the only way to understand how effective different solutions will be. This is an economic problem, and the solutions need to change the fundamental economics. (The analysis is largely the same for VoIP spam, Usenet newsgroup spam, blog comment spam, and so on.)

The best solutions raise the cost of spam. Spam filters raise the cost by increasing the amount of spam that someone needs to send before someone will read it. If 99% of all spam is filtered into trash, then sending spam becomes 100 times more expensive. This is also the idea behind white lists—lists of senders a user is willing to accept e-mail from—and blacklists: lists of senders a user is not willing to accept e-mail from.
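The multiplier filtering imposes is just a ratio (a sketch of the economics only; real-world filter rates vary):

```python
def spam_cost_multiplier(filter_rate: float) -> float:
    """How many times more spam must be sent per message actually read,
    if a fraction `filter_rate` of it is filtered into the trash."""
    if not 0 <= filter_rate < 1:
        raise ValueError("filter_rate must be in [0, 1)")
    return 1.0 / (1.0 - filter_rate)

print(round(spam_cost_multiplier(0.99)))   # 100: 99% filtering makes sending ~100x more expensive
print(round(spam_cost_multiplier(0.999)))  # 1000: each extra "nine" multiplies the cost by 10
```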

Filtering doesn’t just have to be at the recipient’s e-mail. It can be implemented within the network to clean up spam, or at the sender. Several ISPs are already filtering outgoing e-mail for spam, and the trend will increase.

Anti-spam laws raise the cost of spam to an intolerable level; no one wants to go to jail for spamming. We’ve already seen some convictions in the U.S. Unfortunately, this only works when the spammer is within the reach of the law, and is less effective against criminals who are using spam as a mechanism to commit fraud.

Other proposed solutions try to impose direct costs on e-mail senders. I have seen proposals for e-mail “postage,” either for every e-mail sent or for every e-mail above a reasonable threshold. I have seen proposals where the sender of an e-mail posts a small bond, which the receiver can cash if the e-mail is spam. There are other proposals that involve “computational puzzles”: time-consuming tasks the sender’s computer must perform, unnoticeable to someone who is sending e-mail normally, but too much for someone sending e-mail in bulk. These solutions generally involve re-engineering the Internet, something that is not done lightly, and hence are in the discussion stages only.
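The computational-puzzle idea can be sketched in the style of Adam Back's Hashcash (the stamp format and difficulty here are illustrative, not any deployed standard): minting a stamp costs the sender roughly 2^difficulty hash operations, while verifying one costs the recipient a single hash.

```python
import hashlib

DIFFICULTY_BITS = 12  # illustrative; minting takes ~2**12 hash attempts

def _has_zero_prefix(message: str, counter: int) -> bool:
    """True if sha256(message:counter) begins with DIFFICULTY_BITS zero bits."""
    digest = hashlib.sha256(f"{message}:{counter}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

def mint_stamp(message: str) -> int:
    """Sender's side: search for a qualifying counter. Unnoticeable for one
    e-mail, ruinous for someone sending millions."""
    counter = 0
    while not _has_zero_prefix(message, counter):
        counter += 1
    return counter

def verify_stamp(message: str, counter: int) -> bool:
    """Recipient's side: a single hash."""
    return _has_zero_prefix(message, counter)

stamp = mint_stamp("to:alice@example.com")
print(verify_stamp("to:alice@example.com", stamp))  # True
print(verify_stamp("to:bob@example.com", stamp))    # almost certainly False: stamps don't transfer
```

Binding the stamp to the message (here, the recipient address) is what makes the work unamortizable: a bulk mailer must repeat the search for every address.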

All of these solutions work to a degree, and we end up with an arms race. Anti-spam products block a certain type of spam. Spammers invent a tactic that gets around those products. Then the products block that spam. Then the spammers invent yet another type of spam. And so on.

Blacklisting spammer sites forced the spammers to disguise the origin of spam e-mail. People recognizing e-mail from people they knew, and other anti-spam measures, forced spammers to hack into innocent machines and use them as launching pads. Scanning millions of e-mails looking for identical bulk spam forced spammers to individualize each spam message. Semantic spam detection forced spammers to design even more clever spam. And so on. Each defense is met with yet another attack, and each attack is met with yet another defense.

Remember that when you think about host identification, or postage, as an anti-spam measure. Spammers don’t care about tactics; they want to send their e-mail. Techniques like this will simply force spammers to rely more on hacked innocent machines. As long as the underlying computers are insecure, we can’t prevent spammers from sending.

This is the problem with another potential solution: re-engineering the Internet to prohibit the forging of e-mail headers. This would make it easier for spam detection software to detect spamming IP addresses, but spammers would just use hacked machines instead of their own computers.

Honestly, there’s no end in sight for the spam arms race. Even so, spam is one of computer security’s success stories. The current crop of anti-spam products work. I get almost no spam and very few legitimate e-mails end up in my spam trap. I wish they would work better—Crypto-Gram is occasionally classified as spam by one service or another, for example—but they’re working pretty well. It’ll be a long time before spam stops clogging up the Internet, but at least we don’t have to look at it.

Posted on May 13, 2005 at 9:47 AM

Tracking Bot Networks

This is a fascinating piece of research on bot networks: networks of compromised computers that can be remotely controlled by an attacker. The paper details how bots and bot networks work, who uses them, how they are used, and how to track them.

From the conclusion:

In this paper we have attempted to demonstrate how honeynets can help us understand how botnets work, the threat they pose, and how attackers control them. Our research shows that some attackers are highly skilled and organized, potentially belonging to well organized crime structures. Leveraging the power of several thousand bots, it is viable to take down almost any website or network instantly. Even in unskilled hands, it should be obvious that botnets are a loaded and powerful weapon. Since botnets pose such a powerful threat, we need a variety of mechanisms to counter it.

Decentralized providers like Akamai can offer some redundancy here, but very large botnets can also pose a severe threat even against this redundancy. Taking down of Akamai would impact very large organizations and companies, a presumably high value target for certain organizations or individuals. We are currently not aware of any botnet usage to harm military or government institutions, but time will tell if this persists.

In the future, we hope to develop more advanced honeypots that help us to gather information about threats such as botnets. Examples include Client honeypots that actively participate in networks (e.g. by crawling the web, idling in IRC channels, or using P2P-networks) or modify honeypots so that they capture malware and send it to anti-virus vendors for further analysis. As threats continue to adapt and change, so must the security community.

Posted on March 14, 2005 at 10:46 AM

