Entries Tagged "botnets"


Fearmongering About Bot Networks

Bot networks are a serious security problem, but this is ridiculous. From the Independent:

The PC in your home could be part of a complex international terrorist network. Without you realising it, your computer could be helping to launder millions of pounds, attacking companies’ websites or cracking confidential government codes.

This is not the stuff of science fiction or a conspiracy theory from a paranoid mind, but a warning from one of the world’s most-respected experts on computer crime. Dr Peter Tippett is chief technology officer at Cybertrust, a US computer security company, and a senior adviser on the issue to President George Bush. His warning is stark: criminals and terrorists are hijacking home PCs over the internet, creating “bot” computers to carry out illegal activities.

Yes, bot networks are bad. They’re used to send spam (both commercial and phishing), launch denial-of-service attacks (sometimes involving extortion), and stage attacks on other systems. Most bot networks are controlled by kids, but more and more criminals are getting into the act.

But your computer a part of an international terrorist network? Get real.

Once a criminal has gathered together what is known as a “herd” of bots, the combined computing power can be dangerous. “If you want to break the nuclear launch code then set a million computers to work on it. There is now a danger of nation state attacks,” says Dr Tippett. “The vast majority of terrorist organisations will use bots.”

I keep reading that last sentence, and wonder if “bots” is just a typo for “bombs.” And the line about bot networks being used to crack nuclear launch codes is nothing more than fearmongering.

Clearly I need to write an essay on bot networks.

Posted on May 17, 2005 at 3:33 PM

Combating Spam

Spam is back in the news, and it has a new name. This time it’s voice-over-IP spam, and it has the clever name of “spit” (spam over Internet telephony). Spit has the potential to completely ruin VoIP. No one is going to install the system if they’re going to get dozens of calls a day from audio spammers. Or, at least, they’re only going to accept phone calls from a white list of previously known callers.

VoIP spam joins the ranks of e-mail spam, Usenet newsgroup spam, instant message spam, cell phone text message spam, and blog comment spam. And, if you think broadly enough, these computer-network spam delivery mechanisms join the ranks of computer telemarketing (phone spam), junk mail (paper spam), billboards (visual space spam), and cars driving through town with megaphones (audio spam). It’s all basically the same thing—unsolicited marketing messages—and only by understanding the problem at this level of generality can we discuss solutions.

In general, the goal of advertising is to influence people. Usually it’s to influence people to purchase a product, but it could just as easily be to influence people to support a particular political candidate or position. Advertising does this by implanting a marketing message into the brain of the recipient. The mechanism of implantation is simply a tactic.

Tactics for unsolicited marketing messages rise and fall in popularity based on their cost and benefit. If the benefit is significant, people are willing to spend more. If the benefit is small, people will only do it if it is cheap. A 30-second prime-time television ad costs 1.8 cents per adult viewer, a full-page color magazine ad about 0.9 cents per reader. A highway billboard costs 0.21 cents per car. Direct mail is the most expensive, at over 50 cents per third-class letter mailed. (That’s why targeted mailing lists are so valuable; they increase the per-piece benefit.)

Spam is such a common tactic not because it’s particularly effective; the response rates for spam are very low. It’s common because it’s ridiculously cheap. Typically, spammers charge less than a hundredth of a cent per e-mail. (And that number is just what spamming houses charge their customers to deliver spam; if you’re a clever hacker, you can build your own spam network for much less money.) If it is worth $10 for you to successfully influence one person—to buy your product, vote for your guy, whatever—then you only need a 1 in 100,000 success rate. You can market really marginal products with spam.
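
To make the arithmetic concrete, here is a rough sketch (in Python) of the break-even response rate for several channels, using the per-message prices quoted above and the hypothetical $10 value of influencing one person:

```python
# Break-even response rate: the fraction of recipients who must be
# influenced for an unsolicited message to pay for itself.
# Per-message prices are the figures quoted in the text; the $10
# value per conversion is the hypothetical used above.

def break_even_rate(cost_per_message: float, value_per_conversion: float) -> float:
    """Minimum success rate at which the campaign breaks even."""
    return cost_per_message / value_per_conversion

VALUE_PER_CONVERSION = 10.00      # worth $10 to influence one person

channels = {
    "spam e-mail": 0.0001,        # less than a hundredth of a cent per message
    "prime-time TV": 0.018,       # 1.8 cents per adult viewer
    "direct mail": 0.50,          # over 50 cents per letter
}

for name, cost in channels.items():
    rate = break_even_rate(cost, VALUE_PER_CONVERSION)
    print(f"{name}: break even at 1 in {round(1 / rate):,} recipients")
```

Run it and spam breaks even at 1 in 100,000 recipients, while direct mail needs roughly 1 in 20.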

So far, so good. But the cost/benefit calculation is missing a component: the “cost” of annoying people. Everyone who is not influenced by the marketing message is annoyed to some degree. The advertiser pays a partial cost for this annoyance: people might boycott his product. But most of the time they don’t, and the cost of the advertising is paid by the recipient: the beauty of the landscape is ruined by the billboard, dinner is disrupted by a telemarketer, spam costs money to ship around the Internet and time to wade through, and so on. (Note that I am using “cost” very generally here, and not just monetarily. Time and happiness are both costs.)

This is why spam is so bad. For each e-mail, the spammer pays a cost and receives a benefit. But there is an additional cost, paid by the recipient, and because so much spam is unwanted, that additional cost is huge—and it’s a cost the spammer never sees. If spammers could be made to bear the total cost of spam, its level would be closer to what society finds acceptable.
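
The size of that unseen cost is easy to illustrate with a back-of-the-envelope calculation; the per-recipient annoyance figure below is an assumption for illustration, not a measurement:

```python
# Rough illustration of the externality: the spammer's private cost
# versus the total cost once recipients' wasted time is counted.
# The recipient-side figure is a hypothetical assumption.

messages_sent = 1_000_000
cost_per_message = 0.0001          # what the spammer pays (from the text)
cost_per_recipient = 0.01          # assumed value of a recipient's wasted time

private_cost = messages_sent * cost_per_message      # cost the spammer sees
external_cost = messages_sent * cost_per_recipient   # cost the spammer never sees

print(f"spammer pays:   ${private_cost:,.0f}")
print(f"recipients pay: ${external_cost:,.0f}")
print(f"unseen cost is {external_cost / private_cost:.0f}x the spammer's own")
```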

This economic analysis is important, because it’s the only way to understand how effective different solutions will be. This is an economic problem, and the solutions need to change the fundamental economics. (The analysis is largely the same for VoIP spam, Usenet newsgroup spam, blog comment spam, and so on.)

The best solutions raise the cost of spam. Spam filters raise the cost by increasing the amount of spam someone needs to send before anyone will read it. If 99% of all spam is filtered into the trash, then sending spam becomes 100 times more expensive. This is also the idea behind white lists (lists of senders a user is willing to accept e-mail from) and blacklists (lists of senders a user is not willing to accept e-mail from).
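
As a toy illustration of where white lists and blacklists sit in a filtering pipeline (the addresses and the crude content check are invented for the example, not taken from any real product):

```python
# Toy recipient-side filter: white list first, then blacklist, then a
# stand-in content check. Addresses and scoring are invented for
# illustration only.

WHITELIST = {"friend@example.org", "list@crypto-gram.example"}
BLACKLIST = {"bulk@spam.example.net"}

def classify(sender: str, body: str) -> str:
    if sender in WHITELIST:
        return "inbox"
    if sender in BLACKLIST:
        return "trash"
    # Stand-in for a real content filter (Bayesian, rule-based, etc.)
    spammy_phrases = ("viagra", "act now", "winner")
    score = sum(phrase in body.lower() for phrase in spammy_phrases)
    return "trash" if score >= 2 else "inbox"

print(classify("friend@example.org", "lunch tomorrow?"))                 # inbox
print(classify("stranger@example.com", "You are a WINNER, act now!"))    # trash
```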

Filtering doesn’t have to happen only at the recipient’s end. It can be implemented within the network to clean up spam, or at the sender. Several ISPs are already filtering outgoing e-mail for spam, and the trend will increase.

Anti-spam laws raise the cost of spam to an intolerable level; no one wants to go to jail for spamming. We’ve already seen some convictions in the U.S. Unfortunately, this only works when the spammer is within the reach of the law, and is less effective against criminals who are using spam as a mechanism to commit fraud.

Other proposed solutions try to impose direct costs on e-mail senders. I have seen proposals for e-mail “postage,” either for every e-mail sent or for every e-mail above a reasonable threshold. I have seen proposals where the sender of an e-mail posts a small bond, which the receiver can cash if the e-mail is spam. There are other proposals that involve “computational puzzles”: time-consuming tasks the sender’s computer must perform, unnoticeable to someone who is sending e-mail normally, but too much for someone sending e-mail in bulk. These solutions generally involve re-engineering the Internet, something that is not done lightly, and hence are in the discussion stages only.
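
The computational-puzzle idea is usually described along the lines of hashcash: before sending, the sender’s machine must find a token whose hash has a certain number of leading zero bits, which is cheap once per message but expensive a million times over. A minimal sketch, with an illustrative difficulty setting and stamp format rather than any deployed standard:

```python
import hashlib
from itertools import count

DIFFICULTY_BITS = 20   # illustrative; real proposals tune this to ~1 second of work

def mint_stamp(message_id: str) -> str:
    """Find a counter such that SHA-256(message_id:counter) has the
    required number of leading zero bits. Costly for the sender."""
    target = 1 << (256 - DIFFICULTY_BITS)
    for counter in count():
        stamp = f"{message_id}:{counter}"
        digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
        if digest < target:
            return stamp

def verify_stamp(stamp: str, message_id: str) -> bool:
    """Checking a stamp is a single hash; cheap for the recipient."""
    if not stamp.startswith(message_id + ":"):
        return False
    digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
    return digest < (1 << (256 - DIFFICULTY_BITS))

stamp = mint_stamp("msg-2005-05-13-0001")
print(stamp, verify_stamp(stamp, "msg-2005-05-13-0001"))
```

The asymmetry is the point: minting a stamp takes on the order of a million hash attempts at this setting, while verifying one takes a single hash.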

All of these solutions work to a degree, and we end up with an arms race. Anti-spam products block a certain type of spam. Spammers invent a tactic that gets around those products. Then the products block that spam. Then the spammers invent yet another type of spam. And so on.

Blacklisting spammer sites forced the spammers to disguise the origin of spam e-mail. People recognizing e-mail from people they knew, and other anti-spam measures, forced spammers to hack into innocent machines and use them as launching pads. Scanning millions of e-mails looking for identical bulk spam forced spammers to individualize each spam message. Semantic spam detection forced spammers to design even more clever spam. And so on. Each defense is met with yet another attack, and each attack is met with yet another defense.

Remember that when you think about host identification, or postage, as an anti-spam measure. Spammers don’t care about tactics; they want to send their e-mail. Techniques like this will simply force spammers to rely more on hacked innocent machines. As long as the underlying computers are insecure, we can’t prevent spammers from sending.

This is the problem with another potential solution: re-engineering the Internet to prohibit the forging of e-mail headers. This would make it easier for spam detection software to detect spamming IP addresses, but spammers would just use hacked machines instead of their own computers.

Honestly, there’s no end in sight for the spam arms race. Even so, spam is one of computer security’s success stories. The current crop of anti-spam products works. I get almost no spam, and very few legitimate e-mails end up in my spam trap. I wish they would work better—Crypto-Gram is occasionally classified as spam by one service or another, for example—but they’re working pretty well. It’ll be a long time before spam stops clogging up the Internet, but at least we don’t have to look at it.

Posted on May 13, 2005 at 9:47 AM

Tracking Bot Networks

This is a fascinating piece of research on bot networks: networks of compromised computers that can be remotely controlled by an attacker. The paper details how bots and bot networks work, who uses them, how they are used, and how to track them.

From the conclusion:

In this paper we have attempted to demonstrate how honeynets can help us understand how botnets work, the threat they pose, and how attackers control them. Our research shows that some attackers are highly skilled and organized, potentially belonging to well organized crime structures. Leveraging the power of several thousand bots, it is viable to take down almost any website or network instantly. Even in unskilled hands, it should be obvious that botnets are a loaded and powerful weapon. Since botnets pose such a powerful threat, we need a variety of mechanisms to counter it.

Decentralized providers like Akamai can offer some redundancy here, but very large botnets can also pose a severe threat even against this redundancy. Taking down of Akamai would impact very large organizations and companies, a presumably high value target for certain organizations or individuals. We are currently not aware of any botnet usage to harm military or government institutions, but time will tell if this persists.

In the future, we hope to develop more advanced honeypots that help us to gather information about threats such as botnets. Examples include Client honeypots that actively participate in networks (e.g. by crawling the web, idling in IRC channels, or using P2P-networks) or modify honeypots so that they capture malware and send it to anti-virus vendors for further analysis. As threats continue to adapt and change, so must the security community.
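
The paper’s own tooling isn’t reproduced here, but the “idling in IRC channels” idea it mentions can be sketched in a few lines: a minimal client that registers, joins a channel under observation, and simply logs everything said, which is roughly how IRC-based command-and-control traffic gets captured. The server, channel, and nickname below are placeholders, and this is an illustration rather than the researchers’ actual honeypot.

```python
# Minimal sketch of an IRC-idling observer, in the spirit of the
# client honeypots described above. Server, channel, and nickname
# are placeholders.
import socket

SERVER, PORT = "irc.example.net", 6667    # placeholder server under observation
CHANNEL, NICK = "#observed", "idle-observer"

def send(sock: socket.socket, line: str) -> None:
    sock.sendall((line + "\r\n").encode())

with socket.create_connection((SERVER, PORT)) as sock:
    send(sock, f"NICK {NICK}")
    send(sock, f"USER {NICK} 0 * :{NICK}")

    buffer = b""
    while True:
        data = sock.recv(4096)
        if not data:                      # server closed the connection
            break
        buffer += data
        *lines, buffer = buffer.split(b"\r\n")
        for raw in lines:
            line = raw.decode(errors="replace")
            if line.startswith("PING"):   # answer keep-alives so we stay connected
                send(sock, "PONG" + line[4:])
            elif " 001 " in line:         # registration complete: join and idle
                send(sock, f"JOIN {CHANNEL}")
            elif "PRIVMSG" in line:       # log everything said in the channel
                print(line)
```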

Posted on March 14, 2005 at 10:46 AM
