Comments

Jason BurnettApril 18, 2013 12:11 PM

While this is pretty horrific, I do think it's very clever. It's starting to seem like the safest approach to malware is to assume you're going to get infected, it's just a matter of when, and then make plans accordingly. (Perhaps people more knowledgeable on the subject than I have been of this mindset for a while now.)

gomezApril 18, 2013 1:28 PM

Won't the data persist on journaled filesystems, if the infected files are modified?

I don't know much about how viruses are implemented, but perhaps a defence could be for the operating system to randomly move executable files to different parts of the hard drive?

AnupApril 18, 2013 1:47 PM

I wonder how long it will be before this increasingly self-aware code turns into something that the creator cannot control, even if only because of some bug he or she did not catch. Maybe it is time we had something similar to the three laws of robotics, but of course, with malware, it is a moot point.

RobApril 18, 2013 2:10 PM

I would think most large companies are running a network IDS, such as FireEye, that saves binaries and raises alerts.

WinterApril 18, 2013 2:44 PM

That arms race is starting to look like its biological analogue.

Maybe it is time to look closer at how unicellular organisms, plants, and animals deal with organic viruses for ideas?

Northern RealistApril 18, 2013 2:54 PM

@winter - check back issues of Scientific American and of the IBM Journal -- they were looking at this approach long ago...

Matt HollingsworthApril 18, 2013 3:24 PM

Of course, many people who protect themselves using read-only bootable drives and backups miss the problem of malware built from known exploits or, worse yet, BIOS- or hardware-level hacks that are not fixed by re-installing or booting from read-only media...

Repentant EvildoerApril 18, 2013 6:06 PM

I worked as an assembly language programmer at an adware company from 2002-2004.

We used, or considered using, just such methods as this. We routinely deleted or hid our own components, or made them load as printer drivers. We randomly shuffled the innards of our loader so anti-virus programs wouldn't recognize it as it was being downloaded.
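A toy sketch of that loader-shuffling idea, purely illustrative (nothing here resembles the actual adware): re-pack the same payload under a fresh random XOR key each "build", so the stored bytes differ every time while the unpacked content is identical, defeating naive fixed-signature matching.

```python
import os

def pack(payload: bytes) -> bytes:
    # Pick a fresh non-zero single-byte key per "build" and prepend it.
    key = os.urandom(1)[0] | 1
    return bytes([key]) + bytes(b ^ key for b in payload)

def unpack(blob: bytes) -> bytes:
    # Recover the key from the first byte and reverse the XOR.
    key = blob[0]
    return bytes(b ^ key for b in blob[1:])

payload = b"example loader body"
build_a, build_b = pack(payload), pack(payload)
# The two builds usually differ byte-for-byte, yet unpack identically:
print(unpack(build_a) == payload)   # True
print(unpack(build_b) == payload)   # True
```

Real polymorphic loaders were far more elaborate (instruction reordering, junk insertion), but the principle is the same: vary the bytes, preserve the behaviour.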

And there were many worse things that we were asked to do but refused, such as pretending we were a deleted file, or patching the OS so we wouldn't show up in directory searches.

The company was eventually shut down by the state Attorney General.

Clive RobinsonApril 18, 2013 6:17 PM

@ Jason Burnett,

Perhaps people more knowledgeable on the subject than I have been of this mindset for a while now.

The simple answer is yes.

For instance, some OSs have "append only" file systems, where it is possible to read and write data but not delete or overwrite existing data.
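The append-only idea can be seen in miniature at the file-descriptor level. This sketch relies only on standard POSIX `O_APPEND` behaviour, not on any particular OS's append-only filesystem: every write lands at end-of-file, so a seek-and-overwrite attempt cannot clobber existing data through that descriptor.

```python
import os
import tempfile

# With O_APPEND, the kernel moves the offset to end-of-file before every
# write, so existing bytes cannot be overwritten via this descriptor.
# (A true append-only filesystem, or `chattr +a` on Linux, enforces the
# same property for all writers; this is just the per-descriptor analogue.)
path = os.path.join(tempfile.mkdtemp(), "audit.log")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
os.write(fd, b"original log line\n")
os.lseek(fd, 0, os.SEEK_SET)        # attempt to rewind and overwrite...
os.write(fd, b"attacker edit\n")    # ...but the write still goes to the end
os.close(fd)
with open(path, "rb") as f:
    print(f.read())                 # b'original log line\nattacker edit\n'
```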

Another example is an OS with file systems with strong file-type separation, where it then becomes much easier to watch the file system for unexpected or unauthorized changes. Strong separation in file systems has similar advantages to strong separation in system memory and functionality, and you can almost "type safe" it the way you do variables in some high-level languages.

As a technique it predates most security concerns, as it was originally designed to optimize system performance. Such a useful technique has almost been lost: although it can be found in dusty journals from the 1960s, it has been "re-invented" by security researchers who were not aware of it because their literature searches tend to use the wrong keywords.

Unsurprisingly, there are many other techniques from the '60s and '70s that have likewise become, in effect, "lost knowledge" and that are effective in reducing, and easing the detection of, what we now call "malware" in its various forms.

And guess what: some are being thought up anew by researchers who were not even born at the time (and in some cases neither were their parents...).

Nick P, who posts regularly on this blog, appears to have made searching for this "old knowledge" a bit of a hobby, and has in the past posted links to old journals that are now online.

Sadly, whilst little of our new knowledge is original and the old knowledge is available, little if any of it has made it into commodity operating systems, where it would make very significant improvements in security.

As I've occasionally said in the past, most of our commodity OSs need to start again. That is, they really, really need to ditch backwards compatibility, which is in oh so many ways responsible, one way or another, for our current security ills.

ThomasApril 18, 2013 6:45 PM

@Clive

Ditching backwards compatibility will happen around the same time the US converts to metric.

Everyone knows it's necessary, should have happened years ago and will only get more expensive the longer we delay.

It just isn't going to happen.

AaronApril 18, 2013 9:25 PM

There's a space at the end of the href attribute in your URL that may cause it not to load when clicked in some situations. Just to let you know.

I'm No Superman!April 19, 2013 1:47 AM

http://www.buggedplanet.info

There is so much malware in the wild, with web pages telling you how to disguise it from sites like VirusTotal, even if you check files against sites like VT and the anti-malware programs you are running, there's no hope to be clean at all. There's always the chance you're rooted either through malicious hardware design or through something hiding in firmware.

NooiyApril 19, 2013 3:20 AM

Nothing new here. Erasing tracks is a common function in malware. Even its other attributes, like password stealing and so on, seem very dull (using Base64 in a URL is like using clear text).
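Nooiy's point about Base64 is easy to demonstrate: it is an encoding, not encryption, and anyone who captures the URL can reverse it (the credential string below is made up):

```python
import base64

# Base64 only re-encodes bytes; it hides nothing from anyone who looks.
stolen = b"user=alice&pass=hunter2"            # hypothetical exfiltrated data
token = base64.b64encode(stolen).decode()      # what would appear in the URL
print(token)                                   # opaque-looking, but...
print(base64.b64decode(token))                 # ...trivially reversed
```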

Just another PR piece transforming a simple piece of malware into a "weapon"...

Clive RobinsonApril 19, 2013 5:04 AM

@ Repentant Evildoer,

We used, or considered using, just such methods as this. We routinely deleted or hid our own components, or made them load as printer drivers. We randomly shuffled the innards of our loader so anti-virus programs wouldn't recognize it as it was being downloaded...

Hmm, you were one of the "Guns for Hire" I used to talk about at that time, when much of the security industry and academic researchers were trying to portray such malware activities as "uber script kiddies" doing such things for "ego food" rather than for monetary gain.

I could be petulant and go "Nah Nah told you so!" to those industry "pundits / gurus" but to be quite honest I'd rather just smile politely and carry on making my predictions (and hopefully living long enough ;-) to see if I call them right or not[1].

And tucked away in the article Bruce links to is a little snippet that is not in the MS posting, which is the bit about checking for user activity at the keyboard etc.

That is, it's checking for a real live user machine, which many would take to mean it's not a server. But that would be a very limited viewpoint: what is not mentioned is that this behaviour also acts as a honeypot/honeynet filter, making it less likely that the malware will get snagged.

Some years ago I turned my mind to how to avoid honeypot/honeynet systems, and user activity, or the lack thereof, was my first thought, but I considered it "too obvious".

However, another thought that I discussed on this blog was how to remotely detect such systems in as camouflaged a way as possible.

The realisation you need is not to look for user activity, which can be faked with scripts if the admin is smart enough, but for something else.

Which is where the old "follow the money" argument comes into play.

All honey-trap systems use resources that are expensive in various dimensions: system purchase, power consumption, environmental cost (i.e. physical space, air con, etc.), maintenance, insurance, and so on. To reduce costs, nearly all honey-trap systems use virtual machine technology, and this can be seen as an Achilles' heel of honey traps.

So the question then becomes not how to detect users but how to detect virtual machines.

After a little thought you realise you are looking for a distinguisher that shows, from a remote location, that two or more virtual machines share the same hardware...

Well, there are several ways, but my criterion was how to camouflage what you are doing from the honey-trap admin, who is for obvious reasons paying very close attention to their logs, and in all probability also recording every packet that hits the network in ways that cannot be remotely detected. So the method must not only appear in the logs as something the admin expects and will thus ignore, but must also not contain any differentiator like a precious new zero-day exploit.

There are several ways this can be done, but most leak some information about what you are doing if the person looking at the captured data is smart enough.

Thus the probe you use must at the very least follow the "Duck Principle" (if it looks like a duck, waddles like a duck and quacks like a duck, why would you assume it's a goose?). So you need to find a suitable duck, and I decided on a nice noisy "brain-dead script-kiddie attack" that does a simple port test or equivalent, looking just like netcat etc. in a simple script. Obviously, whatever method you choose has to get above the access bar set by the honey-trap admin.

The real difference is that the probe is carefully timed, and the responses from the VMs are likewise carefully timed. This is because the VMs all share the same system clock, and thus any drift in it can be seen in the VMs' responses across the network via packet timing information. So if you see a bunch of machines that all have the same clock drift, the chances are they are all on the same base hardware and are thus VMs, and thus best avoided if you don't want your latest valuable zero day to be caught and analysed.
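Clive's timing distinguisher can be sketched numerically. Assuming we have, for each probed host, pairs of (local send time, remote timestamp), the drift is the slope of the remote-minus-local offset over local time; hosts sharing one hardware clock show the same slope even at different offsets. Everything below, including the 50 ppm drift rate, is synthetic data for illustration:

```python
def clock_drift(samples):
    """Least-squares slope of the (remote - local) offset vs. local time.

    samples: list of (local_send_time, remote_timestamp) pairs.
    Returns the remote clock's drift in seconds per local second.
    """
    pts = [(t, r - t) for t, r in samples]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

# Two probed "hosts" whose timestamps drift at the same 50 ppm rate,
# differing only in clock offset -- the signature of VMs on one box:
host_a = [(t, t * 1.00005) for t in range(0, 600, 60)]
host_b = [(t, 3.0 + t * 1.00005) for t in range(0, 600, 60)]
print(abs(clock_drift(host_a) - clock_drift(host_b)) < 1e-9)   # True
```

This is essentially the remote clock-skew fingerprinting idea that was later studied academically using TCP timestamp options.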

I actually emailed one or two people in the Honeynet Project to warn them to be alert for such a probing attack, or to do what would be required to decouple the VMs sufficiently from the system clock that a network timing probe would take prohibitively long. The response from those I emailed was, shall we say, completely underwhelming.

Perhaps now that they realise malware writers are looking to detect honey-trap machines, they will take the issue a little more seriously...

[1] For those who want to make their own predictions, two pieces of advice and a filter. Firstly, "follow the money": ultimately it is "the root of all evil" that motivates people to tread that path to supposed riches which is the basis of the American Dream. Secondly, study history in a broad context; as has been observed, "people who do not learn from history have condemned themselves to re-live it", or, more appropriately, new tech allows old tricks new places to be performed. So find an old trick and mentally re-work it for the new environment, and if you can see how money can be made from it, there's a high degree of certainty that somebody will actually do it. When is the question, and that's where you apply the filter: it will happen most probably when the easier money has been nearly all consumed and the smarter people see it's time to move on to a new trick (i.e. the principle of the low-hanging fruit).

Repentant EvildoerApril 19, 2013 5:50 AM

@ Clive Robinson,

Our code silently declined to install on any machine which had a debugger installed. This was not difficult to discover, and anyway the only one we really wanted to avoid was SoftICE.
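Debugger detection is platform-specific (SoftICE was commonly spotted by probing its device names, e.g. trying to open \\.\SICE). As a rough, portable analogue of the idea, here is how a Python-level process can notice a tracer attached through the interpreter's own hook; this is illustrative only, and native code would query the OS instead:

```python
import sys

def tracer_attached() -> bool:
    # Debuggers and profilers at the Python level must install a trace
    # hook; its mere presence is visible to the code being "watched".
    return sys.gettrace() is not None

print(tracer_attached())                        # False under a normal run

sys.settrace(lambda frame, event, arg: None)    # simulate an attached tracer
print(tracer_attached())                        # True
sys.settrace(None)                              # detach again
```

The same logic, inverted, is what the adware did: refuse to run when the check comes back positive.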

We didn't worry much about avoiding honeypots, because distributors of our ad client weren't supposed to create drive-by-download wrappers. But since they had financial incentives to do so (they got paid per install), they did.

This cost us money and exposed us to accusations of illegal activity, so we set up honeypots of our own. They were exactly as you describe, VMs running unpatched versions of Windows XP (i.e., pre-SP1). We had the machines semi-randomly crawl the web until our monitoring daemon noticed an unknown process starting.

I say "semi-randomly" because we knew where to start: primarily sites purporting to link to adult material, secondarily sites offering song lyrics.

On top of all this there was a silent war going on between adware vendors. It began when we discovered that a competitor's client was uninstalling ours, against which we retaliated in kind; from there it escalated as you would expect. I spent many hours dissecting competitors' code, some of which was much nastier than ours. (This was before the Sony rootkit debacle made many of these methods common knowledge.)

Bugs in this kind of low-level code could (and a few times did) make the user's PC functionally useless, requiring a reinstall of the OS to recover.

NobodyApril 19, 2013 8:54 AM

Yeah, I have to admit, this is nothing new. Other articles get more into the technical details where there may be something new, but this concept is one of the first ones a malware developer who was serious about stealth would come to understand.

Probably a lot of the systems using this kind of functionality simply are not caught.


@Clive R

"Hmm, you were one of the "Guns for Hire" I used to talk about at that time, when much of the security industry and academic researchers were trying to portray such malware activities as "uber script kiddies" doing such things for "ego food" rather than for monetary gain.
I could be petulant and go "Nah Nah told you so!" to those industry "pundits / gurus" but to be quite honest I'd rather just smile politely and carry on making my predictions (and hopefully living long enough ;-) to see if I call them right or not[1]."

The price of consistently coming up with bright ideas is that you have to remain at least somewhat anonymous while doing so, and just shrug off the patent trolls of the world... or others who have a far more exhaustible supply of good ideas and bicker about "owning" them.

The pleasure is that one sees the future, and can be a profound, though unseen, influence.


ModeratorApril 19, 2013 10:16 AM

Please save off-topic news for the squid thread. There'll be one along later today. Or if you can't wait, you could use last week's.

SnowbodyApril 19, 2013 12:56 PM

Didn't some DOS TSR viruses move themselves around in memory when they detected an antivirus running?

AlexApril 22, 2013 1:53 PM

Maybe I'm dating myself, but this was common practice back in the OLD hacker days. You'd write code, usually something to intercept passwords or track console activity, have it dump the goods somewhere, then clean up the mess behind you.

Granted, it was a much more manual process back then, but we were also only interested in targeting a particular system out of curiosity, not targeting millions of computers for profit.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc..