Malware Tries to Detect Test Environment

A new piece of malware tries to detect whether it's running in a virtual machine or sandboxed test environment by looking for signs of normal use, and it doesn't execute if those signs are absent.

From a news article:

A typical test environment consists of a fresh Windows computer image loaded into a VM environment. The OS image usually lacks documents and other telltale signs of real world use, Fenton said. The malware sample that Fenton found…looks for existing documents on targeted PCs.

If no Microsoft Word documents are found, the VBA macro code execution terminates, shielding the malware from automated analysis and detection. Alternately, if more than two Word documents are found on the targeted system, the macro will download and install the malware payload.
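The gating logic described above is easy to sketch. A minimal recreation in Python (the actual malware was a VBA macro; the thresholds follow the article's description, and all names here are invented for illustration):

```python
from pathlib import Path

def decide_action(doc_count: int) -> str:
    """Gate described in the article: bail out in document-poor
    environments (likely a fresh sandbox image), proceed otherwise."""
    if doc_count == 0:
        return "terminate"          # no documents: assume analysis VM
    if doc_count > 2:
        return "download_payload"   # lived-in machine: proceed
    return "stay_dormant"           # ambiguous: do nothing

def count_word_docs(root: str) -> int:
    """Count Word documents under a directory tree."""
    return sum(1 for p in Path(root).rglob("*")
               if p.suffix.lower() in (".doc", ".docx"))
```

From the defender's side, the same check shows why a bare VM image fails the "signs of real-world use" test.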

EDITED TO ADD (10/16): Some details.

Posted on September 28, 2016 at 6:34 AM • 32 Comments


Clive Robinson September 28, 2016 7:04 AM

When all is said and done, the way it detects is not new. It’s been discussed before with Honey-Nets and the like.

And as with Honey-Nets, it's the investigator's laziness in implementing an environment that gives the game away.

This is not the first time that such “check before you fly” techniques have been used.

If you think back, you should remember that after Stuxnet escaped, became known, and subsequently got publicly dissected, some malware started taking precautions. In at least one case this meant using payload encryption to protect its contents. Effectively the payload encryption was “keyed” by information on the target computer (which went into a very CPU-time-expensive algorithm).

In some ways this is like the Electronic Warfare ECM / ECCM … EC…CM escalation, with the advantage lying with the malware, not the investigators.

There are a number of downsides for the malware. One is that the detection methods quite rapidly grow to a significant size, and this alone may well trigger Intrusion Detection Systems. Another is that, even though encrypted, the detection code or payload will have a recognisable network signature, or a polymorphic system that can be recognised. But the one that will stand out, or should do if the malware author is being cautious, is the CPU load on a target system.

JG4 September 28, 2016 7:13 AM

Another nice example of malware sensing its environment was VW's engine controller recognizing that it was being subjected to a standardized test regime.

would you expect any less of a company that used Nazi slave labor?

Dimitrios Andrakakis September 28, 2016 7:55 AM

@JG4 “would you expect any less of a company that used Nazi slave labor?”

Seriously? I come from a country that suffered a lot during this time, and even I don’t think WW2-stories have anything to do with anything.


Ssieth September 28, 2016 9:23 AM

Yes – this is why test environments are generally best set up as virtual copies of real machines with as few changes as necessary to preserve data security. Not always easy but…

Andrew September 28, 2016 9:39 AM

This is not really new; malware has tried to detect debug environments since its very first versions, more than 20 years ago: debug mode in the beginning, virtual machines now.
The approach is kind of weird for this scope; maybe they were looking for something else too, like how important the documents are for a future cryptolocker attack.

Evan Th September 28, 2016 10:27 AM

@Andrew, look at the article – the malware was running as a Word macro, and it was inspecting Word’s recent file list to see how many documents it’d opened. That’s a fairly nice approach, actually.
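Counting the recent-file list is straightforward to sketch. A hedged Python illustration (not the malware's actual code, which was VBA): modern Office versions keep Word's MRU under a registry key like the one below, with values named `Item 1`, `Item 2`, and so on; the exact path varies by Office version, and this sketch simply returns `None` when not on Windows.

```python
import sys

def count_word_mru(office_version: str = "16.0"):
    """Count entries in Word's recent-file (MRU) registry list.
    Returns None when not on Windows, where the registry doesn't exist."""
    if sys.platform != "win32":
        return None
    import winreg  # Windows-only stdlib module
    path = rf"Software\Microsoft\Office\{office_version}\Word\File MRU"
    try:
        key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, path)
    except OSError:
        return 0  # key absent: Word has never opened a document here
    count, i = 0, 0
    while True:
        try:
            name, _, _ = winreg.EnumValue(key, i)
        except OSError:
            break  # ran out of values
        if name.startswith("Item"):
            count += 1
        i += 1
    winreg.CloseKey(key)
    return count
```

A macro applying the article's rule would then compare this count against its threshold of two.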

Randal September 28, 2016 11:04 AM

Perhaps development tools which produce an array of fake documents would be useful. There are already tools which create large numbers of test accounts and other server artifacts.
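That idea is simple to prototype. A sketch of a decoy-document generator (all names invented; a production tool would emit valid OOXML with varied sizes and timestamps, since identical same-dated stubs would themselves be a sandbox tell):

```python
from pathlib import Path

def plant_decoy_docs(target_dir: str, count: int = 10) -> list:
    """Create placeholder .docx files with plausible names and nonzero
    content, to make a test image look lived-in."""
    stems = ["budget_2016", "meeting_notes", "draft_report",
             "invoice_0042", "travel_plan"]
    created = []
    for i in range(count):
        p = Path(target_dir) / f"{stems[i % len(stems)]}_{i}.docx"
        # .docx files are ZIP containers; start with the ZIP magic bytes
        p.write_bytes(b"PK\x03\x04" + bytes(64))
        created.append(p)
    return created
```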

Andrew September 28, 2016 11:27 AM

@Evan Th

Indeed, I was reading about it somewhere else before, and they claimed that the malware was counting the number of words in the documents.


Anon September 28, 2016 11:30 AM

Bruce, I hate to say this, but the technique to try to detect VM environments in viruses is about 20 years old now, just as Andrew said (and not just through detecting a VM’s debug mode, but by searching for other signs of actual computer usage).

The only thing new here is that it’s being done rather elegantly from a VBA script.

Scott Lewis September 28, 2016 11:51 AM

There are a lot of enterprises running virtualized file servers and giving employees virtual desktops. In the very short run, hiding your virus or trojan because you detected you are running on a VM will limit the spread quite a bit, so this is likely a short term trend.

Detecting the lack of recently opened files in Word is pretty brilliant though.

Karl Gruber September 28, 2016 2:07 PM

TrendMicro’s Deep Discovery product lets you upload copies of your COEs to the sandbox environment, so you could populate them with docs to defeat this checking.

This movement to behavior based detection rather than signature and traditional heuristics offers real benefits for defeating zero day malware.

Grauhut September 28, 2016 4:24 PM

Classical literature:

“Red Pill… or how to detect VMM using (almost) one CPU instruction
Joanna Rutkowska
November 2004

Swallowing the Red Pill is more or less equivalent to the following code (returns non zero when in Matrix):

 int swallow_redpill () {
   unsigned char m[2+4], rpill[] = "\x0f\x01\x0d\x00\x00\x00\x00\xc3";
   *((unsigned*)&rpill[3]) = (unsigned)m;
   ((void(*)())&rpill)();
   return (m[5]>0xd0) ? 1 : 0;
 }

The heart of this code is actually the SIDT instruction (encoded as 0F010D[addr]), which stores the contents of the interrupt descriptor table register (IDTR) in the destination operand, which is actually a memory location…”

WhiskersInMenlo September 28, 2016 6:05 PM

This seems to be a big hint that many of us should work in a VM.
1) copy known good VM to Fresh_VM.
2) start VM using Fresh_VM image.
3) Transfer files to someplace else as needed.
4) Stop VM and trash Fresh_VM. Heck, kill -9 the VM.

Variations are possible…

Lawrence Pingree September 28, 2016 8:49 PM

This is not new; there have been anti-evasion capabilities for quite some time now that check environmental factors to determine whether to execute payload downloads, etc. Minerva Labs is a provider that uses these factors to stop malware from functioning, leveraging this exact interaction to defeat malware.

Anon September 29, 2016 12:35 AM

Whilst this behavior is nothing new, Windows 10 curiously displays VM status in Task Manager, so one would presume that checking if running in a VM just became a whole lot easier.
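The Windows Task Manager point aside, hypervisors announce themselves in other places too. For comparison, the same class of check from the guest side on Linux, where the DMI/SMBIOS vendor strings give the game away (the vendor list below is a partial, illustrative set):

```python
# Partial list of hypervisor vendor strings as they appear in DMI data;
# "innotek GmbH" is VirtualBox, "Microsoft Corporation" covers Hyper-V.
HYPERVISOR_VENDORS = ("vmware", "qemu", "innotek", "xen",
                      "microsoft corporation", "kvm")

def looks_like_vm(sys_vendor: str) -> bool:
    """Classify a DMI system-vendor string, e.g. the contents of
    /sys/class/dmi/id/sys_vendor on a Linux guest."""
    return sys_vendor.strip().lower().startswith(HYPERVISOR_VENDORS)
```

Countermeasures exist (many hypervisors let you override these strings), which is exactly the escalation Clive describes above.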

The best way to defeat malware is to open untrusted documents in a throw-away instance that has no persistence.

About the only people this behavior really affects are those analyzing it. It only prevents dynamic analysis however, not static.

There will never be a perfect malware that can evade detection due to the nature of software.

Another way of testing could be to watch the VM from the outside. Not a new idea, but no malware can evade what it can’t see.

Clive Robinson September 29, 2016 1:28 AM

@ Anon,

There will never be a perfect malware that can evade detection due to the nature of software.

And the nature of its behaviour.

The problem however also applies to the actual CPU the malware runs on. That is, the CPU cannot see malware as part of its inherent characteristics. It can ONLY see what's in memory, and only if the code that examines memory works correctly, and thus has not been “got at” by malware or an insider.

Thus you need a way to check, –that is outside of the vulnerable CPU– which gives rise to the notion of a second less vulnerable set of logic to perform instrumentation on the vulnerable CPU.

r September 29, 2016 1:46 AM

“There will never be a perfect malware that can evade detection due to the nature of software”

There will never be a perfect system that can detect evasion due to the nature of software. (and standardized testing procedures)

This is where malware that doesn't set off any alarms needs to be investigated; often it isn't. Things that have been “discovered” often went unnoticed for years before they were finally found. It's not enough. There are not enough eyes on things that misrepresent themselves, or even on things that don't. With triggers based on interaction or environment, unless you're operating a very convincing environment, your automated analysis will only go so far.

Did we ever decrypt the environment-locked modules of Stuxnet?

No system is perfect, the best stuff has one foot in crowd-sourcing – and public tips – not merely automated discovery and analysis.

Careful not to buy into the snakeoil of “absolute security”.

JPA September 30, 2016 12:33 AM

@Clive “Thus you need a way to check, –that is outside of the vulnerable CPU– which gives rise to the notion of a second less vulnerable set of logic to perform instrumentation on the vulnerable CPU.”

Somehow this makes me think we need to teach CPUs Buddhism. An enlightened CPU would be able to detect the malware. 🙂

Virtual Machine September 30, 2016 1:46 AM

There are a lot of vulnerabilities with VMs that crop up. Maybe three different systems: one more flippant system, an isolated personal system, and an isolated testing and analysis system. Hell, chuck in a honey-pot as well for the fun of it, and a mobile system with a cheap removable wifi card and a hammer for going online. How much does that add up to now; what is the wallet damage?

And after all that… wait, the net connection is down.

Anon September 30, 2016 4:05 AM

@r: I know absolute security doesn’t exist – we are human after all.

Part of the problem is, we do not see what we’re not looking for. Malware might try to hide (with associated behaviors) but if we don’t know to look in the first place, it doesn’t matter.

@Clive: I recall a discussion in the past about separate monitoring hardware that checked the data traversing the bus to/from a CPU being monitored. The idea reminded me of the debug devices for PICs, whereby the PIC is put in a debug socket that is then plugged into the board. Perhaps such a device could be created for desktop CPUs to monitor them?

Wael September 30, 2016 5:19 AM

@Anon, @Clive Robinson,

I recall a discussion in the past about separate monitoring hardware that checked the data traversing the bus to/from a CPU

It’s gotta be one of the longest technical security discussions here, it lasted well over three years. Castles Vs. Prisons, or C-v-P. Such a lovely name 🙂

Clive Robinson September 30, 2016 9:01 AM

@ Sorin,

You should check out this article, for more recent anti-detection tricks

Although they are different instantiations, they are of the same class of detection technique.

That is, they are looking for “not normal” characteristics, and investigators can quickly change their test environment to replicate them.

I know of tricks where the investigators cannot change their test environment easily. I've mentioned some of them in the past; they are often used as sources of entropy in random number generators and involve hardware characteristics.

One characteristic, for instance, is how heat affects the timing of various hardware signals. The heat comes from the difference between a computer just sitting idle and one being used. Importantly, the type of use the computer is being put to changes the signals. The 24h heat signature of an administrative / clerical user is markedly different from that of a programmer, which likewise is different from a system used as an industrial control system head end. All of which are markedly different from the use of a malware researcher.

Thus a malware attack of the sort you really don't want –covert APT from a high-level attacker– can run tests over several days, unlocking its encrypted tests and payload little by little as it builds confidence in what it sees.

If you want an idea of really sneaky things you can check, have a think about how the various cuts of quartz crystals are affected by heat, and importantly how they change depending on the electrical loads adjacent to them…

It's not something you are going to see in cyber-crime malware for some time. But there are people who have done the research and put such techniques in covert code. As has been noted before, Stuxnet was a bit embarrassing in the way it got publicly dissected, and various people do not want that happening again.

JPA September 30, 2016 8:35 PM


Then having a baseline for the average value of a variable associated with CPU use, like the heat signature, might act as a “thermometer” to see if malware were running, since that might change the signature. Sort of like the CPU running a fever?

Clive Robinson October 1, 2016 2:57 AM

@ JPA,

Sort of like the CPU running a fever?

More like a “sweat from honest toil” as the malware is looking for user generated load.

Importantly, though, it's not just one signature, as there are a number of CPUs in your modern computer controlling the I/O. And the I/O itself can generate heat and power-supply variations as well; one example being the WiFi subsystems.

The problem for the malware writer is the level of work they have to do to get the signatures of user work. They have to do it without being observed, either directly or from the side effects of the observation process. Heat is ideal for this because it uses the thermal mass of various parts of the computer to integrate the work signature, so only very low-bandwidth sampling is needed.

So finding some other signal that naturally integrates user load, to keep the malware's sampling and thus its own load signature low, is what a covert (APT-type) malware writer is going to be looking for. Especially if it can be sampled in a non-obvious way, and thus cannot easily be “trip wired” or “instrumented” by the defenders.
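The integration idea can be made concrete with a toy model. A sketch with invented numbers, using an exponential moving average to stand in for thermal mass: because the integrator smooths the raw load, even a very low sampling rate separates an idle profile from a worked one.

```python
def integrate_load(samples, alpha=0.05):
    """Exponentially weighted moving average: a crude stand-in for the
    way thermal mass integrates a work signature, so only low-bandwidth
    sampling is needed to tell usage profiles apart."""
    level, trace = 0.0, []
    for s in samples:
        level = (1 - alpha) * level + alpha * s
        trace.append(level)
    return trace

# An idle machine vs. one that starts real work halfway through
idle = integrate_load([0.05] * 200)
busy = integrate_load([0.05] * 100 + [0.90] * 100)
```

The two integrated traces end up far apart even though sampling either raw load signal directly, fast enough to characterize it, would be much more conspicuous.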
