Faking Hardware Memory Access

Interesting:

[Joanna] Rutkowska will show how an attacker could prevent forensics investigators from getting a real image of the memory where the malware resides. “Even if they somehow find out that the system is compromised, they will be unable to get the real image of memory containing the malware, and consequently, they will be unable to analyze it,” says Rutkowska, senior security researcher for COSEINC.

Posted on March 1, 2007 at 1:33 PM

Comments

Nicholas Weaver March 1, 2007 2:09 PM

And in other news, water is wet…

From within a compromised system, you can’t make any assertions about a compromised system, even if you know it IS compromised. Rootkits are wonderful things, no?

Basically, it comes down to this: “I think, therefore I am” is a load of bull***t.

Nicholas Weaver March 1, 2007 2:12 PM

AHH, never mind. Read the FA. It’s defeating monitors placed on the memory bus as part of malware analysis.

Here’s my bet: she simply keeps the malcode small and in L2 cache.

Junior March 1, 2007 2:18 PM

This isn’t news; some friends and I have been experimenting with TLB desynch attacks for quite some time. Forcing reads of a page to occur at one physical address, and executions of the same page at another, isn’t that hard in a modern OS.
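
As a rough illustration of what the desynch buys you, here is a minimal user-space sketch of the decision such a page-fault handler makes. The names, addresses, and frame numbers are all made up; the real thing lives in ring 0, marks the hidden page not-present, and primes the ITLB or DTLB (via the page fault and INVLPG) with different translations.

```c
/*
 * Minimal user-space sketch of the split-TLB ("TLB desynch") idea.
 * All names, addresses, and frame numbers are hypothetical; a real
 * implementation runs in ring 0 and manipulates the page tables and TLBs.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define HIDDEN_PAGE 0x00401000u   /* virtual page being hidden (made up)      */
#define EXEC_FRAME  0x0001A000u   /* frame holding the code that really runs  */
#define READ_FRAME  0x0002B000u   /* benign decoy frame shown to memory reads */

/* Decide which physical frame a fault on the hidden page is served from:
 * instruction fetches get the real code, data accesses get the decoy.       */
static uint32_t resolve_fault(uint32_t vpage, bool instruction_fetch)
{
    if (vpage != HIDDEN_PAGE)
        return vpage;             /* not the hidden page: map it normally */
    return instruction_fetch ? EXEC_FRAME : READ_FRAME;
}

int main(void)
{
    /* The CPU executing the page vs. a forensic tool reading it. */
    printf("execute fault -> frame 0x%08X\n", resolve_fault(HIDDEN_PAGE, true));
    printf("read    fault -> frame 0x%08X\n", resolve_fault(HIDDEN_PAGE, false));
    return 0;
}
```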

Junior March 1, 2007 2:34 PM

Oh, and Nicholas, keeping something in L2 cache without keeping it in regular RAM is pretty much impossible on a multitasking computer. Also, the regular techniques found in rootkits (spoofing information by using hooks) won’t really help you against someone who isn’t using the hooked APIs. With a falsified entry in the TLB you can “trick” a regular MOV, since the page table itself is compromised.

Jim March 1, 2007 2:40 PM

At least with the needle in the haystack, you know what the needle looks like. Now you are looking for a proprietary needle and don’t know what it looks like. Good luck finding it. You might find it and not know what it is you found.

Junior March 1, 2007 4:45 PM

Well, the problem is that when you start looking for the needle, you’ll be automagically transferred to a haystack without a needle.

Morpheus March 1, 2007 6:25 PM

With fully virtualizable processors (as I understand recent Intel processors are), isn’t this moot? If the malware ever figures out how to get “on the bare metal”, the operating environment that the forensic tools get to run in will appear exactly how the malware wants it to appear. At this point, all you can do is break out the logic analyzer, because any operation that goes through the CPU must be assumed to have been compromised.

Morpheus March 1, 2007 6:33 PM

Ok, so I didn’t RTFA first. The article confirms my assertion about the operating environment.

Looks like the emerging workaround is to plug something into the PCI bus.

It wouldn’t matter if it was somehow plugged directly into the FSB, because even that is a programmable multiplexor.

Couldn’t you still attach to the memory slot itself, with no way of obfuscating the reads done there?

Junior March 2, 2007 1:25 AM

There are a lot of ways to get around TLB desynchs; unfortunately, most of them require heavily specialized gear.
I think the only feasible way for most malware researchers is to modify the OS loader to guarantee that a specialized module is loaded first, one that keeps tabs on the buffers and makes sure nothing mucks with them.
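
Something like the following toy sketch is the general shape of that idea; the buffer, hash, and names are invented for illustration, and a real module would be loaded before anything else and would checksum structures like the IDT, page tables, and driver dispatch tables.

```c
/*
 * Toy sketch of the "load a monitor first" idea: snapshot a checksum of a
 * critical buffer early, then re-check it later to spot tampering.
 * Everything here is invented for illustration.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Simple FNV-1a checksum; a real monitor would want a keyed or cryptographic hash. */
static uint32_t fnv1a(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t h = 2166136261u;
    while (len--) { h ^= *p++; h *= 16777619u; }
    return h;
}

static uint8_t critical_buffer[256];   /* stand-in for a structure worth guarding */

int main(void)
{
    memset(critical_buffer, 0xAA, sizeof critical_buffer);

    /* Baseline taken "at load time", before anything else gets to run. */
    uint32_t baseline = fnv1a(critical_buffer, sizeof critical_buffer);

    critical_buffer[42] = 0x00;        /* simulate something mucking with the buffer */

    if (fnv1a(critical_buffer, sizeof critical_buffer) != baseline)
        puts("tamper detected");
    else
        puts("buffer intact");
    return 0;
}
```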

greg March 2, 2007 3:51 AM

“There are a lot of ways to get around TLB desynchs; unfortunately, most of them require heavily specialized gear.”

Many of us have access to such gear. Never underestimate Eve. Hardware hacking is less talked about these days, but it’s done more than ever.

Even getting data out of flash, or out of powered-off RAM, with destructive methods is carried out by many.

Things like electron microscopes are readily available to a lot of people and are cheap enough that universities and companies don’t mind a little personal use. At my uni it was encouraged.

Even tamper-resistant hardware is weak against a lot of attacks, and PCs aren’t (yet) tamper resistant by design.

Long story short: if it doesn’t blow up when you open the case/package, you can work out what’s going on. The code, the keys… whatever.

Junior March 2, 2007 5:30 AM

“Many of us have access to such gear.”

I think “many” is a bit of an exaggeration, to be honest. The number of organizations with access to the kind of equipment needed is a drop in the ocean compared to the number who would need the gear but can’t get to it. Malware today can use stealth techniques that require extremely expensive and hard-to-find equipment to detect, and it doesn’t really take a lot of effort. Proof-of-concept code for TLB desynchs has been out for quite some time, and I’d be surprised if I didn’t come across malware that uses it within a year. The simplicity of the techniques makes it vital that research companies geared towards the consumer market have countermeasures that aren’t based on electron microscopes, custom-made memory controllers, and so on.

Dave March 2, 2007 6:11 AM

@Junior:

If I recall correctly, some architectures (such as ARM) allow certain cache lines to be locked in. Does anyone know x86 well enough to say whether it has any similar features?

Tobias March 2, 2007 7:35 AM

@Junior:

Doing forensics via FireWire seems to be pretty common these days, and that seems to be the main target of Rutkowska’s attack. There are open libraries for DMA access (the primary focus of the research was FireWire, but parsing memory snapshots is also possible) in development by David R. Piegdon; see http://david.piegdon.de/products.html – He gave a talk about it at the CCC Cologne, but unfortunately there are no slides (and no code) available yet…
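
For a flavour of what that DMA access looks like on Linux, here is a rough sketch using libraw1394 (not Piegdon’s library). The port, node number, and physical address are placeholders, and whether the read succeeds depends on the target’s OHCI controller exposing low physical memory over the bus.

```c
/*
 * Rough sketch of a FireWire physical-memory read on Linux with libraw1394.
 * Port, node number, and physical address are placeholders.
 * Build with: gcc fw_read.c -lraw1394
 */
#include <stdio.h>
#include <libraw1394/raw1394.h>

int main(void)
{
    raw1394handle_t h = raw1394_new_handle();
    if (!h) { perror("raw1394_new_handle"); return 1; }

    if (raw1394_set_port(h, 0) < 0) {      /* first FireWire adapter on this box */
        perror("raw1394_set_port");
        return 1;
    }

    nodeid_t   target = 0xffc0 | 1;        /* bus-local node 1: the suspect machine */
    nodeaddr_t phys   = 0x00100000;        /* physical address 1 MiB (placeholder)  */
    quadlet_t  buf[256];                   /* 1 KiB per request keeps payloads small */

    /* Asynchronous block read: on many controllers this is serviced by DMA
     * straight from the target's RAM, with its CPU and OS never involved.    */
    if (raw1394_read(h, target, phys, sizeof buf, buf) < 0) {
        perror("raw1394_read");
        return 1;
    }
    fwrite(buf, 1, sizeof buf, stdout);    /* dump the raw bytes */

    raw1394_destroy_handle(h);
    return 0;
}
```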

Chris S March 2, 2007 8:54 AM

I don’t think anyone has yet invented an attack that can’t be detected, or a defense that can’t be circumvented.

But, if successful, this would appear to be relatively low-cost to implement, with a high cost to detect. Keep in mind that most forensics personnel don’t walk around with expensive hardware gear, so although someone can detect this, it’s going to be a rare person compared to the average.

Think back to how widespread the Sony rootkit was before it was detected. Most people were running malware detection on top of the OS, unaware that the OS was lying to them.

In addition, as I understand it, if an attack used this and resided solely in memory, it would become much harder to detect. Much of the low-level hardware detection I’ve heard of involves rerunning things after installing detectors on the hardware. Getting hardware detectors into a standard system without shutting it down would be very expensive.

The following data recovery, after the Concorde crash, would seem to be on the level of this approach…

http://catless.ncl.ac.uk/Risks/21.05.html#subj11

… where data was recovered from a powered and running circuit by carefully supplying alternate power and moving the circuit to a new installation.

Spider March 2, 2007 9:17 AM

Help me out, guys: if you really suspect a rootkit, why would you boot the infected operating system to try to find it? Why not move it to a virtualized environment and just examine what it’s doing there? Why the need for electron microscopes or expensive gear to tap into the PCI bus?

At least put the disk into a known-good system and run a scan on the suspect disk as a slave.

Junior March 2, 2007 1:47 PM

Dave: I can’t say I’m 100% sure (something is itching in the darker, rarely used regions of my brain), but no, I’m fairly sure you can’t do that on x86. The reason for the itch is most likely me having read something connected to it, but I can’t remember where, or exactly what it was about 🙂

Tobias: Yes, and that’s exactly the problem. Analysis over FireWire, or any memory-based method, simply won’t work if the TLB holds different pages for read and execute.

Spider: The main problem would be that the malware would detect the virtualized environment and just stay under cover.
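
A classic example of that kind of check is Rutkowska’s own “red pill” from a few years back, sketched roughly below for 32-bit x86 with GCC. The 0xd0000000 threshold is the heuristic the original used against the software VMMs of the day; it is not a reliable test against hardware-assisted hypervisors.

```c
/*
 * Sketch of the "red pill" check: SIDT stores the IDTR, which software VMMs
 * of that era relocated to a high address.  32-bit x86, GCC inline asm;
 * heuristic only.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    struct { uint16_t limit; uint32_t base; } __attribute__((packed)) idtr;

    /* SIDT is not privileged, so even ring-3 code inside the guest can run it. */
    __asm__ __volatile__("sidt %0" : "=m"(idtr));

    printf("IDT base = 0x%08x\n", idtr.base);
    puts(idtr.base > 0xd0000000u ? "looks like a software VMM"
                                 : "probably bare metal");
    return 0;
}
```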

Chris March 2, 2007 2:24 PM

When malware reaches kernel-level privileges, it is already over. Everything should be assumed compromised and/or subverted in some way. This is interesting research, but it should be considered anti-forensics, not “security”: the compromise has already happened, your data is already gone, etc.

David Koontz March 4, 2007 3:54 PM

http://developer.amd.com/assets/IOMMU-ben-yehuda.pdf

Quote:
“IOMMU isolation solves a very different problem than IOMMU translation. Isolation restricts the access of an adapter to the specific area of memory that the IOMMU allows. Without isolation, an adapter controlled by an untrusted entity (such as a virtual machine when running with a hypervisor, or a non-root userlevel driver) could compromise the security or availability of the system by corrupting memory.”

In other words, the IOMMU can be used to prevent something in the I/O subsystem (PCI, etc.) from seeing all of memory. Apparently some new Ultra SPARCs have IOMMUs, too?
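
A toy model of what that isolation buys you; everything here is invented for illustration, and real IOMMUs keep per-device, page-granular translation tables rather than a flat list.

```c
/*
 * Toy model of IOMMU isolation: a device can only touch bus addresses the
 * IOMMU has a mapping for.  All values are made up for illustration.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

struct iommu_mapping { uint64_t iova, phys, len; };

/* Windows of memory the OS has deliberately exposed to this adapter. */
static const struct iommu_mapping map[] = {
    { 0x10000, 0x7f200000, 0x4000 },   /* a 16 KiB DMA buffer */
};

/* Translate a device-issued address, or refuse if nothing is mapped there. */
static bool iommu_translate(uint64_t iova, uint64_t *phys_out)
{
    for (size_t i = 0; i < sizeof map / sizeof map[0]; i++) {
        if (iova >= map[i].iova && iova < map[i].iova + map[i].len) {
            *phys_out = map[i].phys + (iova - map[i].iova);
            return true;
        }
    }
    return false;   /* fault: the adapter tried to roam outside its window */
}

int main(void)
{
    uint64_t phys;
    printf("DMA to 0x10100: %s\n",
           iommu_translate(0x10100, &phys) ? "allowed" : "blocked");
    printf("DMA to kernel memory at 0x1000: %s\n",
           iommu_translate(0x1000, &phys) ? "allowed" : "blocked");
    return 0;
}
```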
