Protecting OSs from RootKits

Interesting research: “Countering Kernel Rootkits with Lightweight Hook Protection,” by Zhi Wang, Xuxian Jiang, Weidong Cui, and Peng Ning.

Abstract: Kernel rootkits have posed serious security threats due to their stealthy manner. To hide their presence and activities, many rootkits hijack control flows by modifying control data or hooks in the kernel space. A critical step towards eliminating rootkits is to protect such hooks from being hijacked. However, it remains a challenge because there exist a large number of widely-scattered kernel hooks and many of them could be dynamically allocated from kernel heap and co-located together with other kernel data. In addition, there is a lack of flexible commodity hardware support, leading to the so-called protection granularity gap: kernel hook protection requires byte-level granularity but commodity hardware only provides page-level protection.

To address the above challenges, in this paper, we present HookSafe, a hypervisor-based lightweight system that can protect thousands of kernel hooks in a guest OS from being hijacked. One key observation behind our approach is that a kernel hook, once initialized, may be frequently “read”-accessed, but rarely “write”-accessed. As such, we can relocate those kernel hooks to a dedicated page-aligned memory space and then regulate accesses to them with hardware-based page-level protection. We have developed a prototype of HookSafe and used it to protect more than 5,900 kernel hooks in a Linux guest. Our experiments with nine real-world rootkits show that HookSafe can effectively defeat their attempts to hijack kernel hooks. We also show that HookSafe achieves such a large-scale protection with a small overhead (e.g., around 6% slowdown in performance benchmarks).
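
The core trick is easy to illustrate outside a hypervisor. Here is a minimal user-space sketch in C of the idea (this is not HookSafe's actual mechanism, which mediates writes from below the guest OS): copy the hooks into their own page-aligned region, then make that region read-only so reads stay cheap while any later write faults. The names shadow_hooks and legit_handler are illustrative only.

```c
/*
 * Minimal user-space analogy of the hook-relocation idea, not HookSafe's
 * actual hypervisor mechanism. Hooks live in their own page-aligned
 * region that is made read-only after initialization, so reads remain
 * cheap while any later write (e.g. a rootkit redirecting the hook)
 * faults.
 */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

typedef void (*hook_fn)(void);

static void legit_handler(void) { puts("legitimate handler"); }

int main(void) {
    long pagesz = sysconf(_SC_PAGESIZE);

    /* Dedicated, page-aligned "shadow" area holding only hooks. */
    hook_fn *shadow_hooks = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (shadow_hooks == MAP_FAILED) { perror("mmap"); return 1; }

    shadow_hooks[0] = legit_handler;      /* one-time initialization */

    /* Hooks are read often but written rarely, so lock the whole page. */
    if (mprotect(shadow_hooks, pagesz, PROT_READ) != 0) {
        perror("mprotect");
        return 1;
    }

    shadow_hooks[0]();                    /* reads still work as before */
    /* shadow_hooks[0] = evil_handler;       a write here would now fault */
    return 0;
}
```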

The research will be presented at the 16th ACM Conference on Computer and Communications Security this week. Here’s an article on the research.

Posted on November 10, 2009 at 1:26 PM

Comments

B. Real November 10, 2009 1:49 PM

Personally, I don’t find a 6% performance hit to be “small overhead”.

For a suitably locked-down system, are there studies that show how much horsepower is taken up by the overhead of AV, managers, etc.?

MarkH November 10, 2009 2:05 PM

@Real:

Rational security decisions are questions of balance. How much of the world’s computing power is taken up by trojans, viruses, rootkits and other malware?

Clive Robinson November 10, 2009 2:25 PM

This is actually not a new idea, and it does have some problems.

Another way of doing it is via DMA.

Basically, you have a hardware hypervisor that uses “unused” CPU bus cycles to check memory in the kernel.

Any differences get flagged up and corrected, or the system halts.
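
For illustration, a rough user-space sketch in C of that watcher idea follows: hash a region that should never change, re-check it periodically, and flag (or halt) on any difference. A real version would run from a hypervisor or a DMA-capable device scanning kernel memory; the hash, region, and interval here are placeholders.

```c
/*
 * Rough sketch of the watcher idea: hash a region that should never
 * change, then re-check it periodically and flag (or halt) on any
 * difference. A real version would run from a hypervisor or a
 * DMA-capable device scanning kernel memory.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* 64-bit FNV-1a: small and fast, fine for illustration (not a MAC). */
static uint64_t fnv1a(const uint8_t *p, size_t len) {
    uint64_t h = 0xcbf29ce484222325ULL;
    while (len--) { h ^= *p++; h *= 0x100000001b3ULL; }
    return h;
}

void watch(const uint8_t *region, size_t len, unsigned interval_sec) {
    uint64_t golden = fnv1a(region, len);   /* taken at a known-good time */
    for (;;) {
        sleep(interval_sec);
        if (fnv1a(region, len) != golden) {
            fprintf(stderr, "integrity violation: watched region changed\n");
            abort();                        /* "correct or halt" -- here, halt */
        }
    }
}
```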

The downside of these approaches is “loadable kernel modules” and “device drivers”: there are occasions when these need to be changed, and getting the control mechanism right is the pain.

Personally I’d prefer hardware-segregated kernels with controlled communications, but unlike the above, that’s not going to happen on commodity hardware any time soon.

HJohn November 10, 2009 2:26 PM

Whenever a relative or friend asks me to look at their computer because it is lagging, I inevitably find that a couple dozen programs think they are so special they have to run at startup, and their task bar is a foot long. The question is not whether it is hurting performance; it is whether the performance hit is worth the benefits.

With anti-malware/AV/security/etc., the performance lag is a trade-off: how much at risk are you, what do you do, what are you protecting, and, less frequently asked, how skilled are you at avoiding problems you can’t prevent?

Clive Robinson November 10, 2009 2:59 PM

Oh and another thing that needs to be considered.

In most kernels, the “hooks” are either “jump tables” or “software interrupt tables”.

They are in known locations and usually redirect to known locations, or to locations that are trivially found; and jump tables are often called from locations that are known or can be found (via various stack tricks).

If you lock down the jump table, the rootkit developers can find the landing point easily and then replace the code there with another jump, etc. Likewise, launch points to a jump table can be altered to jump somewhere else.

Locking it all down is the method of the “old days”, when the “kernel code” was actually in ROM (embedded) not RAM (loaded) and only the tables were loaded into RAM (why do you think the i86 processors reset to the top of memory but the tables go at the bottom of memory? It’s a hangover from ROM OS usage).

To stop rootkits you need to move all the kernel tables out into separate pages (not just jump and interrupt tables) and have a secondary process oversee the tables for sanity.
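
For illustration, here is one simple “sanity” rule such an overseer could apply, sketched in C: every entry in a relocated jump table must land inside the known kernel text region, and anything pointing into the heap or an unexpected module is treated as hijacked. The function name and parameters are made up for the example; a real checker would take the bounds from the kernel’s own symbol/section data.

```c
/*
 * One simple sanity rule for a relocated jump table: every entry must
 * point inside the known kernel text (code) region. The function name,
 * parameters and bounds are made up for the example; a real checker
 * would take them from the kernel's symbol/section information.
 */
#include <stdint.h>
#include <stdio.h>

int table_is_sane(void (**table)(void), size_t entries,
                  uintptr_t text_start, uintptr_t text_end) {
    for (size_t i = 0; i < entries; i++) {
        uintptr_t target = (uintptr_t)table[i];   /* implementation-defined cast */
        if (target < text_start || target >= text_end) {
            fprintf(stderr, "entry %zu points outside kernel text\n", i);
            return 0;   /* reject: likely redirected by a rootkit */
        }
    }
    return 1;
}
```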

It will require some rework in all modern kernels, but it is going to be most easily done with lightweight kernels or kernels that are segregated into separate parts with controlled communications.

The code is less efficient in size and time, but not by a lot, and it has a number of other advantages as well (debugging is a lot easier, for one 8)

JR November 10, 2009 4:52 PM

@HJohn:

less frequently asked, how skilled are you at avoiding problems you can’t prevent?

Do I read this as in “get rid of problems you couldn’t prevent” or as in “prevent problems you couldn’t predict”?

–JR

Bob Miller November 10, 2009 8:03 PM

This is almost useless. Once a rootkit is in the kernel, there are zillions of ways for it to subvert control. “Hooks”, or function pointers, might be slightly more convenient for the rootkit writer, but there’s no way this makes a kernel harder to compromise or reduces the rootkit’s function.

HJohn November 10, 2009 8:36 PM

@JR: “Do I read this as in “get rid of problems you couldn’t prevent” or as in “prevent problems you couldn’t predict”?”


Good point, I didn’t word that well.

Let me try it again…

less frequently asked, how skilled are you at dealing with the risks/problems that you don’t use tools (anti-malware/AV/security/etc.) to prevent?

Emily LaTella November 11, 2009 7:40 AM

@Real: “how much horsepower is being taken up in the overhead of AV, managers, etc?”

Gee, I don’t know about the audio/video stuff, but in a typical company, the managers, especially middle management, seem to take up an awful lot of the horsepower, to the point where it’s usually much better without them. Senior managers to think up the general direction, and immediate managers to make sure the workers are doing the job, is all the management you really need.

What? You mean in a COMPUTER? Oh! That’s different! Never mind….

Masten November 11, 2009 8:04 AM

I have read studies claiming that you don’t notice performance degradation of less than 10%. So if that’s true, 6% should be OK.

qq November 11, 2009 8:53 AM

@ I have read studies claiming that you don’t notice performance degradation of less than 10%. So if that’s true, 6% should be OK.

On my corporate PC, I have an antivirus which uses up 9% of performance, an IDS/firewall taking up 7%, and two inventory scanners using 4% and 6% respectively. But since each is less than 10%, I’m not noticing them at all.

Not noticing… not noticing… [bashes his head on the table] not noticing! not noticing!…

Honest But Curious November 11, 2009 12:08 PM

I’d like to point back at the article about Chinese information warfare capabilities, which was discussed here a short while ago:
http://www.schneier.com/blog/archives/2009/10/report_on_chine.html

If these guys are Chinese (highly likely, judging by their names), and they return to China a few years from now (also highly likely), they will be conscripted into the “cyber warfare militia”. The same report notes that many Chinese researchers (some of whom are at military universities) publish many articles about rootkits and detection/counter-detection measures.

Nick P November 11, 2009 3:42 PM

This is a really nice idea, but I’m still beating the same drum: less shit in the kernel!!! There’s no excuse for megabytes of kernel code when modern microkernels like QNX and OKL4 do so much with only tens or hundreds of KB of kernel code, and they do it fast. The researchers are trying to solve a solved problem. The current [probably effective] strategy in high-assurance systems is a high-assurance micro- or separation kernel with a minimal runtime for trusted/critical apps and a virtualization layer for legacy code. Systems like INTEGRITY Padded Cell combine a secure kernel, Intel VT for unmodified guests, a POSIX layer, an IOMMU, and user-mode device drivers to greatly reduce risks. This currently beats entire categories of attacks with the right software setup.

I just don’t see how injecting more complexity into an already complex OS solves anything. The MILS and high-assurance virtualization approaches solve quite a few problems and are very modularized to increase evaluatability. I say we stick with what’s worked so far and build on it. We need more middleware, better hardware-level security, increased scrutiny of virtualization layers, and easier methods for secure integration of components. And I’d like all that for only $100-$300 per copy on eBay. 😉
