Schneier on Security
A blog covering security and security technology.
August 11, 2008
Bypassing Microsoft Vista's Memory Protection
This is huge:
Two security researchers have developed a new technique that essentially bypasses all of the memory protection safeguards in the Windows Vista operating system, an advance that many in the security community say will have far-reaching implications not only for Microsoft, but also on how the entire technology industry thinks about attacks.
In a presentation at the Black Hat briefings, Mark Dowd of IBM Internet Security Systems (ISS) and Alexander Sotirov of VMware Inc. will discuss the new methods they've found to get around Vista protections such as Address Space Layout Randomization (ASLR), Data Execution Prevention (DEP) and others by using Java, ActiveX controls and .NET objects to load arbitrary content into Web browsers.
By taking advantage of the way that browsers, specifically Internet Explorer, handle active scripting and .NET objects, the pair have been able to load essentially whatever content they want into a location of their choice on a user's machine.
EDITED TO ADD (8/11): Here's commentary that says this isn't such a big deal after all. I'm not convinced; I think this will turn out to be a bigger problem than that.
Posted on August 11, 2008 at 4:26 PM
57 Comments
Whoa... an exploitable IE bug involving ActiveX controls?
Yet another reason to use Firefox, if not leave Windows altogether!
"Yet another reason to use Firefox, if not leave Windows altogether!"
The article suggests that this won't help for long:
"Dai Zovi stressed that the techniques Dowd and Sotirov use do not rely on specific vulnerabilities. As a result, he said, there may soon be similar techniques applied to other platforms or environments."
I'll give it a serious, but not huge, rating. The memory protection is intra-process, not inter-process. They show that it is still possible to hijack the browser process, but nowhere do they show that they can break out of the process sandbox and thus violate OS security.
It also shows that browser plugins need to be updated (for example, to support DEP or ASLR).
@tcliu: Users are already capable of enabling DEP on their systems (if you have a CPU with NX/XD). Aren't plugins supposed to execute in the browser's context?
Seriously - everyone here seems to have shrugged this off.
The implications are far reaching. This is not a specific vulnerability - but a way of defeating the mechanisms put in place by Vista (stolen from the PaX team) altogether.
I was working on a few ways to do this - but I bow down to Mark Dowd again. His leetness again far surpasses expectation.
This DOES violate OS security. Vista has DEP, SSP and ASLR built in. Microsoft added these to force exploit developers to work really hard to make a reliable exploit. In addition, in combination these methods often completely mitigate what could have been a very trivial exploit.
This isn't about privilege escalation. It's about showing how their implementation of the PaX team's genius is flawed or perhaps the PaX team's genius itself is flawed.
Plus, taking over a browser is really juicy - it has the ability to poke outside of firewalls.
@the people who mentioned it
It isn't about moving to Firefox or Linux either - supposedly the research points to ways around ASLR and SSP and DEP on all OSes...
Disclaimer: I have not read the paper, taking faith in Bruce.
Security researchers are too often imitating chicken little.
Given that the title of the paper is "How to Impress Girls with Browser Memory Protection Bypasses", I predict that every one at Black Hat 08 went home alone.
@Ross: As the Ars Technica guy puts it - DEP, ASLR, SSP etc. are not absolute barriers. They make it harder to develop exploits, but once defeated, they are defeated. As exploit code can be reused, they were only ever a temporary measure until reliable delivery mechanisms were developed.
Ok, so maybe it is still "OS Security", I'll give you that much - I've always thought of the OS security as being inter-process, and not intra-process.
I recommend reading the link mentioned by "Anonymous" above. The only thing which is "huge" here is once again the ego of some self-proclaimed security experts who have inflated the nature of their findings (or maybe the media is to blame, but they apparently did not correct it).
Link to paper not working. Any mirrors?
It's amazing to me how people are scoffing at this. People who didn't read, didn't understand, or didn't think about the implications of the paper aren't hesitating for a second to mock the experts who reviewed it.
A lot of people just became a lot less secure. Calling things "backup" or "temporary" doesn't change that. Absolving Microsoft of any blame doesn't change it either.
Bruce, why did you change the author's title from "Bypassing Browser Memory Protections" which is accurate, to the more sensational "Bypassing Microsoft Vista's Memory Protection".
The title you chose implies that the operating system's fundamental memory access protection was broken (i.e. processes running as a non-administrator user can write to memory in a process with higher privileges.). This is not the case at all.
Their paper describes how to defeat some of the "defense in depth" measures that MSFT put in place to make an attacker's life more difficult **once they have already made it over the real wall** and are inside the target process.
Now I have read the paper and it has done nothing but encourage my former post.
I definitely agree with you that our modern protection schemes are not an end-all. You responded "it makes exploitation harder". If you look at my post - I also pointed this out as well. Blacksun wouldn't be much of a game if these protection schemes were an end-all. :-D
I was trying to say that this was a big deal and not just an overinflated ego because several methods of breaking our security workarounds were released. I'm not saying the sky is falling - we have about 30 years of experience now dealing with papers that show faults in our binary safeguards.
It is a large deal because now exploits that may not have been reasonable previously are reliably exploitable now (and until these flaws are fixed). It is also a large deal because the ideas (I really like the idea of stack spraying) are applicable in different OSes and situations.
I think Mark Dowd is trying to tell us that for all the security sandboxed opcodes promise us - they lend themselves to ways of breaking other assumptions we rely on, here the assumption that we can provide a client-side environment where ASLR, DEP and other memory protection mechanisms are applied in all regions of memory.
@Brian: "A lot of people just became a lot less secure."
Yes, but only if you depended on the listed measures to protect you forever.
The problem is that we're still being hit by buffer overflows. Until you fix that, you will have these problems and nothing you can do will solve it.
The solution is managed code and use of existing hardware-implemented memory protection.
I scoff at it because what is presented as the end of the world isn't all that new - until you fix the basic problem this is an arms race, and thinking that it isn't is self deception.
"Absolving Microsoft of any blame doesn't change it either."
Since there is nothing reasonable MS can do to stop buffer overflows in programs they do not have the source code nor build chain for, I let them go on this one. Feel free to bash them as much as you want, but I'll opt out.
I am one of the authors of the paper referenced in the post above. First of all, I'd like to apologize for the sensationalism of the press coverage. Most of the articles about our work are completely inaccurate and full of ridiculous statements. Mark and I had nothing to do with these articles and were not contacted by their authors.
Please read our slides and paper (available from http://taossa.com/) before making any judgements about their content.
What we've done is show that the exploitation prevention mechanisms implemented in Windows Vista (including DEP and ASLR) are ineffective at preventing the exploitation of browser memory corruption vulnerabilities, due to the following factors:
1) the amount of contol an attacker has over the state of the browser process
2) the plugin architecture that allows third party plugins (Java, Flash, Acrobat) which often weaken these protections
3) the architecture of the browsers which run all code in the same process and have no isolation between different components
Our research is focused only on browsers. The protection mechanisms in Vista are still effective at preventing the exploitation of vulnerabilities in server processes, which is why I believe that Vista is still more secure than any previous version of Windows.
@Ross: "security sandboxed opcodes"
I don't understand this one - do you mean running unmanaged code in a sandbox, or are you talking about managed code?
@Durable Alloy: The issue is (a) whether the plugin can execute in a DEP environment - some just can't - and (b) whether the plugin plays by the rules.
For example, the Java plugin marks its whole heap as being read/write/execute, thus providing attackers with a huge area of memory where anything written can also be executed.
"full of ridiculous statements"
So I won't be able to impress girls with this? Damn.
It just confirms what we already knew was happening.
We know Browsers are exploited.
We know Java(script), Flash, etc. contribute to that.
I hope the Hype forces some people to finally end their sloppy ways and create secure Browsers and since we can not kill them, plugins.
I have not read the mentioned paper, because Vista is crap, along with ANY other OS with lots of features. I only run *BSD, and even still, lots of little things can go wrong. GRR.
Regarding the reputation of the Taossa.com website: well, great website; Dowd et al. already are a name in the industry. If any 'sensationalism', well, perhaps it's only better to wake up users from their slumber.
Microsoft, why bother anymore? Same with its cousin, good cop Apple.
Blackhat, too much publicity, where is a conference where the real security has a reason to do X and get Y, without being compromised.
I'd like to read about *BSD getting porked on the best configs and hardware. How about 'bumping' an ISP-level qmail, djbdns, etc. system - something relevant, not junk. You know these guys have it in them.
Insecurity reporting, even if clever exploits, is still not Security reporting. You hope and expect now that a TLA [Three Letter Agency] does not run any M$ for important data.
Real security is being in control of ...
"I think Mark Dowd is trying to tell us that for all the security sandboxed opcodes promise us - they lend themselves to..."
I think Mark Dowd is trying to tell us that for all the security _that_ sandboxed opcodes promise us - they lend themselves to...
By sandboxed opcodes I am referring to the JVM, Flash VM and other languages that utilize sandboxing like .NET.
Thank you for the paper. It was a little heavy on the background and history, but I understand why it was there. Thank you for showing us where some of our weaknesses lie so we can fix them.
My one complaint is that I read the paper and wrote some code, but the ladies haven't started flooding in yet.
This is still for a 32 bit system. I bet a 64 bit system stands up a lot better.
The link in the addition looks like it belongs to a Friday blog post.
I have no interest in "bashing" Microsoft. My point is that you can't turn this from a big story into a small story by parceling out blame in a certain way.
I'll say it again: a lot of people just became a lot less secure.
That's a big story, no buts about it. The fact that something had to happen eventually doesn't automatically make it unimportant or "not news" when it actually does happen.
Don't you think memory management issues are important to computer security in general? Any vulnerability not directly about BSD has no relevance?
Um, either my browser was hijacked, or the link contained at the end ("EDITED TO ADD (8/11): Here's commentary that says this isn't such a big deal after all. I'm not convinced; I think this will turn out to be a bigger problem than that.") is a link to National Geographic.
People should read the presentation. It shows some interesting information about the limitations of current techniques for protecting a process address space from destructive code loaded into the address space (such as loading a dynamic library or a "plugin".) As browsers are the most common program to both download and execute such code from random locations, they are most vulnerable to such attacks.
Most existing techniques used to protect address spaces are basically "security through obscurity" - there are limitations in what you can do before breaking the programming model.
There is no "fix" with current x86 CPU designs and programming models. The difficulty is essentially the same as that presented with DRM, if you can execute it, you can exploit it.
I think the link on the "Edited to Add" section is a wrong paste :-)
While I agree with your post as a whole - I disagree with your statement that ASLR and DEP (etc.) are "security through obscurity". That phrase means that you keep the design a secret in hopes that a flawed or vulnerable or weak system never gets uncovered. I would say it is "security through probability": these methods are very well documented and not secretive at all, and they work on the premise that, probabilistically, without enough information an attack will fail (guessing an address, guessing a stack cookie).
@Prashanth No, actually you just got Squid Roll'd
Hmmm, as described here (I can't get at the paper at the moment),
This looks like it was a predictable (with hindsight ;) issue, and interestingly one to which there is no (effective) solution within the current OS architecture concept...
It would appear that the Vista mechanisms are not at fault and were (probably) doing their job correctly at the level they were designed to work at (i.e. at the OS and inter-process levels). This is because the bulk of the exploit is effectively running within the application, at the application level, in which the OS has no reason to involve itself.
The real issue is therefore that any sufficiently complex application effectively becomes an OS in its own right at the underlying OS's application process level.
And when thought of that way, the application program becomes vulnerable to all the existing classes of security vulnerabilities OSes have previously been vulnerable to.
As long as the application is allowed access to resources external to it (i.e. the network etc.) by the underlying OS, then the application can be used to exploit those resources.
This means that even if you fix the application's vulnerabilities (as the underlying OSes have been fixed), as long as the application behaves as an OS it becomes vulnerable to what are effectively its applications (plugins etc.), and so on up the line.
The two obvious solutions (stop OS-like behaviour and stop access to external resources) are not likely to happen, as this effectively stops the required functionality.
The solution would appear to be to accept that it is going to happen, monitor for aberrant behaviour, and then, once it is detected, respond appropriately - effectively what an Intrusion Detection / Prevention System is supposed to do at the network level.
The follow-on thought from this is that the only "efficient" way to implement such a system is by having an ID/PS running at various levels, capable of communicating an issue upwards to the ID/PS level immediately below the offending code, and it is that level that terminates the code. Which in and of itself gives rise to two further issues of "recognition" and "trust".
Oh, and there will of course be all the side/covert channel issues to deal with as well, which current ID/PSes are incapable of recognising.
Yes, I think Bruce is correct: this is significant, and more importantly it is going to open a whole new field of endeavour for "hats" to play in. "Research grants anyone?" ;)
Uh...the link in your edit points to a page of squid pictures...
I have not yet been able to digest the paper. But isn't this one of the problems OLPC's Bitfrost is supposed to contain?
The basic tenet of Bitfrost (as I understood it) was to contain processes-turned-evil by absolute access control. E.g., even a completely taken-over browser would be unable to get access to HW drivers or the file system.
And this is why OpenBSD doesn't allow disabling execute protection or address space randomization. Because it was obvious from the beginning that if something like that is optional, people will disable it instead of fixing their buggy software.
Everyone else adds tweaks and buttons to disable features like this and everyone else will get bitten.
Come on, April 1st was 4 months ago.
I'm going to have to come down on the "not a big deal" side, but for a different reason.
What we're looking at here is an analysis which indicates all the counter-exploit features in Vista can be overcome, leaving it no better off than a system without these features. This means that Vista has failed to solve the existing problem. It doesn't mean we have a new problem.
But let's back up a minute here. These aren't defences against security holes, they're defences against specific classes of exploit techniques. Why did anybody ever think this was going to work? Attackers will just invent and use different techniques to exploit the security holes that they find. It's the patch treadmill all over again, and we know that doesn't work. I don't buy the "defence in depth" spiel - it is supposed to mean multiple, independently-effective security systems that must all be bypassed, not a long series of small speed bumps to irritate attackers. This is like something the TSA would do.
Worse still is people's tunnel vision on memory handling flaws in C code. One of the articles linked earlier actually said this:
"Rather, their purpose is to make exploitation more difficult. Microsoft has a solution for those wanting to make it impossible—use .NET."
The disturbing thing about this comment is that the author apparently believes it. Memory handling flaws get all the big press, but there's a whole lot of other security holes out there. These days I see far more SQL injections and XSS attacks than buffer overruns, not to mention things that don't fit into any particular class, like the recent SSH weak key issue, and I would expect this trend to continue - it's not that the software is getting more secure, it's that we're getting better at finding more kinds of security hole.
Even if you could fix every buffer overrun in the world overnight, that is not going to save you. Everybody keeps looking for a quick fix, and there isn't one.
Can't we leave ActiveX out of this? Surely most people disable ActiveX when browsing sites outside their firewall nowadays?
This vulnerability is newsworthy because it can be exploited even when ActiveX is disabled. ActiveX vulnerabilities are not newsworthy.
@Ross: That managed environments could be used to launch an attack from other, non-managed parts is not, I think, something that should be considered a flaw of managed environments. Managed code only promises to protect against the code being managed - not against unmanaged code.
In the case here, the Java VM marks all pages as read/write/execute. It shouldn't, so this is an implementation issue. Serious, but not fatal. The big problem is that we have unmanaged code elsewhere with a buffer overrun.
I agree with the tunnel vision complaint.
The real problem is that buffer overruns are so serious.
"The disturbing thing about this comment is that the author apparently believes it. Memory handling flaws get all the big press, but there's a whole lot of other security holes out there. These days I see far more SQL injections and XSS attacks than buffer overruns, not to mention things that don't fit into any particular class, like the recent SSH weak key issue, and I would expect this trend to continue - it's not that the software is getting more secure, it's that we're getting better at finding more kinds of security hole."
I neither stated nor implied that all security flaws were buffer overflows, nor that these environments guarded against other security flaws, and I somewhat resent the implication that I did.
I would have thought that the repeated specification of _buffer overflows_ (and not more vague terminology such as "security flaws") might have been sufficient to indicate to the reader that the subject of discussion was buffer overflows.
I suspect that the reasons that buffer overflows get so much attention are their long history, their regular use in self-propagating worms, and the fact that the tools exist to systematically eliminate them but are not used.
So why is it that we still have "browsers which run all code in the same process and have no isolation between different components"? Turn it around--could we write browsers which run at a lower permission level than general user processes, and which isolate each application session in a single security domain?
@Randolph: Yes, we could. For example, I run this browser session as a less-privileged user. This has saved me from one XSS exploit so far.
The problems are:
1. I don't want a hijacked browser to have access to my personal documents. This means I can't easily upload images etc. to, for example, Facebook. I first have to copy the image files to a directory where the browser has access.
2. I can't download a file to any directory. I have to put it in a special download directory that the browser can write to and then move it out of there.
3. The browser startup takes more time, as I have to open the less-privileged user's profile before creating the browser process.
4. I can't view any html files in my home directory with the less-privileged browser, since I don't allow it read access to my home directory.
Inconveniences and not impossibilities, yes, but I don't think a regular user would even begin to understand how to use my setup.
But what about isolating plugins in separate processes? Well, moving data between processes is difficult. You need to use shared memory for that, and in shared memory, you can't use regular pointers as there is no guarantee what process-specific address the shared memory area will be mapped into. This makes it harder and forces you to rely on serialization of objects to transfer them to the other processes. The L4 (I think) kernel did a nice job with this (good), but that particular method didn't support multithreading (bad).
@a better way:
No, the real problem is that having your computer exploited is serious.
The real problem is that we suck at handling identity theft.
The real problem is that most banks' security is poor, with authentication schemes of the type "what was your mother's maiden name".
The real problem is that we bet everything on not getting owned, and have no protection at all when we do. This is like betting everything on going through life without getting a cold.
Just as terrorism requires intelligence and emergency response, exploit defense requires both protection and fast recovery. We're focusing on the former and forgetting the latter.
What I mean is that Mark Dowd did previous research on bypassing the restrictions of the Flash VM byte parser. He was capable of adding opcodes to the VM stack and wiping out security checks. He exploited the fact that this program manages its own runtime stack (like the JVM and .NET VM do), so he could run code without crashing or hanging the program. Yes - the danger here is arbitrary code running - but arbitrary code running without telltale signs is in fact worse.
Now Dowd released a paper with another researcher showing that VMs, or "managed environments" as you put it, break one and sometimes several of the assumptions used to keep an ASLR/DEP environment harder to tackle. The developers of these products didn't just compile them wrong - most of them need to execute code from writable pages. So the fix isn't that the developers need to recompile new binaries and throw them around.
These things point in the general direction of: languages that have a managed runtime stack run fairly fast and may run on more than one architecture, and they can load in 3rd-party software and guarantee, to a large degree, the safety of that 3rd-party software.
For these advantages, Dowd and the other researcher show a tradeoff: in these environments, which are needed for the features they provide, current implementations of binary protection schemes have been significantly defeated. When launching an exploit from a browser, these environments can be used to make exploitation reliable.
@Ross: "most of them need to execute code from writable pages"
In general, at some point you do need to run unmanaged "code". If not the VM itself, then the hardware itself. A bug there will be exploitable. But I still think managed environments are the way to go. At least we will have one bug in one program (the VM) instead of multiple bugs in multiple programs.
Dowd's Flash exploit wasn't based on running in a VM per se, but based on a bug in the VM where a variable was first tested as if it were signed, but then used as unsigned. Furthermore, the use of the variable in a memory allocation was then done without checking return values. In short, Dowd found a classic buffer overflow - just in a VM. All credit to Dowd for finding the bug and proving it exploitable, but I suspect that fixing that bug was a five minute job, if even that, whereas writing the exploit was described as "superhuman". (The seriousness of the bug he found was that everyone has Flash installed - but I suspect that the best course of action for that bug would have been to make it public even without exploit code, as it was so easy to fix that the question of "how bad is this" would never even have to be raised.)
As for needing to execute from writable pages, I suppose this has to do with JIT? Then why can't the VM set page permissions to write but not execute when creating the native code and set execute but not write when done?
I agree completely with your analysis of Dowd's Flash exploit. It was as simple as adding "unsigned" to the declaration of that int.
And of course we need to run unmanaged code somewhere - that's why I think it is silly people call managed code an end-all. Plus, even then there are design bugs and other problems. (Some people have declared to me "Sun will put the JVM on a processor and it will revolutionize the world". I think that's nonsensical.)
As for changing the page permissions as needed, Java supports Reflection and Flash allows you to load in more SWF objects at runtime. I do not know of any examples for .NET, but I know a lot of these environments are capable of compiling more bytecode to native code when requested later. If a page is writable and then executable - it is still a problem. An exploit author can make a large bytecode object to be loaded at runtime to give himself a large window to write his payload into. Then his attack waits until the compilation is done and he runs his exploit, pointing to these parts of memory. That's not a solution, that's a workaround.
We agree on a lot of the premises, and that's a really good point on the seriousness of the exploit being the widespread usage of Flash. But I think my point still stands that being able to run your payload without interrupting normal execution is still quite serious and that this would have been an insurmountable task - or close to it - had the VM runtime stack not been there to do his bidding.
I am not saying "managed code" is broken or that "managed code" is flawed. In fact, I pointed out that these environments are needed for managed code, and that the companies can't just simply recompile with new flags set; it can't really be helped. This is Microsoft's job to fix, and perhaps, if the attacks can be extended to other platforms (which I suspect they might), other vendors' as well.
There are some very serious benefits to using technology like Flash. However, Mark Dowd has shown us that there is still a tradeoff we are paying for the use of these environments in a primarily (on most desktops) attacker-controlled environment (the well-equipped internet browser).
Any Maxthon users out there? Maxthon (which uses the IE rendering engine) allows you to disable scripts, ActiveX, or whatever by default (and enable them for a given tab). I presume that that would offer protection from these sorts of attacks. Can anyone out there confirm/refute this?
(note: I realize that IE engine vulnerabilities would still be problematic)
Probably not, unless you have Flash, .NET and Java disabled by default and then enable them for certain tabs you can trust. I mean - the problem wasn't an IE rendering problem - it is a problem with incorporating these feature-rich binaries in the browser. The researchers use these libraries to get reliable exploitation of any other vulnerability; here they use a flaw in the way cursors are handled. The problem isn't with IE, really; it is a conflict between the features and environments Flash, Java and .NET rely on to work the way we want and the assumptions that get made about ASLR and DEP.
[Also - you should check to see how Maxthon is compiled. Perhaps it is marked ASLR-incompatible or DEP-incompatible, in which case an attacker wouldn't need these techniques to hit you hard.]
Thanks for the reply. I set DEP to opt-out mode, so I get DEP coverage, though (unfortunately) DEP is not provided without that step. Maxthon, unfortunately, doesn't support ASLR either.
I'm just wondering if, for eg, the Flash filtering really works. Or, does the Flash plugin execute, only to exit immediately?
Hopefully, it is the former and not the latter. I haven't researched, so I'm throwing it out to the community for opinions.
@Ross: Ok, right. I see how you could bypass the write/execute part.
Since you need an initial exploit to get into the executable code, I guess we're right back to the ol' buffer overflow (or illegal memory write in the case of the Flash exploit, but that's the buffer overflow's sibling) then.
It seems to be the only way to turn data into code. (Much like SQL injection turns what should be parameter values into SQL code.)
"Sun will put the JVM on a processor and it will revolutionize the world"
I read about that way back in the last century. The basic problem was that it required Sun to not only develop the VMs, but also the chips to run them on, which had to be a whole range from server to embedded. Turns out that it was much more economical to let Intel and AMD (and the others) do the chip design and then just have a (semi-)portable C++ VM that will JIT the Java bytecode to native - which resulted in pretty much the exact same thing as having a HW Java VM.
Plus, it made upgrades - not to mention bugfixes - a whole heap easier.
"Memory handling flaws get all the big press, but there's a whole lot of other security holes out there." - I have to agree to that. Buffer overflows had never cost as much as users opening every email attachment they receive or running any executable they find without checking for viruses first. You don't rely on DEP and ASLR to cover up your poor memory handling techniques - you just fix it.
I completely agree with you.
Although it isn't even about checking for viruses first. Virus signatures are generally very easy to evade - and hey if you make something from scratch it isn't going to be in any AV databases.
There's a short follow-up interview with one of the authors of the paper over on ZDNet, wherein he says that he "is horrified by the lack of understanding displayed by the tech press when they covered the paper Mark and I presented at BlackHat" and goes on to clarify the severity of the issue:
My main complaint with the paper is that it's not bypassing DEP, even though that claim is made. DEP isn't meant to do anything about pages that are writable and executable. If you have that situation, it's not the exploit that is clever; it's the developers who removed the safety straps who are stupid.
@yob: As Ross explained, even with safety straps, you are vulnerable.
1. I run a Java VM in the browser.
2. A malicious page causes the VM to load a certain class. The VM does a JIT of the code and places it in executable but not writable memory.
3. The same page now does a buffer overflow (in writable memory) that causes the instruction pointer to jump into the JITted code from step 2.
Granted, this takes one more exploit, and the DEP bypass done in the paper is a bit of a sucker punch, but DEP isn't 100%, even when used correctly.
My point isn't that DEP solves all the world's ills. It is that they didn't circumvent DEP. Claiming to have broken DEP in this case is like claiming to have defeated the body armor of Bob by shooting Alice.
Mark Dowd at IBM... hmmmm...
When I worked for IBM I saw a _great_ job title back in 1998-1999:
This was a lot funnier a title during the Clinton Administration... and would look cool on a business card.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.