Designing Processors to Support Hacking

This won the best-paper award at the First USENIX Workshop on Large-Scale Exploits and Emergent Threats: “Designing and implementing malicious hardware,” by Samuel T. King, Joseph Tucek, Anthony Cozzie, Chris Grier, Weihang Jiang, and Yuanyuan Zhou.

Hidden malicious circuits provide an attacker with a stealthy attack vector. As they occupy a layer below the entire software stack, malicious circuits can bypass traditional defensive techniques. Yet current work on trojan circuits considers only simple attacks against the hardware itself, and straightforward defenses. More complex designs that attack the software are unexplored, as are the countermeasures an attacker may take to bypass proposed defenses.

We present the design and implementation of Illinois Malicious Processors (IMPs). There is a substantial design space in malicious circuitry; we show that an attacker, rather than designing one specific attack, can instead design hardware to support attacks. Such flexible hardware allows powerful, general purpose attacks, while remaining surprisingly low in the amount of additional hardware. We show two such hardware designs, and implement them in a real system. Further, we show three powerful attacks using this hardware, including a login backdoor that gives an attacker complete and high-level access to the machine. This login attack requires only 1341 additional gates: gates that can be used for other attacks as well. Malicious processors are more practical, more flexible, and harder to detect than an initial analysis would suggest.

Theoretical? Sure. But combine this with stories of counterfeit computer hardware from China, and you’ve got yourself a potentially serious problem.

Posted on April 24, 2008 at 1:52 PM • 32 Comments


Mike Laird April 24, 2008 2:39 PM

For anything other than smashing atomic particles together, academics are usually the last to know about a new thing. And they write papers on topics where they sense money for research is forthcoming. So on “the last to know” point, who has found malicious hardware installed on their servers, and where? On the “money for research” point, who is paying for this research (I can guess), but more importantly – why – and what balance of defense vs. offense?

I have no facts, but I have a strong sense that malicious hardware has already been found in surprising places, and researchers are trying to catch up to the evolving practitioners.

jeffd April 24, 2008 2:54 PM

@Mike Laird –

We probably would have heard about attacks like this against most targets outside the government. Inside the government, I think it would be more likely to be dealt with internally through the CIA/NSA rather than take it to outside researchers.

Likewise, hardware hacking was once popular and, as one European boy with a TV remote found, is still effective. I’ve suspected that hardware hacking could be taken a step further, and if I’ve thought about it, it’s not a big leap to think that smarter/more experienced people would research it.

alan April 24, 2008 3:02 PM

Who needs custom chips when you can just modify off-the-shelf hardware to do your evil for you?

I have seen a mouse that, when plugged in, will root a Windows box. It just has a concealed USB drive with an autorun script that runs the rootkit.

It is not hard to exploit the trust some people have in their hardware.
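The trick alan describes needed nothing exotic: pre-Windows 7 autorun would launch whatever the device’s autorun.inf named. A hypothetical sketch of what might sit on the concealed mass-storage partition (filenames invented for illustration):

```ini
; Hypothetical autorun.inf on the mouse's hidden USB storage.
; "driver.exe" stands in for the rootkit installer.
[autorun]
open=driver.exe
action=Install mouse driver software
```

The `action` line is what makes it social engineering: the user sees a plausible “install driver” prompt, if they see anything at all.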

max April 24, 2008 3:08 PM

What’s also interesting is the alleged relative simplicity of the changes required. 1341 gates is nothing compared to the tens of millions of transistors any modern CPU has. Which means that the NSA could have given Intel a call and provided them with certain architectural suggestions a long time ago. And we wouldn’t know (unless somebody familiar with the scheme spilled the beans). I wonder how many people at Intel would need to know about such a change? Another interesting question is whether the folks who actually make the CPUs in Taiwan possess the capability to quietly alter the chip after being contacted by some persuasive Chinese intelligence officials.
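The proportion is easy to check. As a back-of-the-envelope illustration (the transistor count and gates-per-transistor ratio below are rough assumptions, not figures from the paper):

```python
# Rough proportion of the IMP login backdoor (1,341 gates, per the
# paper's abstract) relative to a 2008-era CPU. 291 million transistors
# is the published Core 2 Duo figure; ~4 transistors per gate is a
# coarse rule of thumb.
extra_gates = 1341
cpu_gates = 291_000_000 / 4          # roughly 73 million gates
fraction = extra_gates / cpu_gates
print(f"{fraction:.6%}")             # a few thousandths of a percent
```

A change that small is far below the noise floor of any visual or area-based inspection.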

Skeptical Weenie April 24, 2008 3:15 PM

Why would I bother to do this when it is so easy to break software, hand out USB keys, mail CDs – all possible without the fuss and bother of infiltrating hardware design processes?

Anonymous April 24, 2008 3:38 PM

Sounds like someone has re-read an old copy of the novel “High Flight” (no, not the poem…)

moo April 24, 2008 3:42 PM

@Skeptical Weenie: you might not be thinking big enough. For example: pretend you are the Chinese government and you want (potentially) a way to surreptitiously access U.S. government computers.

…Or, maybe you’re a terrorist group who wants to have a “fire sale” and crash all the computers at once.

With the architecture described in their paper, you could send a UDP packet to a host and when the OS scans the bytes in it, the CPU would be trojaned even though the OS discards the packet. So simply sending 1 UDP packet to a machine lets you load your own exploit code into it which has unrestricted access to all of memory. The exploit code could read interesting memory locations and send back the results by overwriting some outgoing UDP packet, allowing the attacker to gather info about the target machine and craft a more custom attack specifically for it.

There are tonnes of possibilities.
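Here is one of them sketched out. The magic byte sequence below is invented for illustration; the point from the paper is that the trigger is data the CPU merely *sees*, so a single discarded datagram suffices:

```python
import socket

# Invented trigger pattern. A doctored CPU would watch for this byte
# sequence as the OS inspects the packet, so the trigger "fires" even
# though the OS itself discards the datagram.
MAGIC = bytes.fromhex("deadbeefcafef00d")

def send_trigger(host: str, port: int, payload: bytes = b"") -> None:
    """Fire a single UDP datagram carrying the trigger sequence."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(MAGIC + payload, (host, port))
```

Nothing on the wire distinguishes this from ordinary junk traffic, which is what makes the attack so hard to spot at the network layer.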

J.T. Kirk April 24, 2008 4:19 PM

It’s always good to have an over-ride key so you can lower the other guy’s shields when you need to.

Consider a scenario where there is a hardware board with multiple “secret” chips, each unbeknownst to the other…one belongs to you, one to someone else…maybe even another party involved.

Talk about trusted computing…

Ishmael April 24, 2008 4:23 PM

Take a look at the higher assurance levels of the Common Criteria and you’ll see that these types of things have been on the radar for a while.

MikeA April 24, 2008 4:52 PM

Since the Chinese government (among others) has access to Windows source code, and if, as MSFT claims, security requires concealing said source code, then I submit that they have a much cheaper vector than suborning processor designers. Not that practicality would stop some (ahem) researchers of my acquaintance from trying it. 🙂

1234 April 24, 2008 5:28 PM

Well, at least one of the co-authors of the paper was smart enough to earn a Ph.D. degree.

Bruce Schneier does not have a Ph.D. degree.

SteveJ April 24, 2008 7:24 PM

@Skeptical Weenie: “Why would I bother to do this when it is so easy to break software, hand out USB keys, mail CDs”

Well, you might do it if you had in mind a reasonably capable, security-conscious defender, who runs their own software (or other software you think you can’t infiltrate) but relies on commodity hardware (which you think you can infiltrate). If you’re Intel and your target is Google, as an unlikely example. Or, to feed the paranoids, if you’re the government of China and your target is western civilisation as we know it.

If you’re a run-of-the-mill hacker then obviously that isn’t the case, and this isn’t an opportunity. But just because the attacker would have to be unusual doesn’t necessarily mean you can ignore the possibility, ‘cos there only has to be one such attacker to make it happen.

TheDoctor April 25, 2008 4:44 AM

One real disadvantage of hacked hardware is that you can be tracked. Software is fluid like smoke, but hardware is real and remains as a physical trace.

Combined with governments (either yours or foreign) this could be a problem, but thinking of terrorists that attack with malicious hardware is hilarious.

SteveJ April 25, 2008 4:52 AM

Until the point where it is actually fabricated, hardware is just as fluid/smokey as software. So if you discover that your hardware is compromised, you’re still left combing through “soft” records. Did the chip fabricator change the design at the last moment, or was it the chip design company? Or an individual chip designer?

You have physical evidence of skulduggery, which can’t simply be overwritten the way rewritable media can. But the same would be true if a version of Windows issued forth from Redmond on DVD-ROM with a deliberate backdoor embedded.

TheDoctor April 25, 2008 6:31 AM

@SteveJ: …a version of Windows issued forth from Redmond on DVD-ROM with a deliberate backdoor embedded.


And directly after this the whole management of MS would be crucified. Literally.

So that is why companies go to great lengths to make it difficult for such things to happen.

bob April 25, 2008 7:02 AM

The US DoD has been buying telephones from China for years which it then installs in classified telephone networks. That seems less than optimal security.

And a classified conference room where I used to work had a VTC of unknown origin in it. Who vetted the electronics in THAT to make sure it wasn’t listening when it said it was off?

moo April 25, 2008 7:07 AM

There’s a major advantage to attacking hardware which is not present with software: the vulnerability is not patchable. If you’re China, and you manage to manufacture 50 million desktop CPUs with a vulnerability and they then get inserted into servers, home computers, firewalls, etc., then you will probably have exploitable targets around for years. Most home users, especially, are NOT going to throw out their computer and buy a new one just because of a “tiny flaw” in their CPU that they read about on the Intarweb. After all (from their point of view), the computer still works fine! It would be like the Pentium FDIV bug: not correct, but with no visible symptoms to the user (unless they are a security guru).
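The FDIV analogy can be made concrete. The operand pair that famously exposed the bug is public, so a one-line arithmetic check (shown in Python purely for illustration) distinguishes a correct FPU from the flawed one:

```python
# Classic check for the 1994 Pentium FDIV bug. On a correct FPU the
# residue comes out as 0.0; the flawed Pentium returned 256 for this
# operand pair because of missing entries in its division lookup table.
x, y = 4195835.0, 3145727.0
residue = x - (x / y) * y
print(residue)
```

Note how well this supports moo’s point: unless a user runs exactly this kind of targeted probe, the flaw is invisible in everyday use.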

Clive Robinson April 25, 2008 7:42 AM

As a “getting very long in the tooth” design engineer who has designed custom processors around AMD bit-slice chips and even the 74S TTL range of chips, I’m aware of what a low-level “hack” is capable of.

What most people do not realise is that the use of FPGAs and loadable/mutable microcode for more specialised hardware really is a big wide door that is usually left wide open.

The real question is not “is it possible?” but “how do I check to see if my hardware is compromised?”

And that folks is a very difficult question to answer (think EmSec/TEMPEST and the various Side Channel attacks).

However as noted above Step one is to extract your head from the sand and recognise it is an issue.

Step two is to use the Newtonian methodology (observe/theorise/experiment) to form a basic model of vectors.

Step three is to characterise the vectors.

Ho hum life ain’t easy…

Rai April 25, 2008 8:30 AM

Rumor: thumbdrives from China have Trojan horses. I don’t know the source of this rumor, but it has been around this year. As Bruce mentioned in a recent article, these things make it possible to lose a lot of data very easily.
Yesterday, while in a cafe men’s room, I found that someone had left a BlackBerry phone sitting on top of the TP rack. I got to play with the buttons and read the contacts until I was finished, then came out and turned it over to the employees of the cafe.
Very nice phone it was. When you choose to be insecure, it’s an attractive nuisance.

Lewis Donofrio April 25, 2008 8:37 AM


They (the Exchange/BES admins) can remotely wipe and reset the BlackBerry once it’s reported missing.

Now the TTW (time to wipe) is the window of insecurity you encountered – one that could have been closed if the user had cared enough to set a simple password on the console…

I used one a year ago on my Nextel 7150; sure, it slowed down my crackberry, but at least I felt more secure.

wawatson April 25, 2008 12:17 PM

@Rai – Do you mean the assembled thumbdrive, or the individual controller or memory chips?

I’d imagine that flooding a particular market with trojanned chips on the off chance that a few would end up in an “interesting place” would also serve to cause panic in the receiving organisation.

This also brings to mind the story from four years ago about how AMD CPU microcode updates were open to change by any hacker, whereas Intel’s were signed.

A Non Mouse April 25, 2008 2:34 PM


“AMD CPU microcode updates were open… …Intel’s was signed.”

What makes you think Intel’s was “really” signed?

Ask yourself how much silicon real estate you need for a real signing process, either as gates or as microcode Flash/EEPROM.

Then think about Microsoft and their Xbox using an Intel CPU and TEA as the crypto engine for their faulty code-update system.

I suspect that if there were real estate given over to crypto for code-signing checks, then Intel would have made it both available and generic for general use as well. Otherwise they have tied up very valuable resources for something that might only ever be used once…

Bolzano April 25, 2008 2:54 PM

An amusing misuse of the word “literally” by TheDoctor:

“And directly after this the whole management of MS would be crucified. Literally.”

Let me guess – your doctorate is not in English …

Ben Rosengart April 25, 2008 2:58 PM

You could pull an “on trusting trust” style attack and modify the CPU design software.

Brannosuke X April 25, 2008 3:49 PM

Rai, you just made another good point: people who don’t take the time to secure their data on a device that is both a method of communication and an organizer for their daily lives don’t even deserve to have one. I hope he was not a DHS or federal contractor or employee.

wawatson April 25, 2008 6:55 PM

@A non mouse …

To fill you in … the Intel signing of CPU microcode patches was solely for distribution (e.g. across the Internet) and was supposed to be checked before loading into the microcode patch area supported by the motherboard (which, of course, would then be open to the hardware-based attack mooted in the article). Nothing required any dedication of silicon real estate on the CPU itself. Check out the Intel and AMD docs for this area yourself, as well as the AMD BIOS fix to clear their Errata 109.

You could also check out the Linux ‘microcode driver’ for more info.

Spelling this stuff out … it could be possible to accumulate changes to the CPU’s operation via microcode so as to create a CPU vulnerability which a related malware attack could then exploit.

As for the various foobars like Microsoft’s … nuff said. But the one you noted was a textbook example of the use of the “Birmingham Screwdriver” approach with crypto. So I’m not even prepared to think about complete mismatches such as this.

2seriouslywrong April 27, 2008 11:18 AM

What’s this ruckus?
Intel and microcode? GRR. Many have tried to break it; search Google, read about it in security books. AMD: easy, no protection; hackers listed the code.
Blogs sometimes post middling crud to PROVOKE conversations, get intel out of people, and map knowledge.
Gets annoying sometimes…
My $.02.

Arclight April 27, 2008 4:33 PM

Adding to what at least one other user suggested, I think FPGA chips and other mutable hardware are the most likely attack vectors. While compromised chip-design software and deliberate malware engineering in the design/fab process are possible, updating FPGA logic is really not much different from standard software/firmware hacking. A large number of the high-end network devices we sell employ these, and consumer video hardware and the like is shipping with FPGA technology instead of custom ASICs.


TheDoctor April 28, 2008 6:46 AM

@Bolzano: no, of course not.

What I meant was: in the direct meaning of the word – not the intellectual crucifixion, the REAL one.

BTW: I thought this was a valid use of “literally.”
