Designing a Malicious Processor

From the LEET ’08 conference: “Designing and implementing malicious hardware,” by Samuel T. King, Joseph Tucek, Anthony Cozzie, Chris Grier, Weihang Jiang, and Yuanyuan Zhou.

Abstract:

Hidden malicious circuits provide an attacker with a stealthy attack vector. As they occupy a layer below the entire software stack, malicious circuits can bypass traditional defensive techniques. Yet current work on trojan circuits considers only simple attacks against the hardware itself, and straightforward defenses. More complex designs that attack the software are unexplored, as are the countermeasures an attacker may take to bypass proposed defenses.

We present the design and implementation of Illinois Malicious Processors (IMPs). There is a substantial design space in malicious circuitry; we show that an attacker, rather than designing one specific attack, can instead design hardware to support attacks. Such flexible hardware allows powerful, general purpose attacks, while remaining surprisingly low in the amount of additional hardware. We show two such hardware designs, and implement them in a real system. Further, we show three powerful attacks using this hardware, including a login backdoor that gives an attacker complete and high-level access to the machine. This login attack requires only 1341 additional gates: gates that can be used for other attacks as well. Malicious processors are more practical, more flexible, and harder to detect than an initial analysis would suggest.

Posted on October 16, 2008 at 12:39 PM · 31 comments

Comments

Eric K. October 16, 2008 1:13 PM

Hm. The first image this conjures up for me is sneaking malicious processors into commercial hardware where the user doesn’t realize the hardware is malicious and working against him.

But I think the more common likely use would be where the user knows the hardware is malicious because it’s working for him but against the software running on it, such as for the breaking of DRM.

Clive Robinson October 16, 2008 1:30 PM

@ Bruce,

This is so old…

It has been known for quite some time in a number of circles that CPUs can be re-microcoded.

That is, some CPUs have areas where additional microcode can be added to correct mistakes or add additional features. And there are some CPUs that have no firm microcode at all; it is all loaded in from another memory device at boot, enabling one CPU to be programmed to look like another.

Which of course means that it is possible to make the microcode effectively invisible from outside the chip (unless the chip manufacturer includes an unalterable way to securely checksum the microcode areas).

And the logical follow-on from this is that, short of ripping the chip top off and running it up under a scanning electron microscope, it is going to be impossible to 100% check that there is no malicious microcode…

The real problem from an attacker's point of view is how you make the code do anything useful…

That is, recognise something in main memory and leak it out directly or via a side channel.

And it’s that area where the new fun research is going to be 😉

Anthony October 16, 2008 1:31 PM

I’ve often considered the possibility of network chips being designed to give access to system memory with a magic packet that could be piggybacked on legitimate traffic through firewalls or IDSes.
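For illustration only (neither the comment nor the paper specifies a design), the magic-packet idea can be sketched in software. The key, tag length, and command format below are all invented for this hypothetical sketch: a compromised NIC's filter logic slides over payload bytes looking for a short keyed tag, so the trigger can hide inside any legitimate-looking stream.

```python
import hmac, hashlib

# Hypothetical sketch of a "magic packet" detector: a compromised NIC scans
# payload bytes for a short keyed trigger that can be piggybacked on
# otherwise-legitimate traffic. Key, tag length, and command format are
# invented for this illustration.

TRIGGER_KEY = b"attacker-shared-secret"
TAG_LEN = 8   # truncated MAC: cheap to scan for, hard to hit by accident
CMD_LEN = 4   # fixed-size command word following the tag

def make_trigger(command: bytes) -> bytes:
    """Build a trigger blob an attacker could embed anywhere in a payload."""
    assert len(command) == CMD_LEN
    tag = hmac.new(TRIGGER_KEY, command, hashlib.sha256).digest()[:TAG_LEN]
    return tag + command

def scan_payload(payload: bytes):
    """What the malicious filter logic would do in hardware: slide a window
    over the payload and return the first command whose tag verifies."""
    for i in range(len(payload) - TAG_LEN - CMD_LEN + 1):
        tag = payload[i:i + TAG_LEN]
        cmd = payload[i + TAG_LEN:i + TAG_LEN + CMD_LEN]
        expected = hmac.new(TRIGGER_KEY, cmd, hashlib.sha256).digest()[:TAG_LEN]
        if hmac.compare_digest(tag, expected):
            return cmd
    return None
```

Embedded mid-stream, say inside an HTTP response body, the trigger verifies only for whoever holds the key, while a firewall or IDS sees nothing but ordinary bytes.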

Davi Ottenheimer October 16, 2008 1:33 PM

IMP? I don’t think so. I say MPU. Otherwise it will become IMPU, and what does Illinois really have to do with anything anyway? They can be made anywhere by anyone.

“Maliciously modified devices are already a reality. In 2006, Apple shipped iPods infected with the RavMonE virus”

I don’t see the malicious argument at all. The story in 2006 was that a test lab in China had an infection of RavMonE that spread to units tested there. Has malicious intent been found? Same for the Seagate case.

Other examples in the paper are related to the CIA and KGB.

Strange segue and background. I see a vast gulf between accidental test/lab viruses and the efforts of state intelligence agencies during war.

“Although some initial work has been done on this problem in the security community, our understanding of malicious circuits is limited.”

I guess they are excluding all peripheral and tap-based hardware attacks. Many operational security managers are well versed in the problem of hardware attacks and malicious circuits. Some I know require access to circuits to be either sealed and monitored at all times or exposed/transparent so detection is easy. Speaking of which, the defense section omits the concept of using quantity and baselines. Hard to imagine someone serious about catching malicious or even malfunctioning hardware only ever performing tests on a single unit.

“it is doubtful that ‘script kiddies’ will turn their adolescent energies to malicious processors”

Ugh. Totally unnecessary. Smug and adolescent remark. Wise attackers use a path of least resistance, relative to a reward.

Clive Robinson October 16, 2008 1:39 PM

Oops, never try to do two things at the same time…

In my comment above, “be possible to 100% check” should of course read “be impossible to 100% check”.

Oh, I also forgot to mention that just as it is difficult to get the new microcode to do anything useful like leak data,

It is also very difficult to get it to do things on command from outside.

In both cases you almost always need to be able to use existing microcode in the CPU, and that is not possible with all CPU architectures.

Bahggy October 16, 2008 1:49 PM

@Clive Robinson,

I have to agree, getting microcode to do things on command from the outside will be ‘tricky’.

Given that this is Bruce’s blog, perhaps we could imagine a scenario where microcode is modified to provide predictable pseudo-random number generation.

Clive Robinson October 16, 2008 2:44 PM

@ Bahggy,

“Given that this is Bruce’s blog, perhaps we could imagine a scenario where microcode is modified to provide predictable pseudo-random number generation.”

Actually that would be quite easy on a processor that has a built-in TRNG based on, say, thermal noise from a resistor (Intel).

Essentially you overlay the microcode call to the TRNG register array with your own.

If it was the sort of TRNG that takes the “real entropy pool” and hashes it, then all you need to do is replace the pool with something like a counter that starts at a known offset and is maybe 16 bits wide…

Obviously you would want to prepend the counter with a value that changes from CPU to CPU, like the Intel chip serial number for instance.

You could also add a bit of pseudo-random time every time a certain memory range was used (i.e. one that corresponds to the IO range for network cards).

You could modulate this with the CPU serial number, in effect sending it out spread-spectrum encoded on top of any network packets sent…

How about that for a first off-the-cuff idea?
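The off-the-cuff design above is easy to simulate. This is a hypothetical sketch, not any real CPU's microcode: the "entropy pool" fed to the hash is really just the per-chip serial number prepended to a 16-bit counter, so anyone who knows the serial can reproduce the whole output stream.

```python
import hashlib

# Simulation of the backdoored TRNG described above (all names invented):
# instead of hashing real entropy, the "microcode" hashes the chip serial
# number plus a 16-bit counter starting at a known offset.

class BackdooredTRNG:
    def __init__(self, chip_serial: bytes, counter_start: int = 0):
        self.serial = chip_serial
        self.counter = counter_start & 0xFFFF

    def random_bytes(self) -> bytes:
        # The "entropy pool" is entirely predictable given the serial.
        pool = self.serial + self.counter.to_bytes(2, "big")
        self.counter = (self.counter + 1) & 0xFFFF   # 16-bit counter wraps
        return hashlib.sha256(pool).digest()
```

An attacker who has recovered the serial (leaked spread-spectrum over network traffic, say) instantiates an identical generator and predicts every "random" output; and because the counter is only 16 bits, the stream repeats after 65,536 draws.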

tim October 16, 2008 2:45 PM

OK, I have not read the link, so maybe it mentions microcode somewhere, but if it doesn’t, why are you guys talking about microcode? The blurb Bruce quoted is talking about adding additional gates to a processor, a physical hardware design change, that could be used to circumvent security.

Clive Robinson October 16, 2008 2:54 PM

@ Bahggy,

Or the ultimate TPM / DRM killer.

TPM will eventually be built into the CPU. To actually put the secret and other keys into the TPM at any time other than at the factory means they will have to go through the CPU…

Hide a copy of the key in CPU flash etc., and again spread-spectrum modulate it onto the CPU clock etc. to leak it out.

Now that is an attack with real merit, as it would show just how useless TPM would really be…

People should not get me thinking nefarious thoughts it’s dangerous 😉

Jim October 16, 2008 3:01 PM

This is exactly the kind of stuff I am really scared of. It is (1) hard and expensive to detect, and out of reach of any community-driven auditing process; (2) without remedy short of a new CPU, which closes the circle back to (1).
Yeah, find those 1341 gates on an Intel quad-core, good luck.

Additionally, this seems like reinvention in public of research the NSA has done before.
J.

Reality Check October 16, 2008 3:12 PM

Who needs malicious hardware when we have already Microsoft software and legions of less than mediocre programmers right here?

Clive Robinson October 16, 2008 3:20 PM

@ tim,

“… why are you guys talking microcode?”

For the simple reason that it is low-hanging fruit that you can play with now…

Adding gates requires a modification to the CPU internals that is deliberate, has to happen way up the design pipeline, and would actually be quite difficult to sneak through the conventional design process; and the opportunity to do it only arises at the first design stage.

Microcode, on the other hand, is done much further down the design pipeline, and from the silicon point of view it is just more of what is already there, so it is a lot easier to hide your malicious stuff in at a later date. Also, the designers usually put a bit more microcode space in on the off chance they will have to make changes. And importantly, it would usually mean changing only one of the masks, which means you can slip your nasty in on a processor step change, which happens quite often in comparison.

Finally, as I said, some processors allow microcode to be loaded up into a CPU after it’s in a motherboard, which means that you could possibly do it with some sort of virus…

Clive Robinson October 16, 2008 4:02 PM

@ tim,

Another point that is worth considering is deniability.

A modern fab plant capable of producing high-end CPUs is expensive, very expensive. You are talking about it being cheaper to buy a couple of countries; it’s that expensive.

Now if you are Mr Intel, you do not want to risk that sort of investment even “for the man”. If you actually put the gates into, say, the latest batch of high-end CPUs, you know there is a very real chance it is going to be found. And you have no deniability: it had to have happened not only in your factory but right at the top of the design chain, where only one or two senior people could have “slipped it in” without it being routinely picked up…

So you need to move it down the design chain, preferably right out of your factory.

Well, microcode is really software, and would be just as ephemeral were it not for the fact that it’s put into the chip as one of the metallisation (tracking) masks.

So, as Mr Intel, think one step further on: how do I get rid of the traceability that leaves me no deniability? Well, traceability in the fab process is probably better than the military has for its A-bombs (and no, I’m not joking). So you really cannot buck it and hope not to be detected.

Simple: you know that microcode needs to be changed from time to time, and having the ability to modify it is the next best thing to being able to add gates to the design. So what do you do…

You make part or all of the microcode field-upgradable, and whilst you are at it, if your balls are made of highly polished brass, slip a few field-programmable gate arrays in there as well using the same argument. It all goes into the design for good and proper reasons that even the most sceptical shareholder will agree with (remember the cost of the Pentium floating point bug?).

Now, as it can be reprogrammed on the motherboard, you slip the malicious update in with the kit supplied to motherboard makers who supply to the region “the man” wants bugged…

If you are really good then you actually do it via a virus.

You have good deniability and “the man” gets what he wants. Everybody is happy except for the government and users of the region targeted. But you can always rush out the correct field update, wait a while, and then send out a virus to switch it all back again. Better still, get “the man’s” other friends to do it…

Any points I missed?

Curmudgeon October 16, 2008 8:05 PM

Microcode loads on Intel chips are allegedly encrypted and/or signed, but the algorithms are unlikely to be strong because Intel would be offering hardware crypto functions in the instruction set if the hardware had the capability to do real encryption.

In contrast, AMD microcode loads are believed to be essentially unauthenticated. Knowledge that this feature could be exploited to subvert OS memory protection (in much the same way this paper suggests) dates back to at least 2004. [1]

Malicious hardware is not an entirely new concept. In the x86 world, there has always been the risk of a malicious motherboard causing havoc through creative use/abuse of hardware level debugging features and/or SMM.

[1] http://www.interesting-people.org/archives/interesting-people/200407/msg00251.html

Kevin Maciunas October 16, 2008 8:12 PM

@Clive Robinson

I suspect that Mr Intel could have already engineered this. As Al W. pointed out earlier – and the kit is available.

All we need to do is interfere with the supply chain. Your company orders new laptops. I buy laptops with “Intel vPro” technology and “pre-configure” them. Re-flash the BIOS to remove the config menus and hey presto. (Of course, I also cunningly remove the “with vPro technology” stickers, just to make it hard to spot.)

Given that I haven’t actually tried this, there might be a hidden gotcha in the above but I’d rate it a pretty decent chance of working effectively.

moo October 16, 2008 10:06 PM

I wouldn’t worry much about the American three-letter agencies persuading Intel to bug their chips. I would worry more about fabs in China…

I think I remember hearing about this IMP research a year or two ago.

Suppose you are a Chinese intelligence agency, with access to a plant where chips are made for desktop PCs. (It might not even have to be CPUs… what about compromised northbridge/southbridge chips? What fraction of the cheap motherboards in the world are made in China?)

They could surely sneak in 1341 extra gates during the fabbing process, among the tens of millions of existing gates… how would the buyers ever find that out? I admit it may not be that easy, but imagine the intelligence coup it would represent if you could get American government agencies to buy computers with those tampered chips in them! You could send an innocuous packet to them (disguised as part of an HTTP response, for example) and load a small payload onto them which would have direct access to all of RAM and could then send packets back out containing whatever you wanted to see. At the very least, it’s one hell of a vector for covertly inserting a rootkit or other spyware.

bim October 17, 2008 3:05 AM

@moo
“but imagine the intelligence coup it would represent if you could get American government agencies to buy computers with those tampered chips in them!”

Wasn’t it early this year that there was a story about counterfeit routers being used in US government networks? I seem to remember that at the time there was a certain amount of “how do they know the components are exactly the same as in real routers?”.

Jonadab the Unsightly One October 17, 2008 6:34 AM

Correct me if I’m missing something, but this seems to me like it would only be practicable if you know exactly what software the system is going to run (well, exactly what OS, at least) at hardware-design time (or, with microcode, at microcode-write time, which as near as I can tell would generally still be well before the hardware is sold to a specific customer, except maybe for a few special cases like the military).

I mean, how could you design a system to compromise the software if you don’t know what the software’s going to be? Seems unlikely.

My conclusion would be that the best way to thwart this, or at least make it very very difficult, would be to avoid using the bundled OEM software. Format the drive and install different software that you obtain separately.

Richard October 17, 2008 6:39 AM

Of course if you really want to hide your tracks you could do it by introducing a noise vulnerability that only occurs for certain bit patterns. The logical design will then be innocent (although it might need to contain a few apparently pointless gates).

Anonymous Coward October 17, 2008 8:31 AM

This has been done for years via IP over A/C. All your bytes are belong to “them” for a long, long time now.

asdf October 17, 2008 3:04 PM

This is old. The King et al paper came out a while back.

Bruce Schneier is out of date and out of touch as usual.

Less October 18, 2008 11:00 PM

Most organizations (government and private) are clueless as to how much of a problem this is.

It never ceases to amaze me how high-ranking officials carry around BlackBerries, when they must have been warned that:

a) it is sent over the air (lots of scope for passive intercepts)

b) it is sent via the internet (more scope for intercepts)

c) it is stored on a foreign server (grin)

d) it is forwarded around elsewhere (more scope)

e) the devices themselves are not secure (COTS chips)

f) the software/firmware (in the device, and everywhere en route) is not secure

However, none of these seem to keep people from using them.

But then, it beats using gov.palin@yahoo.com

Less October 19, 2008 12:57 AM

While we are on the subject of “high tech” ways of doing things, the methods below, devised quite a few generations back, still work wonders:

  • send someone with a pair of binoculars to lip read people

  • read their body language and facial expressions

  • send in a “honey” trap

Kadin October 21, 2008 4:41 PM

@Jonadab:

That’s really not much of a hardship. If you were bugging desktops, and your target was government agencies or big corporations, it wouldn’t be hard to guess what flavor of Windows they’re most likely to install. At least for the foreseeable future I don’t see that changing.

If you’re bugging or tampering with embedded systems of some sort, it’s even easier. I would imagine that tampering with chips used in routers or firewalls would be even more tempting than desktop PCs; compromise an organization’s network security and you might not even need to tap individual desktop computers.

If you’re rigging an ASIC that’s used in Cisco routers, it’s pretty much a given what kind of software the finished device is going to be running. Ditto with many other network security products that have proprietary chips in them.

And even if you’re not entirely sure what the finished product is going to be running, if you aim for the fat part of the bell curve (right now, Windows for general purpose PCs, maybe VxWorks for embedded processors?), you could snag a lot of people. This would be especially true if your goal is not very specific espionage but simply malware propagation or something more general in scope.

HumHo October 22, 2008 2:06 PM

As a European, I would suspect the American alphabet agencies of interest in something like this rather than some Chinese company. But the Chinese seem to be a great bogeyman at the moment.

Humho October 22, 2008 2:08 PM

Less:
– send in a “honey” trap


What is a “honey” trap? A honey that is a trap? hmmm..aren’t they all?
