Backdoor Found (Maybe) in Chinese-Made Military Silicon Chips

We all knew this was possible, but researchers have found the exploit in the wild:

Claims were made by the intelligence agencies around the world, from MI5, NSA and IARPA, that silicon chips could be infected. We developed breakthrough silicon chip scanning technology to investigate these claims. We chose an American military chip that is highly secure with sophisticated encryption standard, manufactured in China. Our aim was to perform advanced code breaking and to see if there were any unexpected features on the chip. We scanned the silicon chip in an affordable time and found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key, which we were able to extract. If you use this key you can disable the chip or reprogram it at will, even if locked by the user with their own key. This particular chip is prevalent in many systems from weapons, nuclear power plants to public transport. In other words, this backdoor access could be turned into an advanced Stuxnet weapon to attack potentially millions of systems. The scale and range of possible attacks has huge implications for National Security and public infrastructure.

Here’s the draft paper:

Abstract. This paper is a short summary of the first real world detection of a backdoor in a military grade FPGA. Using an innovative patented technique we were able to detect and analyse in the first documented case of its kind, a backdoor inserted into the Actel/Microsemi ProASIC3 chips. The backdoor was found to exist on the silicon itself, it was not present in any firmware loaded onto the chip. Using Pipeline Emission Analysis (PEA), a technique pioneered by our sponsor, we were able to extract the secret key to activate the backdoor. This way an attacker can disable all the security on the chip, reprogram crypto and access keys, modify low-level silicon features, access unencrypted configuration bitstream or permanently damage the device. Clearly this means the device is wide open to intellectual property theft, fraud, re-programming as well as reverse engineering of the design which allows the introduction of a new backdoor or Trojan. Most concerning, it is not possible to patch the backdoor in chips already deployed, meaning those using this family of chips have to accept the fact it can be easily compromised or it will have to be physically replaced after a redesign of the silicon itself.

The chip in question was designed in the U.S. by a U.S. company, but manufactured in China. News stories. Comment threads.

One researcher maintains that this is not malicious:

Backdoors are a common problem in software. About 20% of home routers have a backdoor in them, and 50% of industrial control computers have a backdoor. The cause of these backdoors isn’t malicious, but a byproduct of software complexity. Systems need to be debugged before being shipped to customers. Therefore, the software contains debuggers. Often, programmers forget to disable the debugger backdoors before shipping. This problem is notoriously bad for all embedded operating systems (VxWorks, QNX, WinCE, etc.).


It could just be part of the original JTAG building-block. Actel didn’t design their own, but instead purchased the JTAG design and placed it on their chips. They are not aware of precisely all the functionality in that JTAG block, or how it might interact with the rest of the system.

But I’m betting that Microsemi/Actel know about the functionality, but thought of it as a debug feature, rather than a backdoor.

It’s remotely possible that the Chinese manufacturer added the functionality, but highly improbable. It’s prohibitively difficult to change a chip design to add functionality of this complexity. On the other hand, it’s easy for a manufacturer to flip bits. Consider that the functionality is part of the design, but that Actel intended to disable it by flipping a bit turning it off. A manufacturer could easily flip a bit and turn it back on again. In other words, it’s extraordinarily difficult to add complex new functionality, but they may get lucky and be able to make small tweaks to accomplish their goals.

EDITED TO ADD (5/29): Two more articles.

EDITED TO ADD (6/8): Three more articles.

EDITED TO ADD (6/10): A response from the chip manufacturer.

The researchers' assertion is that with the discovery of a security key, a hacker can gain access to a privileged internal test facility typically reserved for initial factory testing and failure analysis. Microsemi verifies that the internal test facility is disabled for all shipped devices. The internal test mode can only be entered in a customer-programmed device when the customer supplies their passcode, thus preventing unauthorized access by Microsemi or anyone else. In addition, Microsemi’s customers who are concerned about the possibility of a hacker using DPA have the ability to program their FPGAs with its highest level of security settings. This security setting will disable the use of any type of passcode to gain access to all device configuration, including the internal test facility.

A response from the researchers.

In order to gain access to the backdoor and other features a special key is required. This key has very robust DPA protection, in fact, one of the best silicon-level protections we have ever encountered. With our breakthrough PEA technique we extracted the key in one day and we found that the key is the same in all ProASIC3, Igloo, Fusion and SmartFusion FPGAs. Customers have an option to program their chosen passcode to increase the security; however, Actel/Microsemi does not tell its customers that a special fuse must be programmed in order to get the backdoor protected with both the passcode and backdoor keys. At the same time, the passcode key can be extracted with our PEA technique which is public and covered in our patent so everyone can independently verify our claims. That means that given physical access to the device an attacker can extract all the embedded IP within hours.

There is an option for the highest level of security settings – Permanent Lock. However, if the AES reprogramming option is left it still exposes the device to IP stealing. If not, the Permanent Lock itself is vulnerable to fault attacks and can be disabled opening up the path to the backdoor access as before, but without the need for any passcode.

Posted on May 29, 2012 at 2:07 PM · 67 Comments


Aaron Andrusko May 29, 2012 2:47 PM

Amazing that this article will be the end of the public discussion too.

Jason May 29, 2012 2:48 PM

Is there one master key that unlocks the backdoor? So you would not require physical access to the device to discover it? It sounds like it is separate from the user-generated key.

Actel reprogrammable devices typically program via JTAG, which means either that it is programmed once in the factory with a fixture, or else you need some other external logic or controller connected to the JTAG pins to be able to reprogram it. In the fixture case, you’d need physical access to the device to compromise it. In the second case, you’d need to be able to break into the system, replace the programming file, and force the system to initiate a programming cycle (JTAG player, if it is available.)
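Jason’s two paths (factory fixture vs. on-board controller) both come down to electrically driving the TAP pins. As a rough illustration of why, here is a toy Python model of the IEEE 1149.1 TAP controller reading out an IDCODE after reset. The IDCODE value and the simplifications (no instruction register behaviour, DR preloaded at reset) are invented for the sketch and not taken from any Actel part.

```python
# Toy model of an IEEE 1149.1 (JTAG) TAP controller, illustrating why
# reading a device out over JTAG needs access to TCK/TMS/TDI/TDO.
# The IDCODE value below is made up; real parts have vendor-assigned codes.

# TAP next-state table: state -> (next state if TMS=0, next state if TMS=1)
TAP_FSM = {
    "TEST-LOGIC-RESET": ("RUN-TEST/IDLE", "TEST-LOGIC-RESET"),
    "RUN-TEST/IDLE":    ("RUN-TEST/IDLE", "SELECT-DR-SCAN"),
    "SELECT-DR-SCAN":   ("CAPTURE-DR",    "SELECT-IR-SCAN"),
    "CAPTURE-DR":       ("SHIFT-DR",      "EXIT1-DR"),
    "SHIFT-DR":         ("SHIFT-DR",      "EXIT1-DR"),
    "EXIT1-DR":         ("PAUSE-DR",      "UPDATE-DR"),
    "PAUSE-DR":         ("PAUSE-DR",      "EXIT2-DR"),
    "EXIT2-DR":         ("SHIFT-DR",      "UPDATE-DR"),
    "UPDATE-DR":        ("RUN-TEST/IDLE", "SELECT-DR-SCAN"),
    "SELECT-IR-SCAN":   ("CAPTURE-IR",    "TEST-LOGIC-RESET"),
    "CAPTURE-IR":       ("SHIFT-IR",      "EXIT1-IR"),
    "SHIFT-IR":         ("SHIFT-IR",      "EXIT1-IR"),
    "EXIT1-IR":         ("PAUSE-IR",      "UPDATE-IR"),
    "PAUSE-IR":         ("PAUSE-IR",      "EXIT2-IR"),
    "EXIT2-IR":         ("SHIFT-IR",      "UPDATE-IR"),
    "UPDATE-IR":        ("RUN-TEST/IDLE", "SELECT-DR-SCAN"),
}

class ToyTap:
    IDCODE = 0x12345043             # hypothetical 32-bit ID, not a real part's

    def __init__(self):
        self.state = "TEST-LOGIC-RESET"
        self.dr = self.IDCODE       # simplification: reset preloads DR with IDCODE

    def clock(self, tms, tdi=0):
        """One TCK edge: returns the DR LSB as TDO (meaningful in Shift-DR),
        shifts the DR if in Shift-DR, then advances the state machine."""
        tdo = self.dr & 1
        if self.state == "SHIFT-DR":
            self.dr = (self.dr >> 1) | (tdi << 31)
        self.state = TAP_FSM[self.state][tms]
        return tdo

def read_idcode(tap):
    for _ in range(5):                  # TMS high 5x forces Test-Logic-Reset
        tap.clock(tms=1)
    for tms in (0, 1, 0, 0):            # Idle -> Select-DR -> Capture -> Shift
        tap.clock(tms=tms)
    bits = [tap.clock(tms=0) for _ in range(32)]   # shift 32 bits out, LSB first
    return sum(b << i for i, b in enumerate(bits))
```

The point of the sketch is just that every readback or reprogram operation is a TMS/TDI/TDO pin dance like this one, so the attacker needs something — a cable, a fixture, or a cooperating on-board device — electrically connected to those pins.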

Ken May 29, 2012 3:14 PM

How interesting that this comes just a month after I touched this subject on my blog…

What if your hardware was infected with a virus?

A few key points:

While this is unlikely to occur, it’s still a possibility. Chances are that if a piece of hardware were modified that significantly, it would be the deliberate action of a well-funded organization, with malware rivaling Stuxnet or Duqu. Such an organization would need to do a lot more than just infect a USB stick – it would need someone on the inside of the manufacturing process to implement any hardware-based malware, and would most likely be government funded. This malware would be well beyond the complexity of Stuxnet or Duqu, as it would be written at the physical hardware layer, incorporated into the equipment itself.

While the likelihood of this being detected at the manufacturer level is relatively high, thanks to quality-control processes, if a piece of hardware-based malware were missed by a manufacturer – or intentionally introduced by a manufacturer under direction of its government – then once the hardware leaves the factory, the malware would be nearly impossible to detect until it was too late.

Ultimately, this raises the question of “how well do you trust your manufacturers?” Are you having a local, trusted manufacturer you’ve dealt with for years build your equipment, or do you outsource your manufacturing to the cheapest supplier overseas, one you’ve never even met face-to-face?

In a world where best practices such as configuration management and configuration standardization are becoming key, should a piece of hardware based malware be created, configuration standardization may ultimately be our own downfall.

Nick P May 29, 2012 4:37 PM

Fabs in America or very friendly countries. I repeat it again here. Add to it designs reviewed in full [by DOD/NSA] for stuff like backdoors. Then, we might be getting somewhere.

Greg A May 29, 2012 5:02 PM

If, as sounds plausible, this debug feature is from the generic JTAG block, the obvious next question is how many other chips are also vulnerable to this exact same exploit? Presumably the key would be the same for ALL the chips…

Russ May 29, 2012 5:12 PM

I am an FPGA designer, and I can tell you that this backdoor is not surprising: both because of the complexity that Graham (@ErrataRob), aka “researcher,” points out, and because the JTAG interface mentioned is such a low-level interface that it touches almost everything in the chip.

I can’t speak authoritatively on this particular Actel family. However, most FPGAs do not use the JTAG interface in the field. Typically, this interface is brought out to its own connector on the board and is only used during initial development. In the field it is usually not connected to anything at all, since the FPGA has alternate, faster methods and interfaces for loading its brains. So if the hardware (the board the FPGA is installed on) is designed with security in mind, a JTAG security hole shouldn’t be an issue.

I can already hear some people saying that, depending on the FPGA family, some of the JTAG pins are re-purposed for general IO but can be forced back to their original JTAG purpose. But that depends on the voltage levels of a few pins at power-up. So again, with proper attention to detail, this risk can be significantly mitigated.

Of course, even if the JTAG pins are connected to other chips on the same board, such as microprocessors, just being able to twiddle these pins would require either a Herculean brute-force effort or access to (or knowledge of) the board-level design. Given that an attacker is targeting a product where the FPGA in question is a known component, maybe that isn’t much of a leap of imagination. But it doesn’t really keep me awake at night. At this point I think social engineering of the people controlling the products is a much easier target.

dragonfrog May 29, 2012 5:15 PM


You’d still need either physical access to the chip, or remote access to a computer connected by JTAG to the chip, to take advantage of this backdoor.

That doesn’t invalidate the seriousness of the vulnerability. Take this scenario as a hypothetical example:

The application is a guidance system for a surface-to-air missile. The primary goal is of course that the missiles will hit aircraft as often as possible; an important secondary goal is that any unexploded missiles that land in enemy territory do not enable future opponents to build sophisticated SAM guidance systems to shoot down your own planes.

So, you pick an FPGA whose manufacturer makes strong claims that it is impossible to extract the program from the chip. Confident in those claims, you burn your most sophisticated guidance algorithms into the chip and end up littering them all over battlefields for the next couple of decades.

Some years after selecting this FPGA, you read this paper.

Ken May 29, 2012 5:31 PM


“You’d still need either physical access to the chip, or remote access to a computer connected by JTAG to the chip, to take advantage of this backdoor.”

Yep…No way for a completely offline piece of hardware to get exploited by malware…just ask Iran’s nuclear engineers.

You can attempt to justify using vulnerable hardware because there are mitigations, but not fixes, available, but without a fix, it’s still a vulnerability. For a mission critical piece of hardware, that’s unacceptable.

Dirk Praet May 29, 2012 7:02 PM

@ Alan Hargreaves

IMHO our former colleague Alec (as usual) pretty much nails it. If you hadn’t pointed to his blog post, I would have.

I’m pretty curious what the official Microsemi/Actel explanation is gonna be. One of the more interesting side phenomena of the story is the very strong polarisation over it in the infosec community, with some parties crying bloody murder and others calling it bogus and a non-event. Same thing with Flame/Skywiper, by the way.

M.V. May 29, 2012 7:15 PM

A little research shows that the Actel/ProASIC3 is manufactured at UMC. UMC is a Taiwanese company with fabs in Taiwan, Singapore and Japan.

So I ask: where does that “manufactured in China” hyperbole come from?

Hans May 29, 2012 8:48 PM

I’m sick of these “dissenting” voices downplaying the severity of this. From what is reported, the secret backdoor allows you, for example, to read back values which should never be readable (and note that the manufacturer claimed the logic for this is not even present, so a big lie).

I fail to see how being able to read back “secret” values is not a severe security risk, especially considering that it is reported that the same backdoor key works on all chips.

carey May 29, 2012 8:54 PM

You mean relying on your sworn ideological enemy for all your manufacturing needs will eventually bite you in the ass? Shocking!

Clive Robinson May 30, 2012 3:39 AM

@ M.V.,

UMC is a Taiwanese company with fabs in Taiwan, Singapore and Japan. So I ask: where does that “manufactured in China” hyperbole come from?

First off, Taiwan is also known as the “Republic of China” and, contrary to what you read in US newspapers, there is not a siege situation.

Goods and people move fairly routinely between the two; the ethnic Taiwanese are the same as the ethnic Chinese, as they were one and the same people for a considerable period before WWII.

It is known that the Chinese Government have invested quite heavily in Taiwanese companies or their parent companies through various other “cut out” organisations.

So getting and placing Chinese Government Agents in these companies is more a question of logistics as opposed to Cloak and Dagger.

The term “Chinese knock-off” originally applied to the ROC, not to the mainland; the name just shifted to the mainland in more recent times as it became more industrialised.

ATN May 30, 2012 3:52 AM

I wonder if the “aes encrypt” assembly instruction on PC microprocessors has a backdoor on some of these counterfeit microprocessors, so that it keeps its input somewhere inside the chip…

ewan May 30, 2012 3:54 AM

“It’s remotely possible that the Chinese manufacturer added the functionality, but highly improbable. It’s prohibitively difficult to change a chip design to add functionality of this complexity”

This seems to assume that the manufacturer would be doing this on their own without help from any well resourced sponsor.

With Stuxnet and Flame a common line seems to have been “This is too complex for individuals, it must have been a state”; in this case it’s “This is too complex for individuals, it must not be happening.”

Mark Currie May 30, 2012 4:11 AM

I doubt that missile designers would rely on a chip’s protection mechanisms alone. I would be more inclined to believe that they would follow the kind of methods used by crypto module designers.

Generally the crypto hardware community just assumes that all chips have a backdoor that the manufacturers use for testing. Sensitive firmware and FPGA configs etc. are typically protected by encapsulating the chips and associated circuitry within a custom-designed anti-tamper enclosure. This also allows the possibility of erasing the sensitive information when tampering is detected.

It’s not 100% secure, but it does force the attacker to start from an unknown position and to break in using sophisticated wafer-fab equipment, rather than being able to simply plug into the JTAG or access the manufacturer’s testing backdoor.

For me the real danger lies with integrated comms interfaces and particularly RF comms. Physical protection in this case is useless if there is a backdoor through an external comms interface. Ideally the comms interfaces should be on separate chips.

v May 30, 2012 5:22 AM

“American military chip that is highly secure with sophisticated encryption standard, manufactured in China.”
I may be being simplistic but doesn’t this one sentence tell you enough…
You only need to ask yourself one question – if the US manufactured military chips for China would they insert a backdoor?

terryg May 30, 2012 7:25 AM

re. UMC: we’ve been designing some custom silicon, and my understanding (I’m not the chip designer) is that UMC is the Chinese version (poorly phrased) of TSMC.

re. JTAG: every product I’ve designed this century has a decent-sized FPGA in it (I don’t code the VHDL, I just design hardware). We always have a JTAG port to get the FPGA going, but we also always connect the FPGA JTAG to our micro – allowing in-situ firmware upgrades through the hardware comms interface (usually Gig-E) without cracking the case.

And when it is done, voilà – the system is now that much closer to hackable.

I can see why this might not be done, but I doubt I’ll ever come across circumstances in which we would not connect the micro to the FPGA JTAG. You’d want to be mighty confident the VHDL was perfect…

Someone May 30, 2012 7:51 AM

Ten years ago I worked on updating a design of some equipment normally sold commercially, so that it could be used by the US DoD. One change we were required to make was removing Chinese-manufactured FPGAs; we were told by the DoD that these were not acceptable for use in secure products.

So I’m surprised that there is any US military equipment out there using Chinese-manufactured FPGAs at all.

M.V. May 30, 2012 8:04 AM

Clive:”First off Taiwan if also known as the “Republic of China” and contrary to what you read in US newspapers their is not a siege situation.”

I should have added that I know Taiwan calls itself the Republic of China.

The relationship may not be a siege, but it is still rather strained. Direct travel between the two countries has only been possible for a few years.

They may look the same to an outsider, but there are even language barriers. So it is not that easy to place spies – probably harder than placing Chinese spies in Silicon Valley. (When I was last there I had the impression that every other engineer was of Chinese origin.)

I stand by it: Manufactured in China (as it is implied today) is not the same as Manufactured in Taiwan (aka R.o.C.).

This is simply blown out of proportion, just like the shoe and the underpants bombers (and those were real).

Chris May 30, 2012 9:09 AM

Think of it this way: if the Chinese were getting chips fabricated in the US, would the government there try to get a backdoor enabled/installed?

wumpus May 30, 2012 1:24 PM

Anybody familiar with their use?

I see two ways to attack with such a back door:

  1. Reverse engineering. Commercial FPGAs include encryption to make reverse engineering slightly more expensive (at least for TLAs and larger corporations). At first, I assumed this was the extent of the issue.

  2. Remote hacking. If it can run an (even abridged) TCP/IP stack, somewhere, somebody has left one remotely controllable. It doesn’t matter how hard you swing the LART; somewhere along the line, one just had to be connected to the open internet. Reflash and away you go. Before claiming that the military never does this, ask yourself how in the world drone-controlling computers get viruses.

Nick P May 30, 2012 1:28 PM

@ M.V.

China != Taiwan, certainly. I trust Taiwan far more. However, I think you’re overstating the difficulty of turning someone into a spy. Do they care about their family? Is what you’re offering them more than they make in a year? Will they lose their job anyway?

And so on and so on… An evil and often effective game, tradecraft is.

Bob G May 30, 2012 1:54 PM

I used to work for a major defense contractor. “Military silicon chip” means what? What product uses these chips? Milspec (military-spec) chips are not made by the Chinese or outside the US. Even contacting China by email triggers a review for disclosure to Foreign Controlled Nationals. Any information transferred to them would come under control. For milspec, the entire manufacturing process is controlled. It is not some test at the end of fab that makes an item milspec.

A_Nonny_Mouse May 30, 2012 2:36 PM

I have long been CONVINCED this exact scenario would be discovered.

When you contract WITH YOUR ENEMY for critical security or military products, you can bet they’re going to find a way to use it against you.


It was treason and needs to be treated as such.

Nick P May 30, 2012 3:41 PM

@ Bob G on Foreign Sourced Stuff

The article below, and my own experience of rigorous government requirements getting lax enforcement, seem to contradict your position. An example of the latter: a “controlled interface” connecting a highly sensitive network with a less trusted one like the Internet must be assured at very high levels. However, they’ve been using solutions built on EAL4 (read: LOW assurance) technology to do this.

Requirements on paper != requirements in practice.

YU May 30, 2012 10:43 PM

It looks like these guys are trying to sell their technology.

Open-source firmware could be safer.

RobertT May 31, 2012 4:55 AM

@ Mark Currie
“I doubt that missile designers would rely on a chip’s protection mechanisms alone. I would be more inclined to believe that they would follow the kind of methods used by crypto module designers.”

Would you care to elaborate?

Because as far as I know these FPGAs are internally Flash-programmed. So the “program” resides inside the FPGA, unlike RAM-based FPGAs where the load data could reside inside a secure ROM.

If you have access to the JTAG then you can probably dump the FPGA state, which for something like a missile control system is probably very valuable. I’d imagine the FPGA contains information on things like electronic countermeasures and intercept-avoidance technology. I’ve gotta believe that kinda stuff is pretty secret and closely guarded information.

The only saving grace is that the attack requires physical access to the FPGA; hopefully that access is also closely guarded.

RobertT May 31, 2012 6:14 AM

WRT FPGA chips, you will find that all of them incorporate some form of serial test port. In this case, JTAG.

Now with ANY serial test port like this you have limited options after doing wafer-level and package test.

You can:
1) Not bond the JTAG pads (test only available at wafer level)
2) Bond, but use lock-out EEPROM or fuse bits
3) Intentionally burn out some circuitry (typically an input buffer)
4) Intentionally blow out gate oxides to create an anti-fuse (internal shunts on critical lines like the CLK)
5) Add some long, complex enable sequence

Unfortunately, fixes 2 through 4 can be repaired using FIB equipment, and extra wires can easily be added with manual wire bonders. So of these choices a complex JTAG enable sequence is probably the most secure, but this is merely security by obscurity…
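Option 5 above can be sketched in a few lines. The 64-bit enable value here is invented, and a real implementation would be a hardware shift register and comparator rather than Python, but it shows why this is obscurity rather than cryptography: the “key” is a single fixed constant baked into the silicon, identical in every part, with no challenge-response.

```python
# Toy sketch of a key-gated test port: it stays locked until one exact
# bit sequence has been shifted in. The 64-bit "secret" is invented for
# illustration; the weakness is that it is a fixed constant, so one
# extraction (e.g. by side-channel analysis) defeats every device.

SECRET = 0xDEADBEEF_CAFEF00D        # hypothetical factory enable sequence

class TestPortLock:
    def __init__(self, secret=SECRET, width=64):
        self.secret, self.width = secret, width
        self.shift_reg = 0
        self.unlocked = False

    def shift_bit(self, bit):
        """Shift one bit in; unlock when the register matches the secret."""
        mask = (1 << self.width) - 1
        self.shift_reg = ((self.shift_reg << 1) | (bit & 1)) & mask
        if self.shift_reg == self.secret:
            self.unlocked = True        # sticky until the next power cycle
        return self.unlocked

def shift_word(lock, word, width=64):
    """Drive a whole 64-bit word into the lock, MSB first."""
    for i in reversed(range(width)):
        lock.shift_bit((word >> i) & 1)
    return lock.unlocked
```

Shifting any wrong word leaves the port locked; shifting the constant unlocks it, which is exactly the property a repair at FIB level or a leaked constant destroys family-wide.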

Clive Robinson May 31, 2012 7:49 AM

@ RobertT,

…a complex… …enable sequence is probably the most secure, but this is merely security by obscurity…

It is, and more importantly it goes both ways…

For an attacker, there needs to be some “keying feature” to stop the hidden hardware being discovered by chance during the purchaser’s tests and use etc.

Now a lot of people have gone on about “pin connectivity” or the lack thereof, which, as you, Nick P and I have discussed before, is not a necessity; just being reasonably close with a sufficiently large and appropriately modulated EM carrier is sufficient.

As for the harm it could do: a simple reset of the CPU’s PC or accumulator would probably send an active weapon wild, which in modern smart weapons is probably all it would take to make them very un-smart.

The advantage of this of course shows when wars are being fought by proxy. Imagine how useful a little box of electronics that made a “bunker buster” thermobaric weapon miss the cave entrance would be in “asymmetric warfare” in the Afghan mountains or the Pakistani border region. Or worse, imagine an old 1,000 lb iron bomb losing its “added smarts” and landing a few tens or hundreds of feet wide in a heavily built-up area, in terms of “negative propaganda”…

The advantage would be that even if it failed to explode there would be in effect little or no evidence on the test bench, nor would a video of the drop actually show much of anything either…

The problem with the information so far presented is that it lacks sufficient detail to properly evaluate its potential.

And before anyone else raises the point: yes, I’m glad there is insufficient information, simply because it will leave the other side to some extent guessing as to what can and cannot be found by the technology.

David Marks May 31, 2012 8:18 AM

I was really interested in finding out more about the technology they used to find the backdoor and break the key. Today on the Cambridge website they have posted one of the patents detailing how the side-channel attack technology they used works. There is also a brief post about a patch that can be applied to the backdoor, if anyone is interested.

Anonymous Coward May 31, 2012 11:35 AM

A few quick comments:

1) UMC is a competitor to TSMC. Both are primarily located in Taiwan with some fabs elsewhere. I know TSMC has at least one fab located in the US.

2) Silicon is normally programmed in RTL (Verilog or VHDL). This is then used to produce gates and the connections between them; this is the point where simulation is usually done. Then the gates are used to place actual transistors at specific points on a die. Finally, masks are generated to control the manufacturing: etch here, don’t etch there, oxide goes here, etc. A single die may involve hundreds of mask layers in the silicon plus additional layers for the metal on the topside. Only the masks are sent to the fab. Not the RTL.

The upshot is that it is very hard for the fab to actually know what the die they are building does. They have the instructions for building transistors at a very low level. They would have to analyze a lot of layers to determine exactly where the transistors are located and how they are connected together. And then they have to work backwards to determine how the gates (groupings of transistors) are connected.

Of course, some IP blocks are added at the fab. For example, a standard JTAG interface may be in the fab’s library. So the masks from the designer don’t have that; they just have instructions telling the fab to merge the JTAG interface masks at a specific location. In this case the fab does have RTL for the JTAG interface.

It is possible to reverse engineer from the masks. But it is very non-trivial, and it takes time, especially when you consider that a chip contains millions of transistors.

Manufacturing at a fab takes a known time: a few weeks from masks arriving to chips out, as few as two if it’s a rush order. That doesn’t leave a lot of time for a reverse-engineering job to hack the masks.

If I were going to hack a chip I’d do it at the RTL level: get someone at the company to insert the hack there. It’s easier to implement. Maybe harder to hide, though; the RTL is usually under fairly tight control at the company. Of course, there may still be dozens or hundreds of people with access to parts of the code base.

I would worry more about the fab introducing failures. Make a layer thinner than specified: hard to detect, and it could shorten the lifetime of the part if done correctly. Which leads to a slow denial-of-service attack sometime in the future.

  • SW guy at a large silicon company
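The RTL-versus-masks point above can be illustrated with a toy example (the circuit and net names are invented): the designer’s “RTL” states intent directly, while the netlist the masks correspond to is just anonymous gates, and you have to reverse-engineer the structure to recover the intent.

```python
# The same function at "RTL" level versus as a flat gate netlist. The fab
# effectively sees only the second form (and really only mask geometry),
# so recovering designer intent means working backwards through every
# layer. Circuit and net names are invented for illustration.

def rtl(a, b, c):
    """Designer's view: a 2:1 multiplexer -- the intent is obvious."""
    return b if a else c

def nand(x, y):
    """NAND2 primitive over 0/1 ints."""
    return 1 - (x & y)

# Netlist view: the same mux lowered to NAND2 gates only.
NETLIST = [                 # (output_net, input_net_1, input_net_2)
    ("n1", "a", "a"),       # n1 = NAND(a, a) = NOT a
    ("n2", "a", "b"),       # n2 = NAND(a, b)
    ("n3", "n1", "c"),      # n3 = NAND(NOT a, c)
    ("y",  "n2", "n3"),     # y  = NAND(n2, n3) = (a AND b) OR (NOT a AND c)
]

def run_netlist(a, b, c):
    """Evaluate the flat netlist: anonymous gates, no hint of 'mux'."""
    nets = {"a": a, "b": b, "c": c}
    for out, i1, i2 in NETLIST:
        nets[out] = nand(nets[i1], nets[i2])
    return nets["y"]
```

Both views compute the same truth table, but only the first says what the circuit is for — which is why hacking at the RTL level, as suggested above, is so much easier than hacking the masks.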

Mark Currie May 31, 2012 2:12 PM

@RobertT – Back when I programmed FPGAs, SRAM-based (as opposed to fuse-based) FPGAs could also be programmed using a micro (soft config).

Ideally you want to “design out” having to trust chips. You don’t want to hard-code, hard-burn or hard-wire your sensitive stuff. Keep it soft. Then you can store it in encrypted form, and all you have to worry about is protecting a key.

Of course it’s not that straightforward, and there are many considerations when designing an anti-tamper system. It’s a whole field on its own. However, many of these techniques are discussed in venues like the Cryptographic Hardware and Embedded Systems (CHES) workshop. Ross Anderson and the guys at Cambridge often discuss this kind of thing too, and their stuff is more easily accessible.

There is also an ISO standard, ISO/IEC 19790:2006, that provides good info on this kind of thing.
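A minimal sketch of the “keep it soft” pattern described above, under stated assumptions: the device generates its own random key at initialisation and never shares it, so there is no key management beyond physically protecting the key store. A real design would use an AES core inside the tamper boundary; the SHA-256 counter-mode keystream here is a stand-in just to keep the sketch dependency-free, and a fresh key (or a per-blob nonce) would be needed if more than one blob were sealed.

```python
# Sketch: a device seals its own soft configuration with a key it
# generated itself, so the key is never shared and no external key
# management exists. SHA-256 in counter mode stands in for a real
# cipher (e.g. AES-CTR) purely to keep the sketch stdlib-only.

import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Derive n keystream bytes from key via SHA-256 over a counter."""
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

class Device:
    def __init__(self):
        # Generated on first power-up; stored only inside the tamper
        # boundary (e.g. battery-backed RAM zeroised on tamper detect).
        self._key = secrets.token_bytes(32)

    def seal(self, config: bytes) -> bytes:
        """Encrypt the soft config for external (untrusted) storage."""
        ks = keystream(self._key, len(config))
        return bytes(p ^ k for p, k in zip(config, ks))

    def load(self, sealed: bytes) -> bytes:
        """Recover the config; XOR stream cipher is its own inverse."""
        return self.seal(sealed)
```

Because the key is per-device and for the device’s exclusive use, extracting one unit’s sealed blob tells an attacker nothing about any other unit — the opposite of a family-wide backdoor constant.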

RobertT May 31, 2012 11:31 PM

@Mark Currie,

“ISO/IEC WD TR 30104 Information Technology — Security Techniques — Physical Security Attacks, Mitigation Techniques and Security Requirements”

Thanks for the links. I can certainly agree that this is desirable, but the devil is always in the detail. I’ve seen many complex key-controlled encryption schemes degrade into “the secret method” because key management is a difficult task.

BTW I also have just a little experience with hardware hacking 🙂 maybe even some techniques that Russ is yet to discover.

RobertT May 31, 2012 11:42 PM

@Clive R
“As for the harm it could do, a simple reset on the CPU PC or Acc would probably send an active weapon wild, which in modern smart weapons is probably all it would take to make them very un smart.”

I agree, the knowledge that this exploit exists is what gets people thinking about how to utilize it, and that is always the beginning of the end.

It can take the form of someone sneaking through a mod to hook up these pins (probably labeled “do not connect”). The PCB technician will use some excuse about a floating-pins error in the PCB layout package; this will usually get it authorized, especially if the belief is that it does nothing anyway. Give me a couple of inches of PCB track and I can definitely induce a voltage on it sufficient to flip a CMOS input.

The point is that these types of changes can happen well away from the oversight of any security experts.

David Marks June 1, 2012 1:02 PM

One thing I don’t get about Actel’s response is that they don’t mention anything about the fact that you can read back the IP using this test feature, which Actel’s documentation clearly states is not even physically implemented. Most odd.

Mark Currie June 1, 2012 1:46 PM

“I’ve seen many complex key controlled encryptions degrade into ‘the secret method’ because key management is a difficult task”


However, this is the one case where no key management is required. The key is not shared. The device is initialised with, or (better still) generates, its own random key and encrypts the info for its own exclusive use.

Tamara June 1, 2012 3:03 PM

After reading all of your comments I had an enjoyable time; really readable and interesting descriptions of what JTAG is and how FPGAs work.

Left me thinking this issue is really BIG.

But today it hit me that if the chips and boards are being made offshore, why is the JTAG issue so important? Couldn’t any of the chips on the board already be compromised?
I’m confused–why the emphasis on this (seemingly important) vulnerability of JTAG, when, since the whole board is in someone else’s hands they could put their own chips on the board?

Are there inspections that preclude suspicious mountings, making the JTAG exploit the more attractive option? Educate me.

As I was pondering, the question was whether this exploit of JTAG would require physical access (in which case, game already over) or not; whether restricting future access to the JTAG vulnerability is the issue; or whether this could become a type of remote attack or time-delayed exploit.

Seems like, if the JTAG vulnerability is significant, either an attacker has to get a cable to the pins, or something on the board has to access those pins, which may involve the software on the PC being in cahoots. Have I misunderstood this whole thing?

I would love to understand this better,

Tamara June 1, 2012 9:58 PM

Sh*t, last time I saw this kind of silence I was declaring a student pilot emergency landing, and everyone on air just shut up and went away when they heard a woman’s voice declaring an emergency landing.

Is there anyone here to help? 🙂
Maybe I could change my name: okay, this is bob everyone, asking for some help understanding this chip issue. for my own personal edification. Have limited logic and reasoning skills, eager to understand it all, confused by all the arguments about it.

Is there any known wifi or rf way to remotely manipulate these chips, or would this vulnerability require some kind of physical access to the vulnerable chip via the JTAG? Physical Access can theoretically be controlled. If I’m a complete moron, have at it mates, lemme have it. 🙂 really, I can take the criticism.
“Bob” aka a girl

Wael June 2, 2012 12:42 AM

@ Tamara,

I’m your Huckleberry! (Doc Holliday, Tombstone)
Yes, they could be compromised. That is one reason some countries refuse to use certain security chips manufactured by a foreign country.

Clive Robinson June 2, 2012 6:00 AM

@ Mark Currie,

However, this is the one case where no key management is required. The key is not shared.

I guess it depends on your meaning of key management, but mine covers all aspects of the key, including how it is stored in memory and how it is used by software. The reason is simple: I see no reason, nor have I for very many years, to suppose that there are any boundaries that limit an attacker’s ability to get inside a hardware system, including inside the chips.

Many years ago I was tasked with defending against such issues, and it’s a hard problem to solve (technically speaking, impossible). I came up with three methods of making attacks on key storage more difficult:

1. Don’t store the key.
2. Store “round key” or “S-array” values in rapidly rotating circular buffers (with a twist).
3. Don’t store values, but “shadows” of values.

Over the years it appears that others have subsequently worked some or all of it out for themselves, so there is little point in maintaining the “required secrecy”.

For those who are not quite sure what I mean by the three ways, I’ll explain.

The first, “Don’t store the key”, is based on the simple observation that all modern cipher systems “expand the key” through “one-way algorithms” before using it as “round keys” or as the initial state stored in a stream cipher’s state array (S-array). You only need to store the expanded key form, which has the advantage of making “key recovery” as hard a task as reversing the one-way functions, which is usually (but not always) as hard as finding the key from a ciphertext-only attack.

The second method, “using circular buffers (with a twist)”, I used to describe as like “a snake eating its tail”. In an ordinary circular buffer there is a pointer to tell you where the start and end of the data are, and the assumption is that the data follows in linear order. If you add values and use, say, a fast timer interrupt to move the values around in the buffer, then providing the pointer is not stored directly in memory, it becomes a hard task to decide what values are used where and how. Further, the “with a twist” part is where you don’t just increment the pointer (thus providing the values in linear order in memory); you use an evolving mapping process to shuffle them about in memory.

Thirdly is another one that appears odd: “don’t store values but ‘shadows’ of values”. I’ll start by explaining what a data shadow is. If you have two bytes of memory, they hold individual values of data. However, there are a number of ways you can use these two stored values to generate a third, ephemeral value. The simplest is XORing the two values to get the third; likewise you could add them mod 256, etc., and you can also use more than two stored values to generate the ephemeral value. This ephemeral value only ever appears in a CPU register; it is generated each and every time it is needed and then promptly overwritten by a new value. If you use the values from the circular buffers and also regenerate stored values during the fast timer interrupt, things become quite opaque to an attacker.
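The “shadow” technique above can be sketched in C. This is an illustrative reconstruction, not Clive’s original code; the PRNG is a toy stand-in, and in practice the refresh would run from the fast timer interrupt he describes.

```c
#include <stdint.h>
#include <stddef.h>

/* The real value never sits in memory; only two "shadow" shares are
 * stored, and the value is recreated in a register on demand. */
typedef struct { uint8_t a, b; } shadow_t;

/* Toy PRNG stand-in for a real random source. */
static uint8_t prng(void) {
    static uint8_t s = 0xA5;
    s = (uint8_t)(s * 13 + 7);
    return s;
}

void shadow_store(shadow_t *sh, uint8_t value) {
    sh->a = prng();                     /* random mask */
    sh->b = (uint8_t)(value ^ sh->a);   /* masked value */
}

/* Ephemeral reconstruction: lives only in a CPU register. */
uint8_t shadow_load(const shadow_t *sh) {
    return (uint8_t)(sh->a ^ sh->b);
}

/* Periodic re-masking, e.g. from a fast timer interrupt: the stored
 * bytes change constantly even though the underlying value does not. */
void shadow_refresh(shadow_t *sh) {
    uint8_t r = prng();
    sh->a ^= r;
    sh->b ^= r;
}
```

An attacker snapshotting memory sees only the shares, which keep changing; the XOR is recomputed on every use.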

And yes, before somebody points it out: it is all ‘security by obscurity’, but when it comes to attackers who can get on or at the chips, it’s the only thing you have left to fight them with…

@ Tamara / “Bob” aka a girl,

… last time I saw this kind of silence I was declaring a student pilot emergency landing, and everyone on air just shut up and went away…

As they say about comedy “Timing is everything”…

First off, a lot of the posters are not from US time zones, so a post at 3 PM US time is 9 PM or later for the rest of the posters, and Friday evenings are generally reserved for getting the “muscle relaxant” down the throat in convivial surroundings and company, not blogging. Secondly, it’s been very noticeable that in the US not many people post during “working hours” any longer (job fear?).

But any how back to your original question,

But today it hit me that if the chips and boards are being made offshore, why is the JTAG issue so important?

The problem is not just restricted to the JTAG system or pins; it’s really three questions:

1. Is there a backdoor?
2. How is it accessed?
3. How do we find it?

Which I’ll cover along with your other questions.

But first you ask,

Couldn’t any of the chips on the board already be compromised?

The simple answer is yes, and that’s been an assumption made by some (myself included) for some time now.

The number of FAB plants around the world doing the higher density devices is very small (Robert T has listed them in the past) and most of them are in the Far East and conceivably under Chinese State Agency access either directly or indirectly… So it’s safe to assume that some if not many are indeed “backdoored”.

With regards,

I’m confused–why the emphasis on this (seemingly important) vulnerability of JTAG,

The simple answer is that a test point is a spying point. JTAG is used for testing the chip at various stages, and as such has access to places you would not normally expect to be exposed to the outside world (like CPU registers etc.). Making a secure system testable in this way is a bit like pouring a reinforced vault but leaving hundreds of holes in the walls to put windows in so you can “look in to see things are secure”…

I have a rule of thumb that put simply is,


The reason I say this is that every time you run a “trace” into a circuit, it provides an overt/known path for a wanted signal, but also a covert/unknown path for other signals due to “cross talk” etc. Also, a long trace is effectively an antenna, and like all “transducers” it’s two-way: it both emits signals and is susceptible to signals, and it can be used for both passive and active EmSec or TEMPEST attacks. Usually on-chip traces are so short that their effective frequency response is in the THz range.

Disconnecting this trace at “some point” does not remove it (except in the mind of an unknowing engineer); it physically remains and thus can be used by an attacker in some way… You might not know how, but it’s best to assume it can unless you are sufficiently experienced to say otherwise (and there are darned few people with anything close to sufficient experience).

Which brings us onto,

since the whole board is in someone else’s hands they could put their own chips on the board

Yes, but they won’t have the all-important program loaded until the final board has been thoroughly tested and delivered to the customer, and in all probability further tested there. This specific attack is all about getting the (encrypted) code out of a device (in plaintext) long after it’s been programmed (see comments up the page about unexploded weapons lying around on the battlefield etc.).

As I was pondering, it was the question of whether this exploit of JTAG would require physical access, in which case, game already over, or NOT

Yes and no. In the general case a backdoor only needs sufficient energy to “flip a bit”, which, if the bit is already biased in some way, is very, very little; thus it could be done with an RF carrier tuned to the resonant frequency (or a multiple thereof) of any track or circuit connected to the desired pin, modulated with the desired signal.

This particular attack would be to get hold of the software within the device in plaintext form, which would require very close, almost direct-contact distances.

As I discussed with RobertT, the sort of attack that would make a smart weapon very unsmart just needs an aerial and a tuned “pre-selector” circuit feeding a baseband matched filter, the output of which does the brief flipping of a reset line etc.
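A rough back-of-envelope sketch of why even a short track is within reach of an RF attack: the quarter-wave resonant frequency of a stub of length L is f = v/(4L), where v is the propagation speed on the board. The 0.5 velocity factor for a microstrip on FR-4 below is an assumption for illustration, not a measured value.

```c
#include <math.h>

#define C_LIGHT 3.0e8   /* m/s, free-space speed of light (approx.) */
#define VF      0.5     /* assumed velocity factor for FR-4 microstrip */

/* Quarter-wave resonant frequency (Hz) of a PCB track of length L (m). */
double quarter_wave_hz(double track_len_m) {
    return (C_LIGHT * VF) / (4.0 * track_len_m);
}
```

By this estimate a 25 mm stub resonates around 1.5 GHz, well inside the range of cheap signal sources, which is the point about induced bit-flips on “unconnected” test pins.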

Wael June 2, 2012 1:38 PM

Continuing with Clive Robinson’s input …

The threat we are talking about here is probably a class III adversary (a major government with virtually unlimited resources, or a major research institute). As such, JTAG disabling is a small hurdle to such an adversary. For commercial products, JTAG functionality is normally disabled after testing and before shipping: not by removing the traces, but by blowing fuses on the chip; OTP (one-time programmable, not one-time pad).

However, in some products that require more invasive debugging tools after deployment, there are ways to override the fuse blowing and re-enable JTAG by, say, changing an NV item (a non-volatile memory location). These NV item locations and their purposes are often hidden even from the OEM. So if someone finds such an item, they may claim it is used for a back door. A back door it is; the purpose may or may not be malicious. Back doors are not always a good idea, and they are susceptible to inside attackers (class II). If this back door is hidden from the designer (the customer) by the manufacturer, then the intention should be suspected.

The customer (the government, in this case) bears the responsibility for verifying and validating the chip at all stages, with auditing. Chips are becoming very complex, and the test and verification tools are keeping in sync, one would hope.
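A hypothetical sketch (all names, fields, and values invented) of how such a fuse-plus-NV-override lockout might be wired up in firmware. The point is that an undocumented override like this is functionally indistinguishable from a back door:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool     jtag_fuse_blown;   /* OTP fuse set at end of production test */
    uint32_t nv_debug_word;     /* hidden non-volatile override location */
} chip_state_t;

#define NV_DEBUG_MAGIC 0xB00710ADu  /* invented override value */

bool jtag_enabled(const chip_state_t *c) {
    if (!c->jtag_fuse_blown)
        return true;            /* chip was never locked out */
    /* Hidden re-enable path: if this NV item is undocumented, whoever
     * finds it may reasonably call it a back door, malicious or not. */
    return c->nv_debug_word == NV_DEBUG_MAGIC;
}
```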

Tamara June 2, 2012 10:06 PM

@Clive Thanks! I’m currently digesting this, and while I am processing this info, one question:
If there is a worry about RF or any other connectivity to the JTAG pins, couldn’t it be thwarted by rendering them useless? Or are they needed later? Until I fully digest your excellent response, I’m still trying to figure out why this issue is so critical, and I’m only interested because it might be so critical. So if a board is manufactured in a competing nation, and it controls important stuff here, and it’s well protected from physical access and shielded from RF or other local access…this might not be a major problem? Believe me, I’m not minimizing the issue, just trying to box it in and figure out where the boundaries are. Back to reading your excellent discussion of the issue, thank you.

Mark Currie June 3, 2012 12:22 AM

@Clive Robinson,

I like your idea, and as you suggest, it can be made even more complicated, thus drawing out the reverse-engineering process as long as possible in the final stand. But I would hope that this would be the last gasp. I believe that you have more allies than you give credit to. I think that you are writing off hardware defences too easily.

We agree that all we can really hope to do is delay information discovery. Information propagates and prevails in all sorts of subtle ways, thwarting our feeble attempts to hide, suppress or even destroy it. While we may not be able to defeat nature, defending against other humans is always possible, even if it entails the endless tit-for-tat cycle of measure/counter-measure.

When it comes down to the final stand, as I am sure you know, there are other defences than pure obfuscation. If the attacker only has one or two samples to play with then the risk of triggering an intrusion detect-and-respond defence can slow the process down quite a bit.

As an example of an intrusion detect-and-respond system in the public/commercial domain you can look at a chip like the Dallas/Maxim DS3644. Now I am not sure how effective this chip is against FIB and laser, but there are other ways of defending against the instruments in this game.

This is a game whose secrets are important (as RobertT will agree), so although much has been revealed in recent years, the state of the art will never be published.

Joel Norvell June 3, 2012 1:54 AM

It’s too bad that Amr Mohsen is behind bars. He could have stopped this in its tracks. It seems “criminal” to me not to consider the cost to society of prosecuting someone like Amr – who apparently went a bit off the rails but could do so much to benefit society.

Clive Robinson June 3, 2012 3:10 PM

@ Mark Currie,

I believe that you have more allies than you give credit to. I think that you are writing off hardware defences too easily

Yes and no; the problem is that what you design today will still be put in products X years down the road, when what is currently “bleeding edge” is dropping off the bottom of the “obsolete product” list.

I used to work for a company that designed embedded systems for the Oil & Gas industry; some code I developed (in assembler) for the then leading-edge 8086 is still being used… Guess what, it’s not even in the original assembler form. I (apparently) wrote it so well and commented it so carefully that they paid a consultant to re-develop it in C so that they could more easily port it onto other systems… If I’d known, I’d probably have done it for them for just a few quid and the “vanity/ego”. The real joke of it is that I actually prototyped the code originally in C and then re-worked it in assembler (I’ve still got it as a printout in my “old jobs” morgue).

It is because of this “rapid change” in hardware (some components only have a 12-month “design-in” life these days) that you have to code very, very defensively.

For instance, back in the 1980s AMD made memory devices that had a “self destruct” feature, but unless you’ve got an original product guide you would not know about it; even AMD themselves don’t admit to having (available) records and data sheets…

So whilst today I might as you say “have more allies than you give credit to” they might be “fair weather friends” who are not to be found tomorrow 😉

However of seemingly more interest is,

This is a game whose secrets are important (as RobertT will agree), so although much has been revealed in recent years, the state of the art will never be published

Actually it’s a little less glamorous than that sounds 🙁 Most of the “classes” of defence measure are well known and have been published one way or another for many years; what is kept “secret” is the “specifics” of each instance, so it falls truly into “security by obscurity”. But damning as that may sound, it’s actually not.

In the real, tangible, physical world, security by obscurity works much of the time because of fundamental rules of the physical universe (such as only being in one place, having only limited resources of “scale”, energy being proportional to work, etc.). Where it does not work is where those fundamental rules do not apply, and that is in the intangible information world.

Thus the real trick is working out how to turn intangible, nearly infinitely reproducible, virtually zero-cost information into a unique, virtually non-reproducible (even at huge cost) physical object, as and where required. And this is where crypto, amongst other tools, provides little pieces of the solution.

For instance, a little whimsy: take a nearly perfect, but not quite perfect, crystal; its faults are as near unique as you could reasonably hope for. If you could find some way of using these faults to encode the information, without the crystal’s faults being capable of being turned into easily reproducible information, you would have found a way to use the crystal as a “unique, uncopyable key” to the information.

Oddly, as fantastic as this sounds, we have reason to believe it is possible with quantum effects. It has been pointed out that you can do something similar with an approach similar to holograms, where the reference mirror involves the crystal and thus has its unique flaws; thus it becomes possible to encode an image of the information with only the crystal being able to accurately reproduce the image back. If the image could be encoded in some way such that if only one part of it was incorrect then none of th

MarkH June 4, 2012 1:09 AM


In my opinion, most of the comments on this thread reflect a “maximalist” attitude: that any crack in the concrete is equivalent to the dam breaking and all the cities downstream being washed away.

Actually, in information security, such a mindset is highly desirable when it comes to design and analysis. On the other hand, it is an EXTREMELY POOR basis for judging the relative magnitude of various vulnerabilities.

As an (admittedly unfair) example, I have repeatedly tried to explain to people that theoretically realizable related-key attacks on the AES cipher DON’T mean that there exists even one real-world application, in which it is possible to recover an AES key from looking at or manipulating the ciphertexts (or ciphertexts and plaintexts together). In other words, a deviation from perfection is not the same thing as a break.

Now, I’m no expert on these matters, but here’s my “arithmetic” on exploiting the JTAG capabilities.

[1] If the JTAG pins are not connected, it is most unlikely (notwithstanding Clive’s discussion of radio beams) that any practical attack could be made.

[2] If the JTAG pins are connected, this would most often be to some on-board microcontroller. An adversary who can seize control of the micro could then extract information from the FPGA and/or reprogram it.

[2a] Now, in most cases, if an adversary can do whatever he wishes with the on-board micro, the security situation is very poor. So the “extra bonus” of FPGA manipulation via JTAG only has a real security impact, when the FPGA performs a critical security function, for example encryption or decryption.
For example, if a secret key is “burned” onto the FPGA, it could be extracted via JTAG. (Note: often, “hard coding” a key in this way is a sign that the security design already has gone wrong.)
In a trickier attack, the FPGA might be reprogrammed to do its job with subtle errors that could leak information outside the system. This attack generally requires that the adversary know quite a lot about the details of board’s design and operation.

[2b] Please note that in the “missile” scenario, if the adversary can do whatever he wants with the on-board micro, he can defeat the missile whether there’s a JTAG vulnerability, or not. If the adversary can beam enough RF power at the missile to twiddle bits on its internal electronics package — whilst in mid-flight! — then he can probably defeat the missile without worrying about FPGA internals, and so on.

[3] Note that in the foregoing, the adversary must gain control of the on-board micro (and the micro must be connected to the JTAG pins). This means the attacker must control the programming of the micro, or control some data link that talks to the micro and have found a vulnerability.
Further, in most systems, that micro will not be connected to the outside world. It will be on some data bus with one or more other computers as intermediaries, so the adversary must hack those as well.

None of this is to say that exploitation is impossible, or even infeasible. But whether this vulnerability matters to an actual system, and if so to what extent, will depend very strongly on the system’s design and modes of operation.

MarkH June 4, 2012 1:20 AM

To those commenters, who are dismayed that the FPGA programming can (reportedly) be read out, even though the manufacturer says it can’t:

In my professional work, I have evaluated programmable devices that are supposed to “protect” the information programmed into them.


In certain countries, bypassing these “protections” is so routine that any ordinary engineer knows it is possible, and where to find the techniques for doing so.

Unless something is sold by a reputable manufacturer as a “tamper resistant module,” it is a mistake to base any security case on an adversary being unable to extract the programmed data.

This isn’t a reason to give up on designing for security — but the security analysis must either include protecting the chips from access by an adversary, or else offer protection even when the adversary can extract the programming.

PS Even the well-designed tamper resistant modules have been successfully attacked (see Ross Anderson). Practical security analysis must be based on some assumptions as to the extent of resources an attacker will bring to bear.

ENKI-2 June 4, 2012 9:53 AM

Had the chips been manufactured in the United States, the same vulnerabilities would exist, but we wouldn’t be reading about it.

Clive Robinson June 4, 2012 2:14 PM

@ Mark Currie,

Hmm, sorry about the end of my previous post; this smart phone occasionally does some odd things (such as hanging the keyboard code up…).

As I was saying,

If the image could be encoded in some way such that if only one part of it was incorrect then none of the image could be recovered, then you would be close to achieving the idea.

@ Tamara, Mark H,

In my opinion, most of the comments on this thread reflect a “maximalist” attitude, that any crack in the concrete is equivalent to the dam breaking and all the cities downstream being washed away

Let me explain why the little crack is the equivalent of the biblical flood, or could be in some cases, such that it has to be defended against.

If you are the likes of the NSA, GCHQ et al, you have two conflicting interests,

1, Protect your own nations communications.
2, Break the communications of every other nation (friend or foe).

One thing you know for certain is that other countries will try to break your communications with all possible prejudice (yup: theft, burglary, paying/blackmailing people to become traitors, and abduction, torture and murder). This is not a “movie plot scenario”; this is real life, and it happens all too frequently one way or another (look up the “Walker family” for one).

Another thing you can assume with certainty is that you are going to lose equipment on the battlefield. It’s unavoidable, and you know that the enemy will either copy it as-is or try to analyse it.

Back during the Second World War all cipher machines were essentially mechanical in nature, and it was thus impossible to hide the functioning of the machine once it was lost.

Thus even though you might want to make the cipher machine as cryptographically strong as is reasonably possible to “protect your nation”, you know darned well the enemy will simply copy it and thus put their communications beyond reach, destroying your other mission of “reading the enemy’s communications”…

A seemingly impossible problem arises, or at least appears to… However, analysis of US Army and Navy cipher systems shows that in the old mechanical systems “not all keys had the same strength”. That is, some keys were very strong or strong, and some weak or very weak. Unless you had a very good understanding of “why”, all keys appeared to be equally strong.

Thus an enemy just copying the machine, without understanding how it worked, would send a proportion of their traffic under the very weak or weak keys, thus enabling the US to “read their communications” (or at least some of them, and then use other techniques to shorten the break time on other messages). If however they had the smarts to know which were the strong and weak keys, the chances were their own systems were at least as strong.
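A modern, published example of “not all keys have the same strength” is DES: four of its keys are weak (encrypting twice with one returns the plaintext), so implementations routinely screen for them before use. A minimal pre-use check looks like this:

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

/* The four published DES weak keys (with parity bits set). */
static const uint8_t des_weak_keys[4][8] = {
    {0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01},
    {0xFE,0xFE,0xFE,0xFE,0xFE,0xFE,0xFE,0xFE},
    {0xE0,0xE0,0xE0,0xE0,0xF1,0xF1,0xF1,0xF1},
    {0x1F,0x1F,0x1F,0x1F,0x0E,0x0E,0x0E,0x0E},
};

/* Returns true if the key must be rejected before use. */
bool des_key_is_weak(const uint8_t key[8]) {
    for (int i = 0; i < 4; i++)
        if (memcmp(key, des_weak_keys[i], 8) == 0)
            return true;
    return false;
}
```

The parallel with the mechanical machines is direct: an operator who doesn’t know which keys are weak will happily use them.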

In fact it has been said that GCHQ never approved a cipher for use unless it knew how to break it or knew it could not be broken. Thus they knew how long a message would be secure under that cipher.

However, there is a third problem, known in some circles as the “tactical general’s problem”. Field ciphers generally only have to be secure for very short periods of time, as tactical signals are generally very short-lived. Staff ciphers generally have to be secure for much longer periods of time, as battle plans may be in the planning for months (think D-Day landings). Now what do you do when you have a general who, for some reason, only has a tactical cipher available when he needs to send staff traffic (this happens quite frequently in practice)?

The solution is to reserve some of the very strong keys on tactical ciphers for staff use, or what is called “Officer only” communications. The language in such communications is likely to be very different from tactical communications, such that any field/tactical traffic broken by the enemy will be of little or no use in breaking staff/“officer only” communications.

Now, in order for all this to work, you need to have one authority for issuing keys, and for many years this was one of the NSA’s (and GCHQ et al.’s) primary functions.

However, as electronics has progressed, the ability to copy ciphers has become a lot more difficult, but it is still possible for many nations. And we can see from the NSA’s Skipjack algorithm, used in the Capstone chips for the Clipper and Fortezza systems, that they are still playing the game, just slightly differently. Skipjack is a very brittle cipher; that is, it’s like a porcelain rod, very strong in compression, but load it in any other way and it is brittle and will break. The Skipjack algorithm was declassified some years ago and has been analysed by quite a few people; they have found that even very small, apparently quite insignificant changes significantly weaken the cipher. However, what was not released was the information regarding the “tamper proofing” of the actual chips themselves.

Now the key escrow etc. that Clipper and Skipjack were supposedly developed for are long gone, and AES has come along. But it is interesting to note that the ideas behind Skipjack were probably at least forty years old when key escrow was first suggested, and, interestingly, just as DES only just met the 56-bit key length, Skipjack only just met the 80-bit key length; AES, on the other hand, appears to comfortably meet 128-bit and even larger key sizes…

Has the NSA given up its games? Probably not; they just (according to some) rigged the contest using “human nature”. It was known before the ink was dry on the signature of the NIST announcement of the AES winner that there were problems with practical implementations of it. Specifically, when used in “online” systems it leaked information badly via various timing attacks on modern CPUs. The “candidate code” had all been optimised for efficiency and speed, and the result was that this opened up numerous side channels. Because the candidate code for various CPUs etc. was available, it was just dropped into products “as is”, along with all the side channels.
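The class of leak involved here, data-dependent work, is easy to demonstrate. The toy compare below (not AES itself, and far simpler than a cache-timing attack on table lookups) counts operations as a deterministic proxy for elapsed time: the early-exit version does an amount of work that depends on the secret, while the constant-time version always touches every byte.

```c
#include <stddef.h>

/* Early-exit compare: the step count (a proxy for time) leaks how many
 * leading bytes of the guess match the secret. */
size_t leaky_compare(const char *secret, const char *guess, size_t n) {
    size_t steps = 0;
    for (size_t i = 0; i < n; i++) {
        steps++;
        if (secret[i] != guess[i])
            break;                  /* data-dependent early exit */
    }
    return steps;
}

/* Constant-time compare: accumulates differences, never exits early,
 * so the step count is always n regardless of the data. */
size_t ct_compare(const char *secret, const char *guess, size_t n) {
    size_t steps = 0;
    volatile unsigned char diff = 0;
    for (size_t i = 0; i < n; i++) {
        steps++;
        diff |= (unsigned char)(secret[i] ^ guess[i]);
    }
    (void)diff;
    return steps;
}
```

The optimised AES candidate code leaked through the same general mechanism: execution behaviour (cache and branch timing) that varied with secret data.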

I, for one, always advise people to use AES only in “offline mode”: that is, encrypt your data on a separate “air-gapped” machine and transfer only the encrypted files, via checked media, to your “online” system for onward transmission. My reasons are many, and only one of them has to do with poor AES implementations.

It is interesting to note (and it adds fuel to the conspiratorial fire) that the NSA, although approving AES for Secret and above, only does so for “data at rest” in products like their Inline Media Encryptor (IME), which is used inline with storage equipment such as hard drives. That is, it is not considered secure “in use” (and I personally can think of a whole load of good reasons other than time-based side channels for this).

However, when designing systems, because of the “tactical general’s problem” it is best to assume that “any crack is a dam buster” and design accordingly…

Clive Robinson June 6, 2012 12:56 AM

@ Tamara,

You might find this an interesting read,

It’s a paper from Bochum University in Germany; it covers embedded wireless devices, attacks by type I & II personnel, and the equipment they might find of use.

The paper is a little dated, but when reading it keep in mind just how much easier some of the attacks (on the likes of KeeLoq fobs etc.) will be with new analysis vectors showing the same level of improvement that PEA does over preceding methods…

As Bruce once (now famously 😉) remarked, “attacks only improve with time”.

Oh, pay special attention to the likes of devices powered from the EM field. Back when the e-passports were first seriously touted, I commented that all these chips work in slightly different ways, and thus their effect on the EM field would be different and measurable; thus it would be possible to identify the country of origin of some passports “sight unseen” just by examining the trace. This still holds true; however, as you will see in the paper, another attack, on the crypto setup-phase timing trace, revealed the actual user details…

Much of the equipment in use can either be purchased (some of it very cheaply second-hand now that we are in a “double-dip recession”), rented for short periods, or, if you are handy with a soldering iron, built with off-the-shelf components. These can usually be sourced either on the Internet from the manufacturer or via the “component catalogue” outlets used by small and medium-sized businesses and schools/universities in most parts of the world (have a look at DigiKey, RS Components, Farnell etc. as examples).

Mark Currie June 6, 2012 4:16 PM

@ Clive Robinson,

Sorry for the delayed response. I hope I haven’t lost everyone. Thanks for the ref.

“the problem is that what you design today will still be put in products X years down the road, when what is currently ‘bleeding edge’ is dropping off the bottom of the ‘obsolete product’ list.”

Yes, this is a very valid issue when it comes to protecting long-term secrets. R&D in this field is very expensive and life-cycles cannot always be controlled.

In the tactical scenario the protection mechanisms need only provide protection for the duration of the mission. So sensitive info could be automatically destroyed by the device itself based on certain cues e.g. timeout, lack of keep-alive, power failure, kill signal, etc….
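That detect-and-respond idea can be sketched as a keep-alive watchdog that zeroises the key when the cue fires. The tick counts, timeout, and API here are invented for illustration; a real module would combine several cues (power failure, tamper switch, kill signal) and wipe more than one location.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define KEY_LEN           16
#define KEEPALIVE_TIMEOUT 5   /* ticks without a keep-alive before wipe */

typedef struct {
    uint8_t  key[KEY_LEN];
    uint32_t ticks_since_keepalive;
    bool     zeroised;
} tamper_ctx_t;

/* Called whenever the mission system checks in. */
void keepalive(tamper_ctx_t *c) {
    c->ticks_since_keepalive = 0;
}

/* Called from a periodic timer; destroys the key once the cue fires. */
void tamper_tick(tamper_ctx_t *c) {
    if (c->zeroised)
        return;
    if (++c->ticks_since_keepalive > KEEPALIVE_TIMEOUT) {
        memset(c->key, 0, KEY_LEN);   /* destroy sensitive info */
        c->zeroised = true;
    }
}
```

Once zeroised, the device carries nothing worth recovering, which is exactly the mission-duration protection being described.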

@ MarkH

“Even the well-designed tamper resistant modules have been successfully attacked (see Ross Anderson).”

It’s probably safe to say that all modules can eventually be successfully attacked. The skill lies in making sure the module can hold out long enough to make a difference.

You may have heard of Christopher Tarnovsky. Tarnovsky is a formidable chip hacker, very experienced at reverse engineering chips. He has access to decent instruments and a pretty well-equipped lab of his own (including a FIB). While some of the “secure” chips he has examined are almost trivial to crack, a well-designed chip can take him nine months and many samples. Perhaps a well-funded organisation with better labs could do better, but Tarnovsky’s accounts show the kind of effort required when no prior knowledge exists. Design secrecy (itself a difficult task) is therefore imperative. Bear in mind that the security mechanisms here rest solely on the chip’s own capabilities, which is a severe disadvantage: a bare chip relies on external power and lacks the luxuries of a full crypto module, which has more space for things like decent filtering and other protection mechanisms.

I suppose I could be accused of being overly optimistic towards the designers, who have all the odds stacked against them. However, I have great faith in human innovation. What I like about this blog is that many commentators balance their negative warnings with a positive idea. Crypto designers (no matter how experienced) need all the ideas and encouragement they can get.

Nathan Fain June 8, 2012 5:38 AM

In the first paper they claim there was a backdoor without providing any proof of the claim. The vendor responds saying that this is a feature that can be turned off. The researchers give nothing that refutes this, and it is likely that the manufacturer’s clients can easily verify it. The researchers then release a new paper with moderated backdoor claims that contradict themselves:

Ultimately, an attacker can extract the intellectual property (IP) from the device as well as make a number of changes to the firmware such as inserting new Trojans into its configuration.

A vulnerability that allows one to insert a trojan is not the same as a device or system ‘with’ a trojan. It’s not snake oil, but the language, and the continued insistence on this language, is certainly FUD. Only examination of common configuration practice (likely through access to documentation) could re-assert the claim that the features found are backdoors. If you need a fuller rundown and dissection of the papers, err

David Marks June 8, 2012 8:07 AM

@ Nathan Fain

There is no new paper, only the old draft; the final paper, as they say, is for CHES. So what other paper are you talking about? On their page there is a reference to another paper, regarding AES, that was submitted to the IACR. I think you are perhaps confusing them.

What you have mentioned about a trojan has nothing really to do with the paper; it’s just there to show that you can extract the IP and reprogram the chip once you can easily break the AES key and PASSKEY. I don’t see anything wrong in that.

Actel doesn’t say anything about the fact that the researchers can read back all the IP from the device, when Actel’s own literature claims this is physically impossible. They don’t even deny it. This speaks volumes. So it seems there is another, undocumented key that allows readback of the device, existing alongside a host of other functionality. I would suggest this looks pretty dodgy.

I can’t see how an undocumented feature that allows you to circumvent user security and read back all the IP that’s supposed to be impossible to read is merely some test feature or failure-analysis hook.

R. MacRae June 16, 2012 4:53 AM

In the old days we used to have fun looking for the undocumented CPU instructions in the 8086/8088 and 6502 series. I don’t see why things would have changed. Find the key(s) to get to the undocumented instructions via software and you’re there. All the old CPUs had them, along with the support chips; the keys are just more difficult to crack now. Being creatures of habit, I would say the same method is implemented in today’s chips to access a lower, undocumented register level. KISS method.
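The hunting method described above amounts to enumerating the whole opcode space and diffing it against the documented table, then poking at the leftovers on real silicon. A toy sketch of the enumeration step, assuming a tiny made-up documented table (not a real 6502 or 8086 opcode map):

```python
# Sketch of undocumented-opcode hunting by exhaustive enumeration.
# On real hardware each candidate would be executed in a sandbox while
# watching for illegal-instruction traps or unexpected state changes;
# here we only compute the candidate list.

DOCUMENTED = {0x00, 0x01, 0x05, 0x06, 0x08, 0x09, 0x0A, 0x0D, 0x0E}  # toy fragment

def undocumented_opcodes(documented, opcode_bits=4):
    """All values in the opcode space that are absent from the documented table."""
    return sorted(set(range(2 ** opcode_bits)) - documented)

candidates = undocumented_opcodes(DOCUMENTED)
print([hex(op) for op in candidates])
# → ['0x2', '0x3', '0x4', '0x7', '0xb', '0xc', '0xf']
# Each candidate would then be run on silicon to see whether it traps,
# aliases a documented instruction, or does something genuinely new.
```

The famous 6502 “illegal opcodes” were catalogued exactly this way; the difference today, as the comment notes, is that the interesting functionality tends to sit behind an unlock key rather than a bare byte value.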

John Langman June 17, 2012 5:56 AM

The analysis and reverse engineering of chips, whilst sophisticated, is mainly the application of standard processes. Reading ROM contents with the use of a FIB is well documented. The extraction of circuitry with an electron microscope is available worldwide. Re-engineering a device to insert new features is highly expensive, especially if the device is fabbed in a sub-50 nm process: the result would require access to superb facilities and the re-manufacture of 10 to 20 mask sets at upwards of $1M each, then make, test and prove. Such an endeavour can only reasonably be undertaken, non-commercially, by states. However, test routines (including JTAG) that must be available to the test machine but not to the user have been extant within most complex microelectronics since the industry's inception. If security is so important, then devices can be built with scribe-and-crack removal post test; this feature has been utilised in smart-card systems for 30-plus years. Leaving in test systems with vulnerabilities is symptomatic of a lack of appreciation of the nefarious uses they may be put to post delivery.
