Adding Backdoors at the Chip Level

Interesting research into undetectably adding backdoors into computer chips during manufacture: "Stealthy dopant-level hardware Trojans: extended version."

Abstract: In recent years, hardware Trojans have drawn the attention of governments and industry as well as the scientific community. One of the main concerns is that integrated circuits, e.g., for military or critical-infrastructure applications, could be maliciously manipulated during the manufacturing process, which often takes place abroad. However, since there have been no reported hardware Trojans in practice yet, little is known about how such a Trojan would look like and how difficult it would be in practice to implement one. In this paper we propose an extremely stealthy approach for implementing hardware Trojans below the gate level, and we evaluate their impact on the security of the target device. Instead of adding additional circuitry to the target design, we insert our hardware Trojans by changing the dopant polarity of existing transistors. Since the modified circuit appears legitimate on all wiring layers (including all metal and polysilicon), our family of Trojans is resistant to most detection techniques, including fine-grain optical inspection and checking against “golden chips”. We demonstrate the effectiveness of our approach by inserting Trojans into two designs—a digital post-processing derived from Intel’s cryptographically secure RNG design used in the Ivy Bridge processors and a side-channel resistant SBox implementation—and by exploring their detectability and their effects on security.

The moral is that this kind of technique is very difficult to detect.

EDITED TO ADD (4/13): Apologies. I didn’t realize that this paper was from 2014.

Posted on March 26, 2018 at 9:26 AM

Comments

Kurt Seifried March 26, 2018 9:46 AM

As systems get more complex and detailed, proving the negative of "no backdoors" becomes increasingly expensive (it's already basically impossible; now it's just becoming more impossible).

David Rudling March 26, 2018 9:58 AM

Consider two theoretical manufacturers of “doped” chips.
One might be a manufacturer controlled by, say, China. During manufacture a backdoor might be introduced to allow spying by Chinese government agencies whose principal targets might be presumed to be foreign governments and commercial organisations and their own population.
The other might be a manufacturer controlled by, say, the USA. During manufacture a backdoor might be introduced to allow spying by US government agencies whose principal targets might be presumed to be foreign governments and commercial organisations and their own population (including those of close allies e.g. the UK).
Now, since I don't work for the NSA or similar but am just a member of the population, whose chip should I prefer to see in the computer of my choice?
But wait. Suppose the backdoor is introduced by agents of the other lot subverting the manufacturing of the controlling lot. Now which should I prefer?
Or suppose backdoors are introduced by both lots without detection by the other.
Pass me my quill pen and parchment please.

MGD March 26, 2018 10:25 AM

If CPU chips could be compromised to introduce hardware-level backdoors (as some politicians desire and law enforcement organizations have suggested), that would affect all users … ordinary, law-abiding citizens; criminals; manufacturers; military manufacturers; government; even the security arms of the government …

In other words, “Be careful what you wish for … Lest your wish be granted”

–MGD

Some Guy March 26, 2018 12:10 PM

@Bill You may want to take a tip from the ancient Babylonians and try using a clay tablet with a reed stylus. It’s much easier and faster to write on, and can still be made less alterable with nothing more than a nice hot kiln.

Billbo March 26, 2018 1:00 PM

Did anyone notice that this paper was published in 2014? If the appropriate agencies moved quickly enough, every CPU chip currently being manufactured could be compromised.

dj March 26, 2018 1:06 PM

Interesting paper and concept. But it seems incomplete and much too late. These techniques were already conceived of and well known in the industry at least as early as the 1970s, and they are one major reason why the DoD banned most foreign-made ICs. I suspect that the authors can only be credited with producing the first academic paper on this old subject. It's interesting that quite a few academic papers are coming out on matters that were already well known before this century, as if they were novel or newly discovered. (Re-inventing the wheel much?) 🙂

This is something that cannot be done by one or even a few conspirators in a wafer fab. It requires changing the workflow, which would be instantly detected. Testing during manufacturing would also catch anything that alters expected parameters. It requires a lot of resources and a lot of people to pull off, and it is unlikely, though possible, that it could be done so perfectly that a proper source audit couldn't detect it.

D-503 March 26, 2018 3:15 PM

“However, since there have been no reported hardware Trojans in practice yet”
I thought Intel Management Engine was introduced in 2008. Sorry for the snark… but it seems to me, as a layperson, that CPUs have been “feature-rich” ever since various scaling “laws” for CPUs started slowing down.

echo March 26, 2018 4:14 PM

This is pretty clever.

@dj

Which is easier? Hijacking an existing fab or stealing the IP? What if you build your own fab and interrupt the supply chain?

How many of a particular chip would need compromising? What if you only need 1 in a 1000 or 1 in 10,000 to be compromised? What are the chances of a test of random samples detecting them?
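For a sense of those odds, here is a quick sketch (the compromise rates are the hypothetical ones above, and it optimistically assumes that inspecting a compromised chip detects it with certainty):

```python
# Probability that a random sample catches at least one compromised chip,
# assuming each chip is independently compromised with probability p and
# that inspecting a compromised chip always detects it (optimistic, given
# how stealthy dopant-level changes are).

def p_detect(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for p in (1 / 1_000, 1 / 10_000):
    for n in (100, 1_000, 10_000):
        print(f"rate 1 in {round(1 / p):>6}, sample {n:>6}: P(detect) = {p_detect(p, n):7.2%}")
```

At 1 in 10,000 compromised, even a 1,000-chip sample contains a bad part only about 10% of the time, and that is before asking whether the inspection could spot a dopant-level change at all.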

D-503 March 26, 2018 4:47 PM

@dj
Given the richness of the prize, wouldn’t various organisations around the world devote “a lot of resources and a lot of people” to pulling it off?
Also, I got the impression – from previous discussions on this blog – that it's a fundamentally hard problem to do anything more than superficial testing and auditing of a complex system. You would have to know in advance exactly where to look.
Aside from the computability issues, isn't there a lot of outsourcing of the design and engineering work?
There is also the problem that, as I mentioned in my previous comment, some useful features look as if they're "dual use".

jamez March 26, 2018 7:54 PM

what exactly can one do with a backdoored chip? eat cpu cycles maybe, or cause purposeful miscalculations?
malware can compromise data by phoning home with it, but a cpu would need cooperation from other system components to really be a trojan, wouldn't it?

tyr March 26, 2018 9:37 PM

@D-503

Many years ago I had someone at Intel tell me that they couldn't exhaustively test an 8080 CPU.

A few years later a bad batch of 8212s made it from them into the supply chain, got installed by manufacturers into equipment, and was sold to end users. Once the problem was identified the hard way, all of that batch was replaced and the parts were tossed.

If that could happen then, the newer, more complex chips of today would pass right through into the workplace with a backdoor in them, waiting to bite once triggered.

Given the reach of nation-state actors, it isn't hard to imagine that it has already happened.

Anyone who tried to stop something like that would wind up pushing up daisies, and the rest would cash the government's checks.

The revelations of the past decades show that no one has been paranoid enough by far.

James Sutherland March 27, 2018 3:51 AM

@Billbo: “Did anyone notice that this paper was published in 2014? If the appropriate agencies moved quickly enough, every CPU chip currently being manufactured could be compromised.”

I'm sure they are way ahead of that already – why do you think the NSA bought a CPU factory from Sony a few years ago? (Two obvious explanations, both involving backdoors: one, to keep other people's backdoors out; two, to put their own in.)

@jamez: "what exactly can one do with a backdoored chip? eat cpu cycles maybe, or cause purposeful miscalculations?
malware can compromise data by phoning home with it, but a cpu would need cooperation from other system components to really be a trojan, wouldn't it?"

No. In the same year as this paper, I published one with a (simulated – I don’t have the funds for an actual chip fab!) CPU instruction level backdoor, and a proof of concept which captured and exfiltrated your AES encryption keys via an innocuous-looking webpage. (The page had a line of Javascript which performed a floating point division; the backdoor, inspired by Intel’s Pentium fiasco, simply returned encryption keys instead of the correct result of one specific division.)

Capturing AES keys was one handy application I thought of at the time, but there are lots of other possibilities: change the behaviour of the random number generator on command, tamper with specific memory locations …
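A toy software model of that kind of trigger (not the actual design from the paper; the magic operands and the "captured key" are invented placeholders):

```python
# Toy model of a magic-operand CPU backdoor: the "divider" behaves
# correctly for every input except one specific operand pair, for which
# it returns secret state instead of the quotient.
# MAGIC_A, MAGIC_B and SECRET_KEY are hypothetical values.

MAGIC_A = 0xDEADBEEFCAFEF00D        # hypothetical 64-bit trigger operands
MAGIC_B = 0x0123456789ABCDEF
SECRET_KEY = 0x2B7E151628AED2A6ABF7158809CF4F3C  # stand-in for a captured AES key

def backdoored_div(a: int, b: int) -> int:
    if (a, b) == (MAGIC_A, MAGIC_B):
        return SECRET_KEY           # leak instead of divide
    return a // b                   # correct everywhere else

assert backdoored_div(10, 3) == 3                # looks like a normal divider
print(hex(backdoored_div(MAGIC_A, MAGIC_B)))     # attacker-chosen operands fire it
```

Because only one operand pair in 2^128 misbehaves, black-box testing will essentially never stumble on it; only someone who knows the magic values can fire it, e.g. from a single scripted division in a webpage.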

Thoth March 27, 2018 4:54 AM

@all, Clive Robinson

This topic reminds me of the “Castles & Prisons” model.

I think most of the old timers here have discussed these problems including @Clive Robinson, @RobertT, @Nick P, @Wael, myself and many others.

The fundamental problem is “putting all the eggs in one basket” syndrome.

Despite knowing these issues, we are still stuck in this loop.

(haha) March 27, 2018 6:34 AM

@jamez: "a cpu would need cooperation from other system components to really be a trojan"

All CPUs already have this cooperation; it is necessary to make the "Intel Management Engine" work. The IME has, e.g., its own version of sshd listening on your ethernet port.

VinnyG March 27, 2018 6:56 AM

I confess to large-scale ignorance regarding the function of ICs at this level. However, after reading and mostly (I think) conceptually grasping the salient parts, I have two questions that may be worth pursuing. The first concerns the use of "golden chips" as a comparison standard. This one is simple: how does the manufacturer know with absolute certainty that the "golden" chip has not been tampered with? The second concerns detection, especially in the field, by chance. The paper discussed side-channel attacks that seem to be deliberately designed to reveal the alteration(s). Those attacks failed. But the altered chip does behave differently under some conditions, does it not? Would this not mean that there is some possibility that some operation that should work correctly on the architecture in question would fail? Would this be obvious, or is there redundancy somewhere that would likely conceal it? Can the probability that this would happen in such a way as to reveal that the chip was altered be calculated?

MrC March 27, 2018 9:35 AM

@ jamez:

If I understand correctly, the idea is that some transistors in the RNG circuit are "stuck" so that you end up with outputs with much lower entropy than they ought to have — low enough that brute force becomes feasible if you've got a model of what the circuit's actually doing. Sound crypto fails if you feed it a non-random number when it asks for a random one. (The dicey part is doing that while still looking random enough that no one else is able to develop a model of what the circuit's actually doing through empirical observation during the entire lifetime of the affected chips.)
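A sketch of why that is exploitable, loosely modelled on the paper's attack on the Ivy Bridge RNG design (the constants and sizes here are illustrative, not the real circuit): the Trojan forces the conditioner's AES key to a constant and freezes all but a few bits of the entropy input, so the output still looks random but can be brute-forced by anyone who knows the constant.

```python
# Toy model of the dopant-Trojan RNG: output = AES_Kconst(counter || entropy),
# where the Trojan has frozen the key and all but UNKNOWN_BITS of entropy.
# The stream passes statistical tests (it is AES output), yet the attacker
# can enumerate the few surviving bits offline.
# Requires: pip install pycryptodome
from Crypto.Cipher import AES

K_CONST = bytes(16)      # Trojan-forced key, known to the attacker
UNKNOWN_BITS = 20        # a real attack might leave ~32; 20 keeps the demo fast
_cipher = AES.new(K_CONST, AES.MODE_ECB)

def trojaned_rng(entropy: int, counter: int) -> bytes:
    """Conditioned 'random' block with almost no real entropy."""
    return _cipher.encrypt(counter.to_bytes(12, "big") + entropy.to_bytes(4, "big"))

def recover_entropy(output: bytes, counter: int) -> int:
    """Attacker's offline search over the only bits the Trojan left free."""
    for guess in range(2 ** UNKNOWN_BITS):
        if trojaned_rng(guess, counter) == output:
            return guess
    raise ValueError("not a trojaned output")

sample = trojaned_rng(entropy=0xBEEF, counter=7)
print(hex(recover_entropy(sample, counter=7)))   # -> 0xbeef
```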

me March 27, 2018 10:09 AM

@VinnyG:
“there is some possibility that some operation that should work correctly on the architecture in question would fail”

Yes, just as there is some possibility that if you smash random keys on your keyboard you'll go to Facebook and successfully log in as someone else.

I have read a paper about CPU backdoors where they added one single gate that worked as an analog device, not a digital one.
If you repeatedly divide by 0 (normally this just raises an error), that gate gets charged more and more, and eventually it changes state. When the state changes, the CPU switches from user-privilege to admin-privilege execution.

This is not something that can happen by accident: you can divide by 0 on a calculator, but no one has any reason to keep dividing by 0, say, 10,000 times in quick succession. There is no single valid reason to do this. Yet a JavaScript webpage can do it.

link to the paper:
https://web.eecs.umich.edu/~taustin/papers/OAKLAND16-a2attack.pdf
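A crude simulation of that trigger mechanism (this is the "A2" analog backdoor from the linked paper; the constants here are invented, just to show the rate dependence):

```python
# Toy model of an analog charge-pump trigger (cf. the A2 paper above):
# each trigger event (e.g. one divide-by-zero) dumps a little charge onto
# a tiny capacitor that constantly leaks. Only a rapid burst of events
# pushes the voltage past the threshold that flips the payload.
# LEAK, KICK and THRESHOLD are invented constants for illustration.

LEAK = 0.95        # fraction of charge retained per time step
KICK = 0.03        # charge added per trigger event
THRESHOLD = 1.0    # level at which the payload (e.g. privilege flip) fires

def fires(events_per_step: int, steps: int) -> bool:
    v = 0.0
    for _ in range(steps):
        v = v * LEAK + events_per_step * KICK
        if v >= THRESHOLD:
            return True
    return False

print(fires(events_per_step=1, steps=100_000))  # sporadic events: settles at 0.6, never fires
print(fires(events_per_step=5, steps=100))      # tight attack loop: fires within ~8 steps
```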

me March 27, 2018 10:20 AM

"How does the manufacturer know that the "golden" chip has not been tampered with?"
Because they make that chip in a (more) controlled place.
For example (I'm from Italy), we have STMicroelectronics, which makes ICs. I know that they make prototypes (small runs) here and mass-produce in China because it costs less. Here they have the more advanced fab, but they mass-produce there.

Some detection methods are:
-X-ray the chip: that black plastic package is much larger than the actual die inside, so you can see whether the shape/size or the number of contained dies is correct.
-Current consumed: if you add something, it will need (more) current to work.
-EMF: electronic devices emit an electromagnetic field.

The problem is that one gate will not increase the consumed current enough to be detected; also, it consumes current only when it changes state. One gate is also small and difficult to "see".

I don't know whether we can even inspect the finished chip.
From what I know, the process involves making a "mask" of the chip and using lasers; the laser light goes through small holes in the mask, and since the holes are smaller than the wavelength of the laser light, the light diffracts, and this must be calculated for…
TL;DR: making a (small) chip is quite difficult and expensive. Once packaged, it's almost impossible to open it or see what's inside.

dj March 27, 2018 12:13 PM

@echo:

Hijacking a fab is not going to work. Not unless you don't care that you'll be caught. And that would happen faster than the product could be finished. Stealing the IP may be easier, but it's more than just the designs that must be stolen. The entire development history would have to be taken, and all the production data as well, since processes are changed during production. It's a huge amount of data spread across multiple systems. It's simply easier, though not easy at all, to have your own fab, hire out as a manufacturer, alter the design using methods that won't expose the alterations in testing or appearance, and have alternate data on hand for audits. But one small discrepancy or error and the jig is up. Also, there are customers who strip devices down to bare silicon looking for problems and alterations, and competitors who do the same to reverse-engineer the devices.

Even if one could get away with creating just a few altered devices, one cannot be sure they will reach their target, and if that target is something sensitive, the devices will be more closely vetted, more frequently audited and tested.

@D-503:

There already are states that have enough resources to copy and alter chip designs.

Random-sample testing is no longer done. Every production wafer is tested at each step in the process. Every test is recorded and automatically checked against expected parameters. At the end of production, every chip is tested and sorted according to performance, both before and after it is packaged. Not too many actually make it to the end of production, and even then not all of them make it out the door. There is too much at stake not to do such exhaustive testing. Quantity used to be king; now quality is king. Even if they start making a million devices, it doesn't mean they'll get even a significant fraction of that million done. Times have changed.

dj March 27, 2018 12:27 PM

@me:

Things are done a bit differently now. Device features are too small to use the conventional method of simply projecting light through a mask.

The mask is much larger now, and the light used is very-to-extreme short UV. The image from the mask is projected onto the wafer through a series of lenses set up like a reversed microscope. Diffraction is reduced or effectively eliminated.

D-503 March 27, 2018 4:22 PM

@dj
The RNG backdoor described in the paper cannot be physically detected by current methods, even by the expensive and time-consuming approach of shaving the chip down layer by layer and mapping out the circuits, and even if one had a reliable golden chip to compare against.
Dopants patterned at close to the atomic scale of matter, over a large area – sounds very hard to verify reliably.
As for testing, the RNG backdoor is completely undetectable by conventional testing methods, and mathematically hard to detect by unconventional methods, as long as the adversary isn’t too greedy about how far they degrade the RNG.
The RNG backdoor is an exceptionally low-hanging fruit, but more sophisticated backdoors can be engineered.
Consider backdoors that only alter CPU behaviour if the CPU performs a specific operation on a specific pair of magic numbers.
In the case of @James Sutherland’s backdoor, it’s a pair of 64-bit numbers, if I’m guessing right (I haven’t read his paper). Assuming a full 128 bits of entropy in that number pair, the CPU would have to be run for years at 100% to test every possible combination.
The Intel ME backdoor reportedly uses a 2048-bit magic number. Brute-forcing that one – I'm too lazy to do the back-of-the-envelope calculation – but I would guess, by the Landauer limit and E = mc^2, that it could take an amount of energy greater than the equivalent of the total mass of the universe.
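For what it's worth, that back-of-the-envelope calculation can be sketched (room temperature, rough figures; ~10^53 kg is a common ballpark for the ordinary matter in the observable universe):

```python
# The skipped back-of-the-envelope, using the Landauer limit: the minimum
# energy to flip one bit irreversibly at temperature T is k_B * T * ln 2.
import math

k_B = 1.380649e-23                    # Boltzmann constant, J/K
E_op = k_B * 300 * math.log(2)        # ~2.9e-21 J per operation at 300 K
UNIVERSE_J = 1e53 * (3e8) ** 2        # E = mc^2 for ordinary matter, ~1e70 J (rough)

for bits in (128, 2048):
    log10_E = math.log10(E_op) + bits * math.log10(2)  # log10 avoids overflow at 2**2048
    print(f"{bits:>4}-bit search: ~1e{log10_E:.0f} J, "
          f"~1e{log10_E - math.log10(UNIVERSE_J):.0f} x the universe's mass-energy")
```

It supports the guess: the 2048-bit search needs something like 10^526 universes' worth of mass-energy, while the 128-bit pair is limited by time rather than energy (about 10^18 J, but roughly 10^22 years at a billion guesses per second).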
And if the output of the backdoor is a non-obvious side-channel, good luck testing for that.
So it's great that quality control has improved so much over the past 20 years[1], but as a naive person I wouldn't be confident that deliberate tampering could be detected… until it gets used too often against powerful nation-states.
[1] I was under the impression that 100% of chips have manufacturing defects, and only the ones with critical defects are discarded.

Brian March 27, 2018 8:16 PM

From the abstract: “In this paper we propose an extremely stealthy approach for implementing hardware Trojans below the gate level, and we evaluate their impact on the security of the target device. Instead of adding additional circuitry to the target design, we insert our hardware Trojans by changing the dopant polarity of existing transistors.”

I’m no expert, but I’m an EE and have a bit of a background in circuit design, device physics and IC design and fabrication.

It would seem to me that to make this approach work you would need to alter the fab's process parameters, affecting not only individual transistors but also other, more complex analog and mixed-signal "library" blocks, e.g. a variety of I/O pads (half a dozen at least in a modern chip; high-speed ones include re-timing PLLs … many have Schmitt triggers), on-chip linear voltage regulators and references, ADCs & DACs, internal cross-domain re-timing blocks, and crystal clock interfaces (maybe not so much the last one these days, because designs tend to use clock modules?).

These block designs are painstakingly tuned to the fab's process parameters, and even if one or more of these blocks didn't stop working entirely, the fab's QC higher-ups would definitely notice the change in the wafer-probe parametric test reports (and probably the yield distribution by bin) and take steps to correct it.

Maybe the guys who wrote this paper found a way to make it work under ideal, un-monitored conditions, but I strongly suspect the threat here is more theoretical than actual. You’d need a LOT of people to be part of this conspiracy.

echo March 28, 2018 12:04 PM

Is it possible to burn out a single CPU transistor with crossed x-ray beams or something similar?

D-503 March 28, 2018 7:09 PM

How’s your aim? 🙂
It's amazing what people working at synchrotrons can do with X-rays these days, but X-rays are notoriously hard to focus. Even if you could get a narrow enough beam – the wavelength is short enough, but designing X-ray optics is for a special breed of engineer – I don't know how anyone could align a chip in 3 dimensions with 14 nm precision.

Hmm March 28, 2018 8:20 PM

Do you think you could even identify the (hundreds/thousands of) transistors you’d need to hit?

Don’t worry about aim, worry about the concept.

echo March 28, 2018 10:44 PM

@D-503

So you believe this is theoretically possible? That's good enough for me. My physics, maths, and engineering aren't good enough to get much further than asking the questions, so I'll leave it to people with more skill to check the physics, practicalities, logistics, costs, etcetera.

On an indirect tangent, sub-diffraction-limit optics is pretty cool. It was another one of those impossible things that became possible.

https://en.wikipedia.org/wiki/Superlens

Clive Robinson March 28, 2018 10:57 PM

@ D-503,

I don’t know how anyone could align a chip in 3 dimensions with 14 nm precision.

Have you thought about how they align the wafer under the masks for each layer when making the chips?…

@ Hmm,

Do you think you could even identify the (hundreds/thousands of) transistors you’d need to hit?

That is kind of a self-answering question when you think about it.

There are two times when you would go looking for such a “backdoor”

1, When you have evidence of a backdoor.
2, When you have suspicion of a backdoor.

In the first you have "a known fault"; in the second you don't.

When you have “a known fault” you have something tangible you can reason about. When you only have “a suspicion” you do not have anything tangible to reason about.

The first is "a logical AND" search, which is not just bounded but scope-reducing. The second is "a logical OR" search, which is bounded only by the possibilities, and as these increase as a power law it is very much scope-expanding.

Obviously this very much affects the size of the search space, making the former very much simpler than the latter.

At the simplest you would use a "binary chop" on the former. That is, "a known fault" implicitly has known characteristics; you simply ask a series of questions about each characteristic, which sorts the candidate devices fairly quickly on a "has / doesn't have" basis, and as part of the process uncovers new characteristics to test against.

It's a fundamental testing strategy that, for some reason that is not clear, few people appear to be aware of… but when told, they have the "that's obvious" moment that marks such things.
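A minimal sketch of that "known fault" search (the blocks and characteristics below are hypothetical): each observed characteristic is a yes/no predicate, and ANDing them prunes the candidate set at every step.

```python
# "Known fault" search: each observed characteristic of the fault is a
# predicate over candidate circuit blocks; applying them in sequence
# (logical AND) shrinks the search space geometrically. With only a
# suspicion (logical OR) there is no predicate to apply, so the set
# never shrinks. All names here are hypothetical.

candidates = [
    {"block": "rng_conditioner", "affects_rng": True,  "timing_shift": False},
    {"block": "alu_divider",     "affects_rng": False, "timing_shift": True},
    {"block": "sbox_masking",    "affects_rng": True,  "timing_shift": True},
    {"block": "cache_ctrl",      "affects_rng": False, "timing_shift": False},
]

observed = {"affects_rng": True, "timing_shift": True}   # traits of the known fault

for trait, value in observed.items():
    candidates = [c for c in candidates if c[trait] == value]
    print(f"after testing {trait!r}: {[c['block'] for c in candidates]}")
```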

Clive Robinson March 28, 2018 11:26 PM

@ David Rudling,

Or suppose backdoors are introduced by both lots without detection by the other.
Pass me my quill pen and parchment please.

Welcome to the rabbit hole. You either fall down into a maze of twisty little passages or you break your leg. It all depends on which happens first “Eat me” or “Drink me”… Both lead to pain and anguish.

It’s the sort of thing “intel analysts” have to live with day after day in the “Great Game” world of “Smoke and Mirrors”.

D-503 March 29, 2018 12:17 AM

@Clive

Have you thought about how they align the wafer under the masks for each layer when making the chips?…

Yes, I did think about that – aligning the wafer is a vastly simpler problem because robust, precise landmarks are available, and 2 of the dimensions don’t need much precision – in manufacturing, it’s the relative positions of features in that plane that matter (getting the depth right to nm precision is critical, though). Those advantages are gone when trying to physically modify a chip post-manufacture. So it’s a whole different class of problem.

Clive Robinson March 29, 2018 3:17 AM

@ jamez,

what exactly can one do with a backdoored chip? eat cpu cycles maybe, or cause purposeful miscalculations?
malware can compromise data by phoning home with it, but a cpu would need cooperation from other system components to really be a trojan, wouldn't it?

You have to think about the process as a black box.

If you put a backdoor into a chip then either it works all the time or has to be triggered by external input.

If the backdoor is working all the time then what it can do is extraordinarily limited in scope, otherwise it would show up in testing. Thus it would be more passive than active, somehow leaking, say, the carry bit out in RF noise or on the power supply line. Likewise for any co-processors, such as those for floating point or crypto.
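As a sketch of how little signal such a passive leak needs (all numbers invented): modulate the supply current very slightly with the carry bit, and an attacker who can record enough power samples simply averages the noise away.

```python
# Toy model of a passive power-line leak: a trojaned ALU draws a tiny bit
# more current when the carry bit is 1. One sample is buried in noise,
# but averaging many samples recovers the bit. Constants are invented.
import random

def power_sample(carry_bit: int) -> float:
    return 1.0 + 0.01 * carry_bit + random.gauss(0.0, 0.1)  # signal 0.01, noise 0.1

def read_leaked_bit(carry_bit: int, traces: int = 10_000) -> int:
    avg = sum(power_sample(carry_bit) for _ in range(traces)) / traces
    return 1 if avg > 1.005 else 0     # threshold halfway between the two means

random.seed(1)
print(read_leaked_bit(0), read_leaked_bit(1))   # -> 0 1
```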

Such limited capability has little or no range, possibly not even out of the equipment casing. However, if equipment is captured without the self-destruct enabled (a fairly serious problem these days), an attacker could get a probe inside the casing and read out code etc. from the backdoor.

A triggered backdoor could mount a more active, in effect denial-of-service type, attack. The problem, however, is the reverse of the passive backdoor's: getting the trigger signal into the equipment.

In both the active and passive cases, backdooring a general-purpose CPU does not get you much, as you have no idea what function it will perform. Thus backdoors are more likely to appear in dedicated functionality, not general-purpose parts, which might account for their not having been seen.

The nearest we appear to have come is the Rowhammer, Spectre and Meltdown attacks, which work on memory contents, and where an external entity can trigger the fault through web page input.

However, as the focus of Western intelligence turns to "the enemy within", be they imaginary, invented or real, I would expect such memory-accessing/leaking faults to increase.

The more complex the functionality, the more likely there is to be an opportunity to insert such a backdoor and get away with it. Further, the "standard" nature of Personal Computers makes setting up a communications channel to carry data out, or triggers in, far easier.

The other thing is that the likes of the NSA, GCHQ etc. are not stupid, and their resources can buy them multiple attack points. Thus they can get an attack in small individual pieces. They are also fairly adept at finessing, where you get a feature into standards and the like under the guise of "health and safety" or other apparently surveillance-unrelated activity. It is, after all, why we have GPS chips in our mobile phones and the ability to turn the phone microphone on remotely…

Thus you would expect a "little help" given to an engineer to make things run faster or with less power to be suspect. The AES competition, to my eye, was "finessed": the candidates had to put various proof-of-ability code into the public domain, one piece of which was a speed test, a process that opened up a whole load of time-based side channels. The NSA would have known that software engineers would just put the "fast code" into their programs or libraries. The result was a secure algorithm with built-in implementation faults…

Having in effect got away with that, they now know that future people will have their eyes open a little further. Thus they will split their attacks up into more parts. If we look at the recent CPU attacks, they need a high-frequency clock, an ability to use it, and to do so from a distance. That is, several parts need to be lined up…

It's just one of many reasons I do not allow the likes of JavaScript, Java or Flash to run on my computers…

Hmm March 29, 2018 4:25 AM

There are two times when you would go looking for such a “backdoor”

3: just because
4: I have a fuzzer
5: paid to look
6: pedant
There is nothing beyond pedant

Clive Robinson March 29, 2018 6:22 AM

@ Hmm,

There is nothing beyond pedant

Quite…

But your list of examples can be shown to fall as a subset of the two I gave, in the same way that "Education" would.

JG4 March 29, 2018 6:44 AM

@D-503 – One part of getting things in the right place is metrology, a field that has been ploughed diligently by HP/Agilent/Keysight and Zygo. It is fairly easy to get things measured to 0.1 nm using inexpensive run-of-the-mill visible and near-infrared laser chips. The other part might be called nanopositioning. One of the names in this sector is Aerotek, but there are plenty of others. It is obvious that semiconductor manufacturers know how to position masks with the necessary precision. To first order, that probably requires better than 10% of the linewidth/feature size.

A synonym for "backdoor a chip" is "add undocumented features," and it requires only subtle alterations of mask features, probably at the nanometer scale. Those alterations could be performed with ion-beam milling under an electron microscope. I haven't seen it mentioned here lately, but the embedded engineer in the family sent me a link to a company that can and will open chip packages and break hardware encryption schemes. I can't recall if they have a voltage-contrast electron microscope, but that is a handy tool for analysing the effects of mask alterations, as well as capturing real-time information on chip operation. Not quite the ultimate hypervisor, but close. These tools generally are quite expensive, sited at large universities, corporations and state-level actors. In some cases, they can be rented fairly cheaply.

The raw silicon wafers themselves are trustworthy, so production of trusted hardware can begin there. To get rid of the mask completely, e-beam exposure (a glorified electron microscope) of photoresist can be used. It is very slow, but it would allow processing of trusted chips in small quantities. Plasma etching, diffusion and oxidation are relatively safe processes, so they could be done offsite using a trusted courier to convey and observe. The wafer would have to be returned to the trusted exposure system for any additional photoresist steps. It may be that trusted hardware is accessible to a crowd-sourced effort. These steps can be carried out for tens of thousands to hundreds of thousands of dollars.

The large men with dark glasses and dark suits who drive large black SUVs will add considerably to the pool of available cash, in return for you accepting their assistance with the details of the e-beam programming.

echo March 29, 2018 9:24 AM

@JG4, Clive Robinson, hmmm

I have experienced this kind of power-tripping within the state sector and bureaucracies, so I have good cause to believe finessing and compromising for advantage is baked in. In some cases this is for personal career and political advantage, not necessarily the goals of the organisation, the improvement of any particular standard or quality benchmark, or the public interest. My view is that two can play at this game.

I spent today routing around an administrative block with a side-channel attack on another branch office, discussing issues until information began to fall out, then backdoored the administrative block by getting through to an internal department buried behind a layer of opaque blanket policy. I needed to do this because a key decision-maker needed to be informed of a technical issue prior to making a decision and needed the administrative authority to give them cover, which the deeply buried internal department has input on. Only when my fix is in will I proceed with the next step. Until then I'm refusing to reach out for the dangled carrot, because I know this is a point of control and I want this to go my way, not the default way. In theory we should be on the same side, but because of broken defaults in the system things fall apart very quickly before we begin, which is why I have been working so hard at this point to bring us onto the same page.

I used to wonder what was so great about some Hollywood producers until I read an interview which revealed how much navigating and negotiating they needed to do behind the scenes for a movie to be made.

echo March 29, 2018 3:19 PM

This is interesting. I have no idea if this could be used with CPU lithography or testing. This technique also works with infrared, so it may be relevant to the face-recognition technology topic?

http://www.sciencemag.org/news/2018/03/x-ray-ghost-images-could-cut-radiation-doses
Yet by connecting a single-pixel camera to a patterned light source, a team of physicists in China has made detailed x-ray images using a statistical technique called ghost imaging, first pioneered 20 years ago in infrared and visible light. Researchers in the field say future versions of this system could take clear x-ray photographs with cheap cameras—no need for lenses and multipixel detectors—and less cancer-causing radiation than conventional techniques.

MikeA March 30, 2018 3:26 PM

Oh, for the days when this sort of stuff was done for commercial advantage. IIRC, a certain well-known manufacturer of video game consoles included the IC equivalent of a "trap city" in their product. Transistors that appeared to be ion-implanted (a well-known topology) were in fact not. The copiers, who claimed to have produced their copy by reverse engineering the original and designing their own, used implanted transistors for all of that shape. The result was visible in the resulting chroma signal.

Some years later, I was approached by an "analysis" company who had heard (erroneously) that I needed to get an accurate read on the doping of a competitor's chip, a process which at the time was widely believed to be impossible. I declined, partly because they wanted a great deal of money for non-guaranteed results, but also because gentlemen do not read each other's masks.

Cassandra April 2, 2018 8:37 AM

Perhaps no one is talking about the elephant in the room: the possibility that one or several nation-states have already put in place hardware fixes that are well-nigh undetectable. The modification of the built-in random-number generator is a case in point – it passes NIST tests for randomness, while anyone who knows the characteristics of the modification can easily de-randomise the output, which makes many commonly-used public-key protocols easy to break.
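To illustrate that point (an editorial sketch; the key below stands in for the de-randomising secret): a stream that is fully predictable to whoever holds a key still sails through statistical randomness tests, because such tests see only the output distribution. Here is the NIST SP 800-22 monobit test run over AES-CTR keystream from a known key:

```python
# A fully predictable "random" stream passing the NIST monobit test.
# Requires: pip install pycryptodome
import math
from Crypto.Cipher import AES

KNOWN_KEY = bytes(range(16))   # whoever holds this can replay the whole stream
stream = AES.new(KNOWN_KEY, AES.MODE_CTR, nonce=bytes(8)).encrypt(bytes(125_000))

ones = sum(bin(b).count("1") for b in stream)             # 1-bits out of 10^6
s_obs = abs(2 * ones - 8 * len(stream)) / math.sqrt(8 * len(stream))
p_value = math.erfc(s_obs / math.sqrt(2))
print(f"monobit p-value = {p_value:.3f}  (>= 0.01 counts as 'random')")
```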
Most people do not worry about nation-state-level resources being used to decrypt their apparently private communications. For most people it is not a worry, because there is a (somewhat porous) Chinese wall between national intelligence services and law-enforcement agencies, which means the available techniques are not used against relatively minor criminal infractions of relevant laws, but only against targets that are deemed a national-security risk, or who have information that is interesting to a particular nation-state.
Nonetheless, the actual practicality of such hardware Trojans underlines the point that many regular contributors to blog comments make: if you want to secure your communications, you need to ensure that encryption is performed by hardware known to be 'clear' to you. This is why there is a low-key but ongoing effort to produce a practical design for an off-line encryption device that can reasonably be assumed not to have such a hardware Trojan. Some people do actually need such a thing.
For background it is worth reading Richard Stallman's essay on Trusted Computing, which in turn links to Ross Anderson's Frequently Asked Questions on the same topic:

25. So a `Trusted Computer’ is a computer that can break my security?

That’s a polite way of putting it.

A 'Secure Enclave' on a chip die is an area trusted by the manufacturer to be able to break your security, so it could very easily be a 'hardware Trojan' hiding in plain sight. Making all your correspondence non-private in exchange for Hollywood being able to protect its cash flow seems like a poor bargain to me. Opinions differ as to whether that is a good thing.

Cassandra
