Cloning Google Titan 2FA keys

This is a clever side-channel attack:

The cloning works by using a hot air gun and a scalpel to remove the plastic key casing and expose the NXP A700X chip, which acts as a secure element that stores the cryptographic secrets. Next, an attacker connects the chip to hardware and software that take measurements as the key is being used to authenticate on an existing account. Once the measurement-taking is finished, the attacker seals the chip in a new casing and returns it to the victim.

Extracting and later resealing the chip takes about four hours. It takes another six hours to take measurements for each account the attacker wants to hack. In other words, the process would take 10 hours to clone the key for a single account, 16 hours to clone a key for two accounts, and 22 hours for three accounts.

By observing the local electromagnetic radiation as the chip generates the digital signatures, the researchers exploit a side-channel vulnerability in the NXP chip. The exploit allows an attacker to obtain the long-term elliptic curve digital signature algorithm (ECDSA) private key designated for a given account. With the crypto key in hand, the attacker can then create her own key, which will work for each account she targeted.
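To make that key-extraction step concrete: ECDSA has the property that if the per-signature nonce ever leaks, the long-term private key follows from simple modular arithmetic. The real attack is statistical and far more involved, but this is the algebraic core of why nonce leakage is fatal. A minimal sketch with toy numbers, not the researchers' code:

```python
# Toy illustration of why a leaked ECDSA nonce k reveals the private key d.
# An ECDSA signature satisfies  s = k^-1 * (h + r*d)  mod n,
# so with k known:              d = r^-1 * (s*k - h)  mod n.
n = 101                        # toy prime group order; real P-256 uses a 256-bit prime
d = 37                         # long-term private key (what the attacker wants)
k, h, r = 51, 88, 29           # nonce, message hash, and r-value for one signature

s = pow(k, -1, n) * (h + r * d) % n             # what the token outputs
d_recovered = pow(r, -1, n) * (s * k - h) % n   # what nonce leakage lets you compute
assert d_recovered == d
```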

The attack isn’t free, but it’s not expensive either:

A hacker would first have to steal a target’s account password and also gain covert possession of the physical key for as many as 10 hours. The cloning also requires up to $12,000 worth of equipment and custom software, plus an advanced background in electrical engineering and cryptography. That means the key cloning—­were it ever to happen in the wild—­would likely be done only by a nation-state pursuing its highest-value targets.

That last line about “nation-state pursuing its highest-value targets” is just not true. There are many other situations where this attack is feasible.

Note that the attack isn't against the Google system specifically. It exploits a side-channel vulnerability in the NXP chip. Which means that other systems are probably vulnerable:

While the researchers performed their attack on the Google Titan, they believe that other hardware that uses the A700X, or chips based on the A700X, may also be vulnerable. If true, that would include Yubico’s YubiKey NEO and several 2FA keys made by Feitian.

Posted on January 12, 2021 at 6:16 AM • 34 Comments

Comments

Juergen January 12, 2021 6:30 AM

While this is a cool side-channel attack, it should be noted that, at least for the Titan key, it's not actually practical in any way unless you find a way to seal up the key again after the cloning – the attack is destructive as far as the casing of the key is concerned.

In effect, you'd need a victim who not only doesn't notice his key is missing for 10 hours, but also doesn't notice that it came back badly mangled. Of course, a nation-state could have the resources to have a spare casing on hand.

Victor Wagner January 12, 2021 7:13 AM

If one set up a workshop for cloning keys and did a thousand of them, the equipment would work out to just $12 per key. It requires a lot of skilled labour, yes.
But it seems that with a steady stream of keys to be cloned, it would cost about $100 per clone, not more.

Paul January 12, 2021 7:53 AM

I wonder why everyone calls destructive readout (not only the case needs to be mutilated, but the resin shell of the crypto IC must be dissolved with acid as well) “cloning”…

Clive Robinson January 12, 2021 7:57 AM

@ Bruce, ALL,

That last line about “nation-state pursuing its highest-value targets” is just not true.

No it's not, nor is the article's comment,

“The cloning also requires up to $12,000 worth of equipment and custom software, plus an advanced background in electrical engineering and cryptography.”

There are various types of equipment that could do this, and many of them can be rented for short periods of time. So that price drops to around $1000.

As for "an advanced background in electrical engineering", let's just say I suspect they are "over-egging the pudding". This looks like it's within the possibilities of an undergraduate project, or very shortly will be. In fact I suspect many who hold a higher-class Amateur Radio licence and have a couple of years of tinkering would not find this too difficult to try out.

Which brings us around to,

“The cloning works by using a hot air gun and a scalpel to remove the plastic key casing and expose the NXP A700X chip”

This is very probably not required.

1, Plastic casing : Generally does not stop Electromagnetic radiation.

2, Electromagnetic radiation : Occurs as a consequence of doing work with the movement of charge (current flowing) in conductors.

3, The movement of charge : Happens in all active electronic circuits.

Back in the 1980's, as I've mentioned in the past, I started a series of experiments that allowed me to actively attack "Pocket Gambling Machines" and "Electronic Wallets". In the 1990's I was using the same techniques, but more refined, to attack Smart Cards.

At the end of the 1990's there was lots of noise about "Differential Power Analysis"[1]. Whilst it was effective against Smart Cards it was not very useful otherwise. The reason being how you measure the movement of charge in the circuit: in SPA/DPA it means making an intrusive electrical connection by cutting a PCB trace or wire and inserting either a resistor or micro-transformer. Which also means getting through tamper-evident casings.

Back in the 1980's I'd already solved that problem. PCB traces and wires are "antennas", and antennas are bi-directional transducers that convert EM radiation into movement of charge and back again. Because of this they can be viewed like transformers where the residual energy in all the windings must be zero, therefore they can act as "summation circuits". So if you inject an RF carrier it gets re-radiated, but it is cross-modulated by the other charges in the wire. So it brings out the signal that is in the wire.

To cut what is a long story short the casing does not need to be removed if it is transparent to RF energy.

Which tends to suggest to me that Google's key was not designed by someone with wide knowledge in the security field.

Unless of course it was a deliberate design fault by someone who assumes the knowledge of exploiting it is not widely known…

[1] Differential Power Analysis is basically observing the "noise" on the power supply to the smart card with a digital storage scope. You make many repeated readings, align the traces and average them. The random noise averages down with the square root of the number of readings, so each time you quadruple the number of readings the noise roughly halves. Thus the real signal you are looking for quickly appears. You then apply algorithm-based statistical observations to extract the likes of encryption key bits.

https://en.m.wikipedia.org/wiki/Power_analysis
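A toy sketch of that averaging step (hypothetical numbers, nothing to do with the actual measurement setup): the repeatable signal survives averaging while the random noise shrinks with the square root of the trace count.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 0.01 * np.sin(np.linspace(0, 20 * np.pi, 1000))   # tiny repeatable component
for n_traces in (1, 100, 10_000):
    traces = signal + rng.normal(0.0, 1.0, size=(n_traces, 1000))  # each capture drowned in noise
    residual = np.std(traces.mean(axis=0) - signal)                # noise left after averaging
    print(f"{n_traces:6d} traces -> residual noise ~ {residual:.4f}")
# Prints roughly 1.0, 0.1, 0.01: noise falls as 1/sqrt(N), so the 0.01 signal emerges.
```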

[2] DPA and its relatives are "Passive Attacks", similar to TEMPEST attacks, thus the "poor hand maiden" of "Active Attacks". Worse, in the publicly published research there is little about using EM/RF radiation to probe the circuit or induce faults in the way the circuit works. These more refined attacks are "Active EMSec By EM carrier" attacks, with some being "Active Fault Injection By EM carrier" attacks, and they can in some cases be quite devastating.

Noah January 12, 2021 8:26 AM

While the general-purpose hardware cost $12k, I am sure a purpose-built system could be made for far less. In general, lab equipment is expensive because it can be used over a wide range of parameters.

I also expect less invasive methods could be developed. For example, make a jig with a precision drill bit to drill a tiny hole in just the right spot and depth to insert the probe, then fill it with white epoxy afterwards.

Basically, this is a proof of concept, and with engineering could be much more practical. Note, however, that someone who has your key for that long could also simply use it 🙂

Clive Robinson January 12, 2021 9:48 AM

@ ALL,

Getting the casing off as reported is very “smash and grab”. It does the job required but makes a hell of a mess that has to be fixed up afterwards at considerable cost and effort.

The point is for their research there was no need to “fix up afterwards” so it was fine.

From the description of the key it is two plastic shells glued together.

This gives rise to two less messy avenues of attack on the glue, leaving the shells intact to be reused[1]:

1, Chemical
2, Physical

I do not know the plastic or glue involved, but if not chemically similar you can often find a solvent that works on one but not the other.

From practical experience[2] I know you can physically "cut the glue" without cutting what has been glued. I also happen to know from practical experience that surgical dental laser scalpels are kind of handy to open even welded plastic cases and be able to put them back together almost untraceably. Then there are variations on diathermy / RF welding you can use.

In practice these attacks will be quite a bit faster than the hot air gun and scalpel method they used, and will allow the original plastic shells to be reused. So if the owner of the key has put in their own "secret scratches" to detect tampering, they will still be there…

But my preferred method would still be to extend the range of the EM pickup to outside the case.

As I've mentioned before, you can try to hide something in noise, but if the noise is not co-located with the signal there are well-known techniques from both astronomy and medical imaging whereby you can isolate a desired signal from hundreds of other local noise sources. In essence you make the equivalent of a "Very Long Baseline radio telescope", which in this case also has the advantage you see in body scanners of getting a 360-degree wrap around the target.

Would it be difficult to build? Technically no; physically it would be finicky, but once done it would be repeatably usable quickly and easily, and with a little more thought automatically, so neither skilled labour nor a specialised lab bench set-up would be required to operate it.

The other thing is that the "six hours" to get the required number of readings is based on a non-automated process, where you record a 10 ms process into a Digital Storage Oscilloscope, then manually transfer it to a computer. You could automate it so the DSO was not required and the readings were fed in real time into the computer with only a little pre/post processing. Which with optimisation means you might only need to have the key in the jig for maybe five to ten minutes…

[1] These sorts of attack are not new and there is historic evidence the hard wax seals on what we would now call “Diplomatic communications” were attacked in the original “Black Chambers” to remove the threads and then put back in the original seal. It’s a matter of record that the Russians could open the wire and metal seals used on diplomatic pouches in the 20th century, as part of their attack on the electric typewriters being sent to the US embassy in Moscow.

[2] I demonstrate to people how to open envelopes and put them back together by physically attacking the glue, not on the envelope flap but down the sides. All it takes is a little bit of patience, a steady hand and a low-cost small kitchen knife that is prepared in a slightly odd way. In essence you hone it down to get a very thin edge; this you then blunt, and then using a large grit size put random sharp grooves in the edge. In effect you make not a "knife" but a "very thin hacksaw blade" that has micro-sized teeth. That way you can cut the glue not the paper; you then use the same, usually PVA-based, glue –or white wood glue– to glue it back. With practice and patience you can do this so that even tilting the resealed envelope against the light does not show physical marks.

JonKnowsNothing January 12, 2021 10:48 AM

@Clive @All

re:physical vs electronic 2FA devices

A number of online video games use a form of 2FA. From some reports they probably aren’t much more secure than non-2FA access.

1, Key fob with direct connect (like in the report)
2, Xmit/Rcv physical device which sends a GetKey msg to a server, receives a Key Code valid for a short duration, Key Code is entered online.
3, Online 2FA generators, iirc Google has one that uses the same GetKey/KeyCode process but without the physical device, sending via the internet.

The difficulty is two-fold for game application scenarios: (1)

1, the code must match something on the server or login process
2, the xmit/rcv is dependent on the manufacturer or the code-sequence provider
3, it’s still hackable at the server end and at the end user end.

It’s partly security theater.

And then there is the problem of the generated key value…

1, Some games allow 3rd-party sales and transactions. One might be surprised at the value a gamer will place on a ready-to-play fully-equipped character at max level with max-level weaponry. In some countries, prisoners are used to create these high-level toons that are sold by Gold Farmers. The prisoners use a script that min-maxes the time to create these for-sale toons and often must log n-hours to fill quotas.

Other games do not allow Gold Farming but it happens and the ban-hammer happens if they are discovered.

Twitter isn’t the only company that has a ban-hammer but no one gives a sniff if it’s a Gold Farmer or Gold Buyer.

tfb January 12, 2021 12:09 PM

@Clive Robinson

If you look at the paper, the probe they used to 'listen' to the processor was very close to it indeed. So close that it's going to hear a lot more from the parts it was close to than from more distant bits. In other words they may well have been snooping on specific buses/other parts of the thing; if they were further away, all this would get drowned in noise from other parts of the processor, as the relative difference in distance would be much smaller.

(But this is just a theory of course).

If this is a good theory though this attack would be defeated by putting the thing inside a case which was a lot more resistant to attack.

lurker January 12, 2021 12:26 PM

@Clive

1, Plastic casing : Generally does not stop Electromagnetic radiation.

There's a section in Bunny Huang's The Hardware Hacker (sorry I don't have it to hand, quoting from aging memory) where he describes X-rays of a chip, then slicing it open to find the big white squares were metal shields over parts of the circuit. After he published his analysis of the chip functions, later versions had more and larger shields. Given the age of Bunny's book,

Which tends to suggest to me that Google's key was not designed by someone with wide knowledge in the security field.

I would suggest the key design, rather than its hack, was an undergrad project with insufficient supervision.

Clive Robinson January 12, 2021 12:59 PM

@ tfb,

If this is a good theory though this attack would be defeated by putting the thing inside a case which was a lot more resistant to attack.

Firstly the paper indicates there is a NFC antenna on the PCB so they may not want better RF absorption/screening.

However ignoring the theory and NFC for the moment, “putting the thing inside a case which was a lot more resistant to attack” would probably solve it any way.

Almost nobody wants the key to be any bigger, but many would almost certainly like it to feel more solid as it gives it a “higher quality feel” thus the illusion of both quality and security.

So to keep the size the same and make it “feel more solid” kind of moves “plastic” out of the game for the case. Metal is the next logical material to consider which would help screen any emissions, it does not have to be very thick, stainless steel foil would probably do. So a thin pressed case would do and have a quality satin finish look, like the “Cruzer Thumb Drives”. Also to give it weight some internal ferrite material would help a lot as well.

But something tells me that NXP (formerly Philips, which gave rise to the PSV Eindhoven football team) will not be making many more of those chips. They were getting long in the tooth some time ago, and as the profit had already been made the price was cheap, which is probably the main reason they were being used for this application anyway.

But as for the theory, if you read the 60-page paper you will find even the authors know their attack was "overkill". That is, they went all out[1] to get the best signals they could; that way the signal processing was going to be less, as were the required number of samples, and the idea more easily turned into an attack.

They know that now there is a working prototype that has proved it can be done, others will increase the range, reduce the samples etc and publish other papers off of their paper. They might even publish more papers on it themselves.

Such is the nature of "Publish or be damned" to get your next academic job, or even the much sought-after "tenure" or "visiting professor" etc.

[1] Fuming nitric acid is fun stuff, it turns organic materials such as rags and flesh into rocket fuel… It was used by the Germans during the later stages of WWII and they called it S-Stoff,

https://en.m.wikipedia.org/wiki/S-Stoff

The fact they used it to decap the chips might actually discourage others from going down their path…

Clive Robinson January 12, 2021 1:39 PM

@ lurker,

… where [Bunny Huang] describes xrays of a chip, then slicing it open to find the big white squares were metal shields over parts of the circuit.

They might or might not have been shields; you'd need more information to be certain.

But chips used in products with sensitive RF circuitry do have shields built in. Also they are used on non security or RF related chips as well.

But yes, you will find all sorts of strange things in security-related chips: not just metal plates but "random coils" of wire, and even "shaped charges" have been tried… Even special chemicals designed to create intense heat/fire, and those are the ones people are allowed to talk about…

The thing is “playing the signal strength above noise floor game” with metal plates in the near field is sometimes not a good idea, have a look at how “patch antennas”, “slot radiators” and even “loop antennas” work.

People would generally be better off with information-theoretic protection methods rather than physical or electromagnetic defences.

RobertT January 12, 2021 5:41 PM

Hmm might need to take a look at exactly what they’re doing, sounds interesting.

I don't know enough about the NXP chip to comment on which parts of the circuit I'd be looking at first, but I do have a fair idea about which blocks are critical to maintaining security in any security / identification application.

1) there’s always a shared secret, so you need to extract the secret
2) there’s always some attempt to obfuscate (what you think you are looking at, is not what you are actually looking at)
3) there’s always some sort of proprietary communications between the secure blocks
It can be as simple as a low power LVDS type data link or as complex as a multiwire analog signaling system. Sometimes there are undisclosed offsets which move the data packet around inside the frame. Bottom line is that there are lots of possible ways to hide critical information
4) Once you know which signal lines you need to monitor the problem is extracting the data

A decade ago this was possible by directly probing the metal connectors on the surface of the chip, but those days are long gone (and most of the proprietary internal comms include ways to detect chip probing).

These days with a modern sub 90 nm CMOS process you’d be hard pressed to directly probe anything however there are still ways to detect the signal level without direct probing. These methods include Photon emissions, thermally inducing offsets and a few other methods that I probably shouldn’t mention.

As for the time required to steal the data, this is my guess as to the best-case timeline:
Decap the chip: about 10 minutes assuming regular epoxy encapsulation (takes much longer if it's some sort of goop that you need to first blast through with an IR laser).
Blast a hole through the chip passivation layer (assuming regular SiO2): 15 min, 14 min setup and a couple of laser blasts, or send it off for a FIB (2 hours best case).
Set up Picoprobe (assuming the chip is directly probeable): about 15 min per probe (takes longer if you need to coordinate 2 or more probes on the chip at the same time).
Extract data: 10 min (I'm guessing).
It's likely that the attacker would try to simplify the task by interfering with the operation of the random number generator. Most good on-chip RNGs incorporate both a real random-number seed and a pseudo-random number circuit (usually some variant of a simple XOR-feedback LFSR). Generally the real RNG (say 8 bits) is mixed with the LFSR to create the 256-bit (or larger) encryption RNG number. If you can jam the real RNG data then the pseudo-RNG range of outputs is significantly reduced. But maybe this is a trick that these attackers have yet to learn.
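To illustrate the point about jamming the true-RNG seed, here is a minimal sketch (a hypothetical 16-bit Fibonacci LFSR for demonstration, not NXP's actual circuit): once the seed is forced to a constant, the "random" stream becomes fully predictable.

```python
def lfsr16(seed, taps=(16, 14, 13, 11), steps=8):
    """Fibonacci LFSR over 16 bits: XOR the tapped bits and shift them back in."""
    state = seed & 0xFFFF
    out = []
    for _ in range(steps):
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & 0xFFFF
        out.append(state)
    return out

print(lfsr16(0xACE1))   # jammed seed: the attacker sees...
print(lfsr16(0xACE1))   # ...exactly the same "random" values every run
```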

From what I understand of the attack I’d have the whole process done in about 2 to 3 hours.
Hope this helps, btw you’re wasting your time with just this list of steps for any circuit that I designed.

RobertT January 12, 2021 5:45 PM

I wrote a detailed response but it appears to have gone to moderation
I guess they don’t want detailed posts.

RobertT January 12, 2021 9:27 PM

OK, I read the actual attack method and I'm somewhat surprised by the complexity of the task, but I guess it's an attack implemented by someone who lacks the skills to reverse-engineer the chip layout so that they can figure out exactly which of the lines to probe. Instead they rely on EM emissions from the multiplier block and use some advanced data-mining techniques to discover the most likely nonce.

The chip in question uses 140nm technology and 5-level metal. The metal layers look like they're standard aluminum layers with an SiO2 dielectric, probably interconnected with tungsten plug vias. It's over 15-year-old technology, so direct chip probing is still possible.
It looks like there are several sections where an upper layer of metal is being used as a shield layer to make it a little more difficult to access the signals of interest. But this wouldn't stop anyone with any chip-probing experience for very long; it's at most an inconvenience. If they have made the shield into an active element of the circuit then I'd give them full marks, but I doubt they're doing this, because it's an uncommon trick that's highly likely to backfire as it adds an unknown packaging parasitic capacitance.
Anyway it's an interesting attack, obviously developed to leverage the unique skills of the team members.

MrC January 12, 2021 11:31 PM

@JonKnowsNothing

A number of online video games use a form of 2FA. From some reports they probably aren’t much more secure than non-2FA access.

Not all dongles are created equal. The protocol matters.

All the old stuff like TOTP, in which the dongle just barfs up a secret, is vulnerable to a straightforward MitM attack.
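For a sense of what that looks like, here is a minimal sketch of a standard RFC 6238 TOTP code (hypothetical shared secret). The six digits are valid for anyone who relays them within the time window, which is exactly what a phishing proxy does.

```python
import hashlib, hmac, struct, time

def totp(secret, step=30, digits=6):
    counter = int(time.time() // step)                          # both sides derive the same counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                      # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp(b"12345678901234567890"))   # nothing binds this code to the site that asked for it
```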

The new stuff (U2F) solves this by having the remote site send a token to be signed, and the browser telling the dongle which domain the token is from, and thus which key to use. Getting around this requires suborning the browser (in which case the attacker likely no longer needs to go after the dongle), or fooling the browser (which basically makes the overall system as secure as TLS, since that’s the weakest link).
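A rough sketch of that domain binding (simplified, not the exact U2F wire format): the dongle's signature covers a hash of the origin the browser reports, so a response captured on a look-alike phishing domain does not verify for the real site.

```python
import hashlib, struct
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())    # per-site key pair held by the dongle

def sign_assertion(origin, challenge, counter):
    app_param = hashlib.sha256(origin.encode()).digest()    # binds the signature to the domain
    client_param = hashlib.sha256(challenge).digest()       # binds it to this login attempt
    signed = app_param + b"\x01" + struct.pack(">I", counter) + client_param
    return key.sign(signed, ec.ECDSA(hashes.SHA256()))

sig = sign_assertion("https://accounts.example.com", b"server-nonce", 7)
# The relying party checks the signature against the origin it expects, so a MitM
# sitting on a different domain ends up with a signature over the wrong app_param.
```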

The new, new stuff (FIDO2) unfortunately heads off in the direction of making the dongle a password replacement. So it’s not “two-factor authentication” anymore, but rather one-factor authentication using a token that’s vulnerable to physical theft.

Online 2FA generators, iirc Google has one that uses the same GetKey/KeyCode process but without the physical device, sending via the internet.

I think I may be misunderstanding you here. Are you saying that Google offers a remote website that will generate a 2FA code for you? That sounds… profoundly stupid. Can you post a link for this?

I’m aware that they offer a smartphone app for generating 2FA codes, which is [sarcasm]positively brilliant[/sarcasm] because of course it’s a good idea to use a hopelessly insecure device for which you don’t even have root access for this purpose. But having a remote website do it sounds infinitely stupider.

JonKnowsNothing January 13, 2021 12:09 AM

@MrC

re: Link to Google F2A

Sorry I don’t have the link.

It was for a MMORPG game that offered 2FA access with a keyfob.

Aside from other issues, the battery in the fobs finally died and lots of folks wanted a replacement but the fobs were no longer made and you couldn’t change the battery yourself.

There were a number of options suggested by the game (1) at that time on how to continue using 2FA; there was a web version, which I used because I don't use apps.

I haven’t played that game for some time so I don’t know what updates there may have been.

1, Like all things corporate, games companies like others “cut out the fat” ie: customer service and tech support etc. When things work you don’t need help, if they don’t work you need help but there isn’t any to be had, without help you stop playing ’cause it doesn’t work. Catch 22.

Clive Robinson January 13, 2021 1:30 AM

@ RobertT,

Long time no post, nice to see you are still doing interesting things.

With regards,

I wrote a detailed response but it appears to have gone to moderation
I guess they don’t want detailed posts.

Detailed posts are nice (or at least I think so).

There are glitches in the new blog software, and some oddities.

You bumped into one of the oddities. For some reason when you post you don’t see your comment if you click on the post notification page that says approved or held for mod.

However if you see "approved" and click on the last-100-comments page after a moment or two, your post appears there. If you then click on the link in your comment on that page, it takes you back to the correctly updated page.

Why it happens this way I don't know, but that, as they say, "is the price of progress".

RobertT January 13, 2021 2:00 AM

@Clive
I sometimes check in here, but there's not often anything that I'm interested in commenting on.
What I find interesting about this attack is that it uses the near-field emissions of the multiplier, which is something that I normally wouldn't try to shield because it's not an attack vector that's high on my list of viable attacks.
In all likelihood the method only succeeded because NXP did nothing to stop this attack.

This is the basic problem with all security applications, you shut-tight and bolt every backdoor that you can imagine and then you’re embarrassed by some kid that hops through an open window.
The problem is compounded by the fact that the chips sell for next to nothing, definitely sub-$0.50 in volume.
Lots of attack worries, lots of unique knowledge required with next to no profit, welcome to my world.

Clive Robinson January 13, 2021 4:51 AM

@ RobertT,

In all likelihood the method only succeeded because NXP did nothing to stop this attack.

I have a sneaky suspicion that NXP got the short straw.

The attack used a new method of pulling a signal out of the noise, which reading between the lines the authors think could be considerably extended.

The whole attack was “conservative” I’m guessing to give the best chance for the maths to succeed.

As far as I'm aware it's the first time someone has got useful data out of a multiplier. I know it's been thought about before, but it's not come up on the published-paper horizon.

In fact the last paper I remember that would be related was half a decade back, the "Oh arh… Just a little bit more" paper,

https://link.springer.com/chapter/10.1007/978-3-319-16715-2_1

(the title came from a Gina G song… Of such appalling cheesiness it stuck in the mind the same way that cheap plastic burger cheese sticks in your mouth like poor man's "hot melt glue". Which might also be why I remember the paper).

But I also remember a conversation about using a DSP MAD instruction for obfuscation some years back as part of a "card shuffling" type crypto algorithm. The argument being that the array of AND gates in a multiplier produced a signature that would be swamped by the ADD. I was not convinced, and there was not really any data on it to say one way or another…

With regards,

… you shut-tight and bolt every backdoor that you can imagine and then you’re embarrassed by some kid that hops through an open window.

Yup it’s why I favour “segregation” as a way of getting security, generally there is only so far kids can jump 😉

As for,

Lots of attack worries, lots of unique knowledge required with next to no profit, welcome to my world.

You forgot to add "lots of unrelated domain knowledge". I would make a reasonable bet that there are a couple of academic papers out there, neglected on some dusty shelf, whose titles and keywords in effect hide them from security researchers, and thus the papers' importance to security is missed…

name.withheld.for.obvious.reasons January 13, 2021 7:46 AM

@ RobertT, Clive, the usual suspects

Great to see you back RobertT. Hope you are doing well and glad to see you haven't abandoned those of us in the pews. On the NXP issue it is a clear case for scalar architectures that are modular. A multi-die design does have some advantages and of course there is the classic trade-off related to production engineering and costs. But if securing device hardware is serious then the design architecture is primary in affecting such a goal. You, and many others, understand the ecosystem and what pressures make addressing issues such as this difficult: the management structures, product delivery pressures, expertise and costs in the R&D process, and the qualitative processes that assist in reaching security-based goals.

I think we’ve all heard and/or embrace the old adage, “If you want to secure this system, remove power.” It is simplistic but it is a super practical approach in answer to the opposite spectrum often found in product designs. Of course it is not an answer, but it could become one. The probability is not zero/zed, but nearly. I’ll suggest Bruce’s book to others to get a clearer picture as to the environment; “Click Here to Kill Everybody: Security and Survival in a Hyper-connected World”.

The number of IoT devices that will need to be completely trashed could be enormous. The real question is how many, and how long we have before an effective DoS or other problems resulting in very costly remediation. And I personally am tired of the hardware and software life-cycle inconsistencies from many perspectives (ecological, expense, profit motives, etc.). When hardware vendors decided that the software lifecycle model was useful, they left the consumer in the breach.

A number of hardware subsystems represent homogeneous, cross-platform, common defects. There are SPI and SMBus components for example, video and line drivers (USB or HDMI-based), storage interconnect interfaces such as SATA, and even PCI glue hardware (including just the risers).

Goat January 13, 2021 8:40 AM

I thought the moderation issue had been resolved, @RobertT; that is unfortunate. @Bruce, have you looked into the matter?

Clive Robinson January 13, 2021 9:18 AM

@ name.withheld…, RobertT, and the usual suspects,

I think we’ve all heard and/or embrace the old adage, “If you want to secure this system, remove power.” It is simplistic but it is a super practical approach

It's a more drastic version of "Energy Gapping"… in this case removing the energy to function. Which is kind of not what system users want. But it is quite effective at stopping information leaking, unless of course someone steals the physical parts…

But the principle is the same information gets impressed on energy or physical objects for,

1, Processing.
2, Communications.
3, Storage.

The first two require “work to be done” which has two implications,

1, Energy is required for work to be done.
2, Energy usage is never 100% efficient.

Thus information impressed/modulated on energy will escape due to inefficiency.

That’s the laws of physics, and there is little you can do about it.

However thermodynamics also comes into play.

3, As work is carried out the inefficiency via physical processes becomes “heat”.

4, All systems have finite heat tolerance.

Thus you have to get rid of the waste energy from doing work to stop the system destroying itself.

Thus you have to remove the waste energy and physics defines only three ways you can do this,

1, Conduction.
2, Radiation.
3, Convection.

Conduction occurs via a conductor of some form; the energy will be constrained by it, and thus when a steady state is achieved the energy in at one end comes out the other. Unless the conductor is lossy, in which case energy leaves the conductor in proportion to its length (so 1/r).

Radiation is assumed to occur equally in all directions from a point source. Whilst this is not true close in, it is true a short distance away. As the energy is spread over a two-dimensional surface, the energy drops off in proportion to the area (so 1/r^2).

Convection is transporting energy by transferring it to particles that then move as their density changes or some other factor comes into play. The energy is moved volumetrically, thus it decreases as the volume increases (so 1/r^3).

So the energy decreases with effective distance. Thus the modulated signal it carries decreases with distance.

So depending on how the energy is transported, the signal drops against any uniform background noise, to the point it is no longer reliably detectable.

But both information and noise have a characteristic in common: they are spread across a bandwidth. That is, there is a certain amount of energy per unit of bandwidth.

Without going into the mathematics, it can be seen that for information to leave a system it must have,

1, Sufficient bandwidth for the information.
2, Sufficient modulation depth.
3, Sufficient energy to go the required distance and still be above the noise.

Which means if you,

1, increase the noise.
2, reduce the energy.
3, reduce the bandwidth.
4, reduce the modulation.

You limit how far the information impressed on the energy is usable.

This tells you all you need to know to appreciate the ideas behind the signal-leakage distance rules of TEMPEST, which used to be a Government Classified Secret until not that long ago…
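A toy way to see how those knobs interact is the Shannon capacity formula, C = B·log2(1 + S/N): shrink the leaked signal's bandwidth or push it below the noise and the exploitable information rate collapses. Hypothetical numbers:

```python
import math

def leak_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity of the leakage 'channel', in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

print(leak_capacity(1e6, 10))    # wideband leak with the signal 10 dB above the noise
print(leak_capacity(1e6, 0.1))   # same bandwidth, signal pushed 20 dB further down
print(leak_capacity(1e3, 0.1))   # bandwidth also strangled: very little can escape
```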

TEMPEST is actually a “Passive Emission Security Attack” methodology and there is a bit more behind it than just the above. Another part is how to deal with information moving around within a system and how communication channels are formed.

It’s why you have other design rules such as,

1, Clock the inputs.
2, Clock the outputs.
3, Clock from secure to insecure.
4, Control from secure to insecure.
5, Error signals are control signals from least secure.
6, Fail hard on error.
7, Fail long on error.

And one or two more.

Designing to these can be hard but one mistake most make is,

“Security -v- Efficiency”

As systems become more efficient you get two side effects,

1, They become more transparent.
2, They have greater bandwidth.

Neither is good if you are trying to close down side channels.

Various failings in these rules can be seen in the design of that security token…

JonKnowsNothing January 13, 2021 9:54 AM

@Clive @All

re: secure this system, remove power

Something that certain tank crews forgot, or didn't do, because they needed to keep the tank diesel engines running.

They were targeted by their heat signature.

They died. Killed from far away, far beyond the range of their tank cannons. Early drone strikes; they never saw it coming.

R. Cake January 13, 2021 11:26 AM

For those of you who actually read the researchers' publication (obviously not everyone on this thread), you will certainly have noticed the list of products and especially the CC certificates that are referenced there. The newest one dates from 2014.
This in turn tells us that the chip generation that was successfully attacked here probably dates to about 2010 or even before that. At that time, EMA-probe side-channel analysis was not yet really practical. Therefore it is not surprising that with equipment and computing power from 10-15 years later, attacks start becoming realistic.

As for @Clive's proposals – please remember that the magnetic field drops with the square of the radius from the source. Therefore, it is not at all the same to measure with an EMA probe a few micrometers above the chip surface – as done in the paper here – and to do the same about one millimeter higher (above the undamaged package surface). Not only is your signal going to be wwaaaayyyy weaker there, it will also be masked by the adjacent signals around the signal you are actually trying to lock on to.

As it looks today, the attack described is clearly viable in the lab only, with an opened chip package. Not sure how many of you guys have worked with fuming nitric acid before, but this is actually ugly stuff. You can rinse it off, but some will remain in the plastic and continue slowly etching away there. For that reason too, I am doubtful about the idea of "returning a unit back to service" or the owner not noticing that it has been tampered with.
In nearly all cases, there will be an easier way into the holder’s accounts, e.g. by social engineering.

Still, I think this is actually a very nicely implemented attack and a well written paper.

RobertT January 13, 2021 4:27 PM

Reducing the system power is an important but insufficient step.
To understand why, you need to go back to basics and learn Information Theory. Crack open a few books by the masters Shannon, Hartley and Nyquist, but don't stop there because that's just the beginning. Basically, to hide information you need to suppress emissions (or power signatures) below the noise floor. Sounds easy, but it's not.

For instance, in a chip that can only be tested as a whole product it would be sufficient to add noise to the power supply to suppress the ability of the attacker to apply DPA techniques. However if the attacker can separate the power going to the various blocks within the chip (by whatever means) then they're back in the game. Generally, if you just added power supply noise the obvious response would be to simply disable the noise block, or to probe the chip's internal power supply and differentially reject the additive noise.

This means that noise needs to be embraced as a fundamental part of the design of any high-security device. So what is "noise", and what kinds of noise can you easily find?
Typically there are 4 physical noise sources available on a chip.

1/(f*f) generally present but gets swamped by 1/f noise
1/f noise tends to be the largest magnitude noise source
Thermal noise (standard device noise typical of resistors and capacitors)
Very high frequency noise (at the device frequency limits noise hooks up again)

OK, so the easiest noise source to access is 1/f noise, but it is typically not all that useful because it doesn't cause substantial changes over a short interval, meaning the system looks perfectly synchronous over the short term. This point is very important if your adversary is externally gathering information by over-sampling (as was the case with this NXP side-channel attack).

Ideally you want large changes in things like the system clock on a cycle by cycle basis. There are two ways to achieve this
1) Multiply some thermal noise and inject it into the system clock
2) Use pseudo noise (algorithmic clock noise)
True thermal noise is difficult to isolate and it is worth mentioning that circuits designed to extract thermal noise also tend to be extremely sensitive to externally applied electric fields (so called RF injection attacks)

So that leaves you with pseudo-noise. The unfortunate thing about any form of algorithmic noise is that there are only a handful of good circuit implementations for pseudo-noise (especially when it needs to be strictly bounded for system-clock generation purposes).

Anyone who has studied hidden RF communication methods knows that direct-sequence spreading was a popular technique right up until the 1970s. The problem with direct-sequence spreading is that if anyone knows the algorithm, the start position and the key, then they'll actually achieve signal gain by synchronizing their external sampling and despreading the signal. Unfortunately the same signal-recovery gain advantage exists for any on-chip system clock that has been hidden by clock-spreading algorithms. I won't go into this any further for obvious reasons.
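A toy illustration of that despreading gain (hypothetical parameters): anyone who knows the chip sequence can correlate against it and pull the "hidden" signal straight back out of the noise.

```python
import numpy as np

rng = np.random.default_rng(1)
chips = rng.choice([-1, 1], size=64)                 # the spreading (PN) sequence
data = np.array([1, -1, 1, 1, -1])                   # narrowband "secret" bits
tx = np.concatenate([b * chips for b in data])       # each bit spread over 64 chips
rx = tx + rng.normal(0.0, 2.0, tx.size)              # buried in noise at the receiver

# Despreading: correlate each 64-chip block against the known sequence.
recovered = [int(np.sign(rx[i * 64:(i + 1) * 64] @ chips)) for i in range(len(data))]
print(recovered)   # matches `data` despite the noise: ~18 dB of processing gain for 64 chips
```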

So there you have it: you're screwed whatever you do. The more attacks you can imagine the more screwed you are, and in the end if you try to address every single attack vector, well, you'll never release a product, a failing for which you'll soon be fired.
Welcome to my world.

Clive Robinson January 13, 2021 6:30 PM

@ R.Cake,

please remember that the magnetic field drops with the square of the radius from the source

Err, a magnetic field is over a volume (dipole) not a surface (monopole), so try the inverse cube law.

You can do a fairly simple experiment to measure this yourself. But someone has done it for you,

https://www.instructables.com/How-does-magnetic-field-vary-with-distance/

But there is a bit more to it than that: a dipole is effectively two monopoles of opposite polarity separated by a distance. This means that close in, things get a little more interesting,

https://van.physics.illinois.edu/qa/listing.php?id=419

M Welinder January 14, 2021 11:29 AM

Several commenters seem to think that destruction of the plastic casing means that it is not a cloning attack.

Why would that be? Surely if you are putting this much effort into making a copy then you can also afford and procure a new plastic casing, indistinguishable from the original. You can probably even pre-age the new case to the desired level of scuffing and add the right amount of grime.

SpaceLifeForm January 15, 2021 12:18 AM

@ RobertT, Clive

Do you see an attack on comms via sound or light coming from a Faraday Cage?

If my computer inside the Faraday Cage only runs on Battery, no Mains.

What is the attack if there is no attacker physical presence?

Chris Drake January 15, 2021 2:52 AM

Unforgivable – there are countless papers and other reports on these kinds of attacks stretching back decades.

If they designed this chip ignoring all that, how many other obvious mistakes must they have made?

SpaceLifeForm January 15, 2021 3:18 AM

@ Chris Drake, Clive

“If they designed this chip ignoring all that, how many other obvious mistakes must they have made?”

Depends upon how much they were paid.

Clive Robinson January 15, 2021 3:48 AM

@ SpaceLifeForm,

Do you see an attack on comms via sound or light coming from a Faraday Cage?

First off, "Faraday Cages" are not perfect, far from it. The "DC to Daylight" claims are rarely true even for radiated E-field signals. But when it comes to mechanical energy such as sound, few cage designs take it into account. At best they deal only with radiated and some conducted energy, not convected energy.

But one problem is if you want to go inside a Faraday Cage you have to have some way of staying alive… Which means “holes” have to be made, and they have consequences.

The usual rule of thumb for an antenna is that it is “resonant” and that in an unloaded state it needs to be a dipole of half a wavelength minimum.

Another rule of thumb is that of inverse physical images. That is, if you define an area of greater than a wavelength in diameter, a conventional antenna of a conductor in air can be inverted, and a slot in a conductive surface will radiate as effectively, only with the polarisation changed by 90 degrees.

Another rule of thumb is that an antenna's bandwidth is proportional to its resonant frequency. So a wire dipole with a 2% bandwidth at 1 MHz will, if scaled down, still have a 2% bandwidth at 10 GHz. But measured in Hz or "information bandwidth", the 1 MHz antenna has 20 kHz of bandwidth while the 10 GHz antenna has 200 MHz, thus 10,000 times the information-carrying capacity.

A consequence of the sampling theorem is that information reflects around the sampling frequency through the frequency domain. So your baseband information bandwidth of +/- 10 MHz is still +/- 10 MHz up at 10 GHz and would cheerfully be radiated from a slot of 2% bandwidth. Such a slot would only be ~15mm long.
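A quick check of that ~15mm figure (a resonant slot is roughly a half wavelength long):

```python
c = 3e8        # speed of light, m/s
f = 10e9       # 10 GHz
print((c / f) / 2 * 1000, "mm")   # -> 15.0 mm
```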

So you start to see the problem of holes.

I could go on at considerable length, but we start to get into interesting territory with things like "chicken wire": each thumb-sized hole is an antenna at some frequency related to the loop circumference, and the loops form a phased array that mostly cancels out. However at some frequency multiple they do have "beam forming" abilities.

You can actually see this: if you look in the box of lenses used to find your eye prescription, you will find one is not a lens in the form you think of, but a series of small holes in a plate…

Thus making holes in Faraday screens is always problematic, not just in what we consider the RF spectrum but beyond, all the way up through the light spectrum…

Oh and remember with rules of thumb, like the thumb itself, the edge cases are very broad.

RobertT January 15, 2021 5:14 AM

@Chris Drake
No disrespect intended, but to be honest you're the one who is completely clueless if you believe, for even one second, that every possible attack vector can be addressed, especially in a very cost-sensitive product.
It's always a trade-off.
From what I understand, the power signature for this NXP chip is very good, suggesting that DPA attacks don't work. The attacker in this case has removed packaging material and lowered an expensive special RF probe to within a few microns of the die so that they could extract emission signatures for specific blocks. How would you overcome this? Seriously, how do you stop an adversary with physical possession of the device from doing this?

I know ways to make this task difficult for the attacker, but that said I don't know of any circuit design or system implementation that makes this attack impossible. (I do know ways that will make this attack method so frustrating that they simply give up, but I'm not going to just tell you what the solution is, because this is knowledge that was hard won and, to be honest, it's how my labor is differentiated – it's what you pay for when I do the job.)

That said I’m not going to take cheap shots at NXP because it looks to me like they’ve done a reasonable job.

AlanS June 21, 2022 5:18 PM

Interesting that no one seems to have mentioned that there is a signature counter in the FIDO spec. So if you have a perfectly cloned key, and the original key is now back in the possession of its owner who, let's assume, has been perfectly duped, the relying party will be able to detect the cloning after both keys have been used.

So why bother to clone it? If you have stolen the key, why not just use it to log in to the account (or accounts) and steal whatever information you are after? The whole point of the cloning is presumably to have continued access without the owner knowing the account has been compromised, but the counter on each key will almost immediately diverge, so cloning the key won't accomplish continued access, assuming the RP is implementing the spec properly.

Let's assume the cloned key has been used to log in to a user's account. The authentication increases the signature counter on the clone, which now has a greater value than the counter on the original key. As soon as the account owner logs in with the original key, the relying party will be able to detect that the signature counter it has stored is greater than the counter on the key being used to authenticate. That means the original key has been cloned or is malfunctioning.

Unless there is a solution to the counter issue, cloning is pointless. It doesn't get you anything that just using the stolen key wouldn't get you.
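A minimal sketch of the relying-party check being described (illustrative only, with hypothetical storage): the stored signature counter must strictly increase, so once both the original and the clone have signed, the lagging one trips the check.

```python
stored_counters = {}   # credential_id -> highest counter the relying party has seen

def check_counter(credential_id, counter_from_key):
    last = stored_counters.get(credential_id, -1)
    if counter_from_key <= last:
        return False            # counter went backwards or stalled: possible clone
    stored_counters[credential_id] = counter_from_key
    return True

print(check_counter("key1", 5))   # True  - legitimate use of the original
print(check_counter("key1", 6))   # True  - the clone signs once and pulls ahead
print(check_counter("key1", 6))   # False - the original now lags: cloning detected
```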

AlanS June 21, 2022 5:29 PM

In short, you can clone a FIDO security key but as soon as one of the keys is used the clone is no longer a clone and the attempted cloning can be detected.
