Supply-Chain Security and Trust

The United States government’s continuing disagreement with the Chinese company Huawei underscores a much larger problem with computer technologies in general: We have no choice but to trust them completely, and it’s impossible to verify that they’re trustworthy. Solving this problem, which is increasingly a national security issue, will require us to both make major policy changes and invent new technologies.

The Huawei problem is simple to explain. The company is based in China and subject to the rules and dictates of the Chinese government. The government could require Huawei to install back doors into the 5G routers it sells abroad, allowing the government to eavesdrop on communications or, even worse, take control of the routers during wartime. Since the United States will rely on those routers for all of its communications, we become vulnerable by building our 5G backbone on Huawei equipment.

It’s obvious that we can’t trust computer equipment from a country we don’t trust, but the problem is much more pervasive than that. The computers and smartphones you use are not built in the United States. Their chips aren’t made in the United States. The engineers who design and program them come from over a hundred countries. Thousands of people have the opportunity, acting alone, to slip a back door into the final product.

There’s more. Open-source software packages are increasingly targeted by groups installing back doors. Fake apps in the Google Play store illustrate vulnerabilities in our software distribution systems. The NotPetya worm was distributed by a fraudulent update to a popular Ukrainian accounting package, illustrating vulnerabilities in our update systems. Hardware chips can be back-doored at the point of fabrication, even if the design is secure. The National Security Agency exploited the shipping process to subvert Cisco routers intended for the Syrian telephone company. The overall problem is that of supply-chain security, because every part of the supply chain can be attacked.

And while nation-state threats like China and Huawei, or Russia and the antivirus company Kaspersky a couple of years earlier, make the news, many of the vulnerabilities I described above are being exploited by cybercriminals.

Policy solutions involve forcing companies to open their technical details to inspection, including the source code of their products and the designs of their hardware. Huawei and Kaspersky have offered this sort of openness as a way to demonstrate that they are trustworthy. This is not a worthless gesture, and it helps, but it’s not nearly enough. Too many back doors can evade this kind of inspection.

Technical solutions fall into two basic categories, both currently beyond our reach. One is to improve the technical inspection processes for products whose designers provide source code and hardware design specifications, and for products that arrive without any transparency information at all. In both cases, we want to verify that the end product is secure and free of back doors. Sometimes we can do this for some classes of back doors: We can inspect source code (this is how a Linux back door was discovered and removed in 2003) or the hardware design, which becomes a cleverness battle between attacker and defender.

This is an area that needs more research. Today, the advantage goes to the attacker. It’s hard to ensure that the hardware and software you examine is the same as what you get, and it’s too easy to create back doors that slip past inspection. And while we can find and correct some of these supply-chain attacks, we won’t find them all. It’s a needle-in-a-haystack problem, except we don’t know what a needle looks like. We need technologies, possibly based on artificial intelligence, that can inspect systems more thoroughly and faster than humans can do. We need them quickly.
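One concrete mitigation pursued under the banner of reproducible builds is to have independent parties rebuild the shipped artifact bit for bit, so a back door inserted between inspection and delivery shows up as a hash mismatch. A minimal sketch of just the verification step (file paths are illustrative):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(vendor_binary: str, locally_rebuilt: str) -> bool:
    """The shipped artifact should match a bit-for-bit local rebuild."""
    return sha256_of(vendor_binary) == sha256_of(locally_rebuilt)
```

This catches tampering in the distribution channel, but of course only pushes the trust question back to the source code and the tool chain used for the rebuild.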

The other solution is to build a secure system, even though any of its parts can be subverted. This is what the former Deputy Director of National Intelligence Sue Gordon meant in April when she said about 5G, “You have to presume a dirty network.” Or more precisely, can we solve this by building trustworthy systems out of untrustworthy parts?

It sounds ridiculous on its face, but the Internet itself was a solution to a similar problem: a reliable network built out of unreliable parts. This was the result of decades of research. That research continues today, and it’s how we can have highly resilient distributed systems like Google’s network even though none of the individual components are particularly good. It’s also the philosophy behind much of the cybersecurity industry today: systems watching one another, looking for vulnerabilities and signs of attack.

Security is a lot harder than reliability. We don’t even really know how to build secure systems out of secure parts, let alone out of parts and processes that we can’t trust and that are almost certainly being subverted by governments and criminals around the world. Current security technologies are nowhere near good enough, though, to defend against these increasingly sophisticated attacks. So while this is an important part of the solution, and something we need to focus research on, it’s not going to solve our near-term problems.

At the same time, all of these problems are getting worse as computers and networks become more critical to personal and national security. The value of 5G isn’t for you to watch videos faster; it’s for things talking to things without bothering you. These things (cars, appliances, power plants, smart cities) increasingly affect the world in a direct physical manner. They’re increasingly autonomous, using A.I. and other technologies to make decisions without human intervention. The risk from Chinese back doors into our networks and computers isn’t that their government will listen in on our conversations; it’s that they’ll turn the power off or make all the cars crash into one another.

All of this doesn’t leave us with many options for today’s supply-chain problems. We still have to presume a dirty network, as well as back-doored computers and phones, and we can clean up only a fraction of the vulnerabilities. Citing the lack of non-Chinese alternatives for some of the communications hardware, some are already calling for abandoning attempts to secure 5G from Chinese back doors and instead working on secure American or European alternatives for 6G networks. It’s not nearly enough to solve the problem, but it’s a start.

Perhaps these half-solutions are the best we can do. Live with the problem today, and accelerate research to solve the problem for the future. These are research projects on a par with the Internet itself. They need government funding, like the Internet itself. And, also like the Internet, they’re critical to national security.

Critically, these systems must be as secure as we can make them. As former FCC Commissioner Tom Wheeler has explained, there’s a lot more to securing 5G than keeping Chinese equipment out of the network. This means we have to give up the fantasy that law enforcement can have back doors to aid criminal investigations without also weakening these systems. The world uses one network, and there can only be one answer: Either everyone gets to spy, or no one gets to spy. And as these systems become more critical to national security, a network secure from all eavesdroppers becomes more important.

This essay previously appeared in the New York Times.

Posted on September 30, 2019 at 6:36 AM


Jon September 30, 2019 6:57 AM

Note that “Made in the USA” is not much protection either. The US Government will very cheerfully use such abilities to spy on their own citizens. Still, Mr. Schneier’s ‘secure system out of insecure parts’ will be helpful there, too – if and when it can be done. Jon

Robert Silverman September 30, 2019 8:37 AM

What prevents Microsoft/GNU/other from inserting malware into our code as we compile it with Visual Studio/gcc/other? We would have to inspect the object code every time we compile…

Or Microsoft/GNU/other could do the same thing only to compilers that it sells.

Or it could insert malware not as code is compiled, but rather as it is linked. This would be even harder to detect… the assembler output would look perfectly normal.
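What Robert describes is essentially Ken Thompson’s “Reflections on Trusting Trust” attack. A toy sketch, using a pretend “compiler” that merely upper-cases its input, of why inspecting the source being compiled cannot catch a subverted tool chain:

```python
def honest_compile(source: str) -> str:
    """Stand-in for a real compiler: here, 'object code' is just upper-cased source."""
    return source.upper()

def subverted_compile(source: str) -> str:
    """A compiler that silently appends a back door to programs that look
    like a login routine. The source being compiled stays clean; only the
    emitted object code is tainted."""
    obj = honest_compile(source)
    if "LOGIN" in obj:
        obj += "\nBACKDOOR: accept password 'hunter2'"
    return obj

clean_source = "def login(user, pw): check(user, pw)"
print("BACKDOOR" in clean_source)                     # False: source inspection finds nothing
print("BACKDOOR" in subverted_compile(clean_source))  # True: the taint exists only in the output
```

Thompson’s original went one step further: the back door also reproduced itself whenever the compiler compiled its own (clean) source, so even rebuilding the compiler from inspected code didn’t remove it.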

parabarbarian September 30, 2019 9:51 AM

Stopping a Microsoft compiler from inserting a backdoor à la the infamous login backdoor in an early K&R C compiler would be difficult, but by no means impossible. The code is proprietary and, from what I’ve been told, MS development is highly compartmentalized, making it a more ideal place to insert a backdoor. If the backdoor were simple enough and could be done by an offshore contractor, I can see it happening.

OTOH, in today’s all-leaks-all-the-time environment it would be difficult to keep any US government demand for a backdoor a secret for long. More than two persons would know about it, and a secret shared is not really a secret anymore. An authoritarian regime like the one the Chicoms impose might have a better chance of pulling it off, but even they would be subject to the too-many-people-know condition.

OTGH, whenever I contemplate the difficulties such scenarios might bring, I remember how the CIA was able to buy the titanium to build the SR-71s from the Russians.

Rj Brown September 30, 2019 11:34 AM

“Build secure systems from insecure parts…”

Reminds me of a famous quote by the late Seymour Cray. He stated that to make the Cray-1 he had to build a working computer from vendor parts that did not work as specified. He did it by redundancy.

The mathematics of error-detecting and error-correcting codes has been developing since the 1940s, including statistical methods like those of Viterbi, but the amount of information they act on is far too small for those techniques to apply to entire systems, be it source code, the various stages of object code, or the hardware itself.

Defense in depth is about the best we can do. We have layers of defense looking for problems. As we get to deeper layers, we can get more specific in what we look for. It is a great deal of work, and that much work only gets done when the consequences are very serious.

Starouscz September 30, 2019 11:55 AM

The problem of OSS security is also quite pervasive, and it is different for every language and industry. According to Sonatype’s State of the Software Supply Chain report, about 50% of JavaScript npm packages have a known vulnerability; for Java it is about 10%. The solution is to build a secure supply chain with trusted providers and continuous monitoring of component security. However, this is costly, and maybe some regulation would help: only allow software and hardware components that are certified for their security to be used at scale for healthcare, military, or other critical use cases. In cars, airplanes, and hospitals you cannot just use any tools from the street, and doctors cannot bring their favorite knife from home.

Imagine if you could go to a store with wifi routers and see online, up-to-date information about their software components, including security vulnerabilities. You could take the one that has 60 vulnerabilities or another one with 6000. This ongoing online comparison, rather than some one-off certification, would push companies to fix the solutions already in the market, not only the new versions.

If you just shout “Chinese,” then what makes other nations trust your stuff and not do the same? How can we in Europe be sure your products are more trustworthy? Setting some general process via regulation can protect against that, and also against low-quality voodoo vendors selling unsafe stuff. Soon human lives will be at stake at scale, and setting a regulation at that point will already be too late. All mature industries have some, and there is no reason why IT should be an exception.
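The compare-routers-by-vulnerability-count idea can be sketched with SBOM-style component lists; the component names, versions, and counts below are all made up for illustration:

```python
# Hypothetical vulnerability feed: (component, version) -> count of known CVEs.
VULN_FEED = {
    ("openssl", "1.0.1e"): 60,
    ("openssl", "3.0.13"): 2,
    ("busybox", "1.20.0"): 15,
    ("busybox", "1.36.1"): 1,
}

def known_vulns(sbom: list[tuple[str, str]]) -> int:
    """Total known vulnerabilities across a device's listed components.
    Components absent from the feed count as zero (optimistically)."""
    return sum(VULN_FEED.get(component, 0) for component in sbom)

# Two routers on the shelf, described by their software bills of materials.
router_a = [("openssl", "1.0.1e"), ("busybox", "1.20.0")]
router_b = [("openssl", "3.0.13"), ("busybox", "1.36.1")]
print(known_vulns(router_a))  # 75
print(known_vulns(router_b))  # 3
```

The interesting design point is exactly what Starouscz argues: the feed is live, so a router’s score degrades over time unless the vendor ships updates, which a one-off certification cannot capture.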

Clive Robinson September 30, 2019 11:57 AM

@ Bruce, the usuall suspects and ALL those who want to think,

It’s obvious that we can’t trust computer equipment from a country we don’t trust, but the problem is much more pervasive than that.

Whilst both parts are true, the second is entirely irrelevant. That is, it is the height of stupidity to do the opposite, that is, to “trust components from countries you trust”.

Anyone who has read this blog for even just a couple of years should know this by heart.

Even new readers should know the CIA motto is “In God we trust” with the unsaid rider “all others we check, endlessly”.

But this raises a second issue that renders the first truth irrelevant as well. That is, “endless checking” is not just resource intensive, it also fails, for the simple reason that all such checking processes have more than a few significant cracks through which all sorts of perfidy may drop, sight unseen.

The reality of this makes checking at best a probabilistic process. Hence years ago I started calling it “Probabilistic Security”, and you can find out more about it by reading a series of posts by myself with @Nick P, @Wael and a number of others, as a small part of what got called “Castles-v-Prisons” or “C-v-P” or “CvP”. @Wael did at one point keep a whole load of links to the various threads.

The point about CvP is it does exactly what Deputy Director of National Intelligence Sue Gordon is talking about with,

    “You have to presume a dirty network.”

And you mean with,

Or more precisely, can we solve this by building trustworthy systems out of untrustworthy parts?

To which the answer is definitely “yes”, and we’ve been doing it for hundreds of years in other security domains, with “castles leading to battleships”. A discussion of which should still be up in this blog’s comments.

As any engineer of any experience and worth their salt knows, “All Systems are built with primary components” and that “all primary components only have properties defined by the laws of nature”.

Security is in no way defined by basic laws of nature, thus all such primary components have no inherent security properties. You have to build security from the ground up, or more correctly, “from the laws of nature up”.

However, there is a fly in the ointment, as some mathematicians can tell you. The work of Kurt Gödel, from before computers were defined by Church and Turing (building on the work of Cantor and others), has a property that you can follow to an inconvenient conclusion:

    There is no way for a general-purpose (Turing-complete) computer to know it is secure.

This gives you two options,

1, Use one computer (hypervisor) to oversee another.

2, Use a non-Turing-complete “state machine”, where all states are fully known, to perform certain types of limited computing function.

Or you can use both, which is what I looked into doing and built a few prototypes using Microchip PIC chips to build the functional parts for testing (which I’ve talked about in the past).

The result is you end up with a hierarchy of computing elements. At the bottom you have general-purpose Turing-complete processors in a highly constrained environment (prison cell). Above these are state machines (trustees) monitoring “signatures” of the prison cell “contents and activities”. These feed status signals upwards to limited-functionality, hard-coded Turing-complete systems (guards) that in turn feed a now highly isolated Turing-complete system (warden) that interfaces to the security administrators etc.

But the question arises of what these “signatures” are. The list is fairly long, but the important part to note is that the captive CPU in the prison cell is in a highly constrained, thus reduced, environment. In effect they cannot run full programs, only sub-program tasks or functions, or “tasklets”. Thus they have much less complex signatures that are comparatively easy to instrument with the likes of a limited-function state machine.

As you have to have multiple prisoners to complete even a moderate task, they can be very, very simple, and with a little care you can get a very great number of them on a single silicon die.

Which also means you can have different core architectures for these prisoners. Which means you can also run them in parallel.

More than a lifetime ago the New York telephone company identified that it had a reliability problem. In short, the MTTF was too short, and identifying units at fault was too manpower-intensive a task. Their solution was to look back into Greek myths and develop what we now call “voting circuits”. NASA took this to new heights in the Apollo project, back half a century or so ago.

As I’ve noted in the past, this can usefully be used to build trust using completely untrusted components.
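A minimal sketch of the voting-circuit idea, assuming several independently built (and mutually distrusted) implementations of the same function:

```python
from collections import Counter

def majority_vote(implementations, task):
    """Run the same task on several independent, mutually distrusted
    implementations and accept the answer the majority agrees on.
    A single subverted (or merely broken) unit cannot change the result."""
    answers = [impl(task) for impl in implementations]
    answer, count = Counter(answers).most_common(1)[0]
    if count <= len(answers) // 2:
        raise RuntimeError("no majority: too many units disagree")
    return answer

# Three diverse units: two honest, one quietly subverted.
honest_a = lambda x: x * x
honest_b = lambda x: x ** 2
subverted = lambda x: x * x + 1
print(majority_vote([honest_a, honest_b, subverted], 7))  # 49
```

The sketch assumes the units fail independently, which is exactly why diverse core architectures matter: identically subverted copies would out-vote the honest ones.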

Thus we definitely know how to build such systems and we know it’s well within our current capabilities.

The problem is, does any SigInt or IC agency actually dare develop such systems, knowing that those they spy on, by placing malware and similar in “exports” to them, will then go down the same route, thus “killing the golden goose”?

In reality that is the discussion we should be having, that is,

Do we give up what is mainly trivial cyber-espionage for much stronger cyber-security?

I think most can work out what my position is on this, as I’ve opened the bag and invited the cat to jump out…

Wael September 30, 2019 11:59 AM

We have no choice but to trust them completely, and it’s impossible to verify that they’re trustworthy.

And who created this problem? There’s always a choice, but perhaps we value money more.

The computers and smartphones you use are not built in the United States. Their chips aren’t made in the United States. The engineers who design and program them come from over a hundred countries.

Yet, we impose export restrictions on such devices. Go figure!!!

The overall problem is that of supply-chain security, because every part of the supply chain can be attacked.

Hard problem to solve. Several aspects of it were discussed here over the past 7+ years.

And while nation-state threats like China and Huawei, or Russia and the antivirus company Kaspersky a couple of years earlier, make the news

I am more concerned about the threats that did not make the news 😉

we want to verify that the end product is secure and free of back doors.

Who is this “we”? Another entity that “we” don’t trust? Perhaps on the nation-state scale, that makes sense; “we” is the country that imports equipment. “we” does not make sense in the case of the individual consumer, because… they don’t trust the “we” that inspects devices, because that “we” has been caught several times with their pants down, so to speak.

It’s a needle-in-a-haystack problem, except we don’t know what a needle looks like.

And we don’t know which pile of haystack to look in, either. Assuming we know what a haystack looks like 🙂

The other solution is to build a secure system, even though any of its parts can be subverted.

Enter C-v-P !

These things (cars, appliances, power plants, smart cities) increasingly affect the world in a direct physical manner. They’re increasingly autonomous, using A.I. and other technologies to make decisions without human intervention.

Solutions are being looked at in this space — industry standards.

tz September 30, 2019 2:01 PM

I think the problem is hard but simple. We are tending toward monoculture because of economies of scale. For example, the recent boot-ROM DFU jailbreak that affects EVERY iPhone through the X. Android phones can have different holes; e.g., only one model of Galaxy Note self-immolated.

If there were 10 vendors of 5G equipment that had to use their own chips, designs, hardware, etc. maybe half could go down, but the rest wouldn’t and would be less likely a target since it would reveal the bad nodes and not cause a catastrophic failure.

Unfortunately, engineering is hard and expensive. So it costs a lot to duplicate a spec, and then there are patents, where everyone needs dozens of licenses from everyone else. So we build one or two (like the VZN/ATT duopoly, which uses different technology). When I worked at a big company, “sole source” required special permission. Now everything on the desktop is typically Windows (10!), there are two main smartphone providers, etc.

Between “just in time,” which creates fragility even when nothing malicious happens but bad weather hits any segment, and sole sourcing (Tim Cook finds the one cheapest provider, but what if they are somewhere that gets tariffed or sanctioned? Imagine if a critical component were produced in Iran), we are creating a brittle ecosystem that can easily collapse with extinction events.

Michael Tidwell September 30, 2019 2:10 PM

Great topic! One approach that can move the ball forward on securing the supply chain for IoT is to pre-provision hardware root-of-trust based MCUs/memory/FPGAs/secure elements in a secure facility such that only authenticated code can run on or update the device in the field. Encrypting code between development teams and the mass-production factory helps as well. Our company has deployed such a solution with major silicon component distributors; it is available in North America and Europe prior to shipping parts anywhere in the world and supports numerous hardware-based security chips from major semiconductor vendors.
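The “only authenticated code can run” property can be caricatured in a few lines. A real hardware root of trust verifies an asymmetric signature from boot ROM; the stdlib-only sketch below substitutes an allow-list of firmware digests (all values illustrative):

```python
import hashlib

# Digests of firmware images signed off at the provisioning facility.
# (A real root of trust would hold a public key in ROM and verify a
# signature over the image instead of consulting a fixed allow-list.)
AUTHORIZED_DIGESTS = {
    hashlib.sha256(b"firmware v1.0").hexdigest(),
    hashlib.sha256(b"firmware v1.1").hexdigest(),
}

def boot(image: bytes) -> str:
    """Refuse to run anything that wasn't authorized at provisioning time."""
    digest = hashlib.sha256(image).hexdigest()
    if digest not in AUTHORIZED_DIGESTS:
        raise PermissionError("unauthenticated image refused")
    return "booted " + digest[:8]

print(boot(b"firmware v1.1"))
```

The same check gates field updates: an attacker who controls the distribution channel but not the signing step cannot get a modified image past it.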

David Leppik September 30, 2019 2:36 PM

With reliability, it’s actually more reliable to use unreliable parts than reliable ones. When one part is too reliable, its failure modes, and the cascading effects of its failure, are not well known. System administrators can be confident in the overall system only when all the components fail regularly enough that they’ve seen them fail and know how to respond. That’s why Netflix invented the Chaos Monkey to regularly shut down random nodes: bugs get caught soon after they are deployed, rather than festering until their service is most stressed.

Analogously, I’ve heard that cryptographers don’t trust any system built around one-time pads, because the mathematical unbreakability of the pad makes users lax about the other links in the security chain.

I don’t know what a security resilient system built on insecure parts would look like. Redundancy is not the answer if it introduces more components which can be compromised. That said, modern secure random sources mix together less trusted sources of randomness, much the way poker players reduce the risk of a cheating dealer by having another player cut the deck.
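The entropy-mixing point is simple to sketch: hash several sources together, and the output stays unpredictable so long as at least one input is. This illustrates the principle only; it is not a production RNG:

```python
import hashlib
import os
import time

def mixed_seed(*sources: bytes) -> bytes:
    """Hash several entropy sources together. An attacker who controls
    some, but not all, of the inputs cannot predict the output."""
    h = hashlib.sha256()
    for s in sources:
        # Length-prefix each source so concatenation boundaries are unambiguous.
        h.update(len(s).to_bytes(8, "big"))
        h.update(s)
    return h.digest()

seed = mixed_seed(
    os.urandom(32),                     # OS entropy pool
    time.time_ns().to_bytes(8, "big"),  # low-quality timing source
    b"attacker-controlled constant",    # a source we assume is subverted
)
print(len(seed))  # 32
```

This is the poker-style cut of the deck: no single dealer, however untrusted, controls the final shuffle.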

Following the “Chaos Monkey” analogy, a security-resilient system might constantly penetration test itself. However, that also leads to a greater attack surface. It would have to be designed around the idea that the penetration tester could itself be compromised.

kaosagnt September 30, 2019 8:49 PM

h = t(“./FloodgateCore/FloodgateTelemetryLogger”),


collectorUri: “”,


@Robert Silverman

Visual Studio is already watching you… no need to subvert your application; the entire OS and Microsoft cloud are riddled with so-called “telemetry”, but nobody complains.

Ismar October 1, 2019 1:46 AM

What great contributions to this topic proving the blog is worth its weight in gold.

One avenue that might be worth exploring is designing a layered software/hardware system where each layer adds more complexity while inevitably adding insecurity (I think this is similar to what Clive has been advocating).

For more secure systems, which in general do not need to support rich functionality, only the lower levels should be enabled and deployed, while other, less secure systems can build on top of this basic core and introduce the necessary security trade-offs.

This would provide a model scalable enough to allow various security systems to be developed and coexist with each other, while maintaining the desired security for interchange of data between more secure applications over a less secure shared medium (think of something similar to the OSI model, where less secure layers cannot directly inspect any of the data exchanged by the more secure layers).

CMO Dibbler October 1, 2019 5:19 AM

Even if you can somehow impose a spec on Huawei, or ideally if they come up with one, or some process, it’s a router. It can’t all be walled off internally, let alone firewalled. Some bit of it is exposed to the net. Maybe, to scupper HARdware-Accelerated Man-In-the-Middle attacks (HAMIM), every part of it could be built by an international collaboration between two companies. Security by bureaucracy. You’ll probably just end up with secure but dog-slow routers. And who is responsible for integrating it and being the lead supplier or primary contracting entity? And someone’s going to have fun supplying a warranty.

The problem is there’s no alternative available to Huawei’s routers (or is there?) which would allow redundancy (parallel routers) to be adopted. The aeronautical mantra could be followed: three separate avionics systems, plus another backup one from a different manufacturer. Although even where this principle was implemented, it didn’t stop two Boeing 737 MAXes from crashing.

Even if you could double up with routers from another manufacturer to be used as a fallback, they’re routers. Redundancy only defends against a Huawei router being taken offline. And it’s not just race conditions between them either: if WW3 breaks out, the suspect router could be taken over and operated maliciously, even DDoSing the fallback routers. But it’s not just guarding against unreliability and system component failure, or even the net going down; the issue is defending against maliciously controlled devices at the heart of the net. Devices on the net still need to manage their own security.

Ultimately the owner still has physical access to their own routers, so if their traffic and behaviour can be monitored, and it can be determined that a router has been compromised, you should be able to pull the plug on it. And hope everyone else who’s installed the same model does the same…

Weather October 1, 2019 6:16 AM

I think it’s a traceability property that might help: say each router adds some certificate value to a packet; any router that knows the value of a trusted router can then know the packet is trusted from it to you.
If a packet passes through a router where a promiscuous backdoor has become active, then when that backdoor tries to communicate, it is the source: say router 2 sends to router 3; with the above scheme, trusted router 1 never added a value, so the traffic stands out.
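One way to read this idea is per-hop authentication tags. The sketch below uses a shared-key MAC purely for illustration; a real deployment would need asymmetric signatures and faces hard key-distribution problems:

```python
import hashlib
import hmac

TRUSTED_KEY = b"key held only by trusted routers"  # illustrative only

def tag_packet(payload: bytes, key: bytes = TRUSTED_KEY) -> bytes:
    """A trusted router appends a MAC over the payload."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify_packet(packet: bytes, key: bytes = TRUSTED_KEY) -> bool:
    """Downstream routers check the tag came from a key-holding router."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

pkt = tag_packet(b"hello")
print(verify_packet(pkt))                    # True
print(verify_packet(b"forged" + pkt[-32:]))  # False: tag doesn't match payload
```

Traffic originated by a backdoored router carries no valid tag, which is the “didn’t add a value” case Weather describes.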

Alejandro October 1, 2019 6:21 AM

For a brief second I thought Clive wrote this. Well, string and tin cans anyone? Kind of depressing.

name.withheld.for.obvious.reasons October 1, 2019 6:52 AM

We must step back from deploying bi-static fixed and mobile radar. A millimeter-wave system that is at the airport does not belong in one’s pocket or bedroom.

name.withheld.for.obvious.reasons October 1, 2019 6:55 AM

Trust? Our own government has legalized lying. Ask NOAA. Here, write down what they say with a sharpie.

Petre Peter October 1, 2019 7:10 AM

If we think of security as a chain, then we cannot build secure products from insecure parts.

Czerno October 1, 2019 11:29 AM

Gentlemen, what about those “Libreboot” laptops & tablets assembled, sold, and delivered worldwide by Minifree (UK) hxxps:// ?

Especially in respect to the subj. matter “supply chain security”, please review and comment their optional “security enhancements” detailed at : hxxps://

Disclaimer : this post is neither SPAM nor an adv. I am not affiliated with the “Ministree of freedom” :=)

Thanks i.a., and apology in case these had been reviewed on this blog earlier…


SpaceLifeForm October 1, 2019 5:28 PM

Remember, the love of money is the root of all evil.

You can not rely upon a pure software defense.

If you are stupid enough to use NodeJS, you deserve your own fate.

That is a YUGE supply-chain problem.

And that is pure software.

Why would anyone seriously use NodeJS when it is very, very clear that you have ZERO control over that software supply chain?

Seriously. It is brain-dead stupid.

The main problem is backdoors buried in silicon.

Sancho_P October 2, 2019 6:08 PM

I think this essay is inconsistent:

”The risk from Chinese back doors into our networks and computers isn’t that their government will listen in on our conversations; it’s that they’ll turn the power off or make all the cars crash into one another.” (@Bruce, my emph)

This, and “… secure American or European alternatives …” [to Chinese solutions],
is pure naZZionalism, very sad. We won’t find peace with this thinking.
(… may be good for the NYT audience, though)

The inconsistency is that this does not fit to the last paragraph, which correctly sings along the lines of:

  • It’s not that either we or they are secure.
  • The enemy is us – That means we, singular humans, are the enemy.
  • Nationality is not important for evil.
    ”Either everyone gets to spy, or no one gets to spy.” (@Bruce)

So what did you want to say: The former or the latter?

lurker October 3, 2019 12:18 AM

@CMODibbler: the problem about trusting Huawei is not backdoors, it’s their sloppy coding, and sloppy version control. They’re commercial, they sell the same piece of kit into different markets, with different network standards, different telco demands for their customers, &c. Some vendors might ship cleanly different models to each market, but Huawei has a few “skins” for each, and the underlying system has a mishmash of outdated components. The GCSB report is an interesting read (sorry I don’t have a link at fingertip) finding a sorry number of different outdated versions of SSL bundled into one router. Hanlon’s razor here my friend, don’t trust Huawei, not because they’re Chinese spies, but because they’re incompetent. Sure, their consumer gear looks flash, sells well, but what I saw inside a few of their phone handsets put me right off hacking them…

Clive Robinson October 3, 2019 6:49 PM

@ lurker,

… the problem about trusting Huawei is not backdoors, it’s their sloppy coding, and sloppy version control.

The real problem is that the whole telco industry is in exactly the same race to a very messy bottom, and much blood and guts will be spilled and flung asunder in the process, as many industry insiders can tell you (basically they are starting to go down the same “rabbit hole” as the current IoT-style development process…).

The report, by the way, is a political hatchet job: basically they came up with a mythical level of desirability and held only Huawei to it. As they don’t really have access to other telco development labs, they could get away with such tactics.

As I’ve indicated before, if we are going to hold one supplier to a set of standards, we should hold all bidders to the same standard. Otherwise we are effectively behaving illegally.

That said, I’m told via others that part of the reason for the very adverse report was “pressure from America”. Put simply, Huawei had a contract with the UK Government through the auspices of what was once called the Communications-Electronics Security Group (CESG). It was supposed to be a “technical arrangement” whereby GCHQ personnel would review Huawei code to build assurances for the UK Gov that there were no backdoors in their kit.

Apparently GCHQ abused the arrangement, first to train their own staff at Huawei’s expense, then by bringing in US personnel to be trained and to find vulnerabilities that were not being fed back to Huawei as per the agreement with the UK Gov. I’m told that this illicit behaviour was in effect leaked by behaviour on the US side. When Huawei found out what was going on, they asked that the agreement not be abused. This apparently caused indignation from the US, and thus this report appeared.

The fact is, as most in the industry know, you could walk into the likes of Cisco, Juniper and many other telco players and find considerably more disarray than there was at Huawei. But US corps, as we know, are on a “nod and a wink” arrangement with the likes of the NSA. There is a pretence that it is not happening, but that flies in the face of reality. The interdiction routine is for “plausible deniability” for both Cisco and the NSA: Cisco gets clean hands, and the NSA gets to keep semi-secret who they are spying on. When the NSA poisons the supply chain of Cisco, the information on how best to do it comes from within Cisco, via links etc. left in during the actual design process…

But getting back to Huawei, you also have to consider that politically Huawei and the EU are unpopular with US politicians that are on retainers from what is left of the US comms industry. Specifically, the number of patents Huawei have that are going into major international telco standards leaves the US finding its commanding position more than somewhat diminished. Various stories have been cooked up to try and make it look like Huawei have stolen US corp IP. The reality is there’s a lot of smoke and mirrors, no actionable intelligence, and as for evidence to take to court, well… The reality is that US telecommunications companies have, at the behest of the US stock market, chronically underfunded R&D. They have laid off many engineers, some of whom are Chinese and have had to go and work at other places. If they have taken actual IP rather than just their experience, nobody has come up with anything. What has however happened is US corps have outsourced production and much else besides to Chinese manufacturing, and in the process, to make a quick profit, freely given away their IP “Crown Jewels”. They were warned repeatedly, considerably more than a decade ago, that the BRIC countries would take US IP, yet “quick buck” very short term thinking prevailed, and those senior managers, to get their bonuses, freely gave away the very life blood of the organisations they were running, whilst also slitting the throat of the R&D departments that generated it…

Thus a big part of the problem is US industry shot itself in the foot. Now the chickens are coming home to roost via a significant trade disparity, and those US politicos on retainer have to actually earn their money for once… Thus the US Gov is currently trying to “poison the well” so that the standards bodies will not consider Chinese or other Asian patents, as well as European patents…

We can only wait and see what happens, but I can see that at some point there will be a patent dispute. History has shown that in the US valid foreign patents are in effect considered “second class” in standing, and this attitude has been seen to migrate into places like the WTO…

You only have to look at the previous US administration’s attempts to “hog-tie” other nations’ governments in favour of US corporations via the Trans-Pacific Partnership (TPP) trade agreement, and the hidden-away Investor-State Dispute Settlement (ISDS) mechanism. Negotiated secretly without other nations’ governments having knowledge of it, but US corps sitting in the “captain’s seat” with the US Gov trade agency manning the tiller…

You can see from this what is in effect a spreading malignancy in US trade relations. Several people have noted that TPP was also designed to stop the other signatory nations trading with China and pull them more strongly under US influence. Whilst the current US administration failed to ratify and in fact withdrew from the agreement, the other nations made some changes and then ratified it. Further, some think that the signatory nations will actually find they will overall be a lot worse off, especially as Singapore appears to be the place where ISDS courts will be held.

JonKnowsNothing October 13, 2019 11:21 AM

“…anyone who’s an electronic hobbyist can do a version of this at home.”

At the CS3sthlm security conference later this month, security researcher Monta Elkins will show how he created a proof-of-concept version of that hardware hack in his basement.

With only a $150 hot-air soldering tool, a $40 microscope, and some $2 chips ordered online, Elkins was able to alter a Cisco firewall in a way that he says most IT admins likely wouldn’t notice, yet would give a remote attacker deep control.

His firewall-based attack, while far less sophisticated, doesn’t require that custom chip at all—only his $2 one. “Don’t discount this attack because you think someone needs a chip fab to do it,” Elkins says. “Basically anyone who’s an electronic hobbyist can do a version of this at home.”

[This] isn’t meant to validate Bloomberg’s tale of widespread hardware supply chain attacks with tiny chips planted in devices.

The photo is instructive. Not even any blue wires.


Clive Robinson October 13, 2019 7:18 PM

@ JonKnowsNothing,

The photo is instructive. Not even any blue wires.

Whilst there are no “blue wires”, contrary to the article author Andy Greenberg’s comment that,

    The image below gives a sense of how tough spotting the chip would be amid the complexity of a firewall’s board

It’s actually very easy for someone who has ever done PCB inspection to spot.

First off, the chip is larger than any other 8-pin chip on the board.

Secondly, if you look at the 8-pin chip to its north-east, you will see that two of the pins have a blob of solder across them, which presumably was due to not quite enough care in the rework with the hot air gun.

Thirdly, it looks like the chip has broken the PCB design rules in its orientation (it’s hard to tell because the picture is fuzzy).

But fourthly, the dead giveaway is that there are no PCB component identification details in the silkscreen (screen print) layer…
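Three of those four tells could in principle be checked automatically against a board’s CAD data (the solder blob in the second point needs optical inspection). A toy Python sketch, with every component record invented for illustration:

```python
# Toy sketch: flag board components that violate simple heuristics, loosely
# following the visual checks above. All component records are invented.

def suspicious(component, board):
    flags = []
    # Check 1: package larger than every other part with the same pin count
    peers = [c for c in board if c["pins"] == component["pins"] and c is not component]
    if peers and component["area_mm2"] > max(c["area_mm2"] for c in peers):
        flags.append("oversized for pin count")
    # Check 3: orientation breaks the board's 90-degree placement rule
    if component["rotation_deg"] % 90 != 0:
        flags.append("violates orientation rule")
    # Check 4: no reference designator in the silkscreen layer
    if not component.get("ref_des"):
        flags.append("missing silkscreen designator")
    return flags

board = [
    {"ref_des": "U6", "pins": 8, "area_mm2": 30.0, "rotation_deg": 0},
    {"ref_des": None, "pins": 8, "area_mm2": 48.0, "rotation_deg": 2},  # the add-on
]
for part in board:
    print(part["ref_des"], suspicious(part, board))
```

The point isn’t that this catches a careful attacker — it’s that the listed tells are mechanical enough that a sloppy implant should never survive even trivial screening.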


    most IT admins likely wouldn’t notice

Is likely true; few IT admins use a screwdriver professionally. They really are unlikely to break the security seals on the case etc. just out of curiosity. Even if they saw odd behaviour, IT admins are more likely to “send it back” than pull it apart.

Security personnel however might treat it differently, depending on both the size of the organisation and the size of the order.

One thing people keep forgetting when they talk about the Bloomberg story is how unusual, if not unique, it was in terms of orders.

Most IT orders are small, and only the local reseller or an online store are going to see the order placed in the ordinary course of events[1]. Which significantly reduces the supply line poisoning threat almost to the customer’s employees.

The thing is, as was noted back when Bloomberg pushed their stories, a number of people knew just how easy it is to hide a chip away under the likes of a connector etc., and some have been warning about it for some time. The problem for Bloomberg’s journalists was they took a flyer on the story without a shred of physical evidence, because of Bloomberg’s corporate policy…

The threat is real enough; the problem with this demonstration is it’s amateurish and lacked finesse, and thus was “obvious”.

What will probably happen is that people will take the wrong message away quite deliberately. That is, in order that they can carry on ignoring what is a quite complex and nigh-on impossible security issue to solve, one they don’t want to go near, the “obviousness” of the attack will be used as an argument that supply chain poisoning is too difficult to do covertly, thus won’t be done…

As the old saying has it, “It’s difficult to persuade a man he is wrong, when his salary depends on him not acknowledging that he is wrong”…

If people don’t think so, then consider Cisco’s corporate statement of,

    “We are committed to transparency and are investigating the researcher’s findings,” Cisco said in a statement. “If new information is found that our customers need to be aware of, we will communicate it via our normal channels.”

Trust me, Cisco know darn well it’s possible. After all, they have, as corporate policy, been turning a blind eye to what the NSA has been doing to the products they export for years. Yet we don’t hear a word of it via Cisco’s “normal channels”, even though we know full well they know full well it still goes on…

But then as can be seen from the last paragraph of the article, both researchers[2] know this full well.

[1] Most organisations are not specific “targets of interest” to those who are likely to do supply line poisoning. Further, as targets they are at the far end of the supply chain, as is their order. What was special about the Bloomberg story is that it was an exceptional order, and even the manufacturing organisation and more than likely some of their suppliers were aware of the exceptional order; the manufacturer almost certainly knew who the customer was.

[2] Though Hudson’s,

    The result could even be as small as a hundredth of a square millimeter … vastly smaller than Bloomberg’s grain of rice.

comment is wrong. That would be a chip the size of the end of a human hair. Whilst that would be sufficient for the internal functionality of such a chip, you’d still have to have “bond out” wires. These need quite large pads, plus I/O conversion and protection devices on a chip, for a number of reasons, which would on their own require more chip real estate.

MarkH October 14, 2019 4:45 AM

@Jon, Clive:

I’m with Clive, to my eye it stands out “like a sore thumb.” I reckon anybody in the design/build business would notice it.

The orientation of the intended chips is inconsistent: most of the East/West chips have pin 1 to the left, but U6 (for example) has it on the right, like the “parasite” chip does.

The add-on is also canted a couple of degrees clockwise … I’m used to boards with handwork, and look out for twists like that.

On the other hand, I’ve seen so many design screw-ups with cobbled workarounds, I wouldn’t necessarily assume that the “odd man out” wasn’t put on by the manufacturer 🙂

JonKnowsNothing October 14, 2019 11:27 AM

@Clive Robinson @MarkH

@Clive Robinson

Is likely true; few IT admins use a screwdriver professionally. They really are unlikely to break the security seals on the case etc. just out of curiosity. Even if they saw odd behaviour, IT admins are more likely to “send it back” than pull it apart.

Agreed that a PRO will notice, but there aren’t that many PROs about any more. A PRO might open a box and look at a motherboard, but is a PRO going to open EVERY BOX when a large number are being sent?

This is one method we KNOW the NSA-3LETS use. We know they reflash the software on the chips too. What PRO found that? What PRO would DARE to challenge what they discovered?

It wasn’t a PRO HW PCB person that confirmed it.

In the current market for mass devices, you cannot even open the back of some to replace a battery. The vast majority of consumers are not going to pull out their handy-dandy tool kits to take a peek inside their tablets/phablets/laptops, and even if they did manage to wrench off the casing, what would they see? Or rather, how would they see that something had been changed?

It’s the individual hacked devices that are the target. One CEO or Net Admin or Bank Admin is worth the hassle of some bad soldering.

That this is done, we know. We know more about the SW hacks than the HW ones.

1) reference the ANT catalog / Snowden

MarkH October 14, 2019 3:38 PM

Looking at the photo in the arsetechnica article got me to thinking …

For 5+ years now, at least one company has offered a simple microcontroller in an SOT23 (small-outline transistor, surface mount) package, which pretty accurately matches the dimensions of medium-grain rice.

The best trick would be not ADDING a package, but rather REPLACING an existing chip. It would need some good luck to find a suitable place in the circuit… but there are so many microcontrollers in so many packages that a typical gadget design might offer several opportunities.

An advantage of an SOT23 swap would be that the device markings are rather cryptic and anyway tough to read (I usually need bright light and a magnifying lens), so that nothing less than minute inspection would catch the discrepancy.

Clive Robinson October 14, 2019 5:31 PM

@ MarkH, JonKnowsNothing,

An advantage of an SOT23 swap would be that the device markings are rather cryptic and anyway tough to read

As anyone who has fallen foul of Chinese “grey market fakes” will tell you, polishing off the existing laser-etched device markings and relabelling with new laser-etched markings is not very difficult.

Thus visual inspection of the markings, whilst not entirely pointless, will not be reliable either.

Of more reliability are height measurements; that is, the height from the top of the package to its sealing edge will be a giveaway. Likewise any “rework”, even if replacing like chip for like chip, will cause not just the chip to sit at a different height to the board; the heights of the solder joints will not just be different, they will be more varied across the package legs than on other chips on the board.
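A crude version of that height check is just robust outlier detection over measured package heights. A toy sketch using Python’s stdlib; the heights are invented, with the reworked chip sitting proud of its machine-placed neighbours:

```python
import statistics

# Invented package heights (mm) above the board for five nominally
# identical chips; U5 has been reworked by hand.
heights = {"U1": 1.10, "U2": 1.09, "U3": 1.11, "U4": 1.10, "U5": 1.42}

med = statistics.median(heights.values())
# Median absolute deviation is robust against the very outlier we are hunting,
# unlike a plain standard deviation which the outlier would inflate.
mad = statistics.median(abs(h - med) for h in heights.values())
outliers = [ref for ref, h in heights.items() if abs(h - med) > 5 * mad]
print(outliers)  # → ['U5']
```

The same statistic applied per-leg to solder fillet heights would capture the “more varied across the package legs” observation as well.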

Also, “reprogramming in place” will cause characteristic scratch marks to show up, much like “pick tool marks” on locks; these can be spotted by eye, much as collectors of “pristine” model railway engines can tell if the wheels have ever run on rails (thus no longer a pristine engine).

But at the end of the day, all these differences are indicative, not probative, and often they can be subjective.

But as with “sexing chicks”, it is something that takes considerable training, and only some can achieve the level of ability needed to quickly QA boards by visual inspection. After doing it for six months when I was in my twenties, I discovered I had irreparably impaired my vision, requiring the wearing of glasses…

In theory, laser interferometry “fringing”, by what is in effect a holographic projection, can assist in all the processes above, but I’ve not seen it used / advertised for PCBs, though it is a technique used on very fine precision mechanical parts.

Sancho_P October 14, 2019 6:06 PM

First, and most importantly: what should the modification achieve?

It depends what you’d call a HW “hack”.
To add a chip (better: replace an existing chip with an “improved” version; @MarkH: an existing uC of the same type but with other FW [1]) is persistent = extremely dangerous. There are several chips where just the FW would have to be “improved”, but if it can not be deleted on the fly = dangerous.

  • With soldered HW: You just can’t deliver in bulk, too precious.
    One would have to carefully deliver the HW to the target, person or location, and take care of it after use.
    Imagine that special modified device, by chance, ending up at the depot, before or after being used by the target.
    A personal present to the King, or the American embassy, OK, that may work 😉

What about a persistent factory backdoor in all chips of a particular type (or even in OSs), say an additional, special admin key + handling?

Think of BIOS protection:
Often one can write, but can’t extract, the FW “for IP protection”:
Try “cmVpZW5oY1M5MTAy”: it will fail, but after 3 failed attempts within 5 seconds, the 4th attempt, only after waiting 10 seconds, is successful?
Other pwds, other handling / timing, who would ever find out?
It could even be delivered by a “mandatory” security update, only to chips / machines with specified serial numbers.
Easy peasy (remote) access to any device (don’t forget to delete log files afterwards).
Oh, um, it only was left over from development, sorry!
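That hypothetical unlock sequence (three quick failures, a ten-second pause, then any password accepted) takes only a few lines to express, which is part of why it would be so hard to find by black-box testing. A toy model, with the password and timings invented to match the scenario above:

```python
class CovertUnlock:
    """Toy model of the hypothetical backdoor described above: three failed
    attempts clustered within 5 s, then a pause of at least 10 s, arms the
    next attempt so that ANY password is accepted."""

    REAL_PASSWORD = "hunter2"  # invented for the demo

    def __init__(self):
        self.fails = []  # timestamps of failed attempts

    def try_password(self, pw, now):
        if len(self.fails) >= 3:
            last3 = self.fails[-3:]
            # Backdoor trigger: three fails inside 5 s, then >= 10 s of silence
            if last3[-1] - last3[0] < 5.0 and now - last3[-1] >= 10.0:
                self.fails.clear()
                return True  # any password accepted
        if pw == self.REAL_PASSWORD:
            return True
        self.fails.append(now)
        return False

lock = CovertUnlock()
print(lock.try_password("x", now=0.0))          # → False
print(lock.try_password("x", now=1.5))          # → False
print(lock.try_password("x", now=3.0))          # → False
print(lock.try_password("anything", now=13.5))  # → True: the backdoor fires
```

To any tester who doesn’t know the exact timing pattern, this device behaves exactly like an honest one, which is Sancho_P’s “who would ever find out?” point.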

Btw. the “demonically-clever-backdoor” capacitor, like the “cmVpZW5oY1M5MTAy”, is only the trigger to activate the malfunction [2].
But if the malfunction is in HW, one can not delete / overwrite it after use.
However tiny the trigger element is, would the following logic circuit be explained away as “accidental”, too?
Unfixable, too? Only Apple can survive that.

  • Malfunction is better hidden and deleted in SW / FW.

[1] It may be difficult to replace the function of a transistor by a uC + access interesting data at that point.

[2] Some chips require out of operational range supply voltages for programming, but that’s usually a specified function.

MarkH October 14, 2019 6:14 PM


I know an artisan who was trained to NASA standards, who does very fine-scale work not usually attempted by hand.

Probably she could replace a chip without differences in soldering being noticeable to me, which isn’t to say that others wouldn’t pick it up.

Your reference to “sexing chicks” might be understood in a different meaning by some readers 😉

I’ve heard that sexing hyenas — young or adult — is notoriously difficult, even to scientists who specialize in studying them … though hyenas seem to figure it out.

I remember a documentary about the unknown authenticity of a hitherto unknown canvas seemingly by a famous British painter. A museum applied a lot of tech: X rays, spectroscopic analysis of paint samples, etc.

After a few inconclusive months of this, they brought in an expert on the painter, who hadn’t even seen a photo of the new find.

He walked into the basement room where they were studying the canvas, stared at it for several seconds, and said, “well that’s never a Turner, is it?”

JonKnowsNothing October 14, 2019 7:57 PM

@MarkH @Clive Robinson @All

The best trick, would be not ADDING a package, but rather REPLACING an existing chip

Some years ago…

When computer-controlled systems first started appearing in US muscle cars, one of their “improvements” was to “throttle” parts of the engine.

A very lucrative market emerged in replacement chips that “removed” the offending “throttle” and allowed it to be reset to a more muscled value.

If you had to take your muscle car into the shop for something, you just put the original chip back while it was being serviced.

It’s part of the fight over “right to repair”.

In the USA we have a serious problem with big tractors, the kind that harvest our enormous agricultural landscape. Because they are now computer-controlled, when something goes wrong they just STOP. When the tractor stops, the harvest stops. Harvesting goes on 24/7 thanks to huge night-lighting systems, so the 24/7 part just stops too.

Farmers cannot repair the systems even if they know how or what to change. They have to sit for hours in remote fields until someone with the “Official Dx Kit” shows up to learn the “code” that indicates the failing part. Then they have to wait for that part to arrive by YouPickTheMethod.


Clive Robinson October 14, 2019 8:12 PM

@ MarkH,

A museum applied a lot of tech: X rays, spectroscopic analysis of paint samples, etc.

There is a thought process that basically states: “In the physical world only three numbers make sense: nothing, one and infinity.”

That is, whatever you can think up as a physical object, there can be no instances of it, a unique instance of it, or an unknown number between one and what under normal human understanding would be the equivalent of an infinite number[1].

Few implicitly understand that the difference between unique and infinite is our ability not to make but to measure.

That is, making things is way easier than measuring them. Take a simple injection molding, the stirring together of two different coloured paints, or the throwing down of short chopped lengths of glass fiber in a clear polymer (GRP). They are all easy to make. But the injection molding suffers from vortex effects which we have yet to classify as chaotic or random. Either way, even though the thousand moldings look identical, and there might be a thousand more, we can in theory measure the frozen vortex pattern and show that, within measurement accuracy, they fall into “sample bins”. As the measurement precision increases, the number of bins increases, until one of two things happens: either we end up with as many bins as there are objects, or we reach a limit on our ability to measure.

The latter leaves an interesting philosophical debate: is an object actually unique, or just not measurable as such? As science has decided that we live not in an analog world but a quantised one, the answer tends to the fact that it may be possible to make two or more objects identical. Now this actually favours making, not measuring: in practice you just keep making until you find two objects that you can not tell apart by measuring. The idea behind mixing paint is similar, but for various reasons thinking tends to random rather than chaotic, and importantly “unmeasurable”…

Which leaves the chopped lengths of glass fiber in clear polymer (GRP). This is known to be not just random but random at the quantum level, due to the optical measurement process, which boils down to a myriad of “slit” experiments. That is, the path a photon would take is as dependent on how you measure it as on the bulk physical properties. This means that all attempts to measure it will be genuinely uncertain, if quantum effects are true.

Which raises the question of “measurement noise”: no matter how sensitive you make a measuring instrument, its output always has a noise component… As your measurement becomes more detailed, the random noise component increases relative to the desired measurement signal. The only way to reduce “random noise” on a signal is to “average it out”. This is the direct equivalent of increasing the time gate on the measurement, which is the equivalent of reducing the measurement bandwidth. The problem is that, with the best will in the world, the averaging measurement as well as the gate control both have their own noise components. The limit on what we can measure is currently in time / frequency, and it’s around 1 in 10^15, or about 2^50. Which is not exactly very large. I routinely generate sequences of a maximal length of 2^256[2] that I feed into digital filters of 64- or 128-bit wide registers which “sum the low bits”. It’s not for cryptography but for synthetic noise, to cross-correlate and derive amplifier characteristics. Whilst it might appear overkill, I can adjust both the input and output of the digital filter from any single bit up to a range of 16 bits from anywhere in the 128-bit range, which enables the noise to be “coloured” in various ways.

[1] Think of this as the logical extension of the “successor number axiom”: that is, however many you have counted, there is effectively a 50:50 probability of there being another to count. You just can not say there is or there is not another object, and you can not assign any other probability to it, because that implies hidden knowledge, which might or might not be true in its own right.

[2] Yup, I’m never going to live that long, nor is the Sun for that matter. But jumping to different parts of the sequence allows as many “statistically independent” but “repeatable at will” tests as I want, for as long as needed, at a frequency bandwidth up well into the high-VHF / low-UHF range. Such is the joy of modern high speed microprocessors coupled up to multiple FPGAs of the sort you find in high-end SDRs.
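The “average it out” point follows the usual 1/sqrt(N) law: averaging N independent samples shrinks the random noise component by roughly a factor of sqrt(N). A quick stdlib-Python demonstration with synthetic Gaussian noise:

```python
import random
import statistics

random.seed(2019)

def reading():
    # A fixed "true" value of 1.0 buried under Gaussian measurement noise
    return 1.0 + random.gauss(0.0, 0.5)

raw = [reading() for _ in range(2000)]
# Each averaged "measurement" is the mean of N = 100 raw readings,
# i.e. a longer time gate / narrower measurement bandwidth
averaged = [statistics.mean(reading() for _ in range(100)) for _ in range(2000)]

print(round(statistics.stdev(raw), 2))       # close to 0.5
print(round(statistics.stdev(averaged), 2))  # close to 0.5 / sqrt(100) = 0.05
```

The catch Clive notes still applies: the averaging hardware and the gate control contribute their own noise, so the sqrt(N) improvement eventually stops paying off.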

Weather October 15, 2019 1:47 AM

As a PC technician, resetting CMOS if you took out the battery or put a jumper across two pins… I’m guessing that could be a way in, or at least, because of frequency, it won’t be too far away.

Bill T. October 16, 2019 9:19 AM

One of the ironies of Sue Gordon’s comment about having to assume a dirty network is that this is effectively an argument for end-to-end encryption and authentication. Obviously this isn’t a panacea (can you really trust all the components in the endpoint? What about the certificate infrastructure used to secure the communication?), but it does at least limit the damage that the network can do.
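The authentication half of that can be illustrated with just a shared-key MAC: a dirty network can still drop or delay traffic, but it can no longer silently alter it. A minimal sketch using Python’s stdlib hmac (the key and messages are invented; real systems would negotiate keys properly and add encryption and replay protection):

```python
import hashlib
import hmac

KEY = b"shared-endpoint-key"  # invented; real endpoints would negotiate this

def seal(msg: bytes) -> bytes:
    # Sender appends a 32-byte HMAC-SHA256 tag; any in-transit change breaks it
    return msg + hmac.new(KEY, msg, hashlib.sha256).digest()

def open_sealed(blob: bytes) -> bytes:
    msg, tag = blob[:-32], blob[-32:]
    # Constant-time comparison avoids leaking tag bytes through timing
    if not hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest()):
        raise ValueError("message rejected: network modified it")
    return msg

blob = seal(b"route table update")
print(open_sealed(blob))     # → b'route table update'
tampered = b"X" + blob[1:]   # a malicious hop flips a byte
try:
    open_sealed(tampered)
except ValueError as e:
    print(e)                 # → message rejected: network modified it
```

This is exactly why a compromised router in the middle is demoted from “can rewrite your traffic” to “can only break your traffic”, which is a far noisier and less useful attack.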
