More on Stuxnet

Ralph Langner has written the definitive analysis of Stuxnet: a short, popular version and a long, technical version.

Stuxnet is not really one weapon, but two. The vast majority of the attention has been paid to Stuxnet’s smaller and simpler attack routine—the one that changes the speeds of the rotors in a centrifuge, which is used to enrich uranium. But the second and “forgotten” routine is about an order of magnitude more complex and stealthy. It qualifies as a nightmare for those who understand industrial control system security. And strangely, this more sophisticated attack came first. The simpler, more familiar routine followed only years later—and was discovered in comparatively short order.


Stuxnet also provided a useful blueprint to future attackers by highlighting the royal road to infiltration of hard targets. Rather than trying to infiltrate directly by crawling through 15 firewalls, three data diodes, and an intrusion detection system, the attackers acted indirectly by infecting soft targets with legitimate access to ground zero: contractors. However seriously these contractors took their cybersecurity, it certainly was not on par with the protections at the Natanz fuel-enrichment facility. Getting the malware on the contractors’ mobile devices and USB sticks proved good enough, as sooner or later they physically carried those on-site and connected them to Natanz’s most critical systems, unchallenged by any guards.

Any follow-up attacker will explore this infiltration method when thinking about hitting hard targets. The sober reality is that at a global scale, pretty much every single industrial or military facility that uses industrial control systems at some scale is dependent on its network of contractors, many of which are very good at narrowly defined engineering tasks, but lousy at cybersecurity. While experts in industrial control system security had discussed the insider threat for many years, insiders who unwittingly helped deploy a cyberweapon had been completely off the radar. Until Stuxnet.

And while Stuxnet was clearly the work of a nation-state—requiring vast resources and considerable intelligence—future attacks on industrial control and other so-called “cyber-physical” systems may not be. Stuxnet was particularly costly because of the attackers’ self-imposed constraints. Damage was to be disguised as reliability problems. I estimate that well over 50 percent of Stuxnet’s development cost went into efforts to hide the attack, with the bulk of that cost dedicated to the overpressure attack which represents the ultimate in disguise—at the cost of having to build a fully-functional mockup IR-1 centrifuge cascade operating with real uranium hexafluoride. Stuxnet-inspired attackers will not necessarily place the same emphasis on disguise; they may want victims to know that they are under cyberattack and perhaps even want to publicly claim credit for it.

Related: earlier this month, Eugene Kaspersky said that Stuxnet also damaged a Russian nuclear power station and the International Space Station.

Posted on November 29, 2013 at 6:18 AM


mesrik November 29, 2013 12:04 PM

I haven’t read this updated version yet, but I’m curious how he came to the conclusion that “… having to build a fully-functional mockup IR-1 centrifuge cascade operating with real uranium hexafluoride.”

I’m thinking of the possibility of simply using large enough (super)computing capability to model the problem, test, and then better understand what the best stealth strategy is to screw up the enrichment process and stay undetected as long as possible.

Or is there some obvious problem with that route I simply can’t see or underestimate?

Terry Cloth November 29, 2013 2:04 PM


There are two problems with simulations: First, for complex processes, fully-accurate simulation may require computing resources more expensive than the actual hardware to run the process.

Second, simulation of complex processes is hard. You now have two hard software problems: building the attack code, and building the simulator. To build a sufficiently-accurate simulation, you must minutely characterize the hardware operation. Between the cost of characterization (which requires access to the hardware) and the danger of introducing errors into the simulator, you might as well just use the actual device. Simpler, pretty-much guaranteed accurate, and possibly cheaper.

arkanoid November 29, 2013 2:14 PM

The analysis is good. Except for one conclusion: that if you cannot rely on antivirus, firewalls, and patching, the defence goes “beyond conventional infosec wisdom”.

Since when does not being a moron go beyond conventional infosec wisdom?

Since when is anything you do not need when protecting a grocery store “beyond conventional wisdom”? Since when are industry standards that low?

Terry Cloth November 29, 2013 2:21 PM

@mesrik, cont’d:

Of course the danger in using the real thing is that in development, the software may push it beyond its limits, and suddenly you need a second expensive piece of equipment, with the possibility that you may need a third, &c.

cyberdud November 29, 2013 2:29 PM

  1. Background: Stuxnet’s architecture is two-step: infect the controller computers (essentially PCs) that are part of the plant computer network and from there modify the SCADA programmable logic controllers (PLCs) that control the industrial processes. The damage is done at the PLC industrial process level, not at the plant network level. After preliminary network infection, the virus looks for certain signatures to make sure that it’s dealing with the right installation.

cyberdud November 29, 2013 2:30 PM

  2. So even if Stuxnet were to infect a cement plant or even someone else’s nuclear plant, it wouldn’t find the appropriate conditions for changes to be made to the PLCs. It probably wouldn’t even find the PLC configurations it is looking for to be able to make its changes. So, to continue the medical metaphor of a ‘virus infection’, it would be like having an infection by a harmless virus. Of course, since at the network level Stuxnet installs a Trojan with a command & control channel to its virus-masters, they can thereby download other viruses and both study the plant and modify the plant network and the SCADA PLCs.
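The dormancy behaviour described above can be sketched as target fingerprinting: the payload stays inert unless the observed environment matches a precise profile. This is an illustrative sketch only; every field name and value below is hypothetical, not taken from Stuxnet.

```python
# Hypothetical target fingerprint: the payload activates only when every
# field matches exactly. A cement plant or an unrelated nuclear plant
# would fail the check and see nothing but inert data.
EXPECTED_FINGERPRINT = {
    "plc_model": "S7-315",            # hypothetical controller model
    "block_count": 28,                # hypothetical number of code blocks
    "device_ids": {0x7050, 0x9500},   # hypothetical peripheral device IDs
}

def environment_matches(observed: dict) -> bool:
    """Return True only if every fingerprint field matches exactly."""
    return all(observed.get(k) == v for k, v in EXPECTED_FINGERPRINT.items())

def maybe_activate(observed: dict) -> str:
    # Any mismatch keeps the payload dormant.
    return "activate" if environment_matches(observed) else "dormant"
```

The exact-match-or-do-nothing design is what makes an accidental infection of the wrong plant largely harmless at the process level.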

cyberdud November 29, 2013 2:31 PM

  3. Conclusion: What Langner is obfuscating is the result of an unplanned infection of another industrial plant. In general, the only result will be having a Trojan on the plant network, a Trojan which might or might not be dormant because the virus-masters have no interest in the plant. Of course, as Langner points out, the virus-masters might be able to do a sort of traffic analysis, perhaps to disclose other, secret installations.

Godel November 29, 2013 3:15 PM

I happened to read Langner’s piece last night. Reading between the lines, he seems to be saying that the second phase was actually directed at US politicians as a demonstration piece, with the aim of raising more funding for cyber warfare projects.

In so far as the second phase of Stuxnet was much more detectable, it was actually a failure for the purposes of disabling Iran’s nuclear efforts, long term.

JoeV November 29, 2013 7:20 PM

Regarding the need to build an operational centrifuge stage in order to test the infective software: an alternative gas with properties similar to UF6 is WF6 (tungsten hexafluoride), commonly used in the semiconductor industry but lacking the obviously problematic nuclear properties of UF6, implying that the requirements for testing would be significantly lightened.

65535 November 30, 2013 12:45 AM

@ cyberdud

“Conclusion: What Langner is obfuscating is the result of an unplanned infection of another industrial plant. In general, the only result will be having a Trojan on the plant network, a Trojan which might or might not be dormant because the virus-masters have no interest in the plant. Of course, as Langner points out, the virus-masters might be able to do a sort of traffic analysis perhaps to disclose other, secret installations.”

That is a good point. I also notice Langner makes some disjointed or conflicting statements (please excuse my less than perfect post).

I looked at Wikipedia for more information and here is what it said:

“A study of the spread of Stuxnet by Symantec showed that the main affected countries in the early days of the infection were Iran, Indonesia and India:
Iran 58.85%, Indonesia 18.22%, India 8.31%, Azerbaijan 2.57%, USA 1.5%, Pakistan 1.28%, others 9.2%”

“Stuxnet attacked Windows systems using an unprecedented four zero-day attacks (plus the CPLINK vulnerability and a vulnerability used by the Conficker worm). It is initially spread using infected removable drives such as USB flash drives, and then uses other exploits and techniques such as peer-to-peer RPC to infect and update other computers inside private networks that are not directly connected to the Internet. The number of zero-day exploits used is unusual, as they are highly valued and malware creators do not normally waste the use of four different ones in the same worm. Stuxnet is unusually large at half a megabyte in size, and written in several different programming languages (including C and C++) which is also irregular for malware. The Windows component of the malware is promiscuous in that it spreads relatively quickly and indiscriminately. The malware has both user-mode and kernel-mode rootkit capability under Windows, and its device drivers have been digitally signed with the private keys of two certificates that were stolen from separate well-known companies, JMicron and Realtek, both located at Hsinchu Science Park in Taiwan. The driver signing helped it install kernel-mode rootkit drivers successfully without users being notified, and therefore to remain undetected for a relatively long period of time. Both compromised certificates have been revoked by VeriSign.”

“Two websites in Denmark and Malaysia were configured as command and control servers for the malware, allowing it to be updated, and for industrial espionage to be conducted by uploading information. Both of these websites have subsequently been taken down as part of a global effort to disable the malware…”


“The attackers simply lacked the technical capability to call the attack off.”

[Supposedly, the C&C server had been cut – yet you can just delete a DLL]

“All one would have needed to do is make sure that the computers used for re-configuration were clean, which didn’t even afford sophisticated anti-virus software but could be done simply by checking for the presence of a malicious file (s7otbxsx.dll) by a simple filename search, using nothing but software tools (Explorer) available as part of the operating system.”
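A minimal sketch of the filename check Langner describes, walking a directory tree and flagging any copy of the known-bad filename. The filename is the one named in the quote; the search root and any paths are examples.

```python
import os

def find_file(root: str, bad_name: str = "s7otbxsx.dll") -> list:
    """Walk the tree under `root` and return paths of files whose name
    matches `bad_name` (case-insensitive, as Windows filenames are)."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower() == bad_name.lower():
                hits.append(os.path.join(dirpath, name))
    return hits
```

An empty result suggests a clean machine, at least with respect to this one known artifact; it is obviously no defence against a renamed or as-yet-unknown file.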

[Next: anti-virus will not help]

‘Anti -virus software doesn’t help against a Stuxnet-like attack for a simple reason. It is based on identifying and blocking known malware that is listed in the AV solution’s signature database. Unfortunately there will be no signature for custom-built malware that doesn’t display any strange behavior on average computer systems. As a case in point, the first Stuxnet variant was kind of rubbed into the face of the AV industry in 2007 but was identified as malware not earlier than six years later, using the knowledge gained from analyzing later variants. Malware designed like this first version is pretty much indistinguishable from a legitimate application software package and thereby flying below the radar of anti-virus technology.’
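The point about signature databases can be reduced to a toy sketch: detection is a lookup against known byte patterns, so a custom-built sample that appears in no database scans clean by construction. The signatures and names below are made up for illustration.

```python
# Toy signature database: byte patterns mapped to malware family names.
# Real AV engines are far more elaborate, but the limitation is the same.
SIGNATURE_DB = {
    b"\xde\xad\xbe\xef": "KnownWorm.A",
    b"\x90\x90\xcc\xcc": "KnownDropper.B",
}

def scan(sample: bytes):
    """Return the name of the first matching signature, or None."""
    for pattern, name in SIGNATURE_DB.items():
        if pattern in sample:
            return name
    return None  # unknown custom malware falls through undetected
```

This is why the 2007 Stuxnet variant could sit unrecognised for years: nothing in any vendor database matched it until later variants supplied the patterns to look for.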

[Here is what Wikipedia indicated]

“…researchers at Symantec have uncovered a version of the Stuxnet computer virus that was used to attack Iran’s nuclear program in November 2007, being developed as early as 2005, when Iran was still setting up its uranium enrichment facility. The second variant, with substantial improvements, appeared in March 2010, apparently because its authors believed that Stuxnet was not spreading fast enough; a third, with minor improvements, appeared in April 2010.”

“The worm contains a component with a build time-stamp from 3 February 2010. In the United Kingdom on 25 November 2010, Sky News reported that it had received information from an anonymous source at an unidentified IT security organization that Stuxnet, or a variation of the worm, had been traded on the black market.”

[Who is right?]

‘Are Nation – State Resources Required to Pull off Similar Attacks against the US or Their Allies?’

[No – yes]

‘Can Technical Security Controls Block Stuxnet-Like Attacks?’

[No. It’s all marketing vapor. Network segregation? No. Intrusion detection? No. Air gaps? No. Security patches? No. Anti-virus? Again, no. See page 18, ‘Misconception about Stuxnet Operation and Impact’]

This is a head-scratcher. If this rootkit/worm can’t be stopped, why haven’t the Iranians unleashed this against the NSA?

Wesley Parish November 30, 2013 1:32 AM

‘Are Nation – State Resources Required to Pull off Similar Attacks against the US or Their Allies?’

[No – yes]

Interesting question. What do you think? If a supercomputer’s necessary to compute a nuclear test simulation, and a botnet can be configured into a faux-supercomputer, and the powers-that-be are happy to permit the setting-up of botnets for their own purposes, then there’s every reason why one need not be a power-that-be to use nation-state-sized resources to launch such an attack. All one needs to do is to have power-that-wannabe-sized ambitions and make use of the Stuxnet functionality to establish oneself in an NSA-sourced botnet, then make use of its resources to attack …

Pauly caught a bullet
But it only hit his leg
Well it should have been a better shot
and got him in the head

Cinnamon and sugary
And softly spoken lies
You never know just how you look
Through other people’s eyes

Clive Robinson November 30, 2013 6:37 AM

With regards the physical hardware model used, and the question of why not a software simulation.

All software simulations of physical systems are based on existing hardware that has been tested. Obviously the development of the software and its accuracy is critically dependent upon this testing.

You need to study the history of these centrifuges to see why a physical model was required.

The centrifuge design was supplied/sold by a Swiss company set up by the “father of Pakistan’s A-bomb”, A. Q. Khan. He did not actually design it himself but stole the design from Europe, and made little or no modification to the actual centrifuge, but did develop drive and control systems around commercially available parts.

We know this from a prototype system supplied by the company, intended for Libya, that was impounded by customs; as usual, the US ensured it got a very good look at it prior to the UN agencies etc.

However, we know from publicly available information that Iran no longer used the Type 1 Khan centrifuge but one that they had modified, along with North Korea.

This Type 2 system was thus sufficiently different from the Type 1 that the existing software model was no longer suitable.

The US obtained copies of the Type 2 drawings (supposedly) from UN inspectors, and a physical copy was built to produce a new software model.

Having built the hardware what do you do with it?

Mothball it or find some other use…

Thus as it was available it was used as opposed to wasting storage space.

Jiadran November 30, 2013 7:28 AM

After reading the long technical version (it’s really interesting) I have a slightly different theory of what might have happened. Langner thinks that the tactics (and probably the team as well) changed over time. Based on his observations I propose the following (conspiracy) theory:

The attacks on the enrichment plants have been going on much longer than anyone so far claims, maybe since the beginning. That’s why Iran’s progress was so much slower than what the Pakistanis managed to do (the first-generation centrifuges are supposedly extremely tricky). Instead of discovering the initial attack (described in the document), the Iranians compensated for the seemingly random problems by including additional control measures not present in the design from Pakistan: shut-off valves to quickly isolate a malfunctioning centrifuge, and over-pressure valves. It took them ten years instead of the Pakistanis’ two, but they still managed to get enrichment started. Maybe with their added failure-tolerant design the original attacks didn’t work anymore, or there was a leadership change (as Langner speculates). Maybe the Iranians suspected something and changed procedures for contractors and workers as well (Langner thinks that the initial attack was carried out with direct access to the system, while the later attack had to somehow find a way in). Maybe the initial team was the Israelis, who wanted to remain hidden, and when their approach didn’t work anymore they asked the Americans for help, who used the NSA’s attack library for a way across the air gap. The Americans would probably also be less worried about remaining hidden and maybe actively wanted to send a message.

Although admittedly pure speculation, I think this scenario fits the known facts and observations. I’m curious to see what you think of this 😉

PS: Full disclosure: I submitted this comment to Slashdot but didn’t get any meaningful replies.

Spaceman Spiff November 30, 2013 8:15 AM

“Related: earlier this month, Eugene Kaspersky said that Stuxnet also damaged a Russian nuclear power station and the International Space Station.”

The results of the Law of Unintended Consequences…

YouDon'tSeeIt-NowYouDo November 30, 2013 9:09 AM

The short form of the article is blocked by an FP dialog requesting that one register to get access. Don’t bother.

Do an Internet search for parts of the title, and you get access without registration.

CallMeLateForSupper November 30, 2013 10:59 AM

“Do an Internet search for parts of the title, and you get access without registration.”

Turn OFF JavaScript and you get access without doing internet searches for parts of the title. 🙂

Sean November 30, 2013 1:37 PM

The reality is “there is no airgap”. And there never was.

Security is easily breached by convenience and the need for maintenance by people so conditioned to doing it that they don’t even think of the consequences.

I’ve seen contractors plug USB drives directly into the company server without asking permission, outside salespeople searching for a network jack and then wondering why they can’t get internet access because the DHCP they unknowingly expected was not enabled.

All security can be breached merely by doing an “end run” around it.

Clive Robinson December 1, 2013 3:53 AM

@ Sean,

    The reality is “there is no airgap”. And there never was.

Whilst the rest of what you say is true enough, the reality is there are probably more air-gapped computing systems than not.

And whilst most people don’t realise it, the fight to breach the air gap, to get really invasive access to your second-by-second personal life, mandated by governments, is starting to be the new battleground, all in the name of the environment…

When you look around your home, just about every appliance, be it white goods or brown goods, made since 2000 has a computer in it, which means, after you tot them up, the average home has over ten times as many “unseen computers” embedded in it as it does the ones people recognise, such as a desktop / laptop / netbook / pad / smart phone.

Various criminals want access to all computers: some in your own government, some in other governments, some whom most would think of as criminals, and quite a few others whom most would only consider an annoyance. They all glibly talk of the “Internet of Things” (IoT).

In order to look “green”, various politicos have talked up not just “Smart Metering” but “Smart Control” of your energy consumption, be it electricity or gas, and have put in place, or are in the process of putting in place, legislation to enforce this on the general populace. Look at that legislation as being the “CALEA of IoT”. Like CALEA, access is mandated and it’s at your expense, and you will be the one breaking the law if you try to prevent it. Unfortunately the level of spying possible is based not just on the power signature of your equipment but on what manufacturers end up having to install to be compatible with your energy supplier, who is likely to want bells, whistles, or whatever other features appear in the wildest fantasies of their marketing and value-added departments to turn themselves into Google-like entities.

But it’s not just your home government and energy suppliers: we have also recently seen Asian-manufactured home goods with hidden WiFi, including such items as “steam irons”. More noticeable are “entertainment systems” that have to do an ET (call home) before they will work.

The point is that for the manufacturers of all these home goods, economy of scale is king, and it’s the same with the silicon chip suppliers and their SoCs. It’s cheaper to make a one-size-fits-all SoC and then dumb it down with software configuration than it is to make different chips. The home goods manufacturers won’t write the software; they will just take a “glob” from the SoC supplier and write a few bits around it (just as we see with smart phones).

Ask yourself how long it will take for all of this to be subverted and used by criminals, such as your government, or some other country’s government, and the advertisers, scammers, or your more traditional criminal.

Then consider that the same SoC chips will end up in medical equipment, and in the pacemakers etc. that current US health care companies are paying to put in you (as it saves them money), and ask the question again…

65535 December 1, 2013 5:23 AM

Sorry about the delay. With the long weekend and family I was hampered getting to the comments.

@Wesley P

Yes, given distributed computing it may be possible to make use of Stuxnet-NSA style attacks. But I would have guessed that the major AV vendors would have fingerprinted all Stuxnet types by now (but who knows).

@ Jiadran

Yes, different teams with different methods of attack could have re-compiled the virus and introduced it in a different fashion. To add to your speculation, maybe a number of AV vendors were in on the pad with the NSA and simply did not provide a solution (the NSA has a huge budget and plenty of customers).

@ Spaceman

Did Eugene Kaspersky provide a method of cleaning the virus from said Space Station or the nuclear power plant? If he did then he would have been one of the few that provided an AV solution to his customers.

@ Sean and Clive

In the long run there really is no air gap. If enough technicians use enough DVDs, CDs, and memory sticks on a system it could become infected.

Clive, you could be on to something regarding mandated “Smart Meters”, which could use IP over power-line transmission. There are a lot of home appliances with small to medium motherboards in them which are rarely checked for viruses. Further, the CALEA problem seems to be the NSA’s ultimate loophole. Things are not looking good.

Neuromancer December 1, 2013 9:42 AM

@JoeV and others

1. Uranium hexafluoride and tungsten hexafluoride have very different physical properties, so that’s out.

2. You need to build the thing rather than just run a CFD simulation, as you have to have proved the model before it’s valid. My first job was working at the world-leading fluid dynamics place, and there is a reason that they had several large lab buildings at CIT full of physical models.

Carl 'SAI' Mitchell December 1, 2013 2:40 PM

Stuxnet also demonstrated a lack of foresight on the part of the centrifuge designers. It should not be physically possible to spin such equipment fast enough to cause damage. Either a smaller motor, a better frame, or a mechanical governor could prevent both sabotage and ham-fisted errors.
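The governor idea above can be illustrated even in software: a clamp close to the hardware bounds any commanded speed, so a rewritten setpoint cannot push the rotor past its mechanical limit. The limits below are made-up numbers, not real IR-1 parameters.

```python
# Hypothetical mechanical safe range for the rotor, in rpm.
MIN_RPM = 0
MAX_RPM = 63_000

def clamp_setpoint(requested_rpm: float) -> float:
    """Bound any requested rotor speed to the mechanically safe range.

    Placed close to the drive hardware, this limits damage even when the
    supervisory layer (and hence the setpoint) has been compromised."""
    return max(MIN_RPM, min(MAX_RPM, requested_rpm))
```

A purely mechanical governor is stronger still, since software anywhere in the stack can in principle be subverted; the clamp only shows the shape of the defence.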

Dirk Praet December 1, 2013 7:33 PM

I don’t entirely agree on the “Can Technical Security Controls Block Stuxnet-Like Attacks?”-parts. There certainly are a number of simple things that can be done to make it harder:

  1. What kind of military organisation allows contractors to plug in USB sticks or mobile devices on a high-security network? That’s just asking for problems. In a non-MLS environment, data diodes or data pumps come to mind. If it’s Windows machines, some simple GPOs can be implemented to achieve just that.
  2. Any setup that sensitive needs some serious Tripwire-like security configuration management, monitoring, logging, and auditing. This can be complemented with heuristic anti-virus/malware scanning. As soon as any new and unknown file appears or an integrity check of existing ones fails, an alert is raised.
  3. Restrictive egress firewall rules: why can a high-security network or facility contact C&C servers in Denmark or Malaysia, or anything on the internet, for that matter?
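The Tripwire-style check in point 2 can be sketched in a few lines: hash every monitored file, compare against a stored baseline, and alert on anything added, removed, or modified. The monitored directory and baseline handling are illustrative; a real deployment would protect the baseline itself from tampering.

```python
import hashlib
import os

def snapshot(root: str) -> dict:
    """Map each file path under `root` to its SHA-256 digest."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                digests[path] = hashlib.sha256(fh.read()).hexdigest()
    return digests

def compare(baseline: dict, current: dict) -> dict:
    """Return alerts: files added, removed, or modified since baseline."""
    return {
        "added":    sorted(set(current) - set(baseline)),
        "removed":  sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in baseline.keys() & current.keys()
                           if baseline[p] != current[p]),
    }
```

Against a Stuxnet-style dropper, the appearance of a new DLL or a changed digest on an existing one would raise an alert regardless of whether any AV signature exists for it.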

Malware that’s hidden in the hardware is an entirely different story, but that was not the case here. Personally, I think the Iranians’ ITSec was seriously sub-standard.

OfficerX December 2, 2013 4:54 AM

I thought Kaspersky said that Stuxnet was detected at those sites (not that it damaged them, a big difference). Even “badly infected”, for me, falls under detected unless we get info on impact.

Tim#3 December 2, 2013 9:00 AM

I find the infiltration model fascinating. This whole system, which as the article describes took “vast resources” to develop, has been created on the single assumption that some people’s IT security behaviour will be so lax that it will enable it to infiltrate the target sites. That’s quite some assumption to take to a sponsor.

Clive Robinson December 2, 2013 1:05 PM

@ Tim#3,

    This whole system, which as the article describes took “vast resources” to develop, has been created on the single assumption that some people’s IT security behaviour will be so lax that it will enable it to infiltrate the target sites

Yes, because it’s an IT Industry “truism” that “IT Security” is an oxymoron.

The reality is the problem of “root”, which is a single point of failure. Once the process acting on behalf of an entity (human or otherwise) gets root privileges, it can do what it wants to do in the system… And for reasons too long and boring to explain, “root” is a problem all computers have, as it’s more or less baked in at the hardware layer (and before you ask, no, it does not have to be that way; it’s just the way humans fail that makes it so).

And as Ed Snowden has subsequently shown, the underlying reality is also true of the NSA…

Thomas Archer April 8, 2015 6:44 AM

I was really confused when I saw this article. Several years ago I watched a video in which Bruce Dang, a member of a Microsoft research team, mentioned “we knew about Stuxnet but we weren’t allowed to talk about it till now (2010)”. But nowadays, when everybody knows about Stuxnet and its mission is finished, why hasn’t Microsoft patched it for good? Is there something that remains unmentioned about the Stuxnet mission?
I searched a lot to find the video I had watched several years ago, which contained much more knowledge about the role of Microsoft research in the responsibility of postponing patching of the Stuxnet vulnerabilities, but I couldn’t find it. If someone has any idea please contact me in

Trevor January 25, 2019 10:43 AM

If you stand back far enough, you realize that this is your sworn enemy’s technology running a manufacturing plant/system that they’d prefer you not to have. In this case, running Windows and using western hardware and software means that your network is ALREADY compromised. Air gap or not, AV or not, you have exposed yourself. Until people stop using technology built by their “foes”, the risk of infiltration, spying, and general maleficence towards you will remain elevated. I suppose, yes, you need to develop your own Linux distro or use simple electronic devices/hardware that can be created end to end.
