Entries Tagged "Stuxnet"


Attacking PLCs Controlling Prison Doors

Embedded system vulnerabilities in prisons:

Some of the same vulnerabilities that the Stuxnet superworm used to sabotage centrifuges at a nuclear plant in Iran exist in the country’s top high-security prisons, according to security consultant and engineer John Strauchs, who plans to discuss the issue and demonstrate an exploit against the systems at the DefCon hacker conference next week in Las Vegas.

Strauchs, who says he engineered or consulted on electronic security systems in more than 100 prisons, courthouses and police stations throughout the U.S., including eight maximum-security prisons, says the prisons use programmable logic controllers to control locks on cells and other facility doors and gates. PLCs are the same devices that Stuxnet exploited to attack centrifuges in Iran.

This seems like a minor risk today; Stuxnet was a military-grade effort, and beyond the reach of your typical criminal organization. But that can only change, as people study and learn from the reverse-engineered Stuxnet code and as hacking PLCs becomes more common.

As we move from mechanical, or even electro-mechanical, systems to digital systems, and as we network those digital systems, this sort of vulnerability is going to only become more common.

Posted on August 2, 2011 at 6:23 AM

New Siemens SCADA Vulnerabilities Kept Secret

SCADA systems — computer systems that control industrial processes — are one of the ways a computer hack can directly affect the real world. Here, the fears multiply. It’s not bad guys deleting your files, or getting your personal information and taking out credit cards in your name; it’s bad guys spewing chemicals into the atmosphere and dumping raw sewage into waterways. It’s Stuxnet: centrifuges spinning out of control and destroying themselves. Never mind how realistic the threat is, it’s scarier.

Last week, a researcher, Dillon Beresford of NSS Labs, was successfully pressured by the Department of Homeland Security not to disclose details “before Siemens could patch the vulnerabilities.”

Beresford wouldn’t say how many vulnerabilities he found in the Siemens products, but said he gave the company four exploit modules to test. He believes that at least one of the vulnerabilities he found affects multiple SCADA-system vendors, which share “commonality” in their products. Beresford wouldn’t reveal more details, but says he hopes to do so at a later date.

We’ve been living with full disclosure for so long that many people have forgotten what life was like before it was routine.

Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies — who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability — and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

I wrote that in 2007. Siemens is doing it right now:

Beresford expressed frustration that Siemens appeared to imply the flaws in its SCADA systems gear might be difficult for a typical hacker to exploit because the vulnerabilities unearthed by NSS Labs “were discovered while working under special laboratory conditions with unlimited access to protocols and controllers.”

There were no “‘special laboratory conditions’ with ‘unlimited access to the protocols,'” Beresford wrote Monday about how he managed to find flaws in Siemens PLC gear that would allow an attacker to compromise them. “My personal apartment on the wrong side of town where I can hear gunshots at night hardly defines a special laboratory.” Beresford said he purchased the Siemens controllers with funding from his company and found the vulnerabilities, which he says hackers with bad intentions could do as well.

That’s precisely the point. Me again from 2007:

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers…. But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

With the pressure off, Siemens is motivated to deal with the PR problem and ignore the underlying security problem.

Posted on May 24, 2011 at 5:50 AM

More Stuxnet News

This long New York Times article includes some interesting revelations. The article claims that Stuxnet was a joint Israeli-American project, and that its effectiveness was tested on live equipment: “Behind Dimona’s barbed wire, the experts say, Israel has spun nuclear centrifuges virtually identical to Iran’s at Natanz, where Iranian scientists are struggling to enrich uranium.”

The worm itself now appears to have included two major components. One was designed to send Iran’s nuclear centrifuges spinning wildly out of control. Another seems right out of the movies: The computer program also secretly recorded what normal operations at the nuclear plant looked like, then played those readings back to plant operators, like a pre-recorded security tape in a bank heist, so that it would appear that everything was operating normally while the centrifuges were actually tearing themselves apart.

My two previous Stuxnet posts. And an alternate theory: The Chinese did it.

EDITED TO ADD (2/12): More opinions on Stuxnet.

Posted on January 17, 2011 at 12:31 PM

Cyberwar and the Future of Cyber Conflict

The world is gearing up for cyberwar. The U.S. Cyber Command became operational in November. NATO has enshrined cyber security among its new strategic priorities. The head of Britain’s armed forces said recently that boosting cyber capability is now a huge priority for the UK. And we know China is already engaged in broad cyber espionage attacks against the west. So how can we control a burgeoning cyber arms race?

We may already have seen early versions of cyberwars in Estonia and Georgia, possibly perpetrated by Russia. It’s hard to know for certain, not only because such attacks are often impossible to trace, but because we have no clear definitions of what a cyberwar actually is.

Do the 2007 attacks against Estonia, traced to a young Russian man living in Tallinn and no one else, count? What about a virus from an unknown origin, possibly targeted at an Iranian nuclear complex? Or espionage from within China, but not specifically directed by its government? To such questions one must add even more basic issues, like when a cyberwar is understood to have begun, and how it ends. When even cyber security experts can’t answer these questions, it’s hard to expect much from policymakers.

We can set parameters. It is obviously not an act of war just to develop digital weapons targeting another country. Using cyber attacks to spy on another nation is a grey area, which gets greyer still when a country penetrates information networks, just to see if it can do so. Penetrating such networks and leaving a back door open, or even leaving logic bombs behind to be used later, is a harder case—yet the US and China are doing this to each other right now.

And what about when one country deliberately damages the economy of another, as a member of China’s politburo did against Google in January 2010, according to one of the WikiLeaks cables? Definitions and rules are hard not just because the tools of war have changed, but because cyberspace puts them into the hands of a broader group of people. Previously only the military had weapons. Now anyone with sufficient computer skills can take matters into their own hands.

There are more basic problems too. When a nation is attacked in a regular conflict, a variety of military and civil institutions respond. The legal framework for this depends on two things: the attacker and the motive. But when you’re attacked on the internet, those are precisely the two things you don’t know. We don’t know if Georgia was attacked by the Russian government, or just some hackers living in Russia. In spite of much speculation, we don’t know the origin, or target, of Stuxnet. We don’t even know if last July 4’s attacks against US and South Korean computers originated in North Korea, China, England, or Florida.

When you don’t know, it’s easy to get it wrong: to retaliate against the wrong target, or for the wrong reason. That means it is easy for things to get out of hand. So while it is legitimate for nations to build offensive and defensive cyberwar capabilities, we also need to think now about what can be done to limit the risk of cyberwar.

A first step would be a hotline between the world’s cyber commands, modelled after similar hotlines among nuclear commands. This would at least allow governments to talk to each other, rather than guess where an attack came from. More difficult, but more important, are new cyberwar treaties. These could stipulate a no-first-use policy, outlaw unaimed weapons, or mandate weapons that self-destruct at the end of hostilities. The Geneva Conventions need to be updated too.

Cyber weapons beg to be used, so limits on stockpiles and restrictions on tactics are a logical end point. International banking, for instance, could be declared off-limits. Whatever the specifics, such agreements are badly needed. Enforcement will be difficult, but that’s not a reason not to try. It’s not too late to reverse the cyber arms race currently under way. Otherwise, it is only a matter of time before something big happens: perhaps by the rash actions of a low-level military officer, perhaps by a non-state actor, perhaps by accident. And if the target nation retaliates, we could actually find ourselves in a cyberwar.

This essay was originally published in the Financial Times (free registration required for access, or search on Google News).

Posted on December 6, 2010 at 6:42 AM

Stuxnet News

Another piece of the puzzle:

New research, published late last week, has established that Stuxnet searches for frequency converter drives made by Fararo Paya of Iran and Vacon of Finland. In addition, Stuxnet is only interested in frequency converter drives that operate at very high speeds, between 807 Hz and 1210 Hz.

The malware is designed to change the output frequencies of drives, and therefore the speed of associated motors, for short intervals over periods of months. This would effectively sabotage the operation of infected devices while creating intermittent problems that are that much harder to diagnose.

Low-harmonic frequency converter drives that operate at over 600 Hz are regulated for export in the US by the Nuclear Regulatory Commission as they can be used for uranium enrichment. They may have other applications but would certainly not be needed to run a conveyor belt at a factory, for example.
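To make the targeting rule concrete, here is a minimal Python sketch of the kind of predicate the researchers describe. Only the two vendors and the 807 to 1210 Hz window come from the analysis quoted above; the function and its interface are invented for illustration.

```python
# A minimal sketch (not recovered Stuxnet code) of the targeting
# predicate described above. Only the vendor names and the frequency
# window come from the published analysis; everything else is invented.

TARGET_VENDORS = {"Fararo Paya", "Vacon"}
MIN_HZ, MAX_HZ = 807, 1210

def is_target_drive(vendor: str, operating_hz: float) -> bool:
    """True only for high-speed drives from the two named vendors."""
    return vendor in TARGET_VENDORS and MIN_HZ <= operating_hz <= MAX_HZ

# A 1064 Hz Vacon drive matches; a 50 Hz factory conveyor does not.
assert is_target_drive("Vacon", 1064.0)
assert not is_target_drive("Siemens", 50.0)
```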

The threat of Stuxnet variants is being used to scare senators.

Me on Stuxnet.

Posted on November 22, 2010 at 6:19 AM

Stuxnet

Computer security experts are often surprised at which stories get picked up by the mainstream media. Sometimes it makes no sense. Why this particular data breach, vulnerability, or worm and not others? Sometimes it’s obvious. In the case of Stuxnet, there’s a great story.

As the story goes, the Stuxnet worm was designed and released by a government–the U.S. and Israel are the most common suspects–specifically to attack the Bushehr nuclear power plant in Iran. How could anyone not report that? It combines computer attacks, nuclear power, spy agencies and a country that’s a pariah to much of the world. The only problem with the story is that it’s almost entirely speculation.

Here’s what we do know: Stuxnet is an Internet worm that infects Windows computers. It primarily spreads via USB sticks, which allows it to get into computers and networks not normally connected to the Internet. Once inside a network, it uses a variety of mechanisms to propagate to other machines within that network and gain privilege once it has infected those machines. These mechanisms include both known and patched vulnerabilities, and four “zero-day exploits”: vulnerabilities that were unknown and unpatched when the worm was released. (All the infection vulnerabilities have since been patched.)

Stuxnet doesn’t actually do anything on those infected Windows computers, because they’re not the real target. What Stuxnet looks for is a particular model of Programmable Logic Controller (PLC) made by Siemens (the press often refers to these as SCADA systems, which is technically incorrect). These are small embedded industrial control systems that run all sorts of automated processes: on factory floors, in chemical plants, in oil refineries, at pipelines–and, yes, in nuclear power plants. These PLCs are often controlled by computers, and Stuxnet looks for Siemens SIMATIC WinCC/Step 7 controller software.

If it doesn’t find one, it does nothing. If it does, it infects it using yet another unknown and unpatched vulnerability, this one in the controller software. Then it reads and changes particular bits of data in the controlled PLCs. It’s impossible to predict the effects of this without knowing what the PLC is doing and how it is programmed, and that programming can be unique based on the application. But the changes are very specific, leading many to believe that Stuxnet is targeting a specific PLC, or a specific group of PLCs, performing a specific function in a specific location–and that Stuxnet’s authors knew exactly what they were targeting.
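The control flow just described is essentially fingerprint-then-act: do nothing unless the controller looks exactly like the intended target. The Python sketch below illustrates only that structure; the names, the dictionary stand-ins for PLCs, and the "fingerprint" value are all hypothetical.

```python
# Hypothetical illustration of the fingerprint-then-act structure
# described above; none of these names or values come from Stuxnet.

EXPECTED_FINGERPRINT = "target-plc-configuration"  # stand-in value

def read_fingerprint(plc: dict) -> str:
    """Stand-in for reading identifying data blocks from a PLC."""
    return plc.get("config", "")

def sabotage(plc: dict) -> None:
    """Stand-in for the very specific data changes the worm makes."""
    plc["output"] = "altered"

def run_payload(plcs: list) -> None:
    for plc in plcs:
        # If the controller doesn't match the target, do nothing at all.
        if read_fingerprint(plc) == EXPECTED_FINGERPRINT:
            sabotage(plc)

plcs = [
    {"config": "some-other-line", "output": "normal"},
    {"config": "target-plc-configuration", "output": "normal"},
]
run_payload(plcs)
assert plcs[0]["output"] == "normal"    # left untouched
assert plcs[1]["output"] == "altered"   # matched the fingerprint
```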

It’s already infected more than 50,000 Windows computers, and Siemens has reported 14 infected control systems, many in Germany. (These numbers were certainly out of date as soon as I typed them.) We don’t know of any physical damage Stuxnet has caused, although there are rumors that it was responsible for the failure of India’s INSAT-4B satellite in July. We believe that it did infect the Bushehr plant.

All the anti-virus programs detect and remove Stuxnet from Windows systems.

Stuxnet was first discovered in late June, although there’s speculation that it was released a year earlier. As worms go, it’s very complex and got more complex over time. In addition to the multiple vulnerabilities that it exploits, it installs its own driver into Windows. Windows drivers have to be signed, of course, but Stuxnet used a stolen legitimate certificate. Interestingly, the stolen certificate was revoked on July 16, and a Stuxnet variant with a different stolen certificate was discovered on July 17.

Over time the attackers swapped out modules that didn’t work and replaced them with new ones–perhaps as Stuxnet made its way to its intended target. Those certificates first appeared in January. USB propagation, in March.

Stuxnet has two ways to update itself. It checks back to two control servers, one in Malaysia and the other in Denmark, but also uses a peer-to-peer update system: When two Stuxnet infections encounter each other, they compare versions and make sure they both have the most recent one. It also has a kill date of June 24, 2012. On that date, the worm will stop spreading and delete itself.
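A hedged sketch of those two mechanisms, peer-to-peer version synchronization and the kill date, follows. The class and its interface are invented; only the behavior (two copies compare versions and both keep the newer one, and spreading stops on June 24, 2012) comes from the description above.

```python
# Sketch of the update and kill-date behavior described above. The
# class and its interface are invented; only the compare-and-adopt
# rule and the June 24, 2012 kill date come from the text.
from datetime import date

KILL_DATE = date(2012, 6, 24)

class Infection:
    def __init__(self, version: int, payload: bytes):
        self.version = version
        self.payload = payload

    def active(self, today: date) -> bool:
        # On the kill date the worm stops spreading and deletes itself.
        return today < KILL_DATE

    def sync(self, peer: "Infection") -> None:
        # When two infections meet, both end up with the newest version.
        if self.version < peer.version:
            self.version, self.payload = peer.version, peer.payload
        elif peer.version < self.version:
            peer.version, peer.payload = self.version, self.payload

a, b = Infection(1, b"old"), Infection(2, b"new")
a.sync(b)
assert a.version == b.version == 2
assert not a.active(date(2012, 6, 24))
```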

We don’t know who wrote Stuxnet. We don’t know why. We don’t know what the target is, or if Stuxnet reached it. But you can see why there is so much speculation that it was created by a government.

Stuxnet doesn’t act like a criminal worm. It doesn’t spread indiscriminately. It doesn’t steal credit card information or account login credentials. It doesn’t herd infected computers into a botnet. It uses multiple zero-day vulnerabilities. A criminal group would be smarter to create different worm variants and use one in each. Stuxnet performs sabotage. It doesn’t threaten sabotage, like a criminal organization intent on extortion might.

Stuxnet was expensive to create. Estimates are that it took 8 to 10 people six months to write. There’s also the lab setup–surely any organization that goes to all this trouble would test the thing before releasing it–and the intelligence gathering to know exactly how to target it. Additionally, zero-day exploits are valuable. They’re hard to find, and they can only be used once. Whoever wrote Stuxnet was willing to spend a lot of money to ensure that whatever job it was intended to do would be done.

None of this points to the Bushehr nuclear power plant in Iran, though. Best I can tell, this rumor was started by Ralph Langner, a security researcher from Germany. He labeled his theory “highly speculative,” and based it primarily on the facts that Iran had an unusually high number of infections (the rumor that it had the most infections of any country seems not to be true), that the Bushehr nuclear plant is a juicy target, and that some of the other countries with high infection rates–India, Indonesia, and Pakistan–are countries where the same Russian contractor that worked on Bushehr is also active. This rumor moved into the computer press and then into the mainstream press, where it became the accepted story, without any of the original caveats.

Once a theory takes hold, though, it’s easy to find more evidence. The word “myrtus” appears in the worm: an artifact that the compiler left, possibly by accident. That’s the myrtle plant. Of course, that doesn’t mean that druids wrote Stuxnet. According to the story, it refers to Queen Esther, also known as Hadassah; she saved the Persian Jews from genocide in the 4th century B.C. “Hadassah” means “myrtle” in Hebrew.

Stuxnet also sets a registry value of “19790509” to alert new copies of Stuxnet that the computer has already been infected. It’s rather obviously a date, but instead of looking at the gazillion things–large and small–that happened on that date, the story insists it refers to the date Persian Jew Habib Elghanian was executed in Tehran for spying for Israel.
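For illustration, an “already infected?” check of this kind might look like the Windows-only Python sketch below. The registry path and value name are hypothetical; only the marker value 19790509 comes from the text above.

```python
# Hypothetical sketch of an infection-marker check like the one
# described above. The key path and value name are invented; only the
# marker 19790509 is from the analysis. Requires Windows (winreg).
import winreg

MARKER = 19790509
KEY_PATH = r"SOFTWARE\Example\Marker"  # hypothetical location
VALUE_NAME = "Infected"                # hypothetical name

def already_infected() -> bool:
    """Return True if a previous copy has left the marker value."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, VALUE_NAME)
            return value == MARKER
    except OSError:
        # Key or value absent: this machine has not been marked.
        return False
```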

Sure, these markers could point to Israel as the author. On the other hand, Stuxnet’s authors were uncommonly thorough about not leaving clues in their code; the markers could have been deliberately planted by someone who wanted to frame Israel. Or they could have been deliberately planted by Israel, who wanted us to think they were planted by someone who wanted to frame Israel. Once you start walking down this road, it’s impossible to know when to stop.

Another number found in Stuxnet is 0xDEADF007. Perhaps that means “Dead Fool” or “Dead Foot,” a term that refers to an airplane engine failure. Perhaps this means Stuxnet is trying to cause the targeted system to fail. Or perhaps not. Still, a targeted worm designed to cause a specific sabotage seems to be the most likely explanation.

If that’s the case, why is Stuxnet so sloppily targeted? Why doesn’t Stuxnet erase itself when it realizes it’s not in the targeted network? When it infects a network via USB stick, it’s supposed to spread to only three additional computers and to erase itself after 21 days–but it doesn’t do that. A mistake in programming, or a feature in the code that wasn’t enabled? Maybe we’re not supposed to reverse engineer the target. By allowing Stuxnet to spread globally, its authors caused collateral damage worldwide. From a foreign policy perspective, that seems dumb. But maybe Stuxnet’s authors didn’t care.
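For concreteness, the intended limits read like the sketch below. The class and methods are invented; only the two numbers (three additional machines, 21 days) come from the paragraph above.

```python
# Sketch of the propagation limits Stuxnet was apparently supposed to
# enforce: at most three onward infections per USB infection, and
# self-erasure after 21 days. Structure is illustrative, not recovered.
from datetime import date, timedelta

SPREAD_LIMIT = 3
LIFETIME = timedelta(days=21)

class UsbInfection:
    def __init__(self, installed_on: date):
        self.installed_on = installed_on
        self.spread_count = 0

    def may_spread(self, today: date) -> bool:
        expired = today - self.installed_on >= LIFETIME
        return not expired and self.spread_count < SPREAD_LIMIT

    def record_spread(self) -> None:
        self.spread_count += 1

inf = UsbInfection(installed_on=date(2010, 6, 1))
assert inf.may_spread(date(2010, 6, 10))     # within limits
assert not inf.may_spread(date(2010, 7, 1))  # past the 21-day lifetime
```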

My guess is that Stuxnet’s authors, and its target, will forever remain a mystery.

This essay originally appeared on Forbes.com.

My alternate explanations for Stuxnet were cut from the essay. Here they are:

  • A research project that got out of control. Researchers have accidentally released worms before. But given the press, and the fact that any researcher working on something like this would be talking to friends, colleagues, and his advisor, I would expect someone to have outed him by now, especially if it was done by a team.
  • A criminal worm designed to demonstrate a capability. Sure, that’s possible. Stuxnet could be a prelude to extortion. But I think a cheaper demonstration would be just as effective. Then again, maybe not.
  • A message. It’s hard to speculate any further, because we don’t know who the message is for, or its context. Presumably the intended recipient would know. Maybe it’s a “look what we can do” message. Or an “if you don’t listen to us, we’ll do worse next time” message. Again, it’s a very expensive message, but maybe one of the pieces of the message is “we have so many resources that we can burn four or five man-years of effort and four zero-day vulnerabilities just for the fun of it.” If that message were for me, I’d be impressed.
  • A worm released by the U.S. military to scare the government into giving it more budget and power over cybersecurity. Nah, that sort of conspiracy is much more common in fiction than in real life.

Note that some of these alternate explanations overlap.

EDITED TO ADD (10/7): Symantec published a very detailed analysis. It seems like one of the zero-day vulnerabilities wasn’t a zero-day after all. Good CNet article. More speculation, without any evidence. Decent debunking. Alternate theory, that the target was the uranium centrifuges in Natanz, Iran.

Posted on October 7, 2010 at 9:56 AM

The Stuxnet Worm

It’s impressive:

The Stuxnet worm is a “groundbreaking” piece of malware so devious in its use of unpatched vulnerabilities, so sophisticated in its multipronged approach, that the security researchers who tore it apart believe it may be the work of state-backed professionals.

“It’s amazing, really, the resources that went into this worm,” said Liam O Murchu, manager of operations with Symantec’s security response team.

“I’d call it groundbreaking,” said Roel Schouwenberg, a senior antivirus researcher at Kaspersky Lab. In comparison, other notable attacks, like the one dubbed Aurora that hacked Google’s network and those of dozens of other major companies, were child’s play.

EDITED TO ADD (9/22): Here’s an interesting theory:

By August, researchers had found something more disturbing: Stuxnet appeared to be able to take control of the automated factory control systems it had infected – and do whatever it was programmed to do with them. That was mischievous and dangerous.

But it gets worse. Since reverse engineering chunks of Stuxnet’s massive code, senior US cyber security experts confirm what Mr. Langner, the German researcher, told the Monitor: Stuxnet is essentially a precision, military-grade cyber missile deployed early last year to seek out and destroy one real-world target of high importance – a target still unknown.

The article speculates that the target is Iran’s Bushehr nuclear power plant, but there’s not much in the way of actual evidence to support that.

Some more articles.

Posted on September 22, 2010 at 6:25 AM
