34 SCADA Vulnerabilities Published

It's hard to tell how serious this is.

Computer security experts who examined the code say the vulnerabilities are not highly dangerous on their own, because they would mostly just allow an attacker to crash a system or siphon sensitive data, and are targeted at operator viewing platforms, not the backend systems that directly control critical processes. But experts caution that the vulnerabilities could still allow an attacker to gain a foothold on a system to find additional security holes that could affect core processes.

Posted on April 1, 2011 at 6:58 AM • 11 Comments

Comments

mdb • April 1, 2011 7:13 AM

I used to install these systems at water and wastewater plants. Everyone I worked with kept them on a separate network with no outside access. That was 10 years ago, so I don't know if that has changed. There were also the usual operator issues (e.g., plugging the computer into another jack), but it is one reason I think Stuxnet was designed and released the way it was.

Jeroen • April 1, 2011 7:37 AM

Most plants these days are connected to the Internet somehow. There are products for global, cross-plant monitoring for business intelligence applications, interaction with ERP systems, etc., all of which require networking the hardware. Most of the names mentioned in the article offer these products, and if their SCADA offerings contain vulnerabilities, it's a good bet their other products have them as well, which could be potentially harmful. For example, subtle changes in an ERP system could result in material or parts being under- or over-stocked. Many systems also have maintenance and/or administrator modes that allow manual control of the machinery, which provides a lot of opportunity for mischief. And while truly safety-critical actions should be prevented by hardware interlocks, which cannot be overridden through software, I'm sure a hacker able to gain that sort of control can cause a lot of trouble.

Clive Robinson • April 1, 2011 9:16 AM

These are vulnerabilities that can be exploited, plain and simple, and that makes them a problem in two ways:

1. Patching existing systems.
2. External access.

The first issue is why, even if patches become available, they may not be applied.

We talk glibly of CRM, human resources, web portals, et al. as being "mission critical"; well, if these systems break after a patch has been applied, they can be restored quickly from a backup taken just before the patches were applied. Less than a day's outage if people have done things sensibly.

Now think of an oil terminal with attached cracking plant, or a steel works or large manufacturing automated production line. If a patch breaks the "plant" then there is no "roll back" option...

It might even cause some plants to rapidly transport parts of the plant or materials being processed well beyond the plant perimeter...

Think of the Italian TCP plant where a closed valve caused the reactor vessel temperature to rise, first causing a ring closure that turned the TCP into the poison dioxin, then producing sufficient temperature and overpressure for dioxin-laden material to be atmospherically vented, where it went on to poison many, many people who were lucky if all they suffered was bad disfigurement from chloracne.

So it is a really good bet that very few of these systems will be patched; what will happen instead is a piecemeal upgrade as parts get replaced. Which means some systems will potentially remain vulnerable for 15 or more years.

Which brings us onto the second issue of how a vulnerable system can be exploited.

The old Security 101 way was the "air gap," but as I have been saying for quite some time (with the likes of voting machines), and as Stuxnet proved by crossing the air gap either inwards, or worse both inwards and outwards (some APT-type prototypes I've built while testing covert control channels do this), the air gap is far from impregnable.

So the question becomes: how do you isolate a box that is known to be vulnerable but can't be patched, when the exploits are already out there...

The simple answer is that it's a very, very hard problem, because fallible humans are involved, and some can be persuaded to deliberately attack systems to their own advantage.

BF Skinner • April 1, 2011 11:43 AM

@Clive "good bet that very few of these systems will be patched"

No, I agree, Clive. After years of being told a system outage would have catastrophic consequences, when one did finally happen it was no biggie. And that MISSION CRITICAL system was down a week.

But I agree that ICS systems really are in that category.

So here's the thing I've pondered about these.

How can you take a vital function in an ICS, make it dependent on a general-purpose computer operating system (Windows CE, anyone?), and then have NO redundancy or failover? Why aren't these backed by a standby device (at whatever level): first, so you can take the device offline to patch it; second, to fail back to; and third, to provide hot/hot standby in case the thing fails in the first place?

JP • April 1, 2011 12:34 PM

@BF

In every system with some level of size and criticality I have seen, these systems are designed first for availability, second for integrity, and then finally for confidentiality.

Operator stations are redundant if the system has any size and it matters.

Engineering workstations and historians generally have redundant capabilities in separate PCs (risk point) though the engineering workstation capabilities may not be fully redundant.

Field process controllers are what actually turn the valves and move the arms. These are rarely running Windows or any general-purpose operating system; they may run a custom OS or embedded Unix. On DCS-based systems they are almost always redundant. If PLC-based, they are less likely to be redundant and only occasionally run IP.

Do you want to patch half of a redundant system and not the other half, leaving yourself open to killing people when the two halves are inconsistently trading vital data?

I have done it, and on one occasion regretted it, when the patch broke some code in a way that did not appear until after the next config update. The patch disabled two emergency safeties on an ammonia system without indication. It was dangerous for the operators during an E-Trip at 2 AM. You can do it, but you have to be extremely careful and consider the risks, which are often hidden and surface much later.

- The average Instrument and Controls engineer does not have the capability to do this.
- The average IT person does not have the capability to even know what would happen if he breaks something.
- The average manager is not willing to lose a few hundred thousand in production to patch the system for an attack that probably will never happen.
- The average customer is not willing to put up with having his electricity or water or sewer or.... be turned off when the patch goes wrong.

Everything about this is risk management.

Doug Coulter • April 1, 2011 2:17 PM

I have some friends in the business of building plants. More often than not, it's: build the plant, you've got X budget, that's it, and you have to cover any overruns or delays within that.

Any money left over goes to the guys who built the plant on contract, so they have real good reasons not to waste a dime.

Almost always, there is manual control for things in case some of the automation goes down. The issue even then is whether there is anyone who can run the place manually: once in operation, there are usually too few people trained in the process to run the plant, though there are usually enough to shut it down safely (how hard that is depends on the process).

Of course, once a plant is operational, the bean counters resume control, and good luck with trying to convince them that they need more safeties and backups before the plant has broken even. And once it gets to that point, it's been running awhile without that, so they become even harder to convince to add that stuff - even if it were free, taking down the plant for a day or few to do it isn't free.

Since the line people who actually understand the process are... well, not the most intellectual-appearing folks on the planet (they tend to be really savvy but don't impress the suits), they don't have the communication skills to get the bean counters to approve things like this. So we go on, waiting for the inevitable disaster, prevented most of the time only by the savvy operators whose lives tend to be on the line.

This is based on discussions I've had over the years with a long-term friend and fellow engineer who now builds these plants on contract for big firms, be it oil refinery stuff, ethanol plants, solvent recovery plants, or some kind of chemical processing.

He's a good guy, and when he can, applies very high human productivity for the money to make room in the budget for the safeties and backups. But it's not always possible in that game, and I get the feeling he's the exception.

That's one hard business. It can take months to do all the work to bid on a fixed-price contract, and he's been sued for backing out of one when the price of stainless steel doubled between the time he bid and the time the bid was accepted. But he had no choice but to walk away -- he couldn't come up with the difference, and saw it as more honest to just not take the job after all.

The court agreed -- after considerable time and legal expense. Fortunately most customers are a little more reasonable, but not all that reasonable, if you understand my meaning.

I had pointed out a couple of years back (maybe 3-4) how vulnerable SCADA systems were, and Bruce pooh-poohed me then, saying nothing was Internet-connected. Well, I was right then -- and it was connected then, so the bean counters could watch the plants run in real time, change things (sometimes causing issues), and require fewer paid on-site workers to get the job done at the highest profit.

At least in the earlier days, most of this was done over dedicated lines, but the switch to the net caught Bruce out just like grocery-store scanners surprised Bush 1....

Thomas B. • April 2, 2011 10:01 PM

Sure, backend hacks are bad, but vulnerabilities in operator viewing platforms are extremely dangerous.

Go ahead and tell an operator that a certain tank is empty when it's actually full. Turn off warning indicators for system instability. Systematically under-report all changes in system state so the operator will overcompensate when trying to fix the problem.

Very bad things will happen.

This isn't all theory. This is basically what happened in 2005 at the Texas City BP refinery. The refinery exploded, killing and injuring scores.
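Thomas's under-reporting scenario is easy to model. The toy sketch below is pure illustration: every function and variable name is invented, and no real SCADA/HMI interface is involved. A compromised display reports only half of every change in tank level, so an operator correcting against the displayed error drives the true level to twice the setpoint while the screen shows everything on target.

```python
# Toy model of a compromised operator display that under-reports
# changes in process state. All names here are illustrative.

def compromised_reading(true_level, baseline, scale=0.5):
    """Report only half of any change from the baseline level."""
    return baseline + scale * (true_level - baseline)

def operator_step(displayed, setpoint, gain=1.0):
    """Naive operator: adjust input proportionally to the displayed error."""
    return gain * (setpoint - displayed)

# Simulate filling a tank toward a setpoint using the lying display.
baseline, setpoint = 0.0, 100.0
true_level = 0.0
for _ in range(20):
    shown = compromised_reading(true_level, baseline)
    true_level += operator_step(shown, setpoint)

# The operator sees the tank sitting at the setpoint...
shown = compromised_reading(true_level, baseline)
# ...while the true level has overshot to roughly double it.
print(shown, true_level)
```

At convergence the displayed level equals the setpoint while the true level sits at twice it: exactly the overcompensation described above.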

John Norman • April 3, 2011 1:40 PM

There's definitely a big problem with updating any type of real-time software.
The process of patching or updating your control system carries a risk of undoing previous fixes to the system or breaking something that worked previously. Since all of these systems are highly customized, it's very difficult to test the patch before it gets applied.

I discovered a lot of these problems while developing an access control system for our shop, based on the Arduino. Every new feature or bug fix I made really needed to be tested on a set of "development hardware" before I could be confident that it wouldn't lock me out or leave the door open. And this is for a program with only about 2K lines of code.
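One way to get that confidence without a full set of development hardware is to keep the decision logic free of direct I/O calls and exercise it against a fake. A minimal sketch follows, in Python for illustration (the actual project is Arduino C++, and every class and function name here is hypothetical):

```python
# Sketch of testing door-control logic against a fake hardware layer.
# All names are hypothetical illustrations, not the real project's API.

class FakeDoor:
    """Stand-in for the relay driver; records what the logic commanded."""
    def __init__(self):
        self.locked = True

    def unlock(self):
        self.locked = False

    def lock(self):
        self.locked = True

def handle_badge(door, badge_id, allowed_ids):
    """Core access decision, kept free of hardware calls so it is testable."""
    if badge_id in allowed_ids:
        door.unlock()
    else:
        door.lock()  # fail secure: an unknown badge leaves the door locked

door = FakeDoor()
handle_badge(door, "0042", allowed_ids={"0042", "0099"})
granted = not door.locked  # known badge should open the door
handle_badge(door, "9999", allowed_ids={"0042", "0099"})
denied = door.locked       # unknown badge should leave it locked
print(granted, denied)
```

The same pattern ports to a microcontroller: the logic lives in plain functions, and only a thin shim touches the relay pins, so a bug fix can be checked before it ever risks locking you out or leaving the door open.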

FYI, you can google "Open Access Control Arduino" if you'd like to check this out for yourself.

JN

Gweihir • April 3, 2011 10:02 PM

I don't quite see why compromising display components is less problematic. Just display something that makes the operators blow up their own plant...

Dirk Praet • April 4, 2011 5:50 PM

As far as I am concerned, any system deemed mission critical as a result of a risk analysis or business impact assessment requires adequate processes and procedures for maintenance, upgrading, and patching. Not implementing these, or implementing them inadequately, may save time, effort, and money. If, however, things go wrong, the person or body responsible for those decisions should also be held accountable for them, and not get a bonus for "best year in safety" like the (* rude language *) at Transocean.

Leave a comment

Allowed HTML: <a href="URL"> • <em> <cite> <i> • <strong> <b> • <sub> <sup> • <ul> <ol> <li> • <blockquote> <pre>

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.