Hack Against SCADA System

A hack against a SCADA system controlling a water pump in Illinois destroyed the pump.

We know absolutely nothing here about the attack or the attacker’s motivations. Was it on purpose? An accident? A fluke?

EDITED TO ADD (12/1): Despite all sorts of allegations that the Russians hacked the water pump, it turns out that it was all a misunderstanding:

Within a week of the report’s release, DHS bluntly contradicted the memo, saying that it could find no evidence that a hack occurred. In truth, the water pump simply burned out, as pumps are wont to do, and a government-funded intelligence center incorrectly linked the failure to an internet connection from a Russian IP address months earlier.

The end of the article makes the most important point, I think:

Joe Weiss says he’s shocked that a report like this was put out without any of the information in it being investigated and corroborated first.

“If you can’t trust the information coming from a fusion center, what is the purpose of having the fusion center sending anything out? That’s common sense,” he said. “When you read what’s in that [report] that is a really, really scary letter. How could DHS not have put something out saying they got this [information but] it’s preliminary?”

Asked if the fusion center is investigating how information that was uncorroborated and was based on false assumptions got into a distributed report, spokeswoman Bond said an investigation of that sort is the responsibility of DHS and the other agencies who compiled the report. The center’s focus, she said, was on how Weiss received a copy of the report that he should never have received.

“We’re very concerned about the leak of controlled information,” Bond said. “Our internal review is looking at how did this information get passed along, confidential or controlled information, get disseminated and put into the hands of users that are not approved to receive that information. That’s number one.”

Notice that the problem isn’t that a non-existent threat was overhyped in a report circulated in secret, but that the report became public. Never mind that if the report hadn’t become public, it would never have been revealed as erroneous. How many other reports like this are being used to justify policies that are as erroneous as the data that supports them?

Posted on November 21, 2011 at 6:57 AM

Comments

Vince Mulhollon November 21, 2011 7:32 AM

“Was it on purpose? An accident? A fluke?”

No, it was engineering malpractice at the design phase. A properly designed system fails safe; you can’t blow it up remotely by cycling it on and off. Hardware temperature sensors trip a timer relay that interrupts current. Hardware pressure sensors measure improper NPSH and interrupt current. Hardware RPM sensors measure out-of-spec speed and interrupt power. That’s how the pros do it. The amateurs? They just wire it up and pray no one does anything stupid.

Just like OSHA requires guards and railings “even though no one would ever do something that stupid”.

Just like systems-administrator malpractice events where a bare, unfirewalled Windows SQL server gets owned: sycophants will come out of the woodwork to defend it with “oh, that’s just how it’s done (by fools)” and “no one understands how hard it is to be me (fine if you’re a twelve-year-old fan of emo vampire movies, not a licensed PE)”.
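The fail-safe interlock idea Vince describes can be sketched in code. This is a hypothetical illustration only: the sensor names and trip limits are invented, and a real fail-safe design would implement each check as an independent hardware circuit rather than software.

```python
# Illustrative trip limits -- invented values, not from any real pump spec.
MAX_TEMP_C = 80.0   # winding temperature trip point
MIN_NPSH_M = 3.0    # minimum net positive suction head (cavitation guard)
MAX_RPM = 3600      # rated speed ceiling

def interlock_allows_run(temp_c: float, npsh_m: float, rpm: float) -> bool:
    """Return True only if every independent check passes.

    In a properly engineered system each of these would be a separate
    hardware circuit that interrupts motor current on its own, so no
    amount of remote on/off cycling can push the pump outside its
    safe envelope.
    """
    if temp_c >= MAX_TEMP_C:
        return False  # thermal trip
    if npsh_m < MIN_NPSH_M:
        return False  # cavitation risk trip
    if rpm > MAX_RPM:
        return False  # overspeed trip
    return True
```

The point of the sketch is that the run decision is a conjunction of independent conditions: any single out-of-spec reading cuts power, regardless of what the software layer commands.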

Clive Robinson November 21, 2011 8:40 AM

Whilst I agree with @Vince Mulhollon that it looks like poor engineering practice, it raises an interesting problem.

Personally, after many years, I have not seen an “accident”; I’ve generally only seen lack of foresight and bad risk analysis.

However, having worked in petrochem and similar industries, I’ve also seen that you cannot defend against every individual risk even if you can foresee it; there just is not the technology or the money to do it (as the US realised with NORAD’s mountain location and the Russian 50-megaton Tsar Bomba).

So how do you deal with / mitigate the problems?

My take is to not defend against individual risks unless they are very high, but to defend against classes of effect for other foreseeable risks, and finally to go for a generalised distributed architecture to try to defend against the unknown.

Whilst the latter strategy works for tangible real-world risks and attacks, it fails miserably against intangible information-world risks and attacks, simply because everything is effectively at the same point in time and space for all practical purposes.

So how do we actually defend against information effects caused by poor design and/or attack?

gebi November 21, 2011 8:57 AM

@Vince Mulhollon:
It’s definitely possible to destroy a pump or any other electromechanical system from the management system (the SPS, i.e. the PLC).
And most of the time there is NOTHING the hardware can do about it.

EscapedWestOfTheBigMuddy November 21, 2011 8:59 AM

The intruders […] gained access by first hacking into the network of a software vendor that makes the SCADA system used by the utility [,…] stole usernames and passwords that the vendor maintained for its customers, and then used those credentials to gain remote access to the utility’s network.

…and the vendor was maintaining a list of credentials because…???

Wayne November 21, 2011 9:43 AM

@EscapedWestOfTheBigMuddy

For no good reason usually, but often it might be accounts they have for maintenance service work, which is just a bad idea. If they need remote access then they should work through the local IT/Process Control staff to gain access just for the time they need.

Qhartman November 21, 2011 11:05 AM

@Wayne

Our controls contractors are always baffled when I refuse to give them accounts on our systems. I actually had quite a confrontation with one of them because of this. HVAC / Controls people just have no clue when it comes to infosec best practices.

Clive Robinson November 21, 2011 11:20 AM

Anybody else notice the similarity between this and the RSA embarrassment a while ago?

That is, the attack was made possible by a third party storing authentication credentials on a server for “technical support” reasons, but also connecting it to a network which could be accessed from outside the organisation by unauthorised persons…

I wonder how many other third party organisations do this and how long it will be before they wake up and secure such valuable databases in a suitably secure way?

c November 21, 2011 12:59 PM

@Qhartman – “Our controls contractors are always baffled when I refuse to give them accounts on our systems. I actually had quite a confrontation with one of them because of this. HVAC / Controls people just have no clue when it comes to infosec best practices.”

Seems like anyone should by now be wise enough to know that you don’t put your critical infrastructure on the web, password or no password. Maybe a few class-action lawsuits against those who fail this simple intelligence test… Hopefully the nuke operators (who are usually highly motivated to be risk-averse) know this.

Hugh November 21, 2011 4:23 PM

Was it on purpose? An accident? A fluke?

If this asset is still called critical infrastructure, it should have protection from both intentional and unintentional threats, regardless of the attacker’s motivation.

D November 21, 2011 5:31 PM

A number of years ago I did an assessment of a SCADA network that was going to transition over to TCP/IP. At the time, I recommended against it, because once it’s TCP/IP, it’s accessible from the Internet (and the devices tend to assume that if you’re on the network, you have access). That risk was blown off because 1) “we have firewalls”, and 2) the company wanted a ‘smart’ grid, and those products only ran on TCP/IP.

The security advantages of a heterogeneous network were completely disregarded.

Dirk Praet November 21, 2011 5:40 PM

@ Clive

“So how do we actually defend against information effects caused by poor design and or attack? ”

By factoring in their real weight when making our business impact analysis. While most of us will agree that it is both financially and technically impossible to defend against any attack vector, my observation is that many organisations continue to systematically underestimate and downplay the associated risk and potential consequences.

This false sense of security translates into insufficient attention being paid not only to technical controls, but especially to best-practice policies, procedures and the implementation thereof. Whereas most vendors want to make us believe we should always spend more on the former – beware of the Chinese APTs! – I strongly believe the average outfit benefits just as much or even more from the latter, and at a much lower price. Well, in terms of cash, that is. The real hurdle is that of a change in mentality and in the way you go about things.

Just like the average Mac user still believes his machine is impervious to viruses and malware, it is astounding that so many companies operating SCADA systems still seem to have no clue that they are being targeted by both lone wolves and other foes. It is beyond me that even with Stuxnet and the like being all over the news, some folks complacently persist in connecting these systems to the internet or having three-character passwords. Such an attitude has nothing to do with risk management anymore, but for all practical purposes boils down to utter ignorance and incompetence on behalf of whoever is in charge there.

And it illustrates once again that there really are few terrorists out there. If they were as numerous and ingenious as the military-surveillance industry would like to make us believe, we’d be seeing pumps and other facilities getting blown up pretty much all over the place, from Iceland to South Africa.

RobertT November 21, 2011 7:04 PM

I don’t work in SCADA systems but I have noticed that there seems to be an underlying belief that obscurity will protect us. In general it probably does, but obscurity is no defense against an attacker that is focused on just you, and no one else. In this sense it is not accidental when they find a weakness.

I’m also curious how much the end customers understand information security and the costs of “fail safe”. One previous poster suggested that water pumps have a hardware sensor backup for every critical function. That sounds expensive! I can easily imagine that a flashy software GUI with limit controls creates a much better impression with the customer than a hardware lockout that refuses to take commands to operate in a dangerous manner.

The obvious problem with all hardware lockouts is that they only address the problems known at the time of installation and are VERY expensive to upgrade after that. Don’t get me wrong, in general I’d also rather have dedicated hardware lockouts, BUT how do you sell this to the customer, especially when the opponent is selling flashy management GUIs with “smartphone” control, or some similar rubbish? Is it all a “trust me” business relationship?

RobJ November 21, 2011 7:41 PM

@Qhartman
“Our controls contractors are always baffled when I refuse to give them accounts on our systems. ”

This statement makes me ask two questions: do the contractors need system access to do their jobs, and how do you balance security versus the ability to do the work?

signalsnatcher November 21, 2011 7:51 PM

As ever, Clive makes a good point there.

Four years ago, when the question of internet-based attacks on SCADA systems was raised my response was that there was little to cause concern because:
1) There are so many SCADA protocols you need insider knowledge;
2) You would need fairly detailed knowledge of the system you are attacking to cause any damage;
3) The damage would become quickly apparent;
4) Who would connect a SCADA system to the Internet?

The attack on the Maroochy Shire sewerage system seemed to show that even someone with inside knowledge could only cause minor damage.

In my opinion an attack through the radio control system was the most likely, but that would require knowledge of radio data modems and some SCADA programming skills – not a combination usually found in one person – and what would motivate a team to form?

During a Y2K exercise (remember that?) our team discovered (and fixed) a number of critical patient-care systems in a large public hospital that would have been affected but the water supply authorities found that their systems were based on relays and rotating “program” drums. Still are for the most part.

This attack looks like a proof-of-concept exercise, probably by an individual with some inside knowledge.

Clive Robinson November 22, 2011 1:00 AM

With regards @ Dirk Praet’s comment on risk,

“By factoring in their real weight when making our business impact analysis.”

It should be an ongoing process, with a good “weather eye” for future storms blowing in.

Sadly the opposite is usually true, in that a “business impact analysis” tends to be done at some point in time for any given project, given “a stamp of approval”, and then filed in an archive box somewhere, never to be updated.

Worse, they often get copied without modification or update from one project to another. Oh, and nobody ever reads them until something goes wrong anyway (anyone remember the story doing the rounds of the “Deepwater Horizon” “environmental impact assessment” sent to the EPA for approval that supposedly talked about the Arctic?).

The important thing is “at a point in time”: prior to Stuxnet, for well over a decade, I was one of the lonely voices in the wild saying it was stupid to connect SCADA systems to the Internet or any other outside access point. Some people in the industry would politely (pretend to) listen and then ignore the advice because it stood in the way of “competitive business drivers” and “there was no proof there was a problem”…

I would just as politely tell them there was proof, and talk about what had happened in the telecommunications industry, where PABXs and even CO switches were getting regularly owned. The response was usually something along the lines of “different industry” or “that’s to get free phone calls, no such incentive here”.

Even when certain people in the US started crowing about blowing up a Russian energy installation in the world’s biggest non-nuclear fireball, and dropping hints that it was “an agency action”, in general the responses were the same (mind you, I’m on record as saying I thought the claims were a crock, and why).

Then there was that (supposed) demonstration by a US agency of how software could be used to destroy a generating set.

And still few if any were listening. The UK water and energy producers appeared to be the exception, mainly due to “still paying lip service” to the clauses put into deregulation by Maggie Thatcher’s “civil uprising” and “ash city” paranoia in the 1980s, which saw a rebuilding of underground bunkers etc. (remember, being paranoid does not make you wrong, only obsessive 😉).

Now, even after Stuxnet, in most places it’s the same arguments as always; so many are not waking up and smelling the coffee even though it’s scalding the hands of their industry friends.

And to be honest, I can see why…

In an unregulated cost sensitive market the infrastructure is the biggest capital cost, and where annual maintenance costs are often as large as profits it can be a case of take a risk or price yourself out of business.

Unregulated markets are almost always a race for the bottom, unless the price of technological innovation makes it cheaper to upgrade than not (which has been the case with software control over hardware control).

It has been seen in both Canada and New Zealand: what happened in their unregulated power markets was that base infrastructure was neither correctly maintained nor upgraded, on the excuse of “if it ain’t broken we don’t need to fix it” from directors providing “shareholder value” on a quarter-by-quarter basis. Eventually their luck ran out (as it does for everyone) and they basically bleated “not our fault” before disappearing behind a wall of lawyers to let others pick up the pieces and, importantly, the cost (I’ve mentioned this before on this blog when talking about cascade failures).

Which is why security of infrastructure should be mandated by regulation to prevent a “race for the bottom” which can only result in infrastructure failure, which always hurts badly.

Now the 64-trillion-dollar question is what happens now…

Remember, infrastructure generally has a quarter century or more of return-on-investment time. What is going to be the cost of bringing everything up to spec, and over what time period? And how much damage is going to happen in that intervening time window (which could easily be ten to thirty years)?

Oh, and one of the big lessons from Stuxnet is that “air gapping” is no longer a valid temporary or permanent fix; we really have to get in as close to the metal as possible and fix security there.

With “air gap”-crossing attacks, all attacks are “insider attacks”; “perimeter defence” is only going to keep out the script kiddies for a year or two, until they get given “air gap”-crossing attacks of their own.

It’s sad really, because I worked out how to do the various tricks to cross air gaps and mentioned them on this blog long before Stuxnet reared its ugly little head. I talked about them specifically as a way to get at “voting machines” via the maintenance staff with “fire and forget” malware. It makes me wonder if the attackers listen more than the defenders…

So when you hear people talking about “firewalls” and “SSH” and all those other security solutions that have been around for twenty years yet keep failing us, just remember that’s “perimeter defence”, and it’s now, without any kind of doubt, a “known fail”.

With “air gap”-crossing attacks there is no “perimeter”, no “choke point”; every host is effectively connected to the internet via somebody’s memory stick or a maintenance man with a software upgrade.

Likewise, don’t think “code signing” is a solution; it’s not. Again, I was warning just how much of a fail it could be, and explaining on this blog why, long before Stuxnet proved it to be the case.

So in the short term, get out your old books on “Bastion Hosts” and remember: every system that is connected to your network is a host that can be owned. If you cannot harden it, get out the wire cutters and cut not the network connection but the power cable. Otherwise it could be your organisation we will be discussing here.

I’ll leave it to Nick P and RobertT to give you the further bad news about OS security and Chip security.

Jeremy Duffy November 22, 2011 8:27 AM

We DO know that the chance that the SCADA systems were connected to the Internet with very little thought to security or stability is very, very high.

The government is not well known for thinking ahead.

Todd Knarr November 22, 2011 10:36 AM

@RobJ: “This statement makes me ask two questions. Do the contractors need system access to do their jobs and how do you balance security verses the ability to do the work?”

Same way we used to do it at university: maintenance accounts are set to a password known only to the system admins. When vendor or contract maintenance workers need access, they ask us for it. We let them reset the password to one of their choosing (subject to our normal password policy), and when the work’s finished we reset the password to ours. Vendors get access when they need it, while we’re protected against vendor-default-password attacks and against vendors accessing our systems without us being aware of it.
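Todd’s rotate-in / rotate-out workflow might be modelled as follows. This is a toy sketch: the class and method names are purely illustrative and not any real product’s API, and a real system would store hashes, not plaintext.

```python
import secrets

class MaintenanceAccount:
    """Toy model of a maintenance account whose password is held by the
    admins by default; vendor access is granted by letting the vendor set
    a temporary password, and revoked by rotating back to a fresh
    admin-only secret (so any credential list the vendor keeps goes stale).
    """

    def __init__(self):
        self._password = secrets.token_urlsafe(16)  # known only to admins
        self.vendor_active = False

    def grant_vendor_access(self, vendor_password: str) -> None:
        # Vendor picks a password (subject to policy) for this job only.
        self._password = vendor_password
        self.vendor_active = True

    def revoke_vendor_access(self) -> None:
        # Work finished: rotate to a new admin-only secret, invalidating
        # whatever credential the vendor may have recorded.
        self._password = secrets.token_urlsafe(16)
        self.vendor_active = False

    def check(self, password: str) -> bool:
        return password == self._password
```

The design point is that the vendor’s credential has a bounded lifetime: after `revoke_vendor_access`, a leaked vendor password list (as in the incident described in the article) no longer grants access.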

john conner November 22, 2011 10:38 AM

Clive, of course the attackers pay closer attention; they are interested. The protectors are just civil serpents: they have no incentive to be creative, just a mudhead boss whose approval will never be seen anyway.
It’s even rumoured that some of the attackers work for free.
You can’t hire that kind of creativity; the reward is in the doing.

Nick P November 22, 2011 2:41 PM

@ Clive Robinson

I’d tell them to google the further bad news. The good news, though, is that they can reduce their risk running high quality security software on a safety- or security-critical RTOS on RISC hardware. Also, if we think exploits are inevitable or something, I’d also say we should re-consider randomized instruction sets. I remember some were broken, but at least one showed good promise. (No, not Transmeta, it was academic.)

suckaforagoodstory November 22, 2011 3:49 PM

The government has known for at least 10 years that these systems are open to attack. Now, as the government, do you make them ironclad and well guarded? Those same defences would ensure that other governments our government might want to attack would also be safe from our cyber manipulation, denying us the ability to take them down. Think what would happen if these systems, or the centrifuges, were not open to attack. Maybe the point is to leave holes so we can take things down when we want, and if that leaves our own systems open to control by folks who have access, then those are the breaks?

Clive Robinson November 22, 2011 4:28 PM

@ suckaforagoodstory,

“Maybe the point is to leave holes, so we can take it down when we want and if these open our own systems to control from folks who have access then these are the breaks??”

You would hope not. Whilst the best form of defence is offence, that is only true if your rear is well protected, which means you still need strong defences irrespective of your offensive capability.

However, your comment can be likened to a small pile of kindling and twigs to which you have successfully struck a spark into a small flame.

Please allow me to add a gallon or two of very-high-octane fuel to make it a merry blaze.

In the US, Industrial Control Systems (ICS) have their very own Computer Emergency Response Team, known as ICS-CERT, run by none other than the Department of Hapless Security.

A few weeks back they pulled a master stroke by reclassifying their problems away…

I will let you read the version of events posted by Ralph Langner, who did so much to tame Stuxnet; he sounds, to put it mildly, gobsmacked:

http://www.langner.com/en/2011/09/23/dhs%e2%80%98-new-semantic-approach-to-risk-mitigation/

Yes, Mr DHS Marty Edwards has decided that over 90% of security faults with ICS software are not ICS-CERT’s responsibility to inform people of… But you should still tell ICS-CERT of everything you discover, so they can pass it on to “the favoured few” whom the DHS have gathered together…

Thankfully, unlike other commentators on this, Ralph has not resorted to unseemly personal comments about Mr Edwards.

RobJ November 22, 2011 8:46 PM

@Todd Knarr

My question was addressed to the gentleman who asserted that he never provided access to external vendors. I did not see how this stance allowed for contractors to access the system as part of doing their contracted services.

Your reply, granting temporary access by changing a password temporarily and then changing it back, implies that your accounts have permanent passwords. Considering that the SCADA hack involved a software vendor who was maintaining a list of customer passwords that then got disclosed, I would think that your approach has a similar risk of your password list getting out, a risk that increases with time.

My point is that providing access to any computer system increases the risk that the system will be compromised, but in order for the system to be useful, access has to be provided.

RobertT November 22, 2011 11:07 PM

Since Clive mentioned it, I’ll just add a quick reminder that these days the chip functions you buy commercially, especially in small quantities, are not necessarily just the function that you wanted. There could be hardware included for lots of other functions that you are never told about.

The problem is that we can manufacture a chip with say 20M transistors for $2, including packaging.

In very-high-volume applications the exact functions will be optimized, because size affects cost; however, for low-volume applications I might be able to sell just one or two functions of the chip, say the HDMI plus USB3 sections of a TV chip, for more than I can sell a TV chip. If I sell the HDMI + USB for say $10, why should I (the semiconductor vendor) care if I’m actually shipping a hobbled TV chip? I still make $8 profit. In all likelihood, making a dedicated HDMI+USB chip would cost more than the market is worth (due to the small size of the component market and the high NRE costs), so I ship the hobbled TV chip and never tell the customer about the other functions.

For most commercial chip purposes, exactly what capabilities the chip in the package has is irrelevant information; however, for high-security apps you have just dramatically increased the attack space, and changed it in ways that most programmers would never anticipate. Not only do you have secret sections of RAM you were never told about, you could also have very accurate analog components. A typical TV chip has a triple 12-bit, 100 MHz ADC: a very handy bit of hardware if you wanted to build an on-chip differential power analysis system or a communication system.

I’m not sure how the high-security world will deal with this, because it is a relatively new problem (the last 5 years), but it is a problem that could make a complete joke of air-gapping SCADA systems.

David November 23, 2011 2:34 AM

@RobertT – a similar problem has been around a lot longer… I recall that at least 10 years ago it became pretty well impossible at the budget end of the market to actually receive a low-cost hub when you bought one. You’d always get a switch.

Seems it was easier for the plants to make only switches and label some of them as hubs. Which was great if you wanted a switch – you could get one more cheaply by buying the “hub” – but if you really did want a hub, you were pretty much out of luck.

The problems for the plants only became obvious when the secret became widely known (although they were probably still making plenty of money even at the hub pricing).

David November 23, 2011 2:55 AM

Earlier today (maybe yesterday, depending on your timezone!) ICS-CERT announced that all suggestions that the Illinois water treatment plant “event” was linked to any kind of attack were refuted totally. Here’s the content of their email to the mailing list:

Greetings:

After detailed analysis, DHS and the FBI have found no evidence of a cyber intrusion into the SCADA system of the Curran-Gardner Public Water District in Springfield, Illinois.

There is no evidence to support claims made in the initial Fusion Center report – which was based on raw, unconfirmed data and subsequently leaked to the media – that any credentials were stolen, or that the vendor was involved in any malicious activity that led to a pump failure at the water plant. In addition, DHS and FBI have concluded that there was no malicious or unauthorized traffic from Russia or any foreign entities, as previously reported. Analysis of the incident is ongoing and additional relevant information will be released as it becomes available.

In a separate incident, a hacker recently claimed to have accessed an industrial control system responsible for water supply at another U.S. utility. The hacker posted a series of images allegedly obtained from the system. ICS-CERT is assisting the FBI to gather more information about this incident.

ICS-CERT has not received any additional reports of impacted manufacturers of ICS or other ICS related stakeholders related to these events. If DHS ICS-CERT identifies any information about possible impacts to additional entities, it will disseminate timely mitigation information as it becomes available. ICS-CERT encourages those in the industrial control systems community who suspect or detect any malicious activity against/involving control systems to contact ICS-CERT.

Regards,

ICS-CERT

Clive Robinson November 23, 2011 6:37 AM

@ David

Thanks for posting that, it actually made me laugh for a number of reasons one being,

There is no evidence to support claims made in the initial Fusion Center report – which was based on raw, unconfirmed data and subsequently leaked to the media

That the Fusion Center has a bigger “leak” than the water plant.

Speaking of leaks the DHS (ICS-CERT) communication does remind me of the story of “The little Dutch Boy” plugging a hole with his finger.

It is such a strongly worded denial, made before they have completed their investigation; such statements are almost never made by engineers, only politicians, because they have a habit of coming back to haunt you.

For instance, if the water company is connected to the Internet, and they are keeping logs of inbound TCP/UDP/etc. connections, as most firewalls can, I would be very surprised if they did not see connections from Russia, China and a whole host of other places; most people see such connection attempts on a daily if not minute-by-minute basis in their logs. Analysing such data can be a difficult task at best, with an uncertain outcome. As an over-generalised case, you can only really rule it out when you have a directly attributable cause that clearly shows no involvement of Internet traffic.
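The background-noise point can be illustrated with a toy log-counting sketch. The log format and the addresses below are invented for illustration (real firewall log formats differ): almost any perimeter log will show blocked foreign probes every day, so a single foreign IP in a log proves very little about a specific failure.

```python
from collections import Counter

# Hypothetical firewall log lines: timestamp, action, source IP, dest port.
# Addresses are from documentation ranges; the format is invented.
log_lines = [
    "2011-11-08T02:14:11 DROP 203.0.113.5 3389",
    "2011-11-08T02:14:12 DROP 203.0.113.5 22",
    "2011-11-08T03:01:40 DROP 198.51.100.7 1433",
    "2011-11-08T04:22:09 ALLOW 192.0.2.10 502",
]

def dropped_attempts_by_source(lines):
    """Count blocked inbound connection attempts per source address --
    the routine background scanning noise every Internet-facing
    perimeter accumulates."""
    counts = Counter()
    for line in lines:
        _ts, action, src, _port = line.split()
        if action == "DROP":
            counts[src] += 1
    return counts
```

Counting drops per source is only the first, trivial step; as the comment notes, turning such counts into an attribution (or a rule-out) is the genuinely hard part.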

For instance, the result here is apparently a burnt-out pump motor. This can happen for a whole host of reasons, including manufacturing defects, wear and tear, poor maintenance, operator error and quite a few others. In most of these cases it is actually difficult to find “proof positive” of the cause, just an assumption. The reason is that there is rarely sufficient instrumentation to tell, and most times the instruments that are there are not sufficiently well logged, or their sampling window is too great to give sufficient detail.

Now we know from “the Aurora attack” that it is realistically possible to get “inside control loops” and cause problems, specifically in that case by opening and closing relays between a generator and a power network. When a generator under load is taken off load by the relay opening, its speed increases and a phase shift between its output and that of the power network occurs. On closing the relay, an EMF pull-back occurs. This puts a sudden undue stress on the generator shaft and the turbine driving it. Generally the control loops are sufficiently damped to prevent feedback effects giving incorrect control signals due to various response delays, which would otherwise result in positive feedback and thus some form of oscillatory behaviour (often called “loop hunting”). The damping also prevents alarm conditions when the generator is “normally” connected to or disconnected from the network. The Aurora attack uses the loop damping to hide its activities: specifically, it opens and closes the relays in a very short period of time, providing very short, sharp torque shocks that don’t get through the instrumentation damping to raise an alarm.

The result is that the generator, or the turbine driving it, dies a very early death, and it would ordinarily be attributed to some unknown defect in manufacturing, maintenance etc., or, if insufficient logging is performed, operator error is assumed. This is the same as with air and sea accident investigations: if a pilot / captain is not alive to tell their side of the story, then the aircraft / ship was lost due to “pilot error” / negligence.

Now, the thing is, as I’ve said on a number of occasions, transducers work both ways: a microphone can act as a speaker and a speaker can act as a microphone. It is the same with motors and generators. The speed of a motor, and importantly the current it draws from the supply, is controlled by its load and the back EMF generated. Importantly, it is the back EMF of a motor that stops it drawing a very, very large current from the supply (the DC resistance of a winding is very, very small, for efficiency reasons).

Now, when you open the relay between the motor and the supply, a number of things happen: firstly, load inertia keeps it turning; secondly, the motor becomes a generator; thirdly, if it is an AC motor, it slips with regard to the supply phase.

Rapidly opening and closing a power supply relay will cause stress not just on the actual pump shaft but also on the motor windings and, depending on the supply, on that as well.

If the relay opening and closing is done in the right way, the other instrumentation feeding back to the local control-system loop will not see it, and therefore neither will the SCADA system, which is the only place any instrument logging is likely to be done.
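You can see why short shocks vanish by modelling the instrumentation channel as a simple first-order low-pass filter (the time constant and pulse widths are my assumptions, just to illustrate the principle):

```python
def lowpass_peak(pulse_ms, tau_ms=1000.0, dt_ms=1.0):
    """Peak output (fraction of full scale) of a first-order low-pass
    filter fed a unit-height square pulse of the given width."""
    y = 0.0
    for _ in range(int(pulse_ms / dt_ms)):
        y += (1.0 - y) * dt_ms / tau_ms  # standard RC-filter update step
    return y  # output rises monotonically, so the final value is the peak

# A 10 ms torque shock barely registers on a 1 s channel;
# a sustained 2 s event reads near full scale.
print(f"10 ms pulse -> {lowpass_peak(10):.3f} of full scale")
print(f" 2 s pulse -> {lowpass_peak(2000):.3f} of full scale")
```

The mechanical shock is all in the pulse; the alarm threshold only ever sees the filtered trickle.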

Now I’d better make one thing clear: these days when a power control engineer talks about a “relay” they are not talking about the old-fashioned mechanical device you might find in your thirty-year-old boiler. They are generally talking about a solid-state device that is quite complex and driven by a microprocessor. Because there is a microprocessor in there, modern relays tend to be very complex and are quite often “network hosts” in their own right, or are connected to a small network host. As with any other network host they can be “hacked”, and ephemeral rogue code can be added to their RAM (thus not leaving any rogue code behind after the relay is powered down to repair the pump etc.).

Thus they are just as vulnerable as the motor drive controllers on the centrifuges that Stuxnet targeted.

Clive Robinson November 23, 2011 2:17 PM

@ Bruce,

CSO Online has an article related to this,

http://www.csoonline.com/article/694707/experts-advise-caution-information-sharing-in-wake-of-alleged-utility-attacks

One thing I strongly object to in it is,

MANDIANT’s Bejtlich argues that while mandatory reporting to a central critical infrastructure CIRT (Critical Incident Response Team) would be “a step in the right direction,” he doesn’t agree that public reporting will do much to improve critical infrastructure risk posture. “Public breach reporting isn’t necessarily going to improve security within critical infrastructure. Since 2006 I’ve advocated creation of a National Digital Security Board to investigate important incidents. NDSB reports do not need to ‘name names’ in order to have a positive impact on security.”

This is tantamount to the “favored few” mechanism whereby the Government (usually for political reasons) only lets some people know about security vulnerabilities, thus leaving many, many others vulnerable.

At the end of the day it is in everybody’s interest to have all vulnerable systems fixed promptly. The reason is, as with Stuxnet, you really cannot control where malware ends up or where it decides to go next; vulnerable systems just act as infection agents/vectors.

And as for the idea of “stacking up” vulnerabilities to use as “cyber-weapons”, frankly I find it absolutely pathetic; it shows that those involved have little idea about such things. As I’ve noted before,

Whilst offense may be the best form of defense, it requires that your home base be secure, otherwise you will find yourself winning a battle but losing the war.

RobertT November 23, 2011 10:26 PM

@Clive R
“One thing I strongly object to in it is,…”

I don’t really understand your objection. I know I never personally rely on any DHS documents to advise me on the potential ways to hack into a system. Nor for that matter do I object if some of the better methods are kept secret for a little longer. Frankly whatever DHS does with this information is irrelevant to me. If the zero-day is disclosed, or even widely known, then it is useless for new covert monitoring/control applications, regardless of the target.

If this secret process results in information asymmetry, whereby certain countries/individuals benefit, then I’m guessing I’ll benefit more often than I lose, so no problem!

Clive Robinson November 24, 2011 4:55 AM

@ RobertT,

“I don’t really understand your objection.”

As with many things (with me 😉) it’s a bit complicated.

As you note, once a specific attack vector is used it loses its value very quickly. However, the loss of value is directly related to the number of vulnerable systems and the proportion that remain unpatched etc.

As we know from past experience, the speed at which a manufacturer/supplier provides a patch is very much related to how many customers know about the vulnerability.

We already know of at least one major ICS software and systems supplier that has a number of critical vulnerabilities that it has decided to ignore, and thus it is selling systems known to be critically vulnerable.

The fact that ICS-CERT’s Marty Edwards has basically decided not to make the vulnerabilities concerned a public reporting function of ICS-CERT (but the “favoured few” get told) means that the major supplier is under no real pressure to mend its broken systems.

I fully expect this ridiculous state of affairs to get considerably worse, and for the likes of ICS-CERT to effectively become pariahs as the industry realises it is being hung out to dry by the overly politicized idiots.

Now one thing that is known is that malware that exploits vulnerabilities has a habit of turning up in unexpected places. Stuxnet was supposedly highly directed but popped up in all sorts of unintended places. Stuxnet was supposedly written by “experts”, so if the supposed experts get it badly wrong, what does it say about other malware written by people not so expert, which would be the majority of malware writers?

Thus any cyber-weapon written by the US to be used against other countries is likely to come back at US organisations using ICS who are not part of the “favoured few”, as well as US allies and totally uninvolved bystanders. Some of these non-target nations may well suffer major harm and thus will regard the US malware as a serious attack on their Sovereignty, which for those who don’t know is “a declaration of war by deed” and against many international treaties.

Now a Sovereign nation could respond in many ways: some by direct action against US persons within their borders, some by direct economic action of seizing all US assets, and some by retaliating in kind, which will lead to rapid escalation, which is not good for anybody; potentially the economic effects would be worse than a limited-scale nuclear war.

The problem for the US is that it is probably the number one “Internet dependent” nation in the world, so the US gets it worse than any other nation…

Put simply, it is more in the US’s interest to ensure all ICS systems are secure than it is for any other nation; this appears to be a point lost on the “war hawks”.

Then there is the question of who comprises the “favoured few” and why. I sincerely doubt that anybody in the US DHS has actually sat down and worked out what the “true critical infrastructure” is. Modern manufacturing and society are so intermeshed it is difficult to determine exactly what would happen if an ICS cyber-weapon touched down in the US. For instance, we know the “utility” organisations are considered critical, but what about “food producers”, “medical supply producers”, “iron and steel producers”, etc., etc.?

What do you think would happen if tomorrow 30% of the processed food producers could not produce their goods?

Depending on who you believe, something like half of US households do not know how to cook beyond the very, very basics, certainly not enough to be able to make bread or pastry or cook meat safely. This is radically different to just a generation or so ago, and is largely due to the rapid increase in “single member households” in the “professional classes” and the easy availability of “ready-meals”. Oh, and don’t say they could go to a restaurant: the major chains (i.e. fast food outlets) are all critically dependent on ICS to industrially produce food to the point where it can be “cooked in store”. Then there’s the canning and bottling plants…

So rather than,

If this secret process results in information asymmetry, whereby certain countries/individuals benefit, then I’m guessing I’ll benefit more often than I lose, so no problem

it is this: the more industrialised the nation, the worse off it will be, and the number one vulnerable nation is currently the US; so not “no problem” but “big problem”.

Oh, and “certain countries” like China can bring the US to near economic collapse just by being a bit awkward over shipping certain goods in a timely manner. And they can do this at any time just by deciding to “improve customs checks”.

And we know China is well aware of this, and is acting in a way to make the situation worse for the US and other western nations; just look at what is going on with the likes of rare earth metals, which is one of their more blatant tactics…

Steve Jones November 24, 2011 2:28 PM

never mind — FALSE ALARM
(read the follow-up news stories)

not unusual for the monthly SCADA cyber panics

a huge industry, with lots of government money now funding it

unfortunately, still having trouble finding a real example of SCADA hacking

one particular consultant, same guy reporting this attack, is still batting 1000

Melegar December 15, 2011 9:03 PM

“Never mind that if the report hadn’t become public, the report would have never been revealed as erroneous.”

How do you know this? It seems to me ICS-CERT and the FBI investigated and responded to the erroneous report as quickly as possible… The report existed for about a week before they refuted it. You can’t assume that the only reason they investigated was because it went public. I have a feeling ICS-CERT and the FBI investigate all potential cyber attacks targeting US critical infrastructure and they would have discovered this one as false even if it had not gone public.
