wiredog November 6, 2012 7:23 AM

As I said on the Krebs article:

When I worked in industrial automation we enabled remote access in everything we sold. But you had to plug in the network (in our case, telephone) cable first. It’s impossible to remotely attack a device that isn’t physically connected.

So why are people leaving them connected now? And why would such a system care if it was disconnected? Seems that just pulling the plug is a good security strategy.

Dave M November 6, 2012 7:41 AM

“So why are people leaving them connected now? And why would such a system care if it was disconnected? Seems that just pulling the plug is a good security strategy.”

It is a great security strategy. But it may leave huge operational gaps.

Most likely the systems are left plugged in to the network because the people who can troubleshoot them effectively (or even report problems accurately) are spread too thinly throughout their organizations to be on site. And remote monitoring is sometimes a lot more effective at noticing problems quickly than waiting for somebody who has the cojones to shut down a production system and ask for help.

wiredog November 6, 2012 7:57 AM

Remote troubleshooting was why we set them up for network access. But it was only needed for that. So the normal operation was to leave it unplugged and, if there was an issue, a guy (with no special training) could take a couple minutes to go out on the shop floor and plug the network cable in.

It was in the manual, and conveyed verbally, that the systems were generally to be disconnected for security’s sake. Heck, we were doing that in 1995.

Dan November 6, 2012 8:05 AM

Neither of those is a reason for them to be accessible from the public internet, which is the real issue here.

These kinds of “access control” systems are often designed to be deployed on private intranets with the “real” access control provided by a separate firewall, but all too often they are open to the world.

The moral of the story is to keep your mission-critical infrastructure off the public internet whenever possible, and to secure any points of ingress to your management network. This won’t protect you from a determined attacker breaking into the management network, but it will prevent your gear being stumbled upon by someone who found a vulnerability in a particular piece of equipment and is shopping for targets.
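Dan’s “keep it off the public internet” rule can be checked mechanically against a device inventory. Here is a minimal Python sketch (the inventory and function name are hypothetical) that flags any device address which is globally routable rather than in private (RFC 1918 or similar reserved) space:

```python
import ipaddress

def publicly_routable(addrs):
    """Return the subset of addresses that are NOT private, loopback,
    or otherwise reserved, and therefore might be reachable from the
    open internet."""
    return [a for a in addrs if ipaddress.ip_address(a).is_global]

# Hypothetical device inventory: two private addresses and one public one.
devices = ["10.0.4.17", "192.168.1.50", "8.8.8.8"]
exposed = publicly_routable(devices)  # -> ["8.8.8.8"]
```

An address showing up in `exposed` doesn’t prove the device is reachable (a firewall may still sit in front of it), but a management network whose gear all lives in private space at least can’t be stumbled upon by someone scanning the internet for targets.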

Steve Boyko November 6, 2012 8:16 AM

Stuff like this is why the NERC CIP standards were developed… some companies connect their critical systems directly to public networks for convenience of monitoring and remote troubleshooting, without thinking of the security implications. NERC CIP (and others) force companies to think about security and to take steps to secure their systems. The standards are not perfect but they are better than nothing.

ChristianO November 6, 2012 8:30 AM

@Dan: Stuxnet, IIRC, also got into those networks via sneakernet … having them networked even without internet access seems problematic enough.

Kronos November 6, 2012 9:17 AM

The systems I have dealt with in ‘process control’ were connected to the network so support personnel could do remote access. They were blocked from internet (outside world) access via their static IP address. Infection by those doing support was the most likely avenue.

Northen Realist November 6, 2012 11:18 AM

@Dave M – Unless one is in constant need of troubleshooting (in which case you have another, bigger problem!) there is no need for a continuous connection. The line can always be connected when needed.

Dave M November 6, 2012 1:26 PM

Agreed. But if the operational plan calls for remote monitoring then you are stuck with leaving it connected. There is still no excuse for allowing open access from the Internet. Firewalls and VPNs are cheap and easy (although they sometimes make people think they are more secure than they really are).

RobertT November 6, 2012 4:31 PM

” Seems that just pulling the plug is a good security strategy”

Wow, are you guys even aware of how Stuxnet was designed to spread?

Don’t get me wrong, isolating the network (“air-gapping”) is always a good idea, but it is insufficient to stop the spread of viruses.

In many ways air-gapped systems are actually easier to infect, because they are so difficult to maintain and keep “patched”. There is also a definite “if it ain’t broke, don’t fix it” mentality within the PLC community. This is understandable, because the software they run is often custom-made and proven to work in THIS particular OS/hardware configuration. Nobody wants to discover, on their production line, a realtime issue that arises between two patches of an OS.

The net result is that you don’t need exotic zero-days to attack an industrial control system, typically any virus from the last 3 years will do the job. The only real problem is reducing the spread of the virus so that it is not discovered by accidentally infecting too many machines and thereby coming to the attention of anti-virus writers.

Dirk Praet November 6, 2012 6:03 PM


“also there is a definite ‘if it ain’t broke, don’t fix it’ mentality within the PLC community.”

Not just in the PLC community. It’s also true of the majority of small to medium-sized businesses I have ever done work for, their IT staff and/or the support organisations they outsource their infrastructure to. The “just keep it running” mentality is unfortunately still deeply embedded in way too many folks who can’t even be bothered to keep COTS products such as Microsoft operating systems or network appliances up to date.

That’s not to say that you should just go about patching anything and everything as soon as a patch, PTF, service pack, update or upgrade is released. Doing this properly is exactly what patch, version and release management is all about, and generally speaking it gets insufficient attention in many companies. Failure to understand this, combined with the misguided belief that small to medium-sized businesses are rarely a target for attackers, is IMHO the main reason for this attitude.

The issue of SCADA and other systems being flawed by design, or backdoored deliberately or for “remote access” purposes, has been discussed many times before on this blog. I think by now it’s safe to assume that many, if not most, probably are, and that any company serious about security should not only think in terms of prevention and hole-plugging, but just as much about contingency, incident response and other mitigation plans for when yet another one is discovered.

Ben Brockert November 6, 2012 11:39 PM

For water treatment plants, which is what I worked on, a normal city or district would have a number of locations and one control area. The water towers, pumps, and treatment plants would all run PLCs and SCADA systems, plugged into phone lines. From the main plant it was possible to see what each pump was doing, what the levels were in the water tower, etc. It also gave the systems the ability to phone home with problems.

So they need to be networked somehow, and the legacy way of doing it is phone lines and modems. Since you want water to work even when power goes out, it means they have to talk over a system that similarly stays up with power down, and in many towns hardwired telephone does keep running.

Clive Robinson November 7, 2012 4:09 AM

@ wiredog,

“So the normal operation was to leave it unplugged and, if there was an issue, a guy (with no special training) could take a couple minutes to go out on the shop floor and plug the network cable in.”

Tell me: when did you last unplug your TV/PC/other electrical item that might catch fire before you went to bed?

Relying on somebody, skilled or otherwise, to unplug the network cable, no matter who is told and no matter how many guides and manuals it’s written in, is rather silly as a security measure, because X times out of ten (where X is considerably greater than five) it’s just not going to happen. I could give you a list of the excuses I’ve heard over the years, but to be quite honest there are so many that I’ve forgotten most of them, and those I do remember would fill this blog page beyond any reasonable expectation of it not being rejected…

Dirk Praet November 7, 2012 5:58 AM

@ Clive

“Tell me: when did you last unplug your TV/PC/other electrical item that might catch fire before you went to bed?”

I actually do that whenever I leave for business trips or holidays of more than two days. And before going to sleep, I turn off, disconnect and unplug all of my computer and other networked devices/appliances, including routers and smart phones. Even when I arrive home stone drunk after a night in the pub. The only way I can be reached then is over my landline, the number of which is known only to a limited number of relatives and close friends.

I believe it is all about building small routines into your life and daily practice, just like brushing your teeth before going to sleep, doing daily sit-ups or flicking on your indicators when making a turn in your car.

TomTrottier November 7, 2012 8:36 AM

@Clive: Since it’s easy to check (even automatically!) whether it’s still plugged in when not needed, it would be easy to escalate unplugging from the janitor up to the plant manager if it were not done promptly.

Perhaps a phone line and modem would be more secure – how much bandwidth do you need?
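The escalation idea above is simple enough to sketch in a few lines of Python. Everything here is hypothetical: the roles, the thresholds, and the assumption that some monitor (e.g. reading the switch port or, on Linux, `/sys/class/net/<if>/carrier`) can tell you how long the link has been up while no remote session was scheduled:

```python
# Made-up escalation ladder: (minutes connected while not needed, who to nag).
ESCALATION = [(0, "janitor"), (30, "shift supervisor"), (120, "plant manager")]

def who_to_notify(minutes_connected_unneeded):
    """Return the most senior role to notify, given how long the cable
    has stayed plugged in while no remote access was needed."""
    recipient = None
    for threshold, role in ESCALATION:
        if minutes_connected_unneeded >= threshold:
            recipient = role
    return recipient

# who_to_notify(5)   -> "janitor"
# who_to_notify(45)  -> "shift supervisor"
# who_to_notify(200) -> "plant manager"
```

The point is not the code but the policy: “unplugged when not needed” only works as a control if somebody is automatically checking, and the nagging gets louder the longer the state is wrong.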

Alex November 7, 2012 2:17 PM

Most of the PLC-based systems I’ve designed and installed over the past 10 years DID have some form of remote access. BUT, it was through a computer and the PLCs themselves were never directly available via internet.

It’s my understanding that this is the way most PLCs are installed. The higher-level stuff might be available via intranet, but I think there are probably very few of them left out on the open internet. Even then, very few PLCs run the normal internet daemons (http, ftp, telnet, etc.) which are most often used for exploitation.
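Whether a given device actually answers on those daemon ports is easy to audit with a plain TCP-connect check. A minimal Python sketch (the port list is just the services named above; run it only against your own gear):

```python
import socket

# The "normal internet daemons" mentioned above; a PLC or gateway
# answering on any of these deserves a closer look.
SERVICES = {21: "ftp", 23: "telnet", 80: "http", 443: "https"}

def open_services(host, services=SERVICES, timeout=0.5):
    """TCP-connect check; returns the names of services that accepted
    a connection. Intended as a defensive audit of your own devices."""
    found = []
    for port, name in sorted(services.items()):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(name)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found
```

An empty result doesn’t mean the device is safe (it may speak a vendor protocol on some other port), but a hit on telnet or ftp is a red flag worth chasing.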

Roger November 9, 2012 1:25 AM

I have recently started to brush against these issues at work. It’s a little more subtle and a lot more important than “remote troubleshooting.”

Most process industries pretty much run on PLCs. They both monitor and control the ingredients going in, the temperature, the pressure, the rate of stirring, the rate of product coming out, how far and fast the big robot arm is swinging, how much each filled packet weighs, everything. To manage the plant, you have to see what data the PLCs are seeing.

You can do this, after a fashion, by wandering around the shop floor with pencil and paper, jotting down what you see on front panel displays. But you need continuous, near-real-time data from the PLC network if you are to use that data to understand the subtleties of how everything interacts, and use that understanding to optimise everything at once (cost, quality, safety, environment, delivery.) If you don’t, you’ll always be chasing something: a high defect rate here, a safety problem there, a slow-down there, and add-on costs popping up all over.

So networking is practically mandatory for a safe, efficient modern plant. The only real alternative is to periodically use removable media to collect data from the PLCs’ logs. But as Stuxnet showed, that may be somewhat safer but it is far from safe. (Also, based on how fast logs get filled up and overwritten at our plant, “periodically” will mean twice a day. And we have over a hundred PLCs …)

What we really need is a system that can reliably separate the privileges of viewing current data, viewing logs, issuing commands, and updating code. That won’t be coming from the SCADA software makers (they still forbid changing the DBA passwords, as that would break their kludges). So perhaps what we need is an application firewall for Profibus etc.
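That privilege separation can be sketched at the protocol level. Profibus frames are awkward to show compactly, so the following hypothetical filter uses Modbus/TCP as a stand-in (there, the function code sits right after the 7-byte MBAP header): an application firewall could classify each frame as read or write and drop anything the client’s role doesn’t allow. The role table is invented for illustration:

```python
# Standard Modbus read-type function codes (0x01-0x04: read coils,
# discrete inputs, holding registers, input registers) vs common
# write-type codes (0x05/0x06 single, 0x0F/0x10 multiple).
READ_CODES = {0x01, 0x02, 0x03, 0x04}
WRITE_CODES = {0x05, 0x06, 0x0F, 0x10}

# Hypothetical roles: viewers may only read; operators may also write.
ROLE_ALLOWS = {
    "viewer": {"read"},
    "operator": {"read", "write"},
}

def classify(frame: bytes) -> str:
    """Classify a Modbus/TCP frame by the function code at offset 7,
    just past the MBAP header."""
    code = frame[7]
    if code in READ_CODES:
        return "read"
    if code in WRITE_CODES:
        return "write"
    return "other"  # diagnostics, file records, vendor-specific, ...

def permitted(role: str, frame: bytes) -> bool:
    """Would the filter pass this frame for this role?"""
    return classify(frame) in ROLE_ALLOWS.get(role, set())

# Read Holding Registers (0x03) request, and Write Single Register (0x06):
read_req = bytes([0, 1, 0, 0, 0, 6, 1, 0x03, 0, 0, 0, 10])
write_req = bytes([0, 1, 0, 0, 0, 6, 1, 0x06, 0, 0, 0, 1])
```

A real deployment would sit this logic in an inline proxy in front of the PLC network and would also have to handle fragmentation, serial encapsulation, and vendor extensions; the sketch only shows that the read/write split Roger wants is visible in the wire format.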

Cuneiform November 10, 2012 8:56 AM

This looks like a non-discovery of a non-problem to me:

  1. Password protection was not built in in the first place, i.e. the PLCs are open by design. Thus, this is not a discovery.
  2. “Attackers” still need physical access to the network, which means they might as well take an axe to the PLC.

If network access is not needed, unplug them. If network access is needed, you have to physically secure the network, or else someone could simply cut the cable in half.

Thus this can hardly count as a problem.

bob November 10, 2012 2:47 PM


“They were blocked from internet (outside world) access via their static IP address.”

Assuming that the attackers don’t know what they’re attacking and assuming that the attackers need to see the response to the stages of their attacks, that’s a very sensible approach.

Idiotic assumptions of course, but otherwise sensible.

Unless the attacker can directly manipulate ARP tables.

So just idiotic.
