New Siemens SCADA Vulnerabilities Kept Secret

SCADA systems—computer systems that control industrial processes—are one of the ways a computer hack can directly affect the real world. Here, the fears multiply. It’s not bad guys deleting your files, or getting your personal information and taking out credit cards in your name; it’s bad guys spewing chemicals into the atmosphere and dumping raw sewage into waterways. It’s Stuxnet: centrifuges spinning out of control and destroying themselves. Never mind how realistic the threat is, it’s scarier.

Last week, a researcher was successfully pressured by the Department of Homeland Security not to disclose details “before Siemens could patch the vulnerabilities.”

Beresford wouldn’t say how many vulnerabilities he found in the Siemens products, but said he gave the company four exploit modules to test. He believes that at least one of the vulnerabilities he found affects multiple SCADA-system vendors, which share “commonality” in their products. Beresford wouldn’t reveal more details, but says he hopes to do so at a later date.

We’ve been living with full disclosure for so long that many people have forgotten what life was like before it was routine.

Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

I wrote that in 2007. Siemens is doing it right now:

Beresford expressed frustration that Siemens appeared to imply the flaws in its SCADA systems gear might be difficult for a typical hacker to exploit because the vulnerabilities unearthed by NSS Labs “were discovered while working under special laboratory conditions with unlimited access to protocols and controllers.”

There were no “‘special laboratory conditions’ with ‘unlimited access to the protocols,'” Beresford wrote Monday about how he managed to find flaws in Siemens PLC gear that would allow an attacker to compromise them. “My personal apartment on the wrong side of town where I can hear gunshots at night hardly defines a special laboratory.” Beresford said he purchased the Siemens controllers with funding from his company and found the vulnerabilities, which he says hackers with bad intentions could do as well.

That’s precisely the point. Me again from 2007:

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers…. But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

With the pressure off, Siemens is motivated to deal with the PR problem and ignore the underlying security problem.

Posted on May 24, 2011 at 5:50 AM

Comments

Dave Van den Eynde May 24, 2011 6:12 AM

I’d like to point out that in most environments where a SCADA system is used to monitor and control an industrial process, civil engineering demands that the software is never able to put a machine in a situation where it becomes a danger to itself and to its environment. I know this because I worked for a time developing embedded control software, and I’ve seen firsthand that machines were designed in such a way that the software could not put them in a dangerous state: apart from the control logic there are mechanical failsafe systems that no amount of hacking could circumvent.

I’m sure there are cases where a dangerous situation could be created by doing something that is otherwise a routine job, but even then, if there’s a potential for danger to the public, there’s probably a manual check involved or a mechanical failsafe.

wiredog May 24, 2011 6:23 AM

IAW Dave above, but of course some of the systems don’t have proper controls, or it was too expensive to test them thoroughly, or all the interactions aren’t well known.

The embedded and industrial systems I programmed had no integral security on the assumption that if you had physical access then you could do whatever you wanted. Since they weren’t networked, or were behind strong firewalls, this wasn’t an issue. Unless, of course, someone hooked the system up to a modem to allow remote access for debugging and forgot to unhook it later.

BF Skinner May 24, 2011 6:41 AM

” personal apartment on the wrong side of town where I can hear gunshots at night ”

I recall a movie, Wargames by name, where the main character takes Ally Sheedy to consult with ‘these guys’.

They are in a datacenter. Not a stretch that hackers, back in the day, would aggregate around concentrations of processing power. But today everyone has a computer, right? So all the intruders are apparently spotty-faced children in free-fire zones.

Neg. Intruders can intrude into large-scale systems and networks because they are already working in large-scale systems and networks. They can defeat security controls because they have time to think about how they can be bypassed.

@Dave “a manual check involved ”
I ain’t so sanguine. Each manual check requires a person to perform that check. Each person adds to head count, which adds to the personnel cost center. “Efficiency” for a company drives to lower that cost. If an exec is told that they can’t be manipulated through their SCADA and they have a safety redundancy, what are they gonna decide? Eliminate the redundancy in favor of the efficiency gained.

Jonadab May 24, 2011 7:00 AM

One thing the researcher can do when he notifies the vendor, which can help somewhat, is to be clear that the secrecy expires on a particular date and the vulnerability will then become public.

However, this may require either tipping the vendor off anonymously or living in a jurisdiction where they cannot lawyer you into submission.

Also, it’s only worth doing things that way if you have any reason to believe the company will start working on a fix before the vulnerability becomes public. If based on their past behavior it is clear that they won’t, full disclosure is better.

Clive Robinson May 24, 2011 7:13 AM

As others above have commented, generally systems are designed to be “fail safe”.

However, this is not always possible. To use the Stuxnet example, the centrifuges could not be made fail-safe due to the way they work (they can be neither practical nor sufficiently efficient if you try to make them “fail safe”).

That is, there are many processes that cannot, for many reasons, be made “fail safe”. The usual solution is “encasement”: you surround the elements that cannot fail safe with some sort of mechanical containment to “catch the pieces”.

But there is a problem with this: sometimes containment only works up to a point (gases and liquids under pressure), and then the containment blows irrespective of how you design it. To prevent this you opt to allow “controlled release” of the hazardous substance.

We have seen this with the nuclear power plants in Japan.

All process engineers I know are acutely aware of this; we even have language for it. We talk about “failure modes” and systems being designed to be fail safe with 1, 2 or 3 faults. We also talk about “intrinsic safety”.

But it is only the design engineers talking amongst themselves who talk about the probability of events, and we have to decide that for $X you can have a 99.9…% safe system, where each extra 9 costs not ten times as much but often hundreds if not millions of times as much as the previous 9.

However, we always assume the “random failure mode”, not the “intelligent directed failure mode”. This we don’t talk of even amongst ourselves.

But we all know how to frig the systems to get a cascade failure that can make what you think would be the “worst case” even worse.
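
To put rough numbers on the cost of each extra 9, here is a purely illustrative sketch (the failure rate is invented, and it assumes independent random failures, which is exactly what an intelligent attacker does not give you):

    # Purely illustrative: probability that N independent safety channels
    # all fail on demand, given a made-up per-channel failure probability.
    def system_failure_probability(per_channel_failure: float, channels: int) -> float:
        return per_channel_failure ** channels

    if __name__ == "__main__":
        p = 0.01  # hypothetical 1% chance a single channel fails on demand
        for n in range(1, 4):
            print(f"{n} channel(s): failure probability = {system_failure_probability(p, n):.0e}")
    # 1 channel:  1e-02 (99% "safe")
    # 2 channels: 1e-04 (99.99%)
    # 3 channels: 1e-06 (99.9999%), but only if the failures really are independent

Each extra channel buys more nines on paper; paying for genuine independence (separate sensors, separate power, separate logic) is what drives the cost curve described above, and a directed attack goes after exactly the common mode that this arithmetic assumes away.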

Rob May 24, 2011 7:14 AM

IIRC wasn’t it the case that Stuxnet was able to attack the SCADA systems through the PCs used to upload/edit the software that was running the PLCs? The weak point was not the integrity of the systems themselves (pace Dave above) but the second-tier systems used to manage them. And in any case, there are many multi-tier, hierarchical control systems that ‘talk’ to the upper tiers to get the changes in set-points or objective functions needed to do JIT processing or on-the-fly economic optimisation.

Clive Robinson May 24, 2011 7:24 AM

@ Bruce,

What you have not mentioned is the thought processes behind some of these “keep it quiet” ideas that managers, bureaucrats and politicos have about “information”.

As physical beings we have inbuilt assumptions about our physical world.

Many think in terms of objects that are tangible, and they very clearly do not understand “information” and the knowledge that arises from it.

Even though we have had over a thousand years of people (politicos and church leaders etc.) repressing information in various ways, including murdering the messenger, many still believe, contrary to the historical evidence, that information can be “bottled up”.

Even those who do know sometimes take the attitude that they can repress knowledge long enough for them to “jump ship” before the proverbial hits the fan.

I hope you include this “working on false assumptions” in your upcoming book.

kingsnake May 24, 2011 8:03 AM

I honestly believe that PR and marketing are the 7th Level of Hell, and well below lawyers.

Fnord May 24, 2011 8:06 AM

Remember Kaminsky and the DNS cache poisoning attack? THAT was kept secret until it was fixed (well, mostly fixed).

Pete May 24, 2011 8:07 AM

The value of full disclosure is a function of rediscovery and opportunity costs. Secrecy is almost always a good idea because the chance of a good guy finding the same vulns as bad guys is pretty low – 7% by one estimate.

The question of whether patching the discovered vulnerability is worth it must be evaluated based on all the other things the same set of developers could be doing. Maybe they are patching other vulnerabilities. Maybe they are adding other control elements to the software. Maybe they are creating even more value for customers.

So, keep in mind that there is rarely reason to believe that good guys will find the “right” vulns (btw, good guys routinely keep vulns they find a secret even when they espouse full disclosure). Note that it is possible that SCADA systems have a small enough code base that the likelihood is higher than general purpose systems.

Without full disclosure, there would be orders of magnitude fewer incidents, much lower costs, and more focus on stopping the bad guys.

I was reminded recently that Gates’ Trustworthy Computing memo came out after Code Red and Nimda – incidents are clearly the driving force for fewer vulnerabilities.

JackO May 24, 2011 8:25 AM

Not only for chemical plants, power plants, and the like. Think about this: SCADA systems are used throughout DoD in manufacture, rebuilding, logistics, everywhere. What if intruders got into the system that bores tank barrels and introduced an over-bore situation, impacting the accuracy of the main gun? Same for jet aircraft engines, submarine reactors; the list is endless.

Mike T. May 24, 2011 8:42 AM

@Dave

The biggest problem with the Siemens vulns (both Stuxnet and these new Beresford ones) is that they allow you to modify that logic, remove the protections, and send the system into an unstable state.

The protections, because they are not enforced, are no longer protective.

Mike
Private Citizen

Jordan May 24, 2011 8:46 AM

@Pete

Vulns generally exist because someone made a mistake. What you are suggesting is that a product full of mistakes is what the customers bought, which is far from the truth.

A company is obligated, both by law and by ethics, to provide the best effort they can to fix their mistakes that can cause monetary or physical damage to people and customers.

Full disclosure is not about making things safer right that moment. It is about stopping companies and programmers from being lazy and creating mistakes that cost millions of dollars and lots of wasted effort, and the fact that you seem to think that’s a bad thing is comical.

As a programmer, I can tell you right now full disclosure is not only a good policy, but a necessary policy, and I would even go so far as to say the best policy, and the argument you presented is so full of holes that I feel you’re entirely missing the point.

moo May 24, 2011 8:52 AM

@Fnord:

That’s a great example, but it may be the exception that proves the rule.

Do you think SCADA vendors are nearly as enlightened? More to the point, how many SCADA systems are physically deployed out there, and how many of them get patched at all for discovered vulnerabilities? Even if Siemens fixed their software, I bet >50% of the deployed systems would still be unpatched and vulnerable to these same attacks a couple years from now, and that’s probably why DHS pressured him not to release the info.

Kevin Granade May 24, 2011 8:55 AM

@Pete,

  1. What is your “one estimate”?
  2. I think increasing security of the system provides pretty high “value for customers”. Also you are asserting that the business interests will act in good faith without the additional accountability that disclosure provides, where this is generally considered to not be the case.
  3. You are asserting again that there is “rarely” reason to believe that researchers find vulnerabilities that will be exploited, but this contradicts common knowledge on the subject as I know it and you provide nothing to back up your assertion.
  4. This is a completely insane statement.
  5. Are you saying that the only path for white hats to drive increased security is to cause incidents? In that case full disclosure will also have the desired effect, since then unrelated black hats are free to cause incidents that will drive vulnerability reduction.

Andre LePlume May 24, 2011 9:01 AM

What kind of pressure did DHS bring? This sounds an awful lot like prior restraint. Perhaps one of these researchers should grow a pair and tell the govt to piss off.

Jake Brodsky May 24, 2011 9:01 AM

Bruce, you and your readership might want to read what Dillon Beresford wrote on the SCADASEC e-mail list before you pass judgement.

This is so much bigger than just some crappy software…

Andre LePlume May 24, 2011 9:02 AM

Now that I’ve RTFA, I see that the researcher decided on his own and that the DHS “in no way tried to censor”. I retract my prior comment.

Orwell May 24, 2011 9:28 AM

Pete will realise his dream of protecting the inept.

Since neither intent nor legalities of the acts are factors, it is only a matter of time before most public security research is considered to be providing material support to terrorist organisations.

Kevin Granade May 24, 2011 9:56 AM

@Jake Brodsky

I’m not sure what judgement is being passed here that you think is undeserved. I just read the “[SCADASEC] Siemens” thread on the referenced ML, and while it does clarify that the researcher says he was not pressured through legal means, otherwise it is at least as bad as people here are expecting. My main takeaways from the thread:
1. Estimated time for closing the vulnerability: “who knows”
2. Stated impact of vulnerability according to researcher (in my words) “Total Ownage”
3. Stated impact of vulnerability according to Siemens PR: “irregularities in the products’ communication functions.”
4. General consensus from the ML that it is very likely that Siemens will not be addressing the vulnerability promptly.
5. Speculation that these vulnerabilities have been in the wild for three years.

BF Skinner May 24, 2011 10:05 AM

@Clive allow “controlled release”

Or uncontrolled release.
I’m thinking Bhopal in 1984 and, more recently, Phillips in Pasadena, TX in 1989.

OSHA findings (from Wikipedia, so we know it’s 100% truthful) on the Phillips explosion:

Lack of process hazard analysis;
inadequate standard operating procedures (SOPs);
non-fail-safe block valve;
inadequate maintenance permitting system;
inadequate lockout/tagout procedures;
lack of combustible gas detection and alarm system;
presence of ignition sources;
inadequate ventilation systems for nearby buildings;
fire protection system not maintained in an adequate state of readiness.

Additional factors found by OSHA included:
Proximity of high-occupancy structures (control rooms) to hazardous operations;
inadequate separation between buildings;
crowded process equipment;
insufficient separation between the reactors and the control room for emergency shutdown procedures

Another way to view these safety measures is cost vs revenue generating.
cost
cost
cost
cost
cost
cost
cost
cost
cost
cost
cost
cost
cost

I contend that if they aren’t getting the physical plant basics right, then SCADA automation of a flawed and unsafe design isn’t going to be any better. And management/owners are incentivized to ignore problems with code because they are only potential, abstract and unproven.

In the US there’s a well-funded, determined effort to gut regulation and oversight. If the USG won’t/can’t make ’em behave, they can shrug off any damages.

Look at Sony’s stock price. Company has been hacked a bunch of times and it’s not affecting value for long. BP continues to make profit AGAIN after the Gulf oil spill.

Spaceman Spiff May 24, 2011 10:28 AM

There is NO security through obscurity, as Bruce has so rightly pointed out many times in the past. Shame on Siemens!

GreenSquirrel May 24, 2011 10:38 AM

@BFSkinner

It just goes to show how share prices can bounce back quickly. Where I work, that is one of the drivers for a crazy policy on data classification based on what it might do to the share price. Generally the principle is they don’t care, because hardly anything will affect the share price…. head meet desk (rinse, repeat).

@Dave

“I’d like to point out that in most environments where a SCADA system is used to monitor and control an industrial process, civil engineering demands that the software is never able to put a machine in a situation where it becomes a danger to itself and to its environment. ”

Maybe true, however nearly every SCADA implementation I have encountered flies in the face of this demand.

I have encountered a handful of SCADA systems where, in every emergency situation, a human operator is required to approve or override decisions, but in all four of those systems the human operator makes a decision based on data provided by the SCADA system.

If the SCADA is compromised, then the human operator’s decision is compromised to the same extent.

BF Skinner May 24, 2011 11:04 AM

@GreenSquirrel “hardly anything will affect the share price”

At Shmoocon a couple years back someone, I think it was Bruce (the other one), said “Nobody cares, you all. Your parents don’t care. Your companies don’t care and the market doesn’t care. Look at Heartland’s share price.”

We’re trying to make an argument based on the Ponemon Institute’s estimate for cost per record per breach.

We are getting traction on those systems that have records but it doesn’t seem to apply to an ICS.

Dave May 24, 2011 11:50 AM

I have had some peripheral involvement with PC control of SCADA systems in mining. Even if a system is protected with mechanical failsafes, it is quite easy to imagine building a virus which would cause components to run below their breaking point but which could still cause damage. If every process in a large refinery operation ran 10% less efficiently than it should for a few hours or days, the costs could be immense.
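
As a rough, purely hypothetical illustration of the scale involved (every figure below is invented):

    # Back-of-the-envelope cost of a stealthy 10% efficiency loss.
    # All figures are hypothetical and exist only to illustrate the scale.
    throughput_per_hour = 500   # tonnes processed per hour on one line (invented)
    margin_per_tonne = 40.0     # profit per tonne in dollars (invented)
    efficiency_loss = 0.10      # components quietly running 10% below spec
    duration_hours = 72         # problem goes unnoticed for three days

    lost_profit = throughput_per_hour * margin_per_tonne * efficiency_loss * duration_hours
    print(f"Estimated lost profit: ${lost_profit:,.0f}")  # $144,000 for a single line
    # Scale that across every line in a large refinery, or across weeks,
    # and the losses climb fast, with nothing ever visibly broken.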

John May 24, 2011 11:51 AM

This is ridiculous. Having worked with Siemens PLCs myself on numerous machines and in plant management, I can say these vulnerabilities were known for quite a while, but nobody really cared even when alarms were raised before. Claiming that hackers won’t find these issues because they’ve been kept “secret” is kind of stupid when people that work with them every day know about them. I’ve even raised the alarm about them myself to project managers, but they really don’t care. And no one wants to do a proof of concept on a working million-dollar machine; not even UL/CSA test for that in their safety requirements.

I’ll also add that Siemens is not the only manufacturer of PLCs and systems that is vulnerable; it’s just that no one has mentioned it yet, or it’s buried somewhere.

Meanwhile the managers at these plants have no clue what these vulnerabilities are, so they really don’t put any effort into protecting themselves. The only real reason why these things were never hacked before is because no one was really interested in causing damage, yet. Eventually some disgruntled employee will do something and we’ll hear about it all over the news.

DoubtingThomas May 24, 2011 12:16 PM

Siemens and/or other providers of SCADA platforms have a mindset that created the platform with these vulnerabilities, and that mindset will continue to focus on short-term profits vs. long-term sustainability (i.e., a highly secure control system). This is similar to the emphasis that MS had early on (profit before security), and also similar to TEPCO’s corporate philosophy (image and profit before safety/security).

The fault in the reasoning is to assume that profit and quality are in conflict, but as Deming insisted, quality drives profitability. Here the quality of the product would include the robustness of the code/platform against malicious access.

Andrew Philips May 24, 2011 12:46 PM

Until you have a large number of vulnerabilities or some seriously embarrassing incidents, it’s quite difficult for the internal security people to get top-level management authority to whip the rest of the engineers into shape. Having worked for a very, very large s/w company doing just that, I can tell you from first-hand experience that a handful of nasty, high-profile, widely published security vulnerabilities can do wonders. I’m not proud we had them (the vulns or the disclosures), but the incidents created an internal environment that allowed us to balance the need to release s/w with the need to release more secure (patched) s/w. I’m proud of the changes we made (quarterly release trains published a year in advance on which security vulnerability fixes could ride).

There were many things my company did that I didn’t always agree with when it came to marketing our security. I never approved of that boasting. Then again, I was paid to be professionally paranoid and a curmudgeon.

When it came to security fixes, we did, however, strike the right balance. When researchers submitted security bugs, we were able to keep them up to date, allow them to track fixes, and let them know weeks in advance if the fix was going to be in a particular release. They weren’t always happy; sometimes it can be incredibly convoluted to fix a particular problem. Nonetheless, most were satisfied with the process: external researchers, customers, internal developers and our internal security staff.

deepcover May 24, 2011 2:57 PM

Are Siemens UK headquarters still in Staines? Pity the poor receptionist there answering the phone…

Davi Ottenheimer May 24, 2011 2:57 PM

@ John

“these vulnerabilities were known for quite a while, but nobody really cared even when alarms were raised before”

completely agree, with small caveat.

vulnerabilities in SCADA in the late 90s were widely discussed but not addressed because of two arguments:

1) the (mistaken) belief that there was separation between control and corporate/networked systems

2) the risk management model run by those who build the financial forecasts doesn’t see security as a risk

the first argument was laid to rest for good about five years ago. the second argument, linked to the first, is taking longer to be defeated because of residual beliefs in obscurity.

for what it’s worth, i may be biased on this because a long time ago when i was working on a related issue and reported it internally, i was hauled into the org lawyer’s office and told if the situation leaked i would be sued personally — i was threatened. fortunately, i called their bluff and survived.

@DoubtingThomas

“as Deming insisted, quality drives profitability”

deming’s ideas of product quality make a lot of sense. but a notable difference with SCADA devices is that they can be so focused on a unique purpose or so simple that judging their quality does not address security on its own. best to measure quality as a function of the whole environment working together and not just individual components in isolation…it would be kind of like measuring the quality of a keyboard in order to assess the profitability of a computer program.

Bob Roberts May 24, 2011 3:42 PM

I don’t think “infecting” a SCADA system is quite as straightforward as infecting a PC or server. You can certainly bug it so that motors spin at the wrong speed or direction, temp and pressure setpoints change, actuators move incorrectly and such. It would take very detailed knowledge of the physical process, its mechanical configuration and its SCADA representation to effectively target the process, especially if you are looking for subtlety. Knowing the configurations well enough to, say, cause a centrifuge to operate longer than desired at a resonant speed would require extremely detailed system knowledge. It is not something that malware could target unless it was a very sophisticated AI, and that would be well beyond anything I have ever read about. The amount of inside information needed for such an attack implies access to the system much more intimate than what is needed to bypass any security.

no1axed me May 24, 2011 4:40 PM

Most meetings go…

Salesman: No, we don’t have TIME to fix all that! We’ll lose money! Add Super Chrome instead!

Engineer: But…

Salesman: I SAID SHUT UP!!

Big Cheesehead: You heard the man. SHUT UP!!

moo May 24, 2011 6:40 PM

About the safety (or lack thereof) of SCADA systems.

There’s a 25-year-old Tom Clancy novel called “Red Storm Rising” about a fictional World War III between the USSR and Western powers. The USSR is forced into starting the war by a small group of terrorists who blow up a key oil refinery in the USSR, threatening to wreck its economy. They didn’t use viruses or anything; they just shot their way in with machine guns and then their insider started messing with control panels (opening and closing valves until all the pipelines ruptured and caught fire, or whatever).

Realistic or not, if you run some kind of manufacturing plant, one threat model worth thinking about occasionally is something like this: “if bad guys had the co-operation of a couple of insider experts from my plant, who helped get them physically inside and then they had one hour to fuck around with all of the controls before the cavalry could get in there to stop them, how much damage could they do?”

If your answer is something like “they could pretty much destroy this multi-hundred-million-dollar facility”, or “they could dump hundreds of thousands of gallons of highly toxic materials into the environment”, or even just “they could cause some kind of injury to large numbers of our workers” then maybe you need to worry about that.

Now mix in the angle of remotely-pwned SCADA systems and ask how much damage could be done just with those? Assume your adversaries have the co-operation of one of your inside experts who knows exactly what your SCADA systems do and how they do it. Assume they have a month to develop their attack and some help to covertly test it.

If I was a country willing to engage in covert economic warfare against my enemies, I wouldn’t even necessarily want to break things. Even causing work stoppages while employees wasted time trying to solve some weird intermittent problems with the SCADA machine might be worth the effort.

BF Skinner May 24, 2011 7:26 PM

Remember when Ford did the cost/benefit analysis on the Pinto and decided it was cheaper not to fix the life-threatening risk of a rear-end collision turning the Pinto into a Molotov cocktail, and to just pay off the insurance claims instead?

that was 35 years ago!

We haven’t moved any farther forward since then than this?

Dirk Praet May 24, 2011 8:20 PM

The behaviour of Siemens and quite a few other players in this field stems from a general lack of corporate security culture, or a deeply misguided approach to it.

Traditionally, the business side – i.e. sales, management and the CxO level – considers almost anything security-related a business inhibitor rather than an enabler. In their perception, it’s an aspect that costs a lot of money for little quantifiable return on investment, especially in the short run. In my experience, loss of reputation is far less of a driver than mandatory regulation, compliance and accountability (e.g. Sarbanes-Oxley, ISO/IEC 27002, PCI, Basel) are.

Even in such a context, it is essential to have an educated CEO actively sponsoring a culture of security awareness, policy and procedures, advised by a seasoned CSO reporting directly to him. Not only is the latter required to be a subject matter expert in quite a few security domains, he also needs exceptional political skills and must be a master at risk management to deliver, on a daily basis, irrefutable business cases in which ROI stands for risk of incarceration as much as it stands for return on investment or loss of business.

Unless security flows top-down in an organisation, it is hardly ever going to be done right. Nobody in product marketing, sales or middle management listens to paranoid engineers getting in the way of achieving their targets unless there are proper procedures in place to deal with red flags being waved, including but not limited to direct escalation channels to upper management. In the absence of such a framework, or with insufficient knowledge of it, and as correctly pointed out by Clive, it is inevitable that many people will fall back on assumptions and solutions that offer an easy way out: secrecy, cover-up, understatement or denial of the problem, transferring risk or blame … I think Sony’s recent attempt at blaming Anonymous for “enabling” hackers to breach their systems was a fine example thereof.

IMHO full disclosure is an indispensable part of proper incident response management. Ignoring or downplaying vulnerabilities, threatening those who disclose them with an army of lawyers, or plain lying about it are tokens of incompetence and an insult to customers and the public in general. It’s just as unacceptable for a corporation as it is for an individual to run someone over with his car and then make a run for it instead of pulling over and taking responsibility.

This is not to say that any vulnerability or breach should be published in detail as soon as it is discovered. Any serious vendor and their customers have (or should have) procedures in place governing version control, release, distribution, maintenance and patch management. Reasonable time should be allowed for corporations to properly deal with newly discovered vulnerabilities. Failure to react merely shows arrogance and stupidity, which we all know in the long run never goes unpunished, even if you have the law on your side.

Pete May 24, 2011 9:28 PM

Wondering if disclosure is doing more harm than good? You’re in good company! Here is a special reading list for you:

http://spiresecurity.com/?p=1172

@Jordan –

  1. Vulns are inactive mistakes and have no impact without the intelligent adversary. It is also generally assumed that no non-trivial software will be vuln-free.
  2. Companies are constantly fixing “mistakes” even though they have no impact without the intelligent adversary.
  3. There are more vulnerabilities created every day than there are vulns found/fixed. I would say your efforts are not working. (Not comical). Btw, programmers are human (except for you?)
  4. Not missing a thing, thanks.

@Kevin –

  1. http://spiresecurity.com/?p=127
  2. I disagree. I think people want fewer compromises over “more secure” software.
  3. You are confusing cause and effect.
  4. No, no it’s not if you think about it, rather than being led astray by the crowd.
  5. No, I am not saying that. I am saying that incidents do and will happen, that disclosure (and discovery) does nothing to reduce them, and that people like you are being led astray (albeit unknowingly); incidents will have an even stronger effect on software development, as shown by TwC.

Not convinced? Read more here: http://spiresecurity.com/?p=1172

Dave May 25, 2011 12:10 AM

“civil engineering demands that the software is never able to put a machine in a situation where it becomes a danger to itself and to its environment”

That’s “accidentally”, not “deliberately” create a danger. I can take any number of benign actions that aren’t protected against (because they’re benign) and combine them in an unexpected way that causes harm. Even a single action can be misused. Want to shut down a major city with one single action? Program a (hypothetical) embedded toilet device to flush every toilet in the city at the same time.

Jay May 25, 2011 12:58 AM

You can’t risk-manage against an intelligent attacker. What’s the probability that a buffer overflow will trip itself? 0.0% (or close to it). Someone else – well, what’s the probability that your PR suits will piss off Anonymous?

RobertT May 25, 2011 3:03 AM

@Doug C

I think what’s also being ignored is that the SCADA developer/operator did not pick up some wild random virus; rather, he was targeted. You can be certain this virus does not start life by trying to infect 1000 other random machines, nor send 1GB of data to some ROK website. So until the exploit becomes known there are no tools to even detect the infection. This makes the developer/operator the ideal vector for an air-gap-jumping virus, so by definition they will be personally targeted. The infection may result from a “sneak-and-peek” or maybe from deliberately infecting websites that the operator is known to go to. Sometimes malicious emails from friends.

Bottom line: all actions are intentional.

The same thing goes for the SCADA side of the equation. It is possible that the developer did a very good job on the control system, but again he is thinking of accidental combinations occurring, whereas the attacker is always trying to find the “deadly combination”. There is nothing coincidental about the SCADA attack plan or attack vectors.

Sean Farrell May 25, 2011 3:38 AM

Basically, chemical plants are designed to be safe. This means that the operating process, with no external physical influence, will “only” fail in a way that will not endanger the surrounding populace. The reason is that the programming of PLC devices is assumed to be faulty or to just plain glitch. But the main problem is that this is only checked in industrialized nations, and there are probably cases where it just is not true.

Even if you assume that the plant fails “safely”, the damage is huge. Just stopping chemical plants can result in substances cooling and hardening. This means you need to replace the affected components. The damage can easily be hundreds of millions per plant.

Imagine that a hacker could bring all of one company’s plants to a halt. That company can close up shop. Or imagine a hacker who disables all fuel refineries. How do you think the economy would react? These things have real-world impact.

Pierre May 25, 2011 7:09 AM

The day vendors no longer get paid by intelligence agencies for backdoors called “security holes” that are patched via “automatic remote updates”, security researchers will have far fewer opportunities – and the world will be much safer.

Given that such a world would make today’s richest less rich and powerful, it will not materialize any time soon.

Backdoors are everywhere – from cars to phones, game consoles, computers, video boards, network adapters, and even cameras… because someone is paying a premium for these extra (invisible) features that consumers are not aware of.

Dirk Praet May 25, 2011 11:34 AM

@ Pete

I read your blog post at http://spiresecurity.com/?p=1172.

You are making a couple of valid points in terms of accusing certain individuals and companies of being in vulnerability research and disclosure for their own profit and competitive advantage rather than for the common good. I also understand that disclosure may not always be a good thing for the scores of users out there who are blissfully unaware of what’s going on, never bother to patch their systems or inform themselves about safe internet conduct. My personal take on that is very simple: being able to drive a car is not enough to get you from A to B safely. When failing to understand the meaning of a red light or the absence of regular car maintenance gets you into trouble, you have no one to blame but yourself. It’s an entirely different thing when your car blows up on you and the vendor has kinda neglected to inform you of a known defect or has put in zero effort to research/fix it because it hardly ever happens.

You totally lose me when claiming that in absence of any vulnerability disclosure “people would work harder to further the goals of trusted computing because the stakes were higher and more funds were available”. I genuinely have no idea what makes you think that as it is completely inconsistent with anything I’ve ever seen in my 27 years on the job.

With the exception of a number of military and government agencies, I have rarely ever worked at organisations where management actually gave a toss about security until they got a gun against their head, and to both temples. Without going into too much detail as to the hows and whys, let’s just stick with the Siemens example at hand. According to your theory, it would have made sense for Siemens to have launched a massive and well-funded research program to make their SCADA systems more secure once Stuxnet had been dissected. Maybe they did, but the newly reported vulnerabilities at least cast a serious shadow of doubt over their efforts. Microsoft only started taking product security seriously when they got blasted time and time again over scores of exploits and vulnerabilities published by 3rd parties, to the point that it became a liability for further market penetration at high-end customers.

De-regulation and self-regulation just don’t work in corporate environments. I guess that’s only one of the lessons learned from the financial crisis. Just like governments without oversight don’t work. The former are in it for the money, the latter to perpetuate themselves and their legacy. However thought-provoking, the approach you are advocating IMHO has no corroborated basis whatsoever in real life. It would only add to digital darwinism, where end-users are tricked into a false sense of security, corporations have zero incentive to get security right, and governments and gangsters alike can freely shop at a flourishing black market for vulnerabilities and exploits to further their goals of robbing ordinary people of their money and privacy.

Nick P May 25, 2011 4:15 PM

@ Dirk Praet

That was a slam dunk, Dirk! I don’t think anyone has ever said it better. I’d like to add that a side effect of vulnerability disclosure was that companies actually improved their development processes in a way that reduced defects. Operating system vendors and processor manufacturers also started including features that help mitigate certain kinds of attacks, like Intel’s NX bit and Windows Integrity Controls. The move to managed code frameworks like .NET was also partly influenced by the disclosure trend, and this has had more benefits than just reducing buffer overflows. Finally, the constant stream of news bites we get from the vulnerability researchers is excellent security awareness education for developers, network administrators, and other IT professionals. It’s hard to forget about the security implications when seeing yet another data breach story in the paper. Of course, they can still ignore it outright.

All in all, I’d say vulnerability disclosure is a good idea so long as the company gets reasonable time to patch it. If we trust them, then it becomes an externality. They just don’t care. The Ford Pinto case is always my favorite illustration: they knew hundreds of people would burn to death and decided the cost of lawsuits would be less than fixing the product. More than companies usually do, they had to add to the bottom line and didn’t care if it took a breach of ethics (or mass murder). A new, soon-to-be-published vulnerability is an incentive to fix a product because negative press can harm the company. Blind faith in corporate pronouncements has never accomplished this, but it has gotten people killed thousands of times.

Richard Steven Hack May 25, 2011 5:43 PM

Bob Roberts: “The amount of inside information needed for such an attack implies access to the system much more intimate than what is needed to bypass any security.”

No one is suggesting that some general-purpose hacker can build Stuxnet without some Siemens knowledge. That is a given. It’s not relevant to the discussion of whether SCADA systems in general can be hacked.

If someone (say, Israel and the US) is motivated to hack a SCADA system, Stuxnet proves they have or can find hackers to do it. The same applies to Al Qaeda or any other group with more than a little money. Someone mentioned here recently that South American drug lords were able to hire technical people to design their own secure communications systems.

Bottom line: Anyone with money can hire someone equally greedy who has the knowledge to do what they want done. Whether insider or technically competent outsider doesn’t really matter. And the person doing the design work for the hack isn’t necessarily the hacker doing the programming.

Bottom line of all this: Once again, there is no security. Suck it up.

Maybe I should just expand that one more level up to: Life is not safe. Suck it up.

Also relevant is the dictum: It’s always management’s fault. And “management” one level up merely means human primate hierarchy needs. People need to take orders from other people to get anything done. Those other people by definition are not the experts in doing anything except giving orders. So they will always screw up. Always.

This is one reason why the state will never achieve the peace and security that is the alleged justification for its existence.

Also, people who talk about “regulating industry” just don’t get it. Industry has the money; the state has politicians that want money and power. Put the two together and no amount of “regulation” will ever achieve anything beneficial to the citizen. It’s impossible.

The problem is not “de-regulation”, the problem is the existence of “corporations” which are by definition creatures of the state, licensed to exist by the state and the primary revenue source of the state. Remove the state and there can be no such thing as a “corporation” – only a “company” with personal liability and considerably more competition.

Of course, that’s impossible now that we’ve had close to two centuries of “corporations”, not to mention ten thousand years of the state. The “real world” of human behavior has been so distorted by these facts that it is impossible to produce “change” – only absolute destruction of the existing system.

Which, fortunately, is around the corner (i.e., probably within the next 50-75 years) when the Transhuman paradigm becomes dominant through pure technological impact.

Suck it up.

Pete May 25, 2011 6:19 PM

@Dirk –

First, thanks for your reasonable and measured response – these debates can sometimes devolve into ad hominem attacks which doesn’t help anyone.

I have some comments for you:

  1. Analogies are horrible, and possibly car ones most of all. When you factor in an intelligent adversary, you would need to blame the driver for being carjacked at gunpoint or having their window broken and car stolen, etc. Also, do you think the same things about furnaces, plumbing, electricity, landscaping, cement work, construction, etc.? Comparative advantage has its, uh, advantages you know 😉 And I really want my 70-something mother to be “allowed” to socialize with her grandchildren on the Internet without being held “responsible” by folks for not knowing the ins and outs of her system.
  2. It is tricky to predict what would happen without vulnerability disclosure (let’s call it ‘no-disc’), but security pros in general talk about exploits and compromises – that’s the whole reason they do what they do, right? To try to get in front of that problem. What I suggest is that if and when those compromises happened in the no-disc alternate reality, they would be much more powerful incentives for vendors to address their security weaknesses overall. That is, exploits are a better way to make the point. In fact, I would suggest that the reason Bill Gates sent out his Trustworthy Computing memo was because of the exploits (just after Code Red and Nimda hit) rather than vulnerability disclosure which was simply annoying to them. I would support legislation/regulation of vendors if they didn’t respond promptly to incidents in this no-disc alternate reality.
  3. Before you go thinking how cruel I am to allow exploits to happen, I would like to point out that THERE IS NOTHING WE ARE DOING NOW TO PRECLUDE THIS. In fact, we have evidence of “undercover exploits” occurring periodically throughout our history – starting with the Morris worm. WMF is a great recent example. There are simply too many vulnerabilities to successfully identify them all before the attacker, and given the ample evidence that attackers are simply using disclosed vulnerabilities against us, I don’t think it is worth it. In addition, our actual chances of finding the ‘right’ vulns before the bad guys are almost negligible in the sea of all available ones.
  4. So, the next time you go to patch a vulnerability, you should say to yourself, “what about the unknown ones?” What about the vulns that we know exist in our systems (but haven’t specifically identified yet) and might be disclosed next month, next year, or never and yet we remain vulnerable. Talk about false sense of security, I believe there is a great case to be made that patches provide FSOS.
  5. In the no-disc world, patches would be much less frequent and we could allocate those resources to the truly nefarious attackers that use those undercover vulnerabilities we all believe are being used. This would allow us to employ and perfect preventive controls unrelated to patches to protect ourselves. We will never have complete information about specific vulns on our system, so knowing or not knowing about 1, 2 or a dozen should not have such a significant impact on our security posture. It’s like junk food – tastes good but really isn’t healthy.

  6. Regarding guns to heads – we can’t totally eliminate risk, so I am not clear why everyone should have your individual level of risk aversion? If you accept that different people can have different tolerance levels, then it opens the door to really getting down to managing risk.

  7. Regarding vendor response – my biggest point is that we rarely have a clue about what is on the plate of the developers involved, and by artificially inflating the importance of some specific vuln, it may be at the expense of some others. The other problem we have is that nobody can define when enough is enough – vuln-seeking is a big black hole of looking. Even in the face of us recognizing that vuln-free software is probably impossible, our only threshold is “well I found one so you are negligent.” I simply don’t believe this is true. The developers I know are some of the smartest people I know and I believe they are working hard to minimize the number of vulns. Any regulation I might support (in the no-disc world ;-)) would NEED to have some threshold for vuln seeking by the vendor.

  8. I don’t have nearly the jaded outlook about people that you seem to indicate in your response. I think most people are trying their best and there are a few ‘rotten apples’ that give entities the reputations they get.

Hope this helps! Thanks again for your thoughtfulness.

I don’t like hijacking comment threads (sorry, Bruce) so please contact me at petelind@spiresecurity.com or @SpireSec on Twitter (or my blog) for any follow-up.

Nick P May 25, 2011 6:24 PM

“Which, fortunately, is around the corner (i.e., probably within the next 50-75 years) when the Transhuman paradigm becomes dominant through pure technological impact. ”

I see the change coming in a darker way. Human nature means it’s more likely to come to a violent or disastrous end than some worldwide Enlightenment. That is, with all the momentum the nation-states have.

Dirk Praet May 25, 2011 9:03 PM

@ Pete

I don’t do ad hominem attacks unless physically threatened or assaulted. And in that case the issue is generally transferred to a couple of really mean Chechen neighbours with blowtorches 😉

1) I believe that in this particular case the car analogy is spot-on. Carjacking, theft and the like are just other risks that can be mitigated, albeit never entirely avoided. Both my 75-year-old mom and early-teen nieces run on decently set up, secured and maintained systems. They have been thoroughly briefed on the most common dangers and pitfalls of accessing the internet. They’re not giving me half the headaches of other relatives and friends.

2) Exploits are indeed a better incentive than vulnerabilities. Reverting to the car analogy once more, I believe preventive maintenance is a better practice than waiting to have your brakes fixed until you hit a tree. Most companies nowadays do pay people like us to prevent stuff from happening rather than just cleaning up the mess after the fact.

3) Of course it is impossible to discover and protect against every vulnerability. That’s why we strive to build resilient and redundant systems and architectures. Factoring in an active strategy against both known and unknown vulnerabilities is just another layer of defense in depth, and in my opinion well worth the effort. How far you take it depends on a case-by-case risk and business impact analysis.

4) Every vulnerability patched is just one potential attack vector less to worry about. I’d rather have 2 out of a 100 taken care of than none at all.

5) The freed-up resources could indeed be re-allocated. That’s what you would do. And me too. The more likely scenario in a corporate context is that these budgets will end up becoming part of management bonuses or being used to reinstate the corporate jet program for executive travel.

6) Risk management probably means the same to you as it does to me. The outcome of what needs to be avoided, mitigated, transferred or accepted will always depend on a given context. Both of us will also be using generally accepted, industry-standard best-practice methods. From my own experience, risk management strategies in many corporate environments boil down to: how can we hide it, how can we cover it up, and who can we blame?

7) As correctly pointed out by Nick P., exploits and vulnerability disclosures have been some of the major drivers for companies to embrace new methods of product development, managed code frameworks, lifecycle management and service delivery in which security is an intrinsic part of the process instead of something bolted on afterwards. Having done quite some programming myself, I’ll be the last person to finger individual developers or teams, knowing only too well the constant pressure they’re under: being understaffed, making deadlines and keeping up with new methods and technologies. But when it comes to code scrutiny in search of defects or vulnerabilities, the simple fact is that a 3rd party is generally in a better position to do this than the coder himself.

8) We are all products of our observations and experiences. Perhaps yours have been better than mine in this field, but I have seen way too many instances of a few rotten apples infecting the whole basket because they were the ones rising to fortune whereas the clean ones were left behind.

If you want to discuss further, please ask Moderator for my email address.

Nick P May 26, 2011 4:32 AM

@ Clive Robinson

Definitely. Good catch. Although I do say I’m somewhat flattered that spammers consider my posts attention grabbers. Now if I could only get them to start payin’ those royalties! 🙂

Richard Steven Hack May 26, 2011 7:08 AM

Nick P: “Human nature means it’s more likely to come to a violent or disastrous end than some worldwide Enlightenment. That is, with all the momentum the nation-states have.”

That’s the other alternative…:-)

And I wasn’t referring to an “Enlightenment” – that would be the Extropian notion. My “radical Transhumanist” notion is: 1) someone figures out the necessary tech (either aboveground or underground) and uses it; 2) humans try to stomp it out of existence, leading to 3) humans get stomped out of existence.

There are only four possible outcomes of a Transhuman future:

1) Humans try to destroy the Transhumans and are destroyed instead;

2) Humans are transmogrified by Transhumans whether they like it or not (there won’t be any complaints once they are Transhuman, by definition);

3) Transhumans ignore humans and go their own way leaving the chimps to destroy themselves; and

4) (the most likely outcome) – all of the above; some humans get exterminated, some get transmogrified, some get ignored.

There are of course the possibilities that humans will destroy themselves before Transhumans manifest, or humans may actually manage to prevent Transhumans from manifesting. The latter can only happen with total – and I mean total – control of technology worldwide which is fairly unlikely IMHO.

As for the sub-topic of disclosure vs. no disclosure, the problem of software security sucking and vulnerabilities being legion is a direct result of the extremely poor way software is engineered and designed – or rather, of the fact that it isn’t “engineered” or “designed” in any way a real engineer of physical things would recognize.

Anyone here ever read the books “Why Things Don’t Work”, and “Design for the Real World”?

I just spent the last eight hours trying to build a new PC for a client. It still can’t detect two of the hard drives and only 16GB of the 24GB of RAM in it.

There has to be a better way. Computer hardware and software today absolutely suck. It’s all “consumer level” junk on a design par with an Edsel.

Dirk Praet May 26, 2011 7:48 AM

@RSH

Your notion of transhumanism reminds me of the works of Pierre Teilhard de Chardin. A really recommended read.

Dirk Gebert May 27, 2011 3:51 AM

After just reading your comments for a while, I would now like to bring in a point of view from Siemens.

I’m Dirk Gebert and I am System Manager for Security on the Siemens Industrial Automation Systems team. So I work on this and other security topics/products.

I fully agree that security is an important point for SCADA systems. However, the reported bugs are not related to a SCADA system but rather to a PLC family that is used for small automation solutions (e.g. small machines). These components are typically not used within SCADA systems. Nevertheless, it’s beyond all question that we will fix these issues as soon as possible.

We are working on the firmware update and you can find ongoing updates on this issue on this website: http://www.siemens.com/industrialsecurity. Please let me know if you have questions that are not answered with these updates and I will do my best to address them.

Dirk Gebert May 27, 2011 5:01 PM

Just a short update to my previous comment: the PLC firmware update is complete and is currently running through our internal system test. To provide an additional test by an independent institution, we have already sent this version today to ICS-CERT for validation.

Dirk Gebert May 30, 2011 10:41 AM

@asd
I’m not really sure about the intention of your question. Our PLCs use a 24 VDC power supply.
So what should happen, or what should be prevented, if somebody puts a capacitor in parallel with the PLC?

todb May 30, 2011 1:19 PM

1) Dillon is a smart guy and he is determined to find bugs. However, he is neither the smartest nor the most determined human on Earth. I find it amazing that Siemens and DHS are deciding that he is the first and only person to discover these, and not merely the most recent.

2) Dillon lives about 2 miles from me. It is absolutely not the wrong side of town. Having lived in Oakland and Los Angeles, I’m pretty certain Austin doesn’t have a wrong side of town. 🙂

asd May 30, 2011 4:27 PM

@Dirk Gebert, I just thought that if the crystal that gives the timing to the PLC was slowed down, it might have negative effects.
Adding a cap to the power line might affect supposedly flat DC power supplies down the line and alter the timing of the crystal.
Just a thought.

d1n May 30, 2011 9:15 PM

You can’t install that needed update unless you buy the proprietary Siemens Simatic MMC card with their special partition; the card costs somewhere around 250 euros, or $357 USD, plus shipping. So, instead of ensuring that all Siemens customers get these needed updates, Siemens is forcing people to buy this memory card just so they can install the firmware image.

That is completely unethical!

Clive Robinson May 30, 2011 11:00 PM

@ d1n,

“That is completely unethical!”

True, but it is also the consequence of doing business “at the lowest price”.

Flip the issue the other way up: the person making the original purchase chose (for whatever reason) to buy a device running (known-to-be-fallible) software without the ability to upgrade that software.

We saw this discussion a number of years ago, when Apple pushed out computers that did not have floppy drives, so standalone software installation was difficult. Likewise when floppy drives stopped being supplied as standard on PCs.

Or later, when a number of well-known computer manufacturers sold PCs with read-only optical drives, when the first thing the OS asks you to do is make backup OS-reinstall optical disks before you do anything else….

The argument boils down to: what do you get for your money, and, secondly, do you understand the consequences as a purchaser?

That is, if you want the “lowest price”, then you have to accept that you will get a certain minimum level of hardware that does not have some features you might not want now (or ever).

There was an old story I heard back in the 1980s about the ethics (or lack thereof) in US car sales. Apparently some “dealers” considered wheels and tyres as optional extras when it came to selling at the lowest possible price, but would throw in a tank full of petrol/gas…

Getting back to these programmable logic controllers: they tend to be used in a very price-sensitive market with long project times. I’ve known people lose money on contracts simply because they had to quote low in a different currency, and in the length of time the contract took to get to FAT sign-off and payment, the exchange-rate change had wiped out the profit. Similar has happened even when the quote was in the same currency as the contractor’s but not that of the equipment manufacturer.

And this “contract build” is where the problem of low-spec parts originates. As the end company setting up a new plant, you issue a specification on which contractors bid. Unless you have real in-house experts, the contractors have a knowledge advantage over you. Arguably it is this expertise you are paying for, but as with all things in life it can be, and often is, a double-edged sword.

That is, when you issue a contract based on your specification, you can only put into the contract and specification what you can foresee, plus some contingency. If you make the contingency too broad, the bid prices could go up exponentially; if you try to enforce it against the “lowest price bid winner”, they may well just go out of business. Either way you lose.

Arguably there are many systems out there for which it does not matter what security vulnerabilities there are in the PLCs, because they will never be upgraded or changed except on breakdown and are run as standalone systems without external connectivity. Should the price of such systems be made uneconomically expensive because they are forced to absorb the security costs of others, who do have to upgrade because they do have external connectivity?

Andrew2 June 1, 2011 2:24 PM

There’s a significant difference between “giving the vendor time to fix the problem” and “keeping it secret until a bad guy finds it.” The difference isn’t binary, however.

I believe the proper response to a request to keep quiet must be to give a deadline. This keeps the pressure on, but still avoids opening a window of public-knowledge vulnerability.
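For illustration only, here is a minimal sketch of what such a deadline policy might look like in practice. The 45-day grace period, the function names, and the extension mechanism are all assumptions made up for the example, not anything Andrew2, Siemens, or ICS-CERT has specified.

```python
from datetime import date, timedelta

# Hypothetical coordinated-disclosure tracker (illustration only).
# The vendor gets a fixed grace period from the day the bug is reported;
# after that, or once a patch ships, the details go public.
GRACE_PERIOD_DAYS = 45  # assumed figure, not a standard cited in this thread

def disclosure_deadline(reported_on: date, extensions: int = 0) -> date:
    """Date on which details will be published, including any agreed
    extensions (each extension adds one more grace period)."""
    return reported_on + timedelta(days=GRACE_PERIOD_DAYS * (1 + extensions))

def should_publish(today: date, reported_on: date, patched: bool) -> bool:
    """Publish once the vendor has patched, or once the deadline has passed."""
    return patched or today >= disclosure_deadline(reported_on)

# Example: a bug reported on 1 May 2011 with no patch goes public on 15 June 2011.
print(disclosure_deadline(date(2011, 5, 1)))                               # 2011-06-15
print(should_publish(date(2011, 6, 20), date(2011, 5, 1), patched=False))  # True
```

The point is simply that the clock starts when the report is made, so the vendor cannot sit on the bug indefinitely.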

@Clive re: physicality of information

You may be suffering some bias here. By their very nature successful attempts to bottle up information are not widely known. It might just work most of the time, and we’d never know.

Clive Robinson June 1, 2011 5:52 PM

@ Andrew2,

“You may be suffering some bias here.”

That is entirely possible, and yes, it may also be the case that some people have bottled up information successfully.

However, is it likely they have done so for a long period of time?

One thing we do know about humans in general is that large segments of the population are not particularly good at keeping secrets, whilst others are.

For instance, many crimes are solved not by “Sherlock Holmes”-style detective work, but by the criminals telling people how clever they are, etc. Within short order the police get to hear about it, and they then focus on the individuals concerned.

Irrespective of “bigging it up” to our peers with the “if you knew what I know…”, we know that accidental disclosure of information happens all the time as a consequence of using it.

To be honest, I’ve been told so many things in my time that are supposed to be secret that I can’t even remember half of them (I think 😉 ). Which means it’s quite likely that I’ve remembered or rediscovered technical information, failed to remember that it’s supposedly secret for whatever reason, and thus used it in another project etc.

Secrets also reveal themselves by the hole they leave behind. That is, the discovery process tends to be sufficiently noisy that it can be observed from afar, and when a discovery becomes secret the noise suddenly stops rather than dying away naturally as it otherwise would.

Another way to look at it: secrets are like stones on the bottom of a muddy river. You can’t see them directly, but their presence causes vortexes and pressure waves that are visible on the surface of the water, revealing that they are there.

It is being able to see such things that makes a good investigative journalist or intelligence officer, and it is a part of what Bruce calls “hinky”.

However, I am quite sure some secrets go to the grave with some people: in part because that’s the type of person they are, in part because the secret is of limited interest to others, and in part because the secret is incomprehensible to most people.

Peedee Pirate June 10, 2011 8:06 PM

@Dirk Gebert–

Dirk said — “I fully agree that security is an important point for SCADA systems. However, the reported bugs are not related to a SCADA system but rather to a PLC family that is used for small automation solutions (e.g. small machines). These components are typically not used within SCADA systems. Nevertheless, it’s beyond all question that we will fix these issues as soon as possible.”

(Please see the excerpt below from the press release issued when the S7-1200 family was introduced.)

“The Simatic S7-1200 also includes an integrated 10/100 Mbit Ethernet communications port with Profinet protocol support for programming, HMI/SCADA connectivity or PLC-to-PLC networking. Traditional controllers often require a separate add-on module for Ethernet communication, which adds cost and creates a larger footprint.”

http://www.sea.siemens.com/us/Products/Automation/S7-1200-channel/Documents/S71200USPressRelease.pdf
