Schneier on Security
A blog covering security and security technology.
October 2, 2007
Staged Attack Causes Generator to Self-Destruct
I assume you've all seen the news:
A government video shows the potential destruction caused by hackers seizing control of a crucial part of the U.S. electrical grid: an industrial turbine spinning wildly out of control until it becomes a smoking hulk and power shuts down.
The video, produced for the Homeland Security Department and obtained by The Associated Press on Wednesday, was marked "Official Use Only." It shows commands quietly triggered by simulated hackers having such a violent reaction that the enormous turbine shudders as pieces fly apart and it belches black-and-white smoke.
The video was produced for top U.S. policy makers by the Idaho National Laboratory, which has studied the little-understood risks to the specialized electronic equipment that operates power, water and chemical plants. Vice President Dick Cheney is among those who have watched the video, said one U.S. official, speaking on condition of anonymity because this official was not authorized to publicly discuss such high-level briefings.
More here. And the video is on CNN.com.
I haven't written much about SCADA security, except to say that I think the risk is overblown today but is getting more serious all the time -- and we need to deal with the security before it's too late. I didn't know quite what to make of the Idaho National Laboratory video; it seemed like hype, but I couldn't find any details. (The CNN headline, "Mouse click could plunge city into darkness, experts say," was definitely hype.)
Then, I received this anonymous e-mail:
I was one of the industry technical folks the DHS consulted in developing the "immediate and required" mitigation strategies for this problem.
They talked to several industry groups (mostly management not tech folks): electric, refining, chemical, and water. They ignored most of what we said but attached our names to the technical parts of the report to make it look credible. We softened or eliminated quite a few sections that may have had relevance 20 years ago, such as war dialing attacks against modems.
The end product is a work order document from DHS which requires such things as background checks on people who have access to modems and logging their visits to sites with datacom equipment or control systems.
By the way -- they were unable to hurt the generator you see in the video but did destroy the shaft that drives it and the power unit. They triggered the event from 30 miles away! Then they extrapolated the theory that a malfunctioning generator can destroy not only generators at the power company but the power glitches on the grid would destroy motors many miles away on the electric grid that pump water or gasoline (through pipelines).
They kept everything very secret (all emails and reports encrypted, high security meetings in DC) until they produced a video and press release for CNN. There was huge concern by DHS that this vulnerability would become known to the bad guys -- yet now they release it to the world for their own career reasons. Beyond shameful.
Oh, and they did use a contractor for all the heavy lifting that went into writing/revising the required mitigations document. Could not even produce this work product on their own.
By the way, the vulnerability they hypothesize is completely bogus but I won't say more about the details. Gitmo is still too hot for me this time of year.
Posted on October 2, 2007 at 6:26 AM
It's Where2K all over again. The same kind of nonsensical hype surrounded the mythical failure of embedded systems. No one listened to engineers who stated that absolutely nothing would happen...and, of course, on 01-01-2000 nothing happened!
The main reason absolutely nothing happened on Y2K is because an enormous number of man-years was dedicated to fixing all the critical problems ahead of time, not because there were no problems to begin with. It is rather unfair to ignore the huge amount of work that was put into making sure nothing happened on Y2K, then say that because nothing happened it was nonsensical.
The problem illustrated in this demo is not leet haxors but poor security. Either protect the equipment by installing a 5-bob controller board that restricts the hardware to safe operating modes or don't connect the equipment to a network. Problem solved.
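The "5-bob controller board" the comment describes amounts to a default-safe command filter sitting between the network and the actuator. A minimal sketch of the idea, with entirely made-up limits and names (nothing here is a real product's API):

```python
# Hypothetical last-line interlock: vet every command against the
# machine's safe operating envelope before it reaches the hardware.
# The limits below are illustrative assumptions, not real ratings.

SAFE_RPM_RANGE = (0.0, 3600.0)   # assumed rated speed limits
SAFE_TEMP_MAX_C = 95.0           # assumed coolant temperature ceiling

def vet_command(target_rpm: float, coolant_temp_c: float) -> float:
    """Clamp a requested speed to the safe envelope; trip on overheat."""
    if coolant_temp_c > SAFE_TEMP_MAX_C:
        return 0.0               # trip: shut down regardless of the command
    lo, hi = SAFE_RPM_RANGE
    return max(lo, min(hi, target_rpm))
```

Because the filter lives in cheap dedicated hardware rather than in the networked control software, a compromised SCADA host can only ask for states the board considers safe.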
Except, as we all know, the security threat is not seen as a problem - it is a political opportunity to push through sweeping changes in law that would be unthinkable unless the world were 45 minutes from destruction by those evile, leet, democrat-voting, america-hating haxors.
Follow the research money.
Back in the early years of steam engines, somebody figured out a foolproof mechanical limiter, called a governor, to prevent an engine from running so fast it could break itself up.
I just bet if our nation's finest minds would apply the same idiot-simple technology to electric generators they could in a stroke prevent runaways from damaging anything, thus wholly neutralizing this threat.
Sarcasm aside, I strongly suspect the people behind this had to first sabotage the built-in protections against runaway conditions in order for this 'demonstration' to work.
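The governor principle above really is idiot-simple: proportional negative feedback on speed. A toy simulation, with entirely invented constants, that drags the machine back to its setpoint no matter where it starts:

```python
# Toy proportional governor: no real hardware is modeled. The throttle
# is nudged against the speed error each tick; the crude "plant" chases
# the throttle. The loop settles at SETPOINT. All constants invented.

SETPOINT = 3000.0   # target speed, rpm
GAIN = 0.0001       # proportional gain per tick (kept small for stability)

def step(speed: float, throttle: float) -> tuple[float, float]:
    throttle = max(0.0, min(1.0, throttle - GAIN * (speed - SETPOINT)))
    speed = speed + (throttle * 4000.0 - speed) * 0.1  # crude plant model
    return speed, throttle

speed, throttle = 0.0, 0.5
for _ in range(500):
    speed, throttle = step(speed, throttle)
# speed has settled very close to SETPOINT; throttle near 0.75
```

The point of the sarcasm stands: a mechanism this simple, implemented outside the networked controls, cannot be told to overspeed the machine.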
The Y2K people have realized the flaw with 1-1-2000: It actually arrived. If you instead choose a World Ending Disaster without a fixed date, you can keep your sweet contract jobs for obsolete skills forever.
Cyber-terrorism: Like Y2K times one million!
I agree with the previous commenters.
It's the usual propaganda from the FUD profiteers. As usual, they are not focusing on the real problem, which is bad engineering, but on an imaginary and unlikely threat.
But then again, this is true for all of the security issues in the industry. People are more than willing to invest in antivirus, spyware-removal, and firewall software instead of demanding that application vendors adopt a better development model to fix the underlying problem, which in the end is badly written and badly tested code.
That is complete bunk.
But I sure as hell made a *lot* of money out of it, as did plenty of others. Someone even asked, "We can't even test for this sort of thing!" Say what? Perhaps set the date on your test system to 31/12/99 23:30 and wait 30 minutes? After the fact, two companies offered me a bonus (huge ones, in fact) because they were so pleased they hadn't spent loads of money getting Y2K-ready, on my advice.
Y2K was nothing but hype. Many Unix systems were never going to be affected, because they are safe until about 2038 (IIRC). And the systems that were or could be affected were pretty much the systems that needed 24/7 TLC just to keep running anyway. And some of them did fail on Y2K. (One I know of ordered 50 tons of frozen chicken for a supermarket that only sells about 10 tons a month, but then that sort of thing had happened before with that system.)
The idea that a whole lot of computers crashing is a major disaster is bunk, purely because most systems are crashing all the time anyway. Common computers are not reliable, and when they crash it's inconvenient, not a disaster.
Any system that demands reliability won't have Windows or Linux installed (but it still might have an Intel inside).
I liked the Simpsons episode on Y2K. Even a milk carton leaks....
"By the way, the vulnerability they hypothesize is completely bogus but I won't say more about the details. Gitmo is still too hot for me this time of year."
Right. Uhhh... whistleblowers seem to be afraid of deportation. The Land of the Free on the road to fascism?
Not yet, but certainly a thing to keep an eye on. Like Bruce is saying, we are one attack away from a police state.
But you don't have to be afraid of deportation or whatever. I have a wife and kids, and just losing my job would be enough to make me exercise caution. If it's a real person, you won't stay anonymous for long once a big enough whistle is blown.
I found it interesting that the date in the released video shows Mar 4, 2007, but I found the articles below apparently referencing a similar if not identical test performed in/prior to March 2005.
I know the Idaho lab is home to the SCADA National Test Bed, but routinely "blowing up" equipment doesn't seem like a good way to spend my tax dollars. Proof of concept is good, but do it once and show the video to the latecomers. Also, I'm not a conspiracy theorist, but I am familiar with software vendors trying to statistically narrow the window of vulnerability after the fact.
If the software flaw had been corrected, as the articles suggest, then why all the end of the world hype (fear)? Is it because adoption/implementation of the fix by industry is voluntary? Is the release of the video (now) supposed to shame the industry into implementing the fix because the gov't has no real enforcement power to require it be done?
Promote fear while providing little or no detail. Consult management, not the engineers, but use the latter group's names to make the document look respectable. Hire a writer to write it. Sounds like an effort to give another privately owned group of public utilities a whopping big tax break to fix something that may or may not be broken. The tax break will pump up their profits even more. Nothing will be fixed. Ten years from now, they'll cry that they need more public funding to fix our decaying infrastructure after yet another big blackout. And they'll be using a real-time version of Windows XP Pro because Microsoft gave them a great deal and Vista still doesn't work well or reliably in small embedded devices.
SCADA security is a very real issue, for two reasons:
1. The systems are designed by engineers with only one aspect in mind: to control complex systems (oil platforms, etc.).
2. Management no longer wants to pay to have people on site, just on call from home or some other office in the world.
The problem with 1 is that security was never a consideration in the design. And, like Unix, most SCADA systems will do as they are told irrespective of the consequences.
The problem with 2 is that the Internet is the cheapest solution, and accountants love cheap, as do shareholders.
The result is systems with no built-in safeguards appearing on the Internet with minimal security -- systems that control billion-dollar chemical/industrial (and possibly nuclear) plants and the services that you and I depend on (water, electricity, gas, etc.) for our day-to-day existence.
As an example, well over half of London had a blackout a couple of years ago due to incorrect switching under fault conditions causing a cascade overload. If this can happen accidentally, how long before it is done deliberately?
A system used in a way or in an environment for which it was not designed is a potential problem.
As @Robinson pointed out SCADA systems were largely designed to not be connected to the Internet. Simply connecting them without significant redesign is a recipe for serious problems.
This principle of new use without redesign has played out many times in the security realm.
I was referring to the myth that embedded systems would fail on 01-01-2000....none did...hence, "where2k". Same hype, same gloom and doomsaying nonsense.
I think this is great news 'cause it makes electric motors sexy again!
First off, that's a diesel engine, not a turbine, and it's an extremely small one in comparison to any utility-sized generator (it's something you might see backing up a single building). You cannot overspeed a generator once it's synchronized onto the grid; off the grid, yes, but most engines will have an integral overspeed device. On the majority of steam turbines running today, the level of automation just does not extend that far into the mechanics; it's still hydraulics, gears, etc. that have the final control.
My guess on how they did it was closing in the main breaker when the generator was not synchronized with the power grid, or they found a way to somehow shut off lube-oil flow or cooling water while suppressing the alarms, cutouts, and trips. I've seen much worse and more spectacular damage to plants due to operator errors and/or equipment failures. All that said, there is great potential to monkey around with various systems and cause great havoc, destruction, and potentially loss of life -- thus utilities (I've worked with a dozen) are typically very cautious about access and interconnections to the internal plant control systems, there are intrinsic/mechanical safety interlock and trip devices installed, and finally there are plant staff who routinely react to things like what's in the video.
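The out-of-sync guess is easy to put numbers on. For a phase error delta between generator and grid, the instantaneous voltage across the breaker contacts is |dV| = 2·V·sin(delta/2), so closing at 180 degrees puts twice line voltage across the machine, and the inrush current and shaft torque scale accordingly. A quick sketch (the 13.8 kV figure is just an illustrative terminal voltage):

```python
# Voltage across a breaker closed with a phase error between two
# sources of equal magnitude V: |dV| = 2 * V * sin(delta / 2).
# V_LINE is an assumed generator terminal voltage for illustration.
import math

V_LINE = 13800.0  # assumed terminal voltage, volts

def breaker_voltage(delta_deg: float) -> float:
    """Voltage magnitude across the breaker at phase error delta_deg."""
    return 2.0 * V_LINE * math.sin(math.radians(delta_deg) / 2.0)
```

At 0 degrees (properly synchronized) the result is zero; at 60 degrees the machine already sees full line voltage; at 180 degrees, double. That is why sync-check relays normally block such a closure, and why the test presumably had to defeat them.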
I would be more concerned about someone hacking into control of the transmission / distribution system - look around some pole tops, there are radio controlled switches everywhere.
While this is indeed more bogus fearmongering (and the "descending green-lit binary code" motif ripped off from "The Matrix" in the video would be hilarious if it weren't so sad), DHS has plenty of good source material for its research in what has been happening in Baghdad when it comes to disruption and loss of electrical power, running water, and other badly needed infrastructure. You might say it's been a grand experiment of sorts.
I also love how they imply that the only threats would come from the outside.
Finally, CNN ends up pandering to DHS interests by ending the story with a comment about the "cyber defense budget" for next year being "only 12 million dollars" -- as if the rest of us are unaware that, like the rest of DoD, their budget is already ridiculously inflated and already contains a huge number of contracts currently under examination for fraud, waste, and abuse:
A cascading overload in the electric system is a success. Devices shut down to protect themselves, and once the initial failure is fixed the devices (which turned off instead of failing) are turned back on. It's how things are supposed to work. That's why critical facilities have back-up power sources.
Your comments make it sound as if you think this is a bad thing, when in reality, it is a very good thing. It allows a grid robust enough to flex in response to demand and generation variation while protecting components from sudden catastrophic failure.
Department of Homeland Security and Department of Defense are separate Departments, each with their own funding. Also, it seems to me that FWA investigations are a GOOD thing.
To expand on Clive's points:
- SCADA systems are built using off the shelf components (on the human interface side), MS Windows is common.
- The systems are seldom patched, in some cases, the software vendor will not support systems that have 'unapproved' patches.
- The systems are built with life expectancies measured in decades.
So imagine a system built on NT4 that hasn't been patched since SP2 was released (and that's the best case!). It is easy to see that the system will be vulnerable; does that mean that an attacker can make a generator explode? Probably not. Does it mean an attacker could affect the system being controlled? Probably. Is that a good thing? No, it is not.
"We softened or eliminated quite a few sections that may have had relevance 20 years ago, such as war dialing attacks against modems."
I am still blown away by how many modems I still find on networks today -- everything from PBXs and manufacturing gear to an accounting system. Modems are still a relevant attack vector.
While I think the report is probably overblown, the security of embedded systems in general (not just SCADA) is a real, live issue. As time goes on, more and more critical control functions in things like electrical generation, chemical production, and so on are handed over to embedded systems, because they can be, and because it makes things like maintenance and troubleshooting easier. And again, in service of convenience for management and maintenance, it's all getting networked, with everything from 9600 baud modems over POTS (who said wardialing was dead?) to the latest fiberoptics and even short-range wireless in some cases.
The fundamental problem is that your average embedded guy doesn't know much of anything about network security, and isn't hooked into social or professional networks that might tell him. OTOH, he's got an advantage over your average programmer, because embedded systems have to be much more tightly built in the first place, i.e. unhandled cases are unacceptable in general, and critical bugs tend to get fixed quickly, because the consequences are potentially catastrophic in a way that crashing your computer simply isn't. The software is also immensely simpler and more rigid than your average network application. The first step is to convince embedded programmers and their managers that malicious attack is as real and urgent a potential failure as any of the others that the software must handle.
DHS is about as separate from DoD at this point as are the CIA, the NSA, and the FBI. It's a parasite hanging off of a very big dog, and its separation is only a matter of convenience for financial and control purposes.
Of *course* FWA investigations are a good thing, but the CNN hype doesn't make much mention of them while talking about the budget this latest Chicken Little cartoon is designed to generate. That little detail is left to Bill Moyers, who is broadcasting on PBS.
And before I forget, the largest non-nuclear explosion in history was caused by an embedded-system attack, way back in the early 1980s. The US managed to get the USSR to install what amounted to a Trojan horse in the control system for their natural gas pipeline network. It worked swimmingly for some months, and then twiddled the valves and pumps such that the line exploded somewhere out in Siberia.
The interesting thing to me is that the video shows a 'directed attack' on a particular turbine. And perhaps a 'bad guy' would be interested in such an attack...or on a cooling system, etc. An expert hacker 'sighting in' on a discrete target.
However, what I find more unnerving about shaky SCADA system security is the potential for "script kiddie" or "wrench-in-the-works" type attacks. Simple 'if-it's-on-turn-it-off, if-it's-off-turn-it-on' button pushing could really raise havoc on a wide scale.
All this takes is system level access and rudimentary programming skills.
My solution? The diagonal-cutter "firewall." Get these systems off networks that connect to the Internet.
A malicious or inattentive operator at the plant in the middle of the night could do the same thing. Nothing "cyber" is necessary for this attack.
SCADA systems have been "vulnerable" for a long time, but they're not typically hanging out on the internet waiting to be abused.
An interesting article to research would be the turnover of control-system security program managers at DHS since it formed in 2003 -- not only who occupied the spots and their experience, but how long they actually stuck around.
HypeStop makes a great point: follow the money. You might find one lab in particular is the biggest breadwinner of that line item.
Don't overestimate the effectiveness of air gaps.
If they run Windows and the users install software... like Super Minesweeper... well, you know the rest.
In general, however, these systems are designed not to become unsafe from *user* input. Sure, it's not rock solid. But compared to what we PC users put up with, the software is the Ferrari and we are the Lada. Many systems are a Harvard architecture, and we use write-once program sections. Even the flash program memories will often need higher voltages to program. You might be able to hack it, but it won't be easy, and it's not going to be some clown on the other side of the world without some inside knowledge. At which point you have to ask where the weakest link is. Got some wire? Got a kite, or perhaps some balloons?
I really don't think that this is the weakest link.
But let's take a page from The Most Popular Book in the Universe:
Since we're discussing physical vulnerabilities, let me throw my 0.02 in.
Let's start with first principles: insiders do the damage. It's very hard to background-check an engineer when you have so very few of them, and the pool of replacements is mostly from overseas. In the old days, you didn't have to -- the engineering schools knew that they were putting lives in these men's hands, so verifying the diploma was good enough.
The most disturbing trend I have seen in background checks is to preferentially hire recent immigrants from overseas (with background-check waivers in effect) as opposed to U.S. citizens with no criminal record but spotty credit or other risk factors. Sometimes this is an H-1B issue.
More often, it's a product of laziness in not conducting real backgrounds on people born outside the USA. Unless DHS is doing really, really good checks prior to allowing these people into the USA (which takes a lot of money), this is a serious vulnerability with respect to international terrorism.
Further and worse -- it's not how much damage an insider could do (enormous!) but how long it would take to fix. Some of the equipment used in the power distribution system is manufactured only a few places in the world; spare parts inventory does not exist; lead time for replacement is measured in months not weeks; and transportation of these larger than 8'x8'x40' components is a real hassle under 'ordinary' conditions.
Is your data center prewired to be able to use rental generators for weeks or months if necessary? Do you have ironclad contracts with multiple sources of said generators? Did you think to strike the 'act of God' clause regarding nonperformance in the event of natural or man-made disaster?
If not, you're kidding yourself about maintaining uptime in a disaster. The fastest way to find out that your on-site generators haven't been properly maintained is to run them for a week and watch them fail . . . In a real disaster, your emergency generators are a temporary bridge to some other power source. Unless you thoughtfully lay hands on a generator technician you employ, a large spare parts inventory, and ridiculous amount of diesel fuel storage well in advance.
> Is your data center prewired to be able to use rental generators for weeks or months if necessary?
If the electric grid is taken down by a SCADA attack, an EMP bomb over Nebraska, or whatever... what are you going to do with the computers other than heat the building?
In such scenarios we'll be relying on common sense and people using paper to write notes on. The (logic) systems you have in place on the computers today have not been designed to anticipate the disruptions such a wide scale catastrophe would create -- "Good news! We can ship from our Warehouse! Bad news! We can't get Diesel deliveries, and the truck drivers are all staying home with their families!"
Yes, you need to have contracts for backup generators and wire to accept them, but for scenarios of limited geographic impact, and to give the Utilities a week or two to restore normal services after an event like a hurricane. If the event is beyond that, who will you be doing business with?
The technical flaw used to hack the particular generator that was destroyed IS a very real potential problem.
EVERY generator used to supply power to a grid anywhere in the world can be attacked with the approach used in the video; however, not every generator can be reached by a cyber attack, and the attack is not as simple as loading up an exploit and running it.
I have worked in the protection and control field for electrical utilities for a number of years. Yes, more and more of the old electrical mechanical relay logic controls have been replaced by PLCs, RTUs and bay level controllers, combined with SCADA. Yes, the majority of SCADA systems used run on commodity hardware and Windows OS.
These power system components are normally operated and controlled via computers, and there is no reason that they cannot be incorrectly and maliciously operated by those same computers, whether by a remote cyber attack or (as pointed out) by inattentive, careless, or malicious operations and maintenance personnel. This should not surprise anyone (and is certainly not newsworthy).
No, these systems are not typically "connected to the internet." They are, however, interfaced to most companies' business networks through some type of firewall, in order for operational data to make it to "the business" and for maintenance staff to access diagnostic information. This connectivity, however, can safely be managed by following fairly standard defense-in-depth methods and implementing reasonable security practices.
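The defense-in-depth boundary described here boils down to default-deny with a tiny explicit allowlist. A minimal sketch of that policy idea; the zone names and port numbers are assumptions invented for the example, not anyone's real configuration:

```python
# Default-deny boundary policy between the control network and the
# business network. Only explicitly listed flows pass; everything
# else is dropped. Zones and ports below are illustrative only.

ALLOWED_FLOWS = {
    # (source zone, destination zone, destination TCP port)
    ("control", "dmz", 5450),    # historian replication feed (assumed port)
    ("business", "dmz", 443),    # read-only reports over HTTPS
}

def permit(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    """Default-deny: a flow passes only if it is explicitly listed."""
    return (src_zone, dst_zone, dst_port) in ALLOWED_FLOWS
```

Note that no flow terminates on the control network itself: the business side talks only to the intermediary in the DMZ, which is the structure the commenter describes.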
That report was a brilliant exercise in fear-mongering, making extrapolations that are inexplicable to people in the utility sector. It is a safe assumption that for this "test" to have had the result it did, all of the normal safeguards and protections in place had to have been disabled. Similar initiating events (synchronizing a generator out of step) do happen because of human error, and the protections perform as designed, preventing significant equipment damage from occurring.
And finally WRT to the DHS budget of "only 12 million", nowhere does the report refer to the total cost of implementing the NERC cyber-security requirements being carried by the power utilities themselves right now (over 20 million dollars by ONE utility in ONE region alone).
I thought the CNN report was terrible and loaded with nothing more than FUD. One point that people have made on a SCADA mailing list I follow is that any generator of this type used in production would have lots of safety controls on it, making it almost impossible for it to be destroyed in the manner depicted in the INL/DHS video demonstration.
There are issues with cyber security on SCADA systems and smart and dedicated people are doing what they can to minimize these problems and we continue to monitor & correct new threats all the time.
IMHO, the CNN 360 show's producers should be ashamed of themselves for airing this FUD crapola and the INL & DHS, and others involved, should clarify what the real security risks are before they lose their credibility with the rest of us who work on SCADA systems in our various industries.
A more realistic "threat" would be the same sort of common viruses, worms, etc. which affect ordinary PCs. Older SCADA systems used to run on proprietary hardware or on UNIX workstations. Newer ones are using PCs with Windows for display, monitoring, alarms, and data logging. On the more sophisticated systems, control is often still through proprietary hardware, but on the cheaper ones control is done on the same PC as display. The industry has gone this way to take advantage of cheaper PC hardware. There are a few vendors basing their systems on Linux instead of Windows, but these specialise in the more sophisticated end of the market. Wonderware, Citect, WinCC, Rockwell, etc., however, all use Windows.
Connections from the SCADA computers to the field devices (valves, sensors, etc.) used to be by networks using proprietary RS-485 or other similar special hardware. All the hardware vendors however have or are in the process of moving their product lines to Ethernet. This again is simply to save money on hardware costs. The control protocols themselves are still proprietary, so it has nothing to do with improving the ability to share data between systems.
In most cases, the new Ethernet based control protocols are secret and protected by armies of lawyers (the exception being Modbus/TCP). The companies which own them provide binary drivers in a format known as "OPC". OPC runs only on Windows, so a customer pretty much has to use Windows to run their SCADA system whether they want to or not.
The field devices which are controlled by these protocols are not very sophisticated and will accept commands from anywhere without requiring any sort of authentication. The assumption is that if you are on the network, you are not going to do anything malicious.
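To make the "no authentication" point concrete: a complete Modbus/TCP "write single register" request is just twelve bytes, and nothing in it identifies or authenticates the sender. (The frame layout follows the openly published Modbus/TCP specification; the register address and value here are arbitrary illustration.)

```python
# Build a Modbus/TCP "write single register" (function 0x06) request.
# MBAP header: transaction id (2B), protocol id (2B, always 0),
# remaining length (2B), unit id (1B). PDU: function code (1B),
# register address (2B), value (2B). There is no credential field.
import struct

def modbus_write_single_register(txn_id: int, unit: int,
                                 register: int, value: int) -> bytes:
    mbap = struct.pack(">HHHB", txn_id, 0, 6, unit)
    pdu = struct.pack(">BHH", 0x06, register, value)
    return mbap + pdu

frame = modbus_write_single_register(txn_id=1, unit=1, register=100, value=0)
```

Anything that can deliver those twelve bytes to the device's port is, as far as the protocol is concerned, an authorized operator -- which is exactly why the network boundary carries the entire security burden.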
Given the above, a worm or virus could DDOS or send undesirable commands to pretty much any newer control system if it can get access to the network. The SCADA networks are getting connected to the business networks because the business side wants real time reporting and production scheduling. This means that if viruses and worms are a realistic threat to office PCs, they are a realistic threat to the plant as well.
The only thing which has kept this from being a major problem so far is that most plant equipment is old so equipment with this capability is in the minority. The only practical solution is to put the plant on an isolated network with some sort of intermediary security box between the plant and the office which only allows limited information to pass each way. Trying to secure every individual valve and other plant device is unrealistic.
It is possible to make generators self-destruct via the Internet. It is also possible to launch nuclear missiles via the Internet, or shut down international oil and gas pipelines. Just think of what control systems are put in place, and which could be disengaged by an insider with a low-speed Internet connection, even if it is via an external laptop with a dial-up modem.
Why is the insider a vital part of it? Any of these systems has at least one fail-safe system that must be sabotaged, and most likely only an insider would know where, how, and when to do it.
Bad link; cannot find the movie.
Well, if Iran really had the same PASSWORDS (as is claimed in the CNN video), we would have noticed it. And that kind of stupidity would mean that the people responsible for such a terrible failure would require LARTing with a huge generator.
If this is intended to cause fear, does that make it "Insecurity Theater"?
> If the electric grid is taken down by a SCADA attack, EMP bomb over Nebraska, or whatever...what are you going to do with the computers other then heat the building?
If your data center is not essential to your business, why bother having one? Contract it out.
My type of clients are organizations that simply cannot afford to be down for any length of time. If you're going to bother having a data center, have two on opposite sides of the country.
>> In such scenarios we'll be relying on common sense and people using paper to write notes on.
Go visit the FEMA Web site and take a NIMS course or two. If the country has to go back all the way to pen and paper (which my organization is ready to do at the drop of a hat, not by the way), a lot of infrastructure disruption is going to take place and it's going to be pretty catastrophic, in and of itself.
You don't want to have to run a modern distribution center with pen and paper. You really don't.
> "Good news! We can ship from our Warehouse! Bad news! We can't get Diesel deliveries, and the truck drivers are all staying home with their families!"
Good news: since WalMart controls its entire supply chain, they were able to ship in disaster supplies during Katrina. Bad news: FEMA couldn't.
> Yes, you need to have contracts for backup generators and wire to accept them, but for scenarios of limited geographic impact, and to give the Utilities a week or two to restore normal services after an event like a hurricane.
You mean like Katrina? If you really put that much blind faith in "the utilities" or for that matter, "the government," you deserve to go out of business.
Any geographic area in the USA can expect some kind of local catastrophic event about every thirty years.
> If the event is beyond that, who will you be doing business with?
You may be doing business with nobody. The rest of us will be doing business with our customers.
Ha -- I thought I hadn't heard correctly. You heard it too?
"- they have the same equipment" well - no problem about that - is it?
"- they have the same training" okay - why not
"and they have the same passwords" - hahaha - really?
They need 12 million to put in different passwords?
Take this for 8 millions, but keep it secret:
What is more funny?
The fact, they have the same password, or the fact they expect us to believe it?
As someone who's worked for both an EMS (Energy Management System, of which SCADA is a core part) vendor and a large utility company... I can tell you this is completely bogus. As others have said, the steps you'd have to go through to disable the safety systems and get a generator to do this are quite extensive and require physical access.
Regarding the use of PCs, that may be true for smaller SCADA-only systems, but I'd be surprised if a transmission system used it. The major vendors (GE, Siemens, etc.) of such systems still use UNIX for actual processing and control. The operator UI may run on Windows, but that's about it. The cheaper hardware argument is bogus because when it comes to cost, the price of the computing hardware is microscopic in comparison to the overall system cost. And when you're talking about equipment for a utility company, the cost of a new EMS is small compared to some of the other things we buy. Reliability and security are our prime concerns; cost is less so, at least where I work.
Now, concerning networking and security. At least at our company, the "control" net only connects to the corporate one at one DMZ spot, with a firewall on each side. The only allowed traffic is to the historical logging server, so the bean counters get their beans. Actual operational access can be granted remotely, and only during an emergency or critical problem, by the control center (staffed 24x7), which enables VPN access to create one session, then shuts you off and monitors what you do. Working from home? I wish. :)
All remote field devices (e.g., RTUs) are connected via either dedicated circuits or frame relay, and no IP or other "routable" traffic is allowed. This is required for any transmission system >24 kV (I think, at last read) by the NERC CIP standards. You can use IP, but doing so extends your security perimeter to include the RTU's physical location. This means you must have a firewall and IDS on both ends of the connection and maintain physical access control at the location where the RTU is. If it is an unmanned station, an alarm must be generated to notify a control center operator if the site is accessed (authorized or not).
While no security plan is foolproof, I personally think this goes a long way toward protecting the grid from evil hackers and keeping the lights on for our customers.
I'm still waiting for an internet with bigger pipes.
I think this is timely, as NIST has just released a publication on industrial (SCADA) security (SP800-82). In section 3.7 it documents one incident and several cases of unintended consequences and collateral damage caused by viruses, worms, and unauthorized configuration changes.
@Jeff: You said: "Regarding the use of PCs, ... I'd be surprised if a transmission system used it. The major vendors(GE, Siemens, etc...) of such systems still use UNIX for actual processing and control."
Newer GE gas turbine control systems use PCs with Windows for the MMI. They have discontinued their own MMI system, and currently sell a re-branded product from someone else.
"The operator UI may run on Windows, but that's about it."
The MMI is what you use to control the equipment. If you control the MMI, you control the equipment. The equipment control system itself has protective relays and other overrides, but the MMI system still has a lot of factors and parameters, set at commissioning, which can damage the equipment if set incorrectly. You can also, of course, simply shut down the system by issuing a shutdown command.
"The cheaper hardware argument is bogus because when it comes cost, the price of the computing hardware is microscopic in comparison to the overall system cost."
Every penny saved is another penny in the vendor's pocket. I can't help it if you don't like this, but the vendors themselves say they are doing it to save money. It doesn't matter how good your design is because the customers will demand arbitrary price cuts. This is standard purchasing department tactics during the negotiation of any purchase.
"Now concerning networking and security. At least at our company, the "control" net only connects to the corporate one at one DMZ spot with a firewalls on each side."
The sad reality is that I can walk into lots of smaller companies where the equipment is directly on the internet because nobody really understands networking. The current fad in business is to outsource engineering work, so many "engineering" departments are little more than project managers. There is nobody left at those companies who actually understands equipment engineering. The contractors who do the design work are only interested in getting their little piece of the puzzle done as quickly as possible and are not paid to take a larger view of the operations. The problem is less one of the initial design than of what happens during the gradual change and evolution of the plant over the years.
The generator demo sounds bogus, but there is still a big problem with ordinary viruses and worms. The automation industry is at least 10 years behind the IT industry when it comes to networking and this includes network security. Add to this the fact that most engineers in the business may use computers every day, but they really know very little about them.
Another problem is that in the US the utilities used to pay into EPRI to get research done for the common good. EPRI would have been the logical party to deal with these problems. After deregulation though, many of these companies are not willing to pay for research anymore. They are looking for money from the taxpayers to pay for this now, and "homeland security" has a reputation for being a bottomless purse. I suspect that this is the real reason behind this demo.
I haven't seen a single peep about this demo on the industrial control forums that I follow. Nobody there seems to think it is significant enough to discuss.
Don't even bring NIST into this, vis a vis SP800-82.
NERC has already passed and approved CIP-002 through 009 for the utility sector:
Please don't drag in "yet another standard" to the debate.
"The operator UI may run on Windows, but that's about it."
This may be changing as some SCADA vendors want to cut their costs and only support one platform. We initially were told by our SCADA vendor that we would have to go all Windows, HMI workstations & servers, if we wanted to upgrade to the latest version of their system.
However, my boss just got back from a vendor users' group meeting and informs me that they will provide Unix servers if the customer really wants to stay with that configuration, but apparently we are too far along in the process to change it now.
I guess I didn't yell loud enough before we signed the upgrade contract. :-(
I used to work for a SCADA company, way back in 1977-1979, and can think of only one thing:
It's the bean-counter's fault for not wanting to maintain a separate network.
Oh, sure, these systems were networked, usually over a fairly slow wire, so the trouble lies in allowing the control systems to do more than monitor and control devices over the specialized SCADA network, since the remote devices, I believe, may be speaking IP. But in power/gas/etc. networks, there's a lot of equipment that would be considered obsolescent (anyone remember Visicode switches? PDMs?) but that, if it works, won't be scrapped.
There may be economization measures to place managed equipment on a shared network...
It's all cost/benefit stuff and there's not much engineers can do.
Those who can, do.
Those who can't, teach.
Those who can't understand, manage.
Those who can't care do the accounting.
GE is a mixed bag with regard to their offerings; the last I heard, they had 13+ different SCADA systems depending on the division you were working with. But I can say authoritatively that their Energy Management System offerings are UNIX, same with Siemens. I do seem to remember that they had a smaller Distribution Management System that was Windows-based, but those systems typically don't have any generation control, merely routing at the street level.
From the various utilities I've worked with, both as the vendor and as a "peer", I can say that the cost of computing hardware was not a concern. Now, I'm speaking of the bigger electric systems, like Southern California, NYC, Southern NJ, etc. Some smaller rural utilities may see the cost reduction from running Windows make a significant change to the overall price of a new control system, but to the bigger utils, again, it's a negligible amount.
I also know that some of these larger utilities have rejected vendors because they were on Windows. There's an inherent distrust, at least at this level, of running mission critical stuff on such a "new" platform. Most of the utils that I installed a new EMS in were replacing equipment that's been in place since the 70s and 80s.
I do agree with you regarding smaller companies and their understanding of networks, but if they run a transmission system they better learn quick before the CIPs kick in next year. After that the fines can be pretty heavy.
"A cascading overload in the electric system is a success."
I'm not sure that you and I are referring to the same thing here, or there is a gulf of difference between us.
The definition I use for a "cascading overload" is one where a local problem caused by any local event propagates out of the local area into other areas that are not at fault (which most would agree is highly undesirable).
For instance, if I turn on my microwave and it overloads my local service supply, the most I would expect it to do is trip my local substation, not a third of the substations in a metropolitan area.
In previous times, suppliers put sufficient and well thought out safeguards into their networks and introduced changes in a manageable fashion.
Unfortunately, the modern drive to maximise efficiency and return makes such propagating faults all the more likely.
As I have said before, "security and efficiency appear to be at opposite sides of the fulcrum."
A most interesting thread. Some of the comments here take me back to my airplane days.
Most of you probably know that many of today's airliners are "fly-by-wire". That is, all command and control information is passed between sensors (e.g., where are those flaps?), control units (the pilot says to retract the flaps), and actuators (e.g., turning on the flap motor) digitally. Over a bus. Airplane manufacturers went digital for many reasons: to save money (electronic equipment is much cheaper than electromechanical equipment); to make the equipment more reliable (avionics electronics typically has an MTBF of 20 years or so, versus about 3 years for mechanical); and because it is easier to diagnose, simpler to repair, easier to upgrade, easier to enhance, etc.
All of these arguments apply to power, gas, oil, water, whatever ... equipment as well. The manufacturers of this equipment will go digital in their newer-generation products.
Now, imagine if someone wanted to bring down an airplane and figured out how to tap into the on-board bus. What would the flap control unit do if it received a flood of "i'm fully retracted" data values and a few "i'm still extended" data values?
For all the reasons given above (and more), systems will eventually distribute sensory, control, and actuator functionality over a network. That means that the sensory data upon which the control function operates will be vulnerable to attack, as will the commands to actuators, engines, valves, etc.
Can every electronic device in every system have its own security front-end to protect its data communications? If not, could one bring down, say, a power network by simply faking data values from a remote transformer farm saying "Hey! I'm overloaded!" and let the control function (over-) react?
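The "security front-end" question above can be made concrete. Here is a minimal sketch, assuming a hypothetical per-device pre-shared key, of how a sensor reading could carry an HMAC tag so that spoofed "i'm fully retracted" messages are simply dropped; the device names, key, and message format are all invented for illustration, and real avionics and SCADA buses generally do no such authentication, which is the point of the question.

```python
import hmac
import hashlib
import struct

SHARED_KEY = b"per-device-secret"  # hypothetical pre-shared key for one device

def sign_reading(sensor_id: int, value: float) -> bytes:
    """Pack a reading and append an HMAC-SHA256 tag so receivers can verify origin."""
    payload = struct.pack(">If", sensor_id, value)
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_reading(message: bytes):
    """Return (sensor_id, value) if the tag checks out, else None."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted: drop it on the floor
    return struct.unpack(">If", payload)

# A genuine reading verifies; a spoofed value with a bogus tag does not.
good = sign_reading(7, 30.0)            # flap sensor 7 reports 30 degrees
forged = good[:-32] + b"\x00" * 32      # attacker can't compute the real tag
assert verify_reading(good) == (7, 30.0)
assert verify_reading(forged) is None
```

The catch, as the comment implies, is key management and CPU budget on thousands of small embedded devices, which is why this front-end so often doesn't exist.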
"Can every electronic device in every system have its own security front-end to protect its data communications? If not, could one bring down, say, a power network by simply faking data values from a remote transformer farm saying "Hey! I'm overloaded!" and let the control function (over-) react?"
This is probably the way any attack would be carried out. Operators of remote systems implicitly trust the readings on their instruments. One of the most efficient ways to disable a system is to supply bogus readings and watch the operators crash their own systems. Do it at 3:00 a.m., when people's decision-making is at its worst, and it could be serious. The following article discusses some of these issues.
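The bogus-readings scenario is easy to sketch. Below is a toy load-shedding rule (the thresholds, names, and numbers are all invented for illustration) showing how a controller that trusts reported telemetry will black out perfectly healthy customers when fed a fake overload reading:

```python
# Toy control rule driven entirely by *reported* (not independently measured) load.
def controller_action(reported_load_mw: float, capacity_mw: float) -> str:
    """Decide what to do based on telemetry the control room implicitly trusts."""
    if reported_load_mw > 0.95 * capacity_mw:
        return "shed load"        # emergency: disconnect customers
    if reported_load_mw < 0.2 * capacity_mw:
        return "spin down units"  # demand collapsed: take generation offline
    return "normal"

CAPACITY = 1000.0
true_load = 600.0  # the system is actually fine

# Honest telemetry: nothing happens.
assert controller_action(true_load, CAPACITY) == "normal"

# Spoofed telemetry: an attacker reports 99% load, and the control room
# "saves" the grid by blacking out customers who were never in danger.
assert controller_action(990.0, CAPACITY) == "shed load"
```

The same trick works in the other direction: report a demand collapse and the operators spin down generation they actually need.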
People are the weakest link. An insider can do more damage than any outside hacker. This article discusses some of this.
"To test this, the Training Camp recently conducted an experiment where they gave out 100 CDs around Liverpool Street Station in London. The CD promised it would take users to a web site where they could win a trip to Paris. The CDs, however, reported back via IP logging. Around 70 per cent of the discs, which could have come with all kinds of nasties, were put into machines by people at work, including two household insurance companies and a retail bank that’s in the top four. “That was just a simple bit of social engineering,��? Chapman says. “It’s a top four bank and their security was bypassed by somebody just physically putting the CD in their briefcase and walking through the door.��?"
A previous poster's security precautions are to be applauded; however, I have seen some shockers. In one incident, a contractor anxious to complete his installation connected two completely separate parts of our banking network together, totally compromising our security. We only discovered it days later, when we could contact servers we should not have been able to reach. Another was 100 servers rolled out with their C: drives open to anonymous and undetectable attack because of one configuration error. Again, this was in a sector you would expect to be secure, but it was not. On yet another occasion I went to a shared PC to fix it, and written in pencil around the edges of the monitor were all the usernames and passwords of all the people who used that particular PC to access the bank's systems.
In short people are the problem.
Fascinating commentary, mostly from folks who have no clue what was actually done to attack the generator. Some food for thought...
Yes, the vulnerability is real. Does it affect every generator on the grid? No. Can the exploit cause the damage seen in the video in a large utility generator? Potentially.
All you need to do to take the generator offline is damage it in some way. Bend a shaft, crack a turbine blade, you do not have to destroy the whole thing. Consider the long lead time for repair or replacement.
Of course Iran (and China, Pakistan, N. Korea, etc.) knows the passwords. It is amazing how often the default password is not changed. There are not that many vendors out there to choose from, and the manuals are available on the 'net.
20-year old technology? That is sometimes the newer equipment in the generation plants and substations. Dial-up accessible? Absolutely. Modems left enabled? More often than you would think. And, yes, the newer hardware is IP accessible, not always securely installed and configured.
Yes, the generator has safety control systems to protect it. That is what tripped generating units off during the 2003 blackout. Attack and disable the protection systems, however...
I was a bit surprised by the SCADA test. Attacking the generators in that way seems to require a lot of effort for the results obtained. There is a much simpler way to bring down the power grid.
The wavelength of 60 Hz power is 3100 miles (186,000/60). The 60 Hz power in New York and California will be in phase, but out of phase with that in Illinois. It is the difference in path length that causes this difference in phase.
If a key point on the power grid could be closed, then two legs of the grid would become connected. If these two legs are of different length, then there would be a phase difference between them. A difference in length of the two legs of just a few miles would cause a slight phase difference that would cause serious trouble on a megavolt power line.
The power grid is designed to provide dynamic control of this phase difference, as well as phase compensators (switchable capacitive and inductive loads that compensate for the phase difference). But if one could rapidly switch several legs of the grid in and out, the dynamics of such rapid changes in load and phase would be very difficult to compensate for. Weak spots in the grid would overload or burn out as they dissipated the heat developed by the current from the phase mismatch.
As a former communications engineer, I am very familiar with using mismatches to cancel each other and create a system that is matched. The opposite is equally true. How secure are the controls to the switches that determine the configuration of the power grid? What would happen if the local load-balancing equipment (marked LBE on its structures in neighborhoods) were blocked or taken over? It may be possible to shut down the power grid at every level, from the very local to the national.
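Paul Schumacher's arithmetic is easy to check. The sketch below reproduces his wavelength figure and then, using the standard two-bus power-transfer relation P = (V^2/X) * sin(delta) with invented line parameters (a 500 kV line with 50 ohms of series reactance; both numbers are assumptions for illustration, not from the comment), shows that even about a degree of phase mismatch corresponds to on the order of 100 MW of flow:

```python
import math

C_MILES_PER_SEC = 186_000.0   # speed of light, as used in the comment
FREQ_HZ = 60.0
wavelength_miles = C_MILES_PER_SEC / FREQ_HZ  # 3100 miles, matching the comment

def phase_diff_degrees(path_mismatch_miles: float) -> float:
    """Electrical phase offset caused by a difference in transmission path length."""
    return (path_mismatch_miles / wavelength_miles) * 360.0

def power_flow_mw(v_kv: float, x_ohms: float, delta_deg: float) -> float:
    """Classic lossless two-bus transfer: P = V^2/X * sin(delta), in megawatts."""
    v = v_kv * 1e3
    return (v * v / x_ohms) * math.sin(math.radians(delta_deg)) / 1e6

# A 10-mile mismatch between two legs is only ~1.16 degrees of phase...
delta = phase_diff_degrees(10.0)
# ...yet on the assumed 500 kV / 50-ohm line it drives roughly 100 MW.
flow = power_flow_mw(500.0, 50.0, delta)
```

This is only back-of-envelope physics; in the real, synchronized grid the angle across a tie is set by power flow and control action, not raw geography, but it illustrates why small angle errors matter on megavolt-class lines.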
@Paul Schumacher raises interesting points about the damage someone could do by physically interfering with the grid. Given that outages seem to cascade rather easily, especially during heavy load times (middle of summer, anyone?), how much damage could be done by causing failure at a local substation? In my region, at least, these tend to be unmanned, secluded, and guarded only by a chain-link fence and some barbed wire. Most of the gear and lines appear uninsulated. I'm not a power engineer, but it seems to me you could raise a whole lot of havoc with a good arm and a roll of heavy-duty aluminum foil.
@Paul Schumacher -
What you suggest is impossible. The bus ties between ISOs are DC (direct current). The "phase" from one ISO does not make it to another ISO.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.