Schneier on Security
A blog covering security and security technology.
July 23, 2010
Internet Worm Targets SCADA
Stuxnet is a new Internet worm that specifically targets Siemens WinCC SCADA systems, which are used to control production at industrial plants such as oil rigs, refineries, electronics factories, and so on. The worm seems to upload plant information (schematics and production data) to an external website. Moreover, owners of these SCADA systems cannot change the default password, because doing so would break the software.
Posted on July 23, 2010 at 8:59 AM
There are so many baffling questions for me...
(1) Why do so many database applications use this 1985-style authentication model? Its dangerous brokenness has been un-ignorable since the events described in Stoll's "The Cuckoo's Egg", where hundreds of VAXes with system passwords set to "SYSTEM" were targeted for espionage. It's not just SCADA. Lots of applications do this, and lots are being written now that still do this. Why? Are the right people not being sued?
(2) On SCADA: Why are SCADA systems not on air-gapped networks? Is it really necessary for them to see the porn-net? If inbound remote access is required, couldn't this be through a VPN served by a hardened OpenBSD citadel in a DMZ? And why do critical systems not have their USB ports silly-puttied shut?
Radical isolation of this stuff seems so obvious that I'm sure it's more likely that I'm missing something than that SCADA admins are idiots. Does anyone here work with SCADA systems? What's the deal?
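To make the default-password point concrete, here is a minimal sketch of the kind of audit an attacker (or defender) can run against an inventory of devices. Every device type, hostname and credential below is invented for illustration; none are real vendor values.

```python
# Hypothetical sketch of a default-credential audit. The device types,
# hostnames and credential pairs are all invented for illustration;
# none are real vendor defaults.

DEFAULT_CREDENTIALS = {
    "acme-hmi": ("admin", "admin"),
    "acme-plc": ("operator", "password"),
}

def try_login(username, password, live_accounts):
    # Stand-in for a real login attempt against a device.
    return (username, password) in live_accounts

def audit_defaults(inventory):
    # inventory: hostname -> (device_type, set of live (user, pass) pairs)
    exposed = []
    for host, (dev_type, accounts) in inventory.items():
        default = DEFAULT_CREDENTIALS.get(dev_type)
        if default and try_login(default[0], default[1], accounts):
            exposed.append(host)
    return exposed

inventory = {
    "pump-station-1": ("acme-hmi", {("admin", "admin")}),        # never changed
    "pump-station-2": ("acme-hmi", {("admin", "s3cret-local")}), # changed
    "mixer-7":        ("acme-plc", {("operator", "password")}),  # never changed
}

print(audit_defaults(inventory))  # -> ['pump-station-1', 'mixer-7']
```

The uncomfortable part the thread keeps circling back to: when the password cannot be changed, every entry in the audit report stays red forever.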
Sounds like another great opportunity to apply for national security funds, "we need to improve the protection of critical infrastructures against cyber-warfare systemic risks" or something like that.
From the articles it seems that the worm should be mostly harmless if the computer running the SCADA software is not connected to the open Internet. Deciding how to deal with the CEOs and security officers of plants where this happens is left as an exercise for the reader.
This is bringing (and will continue to bring) to light disturbing aspects of all SCADA and ICS vendors. Siemens is in the spotlight right now, but other vendors have exactly the same issues with their software. There are entities that must comply with NERC CIP requirements but cannot, due to the control-system vendor's inability to address even basic security concerns. Unfortunately, I think that there will be more of this type of threat against known vulnerabilities throughout SCADA/DCS environments. Perhaps that will get the SCADA/DCS vendors to finally integrate security and compliance into their products.
(1) It worked for 20 years and would YOU like to be the guy that breaks all installed systems controlling valves and such by adding security that nobody demands?
(2) Because business demands it. The data generated by these systems can be incredibly valuable for business analysis, so it will be uploaded to other systems (outside the SCADA realm).
And admins need to update the systems, right? They use CDs for that. Or USB sticks. Stuxnet uses that vector. An air gap will not protect you from infection, only from leaking data.
BTW: work in the environment before calling the people there idiots...
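One partial mitigation for the removable-media update vector is to verify each update file against a reference hash obtained out-of-band (phone, printed sheet) before it goes anywhere near the control network. A minimal sketch, with invented file contents:

```python
# Minimal sketch: verify an update file carried in on removable media
# against a reference hash obtained out-of-band, before it goes anywhere
# near the control network. File contents here are invented for the demo.

import hashlib
import os
import tempfile

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path, expected_hex):
    return sha256_of(path) == expected_hex

# Demo with a throwaway file standing in for the vendor update:
fd, path = tempfile.mkstemp()
os.write(fd, b"firmware image v2.1")
os.close(fd)

good = hashlib.sha256(b"firmware image v2.1").hexdigest()
print(verify_update(path, good))      # True: matches the reference hash
print(verify_update(path, "0" * 64))  # False: tampered or wrong file
os.remove(path)
```

Note that this only catches tampering between the vendor and the plant; it does nothing against an update that was already malicious when the reference hash was made.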
You can't change the password.
Let's just not forget that Microsoft has his share of guilty on this stuff too. (When it doesn't, right?)
"(2) On SCADA: Why are SCADA systems not on air-gapped networks? ... And why do critical systems not have their USB ports silly-puttied shut?"
a. Reports generated by the SCADA system are wanted by the maintenance team, plant management, corporate head office, etc. and they all demand it in real-time.
b. Equipment vendors want to perform remote diagnostics and software upgrades without physically visiting every site around the world.
c. New algorithms and parameters have to be loaded into the system, maybe imported from another site.
I *strongly* doubt this is the "first" SCADA malware.
First caught, *maybe*, and I'm very doubtful even of that.
Default password? It has been the practice of decades.
In my early days as a sysadmin I have deliberately 'man-in-the-middled' the static password to the HP server firmware when an HP serviceman was called in to fix things - by connecting a PC with two RS-232 ports between the terminal and the console port. The serial cable went below the raised floor (and the 'spy' PC was hidden there). The guy probably never thought of checking it or connecting a laptop directly to the server's console port before asking me to leave the room.
Why the owner of the machine was barred from doing specific things in the firmware in the first place was (and still is) beyond me. After that I never had to wait three to four days for a service if I had to diagnose a problem or reconfigure something.
The company that currently employs me only recently built SSH (and other secure protocols in place of plaintext ones) into its IP-connected embedded devices and made a policy of never again using plain-text authentication and hard-coded or 'engineering' passwords. Even if the IP network the devices were installed on was never intended to be connected to the public Internet (and no client of ours would ever think of it), having telnet access with the default login and password being the company name was rather stupid, yet a standard practice for many years. How many systems were fielded with this? Luckily, in our area of business upgrading systems regularly is a business necessity for our clients, so the old versions of the software already are (or shortly will be) replaced.
"An airgap will not protect you from infection, only from leaking data"
Err, sorry, you are wrong on that last bit: an air gap won't stop data leaking out any more than it will stop malware getting in.
There is no magic that says the writable USB stick (and most have only a software write protect) cannot carry data back out, as well as infecting.
I discussed "air gap"-jumping malware via removable writable media some time ago on this and the Light Blue Touchpaper blog.
Oh, and don't think CD-Rs or CD-RWs are safe either; they are not. Many CDs are written without being "closed", and even those that have been closed can still have data written to them if there is "unused" space at the end.
As for the issues with SCADA systems, I have been saying "it's crazy" for years, and it does not bring any sense of satisfaction to say "I told you so". (It just makes you unpopular ;)
There is a flip side to why some SCADA systems are on networks. Think back just a few months to that minor problem that has been dumping oil on the US coast...
Well, there is no data from the systems on the platform to say what went wrong; it went down with the ship, as it were...
The two issues with SCADA systems in general are "fear" and "life expectancy". Even those who write them make tippy-toe modifications, for fear of breaking a production system and being responsible for a multi-billion-dollar disaster with hundreds of dead, injured and long-term disabled people.
Some production SCADA systems still run on unpatched Windows NT 4 systems at the head end. Worse, some instrumentation uses a stripped-down version of Win NT 4 in "ROM" (HP were doing this) with custom drivers for custom hardware, with semi-custom upload protocols to the SCADA system, as the industry-standard 4-20mA loop is "lacking"...
Security is a quality issue, and you cannot bolt it on retrospectively; you have to build it in from day zero.
But who is going to have the "spherical objects of steel" to start a new SCADA system from scratch that's still going to be secure in 25 years?
If it was down to me I would start with a telecoms-grade Unix with open source, not a closed-source OS with a maximum three-year obsolescence time...
AFAIK, the other vulnerability is the LNK bug that every version of Windows since Windows 2000, and perhaps Windows NT, has had. In other words, Microsoft systems have been vulnerable for a decade. The worm uses the default password to propagate, and the vulnerability to escalate privileges.
The researcher who discovered this vulnerability in the wild was the first to disclose this. I don't think anyone knows how long that vulnerability has been used prior to this discovery.
And also consider this - all Windows XP SP2 systems are and always will be vulnerable, since MS no longer supplies patches.
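The LNK flaw fires when Windows renders a shortcut's icon, and Stuxnet's crafted .lnk files pointed that icon at a DLL on the removable media. A proper check needs a real Shell Link parser; the sketch below is only a rough triage heuristic that looks for a UTF-16LE ".dll" reference in a shortcut's raw bytes, with fabricated sample data:

```python
# Rough triage heuristic, NOT a real detector: flag shortcut files whose
# raw bytes reference a DLL, as Stuxnet's crafted .lnk files did. The
# Shell Link format really needs a proper parser; this just scans for a
# UTF-16LE ".dll" substring. Both sample byte strings are fabricated.

def lnk_references_dll(blob: bytes) -> bool:
    needle = ".dll".encode("utf-16-le")  # how path text is stored in .lnk
    return needle in blob.lower()        # bytes.lower() folds ASCII case

benign = "C:\\Docs\\report.txt".encode("utf-16-le")
crafted = "\\\\.\\STORAGE#RemovableMedia#\\payload.dll".encode("utf-16-le")

print(lnk_references_dll(benign))   # False
print(lnk_references_dll(crafted))  # True
```

Plenty of legitimate shortcuts reference DLLs (icons live in them), so this flags candidates for inspection; it does not identify malice.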
On why SCADA systems can't be air gapped.
I work at an electric utility, and SCADA is my area, and here's how this has played out over the last 30 years or so.
Companies installed SCADA systems in the early 1980s and they were air gapped. Most were set up with default passwords and the like, but it didn't matter, since networks weren't even popular, much less the Internet. All the utilities operated as islands. These systems weren't patched or updated, because vendors didn't want to recertify their system for every update when they were isolated anyway.
Then the information revolution happened, energy markets were formed, and SCADA data became critical for these markets to function. The SCADA networks were connected to the corporate network (hopefully through a firewall, but not always). SCADA systems were/are maintained by engineers, not IT people, and never patched or upgraded. Many of these systems are still running VMS versions from the 1980s; we just upgraded ours a little over a year ago. Now IT is getting involved, but there is a huge backlog of work to get all of these systems up to speed.
Is it just me, or is a fixed known password no more effective than a lack of password?
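In guessing-effort terms, yes: once published, a fixed password contributes zero bits. A toy comparison (all numbers illustrative):

```python
# Toy comparison of password search spaces. Once a fixed default is
# published, the attacker's first guess wins; the numbers below are
# illustrative, not measurements.

import math

def bits_of_entropy(search_space):
    return math.log2(search_space)

published_default = 1      # the known value is tried first
six_lower = 26 ** 6        # random 6 lowercase letters
ten_mixed = 62 ** 10       # random 10 alphanumerics

print(bits_of_entropy(published_default))      # 0.0
print(round(bits_of_entropy(six_lower), 1))    # 28.2
print(round(bits_of_entropy(ten_mixed), 1))    # 59.5
```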
"SCADA systems were/are maintained by engineers, not IT people, and never patched or upgraded."
This is the first really plausible explanation that I've read or heard.
One other thing I forgot to mention: there is absolutely zero maintenance window for these systems. The users require these systems to be always available. Newer systems do quite well with failovers and redundant networks and such, but with the older systems it was always quite a bit of hoping it would go well.
I worked with SCADA folks in the late '80s and can add slightly to your history.
The security (obscurity) where I worked was that the phone numbers to dial into the SCADA systems were not (widely) published. All you needed was to find the phone number and then guess the baud rate and parity settings. Not hard to do at all.
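The size of that remaining "secret" is easy to quantify. Assuming a typical set of modem-era serial settings (the candidate lists below are illustrative), the whole space enumerates to a few dozen tries:

```python
# What "security" is left once the dial-in number leaks: guessing the
# serial settings. The candidate lists below are typical modem-era
# values, chosen for illustration.

from itertools import product

BAUD_RATES = [110, 300, 1200, 2400, 4800, 9600, 19200, 38400]
PARITIES = ["N", "E", "O"]  # none, even, odd
DATA_BITS = [7, 8]

combos = list(product(BAUD_RATES, PARITIES, DATA_BITS))
print(len(combos))  # 48 combinations -- trivially exhausted, even by hand
```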
In my mind, there are two big news items here...
First, the LNK 0-day vuln was announced not too long ago and it has great potential; I would liken it to ILoveYou's capabilities.
Microsoft is not raising the alarm (SANS did), and I can understand why, since the only thing in the wild targets the industrial world, not the general public.
But, and this is very interesting, has anyone noticed how fast this 0-day vuln became such a precise and very targeted attack?
To me, this is the silent news in the matter. Those folks were quick to draw, most likely already primed for attack...
I'm with Carlo re his (1). Unchangeable passwords don't pass the straight-face test.
@erwin - I would want to be the guy who insists on including security that the customer doesn't know he needs. The vendor is supposed to know more than the customer and be able to do it better.
You criticise Carlo Graziani for 1) on grounds that no-one wants to be responsible for changes that could break the system. Then you say "admins need to update the systems, right?".
So which is it? No-one is allowed to change the systems, or admins need to update the systems? If systems can be updated safely for one reason, surely they can be updated safely for other reasons.
The real problem with these systems is that they are usually cost-critical.
It would be easy to set up an IPSEC tunnel mesh and ensure that only things that should connect do, and only on the protocols that they should. However, this complexity impacts implementation time and system performance, which are measurable and easily converted to expense; defining the required level of "security" isn't obvious, and isn't so easily cost-justified.
e.g. I suggest network encryption above, but would application encryption and firewalling cover 99% of the attack surface at 80% of the cost?
We really need a risk-assessment framework that helps drive investment in the right places - rather than the current ad-hoc "yeah, viruses are bad, so it must run antivirus." approach to system design.
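A minimal example of what such a framework would formalize is the annualized-loss-expectancy comparison below. All probabilities and dollar figures are invented, and the two options echo the network-versus-application-encryption trade-off above:

```python
# Back-of-envelope annualized-loss-expectancy (ALE) comparison of two
# mitigations. Every probability and dollar figure here is invented.

def ale(incidents_per_year, loss_per_incident):
    return incidents_per_year * loss_per_incident

baseline = ale(0.5, 2_000_000)  # do nothing: expect $1M/yr in losses

# Option A: full network-layer encryption (IPsec mesh), dearer, stronger
a_cost, a_residual = 400_000, ale(0.05, 2_000_000)
# Option B: application encryption + firewalling, cheaper, slightly weaker
b_cost, b_residual = 150_000, ale(0.10, 2_000_000)

a_benefit = baseline - a_residual - a_cost
b_benefit = baseline - b_residual - b_cost
print(round(a_benefit), round(b_benefit))  # 500000 650000
```

On these made-up numbers the cheaper option wins; the point is that the decision becomes arithmetic instead of "yeah, viruses are bad, so it must run antivirus."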
"(1) It worked for 20 years and would YOU like to be the guy that breaks all installed systems controlling valves and such by adding security that nobody demands?"
Ironically, this is the same commentary you hear from those who don't want to replace paper-based systems.
Isn't that the point of testing in parallel before shutting down an older system?
I've been thinking about the difference in design quality between closed- and open-source products.
I've seen a lot of both -- and I think this is a great example of poor design in closed source systems.
Why? Well, the solution in a closed source commercial product is to throw more labor at the problem. The biggest cost is time -- one prefers to make a quick hack and get it out of the door. Coding labor is cheap compared to thinking labor and design. You can always just hire a few extra contractors.
In open-source situations, you usually have a lack of money for "extra labor". You'd rather spend an extra week or month thinking your design through and avoiding having to hack the system up for a specific configuration, for a specific demand. Coding labor is expensive relative to thinking labor and time.
SCADA is an excellent collection of hacks over a quarter century to get "something" out the door. You can never throw it away and start from scratch -- much too expensive relative to the cost of just hacking it a little bit more.
You often, however, see open-source forks of systems that are complete rewrites -- they just recently completely rewrote the entire KDE desktop, for example. Apache has had some major rewrites. GCC has had major rewrites.
The tendency in open-source is to keep your source small but flexible -- spend time thinking about general solutions rather than hiring a team for a specific solution. In a commercial setting, you're better off hiring 10 contractors and hacking it all just one more time rather than refactoring and rewriting systems.
This becomes a major security risk, from the obvious problems of opaque code-bases.
Another example is looking at the revision control trees of a commercial versus open source product. The commercial trees are nasty -- every customer has their own branch, with repeated bug fixes and customizations. Open-source trees stay much more simple -- you don't have a team that "lives inside the tree", you don't have the manpower to administer a nasty tree, and there's lots of pressure to merge in if you want to have continued support.
Do you think anyone in MS really understands their spaghetti? It explains why they have such a problem making even simple large-scale changes, such as new hardware, endianness or word size. Open-source kernels generally are quickly upgraded -- they could never afford the inconsistency and bad design that would make a simple typedef a problem.
I'm surprised there has not been more public outcry about the fact that this exploit uses a stolen Verisign code signing certificate. I'm no expert on this, but it seems like in the absence of that signed code, the worm wouldn't install on 64-bit OS's, which would at least somewhat mitigate the vulnerability.
"but it seems like in the absence of that signed code, the worm wouldn't install on 64-bit OS's, which would at least somewhat mitigate the vulnerability"
Ha Ha Ha Ha,
Sorry but that is quite the funniest comment I've read today.
The reason is not that what you are saying is untrue, but that I cannot think of a 64-bit SCADA system; most are not even 32-bit, and some still run as 16-bit in DOS boxes (I kid you not).
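As for what the stolen certificate actually buys an attacker: the loader only ever asks "was this signed with a trusted key?", so possession of the key is the whole game. Python's standard library has no public-key signing, so this toy uses HMAC as a stand-in for the Authenticode signature; the principle carries over:

```python
# Toy illustration of why a stolen signing key defeats signature checks:
# the verifier only ever asks "was this made with the trusted key?".
# Python's stdlib has no public-key signing, so HMAC stands in for the
# Authenticode signature; the principle is the same.

import hashlib
import hmac

VENDOR_KEY = b"vendor-private-key"  # imagine this is the stolen key

def sign(driver, key):
    return hmac.new(key, driver, hashlib.sha256).digest()

def os_loads_driver(driver, signature):
    expected = sign(driver, VENDOR_KEY)
    return hmac.compare_digest(expected, signature)

legit = b"legitimate storage driver"
malicious = b"rootkit driver"

print(os_loads_driver(legit, sign(legit, VENDOR_KEY)))          # True
# With the stolen key, malware passes the exact same check:
print(os_loads_driver(malicious, sign(malicious, VENDOR_KEY)))  # True
# Without the key, the forgery fails:
print(os_loads_driver(malicious, sign(malicious, b"guess")))    # False
```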
I was more interested in the list of stuff that was being uploaded. Plant schematics, settings, history, schedules... Whatta haul of industrial espionage.
You probably go through metal detectors and have to check your cellphone to get in so you can't take a picture of the receptionist, and the real stuff is zipping off to a server in whoknowsistan.
"(1) It worked for 20 years and would YOU like to be the guy that breaks all installed systems controlling valves and such by adding security that nobody demands?"
I don't have to because it's being done for me. Every time this comes up, first you hear the above explanation; then you hear "Oh, sh1x, the hackers don't care what I think, they've got access" and then "Why didn't my manufacturer make my software secure, I wanna sue them."
It's all laughable, and like a train wreck, you can see the problem a quarter mile away, but because of the inertia, you have to let stuff happen. Only then will certain industry standards be upgraded to fit reality.
"SCADA is an excellent collection of hacks over a quarter century to get "something" out the door"
Hmm, I would agree with you if you changed "excellent" to "dire / desperate / diabolical / dysfunctional" or some other such antonym of "excellent" ;)
And you are almost spot on with,
"You can never throw it away and start from scratch -- much too expensive relative to the cost of just hacking it a little bit more"
You just missed out the bit that, due to layoffs and outsourcing and the occasional natural migration, there is now no longer any staff who understand the code base. Also, those that did knew their likely fate and made damn sure the code was unreadable, as a form of self-defense.
For my sins when I worked in a place that suffered these ills I kept two code bases.
The one I worked on in private and the one I left on the company systems.
The difference between them was that I ran the private version through a modified pre-processor that stripped all the comments, changed meaningful variable and sub names into unique hashes, and did one or two other things with white space and commas etc. Yes, it made using a source-code-level debugger harder than it could be, but then I rarely used one, for various reasons (mainly because they were crap and the ICE tools served me way, way better).
The moral is,
If, as a manager, you treat the people you are responsible for like chimps, you must expect them to throw a "tea party" from time to time...
Which brings me nicely to your observations about closed -v- open source code. In general I have found closed source code quality reflects the ethos of senior managers in code cutting (sweat) shops.
And it is almost invariably well written and thoughtfully commented when it is written by hardware design engineers or others who did not go down the "comp sci" path for their MSc / PhD.
Also, those that had spent three or four years writing real-time assembler code generally put serious commenting in their code. Which also tended to work the first time, as they like formal methods (which they stuck in the comments).
It takes about twice as long for them to write code this way, but it takes about a third of the time to get "production ready code". Oh, and maintenance tends not to be an issue unless there are underlying "here be dragons" hardware issues.
For some reason the non CompSci trained programmers I've worked with are streets ahead when it comes to embedded and real time coding and they invariably don't use "objects" except where it provides advantages. The reason might be as one bod put it at interview "because I'm lazy I take the time to do it right the first time" (I couldn't argue with that and he proved it over and over again as an employee).
As for MS, it is of course (supposedly) difficult to say ;).
However, those that have gone through MS's various revisions of code by disassembling the .com and .exe files tell tales of "too many meatballs on the spaghetti rolling off the plate and out of sight". It actually got worse, not better, with Win MFC, and was not helped by the "macho man" Windows code cutters who hoarded snippets of knowledge to themselves, which always gave rise to serious code revision and maintenance issues.
However, what we do know from the "teardrop" DoS attack (anyone remember it?) is that MS used to carry forward huge tranches of code unmodified from Win 3.11 to NT 4+. We also know that they ripped off from open source most of their "networking tools" software and broke some in "command line translation"....
We also know that Dave Cutler (famed for attending senior management meetings wearing a tee shirt that said "you're screwing me") did not write "a better Unix than Unix" when he went to Microsoft, simply because he never had enough time and had to "reboil cabbage" code from MS-DOS into it.
Another result was "monolithic daemons": if you ever look at the processes running on an NT box to do with the likes of networking, you will see the same process name over and over again. This is one large "all thrown in" executable that uses command-line flags / environment variables to decide what function this particular instance will perform. This 5,000-ft view does not bode well for closer observation.
We might have an indicator of just how bad the MS code base is from the fact that MS will not put their hand in their pocket to pay people to disclose bugs to them, unlike Google, who will pay just over 3K USD for critical bugs in Chrome.
This is a bit silly, because it does not induce "responsible disclosure" in "bug hunters / researchers".
As a wild guess / possibly this may be due to the fact MS have a fairly good idea of the number of bugs in their code base, and somebody has done the math, and does not like the result.
Which also might indicate that they will only fix bugs that become public irrespective of what they might know from internal "security code reviews"...
The "single unchangeable password" really is inexcusable. Kerberos was available before Siemens created their crapware.
"because of the inertia, you have to let stuff happen."
Sad and true, but sometimes the cost to society can be unacceptable. That's why i started working in this field (a long time ago).
The main problem is that a lot of business people (the ones with the budget) do not understand (or are not willing to see) the risks that security will mitigate.
Good scada security is only 10% technical and 90% awareness/proper risk management.
Calling for VPN/DMZ/IPSEC/Antivirus and such will not fix the root of the problem, only its symptoms.
> It is true that most of the viruses enter our system
> and laptops via usb sticks
No, and don't advertise with stupid FUD.
I'm hearing some fairly weak excuses for the shockingly bad state of SCADA security here.
I have seen critical, government-owned infrastructure protected by a 6-letter dictionary word password on a very large, very open LAN with internet access.
More recently, I've found some less-critical-but-still-very-important infrastructure accessible on the open internet with no credentials or encryption required.
I can't think of any reason why that first system should be connected to the internet. It doesn't help reporting or system updates at all.
In the second case, the hardware in question probably does need to be on the internet, but it should have all sorts of passwords and transport encryption.
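Some arithmetic on that first case. With an illustrative dictionary size and a deliberately slow online guessing rate (both numbers assumptions, not measurements), a six-letter dictionary word falls almost instantly:

```python
# Search-space arithmetic for a six-letter dictionary word. Dictionary
# size and guess rate are illustrative assumptions, not measurements.

import math

dictionary_words = 20_000   # generous for common six-letter words
guesses_per_second = 1_000  # a deliberately slow *online* rate

seconds_to_exhaust = dictionary_words / guesses_per_second
print(seconds_to_exhaust)                     # 20.0 seconds, worst case
print(round(math.log2(dictionary_words), 1))  # ~14.3 bits of "entropy"
```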
"Sad and true, but sometimes the cost to society can be unacceptable. That's why i started working in this field (a long time ago)."
On such things, empires have fallen and been erased from this planet, only to be known by the ruins they've left behind.
We show time and time again that we fail to understand that, and commit the sins anew. SSDDDT should be the new pass phrase, translated for the rest of the world, "Same Sh1x, Different Day, Different Technology."
The weak excuses can be reduced to "no one appreciates security until a breach rips them a new one". That, of course, is the equivalent of living in California and deciding the only time to implement earthquake safety standards in building construction is after a 7.5-or-greater quake rips through the area, instead of incrementally putting them into effect so 30,000 people don't have to die in a measly 6.5 because their unreinforced masonry dwellings squash them flat.
We're smart enough to know what kills people in an earthquake; why can't we be smart enough to know what can potentially kill a business when its ability to function is destroyed by industrial espionage siphoning off its intellectual property, or worse?
@Hate advert posts
He's just trying to tell you that viruses can now be transmitted securely and reliably with his product. Encrypted .lnk files will be the new rage!
"The "single unchangeable password" really is inexcusable. Kerberos was available before Siemens created their crapware"
You should be asking the question of,
"what was the password intended for?"
Before making judgment.
In many "stand alone" systems all it was intended to do was replace the "operator key" of older non-PC systems.
That is, the level of security it was intended to provide was similar to the "standard key" on trains or excavation equipment etc., without the expense of a switch lock or the attendant "key loss" issues.
It was most certainly not intended to provide any physical security as that was provided by operations room entry control etc, much like that of the very early "big Iron" batch processing computers.
The sort of security that modern ICT systems need was not even envisioned back then, as PCs for the most part were not even networked, and SCADA systems usually connected via a serial port to Vaxen or PDP systems.
Most instrumentation was 8-bit, on the likes of the 1802 CMOS processor with just a K or two of memory.
I had the distinction of designing the first approved 16bit "Intrinsically Safe" (ExE Zone 1) instrumentation / telemetry unit in the UK. It was designed as a Eurocard which could use either the 8088 or 8086, with up to an astounding (for the time) 64K of SRAM and 256K of EPROM.
It would appear the currently preferred method is a COTS system based on the likes of PC/104 in an "Explosion Proof" (ExD) box, using fibre-based Ethernet rather than 24V RS-232, as the Ex requirements are significantly alleviated.
@ Clive Robinson regarding his eventual fate
"I had the distinction of designing the first approved 16bit "Intrinsically Safe" (ExE Zone 1) instrumentation / telemetry unit in the UK."
Alright, I just got the itch again. That sounded unique enough to put me into round 2 of trying to pin down the slippery Clive Robinson, finally figuring out just which one you are. How many people could be working on 16 bit telemetry units? I figure it was a while back, so the info might not be readily available on the internet. I thought I had you with that one. How arrogant I was....
Main Problem: Many key contributors to U.K.'s telemetry, computing and government projects are named Clive Robinson. They are also in the age bracket I put you in.
Weird Problem: I tried to use the 16 bit stuff to filter it. If I don't put quotes around your name, Google gives me some strange results. There were dozens of projects, mainly in 16 bit audio devices, where the first name of one guy was Clive and the last name of the other was Robinson. It happened over and over. The other names were different for many of them, suggesting it wasn't the same two repeatedly. Is that just weird to you too? What are the odds of that?
Best Guess After 5min: [Former] Managing Director of PRQ
Assurance of Correctness: Less than MS Windows
I'm sure I could pin you down with what I know, but I'm just not *that* obsessed with it. The consistent lack of my usual quick success is starting to annoy me, though. Like trying to find a needle in a stack of identical-looking needles. Cumbersome and painful, but a fun challenge. You're going down, Clive. It's just a matter of time. ;)
Why, exactly, are rig mainframes attached to the internet? If the system's job is to monitor what goes on in the rig, why does it need to be networked to anything off of it?
scada_man: Couldn't all of these things be done on a schedule, perhaps once a day or even a few times a week? Heck, for updates you could always just have a stack of CDs next to the public-use computer on the rig, download the fix to that, then have a tech manually install and configure it instead of risking malware through a constant network connection.
As for the suits onshore, why would they need access to real-time maintenance data whenever the mood strikes? I don't ask to be snide, I just don't know enough about the industry to be able to think up a reason.
"Before making judgment.
In many "stand alone" systems all it was intended to do was replace the "operator key" of older non-PC systems. "
And thanks to easy access to data, everyone now has the master key. Sounds pretty reasonable to me. On a whim, I can walk in and start turning valves and flipping switches to see what happens to the process. And hand out access to several other people so we can make the plant supervisor go insane as things start to discombobulate. And we don't really have to be smart hackers to do this anymore. Power to the People!
Slight historical correction: The *username* was/is SYSTEM. Roughly equivalent to root in Unixland. The default password on old (as in prior to V5.0) VMS installations was MANAGER.
VMS systems of that vintage actually shipped with two accounts enabled by default. SYSTEM was one. The other was FIELD. Want to guess the password? Hint: The account was intended for use by DEC's Field Service organization...
Also, this default password vulnerability has been gone from (Open)VMS for a loooong time. I believe that it was as of V5.0 (first shipped in 1988) and later that you are prompted to pick a SYSTEM account password during the install dialog. But "everyone knows" the story about the default password on VMS.
Battleship of an operating system. Lord, how I miss it.
"Is it just me, or is a fixed known password no more effective than a lack of password?"
Remember the Gates of Moria. A fixed password is great if the attacker doesn't know it, but terrible if he does -- or he can guess it. (I suppose the real moral of the Gate of Moria is not to write your password down and stick it to your monitor.)
@ Nick P,
"I'm sure I could pin you down with what I know, but I'm just not *that* obsessed with it. The consistent lack of my usual quick success is starting to annoy me, though. Like trying to find a needle in a stack of identical-looking needles. Cumbersome and painful, but a fun challenge. You're going down, Clive. It's just a matter of time. ;)"
Hmm, I'm curious do you like fishing as a hobby?
Fishing is apparently the number one sport in England, if not the UK. However, due to the twin effects of the weather in the UK and a subset of English women, the way the "sport" is carried out in England is like no other place in the world (I hope ;)
A brief description will, I hope, get the oddity of the English "rod man" across.
When walking in the English countryside or sailing on her inland waters, you can see all these odd-looking blokes congregating on the muddy banks.
Invariably these blokes are sitting on their tackle boxes fiddling with their rods, and with their flies, discussing what bait is required to make their miserable-looking worms more effective, so as to attract attention and perhaps a nibble from the poor creature of their desires.
Then, in the majority of cases, when these blokes have had a nibble, played their rod back and forth and left the poor creature of their desires exhausted and breathless, they spurn the wretched creature and cast it away, before going and boasting to their like-minded friends about the size of the poor creature's physical attributes, how much resistance was put up before it was subdued and the conquest completed.
Worse they boast about how after each and every entrapment they quickly move on to the next conquest.
Quite honestly I've never seen the atraction or gratification involved with rod and line fishing, and they way they continuously fiddle with their tackle. The whole game has always struck me as a compleate waste of time and resources.
After all what's wrong with a large net or inshore tide or drift line, atleast it puts food on the table?
I'm assured by rod and line "men" that "It's not sporting" and that patience, required for "the thrill of the chase" is it's own reward in these respects and the longer the chase the better the satisfaction on aquiring the prey.
Then when you ask them about that one creature of their desires that always alludes them and fails to take their bait, they go all misty eyed and claim "One day..."
"As for the suits onshore, why would they need access to real-time maintenance data whenever the mood strikes? I don't ask to be snide, I just don't know enough about the industry to be able to think up a reason."
First off it's mainly not the "suits" that need access to the data.
It's the various engineers and the attendant safety systems.
Once upon a time all the instrumentation was as close to the "Christmas Tree" (well head) as possible. And the control of flow and all sorts of other things was made by the guys turning the handles.
Well, safety and other (mainly financial) reasons made the companies move away from the Christmas Tree in the '60s, then record and analyse the data in the '70s, and start computerising it in the '80s. During the '90s, the cost and reliability of comms dropped to the point where remote platforms were no longer controlled from adjacent manned platforms but from onshore. This provided great financial savings, not just on manpower but through JIT supply of product etc., and also increased safety. Towards the end of the '90s the well heads went subsea.
One advantage of this is that it enables "what went wrong" to be determined should a significant event occur.
Unfortunately this still tends not to be the case with exploratory platforms, which means that when events move from significant to catastrophic and the jacket's integrity is compromised, data is lost.
This is the case with the deepwater rig off the US coast: the last data that is known is from five hours before the loss of the platform and crew, which makes determining the cause of the event very difficult. Something that unfortunately makes politicos and lawyers extremely happy, as they can jump on people with their own pet fears and by and large get away with it.
@Clive: At least a few years ago, I know that Microsoft Office source code invoked the function SaveA5World(). That function did something on the original Macintoshes using the 68000-68040 chips. (If I'd kept my original Inside Macintosh books, I could tell you what.) It never did anything on a PowerPC Macintosh or on anything with an Intel chip or equivalent.
I consider that evidence that Microsoft engineers don't understand their code base, and are afraid to change certain things because they don't understand them.
"I consider that evidence that Microsoft engineers don't understand their code base, and are afraid to change certain things because they don't understand them."
That's actually two claims: don't understand; afraid to change. The first is obviously true: Windows and Office are extremely complex multi-million line programs whose developers come and go. That kind of complexity is hard to master no matter how you do it. Eventually, there's just so much to think about that the mind can't fathom it easily. Microsoft wasn't using low defect methods to write most of that code base, which means the documentation and structuring probably sucks. So, if a current developer didn't understand the code, do you blame them?
On the other point, Microsoft is definitely concerned about making changes to the code. Understanding the code is just one part of it. Legacy is the real issue. Microsoft has sold its applications as a legacy in the making, each app being supported long enough for two or three other versions to come out. The products also all interoperate. This means a change in one app can hurt the platform as a whole. Even if they understood the code, they'd be pretty careful. Since they don't understand the code, they just add the features anyway and rely on people's willingness to put up with software flaws in essential apps.
@ Clive Robinson
"do you like fishing as a hobby?"
For fish or people? ;)
I find fishing too boring. Games of anticipation and patience aren't my thing. I often play online shooter games. The fishers are the "campers" who wait in certain spots and shoot what they see. Then, there's the guys who like to be really close. There are also guys who shouldn't be there. Then, there's me. I'm the first out the door, usually get the first kill, and then I mix my tactics. I change my tactics and weapons to suit the enemies'. I run through areas, mowing people down, then wait around a corner like a spider in a web. I mess with their minds via stunts like crawling up to a sniper, putting a claymore on him, and watching in humor as it blows him off a roof when he lifts his gun for a shot. I appear more skilled than I am, but it becomes real for my enemies. I find that this sport provides more variety and immersion than English fishing...
"Then, when you ask them about that one creature of their desires that always eludes them and fails to take their bait, they go all misty-eyed and claim "One day...""
Nice finish. I see the bard ends the epic with an allusion to an inflated sense of self-importance. You neither took the bait, nor did I desire to cast another. The only desire I pursue with passion is assurance. I often see a glimmer of it, which turns out to be a mirage, and then I say with longing eyes: one day.
I think you hominids are going to have to rethink your "cyberwar" strategies. The sabotage potential of this worm is stunning.
@ Raven, Software Guy,
If you put your two comments together,
"have to rethink your "cyberwar" strategies"
"Symantec reports that the majority of detected infections are in Iran.."
You would get the seed of another possible conspiracy theory...
After all, there were claims in the past that the US (CIA or whoever, I can't remember the details) put malware etc. into hardware that ended up at the center of a major Russian oil production area that then went "pear shaped" and caused massive cost.
So you have,
1, US warhawks banging on about Iran (in a similar way to the build up of the Iraq invasion).
2, Various US agencies vying for a slice of the "Cyber Pie" by showing their expertise to politicos in secret "pissing contest" briefings.
3, A very sophisticated piece of 0-day malware (implying professionals) specifically targeting industrial plant control equipment.
4, Malware that has "air gap" crossing ability.
5, That sends back details of plant equipment (schematics etc).
6, Apparently directed against Iran, which just happens to be one of the US's "Axis of Evil" nations that the US has very few assets in.
Anyone want to cry out "Cyber-spying for WMD" and get the ball rolling?
Just one fly in the ointment: apparently there is no reverse air-gap crossing (which is odd, as I described a way to do this some time ago)...
@raven: "The sabotage potential of this worm is stunning."
It really didn't work well. It got caught.
Unless the intention was to raise awareness of the problems of SCADA, it failed.
They should have done their homework first.
Then, worked at stealthy attacks that were not amateurish.
Nothing to rethink here, either. A very brainless worm, really. Very obvious. SCADA's problems are well known.
If this were a spy movie, I would rate it as derivative, obvious, and painfully predictable.
Still, at least it raised some awareness about the problems.
> If you put your two comments together,
> "have to rethink your "cyberwar" strategies"
> "Symantec reports that the majority of detected infections are in Iran.."
Unfortunately for this line of reasoning, no two reports that I have read agree on the list of most-affected countries. In some lists, Iran is one of the hardest hit; in others, it is way behind the USA. The simplest explanation for this is that infection is currently growing fast and so the stats are changing all the time.
If we want to speculate about national originators based on the stats, what's more interesting is countries that one might expect to see in the list, but do not. For example, one conspicuously unaffected country is China ....
This is probably not the first time that industrial control systems have been threatened, but it is a dramatic example of the truism that no business or industry is safe from the threat of cyber attacks. This is not the time to stick your head in the sand and say "it can't happen here." This bug hit the Middle East and the Asian subcontinent hard. Cyber attacks on industrial control systems are happening now and will probably increase. We use a combination of industry standards with proprietary technology and best-practices recommendations to fill in holes. It takes a lot of disciplines working together to create secure industrial environments in a world that is increasingly connected, and we are strong advocates for making sure that security is part of any industrial system deployment -- and that companies are vigilant in watching for incursions as new threats are introduced.
One problem with putting security on a SCADA system is that it hits performance, and a hit on performance is a hit on the production process response time. Get that wrong and BANG!
Before we get to the software, we need to get to the systems. An outside auditing / security company could do the work, but since the plants are privately owned, there's no law that says they need to be secure. Let's get the white-hats in there first. Apparently the folks who work there are under pressure to ignore safety. We've seen this in coal mines and now in the news about the Gulf oil rig disaster. When the bosses have control over the safety folks, guess whose vote counts more? Outside auditing and regulation are necessary.
"Outside auditing and regulation are necessary"
And in most cases the "man who cuts the cheques" would still carry on doing the same stupid and dangerous things.
I could go on and give a huge list of why, but the simple fact is it's the "short term gamble" for senior execs chasing "shareholder value" again, allied to a failed audit process.
The simple fact is safety costs every day, and risk is probabilistic, so it "only happens on somebody else's watch". Even when caught and convicted, the fines etc. are little more than token, and the execs walk away from it and blame somebody else (a subcontractor etc.). And we the taxpayers are apparently (according to the politicos) not willing to "pay for safety" (so we just pay higher insurance instead).
Any legislation that is going to work has to have real teeth for those in the walnut corridor.
People talk about "corporate manslaughter", but there is rarely the evidence for it, even without legislation. The US has a "point the finger" system that has come out of "plea bargaining", but that is a dangerous road to go down when there are multiple people in the frame and no real evidence.
@ 'O Dear'
If they run MS Windows on their HW, they are _not_ concerned with optimal/peak performance :-)
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.