Forever-Day Bugs

That's a nice turn of phrase:

Forever day is a play on "zero day," a phrase used to classify vulnerabilities that come under attack before the responsible manufacturer has issued a patch. Also called iDays, or "infinite days" by some researchers, forever days refer to bugs that never get fixed--even when they're acknowledged by the company that developed the software. In some cases, rather than issuing a patch that plugs the hole, the software maker simply adds advice to user manuals showing how to work around the threat.

The article is about bugs in industrial control systems, many of which don't have a patching mechanism.

Posted on April 17, 2012 at 1:22 PM • 20 Comments

Comments

sam • April 17, 2012 1:53 PM

Vendors should always fix security holes in new versions of software for new installations, but as far as patching a live industrial system goes, there's a nasty trade-off: industrial control systems are always expensive and often safety-critical, and even though a vendor may have tested the fix in 10,000 different configurations, every installation is different.

So you get to choose between vulnerable software (which, if you're smart, is physically isolated from the internet) or the considerable expense of taking the system offline, installing the new software, hoping that the configurations you created years ago still work in the new version, spending hours or days fixing the ones that don't, and recommissioning the system one module at a time while testing that the new control software works correctly. And remember, you bought the industrial control system to increase factory productivity and uptime in the first place, so a "patch Tuesday" regime will never be the norm in this field.

Ken (Caffeine Security) • April 17, 2012 2:27 PM

From the end user perspective, there are some threats which can be easily mitigated through proper security procedures. For example: if your industrial control system has to be managed by a Windows machine, never connect that machine to a network, and never plug in a USB drive that could possibly be infected with a virus.

Then again, from a vendor perspective, software should be designed so that it can be updated without possible impact to functionality. I've had this problem with Oracle software in the past, especially Fusion Middleware. Apply a patch, the whole system breaks, and of course the programmer who wrote the part of your application that interfaces with Fusion Middleware left years ago. So now you're faced with leaving a vulnerability unpatched or spending a large number of man-hours trying to figure out how to fix it.

Before I get too off course with my rant, there is a point to all of this.

Vendors should never write code and assume it will never need to be updated. Vendors should also program with security in mind from the start, and not as an afterthought, as I have seen all too many times.

But don't hold vendors too accountable if you didn't follow their security recommendations or industry best practices and left your industrial control system connected to the internet without a firewall or other sane security protection. There's a reason guides such as DISA STIGs exist - because it's impossible for a vendor to configure a "one size fits all" security policy.

Carl 'SAI' Mitchell • April 17, 2012 2:53 PM

I think this is really a design issue. Industrial control systems are designed to be hooked up to the internet to enable remote monitoring, and sometimes remote control. Remote monitoring is easy: have a dedicated monitor system that is hooked to the ICS via ethernet with the transmit lines from monitor to ICS cut. Then use UDP + something like syslog to send the data. Anyone breaking in to the monitor system can see what's going on, but they can't change anything. All your remote alarms and such still work. Updates to the monitor system are easy, and much lower risk than updating the control system.
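The one-way monitoring idea above can be sketched in a few lines. This is a minimal illustration, not production code: the monitor address and the syslog-style message format are hypothetical, and in the real setup the monitor's transmit lines would be physically cut, which is exactly why connectionless, fire-and-forget UDP fits.

```python
import socket

# Hypothetical monitor address; with the monitor's TX lines cut,
# no reply can ever come back, so UDP's one-way model is a natural fit.
MONITOR_ADDR = ("127.0.0.1", 5514)

def send_status(sock: socket.socket, message: str, addr=MONITOR_ADDR) -> None:
    """One-way status report: no ACK is expected or possible."""
    sock.sendto(message.encode("utf-8"), addr)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_status(tx, "<14>pump-3 pressure=4.2bar status=OK")
```

Because nothing is ever read back, an attacker who compromises the monitor host still has no protocol path into the ICS side.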

Remote control is hard. For that you need a secure remote access system that can be easily updated. But here any easy update will still need to be tested, running us into the problems we have now.

Vendors supporting remote control leads to it being used more often, and the inevitable security bugs can be exploited. Even the most security-focused systems have had remote-execution bugs. OpenBSD had one in 2004, and they do regular audits of the entire codebase.

Brandioch Conner • April 17, 2012 3:34 PM

@Carl 'SAI' Mitchell

"Remote monitoring is easy: have a dedicated monitor system that is hooked to the ICS via ethernet with the transmit lines from monitor to ICS cut. Then use UDP + something like syslog to send the data."

That's pretty much it. Or a special serial cable or whatever. People forget that hardware can be the answer.

Once you allow control of the system from anywhere other than the console physically attached to that system you have introduced an entire range of possible attacks.

bcs • April 17, 2012 3:41 PM

I agree with some of the sentiment of "you just can't do that for these systems" but what you *can* do is ensure that fixes are *available* so that they can be installed when and if the end user decides to do so.

Jenny Juno • April 17, 2012 3:53 PM

@one-way datacomm hardware

I looked into such products a few years back; the slickest one I found was the Data Diode - http://www.datadiode.eu/

When I saw the original article about Forever Days it seemed like there might be a market for dedicated network filters with some sort of deep packet inspection that you could put right in front of the vulnerable system. Teach the filter to block known exploits and you've effectively "patched" the system without having to touch the firmware.
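The in-front-of-the-device filtering idea reduces, at its core, to matching traffic against known exploit patterns before it reaches the vulnerable system. A minimal sketch of that matching logic, with entirely hypothetical signatures (a real filter would sit inline on the wire and parse the device's actual protocol):

```python
# Hypothetical exploit signatures for the vulnerable device's protocol:
# each entry is a byte pattern whose presence marks a known attack.
KNOWN_EXPLOIT_SIGNATURES = [
    b"\x00\x1b\x00\x00",   # e.g. a malformed function code
    b"A" * 256,            # oversized field used in a buffer overflow
]

def allow_packet(payload: bytes) -> bool:
    """Return False for any payload matching a known exploit signature,
    effectively 'patching' the device without touching its firmware."""
    return not any(sig in payload for sig in KNOWN_EXPLOIT_SIGNATURES)
```

The design choice mirrors an intrusion prevention system: the device stays unpatched, but the only traffic it ever sees has already been screened.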

Stew (The duck) • April 17, 2012 5:27 PM

Ken is correct; vendors can do only so much.
This is a battle we may not win.
Years ago I did lots of PLC/SCADA programming for a big consulting engineering company. The specs were given to us by the marketing people, rarely by the client's engineers. Almost all jobs were fixed-bid with tight delivery deadlines. Our requests for a quality OS or for security were ignored if they were not in the contract. There was never a request for security. Marketing people LOVED internet remote access to the controller.

Last year I "consulted" at a factory floor control station. No PLC, no SCADA, just a straight (MS-Basic) PC-machine interface. The factory technician was proud that he could surf the net on the same Win-XP controller while simultaneously running a very large, expensive (and dangerous) CNC cutting machine. He had installed an anti-virus program, so he was safe. ...sigh.

This setup is the norm, not the exception.

--end rant.

Clive Robinson • April 17, 2012 6:55 PM

As many have observed, both in this post and many previous ones dealing with SCADA etc., "remote access" is the issue to be resolved.

One way, where only monitoring is required, is the simple cutting of TX wires in either a serial or network lead (suggested in quite a few Unix security books from the last century). Likewise blocking/locking up other "data connectivity" ports such as USB, FireWire, etc.

But the reality is management want remote "full control" irrespective of the risk; worse, they want to "maximise shareholder value," so it has to be done as cheaply as possible, which usually means always-open public access connections.

There are a couple of old "extend the secure network by VPN" ideas, but the reality is that these days this is fairly easy to bypass, as has been shown by those who have taken over people's PCs with malware that fakes user input to access bank systems etc.

This sort of attack is possible because people will use commodity OSs. Now, I don't care if you are talking about MS or *nix platforms; the simple fact is there are way too many lines of active code and way too little formal design to say they are even remotely secure. The best you can say is attackers have not yet got around to "attacking them in anger."

And due to the complexity of the systems in use, it's not really possible to spot anything other than the most blatant of unknown exploits in use or known attacks, so stealthy use of a zero-day will most times slide past without tripping any alarms.

To actually design the hardware and software to do the remote access with a high degree of assurance is actually not that difficult for low-complexity systems. But it quickly becomes prohibitively more difficult at a slightly greater rate than the complexity increases.

Thus common sense would say, "if we have to have remote access, let's strip its complexity and thus functionality to a minimum." But no, common sense is not sexy or particularly cheap; all-singing, all-dancing whiz-bang systems requiring the latest hardware fully maxed out on memory etc. are what get purchased...

These are the sad realities of life, and they are not going to change until the "business drivers" change management's outlook on life. And all sorts of easily avoidable industrial accidents that happen every day tell you that all management will do is "externalise the risk" by spending on insurance and lawyers, not fixing the problem, as that costs more.

And you will find this management attitude / outlook all the way up the supply chain...

For years now I've said people need to make "security" into a "quality process," because most management are happy to spend on quality systems. Not because they understand them or the inherent benefits, but because they see the positive change in the bottom line of the accounts, and it is this and this alone that their "pay and conditions" are based on...

BApril 17, 2012 8:48 PM

Last Friday I commented on the Squid post about a recent talk I attended given by a DHS person.

After the talk I went up and asked him about the prevalence of forever-day exploits. He said he didn't have access to that information, that the information was classified, and that I really didn't want to know. I grinned and nodded, and came away with the impression that the situation really is as bad as we feared.

Clive Robinson • April 18, 2012 4:57 AM

@ B,

He said he didn't have access to that information, that that information was classified and that I really didn't want to know.

Talk about "out of the mouths of babes and fools": I've heard almost exactly the same from so many bureaucrats in the past that I've given up counting.

I once actually decided to push on such a "pompous bureaucrat" to see if he would fall over or wobble back up again, by asking the overbearing idiot in a Q&A session the simple question:

If as you say, you don't have access because it's classified, then how on earth can you possibly make a valid evaluation sufficient to say I "don't want to know"?

I'll let you guess what colour he went... oh, and for some strange reason I haven't received an invitation back to similar events...

The real answer is that the whole thing is a political embarrassment. I posted a link to the official statement indicating the change of classification policy back in Nov 2011 ( http://www.schneier.com/blog/archives/2011/11/... ), which I picked up from Ralph Langner's blog.

The part of the DHS with responsibility (ICS-CERT) has decided to abdicate its responsibility to act as a CERT and to just collect data, instantly classify the majority of it, and only hand out the knowledge to a "select few," which appears not to include the vendors or users of vulnerable products etc...

I can't remember the link off the top of my head, but the statement was made at the Applied Control Systems (ACS) Conference in Washington, where ICS-CERT Director Marty Edwards said that the agency was changing the process for handling reported vulnerabilities.

Basically, anything that was at all serious would be treated not as a reportable vulnerability but as a "systemic design feature," which due to the classified nature of such things would only be reported to those with the appropriate clearances...

However, things that a vendor could produce a "quick fix patch" for would be reported...

Thus a simple buffer overflow bug would get reported to the vendor, who would make a patch; then ICS-CERT would report (advertise?) the patch availability and thus make the specific exploitable bug public, to the detriment of nearly all, because the majority can't patch in a timely manner (see other posts above for why). But... something more serious, such as a protocol error that effectively throws all security away, would be treated as a "systemic design feature" (which is what my now very old joke about "Bugs are CREeping feATURES" was meant to highlight, amongst other design failings).

So you actually get a worse result with the DHS's ICS-CERT than you would without it...

There are a few (possibly cynical) viewpoints you can take on this:

The first is that this is because the problem would require ICS-CERT to actually do some work, as opposed to running around "networking with industry" to get future high-paying job options etc.

The second is that if they actually did some work to resolve the issues, then they could not run around on the Hill playing "Chicken Little" in front of the purse-string holders so they could get a bigger slice of the pie.

Thirdly, they are involved in the latest sexy craze of "Cyber-Weapons," and in fact their "chosen few" are tasked with developing them for another Stuxnet...

And quite a few other even less flattering views of ICS-CERT and Marty Edwards...

Lest people think I'm being a little harsh on Marty and his DHS ICS-CERT: I'm not the only person with these viewpoints. A quick Google will pull up quite a few, including:

http://threatpost.com/en_us/blogs/...

wiredog • April 18, 2012 6:29 AM

In some cases, rather than issuing a patch that plugs the hole, the software maker simply adds advice to user manuals showing how to work around the threat.
Nothing wrong with that. It's been common practice for decades, because a known bug with a known workaround is often safer than the unknown bugs introduced when you patch the system. As Bruce Sterling said:
Some software is bad and buggy. Some is "robust," even "bulletproof." The best software is that which has been tested by thousands of users under thousands of different conditions, over years. It is then known as "stable." This does not mean that the software is now flawless, free of bugs. It generally means that there are plenty of bugs in it, but the bugs are well-identified and fairly well understood.

Have To Be Anonymous • April 18, 2012 9:02 AM

Will try to keep this vague to protect myself, but I know this all too well.
We have a software vendor which provides us with software for mission-critical resource, asset, and mission planning. This software is used in the military at an operational level but has severe security issues (CWE-732, CWE-321, CWE-602). Their excuse? That it was designed in an environment where even the clients are considered completely trusted, despite the fact that it has its own internal access-control mechanism to restrict access, and that there have been plenty of real-world situations showing this sort of thinking just doesn't work.

This is an idiotic notion and a fallacy, but in the meantime I'm stuck with this expensive problem and have to design my entire infrastructure differently to try to reduce the attack surface of this stupidly designed solution. The difference is, I'm not a powerless plebeian; I actually have decision-making power, and damn it, I plan to use it. The only way to make vendors realize that this garbage is not acceptable, is not tolerable, and that security needs to be part of the design itself and not some afterthought or extra feature, is if we all start demanding this from our vendors and providers, or start moving to those who do provide it, no matter how painful. Also, I need more experts shouting down the castle gates about these problems; I need to bury the necessary people with easily accessible evidence and expert opinion in addition to the typical risk assessments.

Meanwhile I'm stuck with the detailed knowledge that parts of our nation's defense can be taken down because every man and his dog with the configured client software has full database read/write access. What a joke...

P • April 18, 2012 9:51 AM

@HTBA

> Only way to make vendors realize that this garbage is not acceptable, is not tolerable and that security needs to be part of the design itself and not some after-thought or extra feature is if we all start demanding this from our vendors

Software purchases should require that the product conforms to security standards, and demand a refund if it doesn't.

Complain about specific faults you can identify and point out what should have been done instead.

I spent some time in 2010 working towards getting some future version of the scheduling tool Control-M to a state where it would not write output files into world-writable directories. (Where, if hostile users linked the output filename to something else, that something would get overwritten.) This involved several mails and phone conferences, and providing model code after they said it could not be done.

paul • April 18, 2012 9:53 AM

The "data diode" scheme, whether in software or hardware, may be harder to implement than you think. Unless you're willing to live with corrupted monitoring data (and never being able to reconfigure your monitoring needs) you will need some kind of flow-control, packet ack, retransmission request and so forth. And once you have that you have a potential path in.

Remember the researchers who managed to hack a car's internal network by giving the music player a malformed MP3 file.

David • April 18, 2012 2:00 PM

"[H]ave a dedicated monitor system that is hooked to the ICS via ethernet with the transmit lines from monitor to ICS cut. Then use UDP + something like syslog to send the data."

As stated, won't this fail to send, because the ICS can never get an ARP reply from the monitor?

Jon • April 18, 2012 9:51 PM

Sanitize the input. I build industrial controls, and they accept certain commands and silently ignore everything else.

J.
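Jon's accept-an-allowlist, silently-drop-the-rest approach can be sketched in a few lines. The command set here is hypothetical; a real controller would define its own protocol and likely validate arguments as well:

```python
from typing import Optional

# Hypothetical command set for an industrial controller: anything not
# in the allowlist is silently dropped, exactly as described above.
ALLOWED_COMMANDS = {"START", "STOP", "STATUS"}

def handle(raw: bytes) -> Optional[str]:
    """Accept only allow-listed commands; silently ignore everything else."""
    cmd = raw.decode("ascii", errors="replace").strip().upper()
    if cmd not in ALLOWED_COMMANDS:
        return None  # no error message, no hint to an attacker
    return "ACK " + cmd
```

Returning nothing at all for unknown input (rather than an error) also denies an attacker feedback for probing the command space.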

Clive Robinson • April 19, 2012 5:04 AM

@ Paul,

The "data diode" scheme, whether in software or hardware, may be harder to implement than you think.

A software data diode has complications a hardware data diode does not normally have, in that the software is usually (due to the poor design of commodity OSs) mutable remotely, whereas "cut TX lines" tend to require an on-site visit to change.

With regards,

Unless you're willing to live with corrupted monitoring data (and never being able to reconfigure your monitoring needs) you will need some kind of flow-control, packet ack, retransmission request and so forth.

Not necessarily true. Think back to the original concept of "Data Warehousing" and what it was designed to achieve, and how.

Look at it this way,

1, ICS
2, ICS control/monitor (PC)
3, Serial/ethernet cut RX lines at 2
4, Data store PC with shared Data Drive.
5, Firewall or software data diode
6, Access control PC with restricted software.

You as an attacker have to first get SU rights on the Access Control PC (6) to get around the software that limits what you can do. Whilst not difficult with a poorly set-up system, there are plenty of ways you can make this quite secure (have a look at some open source RAS systems). One little trick is to have the limiting software do application-level encryption on the outbound path to the firewall/data diode, using an ephemeral key held in rapidly mutating data shadows. Thus not only do you have to get root, you also have to get the encryption system and key (not impossible, but way beyond that which has so far been exhibited by APT types).

Having got your root-level access etc. (on 6), you then have to get root access etc. on the firewall or software data diode (5) to disable the application-level command sanitisation, which again uses ephemeral keys, both for data from 6 and to 4. Provided appropriate logging is implemented, tripwires should have set off alarms long before such access became possible.

But the firewall/data diode then has only "read only" access to the warehouse data stored on the shared drive (of 4). And again the comms and requests are locked down.

The warehouse data is actually generated by the ICS control/monitor PC (2), and it "dumps" all data to the data warehouse (4) via the hardware diode (3).

This warehouse data can be protected in a number of ways. The first is that some OSs have fairly solid "append only" file systems which can be put in place (on 4). Secondly, blocks of data can be "crypto signed" (by 2) and "tracing data" can be added (this is where you add a checksum that has been augmented by an IV from a generator using, say, AES in CTR mode; the legitimate "remote" user knows what the unencrypted counter value and AES key are, unlike the attacker, who therefore cannot generate valid checksums).
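The keyed-checksum idea above can be illustrated with standard-library primitives. This sketch substitutes an HMAC keyed over a running counter for the AES-CTR construction Clive describes (the effect is the same: without the shared key, an attacker on the warehouse host can read blocks but cannot forge, alter, or reorder them); the key and message contents are placeholders.

```python
import hashlib
import hmac
import struct

# Placeholder secret, known only to the signer (2) and the remote reader.
SHARED_KEY = b"hypothetical shared secret for 2 and the remote user"

def sign_block(counter: int, block: bytes) -> bytes:
    """Tag a data block with a keyed checksum bound to a running counter,
    so blocks cannot be forged or replayed out of order without the key."""
    msg = struct.pack(">Q", counter) + block
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

def verify_block(counter: int, block: bytes, tag: bytes) -> bool:
    """Constant-time check of a block's tag at the expected counter value."""
    return hmac.compare_digest(sign_block(counter, block), tag)
```

Binding the counter into the tag is what gives the "tracing data" property: a valid block replayed at the wrong position fails verification just like a tampered one.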

There are other precautions you can add to improve the likelihood of detecting an attack and dealing with it. Whilst it is not 100% (no passive defensive system ever is), what it does do is buy the legitimate system users time to get staff on site etc. to either thwart the attack or disconnect the system.

The problem is that although these systems can be (and in very rare cases are) built, they generally are regarded as way too expensive just for "remote system monitoring."

Whilst "remote system control" is always going to be risky, there are ways the risk can be reduced. One way is by "scripted actions"; you see this with emergency systems such as "Red Shutdown." The simple opening of a switch triggers a hardwired response in the ICS. Because it is only a single bit of data (switch open or closed), it can only communicate the desired effect (of doing an emergency shutdown). Thus in effect the only attack that can be performed "fails safe" and is of (expensive) nuisance value, as are many Denial of Service attacks.

Most control system engineers know how to define action scripts that are either "fail safe" or "kept within limits" or "made safe" by other autonomous control measures. The hard work is developing the "one bit / script" communications and ensuring there are no exploitable "control loop" or "race" conditions or cascade effects in these normalised activity scripts (google "Generator Aurora attack," which is similar to the attack Stuxnet did on the separator centrifuges).

The very hard work is stopping what are effectively insider attacks, where a person with direct or indirect access to the ICS (say, via removable media) can hide software that opens a "covert channel," such that the "single bit" communications channel, which has time or phase components as a natural consequence of its ordinary function, is used as a side channel to convey information "Morse code like," which is decoded by the added software to carry out non-scripted functions.

The two usual solutions for this are to "limit access" to the ICS so that the decoder software cannot be installed, and to "limit the bandwidth" such that the one-bit signal line has a very, very low bandwidth. Another technique is to "clock the inputs and clock the outputs, with hard fail on error" at an intermediate node in the communications path, such that the error recovery system cannot be used as a covert channel. And another technique is "re-modulation," where you add "jitter" to data edges such that pulse width and phase cannot be used as covert channels.
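The "clock the inputs and clock the outputs" idea amounts to quantising when data may leave the system, so an insider's software cannot encode information in fine-grained message timing. A toy sketch of such a re-clocking stage (the one-second default period is an arbitrary illustration, not a recommendation):

```python
import time

def reclock(events, period: float = 1.0):
    """Re-emit events only on fixed clock edges, destroying the
    fine-grained timing a covert timing channel would need: no matter
    when an event arrives within a period, it leaves on the next edge."""
    for e in events:
        now = time.monotonic()
        next_edge = (int(now / period) + 1) * period
        time.sleep(next_edge - now)  # hold the event until the edge
        yield e
```

The data content passes through unchanged; only its departure times are forced onto the clock grid, which is exactly why the technique closes timing channels without touching the protocol.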

J.C. Denton • April 25, 2012 5:56 PM

Argh... I'm reading so much bullshit here, like "Never connect the system to a network [...]" and "Don't plug in a USB stick which may contain MALWARE (not necessarily a virus, goddamn...) [...]" and so forth. C'mon guys, more is required, from secure-by-design OSes (*BSD, 9P2000, *IX, etc.) to proper processes (ISO 27001, ISO 20000, PCI DSS, et al.). Nevertheless, the article doesn't primarily refer to that but to, quote: "[...]bugs that never get fixed--even when they're acknowledged by the company[...]" and "[...]adds advice to user manuals showing how to work around the threat." Guess we'd rather start a discussion about company/organization policies on what software to use, buy, and so on (not so much focus on technical aspects)...


