Research on Patch Deployment
New research indicates that it’s very hard to completely patch systems against vulnerabilities:
It turns out that it may not be that easy to patch vulnerabilities completely. Using WINE, we analyzed the patch deployment process for 1,593 vulnerabilities from 10 Windows client applications, on 8.4 million hosts worldwide [Oakland 2015]. We found that a host may be affected by multiple instances of the same vulnerability, because the vulnerable program is installed in several directories or because the vulnerability is in a shared library distributed with several applications. For example, CVE-2011-0611 affected both the Adobe Flash Player and Adobe Reader (Reader includes a library for playing .swf objects embedded in a PDF). Because updates for the two products were distributed through different channels, the vulnerable host population decreased at different rates, as illustrated in the figure on the left. For Reader, patching started 9 days after disclosure (after the patch for CVE-2011-0611 was bundled with another patch in a new Reader release), and the update reached 50% of the vulnerable hosts after 152 days.
For Flash, patching started earlier, 3 days after disclosure, but the patching rate soon dropped (a second patching wave, suggested by the inflection in the curve after 43 days, eventually subsided as well). Perhaps for this reason, CVE-2011-0611 was frequently targeted by exploits in 2011, using both the .swf and PDF vectors.
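To make the "multiple instances" finding concrete, here is a minimal sketch (not from the paper) that enumerates copies of a shared library bundled by different applications on one host. The directory and file names are hypothetical examples, but the pattern is the one the study describes: each application ships its own copy, and each copy is patched on that vendor's own schedule.

```python
# Minimal sketch (not from the paper): one CVE can show up as several instances
# on a single host because the same vulnerable library is bundled by several
# applications, each installed in its own directory and updated through its own
# channel. The directory and file names below are hypothetical examples.
import os

# Hypothetical install locations that each carry their own copy of the library.
CANDIDATE_DIRS = [
    r"C:\Windows\System32\Macromed\Flash",          # standalone Flash Player
    r"C:\Program Files\Adobe\Reader 10.0\Reader",   # copy bundled with Reader
]
LIBRARY_NAMES = {"flash10.ocx", "authplay.dll"}     # example vulnerable file names

def find_instances():
    """Return every on-disk copy of the library; each copy is a separate instance."""
    instances = []
    for directory in CANDIDATE_DIRS:
        if not os.path.isdir(directory):
            continue
        for name in os.listdir(directory):
            if name.lower() in LIBRARY_NAMES:
                instances.append(os.path.join(directory, name))
    return instances

if __name__ == "__main__":
    for path in find_instances():
        # Patching one application removes only its copy; the others remain
        # vulnerable until their own update channels deliver a fix.
        print("vulnerable library instance:", path)
```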
Vulnerability Researcher • May 20, 2015 4:58 PM
This is a very well-known problem in application security, which, btw, is often handled separately from the group in charge of patching at many organizations. :/ That separation is always a bad combination. But usually an organization's IT security department will have some visibility and will at the very least be in charge of operating security scanners. (For whatever reason, from corporate to government, these organizations seem to naturally evolve into the very same structures.)
VeraCode actually does a pretty good job of ferreting out problematic third-party libraries that have vulnerabilities. But they have a high false positive rate in general, like IBM AppScan Source & HP Fortify.
That is one of the most insidious problems with patching: third-party libraries included in custom vendor code. Third-party libraries are a staple of the industry, and that will not change. Taking code from online sources is a related problem, and that practice is also unlikely to change. Security vulnerabilities in that kind of code can be even more insidious, because such code is very often pasted in piecemeal and is therefore more difficult to detect.
Many of these problems are within the grasp of network scanners. Some of them perform analysis simply through authenticated registry calls. Others perform full binary sweeps, checking file hashes against known vulnerable binaries. The latter is obviously much stronger at detecting known vulnerabilities… but it is far more intrusive. And being more intrusive means something: full AV scans are already extremely demanding on systems, and this sort of functionality operates in much the same way.
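As a rough illustration of that trade-off, here is a minimal sketch of the hash-sweep approach. It is not any vendor's actual implementation, and the hash database and scan root are hypothetical placeholders; it mainly shows why the approach is intrusive: like a full AV scan, it has to read every file under the scanned path.

```python
# Rough sketch of a "binary sweep": walk the filesystem and compare file hashes
# against a list of hashes for known-vulnerable binaries. The hash database and
# scan root below are hypothetical placeholders, not real vulnerability data.
import hashlib
import os

# Hypothetical database mapping SHA-256 hashes of known-vulnerable binaries to CVEs.
KNOWN_VULNERABLE_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000": "example-CVE-XXXX-YYYY",
}

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large binaries do not have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def sweep(root):
    """Yield (path, CVE id) for every file whose hash matches the database."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                digest = sha256_of(path)
            except OSError:
                continue  # unreadable file; a real scanner would log and move on
            if digest in KNOWN_VULNERABLE_HASHES:
                yield path, KNOWN_VULNERABLE_HASHES[digest]

if __name__ == "__main__":
    for path, cve in sweep(r"C:\Program Files"):
        print(f"{path}: matches {cve}")
```

The registry-based approach, by contrast, only queries installed-version metadata and never touches file contents, which is why it is cheaper on the endpoint but weaker at detection.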
Patching, I do believe, is the strategy one can reason out to be the best. The problem is that heuristic technology for detecting and defeating already-known vulnerabilities is sketchy and highly resource intensive. That latter strategy can, however, be invaluable for properly detecting who is attempting which attacks, and so promises to greatly expand the capacity for maintaining accuracy against well-known attacks.
Why are those sorts of systems sketchy? Because there are many classes of critical vulnerabilities that can have an extensive array of potential exploit code, and much of that exploit code may look little or no different from legitimate activity. This is especially true, for instance, with ‘business logic’ vulnerabilities and many forms of web application vulnerabilities.
Detection is especially troubling when it involves lateral network sensors, because most security strategies are designed for perimeter defense, not lateral, ‘behind the DMZ’ defense.
This latter problem grows worse as companies increasingly embrace ‘IoT’ technology. Companies are already deeply invested in widespread wireless access, and there are enough problems with that access that their users and customers are effectively bypassing ordinary DMZ perimeter controls, e.g., handset applications that communicate directly with post-DMZ databases, or internal applications originally designed for a wired-network security posture but now effectively open to external inspection simply because internal access goes over the air rather than the wire.