Measuring Vulnerability Rediscovery

New paper: “Taking Stock: Estimating Vulnerability Rediscovery,” by Trey Herr, Bruce Schneier, and Christopher Morris:

Abstract: How often do multiple, independent parties discover the same vulnerability? There are ample models of vulnerability discovery, but little academic work on this issue of rediscovery. The immature state of this research and the subsequent debate is a problem for the policy community, where the government’s decision to disclose a given vulnerability hinges in part on that vulnerability’s likelihood of being discovered and used maliciously by another party. Research into the behavior of malicious software markets and the efficacy of bug bounty programs would similarly benefit from an accurate baseline estimate of how often vulnerabilities are discovered by multiple independent parties.

This paper presents a new dataset of more than 4,300 vulnerabilities, and estimates vulnerability rediscovery across different vendors and software types. It concludes that rediscovery happens more than twice as often as the 1-9% range previously reported. For our dataset, 15% to 20% of vulnerabilities are discovered independently at least twice within a year. For just Android, 13.9% of vulnerabilities are rediscovered within 60 days, rising to 20% within 90 days and above 21% within 120 days. For the Chrome browser, we found 12.57% rediscovery within 60 days, and the aggregate rate for our entire dataset generally rises over the eight-year span, topping out at 19.6% in 2016. We believe that the actual rate is even higher for certain types of software.

When combined with an estimate of the total count of vulnerabilities in use by the NSA, these rates suggest that rediscovery of vulnerabilities kept secret by the U.S. government may be the source of up to one-third of all zero-day vulnerabilities detected in use each year. These results indicate that the information security community needs to map the impact of rediscovery on the efficacy of bug bounty programs and policymakers should more rigorously evaluate the costs of non-disclosure of software vulnerabilities.
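
The arithmetic behind that one-third estimate is easy to sketch. Here is a back-of-the-envelope version in Python, where the stockpile size and annual zero-day count are purely hypothetical placeholders (not figures from the paper); only the 20% rediscovery rate comes from the abstract above:

    # Back-of-the-envelope: combine a rediscovery rate with a stockpile
    # estimate to get a share of the zero-days detected each year.
    rediscovery_rate = 0.20     # upper end of the paper's aggregate range
    stockpile_size = 50         # HYPOTHETICAL count of vulnerabilities kept secret
    zero_days_per_year = 30     # HYPOTHETICAL zero-days detected in use per year

    expected_rediscoveries = rediscovery_rate * stockpile_size
    share = expected_rediscoveries / zero_days_per_year
    print(f"~{expected_rediscoveries:.0f} rediscoveries/year, "
          f"~{share:.0%} of detected zero-days")

With these made-up inputs the share works out to one-third; the real figure depends on the actual stockpile and detection counts, which is why the abstract hedges with "up to."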

We wrote a blog post on the paper, and another when we issued a revised version.

Comments on the original paper by Dave Aitel. News articles.

Posted on July 31, 2017 at 5:59 AM

Comments

reggie July 31, 2017 7:22 AM

I wonder if there are deeper and more generic mechanisms at work here. When I founded a startup company, I was funded by a venture capital firm (a VC, as they are known). My contact at the VC told me that they routinely received approaches from people who had all had the same idea for a new company at the same time. In fact, they apparently used the number of related proposals in a cluster as a measure of how good an idea was, and only chose the best pitch once the number of ‘same ideas’ exceeded a threshold.

A cynic (what, in the comments on this blog, surely not?) might attribute this to industrial espionage, poor security, etc., but there are plenty of arguments against that: the attack surface you would need to monitor is huge, ideas tend to occur only very infrequently, many ideas are developed by individuals working on their own…

I wonder if there’s something about technology that means that lots of people receive subtle clues about something that then builds until it eventually emerges as an ‘idea’. Does ‘being informed’ or ‘reading blogs’ provide some of these nudges that sometimes turn into ideas, or rediscoveries?

(My assumption is that three letter agencies are already well aware of this, have studied it, have characterised it, and that cluster thresholds are part of their standard operating practice…)

rigged July 31, 2017 8:16 AM

@reggie: the mechanism as applied to business ideas is interesting, but I don’t really see how it would extend to vulnerabilities. Their authors usually aren’t aware that they’re introducing them (back doors excepted), so the “subtle clues” would be far too subtle to have any practical importance…

Or maybe I misunderstood your comment?

Evan July 31, 2017 9:43 AM

@reggie, rigged

Counter-hypothesis: discovery and rediscovery are, by definition, random and independent events. Writing A₁ and A₂ for two independent parties each discovering a given vulnerability, we can use the rediscovery rate to determine the discovery rate, and from it estimate the total number of vulnerabilities:

P(rediscovery) = P(A₁ ∩ A₂) = P(A) × P(A) = P(A)²

P(discovery) = P(A) = √P(rediscovery)

total number of vulnerabilities = (discovered vulnerabilities) / P(discovery)

I wouldn’t rely on this in a general, heterogeneous case due to statistical effects of (inter alia) aggregating dissimilar codebases, but it makes sense as a rule of thumb for a sufficiently well-defined sample, probably controlling for language, bug bounties, number of active users and researchers, type of bug/vulnerability, etc.
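
In code, Evan’s rule of thumb is a few lines. A minimal sketch, using the paper’s 15% lower-bound rediscovery rate; the count of known vulnerabilities is a hypothetical placeholder:

    import math

    rediscovery_rate = 0.15                     # paper's lower-bound aggregate rate
    p_discovery = math.sqrt(rediscovery_rate)   # P(A) under the independence model
    known_vulns = 1000                          # HYPOTHETICAL number of discovered vulns
    total_estimate = known_vulns / p_discovery  # implied total, discovered + undiscovered

    print(f"P(discovery) = {p_discovery:.2f}, estimated total = {total_estimate:.0f}")

A 15% rediscovery rate implies P(discovery) ≈ 0.39, i.e. roughly 2,600 total vulnerabilities for every 1,000 found, under the (strong) independence assumption.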

Daniel July 31, 2017 12:50 PM

Re: clustering

The more likely explanation is herd mentality combined with Shakespeare and a million monkeys. People are faddish, and if enough people look at the same set of concepts, then purely by random chance there will be multiple instances of the same insight. This is the law of large numbers applied to human creative behavior.

“In fact, they apparently used the number of related proposals in a cluster as a measure of how good an idea was, and only chose the best pitch once the number of ‘same ideas’ exceeded a threshold.”

This strikes me as a conceptually suspect approach. It works if and only if the clustering of ideas among venture capital proposals accurately reflects the interest/demand of the population at large. In other words, does the sample generalize to the population? If it does, then what the VC is doing is smart. But the risk is that it might not. So one would expect a VC firm to have some second test to eliminate any such false positives.
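
The “random chance” part of that claim is easy to check by simulation. A minimal sketch, assuming a purely hypothetical 100 researchers independently drawing from 500 equally attractive ideas:

    import random
    from collections import Counter

    random.seed(1)                      # reproducible illustration
    n_researchers, n_ideas = 100, 500   # HYPOTHETICAL, purely illustrative
    picks = [random.randrange(n_ideas) for _ in range(n_researchers)]
    clusters = sum(1 for c in Counter(picks).values() if c >= 2)
    print(f"{clusters} ideas independently hit by two or more researchers")

Even with five times as many ideas as researchers, chance alone typically produces several “same idea” clusters (roughly nine in expectation here, by birthday-problem arithmetic), so clustering by itself need not imply espionage or shared signals.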

de La Boetie July 31, 2017 1:00 PM

The data necessarily relates to KNOWN rediscovery rates, correct? So the new, higher estimates are still underestimates, because, for example, a foreign intelligence agency may well have discovered a vulnerability but not yet revealed or deployed the exploit, so it would not be counted by the research.

I don’t think the VEP (Vulnerabilities Equities Process) will ever be equitable while the structural issue of the NSA being both attack and defence persists; you’ve already highlighted that problem.

gordo August 6, 2017 7:56 PM

Somewhat OT:

Security of Things
Dan Geer, 7 May 14, Cambridge

The Gordian Knot of such tradeoffs — our tradeoffs — is this: As society becomes more technologic, even the mundane comes to depend on distant digital perfection. Our food pipeline contains less than a week’s supply, just to take one example, and that pipeline depends on digital services for everything from GPS driven tractors to drone-surveilled irrigators to robot vegetable sorting machinery to coast-to-coast logistics to RFID-tagged livestock. Is all the technologic dependency, and the data that fuels it, making us more resilient or more fragile? Does it matter that expansion of dependence is where legacy comes from? Is it essential to retain manual means for doing things so that we don’t have to reinvent them under time pressure?

Mitja Kolsek suggests that the way to think about the execution space on the web today is that the client has become the server’s server.[MK] You are expected to intake what amount to Remote Procedure Calls (RPCs) from everywhere and everyone. You are supposed to believe that trust is transitive but risk is not. That is what Javascript does. That is what Flash does. That is what HTML5 does. That is what every embedded Browser Help Object (BHO) does. How do you think that embedded devices work? As someone who refuses Javascript, I can tell you that the World Wide Web is rapidly shrinking because I choose to not be the server’s server, because I choose to not accept remote procedure calls. (para. 17-18)

http://geer.tinho.net/geer.secot.7v14.txt
