The Vulnerabilities Market and the Future of Security

Recently, there have been several articles about the new market in zero-day exploits: new and unpatched computer vulnerabilities. It's no longer just software companies, which sometimes pay bounties to researchers who alert them to security vulnerabilities so they can fix them. And it's not only criminal organizations, which pay for vulnerabilities they can exploit. Now there are governments, and companies that sell to governments, buying vulnerabilities with the intent of keeping them secret so they can exploit them.

This market is larger than most people realize, and it's becoming even larger. Forbes recently published a price list for zero-day exploits, along with the story of a hacker who received $250K from "a U.S. government contractor." (At first I didn't believe the story or the price list, but I have since been convinced that both are true.) Forbes also published a profile of a company called Vupen, whose business is selling zero-day exploits. Other companies doing this range from startups like Netragard and Endgame to large defense contractors like Northrop Grumman, General Dynamics, and Raytheon.

This is very different from 2007, when researcher Charlie Miller wrote about his attempts to sell zero-day exploits, and from a 2010 survey that implied there wasn't much money in selling zero-days. The market has matured substantially in the past few years.

This new market perturbs the economics of finding security vulnerabilities. And it does so to the detriment of us all.

I've long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full disclosure -- announcing the vulnerability publicly and damn the consequences -- to something called "responsible disclosure": giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that -- at least in most cases -- is patched. And a patched vulnerability makes us all more secure.

This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it's even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they're working on -- and then secretly sell them to some government agency.

No commercial vendors perform the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage.

Even more importantly, the new market for security vulnerabilities results in a variety of government agencies around the world that have a strong interest in those vulnerabilities remaining unpatched. These range from law-enforcement agencies (like the FBI and the German police) that are trying to build targeted Internet surveillance tools, to intelligence agencies (like the NSA) that are trying to build mass Internet surveillance tools, to military organizations that are trying to build cyber-weapons.

All of these agencies have long had to wrestle with the choice of whether to use newly discovered vulnerabilities to protect or to attack. Inside the NSA, this was traditionally known as the "equities issue," and the debate was between the COMSEC (communications security) side of the NSA and the SIGINT (signals intelligence) side. If they found a flaw in a popular cryptographic algorithm, they could either use that knowledge to fix the algorithm and make everyone's communications more secure, or they could exploit the flaw to eavesdrop on others -- while at the same time allowing even the people they wanted to protect to remain vulnerable. This debate raged through the decades inside the NSA. From what I've heard, by 2000, the COMSEC side had largely won, but things flipped completely around after 9/11.

The whole point of disclosing security vulnerabilities is to put pressure on vendors to release more secure software. It's not just that they patch the vulnerabilities that are made public -- the fear of bad press makes them implement more secure software development processes. It's another economic process; the cost of designing software securely in the first place is less than the cost of the bad press after a vulnerability is announced plus the cost of writing and deploying the patch. I'd be the first to admit that this isn't perfect -- there's a lot of very poorly written software still out there -- but it's the best incentive we have.
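The incentive argument above can be reduced to a back-of-envelope comparison. A minimal sketch follows; the dollar figures and the helper function are hypothetical illustrations, not numbers from the essay or from any real data.

```python
# Back-of-envelope model of the vendor incentive described above.
# All dollar figures are invented, chosen only to illustrate the comparison.

def vendor_invests_in_security(cost_secure_design, cost_bad_press, cost_patch):
    """A vendor rationally invests in secure development when doing so is
    cheaper than the expected fallout of a publicly disclosed vulnerability."""
    return cost_secure_design < cost_bad_press + cost_patch

# With public disclosure, the expected fallout is large, so investment pays.
print(vendor_invests_in_security(500_000, 400_000, 200_000))  # True

# If vulnerabilities are sold secretly instead of disclosed, the expected
# bad-press cost collapses, and the incentive to invest disappears.
print(vendor_invests_in_security(500_000, 0, 200_000))  # False
```

The point of the sketch is only that the secret market zeroes out the bad-press term, flipping the inequality that made secure development worthwhile.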

We've always expected the NSA, and agencies like it, to keep the vulnerabilities they discover secret. We have been counting on the public community to find and publicize vulnerabilities, forcing vendors to fix them. With the rise of these new pressures to keep zero-day exploits secret, and to sell them for exploitation, there will be even less incentive for software vendors to ensure the security of their products.

As the incentive for hackers to keep their vulnerabilities secret grows, the incentive for vendors to build secure software shrinks. As a recent EFF essay put it, this is "security for the 1%." And it makes the rest of us less safe.

This essay previously appeared on Forbes.com.

Edited to add (6/6): Brazilian Portuguese translation here.

EDITED TO ADD (6/12): This presentation makes similar points as my essay.

Posted on June 1, 2012 at 6:48 AM • 40 Comments

Comments

Jeff Martin • June 1, 2012 7:20 AM

So much for responsible disclosure. Now it is even more irresponsible to keep vulnerability information a secret for any amount of time, since other adversaries might have secretly paid for the information months or years previously in order to keep it a secret and use it.

Lukas • June 1, 2012 8:03 AM

The really interesting question would be: Why didn't this happen earlier? The incentives must have been there for some years now, right?

Or maybe it DID happen earlier and we're only now becoming aware of the problem.

El G@llego • June 1, 2012 8:12 AM

Bruce, do you think using free software could remediate this problem, since it is more difficult to keep vulnerabilities hidden in this kind of software?

Ad Lagendijk • June 1, 2012 8:26 AM

By substantially and publicly rewarding people who find security leaks, a company in the security business could make friends and fight enemies.

-B • June 1, 2012 8:33 AM

There has always been pressure to sacrifice Constitutional and human rights for expediency of enforcement (righteous or oppressive). In democratic governments that pressure was largely resisted through the efforts of (at least) a small majority involved in the processes.

After 9/11 there was a sea change in attitudes and those former watchdogs either panicked or caved to the stigma of "supporting the enemy".

A perfect example is "The Patriot Act". It consisted of all the previous failed efforts to compromise Rights but it sailed through passage based on a name change and the desire to "do something" on the heels of 9/11 (regardless of whether it was the right thing to do).

-B • June 1, 2012 8:37 AM

Correction:

"It consisted of all the previous failed efforts to compromise Rights..."

should have read:

"It consisted of many previous failed efforts to compromise Rights..."

Paeniteo • June 1, 2012 9:00 AM

@El Gallego: "do you think using free software could remediate this problem"

I'm not Bruce, but IMHO there is no fundamental difference between open and closed source software here.

Say, you find a flaw in Debian's OpenSSL random number generator as compared to Windows' RNG...
In both cases you have to decide whether to publish the vulnerability or to sell it secretly.

JoachimS • June 1, 2012 9:03 AM

Aaloha!

The really disturbing part is that it is governments doing this. That means that our own governments are using tax money, our money, to make us all more vulnerable. A sad development indeed.

Rookie • June 1, 2012 9:04 AM

I believe @Lukas hits the nail on the head.

Buying and/or not disclosing security vulnerabilities is so obvious of a strategy for nation-states and certain other entities that I can't believe it wasn't happening before. Certainly both the buyers and the sellers would be incentivized to keep the transactions below the radar.

NobodySpecial • June 1, 2012 10:08 AM

@JoachimS - even worse consider some of the other users.
If the NSA found a vulnerability in, say, AES and kept it secret, and the CIA, SAC and the White House kept using it, there might be some very pointed questions about whose side the NSA was on.

Dinah • June 1, 2012 10:19 AM

How much could you use 0-day exploits for long term gains? If some unsavory character is selling the exploit to me, I can reasonably suspect he's selling it to 1, 10, or 1000 other people. I'd better use it quickly and not expect it to be reliable for long.

anonymous • June 1, 2012 10:22 AM

The more I have seen of companies attacking people who behave responsibly, the more I come to believe that full disclosure via anonymous means is the only socially responsible mechanism for reporting flaws.

Moz • June 1, 2012 10:27 AM

@NobodySpecial

Consider that right now, IIRC, the NSA does not allow the use of AES for top secret communications. Take that as you wish.

Chris S • June 1, 2012 11:48 AM

@Ad Lagendijk: "Through rewarding people, who find security leaks"

...and you have instantly created a revenue stream that your internal developers can leverage, that allows them to benefit from building a vulnerability and then receiving an under-the-table payment from someone outside who "discloses" this vulnerability for the reward.

Davi Ottenheimer • June 1, 2012 12:58 PM

Excellent essay. I would add that vendors are facing increasingly complex ethical/political issues of how to manage the secrecy and loyalty of their staff. The likelihood of leaks/moles (lured by dreams of riches or service to an agency) is being dramatically increased at the same time that the information on what outsiders know is dramatically reduced.

In other words, the usual controls that prevent insiders from willfully leaking knowledge of flaws are being tested by the lure of money and pride (e.g. service to an agency). Vendors (or countries that depend on vendors, for that matter) have no choice but to consider new controls. They may put an overemphasis on silencing dissent rather than trying to research and understand how to systemically fix flaws.

It's a classic problem: some may say they are focusing on fixing the leak in the roof instead of mopping up the floor, while others will accuse them of shooting the messenger. It is not an easy job to prioritize which flaws get fixed first or to get unanimous support for a fix schedule, and the increasing outside demand for secret high-value flaws makes that problem even harder.

Daniel • June 1, 2012 1:01 PM

"it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they're working on -- and then secretly sell them to some government agency."

Bingo! The software programmer becomes the middle-man and makes profits two ways. He makes a profit by selling his program to people foolish enough to trust him, and he makes a profit selling a hack to a person who is willing to exploit his own customers' trust.

@rookie.

Honestly, I think Stuxnet changed the game in a major way. I think it opened people's eyes to the real political -- even life-and-death -- consequences that the ability to exploit zero-days can have. The idea is not new, nor is the behavior, but I think the scale of the behavior has changed as the scale of the consequences has changed.

Arthur Doohan • June 1, 2012 1:46 PM

Seems to me that there is huge scope for gaming this market...It requires massive amounts of that elusive 'Schneier-esque' quality 'TRUST' to operate...

What is to stop the 'buyer' saying 'Nah, we already have that one'...or the 'seller' selling to more than one 'buyer'...

From the outside, agencies look better off mining their own zero-day exploits, which is what they appear to have done for Stuxnet...

Thierry Zoller • June 1, 2012 1:57 PM

Your essay perfectly complements a past presentation of mine at OWASP, entitled "The Rise of the Vulnerability Markets - History, Impacts and Mitigations". Granted, your essay is more easily digestible than my slides; my conclusions are similar but focused more on what this means in terms of attacker classifications.

For those interested, there is more here:
http://blog.zoller.lu/2012/05/updated-posts-and-notable-updates.html

and

https://www.owasp.org/images/b/b7/OWASP_BeNeLux_Day_2011_-_T._Zoller_-_Rise_of_the_Vulnerability_Market.pdf

karrde • June 1, 2012 3:57 PM

@Paeniteo: "I'm not Bruce, but IMHO there is no fundamental difference between open and closed source software here. Say, you find a flaw in Debian's OpenSSL random number generator as compared to Windows' RNG... In both cases you have to decide whether to publish the vulnerability or to sell it secretly."

One disadvantage to Open Source is that the attacker can see the source code. (Instead of whatever methods an outsider would use against Closed Source software.)

One way in which Open Source has no advantage or disadvantage over Closed Source is that the malicious finder of zero-day exploits has the same incentives to keep his discovery secret.

Another way in which Open Source is no different from Closed Source is the possibility that an insider will deliberately put a zero-day exploit into the codebase and not be detected by fellow coders.

One advantage to Open Source is that there is (usually) a community of programmers on the project who have been looking for such trouble.

As the example of Debian's OpenSSL vulnerability shows (also an earlier kernel-level bug which allowed escalation from any user to root), Open Source is not immune.

Overall, I don't see much advantage to Open Source, except for the ability to manually audit the code before use.

But even that security can fall to the "Can you trust the compiler?" problem.

http://cm.bell-labs.com/who/ken/trust.html

At • June 1, 2012 4:05 PM

There's always full disclosure. And there can even be an economic motivation for it, in that you want to embarrass your rivals. And there's FOSS, too.

That said, I think we need to work on ways to change the game so the economics of it don't go out of whack.

simon • June 1, 2012 6:14 PM

I disagree with you that vulnerability brokers make or keep people vulnerable. Software vendors create the vulnerabilities, thereby making people vulnerable. Software vendors don't always play nice with researchers. Software vendors make a killing off of their products and expect researchers to deliver their work product for free. Would you work for free? I sure wouldn't. Why shouldn't researchers sell their work to high-paying ethical buyers? It is important to remember that not all exploit buyers are unethical; not all are ethical either. Finally, selling exploits to your government might do more to protect you than allowing the vendor to fix the issues. My parting comment: don't group all exploit brokers in the same bucket... some of them are in fact ethical and doing good things.

Matt • June 1, 2012 11:40 PM

One interesting thing this tells us is that, for a Stuxnet-scale operation, a reasonable estimate of the budget required to purchase zero-day Windows exploits is around 4 × $120,000, or roughly $480,000.

Very affordable for an organization that is even contemplating such things in the first place.

Pat • June 2, 2012 12:51 AM

I am astounded that you are surprised about the rise of the vuln market. Every time someone reveals a website vulnerability, it leads to legal problems for the person who revealed it. The vuln market makes perfect sense.

If the software manufacturer wants to know about the vulnerability they can pay for it. The hacker gets paid and not arrested, neither of which happens if s/he goes the public disclosure route.

Jeff • June 2, 2012 11:50 AM

A counterpoint to consider.

(1) A viable financial market for vulnerabilities creates stronger incentives to search for vulnerabilities than just credibility, thus more vulnerabilities will be discovered. The market dynamic encourages players to keep the vulnerabilities secret, but it is naturally balanced by the need to use the vulnerabilities to realize returns. With use, they will become public and get patched - accomplishing the same goal on a larger scale.

(2) Additionally, their public use by targeted attackers creates a stronger incentive for software vendors to ship more secure software by increasing the impact of an attack, relative to the mild reputation damage of a security bulletin.

Otter • June 2, 2012 9:59 PM

Some may find it ironic that the other day you posted an article titled, "The Psychology of Immoral (and Illegal) Behavior".

chris • June 2, 2012 10:31 PM

Bruce:
With near-earth asteroids, scientists have long been using the rate of discovery to estimate the total number of potentially dangerous objects ("earth-orbit crossers") that remain to be discovered.

How the heck many zero-day exploits can there be out there remaining to be discovered? Can some of the security professionals who regularly comment here, please post estimates? Or estimate the discovery rate? Is it increasing (perhaps because of recent financial incentives), or decreasing? Yes, I know that incentives for e.g. Microsoft are to keep releasing "versions" that produce new vulnerabilities, but don't they learn from earlier mistakes, so as not to repeat them?

simon • June 3, 2012 7:22 PM

Bruce:

Were we really more vulnerable because we didn't disclose the vulnerabilities used by Stuxnet? Would you be happier if those vulnerabilities had been disclosed and Iran's nuclear program had gone on without hindrance? Just curious...

taco • June 3, 2012 11:37 PM

>> Even more importantly, the new market for security vulnerabilities results in a variety of government agencies around the world that have a strong interest in those vulnerabilities remaining unpatched.

I hope this helps illustrate that state governments are the enemies of civilization and not its protectors.

art • June 5, 2012 2:56 PM

@chris

There is an effectively unlimited supply of 0-day vulnerabilities.

greenleaf • June 7, 2012 8:54 PM

I work as a systems administrator for a Fortune 500 company, and we spend an *enormous* amount of money and time dealing with security incidents and threats. The Internet is a hostile environment, and bad actors are constantly seeking to breach our networks. It's a non-stop battle.

simon • June 11, 2012 4:45 PM

Sure, I can comment; I'm an exploit broker. The number of vulnerabilities is growing as technology continues to grow. Some vendors like Microsoft are doing a better job at writing safe code, but they aren't perfect, nor will they ever be.

The number of exploits that we broker per quarter / year hasn't gone down at all. In fact, it's been very steady since 2001.

And remember, unlike what Bruce seems to think (despite the fact that he has no experience as an exploiter or a broker), exploit brokers don't make or keep you vulnerable. Bad software is what makes and keeps you vulnerable.

The fact of the matter is that once an exploit is used it is as good as dead to most people.

simon • June 12, 2012 9:02 AM

I'd rather not disclose that in this forum, but we are one that will sell 0-days to vendors if they approach us. The irony is that no vendor has ever approached us.

SecureThinking • June 19, 2012 5:23 AM

The whole issue of vulnerability broking leaves us with a number of issues:

1. It's in the interests of the broker to get multiple vendors/agencies to bid for their "product" in order for them to get the best price. This means the buyer might not have the most ethical use in mind for the vulnerability.

2. It's in the interests of the seller to perhaps pass the vuln on to multiple brokers. After all, the brokers are likely to keep their "products" close to their chests, and the seller can probably get a bumper payday before any subterfuge is discovered.

3. It's in the developers' interests to introduce vulnerabilities into their code in order to benefit themselves financially (as mentioned, why not get paid 2 or 3 times for 1 piece of work?).

4. The hiding of vulnerabilities for gain introduces another issue - software patching and vulnerability management will get further and further behind the curve making risk management even harder.

So the question is how do we stop this happening and reduce what looks like ever increasing risks?

The only way I can see is for vendors to introduce hugely expensive and time-consuming quality control and code analysis procedures which reduce the likelihood of vulnerabilities existing in code in the 1st place.

But I can't see this happening anytime soon.

simon • June 19, 2012 9:02 AM

My comments are embedded below, this is in response to SecureThinking:

>> The whole issue of vulnerability broking leaves us with a number of issues:

>> 1. It's in the interests of the broker to get multiple vendors/agencies to bid for their "product" in order for them to get the best price. This means the buyer might not have the most ethical use in mind for the vulnerability.

The first thing that people should do is stop grouping all brokers into one bucket. Every broker is different, some are legitimate, some operate on the black market, others are grey. I am a legitimate exploit broker, I operate within the bounds of the law and take great measures to ensure that what I do is both ethical and legal. Other brokers do not go through the same steps. Sure it is in my interest to sell an item more than once whenever possible, but only to my established buyer list. My established buyer list is made up of trusted entities with a legitimate and ethical need.

>> 2. It's in the interests of the seller to perhaps pass the vuln on to multiple brokers. After all the brokers are likely to keep their "products" close to their chests and the seller can probably get a bumper payday before any subterfuge is discovered.

The second thing that people should do is to stop grouping all sellers into one bucket. Every seller is different, some are legitimate, others operate on the black market, others are grey. I deal with a variety of different highly ethical security researchers. Many of them ask me a series of well aimed questions before considering a relationship. Some ask for proof that I will not broker to the black market. I like working with sellers who maintain a strong ethical foundation. Sellers that don't have such a foundation are rejected from my program as I have no interest in supporting unethical people and their behavior. (The seller and the Developer are usually the same)

>> 3. It's in the developers' interests to introduce vulnerabilities into their code in order to benefit themselves financially (as mentioned why not get paid 2 or 3 times for 1 piece of work?).

This is ridiculous speculative FUD at best. Back this up with proof (because I've never seen any and I've been doing this for a long time).

>> 4. The hiding of vulnerabilities for gain introduces another issue - software patching and vulnerability management will get further and further behind the curve making risk management even harder.

This is illogical. The rate at which software patches are applied does not have any relation to the number of unknown vulnerabilities, patch deployment latency, etc. Moreover, exploit developers who sell their exploits to brokers do not create software vulnerabilities. They discover vulnerabilities through an intensive research process. If they didn't discover those vulnerabilities, the vulnerabilities would still exist and would eventually be discovered by someone else. Simply put, the problem isn't exploit developers or exploit brokers; the problem is the vulnerable software. The solution is for software vendors to care about the security of their software. That solution would solve the zero-day problem (and yes, vulnerable software is a problem).

>> So the question is how do we stop this happening and reduce what looks like ever increasing risks?

You can't stop this from happening without finding a way to write perfect, flawless code. With that said, your concerns aren't founded on reality. Did you know that over 89% of all compromises happen because known vulnerabilities (not zero-days) are exploited? Zero-day exploitation makes up a very, very, very small percent of compromises. In fact, most worms these days spread through the exploitation of human stupidity (social engineering) and not through the use of zero-days. Truth be told, eliminating zero-day vulnerabilities would have an exceedingly small impact, possibly not noticeable, on security as a whole.

>> The only way I can see is for vendors to introduce hugely expensive and time-consuming quality control and code analysis procedures which reduce the likelihood of vulnerabilities existing in code in the 1st place.

You are right, but that's ok because the problem really isn't as big as everyone thinks it is.

>> But I can't see this happening anytime soon.

That's the truth sadly...


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.