Oracle and "Responsible Disclosure"

I’ve been writing about “responsible disclosure” for over a decade; here’s an essay from 2007. Basically, it’s a tacit agreement between researchers and software vendors. Researchers agree to withhold their work until software companies fix the vulnerabilities, and software vendors agree not to harass researchers and fix the vulnerabilities quickly.

When that agreement breaks down, things go bad quickly. This story is about a researcher who published an Oracle zero-day because Oracle has a history of harassing researchers and ignoring vulnerabilities.

Software vendors might not like responsible disclosure, but it’s the best solution we have. Making it illegal to publish vulnerabilities without the vendor’s consent means that they won’t get fixed quickly—and everyone will be less secure. It also means less security research.

This will become even more critical with software that affects the world in a direct physical manner, like cars and airplanes. Responsible disclosure makes us safer, but it only works if software vendors take the vulnerabilities seriously and fix them quickly. Without any regulations that enforce that, the threat of disclosure is the only incentive we can impose on software vendors.

Posted on November 14, 2018 at 6:46 AM • 13 Comments


Iggy November 14, 2018 7:23 AM

Astonishingly, in America, where our Constitution protects you from the government silencing you, if a big wallet hires a big lobbyist they can buy a law that makes it illegal to say bad, though accurate and truthful, things to the consuming public, even when such truth telling serves the naive end user who paid money for a safe product that doesn’t betray them. Such truth telling serves the software vendor too, in the final analysis. If a vendor builds a reputation for fixing flaws swiftly, then people like Schneier will crow about it and new customers will show up.

But as we all know, profit makers seek guaranteed revenue streams at every turn. Not spending money is a guaranteed revenue pool.

Douglas L Coulter November 14, 2018 7:48 AM

Ummmmm. Deserved or not, Oracle has a rep in the business for being nasty/bad/evil/extortionate/(any other similar term). They are the poster child for jerk/a**inine/self-important/immoral.

I don’t need to pass judgement; others have taken care of that one. I know no one working with their stuff as a developer – and I know a lot of them – who is happy with the way they operate. My friends and acquaintances are stuck due to Oracle’s lock-in with customers like governments, and their own need for a paycheck. Making Oracle’s stuff actually work is a well-paid challenge, especially when not-so-hip customers are in the mix. Competing with Oracle’s amazingly overpriced and semi-competent support is a walk in the park, I’m told.

From a distance, much of what they do is amusing, like threatening audits if customers don’t add cloud services they don’t use just so Larry can show nicer cloud numbers – years after dissing cloud (he was probably right the first time).
And most of us know what an Oracle audit means – a few million extra bucks for them due to you having – even if you don’t use it – some extended API into their crap you didn’t even realize you were supposed to pay for, since you never used it.

It’s sad. But – I look at it this way. Couldn’t happen to a more deserving outfit. Let them be the poster child and that shining (not light, pile of smelly muck?) on the hill that warns all to not go there.

Any resulting financial woe would drive Bruce’s point home further. The sad thing is that Oracle seems to find enough lawyers to wiggle out of most of the consequences of their behavior – so far.

I guess I’m saying…even though one dislikes bad things happening, at least it’s happening to the most deserving. And one could hope good comes from it. Eventually, and probably after a few more like this.

asdf November 14, 2018 8:02 AM

@Iggy “… in America where our Constitution protects you from the government silencing you..”

Yes, but the Constitution does not protect you from corporations silencing you. Boycotts, blacklisting etc. We actually need legal protection to prevent powerful corporations from manipulating public policy with cartel-like behavior.

But don’t hold your breath. They own the politicians.

Timothy November 14, 2018 10:30 AM

A Help Net Security article says that Zelenyuk was dissatisfied with Oracle not only for taking so long to fix a previously disclosed vulnerability (he stresses that bug bounty programs are often imprecisely communicated and administered), but also because Oracle never ultimately credited him for the discovery. The SecuriTeam Secure Disclosure program, through which Zelenyuk reported, does credit him; I don’t know how all this was translated, however.

Related to the topic of vulnerability disclosure, Citizen Lab’s Christopher Parsons recently released a paper on the Canadian VEP process. Mr. Schneier, along with a host of top security experts, is referenced in the draft paper. Zack Whittaker added a link to Mr. Parsons’ paper in his November 11, 2018 newsletter. (@zackwhittaker’s newsletter tweet)

Humdee November 14, 2018 12:04 PM

While I agree with the general concern @bruce raises, this incident is a poor example of it. What is being neglected both by the researcher and by the news article is that VirtualBox is a free product, so it is not an item that has top priority at Oracle. As the old saying goes, beggars can’t be choosers. Moreover, if you read through the researcher’s screed, he doesn’t even mention Oracle by name; it’s just a generalized rant about the industry.

So it is misleading to turn this into a news story about Oracle.

Erik November 14, 2018 12:52 PM

@Humdee – I would disagree that just because an application is made freely available that there is any significant absolution for security maintenance. Generally these applications are “free” because the vendor’s name and logo are plastered onto it and it’s providing a marketing impression and (hopefully) generating goodwill. Goodwill is a line item on a balance sheet, and it materially affects the value / stock price of an organization.

If an organization has code that they don’t want to properly support, they should either deprecate it or make it FOSS and turn it over to the community. There’s no excuse to be a poor custodian of IT security. The worst offender here is Adobe, with the Flash plugin, but there are plenty of others.

James November 14, 2018 1:46 PM

“Responsible” should go both ways. Researchers are supposed to be responsible with disclosure, and the morons writing (selling) the buggy software should be responsible with patching it in a timely manner. (When I say morons, I refer to the ones playing the we’ll-sue-you card, usually C-level execs or lawyers, not the hard-working developers. Mistakes are normal.) However, you can’t stop progress. Most of the execs are relying on legislation to cover their bad decisions. Guess what, it doesn’t work. Good luck taking legal action against a researcher in Russia, China, or someone that you don’t know or can’t identify. “Legal shit” won’t protect anyone. Look at car/farm equipment manufacturers, not to mention the MPAA/RIAA freak show that has been going on for years. They launch some new DRM shit, and soon it’s broken.

Clive Robinson November 14, 2018 1:48 PM

@ ALL,

Can anyone remember when a senior VP at Oracle effectively accused people –who, having had problems with the product, had tested to find the bugs and then had the temerity to report them– of being the equivalent of criminals on her blog?

It hit the Internet –via HN, IIRC– and spread quickly, forcing her to take her posting down. She got no sanction from Oracle, though, when she should have been drop-kicked out the door, which tells you a lot about the viewpoint of senior Oracle management.

From the few times I’ve had any dealings with Oracle, I’ve found them to be a complete waste of the universe’s time. As someone I worked with pointed out, they were reusing the old IBM strategies that got IBM into such trouble, and were also making the “neo-twonks” look good…

P.S. Polite version “Plague avoid like the”.

Hubert November 14, 2018 5:49 PM

Making it illegal to publish vulnerabilities without the vendor’s consent means that they won’t get fixed quickly — and everyone will be less secure.

This is a non-sequitur to me. Are you referencing something? Like, has anyone seriously proposed that? (The CSO Online link in the old essay is dead.)

If we’re looking for law proposals, how about one that explicitly protects researchers who find vulnerabilities?

Responsible disclosure makes us safer

So, you never really support that assertion in this article or the previous essay. For one thing: safer than what? Than full disclosure? I guess not, given that the essay’s title was “Full Disclosure of Security Vulnerabilities a ‘Damned Good Idea’”.

All the essay says against full disclosure is that it’s unpopular with companies, because it’s expensive and gets them bad publicity. Well, so what? They put the public at risk by releasing vulnerable software; I say vulnerabilities should be expensive, perhaps very expensive, so as to make security-ab-initio look reasonable by comparison. If a bridge is about to collapse, we don’t hide that so the engineers can save face—or so they can keep using the old cheap-but-broken engineering methods in future.

(Citicorp-Center-style secrecy doesn’t work for software security because, A: the manufacturer cannot monitor everything if the general public are running copies, and B: attacks work instantly with no prior warning.)

I’ve heard other people assert that “responsible” disclosure—a terrible term, because it pushes one interpretation of a subjective term as the only correct one—is safer than immediate full disclosure. Is that supported by evidence, or just a claim software companies make for their own convenience?

echo November 14, 2018 7:11 PM


I’ve heard other people assert that “responsible” disclosure—a terrible term, because it pushes one interpretation of a subjective term as the only correct one—is safer than immediate full disclosure. Is that supported by evidence, or just a claim software companies make for their own convenience?

I find this term and the policy implications to be less than clear but policies and precedents do exist, and their equivalents in other domains.

Within the UK, an example might be youth custody centres withholding training manuals from FOI requests because of the threat that young prisoners would learn physical combat techniques to attack prison officers. In practice this isn’t true. Another example might be that broadcasting violence or suicide may create copycats or place vulnerable people at risk.

So what you have is a balance to be struck between the task and the security issue, and replicability and scalability, and mitigation and time.

Ultimately security is similar to applied materials science. The focus is to build an appropriately resilient system.

