Legal Restrictions on Vulnerability Disclosure

Kendra Albert gave an excellent talk at USENIX Security this year, pointing out that the legal agreements surrounding vulnerability disclosure muzzle researchers while allowing companies to not fix the vulnerabilities—exactly the opposite of what the responsible disclosure movement of the early 2000s was supposed to prevent. This is the talk.

Thirty years ago, a debate raged over whether vulnerability disclosure was good for computer security. On one side, full-disclosure advocates argued that software bugs weren’t getting fixed, and wouldn’t get fixed, unless the companies that made insecure software were called out publicly. On the other side, companies argued that full disclosure led to exploitation of unpatched vulnerabilities, especially ones that were hard to fix. After blog posts, public debates, and countless mailing-list flame wars, a compromise emerged: coordinated vulnerability disclosure, in which vulnerabilities are disclosed after a period of confidentiality during which vendors can attempt to fix them. Although full disclosure fell out of fashion, disclosure won and security through obscurity lost. We’ve lived happily ever after since.

Or have we? The move towards paid bug bounties and the rise of platforms that manage bug bounty programs for security teams has changed the reality of disclosure significantly. In certain cases, these programs require agreement to contractual restrictions. Under the status quo, that means that software companies sometimes funnel vulnerabilities into bug bounty management platforms and then condition submission on confidentiality agreements that can prohibit researchers from ever sharing their findings.

In this talk, I’ll explain how confidentiality requirements for managed bug bounty programs restrict the ability of those who attempt to report vulnerabilities to share their findings publicly, compromising the bargain at the center of the CVD process. I’ll discuss what contract law can tell us about how and when these restrictions are enforceable, and more importantly, when they aren’t, providing advice to hackers around how to understand their legal rights when submitting. Finally, I’ll call upon platforms and companies to adapt their practices to be more in line with the original bargain of coordinated vulnerability disclosure, including by banning agreements that require non-disclosure.

And this is me from 2007, talking about “responsible disclosure”:

This was a good idea—and these days it’s normal procedure—but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.

Posted on November 19, 2025 at 7:04 AM

Comments

KC November 19, 2025 11:21 AM

re: Contracts

Legal NDAs are powerful. A one-way gate.

Shall we think about this?

Is it always beneficial to require a stifling legal contract as a condition of vuln disclosure?

Per Kendra’s conference “s***post”: let’s say you work as a vuln researcher and are submitting bugs. These NDA contracts blind you. They shut down any discussion of labor conditions.

How do you know whether the submission platforms are responding to you accurately? You can’t compare notes. What if they say this vuln is “out of scope,” or close it as a duplicate? You cannot talk about it. So…?

Clive Robinson November 19, 2025 11:47 AM

@ Bruce, ALL,

With regards “responsible disclosure” and why it worked only for a while.

As you note,

“but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.”

And legally it was a common-law contractual arrangement in which both sides gained a small amount.

Both sides gained from the original arrangement, with the threat of full disclosure, embarrassment, and worse hanging over the commercial entity.

They then “wormed out” by shifting the rules of the game, claiming a bug bounty program that really did not pay out or do anything proactive, except give the company the upper hand.

You, as the discoverer of the vulnerability, however,

1, Got gagged.
2, Had to hand over not just the bug but all sorts of other things.
3, Then they would decide how little they could get away with paying. Often nothing at all.

Importantly, they would claim you were second or third to report it, and similar nonsense, so they increasingly were neither paying out nor “working the issue”.

So, unsurprisingly, several people decided that just ignoring the company was best, and sold the exploit for what they could get on the “black market”…

After all why get at best just $5000 from the company when you could get ten to twenty times that on the black market?..

Because after taxes and health insurance etc. you’d be lucky to keep 1/6th of it and stay out of jail.

There needs to be a better way for the vulnerability finders: one adjudicated, importantly, by those not working directly or indirectly for the companies, and one that does not require handing the exploit over to them.

Impossibly Stupid November 19, 2025 12:23 PM

Finally, I’ll call upon platforms and companies to adapt their practices to be more in line with the original bargain of coordinated vulnerability disclosure

Do calls like this ever work? These companies have intentionally done the Darth Vader “I’m altering the deal” move; they are not trustworthy partners. More to the point, nobody is “responsible” to a company that creates vulnerable systems; researchers should instead see themselves as responsible to the victims an exploit can create.

I’d be happy to work with anyone who wants to create an independent platform that does incremental disclosure and makes the companies jump through hoops if they are sincerely interested in fixing their problems. Honestly, that should already be the way the existing vulnerability databases function!

NombreNoImportante November 19, 2025 1:50 PM

Has anyone voiced any concerns about keeping all this zero-day info in a few repositories, the bug bounty sites? It seems like we are reducing the work for the three-letter agencies. If it’s been voiced, great. It seems like it should be highlighted.

Durand November 19, 2025 1:52 PM

Dear all,

Just to let you know, that’s the end of the game in the European Union starting September 2026. The Cyber Resilience Act (CRA) will impose legal obligations on vulnerability disclosure: for any product placed on the European market and featuring an exploitable vulnerability, the vendor has a duty to inform the European cybersecurity agency (ENISA) and some local CERTs. A remediation must then be proposed within a couple of weeks. Not respecting this may lead to financial penalties based on the company’s worldwide turnover.

Regards

PatG November 19, 2025 11:07 PM

Security researchers need to stop wanting to get paid for discovering bugs and just announce them. Once the bugs are public, the developers have to fix them.

lurker November 20, 2025 1:10 AM

@PatG
“Once they are public, the developers have to fix it.”

Sounds like Open Source, good idea … But wait, this is proprietary code. There’s probably a clause in the Misuse of Computers law they would twist to hang you with.

ResearcherZero November 20, 2025 4:07 AM

Plenty of companies announce new features that are themselves vulnerabilities, sometimes before those features are released, sometimes afterwards. Those vulnerabilities may be built in and difficult to disable, included in an update meant to mitigate previous vulnerabilities, or may require a complete upgrade of the operating system or the purchase of a new device that supports them.

Microsoft will roll out a novel method of attack on Windows using Agentic CoPilot.

‘https://arstechnica.com/security/2025/11/critics-scoff-after-microsoft-warns-ai-feature-can-infect-machines-and-pilfer-data/

ASUS routers were added to an ORB network by exploiting the AiCloud service on the devices.
https://securityscorecard.com/blog/operation-wrthug-the-global-espionage-campaign-hiding-in-your-home-router/

ResearcherZero November 20, 2025 4:12 AM

@PatG, lurker

Use the built-in AI service to push a fix via a software update.

‘https://www.welivesecurity.com/en/eset-research/plushdaemon-compromises-network-devices-for-adversary-in-the-middle-attacks/

Clive Robinson November 20, 2025 7:12 AM

@ lurker, PatG, ALL,

With regards,

“But wait, this is proprietary code. There’s probably a clause in the Misuse of Computers law they would twist to hang you with.”

Even if there is not, they will put something in the EULA to make it so…

Anyone remember Oracle? They kept advertising that their products were secure (they were far from it).

Well, somebody did a little investigating and reported security faults…

A senior Oracle executive responded on her blog that what they were doing, i.e. investigating, was against the licence agreements, and thus unlawful.

It created an uproar and the post was removed shortly thereafter, but not before it had been read by many, many people and committed to memory.

It’s one reason that, after that, many used to give the old advice,

“If the answer to your question is Oracle, then you are asking the wrong question!”

Due to recent idiocy at the top, I would say the advice is still true…

Make a dumb announcement and lose $300 billion in share value in just days to what the UK FT now calls “The Curse of OpenAI”…

https://www.ft.com/content/064bbca0-1cb2-45ab-85f4-25fdfc318d89

Unfortunately, as it’s owned by Murdoch, it’s got a dumb-ass paywall… So maybe a more independent read:

https://hackr.io/blog/oracle-market-value-plummets-after-openai-deal
