Bug Bounty Programs Are Being Used to Buy Silence

Investigative report on how commercial bug-bounty programs like HackerOne, Bugcrowd, and Synack are being used to silence researchers:

Used properly, bug bounty platforms connect security researchers with organizations wanting extra scrutiny. In exchange for reporting a security flaw, the researcher receives payment (a bounty) as a thank you for doing the right thing. However, CSO’s investigation shows that the bug bounty platforms have turned bug reporting and disclosure on its head, in what multiple expert sources, including HackerOne’s former chief policy officer, Katie Moussouris, call a “perversion.”

[…]

Silence is the commodity the market appears to be demanding, and the bug bounty platforms have pivoted to sell what willing buyers want to pay for.

“Bug bounties are best when transparent and open. The more you try to close them down and place NDAs on them, the less effective they are, the more they become about marketing rather than security,” Robert Graham of Errata Security tells CSO.

Leitschuh, the Zoom bug finder, agrees. “This is part of the problem with the bug bounty platforms as they are right now. They aren’t holding companies to a 90-day disclosure deadline,” he says. “A lot of these programs are structured on this idea of non-disclosure. What I end up feeling like is that they are trying to buy researcher silence.”

The bug bounty platforms’ NDAs prohibit even mentioning the existence of a private bug bounty. Tweeting something like “Company X has a private bounty program over at Bugcrowd” would be enough to get a hacker kicked off their platform.

The carrot for researcher silence is the money—bounties can range from a few hundred to tens of thousands of dollars—but the stick to enforce silence is “safe harbor,” an organization’s public promise not to sue or criminally prosecute a security researcher attempting to report a bug in good faith.

Posted on April 3, 2020 at 6:21 AM • 19 Comments

Comments

myliit April 3, 2020 9:02 AM

Ps. Continuing from above, I think the analogy holds

“When regulatory capture occurs, a special interest is prioritized over the general interests of the public, leading to a net loss for society. Government agencies suffering regulatory capture are called “captured agencies.” The theory of client politics is related to that of rent-seeking and political failure; client politics “occurs when most or all of the benefits of a program go to some single, reasonably small interest (e.g., industry, profession, or locality) but most or all of the costs will be borne by a large number of people (for example, all taxpayers).”[3] …”

"Mr William" April 3, 2020 9:56 AM

Meh… Headline is mostly a continuation of the vuln dev ideology and agenda. Researchers would like the publicity in addition to the cash, and perhaps a chance to monetize their findings again. Any “protecting the public” is just the sales pitch for that.

Infosec news likes to pretend that these researchers are white knights helping defend the realm, and while that view of a researcher’s motivation is sometimes (perhaps usually) true, there is representation in these programs from researchers who are greyhat mercenaries and thug-like blackhats who view the program as one way to extract payment among other illegal options. God help you if you’re dealing with the latter group demanding payment for a bug that was already paid out, or a researcher who thinks he can triple up the bounty on his RCE.

I do IR for a faceless corporation that earnestly runs these programs to patch vulnerabilities and improve security. The researchers virtually never follow the rules of engagement promised by the program, and running the program introduces a ton of problems, such as frequent DDoS-like traffic patterns from researchers racing to perform intrusive scans to be the first one who gets paid. We now treat each of the bounty reports as an incident, because there was one incident where a “researcher” moved laterally, exfiltrated data, and then, upon failing to get enough of a foothold to further monetize the access, requested entrance to the private bug bounty program to report the bug for payment. And of course the bug bounty program needed to protect the identity of that individual. Where there would be criminal charges or even a civil action for monetary damage, there is a sales guy from the bug firm trying to convince you that the “researcher is new and didn’t know better” and recommending that you just pay.

I still think bug bounties are a net good, and I’m friends with a few top researchers, but there is a point where the anti-corporate, anti-government hacker ethos comes into direct conflict with the shared mission of ethical researchers and the companies participating in the program. Just as researchers want to be safe from legal retaliation, the person paying out the bounty wants a chance to patch before needing to engage PR to respond to an internet smear campaign, or before a thousand script kiddies are running the Metasploit module for the PoC released into the wild two hours after the bounty is paid.

We want vendors to be more responsible, but the researchers also need to be more responsible.

yoshi April 3, 2020 10:37 AM

Unlike anyone else here, I actually read the article, and it makes a ton of dodgy claims, including stating that bug bounty programs are a violation of the GDPR and labor laws like California’s AB5. I also take issue with the elitists quoted in the article claiming that bug bounty programs somehow insult whatever ethics they pretend to have. Let’s just cut to the chase:

Are bug bounty programs helping companies close security issues?

The answer is yes.

If you have a problem with signing an NDA, that’s your problem. That’s standard business practice. Get off your high horse and play the game.

"Mr William" April 3, 2020 10:47 AM

@yoshi oh I read it, then I spell checked it and sent the CSO a bill for minimum wage under AB5 because I wouldn’t want CSO thinking they could abuse internet grammar police without compensating them as professional editors. 🙂

myliit April 3, 2020 10:52 AM

The plot thickens; see the start of the OP link above about Zoom:

https://theintercept.com/2020/04/03/zooms-encryption-is-not-suited-for-secrets-and-has-surprising-links-to-china-researchers-discover/

“[headline skipped; tired of the Intercept using all caps. Zoom, yes, again]

MEETINGS ON ZOOM, the increasingly popular video conferencing service, are encrypted using an algorithm with serious, well-known weaknesses, and sometimes using keys issued by servers in China, even when meeting participants are all in North America, according to researchers at the University of Toronto.

The researchers also found that Zoom protects video and audio content using a home-grown encryption scheme, that there is a vulnerability in Zoom’s “waiting room” feature, and that Zoom appears to have at least 700 employees in China spread across three subsidiaries. They conclude, in a report for the university’s Citizen Lab — widely followed in information security circles — that Zoom’s service is “not suited for secrets” and that it may be legally obligated to disclose encryption keys to Chinese authorities and “responsive to pressure” from them.
…”
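The “serious, well-known weaknesses” the Citizen Lab researchers flagged refer to Zoom’s use of AES-128 in ECB mode, which encrypts each block independently, so identical plaintext blocks yield identical ciphertext blocks and patterns in the input leak through. A minimal sketch of that structural flaw, using a keyed hash as a stand-in for the block cipher (not real AES, just an illustration of ECB’s behavior):

```python
import hashlib

def toy_ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB mode: each 16-byte block is encrypted independently of the
    # others. A keyed hash stands in for the block cipher here.
    assert len(plaintext) % 16 == 0
    out = b""
    for i in range(0, len(plaintext), 16):
        block = plaintext[i:i + 16]
        out += hashlib.sha256(key + block).digest()[:16]
    return out

key = b"sixteen byte key"
# Two identical plaintext blocks followed by a different one...
pt = b"A" * 16 + b"A" * 16 + b"B" * 16
ct = toy_ecb_encrypt(key, pt)
# ...produce identical ciphertext for the repeated blocks, leaking
# structure to anyone watching the wire.
print(ct[0:16] == ct[16:32])   # True
print(ct[0:16] == ct[32:48])   # False
```

This is exactly why ECB is considered unsuitable for bulk media like video frames: repeated regions of the input remain visible in the ciphertext.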

Tatütata April 3, 2020 11:31 AM

If bugs are kept secret, what would prevent a bug-finder from claiming more than once the bounty through a third-party?

Or are the bugs actually corrected but kept under wraps?

Clive Robinson April 3, 2020 12:58 PM

@ Tatütata, Bruce,

what would prevent a bug-finder from claiming more than once the bounty through a third-party?

Because they “only pay once”…

Now flip that on its head…

As a genuine researcher you send in a bug only to be told “it’s already been found”.

What happens in the background at the “agency” is that many involved with the agency are often hackers as well, so they take advantage of the secrecy. You as a genuine researcher send in a bug; the person receiving it at the agency “deep sixes it” and finds some reason to claim you broke the rules or some such. Meanwhile they pass the bug you found on to a buddy, who sends it in as their own, and the guy at the agency credits them with the discovery, so the buddy gets the cash reward and you get a reputation as a rule breaker…

In a way it’s like “log rolling” in the publishing industry because the buddy does the same in return at another point in time.

Thus the whole “Bug Bounty” system, especially where any kind of secrecy is involved is a “rigged market”…

Timbo April 3, 2020 4:08 PM

@Clive you beat me to the punch.

HackerOne has been accused of shutting down security researchers by telling them that a particular bug has already been reported, but they give no proof, no links, no info of same. Plus H1 is staffed by hackers who very well could simply be keeping that info and submitting it themselves, as they are also “security researchers” in many cases. There have been reports that an H1 staffer required PoC code and screenshots to “prove” the hack works, then after receiving same, the researcher was told it was either not really a security hole or that it had already been submitted by someone else.

Regardless of whether or not these claims are true, and they may very well be, H1 has given the community no proof it is trustworthy in this respect. Frankly, with all of the work and red tape involved lately, I’m really surprised that hackers don’t simply sell these quite lucrative exploits on the dark web for real bitcoins. They get notoriety, quick cash, and zero hassle. Seems like H1, et al, is working against itself.

/smh

JohnDoe11 April 3, 2020 4:36 PM

If you tell a security researcher that the bug has already been found & there’s no payout, then why wouldn’t the security researcher immediately publish it? After that happens a few times, folks will stop cheating the researchers.

Then, of course, if they don’t fix the bug in 90 days, the security researcher’s friend, colleague, brother-in-law, college roommate’s sister’s husband, etc will no doubt “independently” discover the bug and seek a reward. See step #1.

Sure it’s a violation of the NDA, but HOW DO YOU PROVE IT?

This is a problem that can solve itself.

Impossibly Stupid April 3, 2020 4:43 PM

Used properly, bug bounty platforms connect security researchers with organizations wanting extra scrutiny.

Or the organization could, you know, hire a competent HR staff/recruiters that can hire competent security professionals to get that job done. Most bounty/reward programs as currently used are just another retread of exploitative spec work. It’s a shame so many people still fall for these “do a lot of costly work for us and maybe you’ll be the lucky one we pay” schemes.

@”Mr William”

Researchers would like the publicity

Publication is a cornerstone of any good science. If they’re not primarily interested in doing that, they shouldn’t even be called “researchers”, just mercenaries. If the companies aren’t interested in that kind of transparency, they shouldn’t be seeking “researchers” either.

I do IR for a faceless cooperation that earnestly runs these programs to patch vulnerabilities and improve security.

So you’re part of the problem. Tell them to pay to hire a proper security staff rather than expecting external agents to test your systems for free.

We want vendors to be more responsible, but the researchers also need to be more responsible.

Then hire the researchers into a position of responsibility. Until that happens, you really don’t have much standing. If you don’t like the ecosystem that your bug bounty program has created, it’s up to you to fix yet another broken system you have created.

Clive Robinson April 3, 2020 5:43 PM

@ Timbo,

HackerOne has been accused of shutting down security researchers by telling them that a particular bug has already been reported

They are not the only one…

But more interestingly, H1 are the only one, as far as I’m aware, who have been “publicly accused” by an established “name”.

The fact that H1 did an “ostrich act” and apparently did not even reach for a lawyer, speaks volumes about the state of play.

The fact this behaviour appears to be endemic in the industry kind of says what is going on in that particular snake-infested wheel rut.

Whilst I’m sure there will be honourable agencies in the marketplace, other reports suggest the “denial activity” is also happening in the companies who have started the particular bug bounty. Thus those companies are likely to avoid the honourable agencies, and there is a very real possibility that honourable agencies will get “starved out”, leaving only the rogue agencies behind…

"Mr William" April 3, 2020 5:59 PM

@Impossibly Stupid

Lots of “should” and “shouldn’t” opinions there. I’m just relaying what you really get when you deal with these programs: a lot of noise, some low-hanging fruit when you add a corporate acquisition to the bug bounty scope, frequent drama from researchers who feel entitled to larger payments, or to payments for the same bug you paid them for on the same web server because they changed the domain name, and constant drama where researchers feel ripped off for being slower than the faster researcher who is breaking the rules of engagement. The people cutting the checks are just as unhappy with the middleman as the people receiving them.

“So you’re part of the problem. Tell them to pay to hire a proper security staff rather than expecting external agents to test your systems for free”

Presumptuous, and it sounds like you’re just fighting your own strawman… the organisation has a large internal staff (including in-house red team, app-sec, etc.) and pays for multiple external pen tests a year. The bug bounty program and VDP are just there to give the freelancers a place to report things they find, and a mechanism to compensate them for the work. And while it’s a pain in the butt, the whole program costs less than one full-time infosec person, so it’s an “if it keeps the internal staff honest, then why not” situation.

Precisely the opposite of “expecting external agents to test your systems for free”. But I’ll be even more provocative and say those agents aren’t curing cancer or writing zero days… they are running sqlmap and dnsrecon.py with giant brute-force dictionaries. They are the ambulance chasers of the offensive research community, being marketed and touted by their pimps as something they are not.

amfM April 4, 2020 4:32 AM

but the stick to enforce silence is “safe harbor,” an organization’s public promise not to sue or criminally prosecute a security researcher attempting to report a bug in good faith.

Rely on that thought at your peril, methinks.

still old April 4, 2020 8:25 AM

I’ve found, or helped verify, a couple of bugs. The memorable one was an uninitialised-memory-use bug in Intel Ethernet cards. If you examined the Ethernet frame’s padding bytes, you could extract bytes from a previously transmitted packet, because Ethernet frames are always padded to word boundaries (and a minimum frame length). Since this was before HTTPS and long passwords, you could get lucky and extract an entire password from a single packet.

Since the use of encryption is now much more widespread, I’m not sure how much information you could glean using this method, but I’d bet money that Intel still uses similar shortcuts without understanding the implications.
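This is the widely reported “Etherleak” class of bug (CVE-2003-0001): drivers that pad short frames to the 60-byte minimum from an uninitialized transmit buffer leak stale memory in the padding. A minimal sketch of how the leak is read back from a captured frame; the frame bytes and payload length here are hypothetical:

```python
ETH_HEADER_LEN = 14   # dst MAC (6) + src MAC (6) + EtherType (2)
MIN_FRAME_LEN = 60    # minimum Ethernet frame length, excluding the FCS

def extract_padding(frame: bytes, payload_len: int) -> bytes:
    """Return the padding bytes that follow the declared payload."""
    end_of_payload = ETH_HEADER_LEN + payload_len
    return frame[end_of_payload:]

# Hypothetical captured frame: 14-byte header plus a 20-byte payload
# must be padded out to 60 bytes. A correct driver zero-fills the
# padding; the buggy drivers transmitted whatever was left in memory.
frame = b"\x00" * ETH_HEADER_LEN + b"P" * 20 + b"stale-mem-here-leak-data-!"
assert len(frame) == MIN_FRAME_LEN
padding = extract_padding(frame, 20)
print(padding)   # the leaked bytes, e.g. a fragment of a prior packet
```

In practice the declared payload length comes from the encapsulated protocol header (e.g. the IP total-length field), which is what lets an observer separate real payload from driver-supplied padding.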

Impossibly Stupid April 4, 2020 3:55 PM

@”Mr William”

Lots of “should” and “shouldn’t” opinions there. I’m just relaying what you really get when you deal with these programs.

Nothing exists in a vacuum. If you don’t like what you’re getting, stop following a program that works in counterproductive ways (or, to phrase it ironically/paradoxically, fix the bugs in your bug bounty system). Call it mere opinion if you wish, but I’m not sure how trying to dismiss advice like mine is going to help you. I mean, I’d like to think Bruce is highlighting this article because he’d like to see a better outcome than your sentiment of resignation.

Presumptuous, and it sounds like you’re just fighting your own strawman… the whole program costs less than one full-time infosec person so it’s an “if it keeps the internal staff honest, then why not” situation.

There’s no need for me to construct a straw man when you continually post evidence that your organization is doing things wrong. Rather than using the system to provide better security, you acknowledge that it is really about getting cheaper “security”. The need to keep your security employees “honest” is an acknowledgement that HR has not done its job to find professionals to do the work. Nor does it help that you keep using the term “researcher” for the spec work mercenaries that you maintain a contestable relationship with.

The bottom line is that there are professional approaches and unprofessional ones. Even if you have a security apparatus that fosters a lot of professional action, there is still a responsibility to eliminate all the elements that push for the unprofessional ones. That may mean getting new security staff, replacing security (or other) management, or razing the whole HR department if they can’t competently locate enough professional employees at any level. See the ruling in Chain v. Weakest Link.

~~~ April 5, 2020 1:56 AM

Security experts – sell the bugs to the highest bidder. Why would you care more for the companies’ clients than they do? 0-days don’t need NDAs, the companies will have a bigger incentive to fix the bug pronto instead of swiping it under a rug, users will be educated of the value of their business to said companies, and you’ll have more money. I’d call it win(no NDA)-win(quick fix)-win(in the open)-win(user education)-win(money).

