Entries Tagged "disclosure"


Responsible Disclosure for Cryptocurrency Security

Stewart Baker discusses why the industry-norm responsible disclosure for software vulnerabilities fails for cryptocurrency software.

Why can’t the cryptocurrency industry solve the problem the way the software and hardware industries do, by patching and updating security as flaws are found? Two reasons: First, many customers don’t have an ongoing relationship with the hardware and software providers that protect their funds—nor do they have an incentive to update security on a regular basis. Turning to a new security provider or using updated software creates risks; leaving everything the way it was feels safer. So users won’t be rushing to pay for and install new security patches.

Second, cryptocurrency is famously and deliberately decentralized, anonymized, and low friction. That means that the company responsible for hardware or software security may have no way to identify who used its product, or to get the patch to those users. It also means that many wallets with security flaws will be publicly accessible, protected only by an elaborate password. Once word of the flaw leaks, the password can be reverse engineered by anyone, and the legitimate owners are likely to find themselves in a race to move their assets before the thieves do. Even in the software industry, hackers routinely reverse engineer Microsoft’s patches to find the security flaws they fix and then try to exploit them before the patches have been fully installed.

He doesn’t have any good ideas to fix this. I don’t either. Just add it to the pile of blockchain’s many problems.

Posted on September 9, 2022 at 8:33 AM

Wyze Camera Vulnerability

Wyze ignored a vulnerability in its home security cameras for three years. Bitdefender, who discovered the vulnerability, let the company get away with it.

In case you’re wondering, no, that is not normal in the security community. While experts tell me that the concept of a “responsible disclosure timeline” is a little outdated and heavily depends on the situation, we’re generally measuring in days, not years. “The majority of researchers have policies where if they make a good faith effort to reach a vendor and don’t get a response, that they publicly disclose in 30 days,” Alex Stamos, director of the Stanford Internet Observatory and former chief security officer at Facebook, tells me.

Posted on April 4, 2022 at 6:13 AM

Missouri Governor Doesn’t Understand Responsible Disclosure

The Missouri governor wants to prosecute the reporter who discovered a security vulnerability in a state website and then reported it to the state.

The newspaper agreed to hold off publishing any story while the department fixed the problem and protected the private information of teachers around the state.

[…]

According to the Post-Dispatch, one of its reporters discovered the flaw in a web application allowing the public to search teacher certifications and credentials. No private information was publicly visible, but teacher Social Security numbers were contained in HTML source code of the pages.

The state removed the search tool after being notified of the issue by the Post-Dispatch. It was unclear how long the Social Security numbers had been vulnerable.

[…]

Chris Vickery, a California-based data security expert, told The Independent that it appears the department of education was “publishing data that it shouldn’t have been publishing.

“That’s not a crime for the journalists discovering it,” he said. “Putting Social Security numbers within HTML, even if it’s ‘non-display rendering’ HTML, is a stupid thing for the Missouri website to do and is a type of boneheaded mistake that has been around since day one of the Internet. No exploit, hacking or vulnerability is involved here.”
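To make Vickery’s point concrete: anything placed in a page’s HTML is delivered to every visitor’s browser whether or not it is ever rendered, and “viewing the source” just means reading bytes the server has already handed over. Here is a minimal, hypothetical sketch in Python; the markup, the `data-ssn` attribute, and the dummy number are invented for illustration and are not taken from the Missouri application.

```python
# Hypothetical illustration: data embedded in HTML reaches every visitor,
# rendered or not. The markup and attribute name below are invented.
import re

# Stand-in for what a public search page might send back to the browser.
page_source = """
<div class="result">
  <span class="teacher-name">Jane Doe</span>
  <span class="cert" data-ssn="123-45-6789"></span>  <!-- never displayed -->
</div>
"""

# "Viewing the source" is just reading these bytes; no exploit is involved.
for ssn in re.findall(r'data-ssn="(\d{3}-\d{2}-\d{4})"', page_source):
    print("Present in the page source:", ssn)
```

A browser’s View Source menu shows exactly the same markup, which is why calling this “hacking” makes no sense.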

In explaining how he hopes the reporter and news organization will be prosecuted, [Gov.] Parson pointed to a state statute defining the crime of tampering with computer data. Vickery said that statute wouldn’t work in this instance because of a recent decision by the U.S. Supreme Court in the case of Van Buren v. United States.

One hopes that someone will calm the governor down.

Brian Krebs has more.

EDITED TO ADD (11/12): The governor doubled down a few days later.

Posted on October 18, 2021 at 6:20 AM

China Taking Control of Zero-Day Exploits

China is making sure that all newly discovered zero-day exploits are disclosed to the government.

Under the new rules, anyone in China who finds a vulnerability must tell the government, which will decide what repairs to make. No information can be given to “overseas organizations or individuals” other than the product’s manufacturer.

No one may “collect, sell or publish information on network product security vulnerabilities,” say the rules issued by the Cyberspace Administration of China and the police and industry ministries.

This just blocks the cyber-arms trade. It doesn’t prevent researchers from telling the products’ companies, even if they are outside of China.

Posted on July 14, 2021 at 6:04 AM

Bug Bounty Programs Are Being Used to Buy Silence

Investigative report on how commercial bug-bounty platforms like HackerOne, Bugcrowd, and SynAck are being used to silence researchers:

Used properly, bug bounty platforms connect security researchers with organizations wanting extra scrutiny. In exchange for reporting a security flaw, the researcher receives payment (a bounty) as a thank you for doing the right thing. However, CSO’s investigation shows that the bug bounty platforms have turned bug reporting and disclosure on its head, what multiple expert sources, including HackerOne’s former chief policy officer, Katie Moussouris, call a “perversion.”

[…]

Silence is the commodity the market appears to be demanding, and the bug bounty platforms have pivoted to sell what willing buyers want to pay for.

“Bug bounties are best when transparent and open. The more you try to close them down and place NDAs on them, the less effective they are, the more they become about marketing rather than security,” Robert Graham of Errata Security tells CSO.

Leitschuh, the Zoom bug finder, agrees. “This is part of the problem with the bug bounty platforms as they are right now. They aren’t holding companies to a 90-day disclosure deadline,” he says. “A lot of these programs are structured on this idea of non-disclosure. What I end up feeling like is that they are trying to buy researcher silence.”

The bug bounty platforms’ NDAs prohibit even mentioning the existence of a private bug bounty. Tweeting something like “Company X has a private bounty program over at Bugcrowd” would be enough to get a hacker kicked off their platform.

The carrot for researcher silence is the money—bounties can range from a few hundred to tens of thousands of dollars—but the stick to enforce silence is “safe harbor,” an organization’s public promise not to sue or criminally prosecute a security researcher attempting to report a bug in good faith.

Posted on April 3, 2020 at 6:21 AM

Marriott Was Hacked—Again

Marriott announced another data breach, this one affecting 5.2 million people:

At this point, we believe that the following information may have been involved, although not all of this information was present for every guest involved:

  • Contact Details (e.g., name, mailing address, email address, and phone number)
  • Loyalty Account Information (e.g., account number and points balance, but not passwords)
  • Additional Personal Details (e.g., company, gender, and birthday day and month)
  • Partnerships and Affiliations (e.g., linked airline loyalty programs and numbers)
  • Preferences (e.g., stay/room preferences and language preference)

This isn’t nearly as bad as the 2014 Marriott breach—made public in 2018—which was the work of the Chinese government. But it does call into question whether Marriott is taking security seriously at all. It would be nice if there were a government regulatory body that could investigate and hold the company accountable.

Posted on April 2, 2020 at 11:33 AM

NordVPN Breached

There was a successful attack against NordVPN:

Based on the command log, another of the leaked secret keys appeared to secure a private certificate authority that NordVPN used to issue digital certificates. Those certificates might be issued for other servers in NordVPN’s network or for a variety of other sensitive purposes. The name of the third certificate suggested it could also have been used for many different sensitive purposes, including securing the server that was compromised in the breach.

The revelations came as evidence surfaced suggesting that two rival VPN services, TorGuard and VikingVPN, also experienced breaches that leaked encryption keys. In a statement, TorGuard said a secret key for a transport layer security certificate for *.torguardvpnaccess.com was stolen. The theft happened in a 2017 server breach. The stolen data related to a squid proxy certificate.

TorGuard officials said on Twitter that the private key was not on the affected server and that attackers “could do nothing with those keys.” Monday’s statement went on to say TorGuard didn’t remove the compromised server until early 2018. TorGuard also said it learned of VPN breaches last May, “and in a related development we filed a legal complaint against NordVPN.”
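The private certificate authority mentioned in the excerpt is the worrying part: whoever holds a CA’s signing key can mint certificates for any name they choose, and anything that trusts that CA will accept them. Below is a generic sketch of that capability using the third-party `cryptography` package; the CA name, hostname, key sizes, and lifetime are all invented for illustration and are not based on NordVPN’s actual infrastructure.

```python
# Generic illustration of why a leaked CA private key matters: anyone
# holding the key can issue a certificate for any hostname they like.
# All names, key sizes, and lifetimes here are invented for the example.
# Requires the third-party "cryptography" package.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Stand-in for the CA key an attacker would have obtained in a breach.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Private CA")])

# The attacker picks any subject they want, here a hypothetical VPN hostname.
leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "vpn.example.com")])

now = datetime.datetime.now(datetime.timezone.utc)
forged_cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(ca_name)              # chains back to the "trusted" private CA
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(ca_key, hashes.SHA256())     # signed with the stolen CA key
)

print(forged_cert.subject.rfc4514_string(),
      "issued by", forged_cert.issuer.rfc4514_string())
```

Anything that trusts the compromised CA, whether other servers in the same network or client software configured to accept it, has to be treated as suspect until the CA and everything it signed are replaced.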

The breach happened nineteen months ago, but the company is only just disclosing it to the public. We don’t know exactly what was stolen and how it affects VPN security. More details are needed.

VPNs are a shadowy world. We use them to protect our Internet traffic when we’re on a network we don’t trust, but we’re forced to trust the VPN instead. Recommendations are hard. NordVPN’s website says that the company is based in Panama. Do we have any reason to trust it at all?

I’m curious what VPNs others use, and why we should believe they’re trustworthy.

Posted on October 23, 2019 at 6:15 AM

Zoom Vulnerability

The Zoom conferencing app has a vulnerability that allows someone to remotely take over the computer’s camera.

It’s a bad vulnerability, made worse by the fact that it remains even if you uninstall the Zoom app:

This vulnerability allows any website to forcibly join a user to a Zoom call, with their video camera activated, without the user’s permission.

On top of this, this vulnerability would have allowed any webpage to DOS (Denial of Service) a Mac by repeatedly joining a user to an invalid call.

Additionally, if you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install ‘feature’ continues to work to this day.
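That lingering localhost web server is something anyone can check for. The sketch below just probes the local port from Python; 19421 is the port cited in the original disclosure write-up, so treat the number as an assumption, and note that Apple’s later macOS update (see the note at the end of this post) removed the server for most users.

```python
# Quick local check for a leftover Zoom helper web server.
# Port 19421 is the one cited in the 2019 disclosure write-up; treat it
# as an assumption, and a listener here is only a hint, not proof.
import socket

ZOOM_LOCAL_PORT = 19421

def local_port_open(port: int, timeout: float = 0.5) -> bool:
    """Return True if something on this machine is listening on the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex(("127.0.0.1", port)) == 0

if __name__ == "__main__":
    if local_port_open(ZOOM_LOCAL_PORT):
        print(f"Something is listening on 127.0.0.1:{ZOOM_LOCAL_PORT} "
              "(possibly the legacy Zoom web server).")
    else:
        print(f"Nothing is listening on 127.0.0.1:{ZOOM_LOCAL_PORT}.")
```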

Zoom didn’t take the vulnerability seriously:

This vulnerability was originally responsibly disclosed on March 26, 2019. This initial report included a proposed description of a ‘quick fix’ Zoom could have implemented by simply changing their server logic. It took Zoom 10 days to confirm the vulnerability. The first actual meeting about how the vulnerability would be patched occurred on June 11th, 2019, only 18 days before the end of the 90-day public disclosure deadline. During this meeting, the details of the vulnerability were confirmed and Zoom’s planned solution was discussed. However, I was very easily able to spot and describe bypasses in their planned fix. At this point, Zoom was left with 18 days to resolve the vulnerability. On June 24th after 90 days of waiting, the last day before the public disclosure deadline, I discovered that Zoom had only implemented the ‘quick fix’ solution originally suggested.

This is why we disclose vulnerabilities. Now, finally, Zoom is taking this seriously and fixing it for real.

EDITED TO ADD (8/8): Apple silently released a macOS update that removes the Zoom webserver if the app is not present.

Posted on July 16, 2019 at 12:54 PM

