Friday Squid Blogging: On Squid Communication

They can communicate using bioluminescent flashes:

New research published this week in Proceedings of the National Academy of Sciences presents evidence for a previously unknown semantic-like ability in Humboldt squid. What's more, these squid can enhance the visibility of their skin patterns by using their bodies as a kind of backlight, which may allow them to convey messages of surprising complexity, according to the new paper. Together, this could explain how Humboldt squid -- and possibly other closely related squid -- are able to facilitate group behaviors in light-restricted environments, such as evading predators, finding places to forage, signaling that it's time to feed, and deciding who gets priority at the dinner table, among other things.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Posted on April 3, 2020 at 4:30 PM • 47 Comments

Security and Privacy Implications of Zoom

Over the past few weeks, Zoom's use has exploded since it became the video conferencing platform of choice in today's COVID-19 world. (My own university, Harvard, uses it for all of its classes. Boris Johnson had a cabinet meeting over Zoom.) Over that same period, the company has been exposed for having both lousy privacy and lousy security. My goal here is to summarize all of the problems and talk about solutions and workarounds.

In general, Zoom's problems fall into three broad buckets: (1) bad privacy practices, (2) bad security practices, and (3) bad user configurations.

Privacy first: Zoom spies on its users for personal profit. It seems to have cleaned this up somewhat since everyone started paying attention, but it still does it.

The company collects a laundry list of data about you, including user name, physical address, email address, phone number, job information, Facebook profile information, computer or phone specs, IP address, and any other information you create or upload. And it uses all of this surveillance data for profit, against your interests.

Last month, Zoom's privacy policy contained this bit:

Does Zoom sell Personal Data? Depends what you mean by "sell." We do not allow marketing companies, or anyone else to access Personal Data in exchange for payment. Except as described above, we do not allow any third parties to access any Personal Data we collect in the course of providing services to users. We do not allow third parties to use any Personal Data obtained from us for their own purposes, unless it is with your consent (e.g. when you download an app from the Marketplace). So in our humble opinion, we don't think most of our users would see us as selling their information, as that practice is commonly understood.

"Depends what you mean by 'sell.'" "...most of our users would see us as selling..." "...as that practice is commonly understood." That paragraph was carefully worded by lawyers to permit them to do pretty much whatever they want with your information while pretending otherwise. Do any of you who "download[ed] an app from the Marketplace" remember consenting to them giving your personal data to third parties? I don't.

Doc Searls has been all over this, writing about the surprisingly large number of third-party trackers on the Zoom website and its poor privacy practices in general.

On March 29th, Zoom rewrote its privacy policy:

We do not sell your personal data. Whether you are a business or a school or an individual user, we do not sell your data.

[...]

We do not use data we obtain from your use of our services, including your meetings, for any advertising. We do use data we obtain from you when you visit our marketing websites, such as zoom.us and zoom.com. You have control over your own cookie settings when visiting our marketing websites.

There's lots more. It's better than it was, but Zoom still collects a huge amount of data about you. And note that it considers its home pages "marketing websites," which means it's still using third-party trackers and surveillance-based advertising. (Honestly, Zoom, just stop doing it.)

Now security: Zoom's security is sloppy at best and malicious at worst. Motherboard reported that Zoom's iPhone app was sending user data to Facebook, even if the user didn't have a Facebook account. Zoom removed the feature, but its response should worry you about its sloppy coding practices in general:

"We originally implemented the 'Login with Facebook' feature using the Facebook SDK in order to provide our users with another convenient way to access our platform. However, we were recently made aware that the Facebook SDK was collecting unnecessary device data," Zoom told Motherboard in a statement on Friday.

This isn't the first time Zoom was sloppy with security. Last year, a researcher discovered that a vulnerability in the Mac Zoom client allowed any malicious website to enable the camera without permission. This seemed like a deliberate design choice: that Zoom designed its service to bypass browser security settings and remotely enable a user's web camera without the user's knowledge or consent. (EPIC filed an FTC complaint over this.) Zoom patched this vulnerability last year.

On 4/1, we learned that Zoom for Windows can be used to steal users' Windows credentials.

Attacks work by using the Zoom chat window to send targets a string of text that represents the network location on the Windows device they're using. The Zoom app for Windows automatically converts these so-called universal naming convention strings -- such as \\attacker.example.com/C$ -- into clickable links. In the event that targets click on those links on networks that aren't fully locked down, Zoom will send the Windows usernames and the corresponding NTLM hashes to the address contained in the link.
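
The obvious client-side mitigation is to stop auto-linking UNC paths in chat messages. Here is a minimal sketch in Python of that kind of filtering -- purely illustrative, with a simplified pattern, and not Zoom's actual code:

    import re

    # Matches strings that look like Windows UNC paths, e.g. \\host\share or
    # \\attacker.example.com/C$ (both slash styles appear in the reports).
    # Illustrative pattern only -- a real client would need something stricter.
    UNC_PATTERN = re.compile(r"^\\\\[^\\/\s]+[\\/]\S*")

    def linkify(token: str) -> str:
        """Turn web URLs into links, but leave UNC paths as plain text so the
        client never offers a one-click route to an attacker's SMB share."""
        if UNC_PATTERN.match(token):
            return token  # plain text: no link, no automatic NTLM handshake
        if token.startswith(("http://", "https://")):
            return f'<a href="{token}">{token}</a>'
        return token

    print(linkify(r"\\attacker.example.com/C$"))  # stays plain text
    print(linkify("https://example.com"))         # becomes a link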

On 4/2, we learned that Zoom secretly displayed data from people's LinkedIn profiles, which allowed some meeting participants to snoop on each other. (Zoom has fixed this one.)

I'm sure lots more of these bad security decisions, sloppy coding mistakes, and random software vulnerabilities are coming.

But it gets worse. Zoom's encryption is awful. First, the company claims that it offers end-to-end encryption, but it doesn't. It only provides link encryption, which means everything is unencrypted on the company's servers. From the Intercept:

In Zoom's white paper, there is a list of "pre-meeting security capabilities" that are available to the meeting host that starts with "Enable an end-to-end (E2E) encrypted meeting." Later in the white paper, it lists "Secure a meeting with E2E encryption" as an "in-meeting security capability" that's available to meeting hosts. When a host starts a meeting with the "Require Encryption for 3rd Party Endpoints" setting enabled, participants see a green padlock that says, "Zoom is using an end to end encrypted connection" when they mouse over it.

But when reached for comment about whether video meetings are actually end-to-end encrypted, a Zoom spokesperson wrote, "Currently, it is not possible to enable E2E encryption for Zoom video meetings. Zoom video meetings use a combination of TCP and UDP. TCP connections are made using TLS and UDP connections are encrypted with AES using a key negotiated over a TLS connection."
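
The distinction matters because whoever holds the meeting key can read the traffic. Here's a toy model in Python (using the third-party cryptography package; nothing here is Zoom's actual protocol or code) of why a server-generated key is link encryption rather than end-to-end encryption:

    import os
    # Toy model only -- not Zoom's protocol. Requires the third-party
    # "cryptography" package (pip install cryptography).
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class RelayServer:
        """The server generates the meeting key and hands it to every client."""
        def __init__(self):
            self.meeting_key = AESGCM.generate_key(bit_length=128)

        def key_for_participant(self) -> bytes:
            return self.meeting_key  # delivered to each client over TLS

    server = RelayServer()
    alice_key = server.key_for_participant()
    bob_key = server.key_for_participant()

    nonce = os.urandom(12)
    ciphertext = AESGCM(alice_key).encrypt(nonce, b"private meeting audio", None)

    # Bob can decrypt -- but so can the server, because it knows the key.
    print(AESGCM(bob_key).decrypt(nonce, ciphertext, None))
    print(AESGCM(server.meeting_key).decrypt(nonce, ciphertext, None))
    # True end-to-end encryption would negotiate keys between clients only,
    # leaving the relay server with ciphertext it cannot read.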

They're also lying about the type of encryption. On 4/3, Citizen Lab reported:

Zoom documentation claims that the app uses "AES-256" encryption for meetings where possible. However, we find that in each Zoom meeting, a single AES-128 key is used in ECB mode by all participants to encrypt and decrypt audio and video. The use of ECB mode is not recommended because patterns present in the plaintext are preserved during encryption.

The AES-128 keys, which we verified are sufficient to decrypt Zoom packets intercepted in Internet traffic, appear to be generated by Zoom servers, and in some cases, are delivered to participants in a Zoom meeting through servers in China, even when all meeting participants, and the Zoom subscriber's company, are outside of China.

I'm okay with AES-128, but using ECB (electronic codebook) mode indicates that there is no one at the company who knows anything about cryptography.
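
To see what ECB mode leaks, here's a short demonstration in Python (using the cryptography package; the key and plaintext are made up): identical plaintext blocks encrypt to identical ciphertext blocks, which is exactly the pattern preservation Citizen Lab describes.

    import os
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)            # AES-128, as Citizen Lab found Zoom uses
    block = b"ATTACK AT DAWN!!"     # exactly one 16-byte block
    plaintext = block * 4           # the same block repeated four times

    # ECB encrypts each block independently with the same key, so repeated
    # plaintext blocks produce repeated ciphertext blocks.
    ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ct_ecb = ecb.update(plaintext) + ecb.finalize()
    ecb_blocks = {ct_ecb[i:i + 16] for i in range(0, len(ct_ecb), 16)}
    print("distinct ECB ciphertext blocks:", len(ecb_blocks))  # prints 1

    # A sane mode (CTR, GCM, or CBC with a random IV) hides the repetition.
    ctr = Cipher(algorithms.AES(key), modes.CTR(os.urandom(16))).encryptor()
    ct_ctr = ctr.update(plaintext) + ctr.finalize()
    ctr_blocks = {ct_ctr[i:i + 16] for i in range(0, len(ct_ctr), 16)}
    print("distinct CTR ciphertext blocks:", len(ctr_blocks))  # prints 4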

And that China connection is worrisome. Citizen Lab again:

Zoom, a Silicon Valley-based company, appears to own three companies in China through which at least 700 employees are paid to develop Zoom's software. This arrangement is ostensibly an effort at labor arbitrage: Zoom can avoid paying US wages while selling to US customers, thus increasing their profit margin. However, this arrangement may make Zoom responsive to pressure from Chinese authorities.

Or from Chinese programmers slipping backdoors into the code at the request of the government.

Finally, bad user configuration. Zoom has a lot of options. The defaults aren't great, and if you don't configure your meetings right you're leaving yourself open to all sorts of mischief.

"Zoombombing" is the most visible problem. People are finding open Zoom meetings, classes, and events: joining them, and sharing their screens to broadcast offensive content -- porn, mostly -- to everyone. It's awful if you're the victim, and a consequence of allowing any participant to share their screen.

Even without screen sharing, people are logging in to random Zoom meetings and disrupting them. Turns out that Zoom didn't make meeting IDs long enough to prevent someone from randomly trying them, looking for meetings. This isn't new; Check Point Research reported this last summer. Instead of making the meeting IDs longer or more complicated -- which it should have done -- it enabled meeting passwords by default. Of course most of us don't use passwords, and there are now automatic tools for finding Zoom meetings.
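
A quick back-of-envelope calculation shows why short numeric IDs are a problem. The numbers below are illustrative assumptions, not Zoom's actual figures:

    # Back-of-envelope: how often does random guessing hit a live meeting?
    # All numbers are illustrative assumptions, not Zoom's actual figures.
    id_space = 10**9             # 9-digit numeric meeting IDs
    active_meetings = 200_000    # assumed concurrent meetings with no password
    guesses_per_hour = 100_000   # assumed rate for an automated scanner

    p_hit = active_meetings / id_space
    hits_per_hour = guesses_per_hour * p_hit

    print(f"chance a single guess is a live meeting: {p_hit:.4%}")    # 0.0200%
    print(f"expected live meetings found per hour: {hits_per_hour:.0f}")  # 20
    # Even with these modest assumptions, a scanner stumbles onto ~20 open
    # meetings an hour; a longer, sparser ID space shrinks p_hit directly.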

For help securing your Zoom sessions, Zoom has a good guide. Short summary: don't share the meeting ID more than you have to, use a password in addition to a meeting ID, use the waiting room if you can, and pay attention to who has what permissions.

That's what we know about Zoom's privacy and security so far. Expect more revelations in the weeks and months to come. The New York Attorney General is investigating the company. Security researchers are combing through the software, looking for other things Zoom is doing and not telling anyone about. There are more stories waiting to be discovered.

Zoom is a security and privacy disaster, but until now had managed to avoid public accountability because it was relatively obscure. Now that it's in the spotlight, it's all coming out. (Their 4/1 response to all of this is here.) On 4/2, the company said it would freeze all feature development and focus on security and privacy. Let's see if that's anything more than a PR move.

In the meantime, you should either lock Zoom down as best you can, or -- better yet -- abandon the platform altogether. Jitsi is a distributed, free, and open-source alternative. Start your meeting here.

EDITED TO ADD: Fight for the Future is on this.

Steve Bellovin's comments.

Meanwhile, lots of Zoom video recordings are available on the Internet. The article doesn't have any useful details about how they got there:

Videos viewed by The Post included one-on-one therapy sessions; a training orientation for workers doing telehealth calls, which included people's names and phone numbers; small-business meetings, which included private company financial statements; and elementary-school classes, in which children's faces, voices and personal details were exposed.

Many of the videos include personally identifiable information and deeply intimate conversations, recorded in people's homes. Other videos include nudity, such as one in which an aesthetician teaches students how to give a Brazilian wax.

[...]

Many of the videos can be found on unprotected chunks of Amazon storage space, known as buckets, which are widely used across the Web. Amazon buckets are locked down by default, but many users make the storage space publicly accessible either inadvertently or to share files with other people.
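
On the Amazon side, this is basic bucket hygiene. Here's a hedged sketch using boto3 (the bucket name is a placeholder) that checks whether a bucket's Block Public Access settings are all enabled:

    # Requires boto3 and AWS credentials with s3:GetBucketPublicAccessBlock
    # permission. The bucket name below is a placeholder.
    import boto3
    from botocore.exceptions import ClientError

    def is_fully_blocked(bucket_name: str) -> bool:
        """Return True if all four S3 Block Public Access settings are on."""
        s3 = boto3.client("s3")
        try:
            config = s3.get_public_access_block(Bucket=bucket_name)[
                "PublicAccessBlockConfiguration"
            ]
        except ClientError:
            # No configuration at all -- treat as "not fully blocked."
            return False
        return all(
            config.get(flag, False)
            for flag in (
                "BlockPublicAcls",
                "IgnorePublicAcls",
                "BlockPublicPolicy",
                "RestrictPublicBuckets",
            )
        )

    print(is_fully_blocked("example-zoom-recordings-bucket"))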

EDITED TO ADD (4/4): New York City has banned Zoom from its schools.

Posted on April 3, 2020 at 10:10 AM • 54 Comments

Bug Bounty Programs Are Being Used to Buy Silence

Investigative report on how commercial bug-bounty programs like HackerOne, Bugcrowd, and SynAck are being used to silence researchers:

Used properly, bug bounty platforms connect security researchers with organizations wanting extra scrutiny. In exchange for reporting a security flaw, the researcher receives payment (a bounty) as a thank you for doing the right thing. However, CSO's investigation shows that the bug bounty platforms have turned bug reporting and disclosure on its head, what multiple expert sources, including HackerOne's former chief policy officer, Katie Moussouris, call a "perversion."

[...]

Silence is the commodity the market appears to be demanding, and the bug bounty platforms have pivoted to sell what willing buyers want to pay for.

"Bug bounties are best when transparent and open. The more you try to close them down and place NDAs on them, the less effective they are, the more they become about marketing rather than security," Robert Graham of Errata Security tells CSO.

Leitschuh, the Zoom bug finder, agrees. "This is part of the problem with the bug bounty platforms as they are right now. They aren't holding companies to a 90-day disclosure deadline," he says. "A lot of these programs are structured on this idea of non-disclosure. What I end up feeling like is that they are trying to buy researcher silence."

The bug bounty platforms' NDAs prohibit even mentioning the existence of a private bug bounty. Tweeting something like "Company X has a private bounty program over at Bugcrowd" would be enough to get a hacker kicked off their platform.

The carrot for researcher silence is the money -- bounties can range from a few hundred to tens of thousands of dollars -- but the stick to enforce silence is "safe harbor," an organization's public promise not to sue or criminally prosecute a security researcher attempting to report a bug in good faith.


Posted on April 3, 2020 at 6:21 AM • 19 Comments

Marriott Was Hacked -- Again

Marriott announced another data breach, this one affecting 5.2 million people:

At this point, we believe that the following information may have been involved, although not all of this information was present for every guest involved:

  • Contact Details (e.g., name, mailing address, email address, and phone number)
  • Loyalty Account Information (e.g., account number and points balance, but not passwords)
  • Additional Personal Details (e.g., company, gender, and birthday day and month)
  • Partnerships and Affiliations (e.g., linked airline loyalty programs and numbers)
  • Preferences (e.g., stay/room preferences and language preference)

This isn't nearly as bad as the 2014 Marriott breach -- made public in 2018 -- which was the work of the Chinese government. But it does call into question whether Marriott is taking security seriously at all. It would be nice if there were a government regulatory body that could investigate and hold the company accountable.

Posted on April 2, 2020 at 11:33 AM • 18 Comments

Clarifying the Computer Fraud and Abuse Act

A federal court has ruled that violating a website's terms of service is not "hacking" under the Computer Fraud and Abuse Act.

The plaintiffs wanted to investigate possible racial discrimination in online job markets by creating accounts for fake employers and job seekers. Leading job sites have terms of service prohibiting users from supplying fake information, and the researchers worried that their research could expose them to criminal liability under the CFAA, which makes it a crime to "access a computer without authorization or exceed authorized access."

So in 2016 they sued the federal government, seeking a declaration that this part of the CFAA violated the First Amendment.

But rather than addressing that constitutional issue, Judge John Bates ruled on Friday that the plaintiffs' proposed research wouldn't violate the CFAA's criminal provisions at all. Someone violates the CFAA when they bypass an access restriction like a password. But someone who logs into a website with a valid password doesn't become a hacker simply by doing something prohibited by a website's terms of service, the judge concluded.

"Criminalizing terms-of-service violations risks turning each website into its own criminal jurisdiction and each webmaster into his own legislature," Bates wrote.

Bates noted that website terms of service are often long, complex, and change frequently. While some websites require a user to read through the terms and explicitly agree to them, others merely include a link to the terms somewhere on the page. As a result, most users aren't even aware of the contractual terms that supposedly govern the site. Under those circumstances, it's not reasonable to make violation of such terms a criminal offense, Bates concluded.

This is not the first time a court has issued a ruling in this direction. It's also not the only way the courts have interpreted the frustratingly vague Computer Fraud and Abuse Act.

Posted on March 31, 2020 at 7:51 AM • 8 Comments

Privacy vs. Surveillance in the Age of COVID-19

The trade-offs are changing:

As countries around the world race to contain the pandemic, many are deploying digital surveillance tools as a means to exert social control, even turning security agency technologies on their own civilians. Health and law enforcement authorities are understandably eager to employ every tool at their disposal to try to hinder the virus -- even as the surveillance efforts threaten to alter the precarious balance between public safety and personal privacy on a global scale.

Yet ratcheting up surveillance to combat the pandemic now could permanently open the doors to more invasive forms of snooping later.

I think the effects of COVID-19 will be more drastic than the effects of the terrorist attacks of 9/11: not only with respect to surveillance, but across many aspects of our society. And while many things that would never be acceptable during normal times are reasonable things to do right now, we need to make sure we can ratchet them back once the current pandemic is over.

Cindy Cohn at EFF wrote:

We know that this virus requires us to take steps that would be unthinkable in normal times. Staying inside, limiting public gatherings, and cooperating with medically needed attempts to track the virus are, when approached properly, reasonable and responsible things to do. But we must be as vigilant as we are thoughtful. We must be sure that measures taken in the name of responding to COVID-19 are, in the language of international human rights law, "necessary and proportionate" to the needs of society in fighting the virus. Above all, we must make sure that these measures end and that the data collected for these purposes is not re-purposed for either governmental or commercial ends.

I worry that in our haste and fear, we will fail to do any of that.

More from EFF.

Posted on March 30, 2020 at 6:32 AM • 46 Comments

Friday Squid Blogging: Squid Can Edit Their Own Genome

Amazing:

Revealing yet another super-power in the skillful squid, scientists have discovered that squid massively edit their own genetic instructions not only within the nucleus of their neurons, but also within the axon -- the long, slender neural projections that transmit electrical impulses to other neurons. This is the first time that edits to genetic information have been observed outside of the nucleus of an animal cell.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Posted on March 27, 2020 at 4:28 PM • 184 Comments

Story of Gus Weiss

This is a long and fascinating article about Gus Weiss, who masterminded a long campaign to feed technical disinformation to the Soviet Union, which may or may not have caused a massive pipeline explosion somewhere in Siberia in the 1980s, if in fact there even was a massive pipeline explosion somewhere in Siberia in the 1980s.

Lots of information about the origins of US export control laws and sabotage operations.

Posted on March 27, 2020 at 6:03 AM • 7 Comments

On Cyber Warranties

Interesting article discussing cyber-warranties, and whether they are an effective way to transfer risk (as envisioned by Akerlof's "market for lemons") or a marketing trick.

The conclusion:

Warranties must transfer non-negligible amounts of liability to vendors in order to meaningfully overcome the market for lemons. Our preliminary analysis suggests the majority of cyber warranties cover the cost of repairing the device alone. Only cyber-incident warranties cover first-party costs from cyber-attacks -- why all such warranties were offered by firms selling intangible products is an open question. Consumers should question whether warranties can function as a costly signal when narrow coverage means vendors accept little risk.

Worse still, buyers cannot compare across cyber-incident warranty contracts due to the diversity of obligations and exclusions. Ambiguous definitions of the buyer's obligations and excluded events create uncertainty over what is covered. Moving toward standardized terms and conditions may help consumers, as has been pursued in cyber insurance, but this is in tension with innovation and product diversity.

[...]

Theoretical work suggests both the breadth of the warranty and the price of a product determine whether the warranty functions as a quality signal. Our analysis has not touched upon the price of these products. It could be that firms with ineffective products pass the cost of the warranty on to buyers via higher prices. Future studies could analyze warranties and price together to probe this issue.

In conclusion, cyber warranties -- particularly cyber-product warranties -- do not transfer enough risk to be a market fix as imagined in Woods. But this does not mean they are pure marketing tricks either. The most valuable feature of warranties is in preventing vendors from exaggerating what their products can do. Consumers who read the fine print can place greater trust in marketing claims so long as the functionality is covered by a cyber-incident warranty.


Posted on March 26, 2020 at 6:27 AM • 2 Comments
