Crypto-Gram

April 15, 2020

by Bruce Schneier

Fellow and Lecturer, Harvard Kennedy School

schneier@schneier.com

https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. TSA Admits Liquid Ban Is Security Theater
  2. The Insecurity of WordPress and Apache Struts
  3. Work-from-Home Security Advice
  4. Emergency Surveillance During COVID-19 Crisis
  5. Hacking Voice Assistants with Ultrasonic Waves
  6. Internet Voting in Puerto Rico
  7. Facial Recognition for People Wearing Masks
  8. On Cyber Warranties
  9. Story of Gus Weiss
  10. Privacy vs. Surveillance in the Age of COVID-19
  11. Clarifying the Computer Fraud and Abuse Act
  12. Dark Web Hosting Provider Hacked
  13. Marriott Was Hacked—Again
  14. Bug Bounty Programs Are Being Used to Buy Silence
  15. Security and Privacy Implications of Zoom
  16. Emotet Malware Causes Physical Damage
  17. Cybersecurity During COVID-19
  18. RSA-250 Factored
  19. Microsoft Buys Corp.com
  20. Kubernetes Security
  21. Contact Tracing COVID-19 Infections via Smartphone Apps
  22. Ransomware Now Leaking Stolen Documents
  23. Upcoming Speaking Engagements

TSA Admits Liquid Ban Is Security Theater

[2020.03.16] The TSA is allowing people to bring larger bottles of hand sanitizer with them on airplanes:

Passengers will now be allowed to travel with containers of liquid hand sanitizer up to 12 ounces. However, the agency cautioned that the shift could mean slightly longer waits at checkpoint because the containers may have to be screened separately when going through security.

Won’t airplanes blow up as a result? Of course not.

Would they have blown up last week were the restrictions lifted back then? Of course not.

It’s always been security theater.

Interesting context:

The TSA can declare this rule change because the limit was always arbitrary, just one of the countless rituals of security theater to which air passengers are subjected every day. Flights are no more dangerous today, with the hand sanitizer, than yesterday, and if the TSA allowed you to bring 12 ounces of shampoo on a flight tomorrow, flights would be no more dangerous then. The limit was bullshit. The ease with which the TSA can toss it aside makes that clear.

All over America, the coronavirus is revealing, or at least reminding us, just how much of contemporary American life is bullshit, with power structures built on punishment and fear as opposed to our best interest. Whenever the government or a corporation benevolently withdraws some punitive threat because of the coronavirus, it’s a signal that there was never any good reason for that threat to exist in the first place.


The Insecurity of WordPress and Apache Struts

[2020.03.18] Interesting data:

A study that analyzed all the vulnerability disclosures between 2010 and 2019 found that around 55% of all the security bugs that have been weaponized and exploited in the wild were for two major application frameworks, namely WordPress and Apache Struts.

The Drupal content management system ranked third, followed by Ruby on Rails and Laravel, according to a report published this week by risk analysis firm RiskSense.

The full report is here.


Work-from-Home Security Advice

[2020.03.19] SANS has made freely available its “Work-from-Home Awareness Kit.”

When I think about how COVID-19’s security measures are affecting organizational networks, I see several interrelated problems:

One, employees are working from their home networks and sometimes from their home computers. These systems are more likely to be out of date, unpatched, and unprotected. They are more vulnerable to attack simply because they are less secure.

Two, sensitive organizational data will likely migrate outside of the network. Employees working from home are going to save data on their own computers, where they aren’t protected by the organization’s security systems. This makes the data more likely to be hacked and stolen.

Three, employees are more likely to access their organizational networks insecurely. If the organization is lucky, they will have already set up a VPN for remote access. If not, they’re either trying to get one quickly or not bothering at all. Handing people VPN software to install and use with zero training is a recipe for security mistakes, but not using a VPN is even worse.

Four, employees are being asked to use new and unfamiliar tools like Zoom to replace face-to-face meetings. Again, these hastily set-up systems are likely to be insecure.

Five, the general chaos of “doing things differently” is an opening for attack. Tricks like business email compromise, where an employee gets a fake email from a senior executive asking him to transfer money to some account, will be more successful when the employee can’t walk down the hall to confirm the email’s validity—and when everyone is distracted and so many other things are being done differently.

Worrying about network security seems almost quaint in the face of the massive health risks from COVID-19, but attacks on infrastructure can have effects far greater than the infrastructure itself. Stay safe, everyone, and help keep your networks safe as well.


Emergency Surveillance During COVID-19 Crisis

[2020.03.20] Israel is using emergency surveillance powers to track people who may have COVID-19, joining China and Iran in using mass surveillance in this way. I believe pressure will increase to leverage existing corporate surveillance infrastructure for these purposes in the US and other countries. With that in mind, the EFF has some good thinking on how to balance public safety with civil liberties:

Thus, any data collection and digital monitoring of potential carriers of COVID-19 should take into consideration and commit to these principles:

  • Privacy intrusions must be necessary and proportionate. A program that collects, en masse, identifiable information about people must be scientifically justified and deemed necessary by public health experts for the purpose of containment. And that data processing must be proportionate to the need. For example, maintenance of 10 years of travel history of all people would not be proportionate to the need to contain a disease like COVID-19, which has a two-week incubation period.
  • Data collection based on science, not bias. Given the global scope of communicable diseases, there is historical precedent for improper government containment efforts driven by bias based on nationality, ethnicity, religion, and race—rather than facts about a particular individual’s actual likelihood of contracting the virus, such as their travel history or contact with potentially infected people. Today, we must ensure that any automated data systems used to contain COVID-19 do not erroneously identify members of specific demographic groups as particularly susceptible to infection.
  • Expiration. As in other major emergencies in the past, there is a hazard that the data surveillance infrastructure we build to contain COVID-19 may long outlive the crisis it was intended to address. The government and its corporate cooperators must roll back any invasive programs created in the name of public health after crisis has been contained.
  • Transparency. Any government use of “big data” to track virus spread must be clearly and quickly explained to the public. This includes publication of detailed information about the information being gathered, the retention period for the information, the tools used to process that information, the ways these tools guide public health decisions, and whether these tools have had any positive or negative outcomes.
  • Due Process. If the government seeks to limit a person’s rights based on this “big data” surveillance (for example, to quarantine them based on the system’s conclusions about their relationships or travel), then the person must have the opportunity to timely and fairly challenge these conclusions and limits.

Hacking Voice Assistants with Ultrasonic Waves

[2020.03.23] I previously wrote about hacking voice assistants with lasers. Turns out you can do much the same thing with ultrasonic waves:

Voice assistants—the demo targeted Siri, Google Assistant, and Bixby—are designed to respond when they detect the owner’s voice after noticing a trigger phrase such as ‘Ok, Google’.

Ultimately, commands are just sound waves, which other researchers have already shown can be emulated using ultrasonic waves which humans can’t hear, providing an attacker has a line of sight on the device and the distance is short.

What SurfingAttack adds to this is the ability to send the ultrasonic commands through a solid glass or wood table on which the smartphone was sitting using a circular piezoelectric disc connected to its underside.

Although the distance was only 43cm (17 inches), hiding the disc under a surface represents a more plausible, easier-to-conceal attack method than previous techniques.

Research paper. Demonstration video.


Internet Voting in Puerto Rico

[2020.03.24] Puerto Rico is considering allowing Internet voting. I have joined a group of security experts in a letter opposing the bill.

Cybersecurity experts agree that under current technology, no practically proven method exists to securely, verifiably, or privately return voted materials over the internet. That means that votes could be manipulated or deleted on the voter’s computer without the voter’s knowledge, local elections officials cannot verify that the voter’s ballot reflects the voter’s intent, and the voter’s selections could be traceable back to the individual voter. Such a system could violate protections guaranteeing a secret ballot, as outlined in Section 2, Article II of the Puerto Rico Constitution.

The ACLU agrees.


Facial Recognition for People Wearing Masks

[2020.03.25] The Chinese facial recognition company Hanwang claims it can recognize people wearing masks:

The company now says its masked facial recognition program has reached 95 percent accuracy in lab tests, and even claims that it is more accurate in real life, where its cameras take multiple photos of a person if the first attempt to identify them fails.

[…]

Counter-intuitively, training facial recognition algorithms to recognize masked faces involves throwing data away. A team at the University of Bradford published a study last year showing they could train a facial recognition program to accurately recognize half-faces by deleting parts of the photos they used to train the software.

When a facial recognition program tries to recognize a person, it takes a photo of the person to be identified, and reduces it down to a bundle, or vector, of numbers that describes the relative positions of features on the face.

[…]

Hanwang’s system works for masked faces by trying to guess what all the faces in its existing database of photographs would look like if they were masked.
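
To make the “vector of numbers” idea concrete, here’s a minimal sketch in Python of how embedding-based matching works. The vectors stand in for the output of a real face-recognition model, and the cosine-similarity metric and threshold are illustrative assumptions, not details of Hanwang’s system:

    # Illustrative sketch of embedding-based face matching; not Hanwang's
    # actual system. A real model would map a face image to a vector of
    # numbers; here we only compare vectors such a model would produce.
    import numpy as np

    def cosine_similarity(a, b):
        """Similarity of two embedding vectors, in [-1, 1]."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(probe, database, threshold=0.6):
        """Return the closest enrolled identity, if it's similar enough."""
        best_name, best_score = None, threshold
        for name, enrolled in database.items():
            score = cosine_similarity(probe, enrolled)
            if score > best_score:
                best_name, best_score = name, score
        return best_name

Matching a masked face then amounts to comparing only the vector components derived from the visible parts of the face, which is consistent with the half-face training described above.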


On Cyber Warranties

[2020.03.26] Interesting article discussing cyber-warranties, and whether they are an effective way to transfer risk (as envisioned by Akerlof’s “market for lemons”) or a marketing trick.

The conclusion:

Warranties must transfer non-negligible amounts of liability to vendors in order to meaningfully overcome the market for lemons. Our preliminary analysis suggests the majority of cyber warranties cover the cost of repairing the device alone. Only cyber-incident warranties cover first-party costs from cyber-attacks—why all such warranties were offered by firms selling intangible products is an open question. Consumers should question whether warranties can function as a costly signal when narrow coverage means vendors accept little risk.

Worse still, buyers cannot compare across cyber-incident warranty contracts due to the diversity of obligations and exclusions. Ambiguous definitions of the buyer’s obligations and excluded events create uncertainty over what is covered. Moving toward standardized terms and conditions may help consumers, as has been pursued in cyber insurance, but this is in tension with innovation and product diversity.

[…]

Theoretical work suggests both the breadth of the warranty and the price of a product determine whether the warranty functions as a quality signal. Our analysis has not touched upon the price of these products. It could be that firms with ineffective products pass the cost of the warranty on to buyers via higher prices. Future studies could analyze warranties and price together to probe this issue.

In conclusion, cyber warranties—particularly cyber-product warranties—do not transfer enough risk to be a market fix as imagined in Woods. But this does not mean they are pure marketing tricks either. The most valuable feature of warranties is in preventing vendors from exaggerating what their products can do. Consumers who read the fine print can place greater trust in marketing claims so long as the functionality is covered by a cyber-incident warranty.


Story of Gus Weiss

[2020.03.27] This is a long and fascinating article about Gus Weiss, who masterminded a long campaign to feed technical disinformation to the Soviet Union, which may or may not have caused a massive pipeline explosion somewhere in Siberia in the 1980s, if in fact there even was a massive pipeline explosion somewhere in Siberia in the 1980s.

Lots of information about the origins of US export controls laws and sabotage operations.


Privacy vs. Surveillance in the Age of COVID-19

[2020.03.30] The trade-offs are changing:

As countries around the world race to contain the pandemic, many are deploying digital surveillance tools as a means to exert social control, even turning security agency technologies on their own civilians. Health and law enforcement authorities are understandably eager to employ every tool at their disposal to try to hinder the virus—even as the surveillance efforts threaten to alter the precarious balance between public safety and personal privacy on a global scale.

Yet ratcheting up surveillance to combat the pandemic now could permanently open the doors to more invasive forms of snooping later.

I think the effects of COVID-19 will be more drastic than the effects of the terrorist attacks of 9/11: not only with respect to surveillance, but across many aspects of our society. And while many things that would never be acceptable during normal times are reasonable things to do right now, we need to make sure we can ratchet them back once the current pandemic is over.

Cindy Cohn at EFF wrote:

We know that this virus requires us to take steps that would be unthinkable in normal times. Staying inside, limiting public gatherings, and cooperating with medically needed attempts to track the virus are, when approached properly, reasonable and responsible things to do. But we must be as vigilant as we are thoughtful. We must be sure that measures taken in the name of responding to COVID-19 are, in the language of international human rights law, “necessary and proportionate” to the needs of society in fighting the virus. Above all, we must make sure that these measures end and that the data collected for these purposes is not re-purposed for either governmental or commercial ends.

I worry that in our haste and fear, we will fail to do any of that.

More from EFF.


Clarifying the Computer Fraud and Abuse Act

[2020.03.31] A federal court has ruled that violating a website’s terms of service is not “hacking” under the Computer Fraud and Abuse Act.

The plaintiffs wanted to investigate possible racial discrimination in online job markets by creating accounts for fake employers and job seekers. Leading job sites have terms of service prohibiting users from supplying fake information, and the researchers worried that their research could expose them to criminal liability under the CFAA, which makes it a crime to “access a computer without authorization or exceed authorized access.”

So in 2016 they sued the federal government, seeking a declaration that this part of the CFAA violated the First Amendment.

But rather than addressing that constitutional issue, Judge John Bates ruled on Friday that the plaintiffs’ proposed research wouldn’t violate the CFAA’s criminal provisions at all. Someone violates the CFAA when they bypass an access restriction like a password. But someone who logs into a website with a valid password doesn’t become a hacker simply by doing something prohibited by a website’s terms of service, the judge concluded.

“Criminalizing terms-of-service violations risks turning each website into its own criminal jurisdiction and each webmaster into his own legislature,” Bates wrote.

Bates noted that website terms of service are often long, complex, and change frequently. While some websites require a user to read through the terms and explicitly agree to them, others merely include a link to the terms somewhere on the page. As a result, most users aren’t even aware of the contractual terms that supposedly govern the site. Under those circumstances, it’s not reasonable to make violation of such terms a criminal offense, Bates concluded.

This is not the first time a court has issued a ruling in this direction. It’s also not the only way the courts have interpreted the frustratingly vague Computer Fraud and Abuse Act.

EDITED TO ADD (4/13): The actual opinion.


Dark Web Hosting Provider Hacked

[2020.04.01] Daniel’s Hosting, which hosts about 7,600 dark web portals for free, has been hacked and is down. It’s unclear when, or if, it will be back up.


Marriott Was Hacked—Again

[2020.04.02] Marriott announced another data breach, this one affecting 5.2 million people:

At this point, we believe that the following information may have been involved, although not all of this information was present for every guest involved:

  • Contact Details (e.g., name, mailing address, email address, and phone number)
  • Loyalty Account Information (e.g., account number and points balance, but not passwords)
  • Additional Personal Details (e.g., company, gender, and birthday day and month)
  • Partnerships and Affiliations (e.g., linked airline loyalty programs and numbers)
  • Preferences (e.g., stay/room preferences and language preference)

This isn’t nearly as bad as the 2014 Marriott breach—made public in 2018—which was the work of the Chinese government. But it does call into question whether Marriott is taking security seriously at all. It would be nice if there were a government regulatory body that could investigate and hold the company accountable.


Bug Bounty Programs Are Being Used to Buy Silence

[2020.04.03] Investigative report on how commercial bug-bounty programs like HackerOne, Bugcrowd, and SynAck are being used to silence researchers:

Used properly, bug bounty platforms connect security researchers with organizations wanting extra scrutiny. In exchange for reporting a security flaw, the researcher receives payment (a bounty) as a thank you for doing the right thing. However, CSO’s investigation shows that the bug bounty platforms have turned bug reporting and disclosure on its head, what multiple expert sources, including HackerOne’s former chief policy officer, Katie Moussouris, call a “perversion.”

[…]

Silence is the commodity the market appears to be demanding, and the bug bounty platforms have pivoted to sell what willing buyers want to pay for.

“Bug bounties are best when transparent and open. The more you try to close them down and place NDAs on them, the less effective they are, the more they become about marketing rather than security,” Robert Graham of Errata Security tells CSO.

Leitschuh, the Zoom bug finder, agrees. “This is part of the problem with the bug bounty platforms as they are right now. They aren’t holding companies to a 90-day disclosure deadline,” he says. “A lot of these programs are structured on this idea of non-disclosure. What I end up feeling like is that they are trying to buy researcher silence.”

The bug bounty platforms’ NDAs prohibit even mentioning the existence of a private bug bounty. Tweeting something like “Company X has a private bounty program over at Bugcrowd” would be enough to get a hacker kicked off their platform.

The carrot for researcher silence is the money—bounties can range from a few hundred to tens of thousands of dollars—but the stick to enforce silence is “safe harbor,” an organization’s public promise not to sue or criminally prosecute a security researcher attempting to report a bug in good faith.


Security and Privacy Implications of Zoom

[2020.04.03] Over the past few weeks, Zoom’s use has exploded since it became the video conferencing platform of choice in today’s COVID-19 world. (My own university, Harvard, uses it for all of its classes. Boris Johnson had a cabinet meeting over Zoom.) Over that same period, the company has been exposed for having both lousy privacy and lousy security. My goal here is to summarize all of the problems and talk about solutions and workarounds.

In general, Zoom’s problems fall into three broad buckets: (1) bad privacy practices, (2) bad security practices, and (3) bad user configurations.

Privacy first: Zoom spies on its users for personal profit. It seems to have cleaned this up somewhat since everyone started paying attention, but it still does it.

The company collects a laundry list of data about you, including user name, physical address, email address, phone number, job information, Facebook profile information, computer or phone specs, IP address, and any other information you create or upload. And it uses all of this surveillance data for profit, against your interests.

Last month, Zoom’s privacy policy contained this bit:

Does Zoom sell Personal Data? Depends what you mean by “sell.” We do not allow marketing companies, or anyone else to access Personal Data in exchange for payment. Except as described above, we do not allow any third parties to access any Personal Data we collect in the course of providing services to users. We do not allow third parties to use any Personal Data obtained from us for their own purposes, unless it is with your consent (e.g. when you download an app from the Marketplace. So in our humble opinion, we don’t think most of our users would see us as selling their information, as that practice is commonly understood.

“Depends what you mean by ‘sell.’” “…most of our users would see us as selling…” “…as that practice is commonly understood.” That paragraph was carefully worded by lawyers to permit them to do pretty much whatever they want with your information while pretending otherwise. Do any of you who “download[ed] an app from the Marketplace” remember consenting to them giving your personal data to third parties? I don’t.

Doc Searls has been all over this, writing about the surprisingly large number of third-party trackers on the Zoom website and its poor privacy practices in general.

On March 29th, Zoom rewrote its privacy policy:

We do not sell your personal data. Whether you are a business or a school or an individual user, we do not sell your data.

[…]

We do not use data we obtain from your use of our services, including your meetings, for any advertising. We do use data we obtain from you when you visit our marketing websites, such as zoom.us and zoom.com. You have control over your own cookie settings when visiting our marketing websites.

There’s lots more. It’s better than it was, but Zoom still collects a huge amount of data about you. And note that it considers its home pages “marketing websites,” which means it’s still using third-party trackers and surveillance-based advertising. (Honestly, Zoom, just stop doing it.)

Now security: Zoom’s security is at best sloppy, and malicious at worst. Motherboard reported that Zoom’s iPhone app was sending user data to Facebook, even if the user didn’t have a Facebook account. Zoom removed the feature, but its response should worry you about its sloppy coding practices in general:

“We originally implemented the ‘Login with Facebook’ feature using the Facebook SDK in order to provide our users with another convenient way to access our platform. However, we were recently made aware that the Facebook SDK was collecting unnecessary device data,” Zoom told Motherboard in a statement on Friday.

This isn’t the first time Zoom was sloppy with security. Last year, a researcher discovered that a vulnerability in the Mac Zoom client allowed any malicious website to enable the camera without permission. This seemed like a deliberate design choice: that Zoom designed its service to bypass browser security settings and remotely enable a user’s web camera without the user’s knowledge or consent. (EPIC filed an FTC complaint over this.) Zoom patched this vulnerability last year.

On 4/1, we learned that Zoom for Windows can be used to steal users’ Windows credentials.

Attacks work by using the Zoom chat window to send targets a string of text that represents the network location on the Windows device they’re using. The Zoom app for Windows automatically converts these so-called universal naming convention strings—such as \\attacker.example.com\C$—into clickable links. In the event that targets click on those links on networks that aren’t fully locked down, Zoom will send the Windows usernames and the corresponding NTLM hashes to the address contained in the link.

On 4/2, we learned that Zoom secretly displayed data from people’s LinkedIn profiles, which allowed some meeting participants to snoop on each other. (Zoom has fixed this one.)

I’m sure lots more of these bad security decisions, sloppy coding mistakes, and random software vulnerabilities are coming.

But it gets worse. Zoom’s encryption is awful. First, the company claims that it offers end-to-end encryption, but it doesn’t. It only provides link encryption, which means everything is unencrypted on the company’s servers. From the Intercept:

In Zoom’s white paper, there is a list of “pre-meeting security capabilities” that are available to the meeting host that starts with “Enable an end-to-end (E2E) encrypted meeting.” Later in the white paper, it lists “Secure a meeting with E2E encryption” as an “in-meeting security capability” that’s available to meeting hosts. When a host starts a meeting with the “Require Encryption for 3rd Party Endpoints” setting enabled, participants see a green padlock that says, “Zoom is using an end to end encrypted connection” when they mouse over it.

But when reached for comment about whether video meetings are actually end-to-end encrypted, a Zoom spokesperson wrote, “Currently, it is not possible to enable E2E encryption for Zoom video meetings. Zoom video meetings use a combination of TCP and UDP. TCP connections are made using TLS and UDP connections are encrypted with AES using a key negotiated over a TLS connection.”

They’re also lying about the type of encryption. On 4/3, Citizen Lab reported:

Zoom documentation claims that the app uses “AES-256” encryption for meetings where possible. However, we find that in each Zoom meeting, a single AES-128 key is used in ECB mode by all participants to encrypt and decrypt audio and video. The use of ECB mode is not recommended because patterns present in the plaintext are preserved during encryption.

The AES-128 keys, which we verified are sufficient to decrypt Zoom packets intercepted in Internet traffic, appear to be generated by Zoom servers, and in some cases, are delivered to participants in a Zoom meeting through servers in China, even when all meeting participants, and the Zoom subscriber’s company, are outside of China.

I’m okay with AES-128, but using ECB (electronic codebook) mode indicates that there is no one at the company who knows anything about cryptography.
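
Here’s a short demonstration of why ECB is disqualifying, using the third-party Python cryptography package: identical plaintext blocks encrypt to identical ciphertext blocks, which is exactly the pattern preservation Citizen Lab describes. A mode like CTR, shown for contrast, doesn’t have this problem.

    # ECB's pattern leakage: identical 16-byte plaintext blocks produce
    # identical ciphertext blocks. Requires: pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)                 # AES-128, as Citizen Lab observed
    plaintext = b"ATTACK AT DAWN!!" * 4  # four identical 16-byte blocks

    ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ct_ecb = ecb.update(plaintext) + ecb.finalize()

    ctr = Cipher(algorithms.AES(key), modes.CTR(os.urandom(16))).encryptor()
    ct_ctr = ctr.update(plaintext) + ctr.finalize()

    split = lambda ct: [ct[i:i + 16].hex() for i in range(0, len(ct), 16)]
    print("ECB:", split(ct_ecb))  # all four blocks identical: leaks structure
    print("CTR:", split(ct_ctr))  # all four blocks different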

And that China connection is worrisome. Citizen Lab again:

Zoom, a Silicon Valley-based company, appears to own three companies in China through which at least 700 employees are paid to develop Zoom’s software. This arrangement is ostensibly an effort at labor arbitrage: Zoom can avoid paying US wages while selling to US customers, thus increasing their profit margin. However, this arrangement may make Zoom responsive to pressure from Chinese authorities.

Or from Chinese programmers slipping backdoors into the code at the request of the government.

Finally, bad user configuration. Zoom has a lot of options. The defaults aren’t great, and if you don’t configure your meetings right you’re leaving yourself open to all sorts of mischief.

“Zoombombing” is the most visible problem. People are finding open Zoom meetings, classes, and events; joining them; and sharing their screens to broadcast offensive content—porn, mostly—to everyone. It’s awful if you’re the victim, and a consequence of allowing any participant to share their screen.

Even without screen sharing, people are logging in to random Zoom meetings and disrupting them. Turns out that Zoom didn’t make meeting IDs long enough to prevent someone from randomly trying them, looking for meetings. This isn’t new; Check Point Research reported this last summer. Instead of making the meeting IDs longer or more complicated—which it should have done—it enabled meeting passwords by default. Of course most of us don’t use passwords, and there are now automatic tools for finding Zoom meetings.
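
Some back-of-the-envelope math shows why. The figures below (nine-digit IDs, one million concurrent meetings, the scan rate) are illustrative assumptions, not Zoom’s actual numbers:

    # Why short numeric meeting IDs are findable by random guessing.
    # All three figures are illustrative assumptions, not Zoom's numbers.
    id_space = 10 ** 9             # nine-digit meeting IDs
    active_meetings = 1_000_000    # assumed concurrent meetings
    guesses_per_hour = 100_000     # assumed automated scan rate

    p_hit = active_meetings / id_space   # chance any one guess lands
    print(f"P(hit per guess) = {p_hit:.4f}")                        # 0.0010
    print(f"Meetings found per hour: {guesses_per_hour * p_hit:.0f}")  # 100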

For help securing your Zoom sessions, Zoom has a good guide. Short summary: don’t share the meeting ID more than you have to, use a password in addition to a meeting ID, use the waiting room if you can, and pay attention to who has what permissions.

That’s what we know about Zoom’s privacy and security so far. Expect more revelations in the weeks and months to come. The New York Attorney General is investigating the company. Security researchers are combing through the software, looking for other things Zoom is doing and not telling anyone about. There are more stories waiting to be discovered.

Zoom is a security and privacy disaster, but until now had managed to avoid public accountability because it was relatively obscure. Now that it’s in the spotlight, it’s all coming out. (Their 4/1 response to all of this is here.) On 4/2, the company said it would freeze all feature development and focus on security and privacy. Let’s see if that’s anything more than a PR move.

In the meantime, you should either lock Zoom down as best you can, or—better yet—abandon the platform altogether. Jitsi is a distributed, free, and open-source alternative. Start your meeting here.

EDITED TO ADD: Fight for the Future is on this.

Steve Bellovin’s comments.

Meanwhile, lots of Zoom video recordings are available on the Internet. The article doesn’t have any useful details about how they got there:

Videos viewed by The Post included one-on-one therapy sessions; a training orientation for workers doing telehealth calls, which included people’s names and phone numbers; small-business meetings, which included private company financial statements; and elementary-school classes, in which children’s faces, voices and personal details were exposed.

Many of the videos include personally identifiable information and deeply intimate conversations, recorded in people’s homes. Other videos include nudity, such as one in which an aesthetician teaches students how to give a Brazilian wax.

[…]

Many of the videos can be found on unprotected chunks of Amazon storage space, known as buckets, which are widely used across the Web. Amazon buckets are locked down by default, but many users make the storage space publicly accessible either inadvertently or to share files with other people.

EDITED TO ADD (4/4): New York City has banned Zoom from its schools.


Emotet Malware Causes Physical Damage

[2020.04.06] Microsoft is reporting that an Emotet malware infection shut down a network by causing computers to overheat and then crash.

The Emotet payload was delivered and executed on the systems of Fabrikam—a fake name Microsoft gave the victim in their case study—five days after the employee’s user credentials were exfiltrated to the attacker’s command and control (C&C) server.

Before this, the threat actors used the stolen credentials to deliver phishing emails to other Fabrikam employees, as well as to their external contacts, with more and more systems getting infected and downloading additional malware payloads.

The malware further spread through the network without raising any red flags by stealing admin account credentials authenticating itself on new systems, later used as stepping stones to compromise other devices.

Within 8 days since that first booby-trapped attachment was opened, Fabrikam’s entire network was brought to its knees despite the IT department’s efforts, with PCs overheating, freezing, and rebooting because of blue screens, and Internet connections slowing down to a crawl because of Emotet devouring all the bandwidth.

The infection mechanism was one employee opening a malicious attachment to a phishing email. I can’t find any information on what kind of attachment.


Cybersecurity During COVID-19

[2020.04.07] Three weeks ago (could it possibly be that long already?), I wrote about the increased risks of working remotely during the COVID-19 pandemic.

One, employees are working from their home networks and sometimes from their home computers. These systems are more likely to be out of date, unpatched, and unprotected. They are more vulnerable to attack simply because they are less secure.

Two, sensitive organizational data will likely migrate outside of the network. Employees working from home are going to save data on their own computers, where they aren’t protected by the organization’s security systems. This makes the data more likely to be hacked and stolen.

Three, employees are more likely to access their organizational networks insecurely. If the organization is lucky, they will have already set up a VPN for remote access. If not, they’re either trying to get one quickly or not bothering at all. Handing people VPN software to install and use with zero training is a recipe for security mistakes, but not using a VPN is even worse.

Four, employees are being asked to use new and unfamiliar tools like Zoom to replace face-to-face meetings. Again, these hastily set-up systems are likely to be insecure.

Five, the general chaos of “doing things differently” is an opening for attack. Tricks like business email compromise, where an employee gets a fake email from a senior executive asking him to transfer money to some account, will be more successful when the employee can’t walk down the hall to confirm the email’s validity—and when everyone is distracted and so many other things are being done differently.

NASA is reporting an increase in cyberattacks. From an agency memo:

A new wave of cyber-attacks is targeting Federal Agency Personnel, required to telework from home, during the Novel Coronavirus (COVID-19) outbreak. During the past few weeks, NASA’s Security Operations Center (SOC) mitigation tools have prevented success of these attempts. Here are some examples of what’s been observed in the past few days:

  • Doubling of email phishing attempts
  • Exponential increase in malware attacks on NASA systems
  • Double the number of mitigation-blocking of NASA systems trying to access malicious sites (often unknowingly) due to users accessing the Internet

Here’s another article that makes basically the same points I did:

But the rapid shift to remote working will inevitably create or exacerbate gaps in security. Employees using unfamiliar software will get settings wrong and leave themselves open to breaches. Staff forced to use their own ageing laptops from home will find their data to be less secure than those using modern equipment.

That’s a big problem because the security issues are not going away. For the last couple of months coronavirus-themed malware and phishing scams have been on the rise. Business email compromise scams—where crooks impersonate a CEO or other senior staff member and then try to trick workers into sending money to their accounts—could be made easier if staff primarily rely on email to communicate while at home.

EDITED TO ADD: This post has been translated into Portuguese.

EDITED TO ADD (4/13): A three-part series about home-office cybersecurity.


RSA-250 Factored

[2020.04.08] RSA-250 has been factored.

This computation was performed with the Number Field Sieve algorithm, using the open-source CADO-NFS software.

The total computation time was roughly 2700 core-years, using Intel Xeon Gold 6130 CPUs as a reference (2.1GHz):

RSA-250 sieving: 2450 physical core-years
RSA-250 matrix: 250 physical core-years

The computation involved tens of thousands of machines worldwide, and was completed in a few months.
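
For a sense of scale: RSA-250 is a 250-digit (829-bit) modulus. Here’s a toy Python sketch showing that a semiprime with two ten-digit factors falls instantly to generic methods; numbers the size of RSA-250 require specialized algorithms like the Number Field Sieve, and the core-years above.

    # Toy contrast, not the CADO-NFS computation: a ~20-digit semiprime
    # factors instantly with generic methods. Requires: pip install sympy
    import time
    from sympy import randprime, factorint

    p = randprime(10**9, 10**10)   # two random ~10-digit primes
    q = randprime(10**9, 10**10)
    n = p * q

    start = time.time()
    factors = factorint(n)         # trial division / Pollard methods suffice
    elapsed = time.time() - start
    print(f"{n} = {factors} in {elapsed:.2f}s")
    print(f"This n has {len(str(n))} digits; RSA-250 has 250.")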

News article. On the factoring challenges.


Microsoft Buys Corp.com

[2020.04.09] A few months ago, Brian Krebs told the story of the domain corp.com, and how it is basically a security nightmare:

At issue is a problem known as “namespace collision,” a situation where domain names intended to be used exclusively on an internal company network end up overlapping with domains that can resolve normally on the open Internet.

Windows computers on an internal corporate network validate other things on that network using a Microsoft innovation called Active Directory, which is the umbrella term for a broad range of identity-related services in Windows environments. A core part of the way these things find each other involves a Windows feature called “DNS name devolution,” which is a kind of network shorthand that makes it easier to find other computers or servers without having to specify a full, legitimate domain name for those resources.

For instance, if a company runs an internal network with the name internalnetwork.example.com, and an employee on that network wishes to access a shared drive called “drive1,” there’s no need to type “drive1.internalnetwork.example.com” into Windows Explorer; typing “\\drive1” alone will suffice, and Windows takes care of the rest.

But things can get far trickier with an internal Windows domain that does not map back to a second-level domain the organization actually owns and controls. And unfortunately, in early versions of Windows that supported Active Directory—Windows 2000 Server, for example—the default or example Active Directory path was given as “corp,” and many companies apparently adopted this setting without modifying it to include a domain they controlled.

Compounding things further, some companies then went on to build (and/or assimilate) vast networks of networks on top of this erroneous setting.

Now, none of this was much of a security concern back in the day when it was impractical for employees to lug their bulky desktop computers and monitors outside of the corporate network. But what happens when an employee working at a company with an Active Directory network path called “corp” takes a company laptop to the local Starbucks?

Chances are good that at least some resources on the employee’s laptop will still try to access that internal “corp” domain. And because of the way DNS name devolution works on Windows, that company laptop online via the Starbucks wireless connection is likely to then seek those same resources at “corp.com.”

In practical terms, this means that whoever controls corp.com can passively intercept private communications from hundreds of thousands of computers that end up being taken outside of a corporate environment which uses this “corp” designation for its Active Directory domain.
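
A simplified model of the lookup behavior shows the failure mode. This is not Windows’ exact resolution algorithm (which involves devolution levels and connection-specific suffixes), but it illustrates how a bare “corp” Active Directory name leads to queries for corp.com on the public Internet.

    # Simplified model of Windows short-name resolution; the real
    # algorithm (devolution levels, connection-specific suffixes)
    # is more involved.
    def candidates(short_name, suffix_search_list):
        """FQDNs tried, in order, when resolving an unqualified name."""
        return [f"{short_name}.{suffix}" for suffix in suffix_search_list]

    # On the corporate network, the search list keeps lookups internal:
    print(candidates("drive1",
                     ["internalnetwork.example.com", "example.com"]))

    # A laptop from a company that kept the bare "corp" default, now on
    # public Wi-Fi, can fall through to the real corp.com domain:
    print(candidates("drive1", ["corp", "corp.com"]))
    # ['drive1.corp', 'drive1.corp.com']  <- resolves on the open Internet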

Microsoft just bought it, so it wouldn’t fall into the hands of any bad actors:

In a written statement, Microsoft said it acquired the domain to protect its customers.

“To help in keeping systems protected we encourage customers to practice safe security habits when planning for internal domain and network names,” the statement reads. “We released a security advisory in June of 2009 and a security update that helps keep customers safe. In our ongoing commitment to customer security, we also acquired the Corp.com domain.”


Kubernetes Security

[2020.04.10] Attack matrix for Kubernetes, using the MITRE ATT&CK framework. A good first step towards understanding the security of this suddenly popular and very complex container orchestration system.


Contact Tracing COVID-19 Infections via Smartphone Apps

[2020.04.13] Google and Apple have announced a joint project to create a privacy-preserving COVID-19 contact tracing app. (Details, such as we have them, are here.) It’s similar to the app being developed at MIT, and similar to others being described and developed elsewhere. It’s nice seeing the privacy protections; they’re well thought out.
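
The basic architecture, as described in the public proposals: each phone broadcasts short-lived identifiers derived from a secret key that never leaves the device, and all matching happens locally. Here’s a much-simplified Python sketch; the actual Apple/Google specification uses different key-derivation details than this HMAC stand-in.

    # Much-simplified sketch of privacy-preserving proximity tracing.
    # The real Apple/Google spec differs in its key derivation and many
    # other details; this only illustrates the architecture.
    import hmac, hashlib, os

    def rolling_id(daily_key, interval):
        """Short-lived broadcast identifier for one ~15-minute interval."""
        return hmac.new(daily_key, interval.to_bytes(4, "big"),
                        hashlib.sha256).digest()[:16]

    alice_key = os.urandom(16)              # stays on Alice's phone
    broadcast = [rolling_id(alice_key, i) for i in range(96)]  # one day

    heard_by_bob = set(broadcast[10:12])    # Bob's phone records nearby IDs

    # If Alice is diagnosed, she publishes only her daily key. Bob's phone
    # re-derives her rolling IDs locally and checks for overlap; there is
    # no central database of who met whom.
    exposed = any(rolling_id(alice_key, i) in heard_by_bob
                  for i in range(96))
    print("Exposure detected:", exposed)    # True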

I was going to write a long essay about the security and privacy concerns, but Ross Anderson beat me to it. (Note that some of his comments are UK-specific.)

First, it isn’t anonymous. Covid-19 is a notifiable disease so a doctor who diagnoses you must inform the public health authorities, and if they have the bandwidth they call you and ask who you’ve been in contact with. They then call your contacts in turn. It’s not about consent or anonymity, so much as being persuasive and having a good bedside manner.

I’m relaxed about doing all this under emergency public-health powers, since this will make it harder for intrusive systems to persist after the pandemic than if they have some privacy theater that can be used to argue that the whizzy new medi-panopticon is legal enough to be kept running.

Second, contact tracers have access to all sorts of other data such as public transport ticketing and credit-card records. This is how a contact tracer in Singapore is able to phone you and tell you that the taxi driver who took you yesterday from Orchard Road to Raffles has reported sick, so please put on a mask right now and go straight home. This must be controlled; Taiwan lets public-health staff access such material in emergencies only.

Third, you can’t wait for diagnoses. In the UK, you only get a test if you’re a VIP or if you get admitted to hospital. Even so the results take 1-3 days to come back. While the VIPs share their status on twitter or facebook, the other diagnosed patients are often too sick to operate their phones.

Fourth, the public health authorities need geographical data for purposes other than contact tracing – such as to tell the army where to build more field hospitals, and to plan shipments of scarce personal protective equipment. There are already apps that do symptom tracking but more would be better. So the UK app will ask for the first three characters of your postcode, which is about enough to locate which hospital you’d end up in.

Fifth, although the cryptographers – and now Google and Apple – are discussing more anonymous variants of the Singapore app, that’s not the problem. Anyone who’s worked on abuse will instantly realise that a voluntary app operated by anonymous actors is wide open to trolling. The performance art people will tie a phone to a dog and let it run around the park; the Russians will use the app to run service-denial attacks and spread panic; and little Johnny will self-report symptoms to get the whole school sent home.

I recommend reading his essay in full. Also worth reading are this EFF essay, and this ACLU white paper.

To me, the real problems aren’t around privacy and security. The efficacy of any app-based contact tracing is still unproven. A “contact” from the point of view of an app isn’t the same as an epidemiological contact. And the ratio of infections to contacts is high. We would have to deal with the false positives (being close to someone else, but separated by a partition or other barrier) and the false negatives (not being close to someone else, but contracting the disease through a mutually touched object). And without cheap, fast, and accurate testing, the information from any of these apps isn’t very useful. So I agree with Ross that this is primarily an exercise in that false syllogism: Something must be done. This is something. Therefore, we must do it. It’s techies proposing tech solutions to what is primarily a social problem.

EDITED TO ADD: Susan Landau on contact tracing apps and how they’re being oversold. And Farzad Mostashari, former coordinator for health IT at the Department of Health and Human Services, on contact tracing apps.

As long as 1) every contact does not result in an infection, and 2) a large percentage of people with the disease are asymptomatic and don’t realize they have it, I can’t see how this sort of app is valuable. If we had cheap, fast, and accurate testing for everyone on demand…maybe. But I still don’t think so.


Ransomware Now Leaking Stolen Documents

[2020.04.14] Originally, ransomware didn’t involve any data theft. Malware would encrypt the data on your computer, and demand a ransom for the encryption key. Now ransomware is increasingly involving both encryption and exfiltration. Brian Krebs wrote about this in December. It’s a further incentive for the victims to pay.

Recently, the aerospace company Visser Precision was hit by the DoppelPaymer ransomware. The company refused to pay, so the criminals leaked documents and data belonging to Visser Precision, Lockheed Martin, Boeing, SpaceX, the US Navy, and others.


Upcoming Speaking Engagements

[2020.04.14] This is a current list of where and when I am scheduled to speak:

  • I’m being interviewed on “Hacking in the Public Interest” as part of the Black Hat Webcast Series, on Thursday, April 16, 2020 at 2:00 PM EDT.

The list is maintained on this page.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, Click Here to Kill Everybody—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org.

Copyright © 2020 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.