Crypto-Gram

March 15, 2022

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. Secret CIA Data Collection Program
  2. Vendors are Fixing Security Flaws Faster
  3. Possible Government Surveillance of the Otter.ai Transcription App
  4. Stealing Bicycles by Swapping QR Codes
  5. A New Cybersecurity “Social Contract”
  6. Bypassing Apple’s AirTag Security
  7. An Elaborate Employment Con in the Internet Age
  8. Privacy Violating COVID Tests
  9. Insurance Coverage for NotPetya Losses
  10. Decrypting Hive Ransomware Data
  11. Vulnerability in Stalkerware Apps
  12. Details of an NSA Hacking Operation
  13. Samsung Encryption Flaw
  14. Hacking Alexa through Alexa’s Speech
  15. Using Radar to Read Body Language
  16. Fraud on Zelle
  17. Where’s the Russia-Ukraine Cyberwar?
  18. Leak of Russian Censorship Data
  19. Upcoming Speaking Events

Secret CIA Data Collection Program

[2022.02.15] Two US senators claim that the CIA has been running an unregulated—and almost certainly illegal—mass surveillance program on Americans.

The senators’ statement. Some declassified information from the CIA.

No real details yet.


Vendors are Fixing Security Flaws Faster

[2022.02.16] Google’s Project Zero is reporting that software vendors are patching their code faster.

tl;dr

  • In 2021, vendors took an average of 52 days to fix security vulnerabilities reported from Project Zero. This is a significant acceleration from an average of about 80 days 3 years ago.
  • In addition to the average now being well below the 90-day deadline, we have also seen a dropoff in vendors missing the deadline (or the additional 14-day grace period). In 2021, only one bug exceeded its fix deadline, though 14% of bugs required the grace period.
  • Differences in the amount of time it takes a vendor/product to ship a fix to users reflects their product design, development practices, update cadence, and general processes towards security reports. We hope that this comparison can showcase best practices, and encourage vendors to experiment with new policies.
  • This data aggregation and analysis is relatively new for Project Zero, but we hope to do it more in the future. We encourage all vendors to consider publishing aggregate data on their time-to-fix and time-to-patch for externally reported vulnerabilities, as well as more data sharing and transparency in general.

Possible Government Surveillance of the Otter.ai Transcription App

[2022.02.17] A reporter interviews a Uyghur human-rights advocate, and uses the Otter.ai transcription app.

The next day, I received an odd note from Otter.ai, the automated transcription app that I had used to record the interview. It read: “Hey Phelim, to help us improve your Otter’s experience, what was the purpose of this particular recording with titled ‘Mustafa Aksu’ created at ‘2021-11-08 11:02:41’?”

Customer service or Chinese surveillance? Turns out it’s hard to tell.

EDITED TO ADD (3/12): Another article.


Stealing Bicycles by Swapping QR Codes

[2022.02.21] This is a clever hack against those bike-rental kiosks:

They’re stealing Citi Bikes by switching the QR scan codes on two bicycles near each other at a docking station, then waiting for an unsuspecting cyclist to try to unlock a bike with his or her smartphone app.

The app doesn’t work for the rider but does free up the nearby Citi Bike with the switched code, where a thief is waiting, jumps on the bicycle and rides off.

Presumably they’re using cameras, printers, and stickers to swap the codes on the bikes. And presumably the victim is charged for not returning the stolen bicycle.

This story is from last year, but I hadn’t seen it before. There’s a video of one theft at the link.


A New Cybersecurity “Social Contract”

[2022.02.22] The US National Cyber Director Chris Inglis wrote an essay outlining a new social contract for the cyber age:

The United States needs a new social contract for the digital age—one that meaningfully alters the relationship between public and private sectors and proposes a new set of obligations for each. Such a shift is momentous but not without precedent. From the Pure Food and Drug Act of 1906 to the Clean Air Act of 1963 and the public-private revolution in airline safety in the 1990s, the United States has made important adjustments following profound changes in the economy and technology.

A similarly innovative shift in the cyber-realm will likely require an intense process of development and iteration. Still, its contours are already clear: the private sector must prioritize long-term investments in a digital ecosystem that equitably distributes the burden of cyberdefense. Government, in turn, must provide more timely and comprehensive threat information while simultaneously treating industry as a vital partner. Finally, both the public and private sectors must commit to moving toward true collaboration—contributing resources, attention, expertise, and people toward institutions designed to prevent, counter, and recover from cyber-incidents.

The devil is in the details, of course, but he’s 100% right when he writes that the market cannot solve this: that the incentives are all wrong. While he never actually uses the word “regulation,” the future he postulates won’t be possible without it. Regulation is how society aligns market incentives with its own values. He also leaves out the NSA—whose effectiveness rests on all of these global insecurities—and the FBI, whose incessant push for encryption backdoors goes against his vision of increased cybersecurity. I’m not sure how he’s going to get them on board. Or the surveillance capitalists, for that matter. A lot of what he wants will require reining in that particular business model.

Good essay—worth reading in full.


Bypassing Apple’s AirTag Security

[2022.02.23] A Berlin-based company has developed an AirTag clone that bypasses Apple’s anti-stalker security systems. Source code for these AirTag clones is available online.

So now we have several problems with the system. Apple’s anti-stalker security only works with iPhones. (Apple wrote an Android app that can detect AirTags, but how many people are going to download it?) And now non-AirTags can piggyback on Apple’s system without triggering the alarms.

Apple didn’t think this through nearly as well as it claims to have. I think the general problem is one that I have written about before: designers just don’t have intimate threats in mind when building these systems.


An Elaborate Employment Con in the Internet Age

[2022.02.24] The story is an old one, but the tech gives it a bunch of new twists:

Gemma Brett, a 27-year-old designer from west London, had only been working at Madbird for two weeks when she spotted something strange. Curious about what her commute would be like when the pandemic was over, she searched for the company’s office address. The result looked nothing like the videos on Madbird’s website of a sleek workspace buzzing with creative-types. Instead, Google Street View showed an upmarket block of flats in London’s Kensington.

[…]

Using online reverse image searches they dug deeper. They found that almost all the work Madbird claimed as its own had been stolen from elsewhere on the internet—and that some of the colleagues they’d been messaging online didn’t exist.

[…]

At least six of the most senior employees profiled by Madbird were fake. Their identities stitched together using photos stolen from random corners of the internet and made-up names. They included Madbird’s co-founder, Dave Stanfield—despite him having a LinkedIn profile and Ali referring to him constantly. Some of the duped staff had even received emails from him.

Read the whole sad story. What’s amazing is how shallow all the fakery was, and how quickly it all unraveled once people started digging. But until there’s suspicion enough to dig, we take all of these things at face value. And in COVID times, there’s no face-to-face anything.


Privacy Violating COVID Tests

[2022.02.25] A good lesson in reading the fine print:

Cignpost Diagnostics, which trades as ExpressTest and offers £35 tests for holidaymakers, said it holds the right to analyse samples from swabs to “learn more about human health”—and sell information on to third parties.

Individuals are required to give informed consent for their sensitive medical data to be used, but customers’ consent for their DNA to be sold was buried in Cignpost’s online documents.

Of course, no one ever reads the fine print.

EDITED TO ADD (3/12): The original story.


Insurance Coverage for NotPetya Losses

[2022.02.28] Tarah Wheeler and Josephine Wolff analyze a recent court decision that the NotPetya attacks are not considered an act of war under the wording of Merck’s insurance policy, and that the insurers must pay the $1B+ claim. Wheeler and Wolff argue that the judge “did the right thing for the wrong reasons.”


Decrypting Hive Ransomware Data

[2022.03.01] Nice piece of research:

Abstract: Among the many types of malicious codes, ransomware poses a major threat. Ransomware encrypts data and demands a ransom in exchange for decryption. As data recovery is impossible if the encryption key is not obtained, some companies suffer from considerable damage, such as the payment of huge amounts of money or the loss of important data. In this paper, we analyzed Hive ransomware, which appeared in June 2021. Hive ransomware has caused immense harm, leading the FBI to issue an alert about it. To minimize the damage caused by Hive Ransomware and to help victims recover their files, we analyzed Hive Ransomware and studied recovery methods. By analyzing the encryption process of Hive ransomware, we confirmed that vulnerabilities exist by using their own encryption algorithm. We have recovered the master key for generating the file encryption key partially, to enable the decryption of data encrypted by Hive ransomware. We recovered 95% of the master key without the attacker’s RSA private key and decrypted the actual infected data. To the best of our knowledge, this is the first successful attempt at decrypting Hive ransomware. It is expected that our method can be used to reduce the damage caused by Hive ransomware.

Here’s the flaw:

The cryptographic vulnerability identified by the researchers concerns the mechanism by which the master keys are generated and stored, with the ransomware strain only encrypting select portions of the file as opposed to the entire contents using two keystreams derived from the master key.

The encryption keystream, which is created from an XOR operation of the two keystreams, is then XORed with the data in alternate blocks to generate the encrypted file. But this technique also makes it possible to guess the keystreams and restore the master key, in turn enabling the decode of encrypted files sans the attacker’s private key.

The researchers said that they were able to weaponize the flaw to devise a method to reliably recover more than 95% of the keys employed during encryption.
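
To make the class of weakness concrete, here is a minimal Python sketch. It is not the researchers’ code, and it substitutes a deliberately toy keystream for Hive’s actual key schedule; it only illustrates why an XOR-keystream design is recoverable: wherever the attacker can guess a stretch of plaintext (a standard file header, say), XORing that guess with the ciphertext reveals the keystream for that stretch, and enough recovered keystream constrains the master key it was derived from.

    # Toy sketch, NOT the researchers' method or Hive's real key schedule.
    # It illustrates why an XOR keystream is recoverable from guessable
    # plaintext, which is the core of the flaw described above.
    import os

    master_key = os.urandom(64)          # stand-in for Hive's master key

    def keystream(offset: int, length: int) -> bytes:
        # Toy derivation: read master-key bytes starting at an offset.
        return bytes(master_key[(offset + i) % len(master_key)]
                     for i in range(length))

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # Victim side: encrypt a file whose header format is publicly known.
    plaintext = b"%PDF-1.7\n" + os.urandom(32)
    ciphertext = xor(plaintext, keystream(0, len(plaintext)))

    # Attacker side: guess the header, recover that stretch of keystream.
    guessed_header = b"%PDF-1.7\n"
    recovered = xor(ciphertext[:len(guessed_header)], guessed_header)
    assert recovered == keystream(0, len(guessed_header))

Because Hive derives its keystreams from a master key that is reused across files, recovering enough keystream in this way is what let the researchers rebuild roughly 95% of the master key without the attacker’s private key.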


Vulnerability in Stalkerware Apps

[2022.03.02] TechCrunch is reporting—but not describing in detail—a vulnerability in a series of stalkerware apps that exposes personal information of the victims. The vulnerability isn’t in the apps installed on the victims’ phones, but in the website the stalker goes to view the information the app collects. The article is worth reading, less for the description of the vulnerability and more for the shadowy string of companies behind these stalkerware apps.


Details of an NSA Hacking Operation

[2022.03.03] Pangu Lab in China just published a report of a hacking operation by the Equation Group (aka the NSA). It noticed the hack in 2013, and was able to map it with Equation Group tools published by the Shadow Brokers (aka some Russian group).

…the scope of victims exceeded 287 targets in 45 countries, including Russia, Japan, Spain, Germany, Italy, etc. The attack lasted for over 10 years. Moreover, one victim in Japan is used as a jump server for further attack.

News article.


Samsung Encryption Flaw

[2022.03.04] Researchers have found a major encryption flaw in 100 million Samsung Galaxy phones.

From the abstract:

In this work, we expose the cryptographic design and implementation of Android’s Hardware-Backed Keystore in Samsung’s Galaxy S8, S9, S10, S20, and S21 flagship devices. We reversed-engineered and provide a detailed description of the cryptographic design and code structure, and we unveil severe design flaws. We present an IV reuse attack on AES-GCM that allows an attacker to extract hardware-protected key material, and a downgrade attack that makes even the latest Samsung devices vulnerable to the IV reuse attack. We demonstrate working key extraction attacks on the latest devices. We also show the implications of our attacks on two higher-level cryptographic protocols between the TrustZone and a remote server: we demonstrate a working FIDO2 WebAuthn login bypass and a compromise of Google’s Secure Key Import.

Here are the details:

As we discussed in Section 3, the wrapping key used to encrypt the key blobs (HDK) is derived using a salt value computed by the Keymaster TA. In v15 and v20-s9 blobs, the salt is a deterministic function that depends only on the application ID and application data (and constant strings), which the Normal World client fully controls. This means that for a given application, all key blobs will be encrypted using the same key. As the blobs are encrypted in AES-GCM mode-of-operation, the security of the resulting encryption scheme depends on its IV values never being reused.

Gadzooks. That’s a really embarrassing mistake. GCM needs a new nonce for every encryption. Samsung took a secure cipher mode and implemented it insecurely.
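
Here is a minimal sketch of the underlying failure mode, using the pyca/cryptography library. It is illustrative only, not Samsung’s Keymaster code: encrypting two messages under the same AES-GCM key and nonce means both use the same counter-mode keystream, so XORing the two ciphertexts cancels the keystream and exposes the relationship between the plaintexts.

    # Illustrative only -- not Samsung's Keymaster code. Shows why reusing
    # an AES-GCM nonce under one key is fatal: GCM is CTR mode underneath,
    # so the same (key, nonce) pair produces the same keystream twice.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                # must be unique per encryption

    p1 = b"attack at dawn!!"              # two same-length example messages
    p2 = b"retreat at dusk!"

    c1 = aesgcm.encrypt(nonce, p1, None)  # returns ciphertext || 16-byte tag
    c2 = aesgcm.encrypt(nonce, p2, None)  # same nonce reused: the bug

    # The keystream cancels out: XOR of ciphertexts == XOR of plaintexts.
    xor_ct = bytes(a ^ b for a, b in zip(c1[:16], c2[:16]))
    xor_pt = bytes(a ^ b for a, b in zip(p1, p2))
    assert xor_ct == xor_pt
    # With one plaintext known -- such as a key blob the attacker imported
    # themselves, as in the paper's attack -- the other falls out directly.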

News article.


Hacking Alexa through Alexa’s Speech

[2022.03.07] An Alexa device will respond to voice commands that it issues itself through its own speaker. This can be exploited:

The attack works by using the device’s speaker to issue voice commands. As long as the speech contains the device wake word (usually “Alexa” or “Echo”) followed by a permissible command, the Echo will carry it out, researchers from Royal Holloway University in London and Italy’s University of Catania found. Even when devices require verbal confirmation before executing sensitive commands, it’s trivial to bypass the measure by adding the word “yes” about six seconds after issuing the command. Attackers can also exploit what the researchers call the “FVV,” or full voice vulnerability, which allows Echos to make self-issued commands without temporarily reducing the device volume.

It does require proximate access, though, at least to set the attack up:

It requires only a few seconds of proximity to a vulnerable device while it’s turned on so an attacker can utter a voice command instructing it to pair with an attacker’s Bluetooth-enabled device. As long as the device remains within radio range of the Echo, the attacker will be able to issue commands.

Research paper.


Using Radar to Read Body Language

[2022.03.08] Yet another method of surveillance:

Radar can detect you moving closer to a computer and entering its personal space. This might mean the computer can then choose to perform certain actions, like booting up the screen without requiring you to press a button. This kind of interaction already exists in current Google Nest smart displays, though instead of radar, Google employs ultrasonic sound waves to measure a person’s distance from the device. When a Nest Hub notices you’re moving closer, it highlights current reminders, calendar events, or other important notifications.

Proximity alone isn’t enough. What if you just ended up walking past the machine and looking in a different direction? To solve this, Soli can capture greater subtleties in movements and gestures, such as body orientation, the pathway you might be taking, and the direction your head is facing—aided by machine learning algorithms that further refine the data. All this rich radar information helps it better guess if you are indeed about to start an interaction with the device, and what the type of engagement might be.

[…]

The ATAP team chose to use radar because it’s one of the more privacy-friendly methods of gathering rich spatial data. (It also has really low latency, works in the dark, and external factors like sound or temperature don’t affect it.) Unlike a camera, radar doesn’t capture and store distinguishable images of your body, your face, or other means of identification. “It’s more like an advanced motion sensor,” Giusti says. Soli has a detectable range of around 9 feet—less than most cameras—but multiple gadgets in your home with the Soli sensor could effectively blanket your space and create an effective mesh network for tracking your whereabouts in a home.

“Privacy-friendly” is a relative term.

These technologies are coming. They’re going to be an essential part of the Internet of Things.


Fraud on Zelle

[2022.03.09] Zelle is rife with fraud:

Zelle’s immediacy has also made it a favorite of fraudsters. Other types of bank transfers or transactions involving payment cards typically take at least a day to clear. But once crooks scare or trick victims into handing over money via Zelle, they can siphon away thousands of dollars in seconds. There’s no way for customers—and in many cases, the banks themselves—to retrieve the money.

[…]

It’s not clear who is legally liable for such losses. Banks say that returning money to defrauded customers is not their responsibility, since the federal law covering electronic transfers—known in the industry as Regulation E—requires them to cover only “unauthorized” transactions, and the fairly common scam that Mr. Faunce fell prey to tricks people into making the transfers themselves. Victims say because they were duped into sending the money, the transaction is unauthorized. Regulatory guidance has so far been murky.

When swindled customers, already upset to find themselves on the hook, search for other means of redress, many are enraged to find out that Zelle is owned and operated by banks.

[…]

The Zelle network is operated by Early Warning Services, a company created and owned by seven banks: Bank of America, Capital One, JPMorgan Chase, PNC, Truist, U.S. Bank and Wells Fargo. Early Warning, based in Scottsdale, Ariz., manages the system’s technical infrastructure. But the 1,425 banks and credit unions that use Zelle can customize the app and add their own security settings.


Where’s the Russia-Ukraine Cyberwar?

[2022.03.10] It has been interesting to notice how unimportant and ineffective cyber operations have been in the Russia-Ukraine war. Russia launched a wiper against Ukraine at the beginning, but it was found and neutered. Near as I can tell, the only thing that worked was the disabling of regional KA-SAT SATCOM terminals.

It’s probably too early to reach any conclusions, but people are starting to write about this, with varying theories.

I want to write about this, too, but I’m waiting for things to progress more.

EDITED TO ADD (3/12): Two additional takes.


Leak of Russian Censorship Data

[2022.03.14] The transparency organization Distributed Denial of Secrets has released 800GB of data from Roskomnadzor, the Russian government censorship organization.

Specifically, Distributed Denial of Secrets says the data comes from the Roskomnadzor of the Republic of Bashkortostan. The Republic of Bashkortostan is in the west of the country.

[…]

The data is split into two main categories: a series of over 360,000 files totalling in at 526.9GB and which date up to as recently as March 5, and then two databases that are 290.6GB in size, according to Distributed Denial of Secrets’ website.


Upcoming Speaking Events

[2022.03.14] This is a current list of where and when I am scheduled to speak:

  • I’m participating in an online panel discussion on “Ukraine and Russia: The Online War,” hosted by UMass Amherst, at 5:00 PM Eastern on March 31, 2022.
  • I’m speaking at Future Summits in Antwerp, Belgium on May 18, 2022.
  • I’m speaking at IT-S Now 2022 in Vienna on June 2, 2022.
  • I’m speaking at the 14th International Conference on Cyber Conflict, CyCon 2022, in Tallinn, Estonia on June 3, 2022.
  • I’m speaking at the RSA Conference 2022 in San Francisco, June 6-9, 2022.
  • I’m speaking at the Dublin Tech Summit in Dublin, Ireland, June 15-16, 2022.

The list is maintained on this page.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, We Have Root—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

Copyright © 2022 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.