June 15, 2022
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- The NSA Says that There are No Known Flaws in NIST’s Quantum-Resistant Algorithms
- Attacks on Managed Service Providers Expected to Increase
- iPhone Malware that Operates Even When the Phone Is Turned Off
- Websites that Collect Your Data as You Type
- Bluetooth Flaw Allows Remote Unlocking of Digital Locks
- The Onion on Google Map Surveillance
- Forging Australian Driver’s Licenses
- The Justice Department Will No Longer Charge Security Researchers with Criminal Hacking
- Manipulating Machine-Learning Systems through the Order of the Training Data
- Malware-Infested Smart Card Reader
- Security and Human Behavior (SHB) 2022
- The Limits of Cyber Operations in Wartime
- Clever—and Exploitable—Windows Zero-Day
- Remotely Controlling Touchscreens
- Me on Public-Interest Tech
- Long Story on the Accused CIA Vault 7 Leaker
- Leaking Military Secrets on Gaming Discussion Boards
- Smartphones and Civilians in Wartime
- Twitter Used Two-Factor Login Details for Ad Targeting
- Cryptanalysis of ENCSecurity’s Encryption Implementation
- Hacking Tesla’s Remote Key Cards
- Upcoming Speaking Engagements
The NSA Says that There are No Known Flaws in NIST’s Quantum-Resistant Algorithms
[2022.05.16] Rob Joyce, the director of cybersecurity at the NSA, said so in an interview:
The NSA already has classified quantum-resistant algorithms of its own that it developed over many years, said Joyce. But it didn’t enter any of its own in the contest. The agency’s mathematicians, however, worked with NIST to support the process, trying to crack the algorithms in order to test their merit.
“Those candidate algorithms that NIST is running the competitions on all appear strong, secure, and what we need for quantum resistance,” Joyce said. “We’ve worked against all of them to make sure they are solid.”
The purpose of the open, public international scrutiny of the separate NIST algorithms is “to build trust and confidence,” he said.
I believe him. This is what the NSA did with NIST’s candidate algorithms for AES and then for SHA-3. NIST’s Post-Quantum Cryptography Standardization Process looks good.
I still worry about the long-term security of the submissions, though. In 2018, in an essay titled “Cryptography After the Aliens Land,” I wrote:
…there is always the possibility that those algorithms will fall to aliens with better quantum techniques. I am less worried about symmetric cryptography, where Grover’s algorithm is basically an upper limit on quantum improvements, than I am about public-key algorithms based on number theory, which feel more fragile. It’s possible that quantum computers will someday break all of them, even those that today are quantum resistant.
It took us a couple of decades to fully understand von Neumann computer architecture. I’m sure it will take years of working with a functional quantum computer to fully understand the limits of that architecture. And some things that we think of as computationally hard today will turn out not to be.
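The Grover bound can be made concrete with a back-of-the-envelope sketch (my own illustration, not from the essay): Grover's algorithm finds a key among 2^n candidates in roughly 2^(n/2) quantum operations, so an n-bit symmetric key retains about n/2 bits of security, which is why the standard response for symmetric cryptography is simply to double key lengths.

```python
# Effective symmetric security under Grover's algorithm, which
# searches a space of 2^n keys in about 2^(n/2) quantum steps.
def grover_effective_bits(key_bits: int) -> int:
    """Remaining security of an n-bit symmetric key vs. a Grover attacker."""
    return key_bits // 2

for n in (128, 192, 256):
    print(f"AES-{n}: ~{grover_effective_bits(n)}-bit quantum security")

# AES-256 keeps ~128 bits of security, which is why it remains the
# conservative choice. Number-theoretic public-key schemes get no such
# floor: Shor's algorithm breaks them outright in polynomial time.
```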
EDITED TO ADD (6/14): Since I wrote this, flaws were found in at least four candidates.
Attacks on Managed Service Providers Expected to Increase
[2022.05.17] CISA, NSA, FBI, and similar organizations in the other Five Eyes countries are warning that attacks on MSPs—as a vector to their customers—are likely to increase. No details about what this prediction is based on. Makes sense, though. The SolarWinds attack was incredibly successful for the Russian SVR, and a blueprint for future attacks.
iPhone Malware that Operates Even When the Phone Is Turned Off
[2022.05.18] Researchers have demonstrated iPhone malware that works even when the phone is fully shut down.
It turns out that the iPhone’s Bluetooth chip—which is key to making features like Find My work—has no mechanism for digitally signing or even encrypting the firmware it runs. Academics at Germany’s Technical University of Darmstadt figured out how to exploit this lack of hardening to run malicious firmware that allows the attacker to track the phone’s location or run new features when the device is turned off.
The research is the first—or at least among the first—to study the risk posed by chips running in low-power mode. Not to be confused with iOS’s low-power mode for conserving battery life, the low-power mode (LPM) in this research allows chips responsible for near-field communication, ultra wideband, and Bluetooth to run in a special mode that can remain on for 24 hours after a device is turned off.
The research is fascinating, but the attack isn’t really feasible. It requires a jailbroken phone, which is hard to pull off in an adversarial setting.
Websites that Collect Your Data as You Type
Researchers from KU Leuven, Radboud University, and University of Lausanne crawled and analyzed the top 100,000 websites, looking at scenarios in which a user is visiting a site while in the European Union and visiting a site from the United States. They found that 1,844 websites gathered an EU user’s email address without their consent, and a staggering 2,950 logged a US user’s email in some form. Many of the sites seemingly do not intend to conduct the data-logging but incorporate third-party marketing and analytics services that cause the behavior.
After specifically crawling sites for password leaks in May 2021, the researchers also found 52 websites in which third parties, including the Russian tech giant Yandex, were incidentally collecting password data before submission. The group disclosed their findings to these sites, and all 52 instances have since been resolved.
“If there’s a Submit button on a form, the reasonable expectation is that it does something—that it will submit your data when you click it,” says Güneş Acar, a professor and researcher in Radboud University’s digital security group and one of the leaders of the study. “We were super surprised by these results. We thought maybe we were going to find a few hundred websites where your email is collected before you submit, but this exceeded our expectations by far.”
Bluetooth Flaw Allows Remote Unlocking of Digital Locks
[2022.05.20] Locks that use Bluetooth Low Energy to authenticate keys are vulnerable to remote unlocking. The research focused on Teslas, but the exploit is generalizable.
In a video shared with Reuters, NCC Group researcher Sultan Qasim Khan was able to open and then drive a Tesla using a small relay device attached to a laptop which bridged a large gap between the Tesla and the Tesla owner’s phone.
“This proves that any product relying on a trusted BLE connection is vulnerable to attacks even from the other side of the world,” the UK-based firm said in a statement, referring to the Bluetooth Low Energy (BLE) protocol—technology used in millions of cars and smart locks which automatically open when in close proximity to an authorised device.
Although Khan demonstrated the hack on a 2021 Tesla Model Y, NCC Group said any smart locks using BLE technology, including residential smart locks, could be unlocked in the same way.
Another news article.
EDITED TO ADD (6/14): A longer version of the demo video.
The Onion on Google Map Surveillance
[2022.05.20] “Google Maps Adds Shortcuts through Houses of People Google Knows Aren’t Home Right Now.”
Forging Australian Driver’s Licenses
[2022.05.23] The New South Wales digital driver’s license has multiple implementation flaws that allow for easy forgeries.
This file is encrypted using AES-256-CBC encryption combined with Base64 encoding.
A 4-digit application PIN (which gets set during the initial onboarding when a user first instals the application) is the encryption password used to protect or encrypt the licence data.
The problem here is that an attacker who has access to the encrypted licence data (whether that be through accessing a phone backup, direct access to the device or remote compromise) could easily brute-force this 4-digit PIN by using a script that would try all 10,000 combinations….
The second design flaw that is favourable for attackers is that the Digital Driver Licence data is never validated against the back-end authority which is the Service NSW API/database.
This means that the application has no native method to validate the Digital Driver Licence data that exists on the phone and thus cannot perform further actions such as warn users when this data has been modified.
As the Digital Licence is stored on the client’s device, validation should take place to ensure the local copy of the data actually matches the Digital Driver’s Licence data that was originally downloaded from the Service NSW API.
As this verification does not take place, an attacker is able to display the edited data on the Service NSW application without any preventative factors.
There’s a lot more in the blog post.
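The brute-force arithmetic is worth spelling out. The sketch below is illustrative and stdlib-only, not the actual Service NSW implementation: the PIN-to-key step (plain SHA-256) and the validity check are hypothetical stand-ins, and the real app's key derivation isn't specified above. The point is the keyspace: a 4-digit PIN yields exactly 10,000 candidate keys, and no key-derivation function can rescue that.

```python
# Toy demonstration: a 4-digit PIN as an encryption password gives an
# attacker with offline access only 10**4 keys to try.
import hashlib
from itertools import product

def derive_key(pin):
    # Hypothetical PIN-to-key step. A slow KDF (PBKDF2, scrypt, etc.)
    # would raise the per-guess cost but cannot fix a 10,000-key space.
    return hashlib.sha256(pin.encode()).digest()

def brute_force_pin(key_check):
    # key_check(key) stands in for "decrypt the licence blob and see
    # whether the result parses as valid licence data".
    for digits in product("0123456789", repeat=4):
        pin = "".join(digits)
        if key_check(derive_key(pin)):
            return pin
    return None

# Demo: the "victim" chose PIN 4271; recovery takes at most 10,000 tries.
victim_key = derive_key("4271")
recovered = brute_force_pin(lambda k: k == victim_key)
print(recovered)
```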
The Justice Department Will No Longer Charge Security Researchers with Criminal Hacking
[2022.05.24] Following a recent Supreme Court ruling, the Justice Department will no longer prosecute “good faith” security researchers with cybercrimes:
The policy for the first time directs that good-faith security research should not be charged. Good faith security research means accessing a computer solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, where such activity is carried out in a manner designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or those who use such devices, machines, or online services.
The new policy states explicitly the longstanding practice that “the department’s goals for CFAA enforcement are to promote privacy and cybersecurity by upholding the legal right of individuals, network owners, operators, and other persons to ensure the confidentiality, integrity, and availability of information stored in their information systems.” Accordingly, the policy clarifies that hypothetical CFAA violations that have concerned some courts and commentators are not to be charged. Embellishing an online dating profile contrary to the terms of service of the dating website; creating fictional accounts on hiring, housing, or rental websites; using a pseudonym on a social networking site that prohibits them; checking sports scores at work; paying bills at work; or violating an access restriction contained in a term of service are not themselves sufficient to warrant federal criminal charges. The policy focuses the department’s resources on cases where a defendant is either not authorized at all to access a computer or was authorized to access one part of a computer—such as one email account—and, despite knowing about that restriction, accessed a part of the computer to which his authorized access did not extend, such as other users’ emails.
EDITED TO ADD (6/14): Josephine Wolff writes about this update.
Manipulating Machine-Learning Systems through the Order of the Training Data
[2022.05.25] Yet another adversarial ML attack:
Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order.
So what happens if the bad guys can cause the order to be not random? You guessed it—all bets are off. Suppose for example a company or a country wanted to have a credit-scoring system that’s secretly sexist, but still be able to pretend that its training was actually fair. Well, they could assemble a set of financial data that was representative of the whole population, but start the model’s training on ten rich men and ten poor women drawn from that set then let initialisation bias do the rest of the work.
Does this generalise? Indeed it does. Previously, people had assumed that in order to poison a model or introduce backdoors, you needed to add adversarial samples to the training data. Our latest paper shows that’s not necessary at all. If an adversary can manipulate the order in which batches of training data are presented to the model, they can undermine both its integrity (by poisoning it) and its availability (by causing training to be less effective, or take longer). This is quite general across models that use stochastic gradient descent.
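Here is a toy, pure-Python illustration of the order dependence the paper describes (my own sketch, far simpler than the actual attack): two SGD runs on identical data, a one-parameter linear model, and a decaying learning rate. Only the presentation order differs, yet the runs end up on opposite sides of the balanced optimum.

```python
# Two SGD runs over the SAME 20 points; only the ordering differs.
def sgd(data, lr0=0.5):
    w = 0.0
    for t, (x, y) in enumerate(data):
        lr = lr0 / (1 + 0.2 * t)          # decaying learning rate
        w -= lr * 2 * x * (w * x - y)     # gradient of (w*x - y)**2
    return w

group_a = [(1.0, 1.0)] * 10   # points consistent with w = 1
group_b = [(1.0, 3.0)] * 10   # points consistent with w = 3

w_a = sgd(group_a + group_b)  # trained on group A first
w_b = sgd(group_b + group_a)  # trained on group B first

# Neither run reaches the balanced optimum (w = 2); each run is biased
# by ordering alone, despite training on identical data.
print(round(w_a, 2), round(w_b, 2))
```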
Malware-Infested Smart Card Reader
[2022.05.26] Brian Krebs has an interesting story of a smart ID card reader with a malware-infested Windows driver, and US government employees who inadvertently buy and use them.
But by all accounts, the potential attack surface here is enormous, as many federal employees clearly will purchase these readers from a myriad of online vendors when the need arises. Saicoo’s product listings, for example, are replete with comments from customers who self-state that they work at a federal agency (and several who reported problems installing drivers).
Security and Human Behavior (SHB) 2022
[2022.05.31] Today is the second day of the fifteenth Workshop on Security and Human Behavior, hosted by Ross Anderson and Alice Hutchings at the University of Cambridge. After two years of having this conference remotely on Zoom, it’s nice to be back together in person.
SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, Alice Hutchings, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, geographers, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.
For the past decade and a half, this workshop has been the most intellectually stimulating two days of my professional year. It influences my thinking in different and sometimes surprising ways—and has resulted in some unexpected collaborations.
Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. Because not everyone was able to attend in person, our panels all include remote participants as well. The hybrid structure is working well, even though our remote participants aren’t around for the social program.
This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.
Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, thirteenth, and fourteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio/video recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.
The Limits of Cyber Operations in Wartime
[2022.05.31] Interesting paper by Lennart Maschmeyer: “The Subversive Trilemma: Why Cyber Operations Fall Short of Expectations“:
Abstract: Although cyber conflict has existed for thirty years, the strategic utility of cyber operations remains unclear. Many expect cyber operations to provide independent utility in both warfare and low-intensity competition. Underlying these expectations are broadly shared assumptions that information technology increases operational effectiveness. But a growing body of research shows how cyber operations tend to fall short of their promise. The reason for this shortfall is their subversive mechanism of action. In theory, subversion provides a way to exert influence at lower risks than force because it is secret and indirect, exploiting systems to use them against adversaries. The mismatch between promise and practice is the consequence of the subversive trilemma of cyber operations, whereby speed, intensity, and control are negatively correlated. These constraints pose a trilemma for actors because a gain in one variable tends to produce losses across the other two variables. A case study of the Russo-Ukrainian conflict provides empirical support for the argument. Qualitative analysis leverages original data from field interviews, leaked documents, forensic evidence, and local media. Findings show that the subversive trilemma limited the strategic utility of all five major disruptive cyber operations in this conflict.
Clever—and Exploitable—Windows Zero-Day
[2022.06.01] Researchers have reported a still-unpatched Windows zero-day that is currently being exploited in the wild.
Here’s the advisory, which includes a work-around until a patch is available.
Remotely Controlling Touchscreens
[2022.06.02] Researchers have demonstrated controlling touchscreens at a distance, at least in a laboratory setting:
The core idea is to take advantage of the electromagnetic signals to execute basic touch events such as taps and swipes into targeted locations of the touchscreen with the goal of taking over remote control and manipulating the underlying device.
The attack, which works from a distance of up to 40mm, hinges on the fact that capacitive touchscreens are sensitive to EMI, leveraging it to inject electromagnetic signals into transparent electrodes that are built into the touchscreen so as to register them as touch events.
The experimental setup involves an electrostatic gun to generate a strong pulse signal that’s then sent to an antenna to transmit an electromagnetic field to the phone’s touchscreen, thereby causing the electrodes which act as antennas themselves to pick up the EMI.
Paper: “GhostTouch: Targeted Attacks on Touchscreens without Physical Touch“:
Abstract: Capacitive touchscreens have become the primary human-machine interface for personal devices such as smartphones and tablets. In this paper, we present GhostTouch, the first active contactless attack against capacitive touchscreens. GhostTouch uses electromagnetic interference (EMI) to inject fake touch points into a touchscreen without the need to physically touch it. By tuning the parameters of the electromagnetic signal and adjusting the antenna, we can inject two types of basic touch events, taps and swipes, into targeted locations of the touchscreen and control them to manipulate the underlying device. We successfully launch the GhostTouch attacks on nine smartphone models. We can inject targeted taps continuously with a standard deviation of as low as 14.6 x 19.2 pixels from the target area, a delay of less than 0.5s and a distance of up to 40mm. We show the real-world impact of the GhostTouch attacks in a few proof-of-concept scenarios, including answering an eavesdropping phone call, pressing the button, swiping up to unlock, and entering a password. Finally, we discuss potential hardware and software countermeasures to mitigate the attack.
Me on Public-Interest Tech
[2022.06.03] Back in November 2020, in the middle of the COVID-19 pandemic, I gave a virtual talk at the International Symposium on Technology and Society: “The Story of the Internet and How it Broke Bad: A Call for Public-Interest Technologists.” It was something I was really proud of, and it’s finally up on the net.
Long Story on the Accused CIA Vault 7 Leaker
[2022.06.06] Long article about Joshua Schulte, the accused leaker of the WikiLeaks Vault 7 and Vault 8 CIA data.
Well worth reading.
Leaking Military Secrets on Gaming Discussion Boards
[2022.06.08] People are leaking classified military information on discussion boards for the video game War Thunder to win arguments—repeatedly.
Smartphones and Civilians in Wartime
[2022.06.09] Interesting article about civilians using smartphones to assist their militaries in wartime, and how that blurs the important legal distinction between combatants and non-combatants:
The principle of distinction between the two roles is a critical cornerstone of international humanitarian law—the law of armed conflict, codified by decades of customs and laws such as the Geneva Conventions. Those considered civilians and civilian targets are not to be attacked by military forces; as they are not combatants, they should be spared. At the same time, they also should not act as combatants—if they do, they may lose this status.
The conundrum, then, is how to classify a civilian who, with the use of their smartphone, potentially becomes an active participant in a military sensor system. (To be clear, solely having the app installed is not sufficient to lose the protected status. What matters is actual usage.) The Additional Protocol I to Geneva Conventions states that civilians enjoy protection from the “dangers arising from military operations unless and for such time as they take a direct part in hostilities.” Legally, if civilians engage in military activity, such as taking part in hostilities by using weapons, they forfeit their protected status, “for such time as they take a direct part in hostilities” that “affect[s] the military operations,” according to the International Committee of the Red Cross, the traditional impartial custodian of International Humanitarian Law. This is the case even if the people in question are not formally members of the armed forces. By losing the status of a civilian, one may become a legitimate military objective, carrying the risk of being directly attacked by military forces.
Twitter Used Two-Factor Login Details for Ad Targeting
[2022.06.09] Twitter was fined $150 million for using phone numbers and email addresses collected for two-factor authentication for ad targeting.
Cryptanalysis of ENCSecurity’s Encryption Implementation
[2022.06.13] ENCSecurity markets a file encryption system, and it’s used by SanDisk, Sony, Lexar, and probably others. Although it uses AES as its underlying algorithm, the implementation is flawed in multiple ways, and breakable.
The moral is, as it always is, that implementing cryptography securely is hard. Don’t roll your own anything if you can help it.
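The post above doesn't detail every flaw, so here is a generic, stdlib-only illustration of one classic way a sound primitive becomes a broken system (a toy of my own, not ENCSecurity's actual bug): deterministic ECB-style encryption leaks plaintext structure even when the block cipher itself is perfect. A keyed hash stands in for the AES block-cipher call.

```python
# ECB mode encrypts equal plaintext blocks to equal ciphertext blocks,
# so repetition in the plaintext is visible to any observer.
import hashlib

def toy_block_encrypt(key, block):
    # Stand-in for a deterministic 16-byte block-cipher call.
    return hashlib.sha256(key + block).digest()[:16]

def ecb_encrypt(key, plaintext):
    blocks = [plaintext[i:i + 16] for i in range(0, len(plaintext), 16)]
    return [toy_block_encrypt(key, b) for b in blocks]

key = b"k" * 16
ct = ecb_encrypt(key, b"ATTACK AT DAWN!!" * 2)  # two identical blocks
print(ct[0] == ct[1])  # identical ciphertext blocks leak the repetition
```

In practice the fix is to use a vetted, high-level authenticated-encryption construction from a maintained library rather than assembling modes, IVs, and integrity checks by hand.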
Hacking Tesla’s Remote Key Cards
[2022.06.14] Interesting vulnerability in Tesla’s NFC key cards:
Martin Herfurt, a security researcher in Austria, quickly noticed something odd about the new feature: Not only did it allow the car to automatically start within 130 seconds of being unlocked with the NFC card, but it also put the car in a state to accept entirely new keys—with no authentication required and zero indication given by the in-car display.
“The authorization given in the 130-second interval is too general… [it’s] not only for drive,” Herfurt said in an online interview. “This timer has been introduced by Tesla…in order to make the use of the NFC card as a primary means of using the car more convenient. What should happen is that the car can be started and driven without the user having to use the key card a second time. The problem: within the 130-second period, not only the driving of the car is authorized, but also the [enrolling] of a new key.”
Upcoming Speaking Engagements
[2022.06.14] This is a current list of where and when I am scheduled to speak:
- I’m speaking at the Dublin Tech Summit in Dublin, Ireland, June 15-16, 2022.
The list is maintained on this page.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, We Have Root—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.
Copyright © 2022 by Bruce Schneier.