November 15, 2022
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
- New Book: A Hacker’s Mind
- Hacking Automobile Keyless Entry Systems
- Qatar Spyware
- Museum Security
- Interview with Signal’s New President
- Adversarial ML Attack that Secretly Gives a Language Model a Point of View
- On the Randomness of Automatic Card Shufflers
- Australia Increases Fines for Massive Data Breaches
- Critical Vulnerability in Open SSL
- Apple Only Commits to Patching Latest OS Version
- Iran’s Digital Surveillance Tools Leaked
- NSA on Supply Chain Security
- The Conviction of Uber’s Chief Security Officer
- Using Wi-FI to See through Walls
- Defeating Phishing-Resistant Multifactor Authentication
- An Untrustworthy TLS Certificate in Browsers
- NSA Over-surveillance
- A Digital Red Cross
- Upcoming Speaking Engagements
[2022.11.11] I have a new book coming out in February. It’s about hacking.
A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend them Back isn’t about hacking computer systems; it’s about hacking more general economic, political, and social systems. It generalizes the term hack as a means of subverting a system’s rules in unintended ways.
What sorts of system? Any system of rules, really. Take the tax code, for example. It’s not computer code, but it’s a series of algorithms—supposedly deterministic—that take a bunch of inputs about your income and produce an output that’s the amount of money you owe. This code has vulnerabilities; we call them loopholes. It has exploits; those are tax avoidance strategies. And there is an entire industry of black-hat hackers who exploit vulnerabilities in the tax code: we call them accountants and tax attorneys.
In my conception, a “hack” is something a system permits, but is unanticipated and unwanted by its designers. It’s unplanned: a mistake in the system’s design or coding. It’s subversion, or an exploitation. It’s a cheat—but only sort of. Just as a computer vulnerability can be exploited over the Internet because the code permits it, a tax loophole is “allowed” by the system because it follows the rules, even though it might subvert the intent of those rules.
Once you start thinking of hacking in this way, you’ll start seeing hacks everywhere. You can find hacks in professional sports, in customer reward programs, in financial systems, in politics; in lots of economic, political, and social systems; against our cognitive functions. A curved hockey stick is a hack, and we know the name of the hacker who invented it. Airline frequent-flier mileage runs are a hack. The filibuster was originally a hack, invented by Cato the Younger, A Roman senator in 60 BCE. Hedge funds are full of hacks.
A system is just a set of rules. Or norms, since the “rules” aren’t always formal. And even the best-thought-out sets of rules will be incomplete or inconsistent. It’ll have ambiguities, and things the designers haven’t thought of. As long as there are people who want to subvert the goals of a system, there will be hacks.
I use this framework in A Hacker’s Mind to tease out a lot of why today’s economic, political, and social systems are failing us so badly, and apply what we have learned about hacking defenses in the computer world to those more general hacks. And I end by looking at artificial intelligence, and what will happen when AIs start hacking. Not the problems of hacking AI, which are both ubiquitous and super weird, but what happens when an AI is able to discover new hacks against these more general systems. What happens when AIs find tax loopholes, or loopholes in financial regulations. We have systems in place to deal with these sorts of hacks, but they were invented when hackers were human and reflect the human pace of hack discovery. They won’t be able to withstand an AI finding dozens, or hundreds, of loopholes in financial regulations. We’re simply not ready for the speed, scale, scope, and sophistication of AI hackers.
A Hacker’s Mind is my pandemic book, written in 2020 and 2021. It represents another step in my continuing journey of increasing generalizations. And I really like the cover. It will be published on February 7. It makes an excellent belated holiday gift. Order yours today and avoid the rush.
The criminals targeted vehicles with keyless entry and start systems, exploiting the technology to get into the car and drive away.
As a result of a coordinated action carried out on 10 October in the three countries involved, 31 suspects were arrested. A total of 22 locations were searched, and over EUR 1 098 500 in criminal assets seized.
The criminals targeted keyless vehicles from two French car manufacturers. A fraudulent tool—marketed as an automotive diagnostic solution, was used to replace the original software of the vehicles, allowing the doors to be opened and the ignition to be started without the actual key fob.
Among those arrested feature the software developers, its resellers and the car thieves who used this tool to steal vehicles.
The article doesn’t say how the hacking tool got installed into cars. Were there crooked auto mechanics, dealers, or something else?
Everyone travelling to Qatar during the football World Cup will be asked to download two apps called Ehteraz and Hayya.
Briefly, Ehteraz is an covid-19 tracking app, while Hayya is an official World Cup app used to keep track of match tickets and to access the free Metro in Qatar.
In particular, the covid-19 app Ehteraz asks for access to several rights on your mobile., like access to read, delete or change all content on the phone, as well as access to connect to WiFi and Bluetooth, override other apps and prevent the phone from switching off to sleep mode.
The Ehteraz app, which everyone over 18 coming to Qatar must download, also gets a number of other accesses such as an overview of your exact location, the ability to make direct calls via your phone and the ability to disable your screen lock.
The Hayya app does not ask for as much, but also has a number of critical aspects. Among other things, the app asks for access to share your personal information with almost no restrictions. In addition, the Hayya app provides access to determine the phone’s exact location, prevent the device from going into sleep mode, and view the phone’s network connections.
Despite what the article says, I don’t know how mandatory this actually is. I know people who visited Saudi Arabia when that country had a similarly sketchy app requirement. Some of them just didn’t bother downloading the apps, and were never asked about it at the border.
Banks don’t take millions of dollars and put them in plastic bags and hang them on the wall so everybody can walk right up to them. But we do basically the same thing in museums and hang the assets right out on the wall. So it’s our job, then, to either use technology or develop technology that protects the art, to hire honest guards that are trainable and able to meet the challenge and alert and so forth. And we have to keep them alert because it’s the world’s most boring job. It might be great for you to go to a museum and see it for a day, but they stand in that same gallery year after year, and so they get mental fatigue. And so we have to rotate them around and give them responsibilities that keep them stimulated and keep them fresh.
It’s a challenge. But we try to predict the items that might be most vulnerable. Which are not necessarily most valuable; some things have symbolic significance to them. And then we try to predict what the next targets might be and advise our clients that they maybe need to put special security on those items.
WhatsApp uses the Signal encryption protocol to provide encryption for its messages. That was absolutely a visionary choice that Brian and his team led back in the day – and big props to them for doing that. But you can’t just look at that and then stop at message protection. WhatsApp does not protect metadata the way that Signal does. Signal knows nothing about who you are. It doesn’t have your profile information and it has introduced group encryption protections. We don’t know who you are talking to or who is in the membership of a group. It has gone above and beyond to minimize the collection of metadata.
WhatsApp, on the other hand, collects the information about your profile, your profile photo, who is talking to whom, who is a group member. That is powerful metadata. It is particularly powerful—and this is where we have to back out into a structural argument for a company to collect the data that is also owned by Meta/Facebook. Facebook has a huge amount, just unspeakable volumes, of intimate information about billions of people across the globe.
It is not trivial to point out that WhatsApp metadata could easily be joined with Facebook data, and that it could easily reveal extremely intimate information about people. The choice to remove or enhance the encryption protocols is still in the hands of Facebook. We have to look structurally at what that organization is, who actually has control over these decisions, and at some of these details that often do not get discussed when we talk about message encryption overall.
I am a fan of Signal and I use it every day. The one feature I want, which WhatsApp has and Signal does not, is the ability to easily export a chat to a text file.
[2022.10.21] Machine learning security is extraordinarily difficult because the attacks are so varied—and it seems that each new one is weirder than the next. Here’s the latest: a training-time attack that forces the model to exhibit a point of view: Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures.”
Abstract: We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to “spin” their outputs so as to support an adversary-chosen sentiment or point of view—but only when the input contains adversary-chosen trigger words. For example, a spinned summarization model outputs positive summaries of any text that mentions the name of some individual or organization.
Model spinning introduces a “meta-backdoor” into a model. Whereas conventional backdoors cause models to produce incorrect outputs on inputs with the trigger, outputs of spinned models preserve context and maintain standard accuracy metrics, yet also satisfy a meta-task chosen by the adversary.
Model spinning enables propaganda-as-a-service, where propaganda is defined as biased speech. An adversary can create customized language models that produce desired spins for chosen triggers, then deploy these models to generate disinformation (a platform attack), or else inject them into ML training pipelines (a supply-chain attack), transferring malicious functionality to downstream models trained by victims.
To demonstrate the feasibility of model spinning, we develop a new backdooring technique. It stacks an adversarial meta-task onto a seq2seq model, backpropagates the desired meta-task output to points in the word-embedding space we call “pseudo-words,” and uses pseudo-words to shift the entire output distribution of the seq2seq model. We evaluate this attack on language generation, summarization, and translation models with different triggers and meta-tasks such as sentiment, toxicity, and entailment. Spinned models largely maintain their accuracy metrics (ROUGE and BLEU) while shifting their outputs to satisfy the adversary’s meta-task. We also show that, in the case of a supply-chain attack, the spin functionality transfers to downstream models.
This new attack dovetails with something I’ve been worried about for a while, something Latanya Sweeney has dubbed “persona bots.” This is what I wrote in my upcoming book (to be published in February):
One example of an extension of this technology is the “persona bot,” an AI posing as an individual on social media and other online groups. Persona bots have histories, personalities, and communication styles. They don’t constantly spew propaganda. They hang out in various interest groups: gardening, knitting, model railroading, whatever. They act as normal members of those communities, posting and commenting and discussing. Systems like GPT-3 will make it easy for those AIs to mine previous conversations and related Internet content and to appear knowledgeable. Then, once in a while, the AI might post something relevant to a political issue, maybe an article about a healthcare worker having an allergic reaction to the COVID-19 vaccine, with worried commentary. Or maybe it might offer its developer’s opinions about a recent election, or racial justice, or any other polarizing subject. One persona bot can’t move public opinion, but what if there were thousands of them? Millions?
These are chatbots on a very small scale. They would participate in small forums around the Internet: hobbyist groups, book groups, whatever. In general they would behave normally, participating in discussions like a person does. But occasionally they would say something partisan or political, depending on the desires of their owners. Because they’re all unique and only occasional, it would be hard for existing bot detection techniques to find them. And because they can be replicated by the millions across social media, they could have a greater effect. They would affect what we think, and—just as importantly—what we think others think. What we will see as robust political discussions would be persona bots arguing with other persona bots.
Attacks like these add another wrinkle to that sort of scenario.
[2022.10.24] Many years ago, Matt Blaze and I talked about getting our hands on a casino-grade automatic shuffler and looking for vulnerabilities. We never did it—I remember that we didn’t even try very hard—but this article shows that we probably would have found non-random properties:
…the executives had recently discovered that one of their machines had been hacked by a gang of hustlers. The gang used a hidden video camera to record the workings of the card shuffler through a glass window. The images, transmitted to an accomplice outside in the casino parking lot, were played back in slow motion to figure out the sequence of cards in the deck, which was then communicated back to the gamblers inside. The casino lost millions of dollars before the gang were finally caught.
Stanford mathematician Persi Diaconis found other flaws:
With his collaborator Susan Holmes, a statistician at Stanford, Diaconis travelled to the company’s Las Vegas showroom to examine a prototype of their new machine. The pair soon discovered a flaw. Although the mechanical shuffling action appeared random, the mathematicians noticed that the resulting deck still had rising and falling sequences, which meant that they could make predictions about the card order.
[2022.10.26] After suffering two large, and embarrassing, data breaches in recent weeks, the Australian government increased the fine for serious data breaches from $2.2 million to a minimum of $50 million. (That’s $50 million AUD, or $32 million USD.)
This is a welcome change. The problem is one of incentives, and Australia has now increased the incentive for companies to secure the personal data or their users and customers.
How bad is “Critical”? According to OpenSSL, an issue of critical severity affects common configurations and is also likely exploitable.
It’s likely to be abused to disclose server memory contents, and potentially reveal user details, and could be easily exploited remotely to compromise server private keys or execute code execute remotely. In other words, pretty much everything you don’t want happening on your production systems.
In other words, while Apple will provide security-related updates for older versions of its operating systems, only the most recent upgrades will receive updates for every security problem Apple knows about. Apple currently provides security updates to macOS 11 Big Sur and macOS 12 Monterey alongside the newly released macOS Ventura, and in the past, it has released security updates for older iOS versions for devices that can’t install the latest upgrades.
This confirms something that independent security researchers have been aware of for a while but that Apple hasn’t publicly articulated before. Intego Chief Security Analyst Joshua Long has tracked the CVEs patched by different macOS and iOS updates for years and generally found that bugs patched in the newest OS versions can go months before being patched in older (but still ostensibly “supported”) versions, when they’re patched at all.
According to these internal documents, SIAM is a computer system that works behind the scenes of Iranian cellular networks, providing its operators a broad menu of remote commands to alter, disrupt, and monitor how customers use their phones. The tools can slow their data connections to a crawl, break the encryption of phone calls, track the movements of individuals or large groups, and produce detailed metadata summaries of who spoke to whom, when, and where. Such a system could help the government invisibly quash the ongoing protests—or those of tomorrow—an expert who reviewed the SIAM documents told The Intercept.
SIAM gives the government’s Communications Regulatory Authority—Iran’s telecommunications regulator—turnkey access to the activities and capabilities of the country’s mobile users. “Based on CRA rules and regulations all telecom operators must provide CRA direct access to their system for query customers information and change their services via web service,” reads an English-language document obtained by The Intercept. (Neither the CRA nor Iran’s mission to the United Nations responded to a requests for comment.)
Lots of details, and links to the leaked documents, at the Intercept webpage.
[2022.11.04] The NSA (together with CISA) has published a long report on supply-chain security: “Securing the Software Supply Chain: Recommended Practices Guide for Suppliers.“:
Prevention is often seen as the responsibility of the software developer, as they are required to securely develop and deliver code, verify third party components, and harden the build environment. But the supplier also holds a critical responsibility in ensuring the security and integrity of our software. After all, the software vendor is responsible for liaising between the customer and software developer. It is through this relationship that additional security features can be applied via contractual agreements, software releases and updates, notifications and mitigations of vulnerabilities.
Software suppliers will find guidance from NSA and our partners on preparing organizations by defining software security checks, protecting software, producing well-secured software, and responding to vulnerabilities on a continuous basis. Until all stakeholders seek to mitigate concerns specific to their area of responsibility, the software supply chain cycle will be vulnerable and at risk for potential compromise.
They previously published “Securing the Software Supply Chain: Recommended Practices Guide for Developers.” And they plan on publishing one focused on customers.
EDITED TO ADD (11/14): The proposed EU Cyber Resilience Act places obligations on software providers to deliver secure code, and fix bugs in a timely manner.
[2022.11.07] I have been meaning to write about Joe Sullivan, Uber’s former Chief Security Officer. He was convicted of crimes related to covering up a cyberattack against Uber. It’s a complicated case, and I’m not convinced that he deserved a guilty ruling or that it’s a good thing for the industry.
I may still write something, but until then, this essay on the topic is worth reading.
The scientists tested the exploit by modifying an off-the-shelf drone to create a flying scanning device, the Wi-Peep. The robotic aircraft sends several messages to each device as it flies around, establishing the positions of devices in each room. A thief using the drone could find vulnerable areas in a home or office by checking for the absence of security cameras and other signs that a room is monitored or occupied. It could also be used to follow a security guard, or even to help rival hotels spy on each other by gauging the number of rooms in use.
There have been attempts to exploit similar WiFi problems before, but the team says these typically require bulky and costly devices that would give away attempts. Wi-Peep only requires a small drone and about $15 US in equipment that includes two WiFi modules and a voltage regulator. An intruder could quickly scan a building without revealing their presence.
Roger Grimes has an excellent post reminding everyone that “phishing-resistant” is not “phishing proof,” and that everyone needs to stop pretending otherwise. His list of different attacks is particularly useful.
Google’s Chrome, Apple’s Safari, nonprofit Firefox and others allow the company, TrustCor Systems, to act as what’s known as a root certificate authority, a powerful spot in the internet’s infrastructure that guarantees websites are not fake, guiding users to them seamlessly.
The company’s Panamanian registration records show that it has the identical slate of officers, agents and partners as a spyware maker identified this year as an affiliate of Arizona-based Packet Forensics, which public contracting records and company documents show has sold communication interception services to U.S. government agencies for more than a decade.
In the earlier spyware matter, researchers Joel Reardon of the University of Calgary and Serge Egelman of the University of California at Berkeley found that a Panamanian company, Measurement Systems, had been paying developers to include code in a variety of innocuous apps to record and transmit users’ phone numbers, email addresses and exact locations. They estimated that those apps were downloaded more than 60 million times, including 10 million downloads of Muslim prayer apps.
Measurement Systems’ website was registered by Vostrom Holdings, according to historic domain name records. Vostrom filed papers in 2007 to do business as Packet Forensics, according to Virginia state records. Measurement Systems was registered in Virginia by Saulino, according to another state filing.
More details by Reardon.
Cory Doctorow does a great job explaining the context and the general security issues.
EDITED TO ADD (11/10): Slashdot thread.
[2022.11.11] Here in 2022, we have a newly declassified 2016 Inspector General report—”Misuse of Sigint Systems”—about a 2013 NSA program that resulted in the unauthorized (that is, illegal) targeting of Americans.
Given all we learned from Edward Snowden, this feels like a minor coda. There’s nothing really interesting in the IG document, which is heavily redacted.
EDITED TO ADD (11/14): Non-paywalled copy of the Bloomberg link.
The emblem wouldn’t provide technical cybersecurity protection to hospitals, Red Cross infrastructure or other medical providers, but it would signal to hackers that a cyberattack on those protected networks during an armed conflict would violate international humanitarian law, experts say, Tilman Rodenhäuser, a legal adviser to the International Committee of the Red Cross, said at a panel discussion hosted by the organization on Thursday.
I can think of all sorts of problems with this idea and many reasons why it won’t work, but those also apply to the physical red cross on buildings, vehicles, and people’s clothing. So let’s try it.
EDITED TO ADD: Original reference.
[2022.11.14] This is a current list of where and when I am scheduled to speak:
- I’m speaking at the 24th International Information Security Conference in Madrid, Spain, on November 17, 2022.
The list is maintained on this page.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, We Have Root—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.
Copyright © 2022 by Bruce Schneier.