February 15, 2022
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
- An Examination of the Bug Bounty Marketplace
- UK Government to Launch PR Campaign Undermining End-to-End Encryption
- Are Fake COVID Testing Sites Harvesting Data?
- San Francisco Police Illegally Spying on Protesters
- China’s Olympics App Is Horribly Insecure
- Linux-Targeted Malware Increased by 35%
- Merck Wins Insurance Lawsuit re NotPetya Attack
- New DeadBolt Ransomware Targets NAS Devices
- Tracking Secret German Organizations with Apple AirTags
- Twelve-Year-Old Linux Vulnerability Discovered and Patched
- Me on App Store Monopolies and Security
- Finding Vulnerabilities in Open Source Projects
- Interview with the Head of the NSA’s Research Directorate
- The EARN IT Act Is Back
- Amy Zegart on Spycraft in the Internet Age
- Breaking 256-bit Elliptic Curve Encryption with a Quantum Computer
- Bunnie Huang’s Plausibly Deniable Database
- On the Irish Health Services Executive Hack
- Upcoming Speaking Engagements
[2022.01.17] Here’s a fascinating report: “Bounty Everything: Hackers and the Making of the Global Bug Marketplace.” From a summary:
…researchers Ryan Ellis and Yuan Stevens provide a window into the working lives of hackers who participate in “bug bounty” programs—programs that hire hackers to discover and report bugs or other vulnerabilities in their systems. This report illuminates the risks and insecurities for hackers as gig workers, and how bounty programs rely on vulnerable workers to fix their vulnerable systems.
Ellis and Stevens’s research offers a historical overview of bounty programs and an analysis of contemporary bug bounty platforms—the new intermediaries that now structure the vast majority of bounty work. The report draws directly from interviews with hackers, who recount that bounty programs seem willing to integrate a diverse workforce in their practices, but only on terms that deny them the job security and access enjoyed by core security workforces. These inequities go far beyond the difference experienced by temporary and permanent employees at companies such as Google and Apple, contend the authors. The global bug bounty workforce is doing piecework—they are paid for each bug, and the conditions under which a bug is paid vary greatly from one company to the next.
[2022.01.18] Rolling Stone is reporting that the UK government has hired the M&C Saatchi advertising agency to launch an anti-encryption advertising campaign. Presumably they’ll lean heavily on the “think of the children!” rhetoric we’re seeing in this current wave of the crypto wars. The technical eavesdropping mechanisms have shifted to client-side scanning, which won’t actually help—but since that’s not really the point, it’s not argued on its merits.
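To make the mechanism concrete: client-side scanning checks content on the user's device before it is encrypted and sent. Here is a minimal sketch of the idea, using exact SHA-256 matching against a hypothetical blocklist (the blocklist entry and function names are invented for illustration; deployed proposals use perceptual hashes so that re-encoded copies of an image still match):

```python
import hashlib

# Hypothetical blocklist of known-bad content digests. Real systems use
# perceptual hashes rather than exact SHA-256, so near-duplicates still
# match; exact hashing is shown only to keep the sketch short.
BLOCKLIST = {
    hashlib.sha256(b"known-bad-content").hexdigest(),
}

def scan_before_send(message_bytes):
    """Return True if the message would be flagged and reported."""
    return hashlib.sha256(message_bytes).hexdigest() in BLOCKLIST

# The scan runs on the client, before encryption: the ciphertext on the
# wire stays end-to-end encrypted, but the plaintext is inspected locally.
print(scan_before_send(b"known-bad-content"))    # True: flagged
print(scan_before_send(b"an ordinary message"))  # False: passes
```

This is why critics argue the approach undermines end-to-end encryption in practice: the encryption is intact, but a reporting channel runs ahead of it.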
[2022.01.19] Over the past few weeks, I’ve seen a bunch of writing about what seem to be fake COVID-19 testing sites. They take your name and info, and do a nose swab, but you never get test results. Speculation centered around data harvesting, but that didn’t make sense: it’s far too labor intensive, and—sorry to break it to you—your data isn’t worth all that much.
It seems to be multilevel marketing fraud instead:
The Center for COVID Control is a management company to Doctors Clinical Laboratory. It provides tests and testing supplies, software, personal protective equipment and marketing services—online and printed—to testing sites, said a person who was formerly associated with the Center for COVID Control. Some of the sites are owned independently but operate in partnership with the chain under its name and with its guidance.
Doctors Clinical Lab, the lab Center for COVID Control uses to process tests, makes money by billing patients’ insurance companies or seeking reimbursement from the federal government for testing. Insurance statements reviewed by Block Club show the lab has, in multiple instances, billed insurance companies $325 for a PCR test, $50 for a rapid test, $50 for collecting a person’s sample and $80 for a “supplemental fee.”
In turn, the testing sites are paid for providing samples to the lab to be processed, said a person formerly associated with the Center for COVID Control.
In a January video talking to testing site operators, Syed said the Center for COVID Control will no longer provide them with PCR tests, but it will continue supplying them with rapid tests at a cost of $5 per test. The companies will keep making money for the rapid tests they collect, he said.
“You guys will continue making the $28.50 you’re making for the rapid test,” Syed said in the video.
Read the article for the messy details. Or take a job and see for yourself.
EDITED TO ADD (2/13): More coverage about the fake testing sites.
This surveillance invaded the privacy of protesters, targeted people of color, and chills and deters participation and organizing for future protests. The SFPD also violated San Francisco’s new Surveillance Technology Ordinance. It prohibits city agencies like the SFPD from acquiring, borrowing, or using surveillance technology, without prior approval from the city’s Board of Supervisors, following an open process that includes public participation. Here, the SFPD went through no such process before spying on protesters with this network of surveillance cameras.
It feels like a pretty easy case. There’s a law, and the SF police didn’t follow it.
I wouldn’t be writing about this at all except that Chris is a board member of EPIC, and used his EPIC affiliation in the op-ed to bolster his own credentials. (Bizarrely, he linked to an EPIC page that directly contradicts his position.) In his op-ed, he mischaracterized the EFF’s actions and the facts of the lawsuit. It’s a mess.
One of the fundamental principles that underlies EPIC’s work (and the work of many other groups) on surveillance oversight is that individuals should have the power to decide whether surveillance tools are used in their communities and to impose limits on their use. We have fought for years to shed light on the development, procurement, and deployment of such technologies and have worked to ensure that they are subject to independent oversight through hearings, legal challenges, petitions, and other public forums. The CCOPS model, which was developed by ACLU affiliates and other coalition partners in California and implemented through the San Francisco ordinance, is a powerful mechanism to enable public oversight of dangerous surveillance tools. The access, retention, and use policies put in place by the neighborhood business associations operating these networks provide necessary, but not sufficient, protections against abuse. Strict oversight is essential to promote both privacy and community safety, which includes freedom from arbitrary police action and the freedom to assemble.
So far, EPIC has not done anything about Larsen still being on its board. (Others have criticized them for keeping him on.) I don’t know if I have an opinion on this. Larsen has done good work on financial privacy regulations, which is a good thing. But he seems to be funding all these surveillance cameras in San Francisco, which is really bad.
[2022.01.21] China is mandating that athletes download and use a health and travel app when they attend the Winter Olympics next month. Citizen Lab examined the app and found it riddled with security holes.
- MY2022, an app mandated for use by all attendees of the 2022 Olympic Games in Beijing, has a simple but devastating flaw where encryption protecting users’ voice audio and file transfers can be trivially sidestepped. Health customs forms which transmit passport details, demographic information, and medical and travel history are also vulnerable. Server responses can also be spoofed, allowing an attacker to display fake instructions to users.
- MY2022 is fairly straightforward about the types of data it collects from users in its public-facing documents. However, as the app collects a range of highly sensitive medical information, it is unclear with whom or which organization(s) it shares this information.
- MY2022 includes features that allow users to report “politically sensitive” content. The app also includes a censorship keyword list, which, while presently inactive, targets a variety of political topics including domestic issues such as Xinjiang and Tibet as well as references to Chinese government agencies.
- While the vendor did not respond to our security disclosure, we find that the app’s security deficits may not only violate Google’s Unwanted Software Policy and Apple’s App Store guidelines but also China’s own laws and national standards pertaining to privacy protection, providing potential avenues for future redress.
It’s not clear whether the security flaws were intentional or not, but the report speculated that proper encryption might interfere with some of China’s ubiquitous online surveillance tools, especially systems that allow local authorities to snoop on phones using public wireless networks or internet cafes. Still, the researchers added that the flaws were probably unintentional, because the government will already be receiving data from the app, so there wouldn’t be a need to intercept the data as it was being transferred.
The app also included a list of 2,422 political keywords, described within the code as “illegalwords.txt,” that worked as a keyword censorship list, according to Citizen Lab. The researchers said the list appeared to be a latent function that the app’s chat and file transfer function was not actively using.
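One common way encryption gets “trivially sidestepped” like this is a client that skips TLS certificate validation, which lets anyone on the network path present their own certificate and read or spoof the traffic. A sketch of the difference using Python’s ssl module (illustrative of this class of flaw, not a reconstruction of MY2022’s actual code):

```python
import ssl

# A correctly configured TLS client verifies the server's certificate
# chain and its hostname; this is Python's default.
strict = ssl.create_default_context()
print(strict.verify_mode == ssl.CERT_REQUIRED)  # True
print(strict.check_hostname)                    # True

# The broken configuration accepts any certificate, so a man-in-the-middle
# can impersonate the server, decrypt traffic, and spoof responses.
broken = ssl.create_default_context()
broken.check_hostname = False        # must be disabled before verify_mode
broken.verify_mode = ssl.CERT_NONE   # never do this in production code
print(broken.verify_mode == ssl.CERT_NONE)      # True
```

Note that Python forces you to disable hostname checking before you can disable certificate verification, a deliberate speed bump against exactly this mistake.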
The US government has already advised athletes to leave their personal phones and laptops home and bring burners.
Malware targeting Linux systems increased by 35% in 2021 compared to 2020.
XorDDoS, Mirai and Mozi malware families accounted for over 22% of Linux-targeted threats observed by CrowdStrike in 2021.
Ten times more Mozi malware samples were observed in 2021 compared to 2020.
Lots of details in the report.
The CrowdStrike findings aren’t surprising, as they confirm an ongoing trend that emerged in previous years.
For example, an Intezer report analyzing 2020 stats found that Linux malware families increased by 40% in 2020 compared to the previous year.
In the first six months of 2020, a steep rise of 500% in Golang malware was recorded, showing that malware authors were looking for ways to make their code run on multiple platforms.
This programming (and, by extension, targeting) trend has already been confirmed in early 2022 cases and is likely to continue unabated.
EDITED TO ADD (2/13): Another article.
On 6th December 2021, the New Jersey Superior Court granted partial summary judgment (attached) in favour of Merck and International Indemnity, declaring that the War or Hostile Acts exclusion was inapplicable to the dispute.
Merck suffered US$1.4 billion in business interruption losses from the NotPetya cyber attack of 2017, which were claimed against “all risks” property re/insurance policies providing coverage for losses resulting from destruction or corruption of computer data and software.
The parties disputed whether the NotPetya malware which affected Merck’s computers in 2017 was an instrument of the Russian government, so that the War or Hostile Acts exclusion would apply to the loss.
The Court noted that Merck was a sophisticated and knowledgeable party, but there was no indication that the exclusion had been negotiated since it was in standard language. The Court, therefore, applied, under New Jersey law, the doctrine of construction of insurance contracts that gives prevalence to the reasonable expectations of the insured, even in exceptional circumstances when the literal meaning of the policy is plain.
Merck argued that the attack was not “an official state action,” which I’m surprised wasn’t successfully disputed.
The attacks started today, January 25th, with QNAP devices suddenly finding their files encrypted and file names appended with a .deadbolt file extension.
Instead of creating ransom notes in each folder on the device, the QNAP device’s login page is hijacked to display a screen stating, “WARNING: Your files have been locked by DeadBolt”….
BleepingComputer is aware of at least fifteen victims of the new DeadBolt ransomware attack, with no specific region being targeted.
As with all ransomware attacks against QNAP devices, the DeadBolt attacks only affect devices accessible to the Internet.
As the threat actors claim the attack is conducted through a zero-day vulnerability, it is strongly advised that all QNAP users disconnect their devices from the Internet and place them behind a firewall.
Wittmann says that everyone she spoke to denied being part of this intelligence agency. But a “good indicator,” she writes, would be proof that the postal address for this “federal authority” actually leads to the intelligence service’s apparent offices.
“To understand where mail ends up,” she writes (in translation), “[you can do] a lot of manual research. Or you can simply send a small device that regularly transmits its current position (a so-called AirTag) and see where it lands.”
She sent a parcel with an AirTag and watched through Apple’s Find My system as it was delivered via the Berlin sorting center to a sorting office in Cologne-Ehrenfeld, and then appeared at the Office for the Protection of the Constitution in Cologne.
So an AirTag addressed to a telecommunications authority based in one part of Germany ends up in the offices of an intelligence agency based in another part of the country.
Wittmann’s research is also now detailed in the German Wikipedia entry for the federal telecommunications service. It recounts how following her original discovery in December 2021, subsequent government press conferences have denied that there is such a federal telecommunications service at all.
Here’s the original Medium post, in German.
In a similar story, someone used an AirTag to track her furniture as a moving company lied about its whereabouts.
EDITED TO ADD (2/13): Another AirTag tracking story.
Linux users on Tuesday got a major dose of bad news—a 12-year-old vulnerability in a system tool called Polkit gives attackers unfettered root privileges on machines running most major distributions of the open source operating system.
Previously called PolicyKit, Polkit manages system-wide privileges in Unix-like OSes. It provides a mechanism for nonprivileged processes to safely interact with privileged processes. It also allows users to execute commands with high privileges by using a component called pkexec, followed by the command.
It was discovered in October, and disclosed last week—after most Linux distributions issued patches. Of course, there’s lots of Linux out there that never gets patched, so expect this to be exploited in the wild for a long time.
To be clear, this vulnerability doesn’t give attackers access to the system; they have to get that some other way. But once they have access, it gives them root privileges.
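For context on the mechanism described above: pkexec is invoked with the target command appended, and polkit decides whether to authorize it. A hedged sketch of a wrapper (run_privileged is a name invented here; the vulnerability itself, tracked as CVE-2021-4034 and dubbed PwnKit, abused how pkexec handled an empty argument list, which patched versions now reject):

```python
import shutil
import subprocess

def run_privileged(command):
    """Run a command as root via pkexec, if polkit is installed.

    pkexec authorizes the caller through polkit (prompting for a password
    where required) and then executes the command with root privileges;
    e.g. `pkexec whoami` prints "root" once authorized.
    """
    pkexec = shutil.which("pkexec")
    if pkexec is None:
        return None  # polkit is not installed on this system
    return subprocess.run([pkexec] + list(command),
                          capture_output=True, text=True)
```

The exploit worked by invoking pkexec with no command at all, triggering an out-of-bounds read of its argument array that could be leveraged into an environment-variable write and then root code execution; the fix makes pkexec exit immediately when called without arguments.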
[2022.02.01] There are two bills working their way through Congress that would force companies like Apple to allow competitive app stores. Apple hates this, since it would break its monopoly, and it’s making a variety of security arguments to bolster its argument. I have written a rebuttal:
I would like to address some of the unfounded security concerns raised about these bills. It’s simply not true that this legislation puts user privacy and security at risk. In fact, it’s fairer to say that this legislation puts those companies’ extractive business models at risk. Their claims about risks to privacy and security are both false and disingenuous, and motivated by their own self-interest and not the public interest. App store monopolies cannot protect users from every risk, and they frequently prevent the distribution of important tools that actually enhance security. Furthermore, the alleged risks of third-party app stores and “side-loading” apps pale in comparison to their benefits. These bills will encourage competition, prevent monopolist extortion, and guarantee users a new right to digital self-determination.
Matt Stoller has also written about this.
[2022.02.02] The Open Source Security Foundation announced $10 million in funding from a pool of tech and financial companies, including $5 million from Microsoft and Google, to find vulnerabilities in open source projects:
The “Alpha” side will emphasize vulnerability testing by hand in the most popular open-source projects, developing close working relationships with a handful of the top 200 projects for testing each year. “Omega” will look more at the broader landscape of open source, running automated testing on the top 10,000.
This is an excellent idea. This code ends up in all sorts of critical applications.
Log4j would be a prototypical vulnerability that the Alpha team might look for—an unknown problem in a high-impact project that automated tools would not be able to pick up before a human discovered it. The goal is not to use the personnel engaged with Alpha to replicate dependency analysis, for example.
[2022.02.03] MIT Technology Review published an interview with Gil Herrera, the new head of the NSA’s Research Directorate. There’s a lot of talk about quantum computing, monitoring 5G networks, and the problems of big data:
The math department, often in conjunction with the computer science department, helps tackle one of NSA’s most interesting problems: big data. Despite public reckoning over mass surveillance, NSA famously faces the challenge of collecting such extreme quantities of data that, on top of legal and ethical problems, it can be nearly impossible to sift through all of it to find everything of value. NSA views the kind of “vast access and collection” that it talks about internally as both an achievement and its own set of problems. The field of data science aims to solve them.
“Everyone thinks their data is the messiest in the world, and mine maybe is because it’s taken from people who don’t want us to have it, frankly,” said Herrera’s immediate predecessor at the NSA, the computer scientist Deborah Frincke, during a 2017 talk at Stanford. “The adversary does not speak clearly in English with nice statements into a mic and, if we can’t understand it, send us a clearer statement.”
Making sense of vast stores of unclear, often stolen data in hundreds of languages and even more technical formats remains one of the directorate’s enduring tasks.
A group of lawmakers led by Sen. Richard Blumenthal (D-CT) and Sen. Lindsey Graham (R-SC) have re-introduced the EARN IT Act, an incredibly unpopular bill from 2020 that was dropped in the face of overwhelming opposition. Let’s be clear: the new EARN IT Act would pave the way for a massive new surveillance system, run by private companies, that would roll back some of the most important privacy and security features in technology used by people around the globe. It’s a framework for private actors to scan every message sent online and report violations to law enforcement. And it might not stop there. The EARN IT Act could ensure that anything hosted online—backups, websites, cloud photos, and more—is scanned.
[2022.02.08] Amy Zegart has a new book: Spies, Lies, and Algorithms: The History and Future of American Intelligence. Wired has an excerpt:
In short, data volume and accessibility are revolutionizing sensemaking. The intelligence playing field is leveling—and not in a good way. Intelligence collectors are everywhere, and government spy agencies are drowning in data. This is a radical new world and intelligence agencies are struggling to adapt to it. While secrets once conferred a huge advantage, today open source information increasingly does. Intelligence used to be a race for insight where great powers were the only ones with the capabilities to access secrets. Now everyone is racing for insight and the internet gives them tools to do it. Secrets still matter, but whoever can harness all this data better and faster will win.
The third challenge posed by emerging technologies strikes at the heart of espionage: secrecy. Until now, American spy agencies didn’t have to interact much with outsiders, and they didn’t want to. The intelligence mission meant gathering secrets so we knew more about adversaries than they knew about us, and keeping how we gathered secrets a secret too.
In the digital age, however, secrecy is bringing greater risk because emerging technologies are blurring nearly all the old boundaries of geopolitics. Increasingly, national security requires intelligence agencies to engage the outside world, not stand apart from it.
I have not yet read the book.
Finally, we calculate the number of physical qubits required to break the 256-bit elliptic curve encryption of keys in the Bitcoin network within the small available time frame in which it would actually pose a threat to do so. It would require 317 × 10⁶ physical qubits to break the encryption within one hour using the surface code, a code cycle time of 1 μs, a reaction time of 10 μs, and a physical gate error of 10⁻³. To instead break the encryption within one day, it would require 13 × 10⁶ physical qubits.
In other words: no time soon. Not even remotely soon. IBM’s largest ever superconducting quantum computer is 127 physical qubits.
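Back-of-the-envelope arithmetic on those numbers (taken directly from the quoted text, not recomputed from the underlying surface-code model) shows why: the qubit requirement falls roughly in proportion to the time allowed, and even the one-day figure dwarfs current hardware.

```python
# Figures from the paper's quoted estimates.
qubits_one_hour = 317e6   # break the key within one hour
qubits_one_day = 13e6     # break the key within one day
ibm_largest = 127         # IBM's Eagle processor, 127 physical qubits

# The required qubit count falls roughly in proportion to the time
# allowed: 24x more time needs about 24x fewer qubits.
print(qubits_one_hour / qubits_one_day)  # ~24.4, close to 24 hours / 1 hour

# Even the one-day estimate exceeds today's largest superconducting
# machine by a factor of about 100,000.
print(qubits_one_day / ibm_largest)      # ~102,000
```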
Most security schemes facilitate the coercive processes of an attacker because they disclose metadata about the secret data, such as the name and size of encrypted files. This allows specific and enforceable demands to be made: “Give us the passwords for these three encrypted files with names A, B and C, or else…”. In other words, security often focuses on protecting the confidentiality of data, but lacks deniability.
A scheme with deniability would make even the existence of secret files difficult to prove. This makes it difficult for an attacker to formulate a coherent demand: “There’s no evidence of undisclosed data. Should we even bother to make threats?” A lack of evidence makes it more difficult to make specific and enforceable demands.
Precursor is a device we designed to keep secrets, such as passwords, wallets, authentication tokens, contacts and text messages. We also want it to offer plausible deniability in the face of an attacker that has unlimited access to a physical device, including its root keys, and a set of “broadly known to exist” passwords, such as the screen unlock password and the update signing password. We further assume that an attacker can take a full, low-level snapshot of the entire contents of the FLASH memory, including memory marked as reserved or erased. Finally, we assume that a device, in the worst case, may be subject to repeated, intrusive inspections of this nature.
We created the PDDB (Plausibly Deniable DataBase) to address this threat scenario. The PDDB aims to offer users a real option to plausibly deny the existence of secret data on a Precursor device. This option is strongest in the case of a single inspection. If a device is expected to withstand repeated inspections by the same attacker, then the user has to make a choice between performance and deniability. A “small” set of secrets (relative to the entire disk size, on Precursor that would be 8MiB out of 100MiB total size) can be deniable without a performance impact, but if larger (e.g. 80MiB out of 100MiB total size) sets of secrets must be kept, then archived data needs to be turned over frequently, to foil ciphertext comparison attacks between disk imaging events.
I have been thinking about this sort of thing for many, many years. (Here’s my analysis of one such system.) I have come to realize that the threat model isn’t as simple as Bunnie describes. The goal is to prevent “rubber-hose cryptanalysis,” simply beating the encryption key out of someone. But while a deniable database or file system allows the person to plausibly say that there are no more keys to beat out of them, the perpetrators can never be sure. The value of a normal, undeniable encryption system is that the perpetrators will know when they can stop beating the person—the person can undeniably say that there are no more keys left to reveal.
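To make the deniability property concrete, here is a toy sketch of the underlying trick: if every block on disk starts as random bytes, then encrypted blocks are indistinguishable from free space, and an inspector cannot tell how many secrets exist. This is an invented illustration, not the PDDB’s actual design, and the XOR “cipher” is deliberately trivial; a real implementation would use AES and a proper key-derivation function.

```python
import hashlib
import os

BLOCK = 32  # bytes per block (toy size, matching SHA-256's output length)

def keystream(password, index):
    # Toy per-block keystream derived from the password and block index.
    # NOT a real cipher -- for illustration only.
    return hashlib.sha256(password + index.to_bytes(4, "big")).digest()

def make_disk(num_blocks):
    # Every block starts as random bytes, so free space is already
    # indistinguishable from ciphertext.
    return [bytes(os.urandom(BLOCK)) for _ in range(num_blocks)]

def store(disk, password, index, secret):
    padded = secret.ljust(BLOCK, b"\x00")
    ks = keystream(password, index)
    disk[index] = bytes(a ^ b for a, b in zip(padded, ks))

def load(disk, password, index):
    ks = keystream(password, index)
    return bytes(a ^ b for a, b in zip(disk[index], ks)).rstrip(b"\x00")

disk = make_disk(16)
store(disk, b"hunter2", 5, b"secret note")
assert load(disk, b"hunter2", 5) == b"secret note"
# Without the password there is no way to tell block 5 from the random
# filler blocks: nothing to point at and demand a key for.
```

The weakness Bunnie notes shows up even in this toy: if an inspector images the disk twice, any block that changed between images is evidently in use, which is why repeated inspections force the performance-versus-deniability tradeoff he describes.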
The report notes that:
- The HSE did not have a Chief Information Security Officer (CISO) or a “single responsible owner for cybersecurity at either senior executive or management level to provide leadership and direction.”
- It had no documented cyber incident response runbooks or IT recovery plans (apart from documented AD recovery plans) for recovering from a wide-scale ransomware event.
- Under-resourced Information Security Managers were not performing their business as usual role (including a NIST-based cybersecurity review of systems) but were working on evaluating security controls for the COVID-19 vaccination system. Antivirus software triggered numerous alerts after detecting Cobalt Strike activity but these were not escalated. (The antivirus server was later encrypted in the attack).
- There was no security monitoring capability that was able to effectively detect, investigate and respond to security alerts across HSE’s IT environment or the wider National Healthcare Network (NHN).
- There was a lack of effective patching (updates, bug fixes etc.) across the IT estate and reliance was placed on a single antivirus product that was not monitored or effectively maintained with updates across the estate. (The initial workstation attacked had not had antivirus signatures updated for over a year.)
- Over 30,000 machines were running Windows 7 (out of support since January 2020).
- The initial breach came after a HSE staff member interacted with a malicious Microsoft Office Excel file attached to a phishing email; numerous subsequent alerts were not effectively investigated.
PwC’s crisp list of recommendations in the wake of the incident as well as detail on the business impact of the HSE ransomware attack may prove highly useful guidance on best practice for IT professionals looking to set up a security programme and get it funded.
[2022.02.14] This is a current list of where and when I am scheduled to speak:
- I’m speaking at IT-S Now 2022 in Vienna on June 2, 2022.
- I’m speaking at the 14th International Conference on Cyber Conflict, CyCon 2022, in Tallinn, Estonia on June 3, 2022.
- I’m speaking at the RSA Conference 2022 in San Francisco, June 6-9, 2022.
The list is maintained on this page.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, We Have Root—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.
Copyright © 2022 by Bruce Schneier.