Crypto-Gram

April 15, 2025

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. Improvements in Brute Force Attacks
  2. Is Security Human Factors Research Skewed Towards Western Ideas and Habits?
  3. Critical GitHub Attack
  4. NCSC Releases Post-Quantum Cryptography Timeline
  5. My Writings Are in the LibGen AI Training Corpus
  6. More Countries are Demanding Backdoors to Encrypted Apps
  7. Report on Paragon Spyware
  8. AI Data Poisoning
  9. A Taxonomy of Adversarial Machine Learning Attacks and Mitigations
  10. AIs as Trusted Third Parties
  11. The Signal Chat Leak and the NSA
  12. Cell Phone OPSEC for Border Crossings
  13. Rational Astrologies and Security
  14. Web 3.0 Requires Data Integrity
  15. Troy Hunt Gets Phished
  16. DIRNSA Fired
  17. Arguing Against CALEA
  18. How to Leak to a Journalist
  19. Reimagining Democracy
  20. AI Vulnerability Finding
  21. China Sort of Admits to Being Behind Volt Typhoon
  22. Upcoming Speaking Engagements

Improvements in Brute Force Attacks

[2025.03.17] New paper: “GPU Assisted Brute Force Cryptanalysis of GPRS, GSM, RFID, and TETRA: Brute Force Cryptanalysis of KASUMI, SPECK, and TEA3.”

Abstract: Key lengths in symmetric cryptography are determined with respect to the brute force attacks with current technology. While nowadays at least 128-bit keys are recommended, there are many standards and real-world applications that use shorter keys. In order to estimate the actual threat imposed by using those short keys, precise estimates for attacks are crucial.

In this work we provide optimized implementations of several widely used algorithms on GPUs, leading to interesting insights on the cost of brute force attacks on several real-word applications.

In particular, we optimize KASUMI (used in GPRS/GSM), SPECK (used in RFID communication), and TEA3 (used in TETRA). Our best optimizations allow us to try 2^35.72, 2^36.72, and 2^34.71 keys per second on a single RTX 4090 GPU. Those results improve upon previous results significantly, e.g. our KASUMI implementation is more than 15 times faster than the optimizations given in the CRYPTO’24 paper [ACC+24] improving the main results of that paper by the same factor.

With these optimizations, in order to break GPRS/GSM, RFID, and TETRA communications in a year, one needs around 11, 22 billion, and 1.36 million RTX 4090 GPUs, respectively.

For KASUMI, the time-memory trade-off attacks of [ACC+24] can be performed with 142 RTX 4090 GPUs instead of 2400 RTX 3090 GPUs or, when the same amount of GPUs are used, their table creation time can be reduced to 20.6 days from 348 days, crucial improvements for real world cryptanalytic tasks.
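
As a back-of-the-envelope check of those numbers, the arithmetic fits in a few lines of Python. The key sizes below are assumptions made for illustration (64-bit effective keys for KASUMI as used in GSM, 96 bits for the SPECK variant, and 80 bits for TEA3); they are not taken from the abstract.

    # Back-of-the-envelope check of the GPU counts quoted above.
    # Key sizes are illustrative assumptions, not figures from the paper.
    seconds_per_year = 365 * 24 * 3600

    targets = {
        "KASUMI (GSM)": (64, 2 ** 35.72),
        "SPECK (RFID)": (96, 2 ** 36.72),
        "TEA3 (TETRA)": (80, 2 ** 34.71),
    }

    for name, (key_bits, keys_per_second) in targets.items():
        gpus = 2 ** key_bits / (keys_per_second * seconds_per_year)
        print(f"{name}: ~{gpus:,.0f} RTX 4090s for a one-year exhaustive search")

Run as written, this lands close to the quoted figures (roughly ten GPUs, about twenty-two billion, and about 1.4 million), which suggests the assumed key sizes are in the right ballpark.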

Attacks always get better; they never get worse. None of these is practical yet, and they might never be. But there are certainly more optimizations to come.

EDITED TO ADD (4/14): One of the paper’s authors responds.


Is Security Human Factors Research Skewed Towards Western Ideas and Habits?

[2025.03.18] Really interesting research: “How WEIRD is Usable Privacy and Security Research?” by Ayako A. Hasegawa, Daisuke Inoue, and Mitsuaki Akiyama:

Abstract: In human factor fields such as human-computer interaction (HCI) and psychology, researchers have been concerned that participants mostly come from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) countries. This WEIRD skew may hinder understanding of diverse populations and their cultural differences. The usable privacy and security (UPS) field has inherited many research methodologies from research on human factor fields. We conducted a literature review to understand the extent to which participant samples in UPS papers were from WEIRD countries and the characteristics of the methodologies and research topics in each user study recruiting Western or non-Western participants. We found that the skew toward WEIRD countries in UPS is greater than that in HCI. Geographic and linguistic barriers in the study methods and recruitment methods may cause researchers to conduct user studies locally. In addition, many papers did not report participant demographics, which could hinder the replication of the reported studies, leading to low reproducibility. To improve geographic diversity, we provide the suggestions including facilitate replication studies, address geographic and linguistic issues of study/recruitment methods, and facilitate research on the topics for non-WEIRD populations.

The moral may be that human factors and usability need to be localized.


Critical GitHub Attack

[2025.03.20] This is serious:

A sophisticated cascading supply chain attack has compromised multiple GitHub Actions, exposing critical CI/CD secrets across tens of thousands of repositories. The attack, which originally targeted the widely used “tj-actions/changed-files” utility, is now believed to have originated from an earlier breach of the “reviewdog/action-setup@v1” GitHub Action, according to a report.

[…]

CISA confirmed the vulnerability has been patched in version 46.0.1.

Given that the utility is used by more than 23,000 GitHub repositories, the scale of potential impact has raised significant alarm throughout the developer community.


NCSC Releases Post-Quantum Cryptography Timeline

[2025.03.21] The UK’s National Cyber Security Centre (part of GCHQ) released a timeline—also see their blog post—for migration to quantum-computer-resistant cryptography.

It even made The Guardian.


My Writings Are in the LibGen AI Training Corpus

[2025.03.21] The Atlantic has a search tool that allows you to search for specific works in the “LibGen” database of copyrighted works that Meta used to train its AI models. (The rest of the article is behind a paywall, but not the search tool.)

It’s impossible to know exactly which parts of LibGen Meta used to train its AI, and which parts it might have decided to exclude; this snapshot was taken in January 2025, after Meta is known to have accessed the database, so some titles here would not have been available to download.

Still…interesting.

Searching my name yields 199 results: all of my books in different versions, plus a bunch of shorter items.


More Countries are Demanding Backdoors to Encrypted Apps

[2025.03.24] Last month, I wrote about the UK forcing Apple to break its Advanced Data Protection encryption in iCloud. More recently, both Sweden and France are contemplating mandating backdoors. Both initiatives are attempting to scare people into supporting backdoors, which are—of course—a terrible idea.

Also: “A Feminist Argument Against Weakening Encryption.”

EDITED TO ADD (4/14): The French proposal was voted down.


Report on Paragon Spyware

[2025.03.25] Citizen Lab has a new report on Paragon’s spyware:

Key Findings:

  • Introducing Paragon Solutions. Paragon Solutions was founded in Israel in 2019 and sells spyware called Graphite. The company differentiates itself by claiming it has safeguards to prevent the kinds of spyware abuses that NSO Group and other vendors are notorious for.
  • Infrastructure Analysis of Paragon Spyware. Based on a tip from a collaborator, we mapped out server infrastructure that we attribute to Paragon’s Graphite spyware tool. We identified a subset of suspected Paragon deployments, including in Australia, Canada, Cyprus, Denmark, Israel, and Singapore.
  • Identifying a Possible Canadian Paragon Customer. Our investigation surfaced potential links between Paragon Solutions and the Canadian Ontario Provincial Police, and found evidence of a growing ecosystem of spyware capability among Ontario-based police services.
  • Helping WhatsApp Catch a Zero-Click. We shared our analysis of Paragon’s infrastructure with Meta, who told us that the details were pivotal to their ongoing investigation into Paragon. WhatsApp discovered and mitigated an active Paragon zero-click exploit, and later notified over 90 individuals who it believed were targeted, including civil society members in Italy.
  • Android Forensic Analysis: Italian Cluster. We forensically analyzed multiple Android phones belonging to Paragon targets in Italy (an acknowledged Paragon user) who were notified by WhatsApp. We found clear indications that spyware had been loaded into WhatsApp, as well as other apps on their devices.
  • A Related Case of iPhone Spyware in Italy. We analyzed the iPhone of an individual who worked closely with confirmed Android Paragon targets. This person received an Apple threat notification in November 2024, but no WhatsApp notification. Our analysis showed an attempt to infect the device with novel spyware in June 2024. We shared details with Apple, who confirmed they had patched the attack in iOS 18.
  • Other Surveillance Tech Deployed Against The Same Italian Cluster. We also note 2024 warnings sent by Meta to several individuals in the same organizational cluster, including a Paragon victim, suggesting the need for further scrutiny into other surveillance technology deployed against these individuals.


AI Data Poisoning

[2025.03.26] Cloudflare has a new feature—available to free users as well—that uses AI to generate random pages to feed to AI web crawlers:

Instead of simply blocking bots, Cloudflare’s new system lures them into a “maze” of realistic-looking but irrelevant pages, wasting the crawler’s computing resources. The approach is a notable shift from the standard block-and-defend strategy used by most website protection services. Cloudflare says blocking bots sometimes backfires because it alerts the crawler’s operators that they’ve been detected.

“When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them,” writes Cloudflare. “But while real looking, this content is not actually the content of the site we are protecting, so the crawler wastes time and resources.”

The company says the content served to bots is deliberately irrelevant to the website being crawled, but it is carefully sourced or generated using real scientific facts—such as neutral information about biology, physics, or mathematics—to avoid spreading misinformation (whether this approach effectively prevents misinformation, however, remains unproven).

It’s basically an AI-generated honeypot. And AI scraping is a growing problem:

The scale of AI crawling on the web appears substantial, according to Cloudflare’s data that lines up with anecdotal reports we’ve heard from sources. The company says that AI crawlers generate more than 50 billion requests to their network daily, amounting to nearly 1 percent of all web traffic they process. Many of these crawlers collect website data to train large language models without permission from site owners….

Presumably the crawlers will now have to up both their scraping stealth and their ability to filter out AI-generated content like this. Which means the honeypots will have to get better at detecting scrapers and more stealthy in their fake content. This arms race is likely to go back and forth, wasting a lot of energy in the process.
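
The underlying pattern is straightforward to sketch. What follows is a generic illustration of the maze idea, not Cloudflare’s implementation: the crawler-detection heuristic and the decoy generator are placeholders, and a real deployment would pre-generate pages with an AI model rather than templating them on request.

    # Generic sketch of an AI-crawler "maze" (illustrative; not Cloudflare's code).
    import hashlib

    def looks_like_unwanted_crawler(user_agent: str) -> bool:
        # Placeholder heuristic; real systems rely on behavioral signals,
        # not user-agent strings, which are trivially spoofed.
        return "bot" in user_agent.lower()

    def generate_decoy_page(path: str) -> str:
        # Deterministic filler content plus links that lead deeper into the maze.
        seed = hashlib.sha256(path.encode()).hexdigest()
        links = "".join(f'<a href="/maze/{seed[i:i+8]}">more</a>' for i in range(0, 24, 8))
        return f"<html><body><p>Plausible but irrelevant filler for {path}.</p>{links}</body></html>"

    def handle_request(path: str, user_agent: str, real_page: str) -> str:
        if looks_like_unwanted_crawler(user_agent):
            return generate_decoy_page(path)   # waste the crawler's time and budget
        return real_page                       # humans and permitted bots get the real content

    print(handle_request("/article/42", "ExampleBot/1.0", "<html>real content</html>")[:80])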


A Taxonomy of Adversarial Machine Learning Attacks and Mitigations

[2025.03.27] NIST just released a comprehensive taxonomy of adversarial machine learning attacks and countermeasures.


AIs as Trusted Third Parties

[2025.03.28] This is a truly fascinating paper: “Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties:

Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as achieving certain goals necessitates sharing private data. Traditionally, addressing this challenge has involved either seeking trusted intermediaries or constructing cryptographic protocols that restrict how much data is revealed, such as multi-party computations or zero-knowledge proofs. While significant advances have been made in scaling cryptographic approaches, they remain limited in terms of the size and complexity of applications they can be used for. In this paper, we argue that capable machine learning models can fulfill the role of a trusted third party, thus enabling secure computations for applications that were previously infeasible. In particular, we describe Trusted Capable Model Environments (TCMEs) as an alternative approach for scaling secure computation, where capable machine learning model(s) interact under input/output constraints, with explicit information flow control and explicit statelessness. This approach aims to achieve a balance between privacy and computational efficiency, enabling private inference where classical cryptographic solutions are currently infeasible. We describe a number of use cases that are enabled by TCME, and show that even some simple classic cryptographic problems can already be solved with TCME. Finally, we outline current limitations and discuss the path forward in implementing them.

When I was writing Applied Cryptography way back in 1993, I talked about human trusted third parties (TTPs). This research postulates that someday AIs could fulfill the role of a human TTP, with added benefits like (1) being able to audit their processing, and (2) being able to delete it and erase their knowledge when their work is done. And the possibilities are vast.

Here’s a TTP problem. Alice and Bob want to know whose income is greater, but don’t want to reveal their income to the other. (Assume that both Alice and Bob want the true answer, so neither has an incentive to lie.) A human TTP can solve that easily: Alice and Bob whisper their income to the TTP, who announces the answer. But now the human knows the data. There are cryptographic protocols that can solve this. But we can easily imagine more complicated questions that cryptography can’t solve. “Which of these two novel manuscripts has more sex scenes?” “Which of these two business plans is a riskier investment?” If Alice and Bob can agree on an AI model they both trust, they can feed the model the data, ask the question, get the answer, and then delete the model afterwards. And it’s reasonable for Alice and Bob to trust a model with questions like this. They can take the model into their own lab and test it a gazillion times until they are satisfied that it is fair, accurate, or whatever other properties they want.
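
To make the interaction pattern concrete, here is a minimal sketch of that income comparison run through a stateless “trusted model environment.” The model call is a stand-in (any locally run model that Alice and Bob both trust would take its place); the point is the input/output constraint: both private values go in, a single bit of information comes out, and nothing is retained.

    # Minimal sketch of a TCME-style trusted third party (illustrative only).
    # ask_model() stands in for a locally run model both parties trust; it is
    # a plain function here so the example runs without any model at all.

    def ask_model(prompt: str) -> str:
        alice, bob = (int(x) for x in prompt.split("|")[1:3])
        return "Alice" if alice > bob else "Bob"

    def whose_income_is_higher(alice_income: int, bob_income: int) -> str:
        """Stateless comparison: only the one-bit answer leaves this function."""
        prompt = f"Whose income is higher?|{alice_income}|{bob_income}|Answer with a name."
        answer = ask_model(prompt)
        # Explicit statelessness: inputs are discarded, nothing is logged or stored.
        return answer

    print(whose_income_is_higher(90_000, 120_000))   # -> Bob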

The paper contains several examples where an AI TTP provides real value. This is still mostly science fiction today, but it’s a fascinating thought experiment.


The Signal Chat Leak and the NSA

[2025.03.31] US National Security Advisor Mike Waltz, who started the now-infamous group chat coordinating a US attack against the Yemen-based Houthis on March 15, is seemingly now suggesting that the secure messaging service Signal has security vulnerabilities.

“I didn’t see this loser in the group,” Waltz told Fox News about Atlantic editor in chief Jeffrey Goldberg, whom Waltz invited to the chat. “Whether he did it deliberately or it happened in some other technical mean, is something we’re trying to figure out.”

Waltz’s implication that Goldberg may have hacked his way in was followed by a report from CBS News that the US National Security Agency (NSA) had sent out a bulletin to its employees last month warning them about a security “vulnerability” identified in Signal.

The truth, however, is much more interesting. If Signal has vulnerabilities, then China, Russia, and other US adversaries suddenly have a new incentive to discover them. At the same time, the NSA urgently needs to find and fix any vulnerabilities as quickly as it can—and similarly, ensure that commercial smartphones are free of backdoors—access points that allow people other than a smartphone’s user to bypass the usual security authentication methods to access the device’s contents.

That is essential for anyone who wants to keep their communications private, which should be all of us.

It’s common knowledge that the NSA’s mission is breaking into and eavesdropping on other countries’ networks. (During President George W. Bush’s administration, the NSA conducted warrantless taps into domestic communications as well—surveillance that several district courts ruled to be illegal before those decisions were later overturned by appeals courts. To this day, many legal experts maintain that the program violated federal privacy protections.) But the organization has a secondary, complementary responsibility: to protect US communications from others who want to spy on them. That is to say: While one part of the NSA is listening into foreign communications, another part is stopping foreigners from doing the same to Americans.

Those missions never conflicted during the Cold War, when allied and enemy communications were wholly separate. Today, though, everyone uses the same computers, the same software, and the same networks. That creates a tension.

When the NSA discovers a technological vulnerability in a service such as Signal (or buys one on the thriving clandestine vulnerability market), does it exploit it in secret, or reveal it so that it can be fixed? Since at least 2014, a US government interagency “equities” process has been used to decide whether it is in the national interest to take advantage of a particular security flaw, or to fix it. The trade-offs are often complicated and hard.

Waltz—along with Vice President J.D. Vance, Defense Secretary Pete Hegseth, and the other officials in the Signal group—has just made the trade-offs much tougher to resolve. Signal is both widely available and widely used. Smaller governments that can’t afford their own military-grade encryption use it. Journalists, human rights workers, persecuted minorities, dissidents, corporate executives, and criminals around the world use it. Many of these populations are of great interest to the NSA.

At the same time, as we have now discovered, the app is being used for operational US military traffic. So, what does the NSA do if it finds a security flaw in Signal?

Previously, it might have preferred to keep the flaw quiet and use it to listen to adversaries. Now, if the agency does that, it risks someone else finding the same vulnerability and using it against the US government. And if it was later disclosed that the NSA could have fixed the problem and didn’t, then the results might be catastrophic for the agency.

Smartphones present a similar trade-off. The biggest risk of eavesdropping on a Signal conversation comes from the individual phones that the app is running on. While it’s largely unclear whether the US officials involved had downloaded the app onto personal or government-issued phones—although Witkoff suggested on X that the program was on his “personal devices”—smartphones are consumer devices, not at all suitable for classified US government conversations. An entire industry of spyware companies sells capabilities to remotely hack smartphones for any country willing to pay. More capable countries have more sophisticated operations. Just last year, attacks that were later attributed to China attempted to access both President Donald Trump’s and Vance’s smartphones. Previously, the FBI—as well as law enforcement agencies in other countries—has pressured both Apple and Google to add “backdoors” in their phones to more easily facilitate court-authorized eavesdropping.

These backdoors would create, of course, another vulnerability to be exploited. A separate attack from China last year accessed a similar capability built into US telecommunications networks.

The vulnerabilities equities have swung against weakened smartphone security and toward protecting the devices that senior government officials now use to discuss military secrets. That also means that they have swung against the US government hoarding Signal vulnerabilities—and toward full disclosure.

This is plausibly good news for Americans who want to talk among themselves without having anyone, government or otherwise, listen in. We don’t know what pressure the Trump administration is using to make intelligence services fall into line, but it isn’t crazy to worry that the NSA might again start monitoring domestic communications.

Because of the Signal chat leak, it’s less likely that they’ll use vulnerabilities in Signal to do that. Equally, bad actors such as drug cartels may also feel safer using Signal. Their security against the US government lies in the fact that the US government shares their vulnerabilities. No one wants their secrets exposed.

I have long advocated for a “defense dominant” cybersecurity strategy. As long as smartphones are in the pocket of every government official, police officer, judge, CEO, and nuclear power plant operator—and now that they are being used for what the White House calls “sensitive,” if not outright classified, conversations among cabinet members—we need them to be as secure as possible. And that means no government-mandated backdoors.

We may find out more about how officials—including the vice president of the United States—came to be using Signal on what seem to be consumer-grade smartphones, in an apparent breach of the laws on government records. It’s unlikely that they really thought through the consequences of their actions.

Nonetheless, those consequences are real. Other governments, possibly including US allies, will now have much more incentive to break Signal’s security than they did in the past, and more incentive to hack US government smartphones than they did before March 24.

For just the same reason, the US government has urgent incentives to protect them.

This essay was originally published in Foreign Policy.


Cell Phone OPSEC for Border Crossings

[2025.04.01] I have heard stories of more aggressive interrogation of electronic devices at US border crossings. I know a lot about securing computers, but very little about securing phones.

Are there easy ways to delete data—files, photos, etc.—on phones so it can’t be recovered? Does resetting a phone to factory defaults erase data, or is it still recoverable? That is, does the reset erase the old encryption key, or just sever the password that accesses that key? When the phone is rebooted, are deleted files still available?
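
Much of this hinges on whether a reset is a “crypto-erase.” Modern iPhones and Android devices encrypt user data, so if a reset genuinely destroys the key material, the ciphertext left behind on flash is unrecoverable without breaking the cipher itself. Here is a minimal sketch of that principle, using the third-party Python cryptography package (illustrative; not how any particular phone implements it):

    # Crypto-erase in miniature: destroy the key and the data is gone,
    # even if the ciphertext itself is never overwritten.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.exceptions import InvalidTag

    key = AESGCM.generate_key(bit_length=256)   # stands in for a per-device/per-file key
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"private photos and messages", None)

    key = None   # "factory reset": erase the key, leave the ciphertext in place

    try:
        # Any attempt without the original key fails; recovery now means breaking AES.
        AESGCM(AESGCM.generate_key(bit_length=256)).decrypt(nonce, ciphertext, None)
    except InvalidTag:
        print("wrong key: ciphertext is unrecoverable")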

We need answers for both iPhones and Android phones. And it’s not just the US; the world is going to become a more dangerous place to oppose state power.


Rational Astrologies and Security

[2025.04.02] John Kelsey and I wrote a short paper for the Rossfest Festschrift: “Rational Astrologies and Security“:

There is another non-security way that designers can spend their security budget: on making their own lives easier. Many of these fall into the category of what has been called rational astrology. First identified by Steve Randy Waldman [Wal12], the term refers to something people treat as though it works, generally for social or institutional reasons, even when there’s little evidence that it works—and sometimes despite substantial evidence that it does not.

[…]

Both security theater and rational astrologies may seem irrational, but they are rational from the perspective of the people making the decisions about security. Security theater is often driven by information asymmetry: people who don’t understand security can be reassured with cosmetic or psychological measures, and sometimes that reassurance is important. It can be better understood by considering the many non-security purposes of a security system. A monitoring bracelet system that pairs new mothers and their babies may be security theater, considering the incredibly rare instances of baby snatching from hospitals. But it makes sense as a security system designed to alleviate fears of new mothers [Sch07].

Rational astrologies in security result from two considerations. The first is the principal-agent problem: The incentives of the individual or organization making the security decision are not always aligned with the incentives of the users of that system. The user’s well-being may not weigh as heavily on the developer’s mind as the difficulty of convincing his boss to take a chance by ignoring an outdated security rule or trying some new technology.

The second consideration that can lead to a rational astrology is where there is a social or institutional need for a solution to a problem for which there is actually not a particularly good solution. The organization needs to reassure regulators, customers, or perhaps even a judge and jury that “they did all that could be done” to avoid some problem—even if “all that could be done” wasn’t very much.


Web 3.0 Requires Data Integrity

[2025.04.03] If you’ve ever taken a computer security class, you’ve probably learned about the three legs of computer security—confidentiality, integrity, and availability—known as the CIA triad. When we talk about a system being secure, that’s what we’re referring to. All are important, but to different degrees in different contexts. In a world populated by artificial intelligence (AI) systems and AI agents, integrity will be paramount.

What is data integrity? It’s ensuring that no one can modify data—that’s the security angle—but it’s much more than that. It encompasses accuracy, completeness, and quality of data—all over both time and space. It’s preventing accidental data loss; the “undo” button is a primitive integrity measure. It’s also making sure that data is accurate when it’s collected—that it comes from a trustworthy source, that nothing important is missing, and that it doesn’t change as it moves from format to format. The ability to restart your computer is another integrity measure.
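
At its most basic, the security angle of integrity is detectability: a keyed checksum or digital signature over the data lets whoever holds the key notice any modification after the fact. A minimal sketch using Python’s standard library:

    # Detecting modification with a keyed hash (HMAC): the most basic
    # technical integrity control.
    import hashlib, hmac, os

    key = os.urandom(32)                       # secret held by the verifier
    record = b"balance=1000;owner=alice"
    tag = hmac.new(key, record, hashlib.sha256).digest()

    def unchanged(data: bytes) -> bool:
        return hmac.compare_digest(tag, hmac.new(key, data, hashlib.sha256).digest())

    print(unchanged(b"balance=1000;owner=alice"))   # True
    print(unchanged(b"balance=9999;owner=alice"))   # False: tampering detected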

The CIA triad has evolved with the Internet. The first iteration of the Web—Web 1.0 of the 1990s and early 2000s—prioritized availability. This era saw organizations and individuals rush to digitize their content, creating what has become an unprecedented repository of human knowledge. Organizations worldwide established their digital presence, leading to massive digitization projects where quantity took precedence over quality. The emphasis on making information available overshadowed other concerns.

As Web technologies matured, the focus shifted to protecting the vast amounts of data flowing through online systems. This is Web 2.0: the Internet of today. Interactive features and user-generated content transformed the Web from a read-only medium to a participatory platform. The increase in personal data, and the emergence of interactive platforms for e-commerce, social media, and online everything demanded both data protection and user privacy. Confidentiality became paramount.

We stand at the threshold of a new Web paradigm: Web 3.0. This is a distributed, decentralized, intelligent Web. Peer-to-peer social-networking systems promise to break the tech monopolies’ control on how we interact with each other. Tim Berners-Lee’s open W3C protocol, Solid, represents a fundamental shift in how we think about data ownership and control. A future filled with AI agents requires verifiable, trustworthy personal data and computation. In this world, data integrity takes center stage.

For example, the 5G communications revolution isn’t just about faster access to videos; it’s about Internet-connected things talking to other Internet-connected things without our intervention. Without data integrity, for example, there’s no real-time car-to-car communications about road movements and conditions. There’s no drone swarm coordination, smart power grid, or reliable mesh networking. And there’s no way to securely empower AI agents.

In particular, AI systems require robust integrity controls because of how they process data. This means technical controls to ensure data is accurate, that its meaning is preserved as it is processed, that it produces reliable results, and that humans can reliably alter it when it’s wrong. Just as a scientific instrument must be calibrated to measure reality accurately, AI systems need integrity controls that preserve the connection between their data and ground truth.

This goes beyond preventing data tampering. It means building systems that maintain verifiable chains of trust between their inputs, processing, and outputs, so humans can understand and validate what the AI is doing. AI systems need clean, consistent, and verifiable control processes to learn and make decisions effectively. Without this foundation of verifiable truth, AI systems risk becoming a series of opaque boxes.
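
One way to make “verifiable chains of trust” concrete is a hash chain of provenance records: each stage of the pipeline commits to its inputs and to the previous record, so later tampering with any stage is detectable. This is a generic sketch, not any particular framework’s API; the stage names and payload fields are made up for illustration.

    # Hash-chained provenance records for an ML pipeline (illustrative).
    import hashlib, json

    def make_record(stage: str, payload: dict, prev_hash: str) -> dict:
        body = {"stage": stage, "payload": payload, "prev": prev_hash}
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        return body

    r1 = make_record("dataset", {"digest": "<sha256 of training data>"}, prev_hash="")
    r2 = make_record("training", {"model": "classifier-v3", "data": r1["hash"]}, prev_hash=r1["hash"])
    r3 = make_record("inference", {"request": "req-42", "model": r2["hash"]}, prev_hash=r2["hash"])

    def verify(chain: list) -> bool:
        prev = ""
        for rec in chain:
            body = {k: rec[k] for k in ("stage", "payload", "prev")}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True

    print(verify([r1, r2, r3]))   # True; altering any record breaks the chain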

Recent history provides many sobering examples of integrity failures that naturally undermine public trust in AI systems. Machine-learning (ML) models trained without thought on expansive datasets have produced predictably biased results in hiring systems. Autonomous vehicles with incorrect data have made incorrect—and fatal—decisions. Medical diagnosis systems have given flawed recommendations without being able to explain themselves. A lack of integrity controls undermines AI systems and harms people who depend on them.

They also highlight how AI integrity failures can manifest at multiple levels of system operation. At the training level, data may be subtly corrupted or biased even before model development begins. At the model level, mathematical foundations and training processes can introduce new integrity issues even with clean data. During execution, environmental changes and runtime modifications can corrupt previously valid models. And at the output level, the challenge of verifying AI-generated content and tracking it through system chains creates new integrity concerns. Each level compounds the challenges of the ones before it, ultimately manifesting in human costs, such as reinforced biases and diminished agency.

Think of it like protecting a house. You don’t just lock a door; you also use safe concrete foundations, sturdy framing, a durable roof, secure double-pane windows, and maybe motion-sensor cameras. Similarly, we need digital security at every layer to ensure the whole system can be trusted.

This layered approach to understanding security becomes increasingly critical as AI systems grow in complexity and autonomy, particularly with large language models (LLMs) and deep-learning systems making high-stakes decisions. We need to verify the integrity of each layer when building and deploying digital systems that impact human lives and societal outcomes.

At the foundation level, bits are stored in computer hardware. This represents the most basic encoding of our data, model weights, and computational instructions. The next layer up is the file system architecture: the way those binary sequences are organized into structured files and directories that a computer can efficiently access and process. In AI systems, this includes how we store and organize training data, model checkpoints, and hyperparameter configurations.

On top of that are the application layers—the programs and frameworks, such as PyTorch and TensorFlow, that allow us to train models, process data, and generate outputs. This layer handles the complex mathematics of neural networks, gradient descent, and other ML operations.

Finally, at the user-interface level, we have visualization and interaction systems—what humans actually see and engage with. For AI systems, this could be everything from confidence scores and prediction probabilities to generated text and images or autonomous robot movements.

Why does this layered perspective matter? Vulnerabilities and integrity issues can manifest at any level, so understanding these layers helps security experts and AI researchers perform comprehensive threat modeling. This enables the implementation of defense-in-depth strategies—from cryptographic verification of training data to robust model architectures to interpretable outputs. This multi-layered security approach becomes especially crucial as AI systems take on more autonomous decision-making roles in critical domains such as healthcare, finance, and public safety. We must ensure integrity and reliability at every level of the stack.

The risks of deploying AI without proper integrity control measures are severe and often underappreciated. When AI systems operate without sufficient security measures to handle corrupted or manipulated data, they can produce subtly flawed outputs that appear valid on the surface. The failures can cascade through interconnected systems, amplifying errors and biases. Without proper integrity controls, an AI system might train on polluted data, make decisions based on misleading assumptions, or have outputs altered without detection. The results of this can range from degraded performance to catastrophic failures.

We see four areas where integrity is paramount in this Web 3.0 world. The first is granular access, which allows users and organizations to maintain precise control over who can access and modify what information and for what purposes. The second is authentication—much more nuanced than the simple “Who are you?” authentication mechanisms of today—which ensures that data access is properly verified and authorized at every step. The third is transparent data ownership, which allows data owners to know when and how their data is used and creates an auditable trail of data provenance. Finally, the fourth is access standardization: common interfaces and protocols that enable consistent data access while maintaining security.

Luckily, we’re not starting from scratch. There are open W3C protocols that address some of this: decentralized identifiers for verifiable digital identity, the verifiable credentials data model for expressing digital credentials, ActivityPub for decentralized social networking (that’s what Mastodon uses), Solid for distributed data storage and retrieval, and WebAuthn for strong authentication standards. By providing standardized ways to verify data provenance and maintain data integrity throughout its lifecycle, Web 3.0 creates the trusted environment that AI systems require to operate reliably. This architectural leap for integrity control in the hands of users helps ensure that data remains trustworthy from generation and collection through processing and storage.

Integrity is essential to trust, on both technical and human levels. Looking forward, integrity controls will fundamentally shape AI development by moving from optional features to core architectural requirements, much as SSL certificates evolved from a banking luxury to a baseline expectation for any Web service.

Web 3.0 protocols can build integrity controls into their foundation, creating a more reliable infrastructure for AI systems. Today, we take availability for granted; anything less than 100% uptime for critical websites is intolerable. In the future, we will need the same assurances for integrity. Success will require following practical guidelines for maintaining data integrity throughout the AI lifecycle—from data collection through model training and finally to deployment, use, and evolution. These guidelines will address not just technical controls but also governance structures and human oversight, similar to how privacy policies evolved from legal boilerplate into comprehensive frameworks for data stewardship. Common standards and protocols, developed through industry collaboration and regulatory frameworks, will ensure consistent integrity controls across different AI systems and applications.

Just as the HTTPS protocol created a foundation for trusted e-commerce, it’s time for new integrity-focused standards to enable the trusted AI services of tomorrow.

This essay was written with Davi Ottenheimer, and originally appeared in Communications of the ACM.


Troy Hunt Gets Phished

[2025.04.04] In case you need proof that anyone, even someone who does cybersecurity for a living, can fall for a phishing attack, Troy Hunt has a long, iterative story on his webpage about how he got phished. Worth reading.

EDITED TO ADD (4/14): Commentary from Adam Shostack and Cory Doctorow.


DIRNSA Fired

[2025.04.07] In “Secrets and Lies” (2000), I wrote:

It is poor civic hygiene to install technologies that could someday facilitate a police state.

It’s something a bunch of us were saying at the time, in reference to the NSA’s vast surveillance capabilities.

I have been thinking of that quote a lot as I read news stories of President Trump firing the Director of the National Security Agency, General Timothy Haugh.

A couple of weeks ago, I wrote:

We don’t know what pressure the Trump administration is using to make intelligence services fall into line, but it isn’t crazy to worry that the NSA might again start monitoring domestic communications.

The NSA already spies on Americans in a variety of ways. But that’s always been a sideline to its main mission: spying on the rest of the world. Once Trump replaces Haugh with a loyalist, the NSA’s vast surveillance apparatus can be refocused domestically.

Giving that agency all those powers in the 1990s, in the 2000s after the terrorist attacks of 9/11, and in the 2010s was always a mistake. I fear that we are about to learn how big a mistake it was.

Here’s PGP creator Phil Zimmermann in 1996, spelling it out even more clearly:

The Clinton Administration seems to be attempting to deploy and entrench a communications infrastructure that would deny the citizenry the ability to protect its privacy. This is unsettling because in a democracy, it is possible for bad people to occasionally get elected—sometimes very bad people. Normally, a well-functioning democracy has ways to remove these people from power. But the wrong technology infrastructure could allow such a future government to watch every move anyone makes to oppose it. It could very well be the last government we ever elect.

When making public policy decisions about new technologies for the government, I think one should ask oneself which technologies would best strengthen the hand of a police state. Then, do not allow the government to deploy those technologies. This is simply a matter of good civic hygiene.


Arguing Against CALEA

[2025.04.08] At a Congressional hearing earlier this week, Matt Blaze made the point that CALEA, the 1994 law that forces telecoms to make phone calls wiretappable, is outdated in today’s threat environment and should be rethought:

In other words, while the legally-mandated CALEA capability requirements have changed little over the last three decades, the infrastructure that must implement and protect it has changed radically. This has greatly expanded the “attack surface” that must be defended to prevent unauthorized wiretaps, especially at scale. The job of the illegal eavesdropper has gotten significantly easier, with many more options and opportunities for them to exploit. Compromising our telecommunications infrastructure is now little different from performing any other kind of computer intrusion or data breach, a well-known and endemic cybersecurity problem. To put it bluntly, something like Salt Typhoon was inevitable, and will likely happen again unless significant changes are made.

This is the access that the Chinese threat actor Salt Typhoon used to spy on Americans:

The Wall Street Journal first reported Friday that a Chinese government hacking group dubbed Salt Typhoon broke into three of the largest U.S. internet providers, including AT&T, Lumen (formerly CenturyLink), and Verizon, to access systems they use for facilitating customer data to law enforcement and governments. The hacks reportedly may have resulted in the “vast collection of internet traffic” from the telecom and internet giants. CNN and The Washington Post also confirmed the intrusions and that the U.S. government’s investigation is in its early stages.


How to Leak to a Journalist

[2025.04.09] Neiman Lab has some good advice on how to leak a story to a journalist.


Reimagining Democracy

[2025.04.10] Imagine that all of us—all of society—have landed on some alien planet and need to form a government: clean slate. We do not have any legacy systems from the United States or any other country. We do not have any special or unique interests to perturb our thinking. How would we govern ourselves? It is unlikely that we would use the systems we have today. Modern representative democracy was the best form of government that eighteenth-century technology could invent. The twenty-first century is very different: scientifically, technically, and philosophically. For example, eighteenth-century democracy was designed under the assumption that travel and communications were both hard.

Indeed, the very idea of representative government was a hack to get around technological limitations. Voting is easier now. Does it still make sense for all of us living in the same place to organize every few years and choose one of us to go to a single big room far away and make laws in our name? Representative districts are organized around geography because that was the only way that made sense two hundred-plus years ago. But we do not need to do it that way anymore. We could organize representation by age: one representative for the thirty-year-olds, another for the forty-year-olds, and so on. We could organize representation randomly: by birthday, perhaps. We can organize in any way we want. American citizens currently elect people to federal posts for terms ranging from two to six years. Would ten years be better for some posts? Would ten days be better for others? There are lots of possibilities. Maybe we can make more use of direct democracy by way of plebiscites. Certainly we do not want all of us, individually, to vote on every amendment to every bill, but what is the optimal balance between votes made in our name and ballot initiatives that we all vote on?

For the past three years, I have organized a series of annual two-day workshops to discuss these and other such questions.1 For each event, I brought together fifty people from around the world: political scientists, economists, law professors, experts in artificial intelligence, activists, government types, historians, science-fiction writers, and more. We did not come up with any answers to our questions—and I would have been surprised if we had—but several themes emerged from the event. Misinformation and propaganda was a theme, of course, and the inability to engage in rational policy discussions when we cannot agree on facts. The deleterious effects of optimizing a political system for economic outcomes was another theme. Given the ability to start over, would anyone design a system of government for the near-term financial interest of the wealthiest few? Another theme was capitalism and how it is or is not intertwined with democracy. While the modern market economy made a lot of sense in the industrial age, it is starting to fray in the information age. What comes after capitalism, and how will it affect the way we govern ourselves?

Many participants examined the effects of technology, especially artificial intelligence (AI). We looked at whether—and when—we might be comfortable ceding power to an AI system. Sometimes deciding is easy. I am happy for an AI system to figure out the optimal timing of traffic lights to ensure the smoothest flow of cars through my city. When will we be able to say the same thing about the setting of interest rates? Or taxation? How would we feel about an AI device in our pocket that voted in our name, thousands of times per day, based on preferences that it inferred from our actions? Or how would we feel if an AI system could determine optimal policy solutions that balanced every voter’s preferences: Would it still make sense to have a legislature and representatives? Possibly we should vote directly for ideas and goals instead, and then leave the details to the computers.

These conversations became more pointed in the second and third years of our workshop, after generative AI exploded onto the internet. Large language models are poised to write laws, enforce both laws and regulations, act as lawyers and judges, and plan political strategy. How this capacity will compare to human expertise and capability is still unclear, but the technology is changing quickly and dramatically. We will not have AI legislators anytime soon, but just as today we accept that all political speeches are professionally written by speechwriters, will we accept that future political speeches will all be written by AI devices? Will legislators accept AI-written legislation, especially when that legislation includes a level of detail that human-based legislation generally does not? And if so, how will that change affect the balance of power between the legislature and the administrative state? Most interestingly, what happens when the AI tools we use to both write and enforce laws start to suggest policy options that are beyond human understanding? Will we accept them, because they work? Or will we reject a system of governance where humans are only nominally in charge?

Scale was another theme of the workshops. The size of modern governments reflects the technology at the time of their founding. European countries and the early American states are a particular size because that was a governable size in the eighteenth and nineteenth centuries. Larger governments—those of the United States as a whole and of the European Union—reflect a world where travel and communications are easier. Today, though, the problems we have are either local, at the scale of cities and towns, or global. Do we really have need for a political unit the size of France or Virginia? Or is it a mixture of scales that we really need, one that moves effectively between the local and the global?

As to other forms of democracy, we discussed one from history and another made possible by today’s technology. Sortition is a system of choosing political officials randomly. We use it today when we pick juries, but both the ancient Greeks and some cities in Renaissance Italy used it to select major political officials. Today, several countries—largely in Europe—are using the process to decide policy on complex issues. We might randomly choose a few hundred people, representative of the population, to spend a few weeks being briefed by experts, debating the issues, and then deciding on environmental regulations, or a budget, or pretty much anything.

“Liquid democracy” is a way of doing away with elections altogether. The idea is that everyone has a vote and can assign it to anyone they choose. A representative collects the proxies assigned to him or her and can either vote directly on the issues or assign all the proxies to someone else. Perhaps proxies could be divided: this person for economic matters, another for health matters, a third for national defense, and so on. In the purer forms of this system, people might transfer their votes to someone else at any time. There would be no more election days: vote counts might change every day.
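
A toy tally makes the mechanics concrete, along with one obvious pitfall: delegation cycles. This is a sketch of the general idea, not any deployed system.

    # Toy liquid-democracy tally: each person either votes directly or delegates
    # to someone else; chains of delegation are followed, and cycles abstain.

    direct_votes = {"alice": "yes", "bob": "no"}
    delegations = {"carol": "alice", "dave": "carol", "erin": "frank", "frank": "erin"}

    def resolve(voter):
        seen = set()
        while voter not in direct_votes:
            if voter in seen or voter not in delegations:
                return None               # cycle or dangling delegation: abstain
            seen.add(voter)
            voter = delegations[voter]
        return direct_votes[voter]

    tally = {}
    for person in ["alice", "bob", "carol", "dave", "erin", "frank"]:
        choice = resolve(person)
        if choice is not None:
            tally[choice] = tally.get(choice, 0) + 1

    print(tally)   # {'yes': 3, 'no': 1}; erin and frank delegate in a circle and abstain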

And then, there is the question of participation and, more generally, whose interests are taken into account. Early democracies were really not democracies at all; they limited participation by gender, race, and land ownership. These days, to achieve a more comprehensive electorate we could lower the voting age. But, of course, even children too young to vote have rights, and in some cases so do other species. Should future generations be given a “voice,” whatever that means? What about nonhumans, or whole ecosystems? Should everyone have the same volume and type of voice? Right now, in the United States, the very wealthy have much more influence than others do. Should we encode that superiority explicitly? Perhaps younger people should have a more powerful vote than everyone else. Or maybe older people should.

In the workshops, those questions led to others about the limits of democracy. All democracies have boundaries limiting what the majority can decide. We are not allowed to vote Common Knowledge out of existence, for example, but can generally regulate speech to some degree. We cannot vote, in an election, to jail someone, but we can craft laws that make a particular action illegal. We all have the right to certain things that cannot be taken away from us. In the community of our future, what should be our rights as individuals? What should be the rights of society, superseding those of individuals?

Personally, I was most interested, at each of the three workshops, in how political systems fail. As a security technologist, I study how complex systems are subverted—hacked, in my parlance—for the benefit of a few at the expense of the many. Think of tax loopholes, or tricks to avoid government regulation. These hacks are common today, and AI tools will make them easier to find—and even to design—in the future. I would want any government system to be resistant to trickery. Or, to put it another way: I want the interests of each individual to align with the interests of the group at every level. We have never had a system of government with this property, but—in a time of existential risks such as climate change—it is important that we develop one.

Would this new system of government even be called “democracy”? I truly do not know.

Such speculation is not practical, of course, but still is valuable. Our workshops did not produce final answers and were not intended to do so. Our discourse was filled with suggestions about how to patch our political system where it is fraying. People regularly debate changes to the US Electoral College, or the process of determining voting districts, or the setting of term limits. But those are incremental changes. It is difficult to find people who are thinking more radically: looking beyond the horizon—not at what is possible today but at what may be possible eventually. Thinking incrementally is critically important, but it is also myopic. It represents a hill-climbing strategy of continuous but quite limited improvements. We also need to think about discontinuous changes that we cannot easily get to from here; otherwise, we may be forever stuck at local maxima. And while true innovation in politics is a lot harder than innovation in technology, especially without a violent revolution forcing changes on us, it is something that we as a species are going to have to get good at, one way or another.

Our workshop will reconvene for a fourth meeting in December 2025.

Note

  1. The First International Workshop on Reimagining Democracy (IWORD) was held December 7-8, 2022. The Second IWORD was held December 12-13, 2023. Both took place at the Harvard Kennedy School. The sponsors were the Ford Foundation, the Knight Foundation, and the Ash and Belfer Centers of the Kennedy School. See Schneier, “Recreating Democracy” and Schneier, “Second Interdisciplinary Workshop.”

This essay was originally published in Common Knowledge.


AI Vulnerability Finding

[2025.04.11] Microsoft is reporting that its AI systems are able to find new vulnerabilities in source code:

Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison.

Additionally, 9 buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, which require physical access to exploit.

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device.

Nothing major here. These aren’t exploitable out of the box. But that an AI system can do this at all is impressive, and I expect their capabilities to continue to improve.


China Sort of Admits to Being Behind Volt Typhoon

[2025.04.14] The Wall Street Journal has the story:

Chinese officials acknowledged in a secret December meeting that Beijing was behind a widespread series of alarming cyberattacks on U.S. infrastructure, according to people familiar with the matter, underscoring how hostilities between the two superpowers are continuing to escalate.

The Chinese delegation linked years of intrusions into computer networks at U.S. ports, water utilities, airports and other targets, to increasing U.S. policy support for Taiwan, the people, who declined to be named, said.

The admission wasn’t explicit:

The Chinese official’s remarks at the December meeting were indirect and somewhat ambiguous, but most of the American delegation in the room interpreted it as a tacit admission and a warning to the U.S. about Taiwan, a former U.S. official familiar with the meeting said.

No surprise.


Upcoming Speaking Engagements

[2025.04.14] This is a current list of where and when I am scheduled to speak:

  • I’m giving an online talk on AI and trust for the Weizenbaum Institute on April 24, 2025 at 2:00 PM CEST (8:00 AM ET).

The list is maintained on this page.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, A Hacker’s Mind—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

Copyright © 2025 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.