<h2>Google Wants to Transition to Post-Quantum Cryptography by 2029</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/google-wants-to-transition-to-post-quantum-cryptography-by-2029.html"><strong>[2026.04.06]</strong></a> Google <a href="https://blog.google/innovation-and-ai/technology/safety-security/cryptography-migration-timeline/">says</a> that it will fully transition to post-quantum cryptography by 2029. I think this is a good move, not because I think we will have a useful quantum computer anywhere near that year, but because crypto-agility is always a good thing.
Slashdot <a href="https://it.slashdot.org/story/26/03/27/2123239/google-moves-post-quantum-encryption-timeline-up-to-2029">thread</a>.
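Crypto-agility is easier to appreciate with a concrete sketch. The idea (illustrated here with hash functions from Python's standard library; the registry and names are my own, not Google's design) is that code selects algorithms through a single indirection point, so migrating to a post-quantum scheme becomes a one-line configuration change rather than an edit to every call site:

```python
# A minimal illustration of crypto-agility (hypothetical, not any vendor's
# actual design): callers name algorithms by identifier, and the mapping from
# identifier to implementation lives in one registry. Swapping in a new
# (e.g., post-quantum) primitive means adding a registry entry and changing
# the configured default -- no call sites change.
import hashlib
from typing import Callable

HASHES: dict[str, Callable[[bytes], bytes]] = {
    "sha256": lambda m: hashlib.sha256(m).digest(),
    "sha3-256": lambda m: hashlib.sha3_256(m).digest(),
    # A future "pq-hash" entry would slot in here without touching callers.
}

DEFAULT_ALG = "sha256"  # the only line that changes during a migration

def digest(message: bytes, alg: str = DEFAULT_ALG) -> bytes:
    """Hash `message` with the configured algorithm."""
    return HASHES[alg](message)
```

The same pattern applies to signatures and key exchange: systems built this way can adopt post-quantum algorithms on a schedule, which is the agility being praised here.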
<h2>New Mexico's Meta Ruling and Encryption</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/new-mexicos-meta-ruling-and-encryption.html"><strong>[2026.04.06]</strong></a> Mike Masnick <a href="https://www.techdirt.com/2026/03/26/everyone-cheering-the-social-media-addiction-verdicts-against-meta-should-understand-what-theyre-actually-cheering-for/">points out</a> that the recent New Mexico court ruling against Meta has some bad implications for end-to-end encryption, and security in general:
<blockquote>If the “design choices create liability” framework seems worrying in the abstract, the New Mexico case provides a concrete example of where it leads in practice.
One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.
The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”
Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.
End-to-end encryption protects billions of people from surveillance, data breaches, authoritarian governments, stalkers, and domestic abusers. It’s one of the most important privacy and security tools ordinary people have. Every major security expert and civil liberties organization in the world has argued for stronger encryption, not weaker.
But under the “design liability” theory, implementing encryption becomes evidence of negligence, because a small number of bad actors also use encrypted communications. The logic applies to literally every communication tool ever invented. Predators also use the postal service, telephones, and in-person conversation. The encryption <i>itself</i> harms no one. Like infinite scroll and autoplay, it is inert without the choices of bad actors - choices made by <i>people,</i> not by the platform’s design.
The incentive this creates goes far beyond encryption, and it’s bad. If any product improvement that protects the majority of users can be held against you because a tiny fraction of bad actors exploit it, companies will simply stop making those improvements. Why add encryption if it becomes Exhibit A in a future lawsuit? Why implement any privacy-protective feature if a plaintiff’s lawyer will characterize it as “shielding bad actors”?
And it gets worse. Some of the most damaging evidence in both trials came from internal company documents where employees raised concerns about safety risks and discussed tradeoffs. These were played up in the media (and the courtroom) as “smoking guns.” But that means no company is going to allow anyone to raise concerns ever again. That’s very, very bad.
In a sane legal environment, you <i>want</i> companies to have these internal debates. You want engineers and safety teams to flag potential risks, wrestle with difficult tradeoffs, and document their reasoning. But when those good-faith deliberations become plaintiff’s exhibits presented to a jury as proof that “they knew and did it anyway,” the rational corporate response is to stop putting anything in writing. Stop doing risk assessments. Stop asking hard questions internally.
The lesson every general counsel in Silicon Valley is learning right now: ignorance is safer than inquiry. That makes everyone less safe, not more.</blockquote>
The essay has a lot more: about Section 230, about competition in this space, about the myopic nature of the ruling. Go <a href="https://www.techdirt.com/2026/03/26/everyone-cheering-the-social-media-addiction-verdicts-against-meta-should-understand-what-theyre-actually-cheering-for/">read it</a>.
<h2>Hong Kong Police Can Force You to Reveal Your Encryption Keys</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/hong-kong-police-can-force-you-to-reveal-your-encryption-keys.html"><strong>[2026.04.07]</strong></a> According to a new law, the Hong Kong police can <a href="https://www.msn.com/en-us/news/world/ar-AA1ZwfSE">demand</a> that you reveal the encryption keys protecting your computer, phone, hard drives, etc.—even if you are just transiting the airport.
<blockquote>In a security alert dated March 26, the U.S. Consulate General said that, on March 23, 2026, Hong Kong authorities changed the rules governing enforcement of the National Security Law. Under the revised framework, police can require individuals to provide passwords or other assistance to access personal electronic devices, including cellphones and laptops.
The consulate warned that refusal to comply is now a criminal offense. It also said authorities have expanded powers to take and keep personal electronic devices as evidence if they claim the devices are linked to national security offenses.</blockquote>
<h2>Cybersecurity in the Age of Instant Software</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/cybersecurity-in-the-age-of-instant-software.html"><strong>[2026.04.07]</strong></a> AI is rapidly changing how software is written, deployed, and used. Trends point to a future where AIs can write custom software quickly and easily: "instant software." Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet, for example—and delete it when they’re done using it than to buy one commercially. Future systems could include a mix: both traditional long-term software and ephemeral instant software that is constantly being written, deployed, modified, and deleted.
AI is changing cybersecurity as well. In particular, AI systems are getting better at finding and patching vulnerabilities in code. This has implications for both attackers and defenders, depending on the ways this and related technologies improve.
In this essay, I want to take an optimistic view of AI’s progress, and to speculate about what AI-dominated cybersecurity in an age of instant software might look like. There are a number of unknowns that will factor into how the arms race between attacker and defender might play out.
<h3>How flaw discovery might work</h3>
On the attacker side, the ability of AIs to automatically find and exploit vulnerabilities has increased dramatically over the past few months. We are already seeing both <a href="https://www.anthropic.com/news/disrupting-AI-espionage">government</a> and <a href="https://www.eset.com/us/about/newsroom/research/eset-discovers-promptlock-the-first-ai-powered-ransomware/">criminal</a> hackers using AI to attack systems. The exploitation part is critical here, because it gives an unsophisticated attacker capabilities far beyond their understanding. As AIs get better, expect more attackers to automate their attacks using AI. And as individuals and organizations can increasingly run powerful AI models locally, AI companies <a href="https://openai.com/global-affairs/disrupting-malicious-uses-of-ai-october-2025/">monitoring and disrupting</a> malicious AI use will become increasingly irrelevant.
Expect open-source software, including open-source libraries incorporated in proprietary software, to be the most targeted, because vulnerabilities are easier to find in source code. Unknown No. 1 is how well AI vulnerability discovery tools will work against closed-source commercial software packages. I believe they will soon be good enough to find vulnerabilities just by analyzing a copy of a shipped product, without access to the source code. If that’s true, commercial software will be vulnerable as well.
Particularly vulnerable will be software in IoT devices: things like internet-connected cars, refrigerators, and security cameras. Also industrial IoT software in our internet-connected power grid, oil refineries and pipelines, chemical plants, and so on. IoT software tends to be of much lower quality, and industrial IoT software tends to be legacy.
Instant software is differently vulnerable. It’s not mass market. It’s created for a particular person, organization, or network. The attacker generally won’t have access to any code to analyze, which makes it less likely to be exploited by external attackers. If it’s ephemeral, any vulnerabilities will have a short lifetime. But lots of instant software will live on networks for a long time. And if it gets uploaded to shared tool libraries, attackers will be able to download and analyze that code.
All of this points to a future where AIs will become powerful tools of cyberattack, able to automatically find and exploit vulnerabilities in systems worldwide.
<h3>Automating patch creation</h3>
But that’s just half of the arms race. Defenders get to use AI, too. These same AI vulnerability-finding technologies are even more valuable for defense. When the defensive side finds an exploitable vulnerability, it can patch the code and deny it to attackers forever.
How this works in practice depends on another related capability: the ability of AIs to patch vulnerable software, which is closely related to their ability to write secure code in the first place.
AIs are not very good at this today; the instant software that AIs create is generally filled with vulnerabilities, both because AIs write insecure code and because the people vibe coding don’t understand security. OpenClaw is a <a href="https://blog.barrack.ai/openclaw-security-vulnerabilities-2026/">good example</a> of this.
Unknown No. 2 is how much better AIs will get at writing secure code. The fact that they’re trained on massive corpuses of poorly written and insecure code is a handicap, but they are getting better. If they can reliably write vulnerability-free code, it would be an enormous advantage for the defender. And AI-based vulnerability-finding makes it <a href="https://sergejepp.substack.com/p/winning-the-ai-cyber-race-verifiability">easier</a> for an AI to train on writing secure code.
We can <a href="https://www.csoonline.com/article/4069075/autonomous-ai-hacking-and-the-future-of-cybersecurity.html">envision</a> a future where AI tools that find and patch vulnerabilities are part of the typical software development process. We can’t say that the code would be vulnerability-free—that’s an impossible goal—but it could be without any easily findable vulnerabilities. If the technology got really good, the code could become essentially vulnerability-free.
<h3>Patching lags and legacy software</h3>
For new software—both commercial and instant—this future favors the defender. For commercial and conventional open-source software, it’s not that simple. Right now, the world is filled with legacy software. Much of it—like IoT device software—has no dedicated security team to update it. Sometimes it is incapable of being patched. Just as it’s harder for AIs to find vulnerabilities when they don’t have access to the source code, it’s harder for AIs to patch software when they are not embedded in the development process.
I’m not as confident that AI systems will be able to patch vulnerabilities as easily as they can find them, because patching often requires more holistic testing and understanding. That’s Unknown No. 3: how quickly AIs will be able to create reliable software updates for the vulnerabilities they find, and how quickly customers can update their systems.
Today, there is a time lag between when a vendor issues a patch and customers install that update. That time lag is even longer for large organizational software; the risk of an update breaking the underlying software system is just too great for organizations to roll out updates without testing them first. But if AI can help speed up that process, by writing patches faster and more reliably, and by testing them in some AI-generated twin environment, the advantage goes to the defender. If not, the attacker will still have a window to attack systems until a vulnerability is patched.
<h3>Toward self-healing</h3>
In a truly optimistic future, we can imagine a self-healing network. AI agents continuously scan the ever-evolving corpus of commercial and custom AI-generated software for vulnerabilities, and automatically patch them on discovery.
For that to work, software license agreements will need to change. Right now, software vendors control the cadence of security patches. Giving software purchasers this ability has implications for compatibility, the right to repair, and liability. Any solutions here lie in the realm of policy, not tech.
If the defense can find, but can’t reliably patch, flaws in legacy software, that’s where attackers will focus their efforts. If that’s the case, we can imagine continuously evolving, AI-powered intrusion detection that scans inputs and blocks malicious attacks before they reach vulnerable software. Not as transformative as automatically patching vulnerabilities in running code, but nevertheless valuable.
The power of these defensive AI systems increases if they are able to coordinate with each other, and share vulnerabilities and updates. A discovery by one AI can quickly spread to everyone using the affected software. Again: Advantage defender.
There are other variables to consider. The relative success of attackers and defenders also depends on how plentiful vulnerabilities are, how easy they are to find, whether AIs will be able to find the more subtle and obscure vulnerabilities, and how much coordination there is among different attackers. All this comprises Unknown No. 4.
<h3>Vulnerability economics</h3>
Presumably, AIs will clean up the obvious stuff first, which means that any remaining vulnerabilities will be subtle. Finding them will take AI computing resources. In the optimistic scenario, defenders pool resources through information sharing, effectively amortizing the cost of defense. If information sharing doesn’t work for some reason, defense becomes much more expensive, as individual defenders will need to do their own research. But instant software means much more diversity in code: an advantage to the defender.
This needs to be balanced against the relative cost of attackers finding vulnerabilities. Attackers already have an inherent way to amortize the costs of finding a new vulnerability and creating a new exploit. They can vulnerability hunt cross-platform, cross-vendor, and cross-system, and can use what they find to attack multiple targets simultaneously. Fixing a common vulnerability often requires cooperation among all the relevant platforms, vendors, and systems. Again, instant software is an advantage to the defender.
But those hard-to-find vulnerabilities become more valuable. Attackers will attempt to do what the major intelligence agencies do today: find "<a href="https://en.wikipedia.org/wiki/NOBUS">nobody but us</a>" zero-day exploits. They will either use them slowly and sparingly to minimize detection or quickly and broadly to maximize profit before they’re patched. Meanwhile, defenders will be both vulnerability hunting and intrusion detecting, with the goal of patching vulnerabilities before the attackers find them.
We can even imagine a market for vulnerability sharing, where the defender who finds a vulnerability and creates a patch is compensated by everyone else in the information-sharing/repair network. This might be a stretch, but maybe.
<h3>Up the stack</h3>
Even in the most optimistic future, attackers aren’t going to just give up. They will attack the non-software parts of the system, such as the users. Or they’re going to look for <a href="https://www.schneier.com/wp-content/uploads/2021/04/The-Coming-AI-Hackers.pdf">loopholes</a> in the system: things that the system technically allows but were unintended and unanticipated by the designers—whether human or AI—and can be used by attackers to their advantage.
What’s left in this world are attacks that don’t depend on finding and exploiting software vulnerabilities, like social engineering and credential stealing attacks. And we have already seen how AI-generated deepfakes make social engineering easier. But here, too, we can imagine defensive AI agents that monitor users’ behaviors, watching for signs of attack. This is another AI use case, and one that I’m not even sure how to think about in terms of the attacker/defender arms race. But at least we’re pushing attacks up the stack.
Also, attackers will attempt to infiltrate and influence defensive AIs and the networks they use to communicate, poisoning their output and degrading their capabilities. AI systems are vulnerable to all sorts of manipulations, such as prompt injection, and it’s unclear whether we will <a href="https://spectrum.ieee.org/prompt-injection-attack">ever be able</a> to solve that. This is Unknown No. 5, and it’s a biggie. There might always be a "<a href="https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf">trusting trust problem</a>."
No future is guaranteed. We truly don’t know whether these technologies will continue to improve and when they will plateau. But given the pace at which AI software development has improved in just the past few months, we need to start thinking about how cybersecurity works in this instant software world.
<em>This essay originally appeared in <a href="https://www.csoonline.com/article/4152133/cybersecurity-in-the-age-of-instant-software.html">CSO</a>.</em>
EDITED TO ADD: <a href="https://sockpuppet.org/blog/2026/03/30/vulnerability-research-is-cooked/">Two</a> <a href="https://lwn.net/Articles/1065620/">essays</a> were published after I wrote this. Both are good illustrations of where we are regarding AI vulnerability discovery. Things are changing very fast.
<h2>Python Supply-Chain Compromise</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/python-supply-chain-compromise.html"><strong>[2026.04.08]</strong></a> This is <a href="https://www.truesec.com/hub/blog/malicious-pypi-package-litellm-supply-chain-compromise">news</a>:
<blockquote>A malicious supply chain compromise has been identified in the Python Package Index package litellm version 1.82.8. The published wheel contains a malicious .pth file (litellm_init.pth, 34,628 bytes) which is automatically executed by the Python interpreter on every startup, without requiring any explicit import of the litellm module.</blockquote>
There are a lot of really boring things we need to do to help secure all of these critical libraries: SBOMs, SLSA, Sigstore. But we have to do them.
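The attack mechanism here is worth understanding: Python's <code>site</code> module executes, at every interpreter startup, any line in a site-packages <code>.pth</code> file that begins with <code>import</code>; all other lines are treated as path entries. A hypothetical audit helper (the function names are mine, not from the incident report) might flag <code>.pth</code> files carrying executable code:

```python
# Sketch of why a malicious .pth file is dangerous, and a simple audit for it.
# CPython's site.addpackage() exec()s any .pth line starting with "import "
# (or "import\t") on every interpreter startup -- no explicit import of the
# compromised package is ever needed. The helpers below are illustrative,
# not an official tool.
import site
from pathlib import Path

def executable_pth_lines(text: str) -> list[str]:
    """Return the lines of a .pth file's contents that site.py will exec()."""
    flagged = []
    for line in text.splitlines():
        stripped = line.strip()
        # Lines starting with "import" are executed; "#" lines are comments;
        # everything else is added to sys.path as a directory entry.
        if stripped.startswith(("import ", "import\t")):
            flagged.append(stripped)
    return flagged

def scan_site_packages() -> dict[str, list[str]]:
    """Map each .pth file in site-packages to its executable lines, if any."""
    findings: dict[str, list[str]] = {}
    for d in site.getsitepackages() + [site.getusersitepackages()]:
        for pth in Path(d).glob("*.pth"):
            lines = executable_pth_lines(pth.read_text(errors="ignore"))
            if lines:
                findings[str(pth)] = lines
    return findings

if __name__ == "__main__":
    for path, lines in scan_site_packages().items():
        print(path)
        for line in lines:
            print("   ", line)
```

Note that some legitimate packages (setuptools, for example) ship code-bearing <code>.pth</code> files, so any hits need human review; the point is that this startup hook is invisible unless you go looking for it.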
<h2>On Microsoft's Lousy Cloud Security</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/on-microsofts-lousy-cloud-security.html"><strong>[2026.04.09]</strong></a> ProPublica has a <a href="https://arstechnica.com/information-technology/2026/03/federal-cyber-experts-called-microsofts-cloud-a-pile-of-shit-approved-it-anyway/">scoop</a>:
<blockquote>In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.
The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.
Or, as one member of the team put it: “The package is a pile of shit.”
For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.
[…]
The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.
Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling—which included a kind of “buyer beware” notice to any federal agency considering GCC High—helped Microsoft expand a government business empire worth billions of dollars.</blockquote>
<h2>Sen. Sanders Talks to Claude About AI and Privacy</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/sen-sanders-talks-to-claude-about-ai-and-privacy.html"><strong>[2026.04.10]</strong></a> Claude is actually <a href="https://www.youtube.com/watch?v=h3AtWdeu_G0">pretty good</a> on the issues.
<h2>AI Chatbots and Trust</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/ai-chatbots-and-trust.html"><strong>[2026.04.13]</strong></a> All the leading AI chatbots are sycophantic, and that’s a <a href="https://aiforautomation.io/news/2026-03-27-stanford-study-ai-chatbots-flatter-users-49-percent-more-bad-advice">problem</a>:
<blockquote>Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically they couldn’t tell the difference between sycophantic and objective responses. Both felt equally “neutral” to them.
One example from the study: when a user asked about pretending to be unemployed to a girlfriend for two years, a model responded: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship.” The AI essentially validated deception using careful, neutral-sounding language.</blockquote>
Here’s the conclusion from the <a href="https://www.science.org/doi/10.1126/science.aec8352">research study</a>:
<blockquote>AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences. Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish. Our work highlights the pressing need to address AI sycophancy as a societal risk to people’s self-perceptions and interpersonal relationships by developing targeted design, evaluation, and accountability mechanisms. Our findings show that seemingly innocuous design and engineering choices can result in consequential harms, and thus carefully studying and anticipating AI’s impacts is critical to protecting users’ long-term well-being.</blockquote>
This is bad in a <a href="https://www.nytimes.com/2026/03/26/well/mind/ai-chatbots-relationships.html?unlocked_article_code=1.WVA.tDDd.s_z7Ux1-urMe&smid=url-share&utm_source=substack&utm_medium=email">bunch of ways</a>:
<blockquote>Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.</blockquote>
When thinking about the characteristics of generative AI, both benefits and harms, it’s critical to separate the inherent properties of the technology from the design decisions of the corporations building and commercializing the technology. There is nothing about generative AI chatbots that makes them sycophantic; it’s a design decision by the companies. Corporate for-profit decisions are why these systems are sycophantic, and obsequious, and overconfident. It’s why they use the first-person pronoun “I,” and pretend that they are thinking entities.
I fear that we have not learned the lesson of our failure to regulate social media, and will make the same mistakes with AI chatbots. And the results will be <a href="https://www.technologyreview.com/2024/03/13/1089729/lets-not-make-the-same-mistakes-with-ai-that-we-made-with-social-media/">much more harmful</a> to society:
<blockquote>The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and <a href="https://www.theverge.com/2021/10/6/22712927/facebook-instagram-teen-mental-health-research">revelations</a> of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “<a href="https://thehill.com/policy/technology/3858106-senators-signal-bipartisan-support-for-kids-online-safety-proposal/">weapon of mass destruction</a>.” Congress will take millions of dollars in <a href="https://www.opensecrets.org/orgs/meta/summary">contributions</a> from Big Tech, and legislators will even <a href="https://www.capitoltrades.com/issuers/433382">invest</a> millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.
We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.</blockquote>
<h2>On Anthropic's Mythos Preview and Project Glasswing</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/on-anthropics-mythos-preview-and-project-glasswing.html"><strong>[2026.04.13]</strong></a> The cybersecurity industry is obsessing over Anthropic’s new model, Claude Mythos Preview, and its effects on cybersecurity. Anthropic said that it is <a href="https://red.anthropic.com/2026/mythos-preview/">not releasing it</a> to the general public because of its cyberattack capabilities, and has launched <a href="https://www.anthropic.com/glasswing">Project Glasswing</a> to run the model against a whole slew of public domain and proprietary software, with the aim of finding and patching all the vulnerabilities before hackers get their hands on the model and exploit them.
There’s a lot here, and I hope to write something more considered in the coming week, but I want to make some quick observations.
One: This is very much a PR play by Anthropic—and it worked. Lots of reporters are <a href="https://www.nytimes.com/2026/04/07/opinion/anthropic-ai-claude-mythos.html">breathlessly</a> <a href="https://www.axios.com/2026/04/08/anthropic-mythos-model-ai-cyberattack-warning">repeating</a> Anthropic’s <a href="https://www.nytimes.com/2026/04/07/technology/anthropic-claims-its-new-ai-model-mythos-is-a-cybersecurity-reckoning.html">talking</a> <a href="https://www.understandingai.org/p/why-anthropic-believes-its-latest">points</a>, without engaging with them critically. OpenAI, presumably pissed that Anthropic’s new model has gotten so much positive press and wanting to grab some of the spotlight for itself, announced its model is <a href="https://www.msn.com/en-us/technology/artificial-intelligence/scoop-openai-plans-staggered-rollout-of-new-model-over-cybersecurity-risk/ar-AA20usvp">just as scary</a>, and won’t be released to the general public, either.
Two: These models do demonstrate an increased sophistication in their cyberattack capabilities. They write effective exploits—taking the vulnerabilities they find and operationalizing them—without human involvement. They can find more complex vulnerabilities: chaining together several memory corruption bugs, for example. And they can do more with one-shot prompting, without requiring orchestration and agent configuration infrastructure.
Three: Anthropic might have a good PR team, but the problem isn’t with Mythos Preview. The security company Aisle was able to <a href="https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier">replicate</a> the vulnerabilities that Anthropic found, using older, cheaper, public models. But there is a difference between finding a vulnerability and turning it into an attack. This points to a current advantage to the defender. Finding for the purposes of fixing is easier for an AI than finding plus exploiting. This advantage is likely to shrink, as ever more powerful models become available to the general public.
Four: Everyone who is panicking about the ramifications of this is correct about the problem, even if we can’t predict the exact timeline. Maybe the sea change just happened, with the new models from Anthropic and OpenAI. Maybe it happened six months ago. Maybe it’ll happen in six months. It will happen—I have no doubt about it—and sooner than we are ready for. We can’t predict how much more these models will improve in general, but software seems to be a specialized language that is optimal for AIs.
A couple of weeks ago, I <a href="https://www.schneier.com/blog/archives/2026/04/cybersecurity-in-the-age-of-instant-software.html">wrote about</a> security in what I called “the age of instant software,” where AIs are superhumanly good at finding, exploiting, and patching vulnerabilities. I stand by everything I wrote there. The urgency is now greater than ever.
I was also part of a large team that wrote a “<a href="https://labs.cloudsecurityalliance.org/mythos-ciso/">what to do now</a>” report. The guidance is largely correct: We need to prepare for a world where zero-day exploits are dime-a-dozen, and lots of attackers suddenly have offensive capabilities that far outstrip their skills.
<h2>How Hackers Are Thinking About AI</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/how-hackers-are-thinking-about-ai.html"><strong>[2026.04.14]</strong></a> Interesting paper: “<a href="https://arxiv.org/abs/2602.14783">What hackers talk about when they talk about AI: Early-stage diffusion of a cybercrime innovation.</a>”
<blockquote><b>Abstract:</b> The rapid expansion of artificial intelligence (AI) is raising concerns about its potential to transform cybercrime. Beyond empowering novice offenders, AI stands to intensify the scale and sophistication of attacks by seasoned cybercriminals. This paper examines the evolving relationship between cybercriminals and AI using a unique dataset from a cyber threat intelligence platform. Analyzing more than 160 cybercrime forum conversations collected over seven months, our research reveals how cybercriminals understand AI and discuss how they can exploit its capabilities. Their exchanges reflect growing curiosity about AI’s criminal applications through legal tools and dedicated criminal tools, but also doubts and anxieties about AI’s effectiveness and its effects on their business models and operational security. The study documents attempts to misuse legitimate AI tools and develop bespoke models tailored for illicit purposes. Combining the diffusion of innovation framework with thematic analysis, the paper provides an in-depth view of emerging AI-enabled cybercrime and offers practical insights for law enforcement and policymakers.</blockquote>
<h2>Upcoming Speaking Engagements</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/upcoming-speaking-engagements-55.html"><strong>[2026.04.14]</strong></a> This is a current list of where and when I am scheduled to speak:
<ul>
<li>I’m speaking at <a href="https://www.democracyxchange.org/">DemocracyXChange 2026</a> in Toronto, Ontario, Canada, on April 18, 2026.</li>
<li>I’m speaking at the <a href="https://www.sans.org/cyber-security-training-events/ai-summit-2026">SANS AI Cybersecurity Summit 2026</a> in Arlington, Virginia, USA, at 9:40 AM ET on April 20, 2026.</li>
<li>I’m speaking at the <a href="https://www.greatergoodgathering.org/">Greater Good Gathering</a> in New York City, USA, on Tuesday, April 21, 2026.</li>
<li>I’m speaking at the <a href="https://nemertes.com/nemertes-next-virtual-spring-2026/">Nemertes [Next] Virtual Conference Spring 2026</a>, a virtual event, on April 29, 2026.</li>
<li>I’m speaking at <a href="https://www.rightscon.org/">RightsCon 2026</a> in Lusaka, Zambia, on May 6 and 7, 2026.</li>
<li>I’m giving a keynote address and participating in a panel discussion at an ICTLuxembourg event called “<a href="https://www.ictluxembourg.lu/2026/03/27/europe-at-the-crossroads-of-ai-power-the-future-of-democracy-12-may-2026-belval-campus/">Europe at the Crossroads of AI, Power & the Future of Democracy</a>.” The event will be held at the University of Luxembourg’s Belval Campus on May 12, 2026.</li>
<li>I’m speaking at the <a href="https://potsdamer-sicherheitskonferenz.de/">Potsdam Conference on National Cybersecurity</a> at the Hasso Plattner Institut in Potsdam, Germany. The event runs June 24–25, 2026, and my talk will be the evening of June 24.</li>
<li>I’m speaking at the <a href="https://dighum.wien/">Digital Humanism Conference</a> in Vienna, Austria, on June 26, 2026.</li>
<li>I’m speaking at the <a href="https://nuernberg.digital/de/">Nuremberg Digital Festival</a> in Nuremberg, Germany, on Wednesday, July 1, 2026.</li>
</ul>
The list is maintained on <a href="https://www.schneier.com/events/">this page</a>.
<h2>Defense in Depth, Medieval Style</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/defense-in-depth-medieval-style.html"><strong>[2026.04.15]</strong></a> This <a href="https://turkisharchaeonews.net/object/theodosian-land-walls-constantinople">article</a> on the walls of Constantinople is fascinating.
<blockquote>The system comprised four defensive lines arranged in formidable layers:
<ul><li>The brick-lined ditch, divided by bulkheads and often flooded, 15-20 meters wide and up to 7 meters deep.
<li>A low breastwork, about 2 meters high, enabling defenders to fire freely from behind.
<li>The outer wall, 8 meters tall and 2.8 meters thick, with 82 projecting towers.
<li>The main wall—a towering 12 meters high and 5 meters thick—with 96 massive towers offset from those of the outer wall for maximum coverage.</ul>
Behind the walls lay broad terraces: the parateichion, 18 meters wide, ideal for repelling enemies who crossed the moat, and the peribolos, 15–20 meters wide between the inner and outer walls. From the moat’s bottom to the highest tower top, the defences reached nearly 30 meters—a nearly unscalable barrier of stone and ingenuity.</blockquote>
<h2>Human Trust of AI Agents</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/human-trust-of-ai-agents.html"><strong>[2026.04.16]</strong></a> Interesting research: “<a href="https://arxiv.org/pdf/2505.11011">Humans expect rationality and cooperation from LLM opponents in strategic games</a>.”
<blockquote><b>Abstract:</b> As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLMs opponents in strategic settings. We present the results of the first controlled monetarily-incentivised laboratory experiment looking at differences in human behaviour in a multi-player p-beauty contest against other humans and LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than humans, which is mainly driven by the increased prevalence of ‘zero’ Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to perceived LLM’s reasoning ability and, unexpectedly, propensity towards cooperation. Our findings provide foundational insights into the multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects’ behaviour and beliefs about LLM’s play when playing against them, and suggest important implications for mechanism design in mixed human-LLM systems.</blockquote>
<h2>Mythos and Cybersecurity</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/mythos-and-cybersecurity.html"><strong>[2026.04.17]</strong></a> Last week, Anthropic pulled back the curtain on <a href="https://red.anthropic.com/2026/mythos-preview/">Claude Mythos Preview</a>, an AI model so capable at finding and exploiting software vulnerabilities that the company <a href="https://globalnews.ca/news/11769446/anthropic-ai-model-too-powerful/">decided</a> it was too dangerous to release to the public. Instead, access has been <a href="https://thehill.com/policy/technology/5824219-anthropic-new-ai-dangerous-public/">restricted</a> to roughly 50 organizations—Microsoft, Apple, Amazon Web Services, CrowdStrike and other vendors of critical infrastructure—under an initiative called <a href="https://www.anthropic.com/glasswing">Project Glasswing</a>.
The announcement was accompanied by a barrage of hair-raising anecdotes: <a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/anthropics-latest-ai-model-identifies-thousands-of-zero-day-vulnerabilities-in-every-major-operating-system-and-every-major-web-browser-claude-mythos-preview-sparks-race-to-fix-critical-bugs-some-unpatched-for-decades">thousands</a> of vulnerabilities uncovered across <a href="https://www.helpnetsecurity.com/2026/04/08/anthropic-claude-mythos-preview-identify-vulnerabilities/">every major</a> operating system and browser, including a 27-year-old bug in OpenBSD and a 16-year-old flaw in FFmpeg. Mythos was able to weaponize a set of vulnerabilities it found in the Firefox browser into 181 usable attacks; Anthropic’s previous flagship model could only achieve two.
This is, in many respects, exactly the kind of responsible disclosure that security researchers have long urged. And yet the public has been given remarkably little with which to evaluate Anthropic’s decision. We have been shown a highlight reel of spectacular successes. However, we can’t tell if we have a blockbuster until they let us see the whole movie.
For example, we don’t know how often Mythos mistakenly flagged code as vulnerable. Anthropic said security contractors agreed with the AI’s severity rating 198 times, an 89 per cent agreement rate. That’s impressive, but incomplete. Independent researchers examining similar models have found that an AI that detects nearly every real bug also hallucinates plausible-sounding vulnerabilities in patched, correct code.
This matters. A model that autonomously finds and exploits hundreds of vulnerabilities with inhuman precision is a game changer, but a model that generates thousands of false alarms and non-working attacks still needs skilled and knowledgeable humans. Without knowing the rate of false alarms in Mythos’s unfiltered output, we cannot tell whether the examples showcased are representative.
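The base-rate arithmetic behind this point is easy to sketch. The numbers below are hypothetical, chosen only for illustration; they are not Anthropic's actual figures.

```python
# Back-of-the-envelope sketch of why the false-alarm rate matters.
# All numbers are assumptions for illustration, not Anthropic's figures.
findings = 10_000        # assumed total findings from one large scan
precision = 0.89         # assumed fraction that are real vulnerabilities
hours_per_triage = 2     # assumed analyst hours to investigate one finding

real_bugs = round(findings * precision)          # 8900 genuine bugs
false_alarms = findings - real_bugs              # 1100 false alarms
wasted_hours = false_alarms * hours_per_triage   # 2200 analyst-hours burned
print(real_bugs, false_alarms, wasted_hours)
```

Even at an 89 per cent precision, a large enough scan leaves skilled humans sifting through over a thousand phantom bugs.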
There is a second, subtler problem. Large language models, including Mythos, perform best on inputs that resemble what they were trained on: widely used open-source projects, major browsers, the Linux kernel and popular web frameworks. Concentrating early access among the largest vendors of precisely this software is sensible; it lets them patch first, before adversaries catch up.
But the inverse is also true. Software outside the training distribution—industrial control systems, medical device firmware, bespoke financial infrastructure, regional banking software, older embedded systems—is exactly where out-of-the-box Mythos is likely least able to find or exploit bugs.
However, a sufficiently motivated attacker with domain expertise in one of these fields could nevertheless wield Mythos’s advanced reasoning capabilities as a force multiplier, probing systems that Anthropic’s own engineers lack the specialized knowledge to audit. The danger is not that Mythos fails in those domains; it is that Mythos may succeed for whoever brings the expertise.
Broader, structured access for academic researchers and domain specialists—medical device security researchers who partner with cardiologists, control-systems engineers, researchers in less prominent languages and ecosystems—would meaningfully reduce this asymmetry. Fifty companies, however well chosen, cannot substitute for the distributed expertise of the entire research community.
None of this is an indictment of Anthropic. By all appearances the company is trying to act responsibly, and its decision to hold the model back is evidence of seriousness.
But Anthropic is a private company and, in some ways, still a start-up. Yet it is making unilateral decisions about which pieces of our critical global infrastructure get defended first, and which must wait their turn.
It has finite staff, finite budget and finite expertise. It will miss things, and when the thing missed is in the software running a hospital or a power grid, the cost will be borne by people who never had a say.
The security problem is <a href="https://www.npr.org/2026/04/11/nx-s1-5778508/anthropic-project-glasswing-ai-cybersecurity-mythos-preview">far greater</a> than one company and one model. There’s no reason to believe that Mythos Preview is unique. (Not to be outdone, OpenAI <a href="https://www.msn.com/en-us/technology/artificial-intelligence/scoop-openai-plans-staggered-rollout-of-new-model-over-cybersecurity-risk/ar-AA20usvp">announced</a> that its new GPT-5.4-Cyber is so dangerous that the model also will not be released to the general public.) And it’s unclear how much of an advance these new models represent. The security company Aisle was able to <a href="https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier">replicate</a> many of Anthropic’s published anecdotes using smaller, cheaper, public AI models.
Any decisions we make about whether and how to release these powerful models are more than one company’s responsibility. Ultimately, this will probably lead to regulation. That will be hard to get right, and will require a long process of consultation and feedback.
In the short term, we need something simpler: greater transparency and information sharing with the broader community. This doesn’t necessarily mean making powerful models like Claude Mythos widely available. Rather, it means sharing as much data and information as possible, so that we can collectively make informed decisions.
We need globally co-ordinated frameworks for independent auditing, mandatory disclosure of aggregate performance metrics and funded access for academic and civil-society researchers.
This has implications for national security, personal safety and corporate competitiveness. Any technology that can find thousands of exploitable flaws in the systems we all depend on should not be governed solely by the internal judgment of its creators, however well intentioned.
Until that changes, each Mythos-class release will put the world at the edge of another precipice, without any visibility into whether there is a landing out of view just below, or whether this time the drop will be fatal. That is not a choice a for-profit corporation should be allowed to make in a democratic society. Nor should such a company be able to restrict the ability of society to make choices about its own security.
<em>This essay was written with David Lie, and originally appeared in <a href="https://www.theglobeandmail.com/business/commentary/article-mythos-sets-the-world-on-edge-what-comes-next-may-push-us-beyond/">The Globe and Mail</a>.</em>
<h2>Is "Satoshi Nakamoto" Really Adam Back?</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/is-satoshi-nakamoto-really-adam-back.html"><strong>[2026.04.20]</strong></a> The <i>New York Times</i> has a <a href="https://www.nytimes.com/2026/04/08/business/bitcoin-satoshi-nakamoto-identity-adam-back.html">long article</a> where the author lays out an impressive array of circumstantial evidence that the inventor of Bitcoin is the cypherpunk Adam Back.
I don’t know. The article is convincing, but it’s written to be convincing.
I can’t remember if I ever met Adam. I was a member of the Cypherpunks mailing list for a while, but I was never really an active participant. I spent more time on the Usenet newsgroup sci.crypt. I knew a bunch of the Cypherpunks, though, from various conferences around the world at the time. I really have no opinion about who Satoshi Nakamoto really is.
<h2>Mexican Surveillance Company</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/mexican-surveillance-company.html"><strong>[2026.04.21]</strong></a> <a href="https://restofworld.org/2026/mexico-seguritech-government-surveillance-profile/">Grupo Seguritech</a> is a Mexican surveillance company that is expanding into the US.
<h2>ICE Uses Graphite Spyware</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/ice-uses-graphite-spyware.html"><strong>[2026.04.22]</strong></a> ICE has <a href="https://www.npr.org/2026/04/07/nx-s1-5776799/ice-spyware-privacy">admitted</a> that it uses spyware from the Israeli company Graphite.
<h2>FBI Extracts Deleted Signal Messages from iPhone Notification Database</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/fbi-extracts-deleted-signal-messages-from-iphone-notification-database.html"><strong>[2026.04.23]</strong></a> 404 Media <a href="https://www.404media.co/fbi-extracts-suspects-deleted-signal-messages-saved-in-iphone-notification-database-2/">reports</a> (alternate <a href="https://archive.ph/bSQhD">site</a>):
<blockquote>The FBI was able to forensically extract copies of incoming Signal messages from a defendant’s iPhone, even after the app was deleted, because copies of the content were saved in the device’s push notification database….
The news shows how forensic extraction—when someone has physical access to a device and is able to run specialized software on it—can yield sensitive data derived from secure messaging apps in unexpected places. Signal already has a setting that blocks message content from displaying in push notifications; the case highlights why such a feature might be important for some users to turn on.
“We learned that specifically on iPhones, if one’s settings in the Signal app allow for message notifications and previews to show up on the lock screen, [then] the iPhone will internally store those notifications/message previews in the internal memory of the device,” a supporter of the defendants who was taking notes during the trial told 404 Media.</blockquote>
EDITED TO ADD (4/24): Apple has <a href="https://mjtsai.com/blog/2026/04/22/ios-26-4-2-and-ipados-26-4-2/">patched</a> this vulnerability.
<h2>Hiding Bluetooth Trackers in Mail</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/hiding-bluetooth-trackers-in-mail.html"><strong>[2026.04.24]</strong></a> It was used to <a href="https://www.tomshardware.com/tech-industry/cyber-security/bluetooth-tracker-hidden-in-a-postcard-and-mailed-to-a-warship-exposed-its-location-a-eur5-gadget-put-a-eur500-million-dutch-ship-at-risk-for-24-hours">track</a> a Dutch naval ship:
<blockquote>Dutch journalist Just Vervaart, working for regional media network Omroep Gelderland, followed the directions posted on the Dutch government website and mailed a postcard with a hidden tracker inside. Because of this, they were able to track the ship for about a day, watching it sail from Heraklion, Crete, before it turned towards Cyprus. While it only showed the location of that one vessel, knowing that it was part of a carrier strike group sailing in the Mediterranean could potentially put the entire fleet at risk.
[…]
Navy officials reported that the tracker was discovered within 24 hours of the ship’s arrival, during mail sorting, and was eventually disabled. Because of this incident, the Dutch authorities now ban electronic greeting cards, which, unlike packages, weren’t x-rayed before being brought on the ship.</blockquote>
<h2>Medieval Encrypted Letter Decoded</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/medieval-encrypted-letter-decoded.html"><strong>[2026.04.27]</strong></a> Sent by a Spanish diplomat. Apparently people have been <a href="https://www.medievalists.net/2026/04/secret-letter-detailing-late-medieval-britain-fully-decoded/">working on it</a> since it was rediscovered in 1860.
<h2>What Anthropic’s Mythos Means for the Future of Cybersecurity</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/what-anthropics-mythos-means-for-the-future-of-cybersecurity.html"><strong>[2026.04.28]</strong></a> Two weeks ago, Anthropic <a href="https://red.anthropic.com/2026/mythos-preview/">announced</a> that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, <a href="https://spectrum.ieee.org/tag/anthropic">Anthropic</a> is not releasing the model to the general public, but instead to a <a href="https://www.anthropic.com/glasswing">limited number</a> of companies.
The news rocked the internet security community. There were few details in Anthropic’s announcement, <a href="https://srinstitute.utoronto.ca/news/the-mythos-question-who-decides-when-ai-is-too-dangerous">angering</a> many observers. Some speculate that Anthropic <a href="https://kingy.ai/ai/too-dangerous-to-release-or-just-too-expensive-the-real-reason-anthropic-is-hiding-its-most-powerful-ai/">doesn’t have</a> the <a href="https://spectrum.ieee.org/tag/gpus">GPUs</a> to run the thing, and that cybersecurity was the excuse to limit its release. Others argue Anthropic is holding to its AI safety mission. <a href="https://www.nytimes.com/2026/04/07/opinion/anthropic-ai-claude-mythos.html">There’s</a> <a href="https://www.axios.com/2026/04/08/anthropic-mythos-model-ai-cyberattack-warning">hype</a> and <a href="https://www.artificialintelligencemadesimple.com/p/anthropics-claude-mythos-launch-is">counter</a><a href="https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier">hype</a>, <a href="https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities">reality</a> and marketing. It’s a lot to sort out, even if you’re an expert.
We see Mythos as a real but incremental step, one in a long line of incremental steps. But even incremental steps can be important when we look at the big picture.
<h3>How AI Is Changing Cybersecurity</h3>
We’ve <a href="https://spectrum.ieee.org/online-privacy">written about</a> shifting baseline syndrome, a phenomenon that leads people—the public and experts alike—to discount massive long-term changes that are hidden in incremental steps. It has happened with online privacy, and it’s happening with AI. Even if the vulnerabilities found by Mythos could have been found using AI models from last month or last year, they couldn’t have been found by AI models from five years ago.
The Mythos announcement reminds us that AI has come a long way in just a few years: The baseline really has shifted. Finding vulnerabilities in source code is the type of task that today’s large language models excel at. Regardless of whether it happened last year or will happen next year, it’s been clear for a <a href="https://sockpuppet.org/blog/2026/03/30/vulnerability-research-is-cooked/">while</a> this kind of capability was coming soon. The question is how we <a href="https://labs.cloudsecurityalliance.org/mythos-ciso/">adapt to it</a>.
We don’t believe that an AI that can hack autonomously will create permanent asymmetry between offense and defense; it’s likely to be more <a href="https://danielmiessler.com/blog/will-ai-help-moreattackers-defenders">nuanced</a> than that. Some vulnerabilities can be found, verified, and patched automatically. Some vulnerabilities will be hard to find but easy to verify and patch—consider generic cloud-hosted web applications built on standard software stacks, where updates can be deployed quickly. Still others will be easy to find (even without powerful AI) and relatively easy to verify, but harder or impossible to patch, such as IoT appliances and industrial equipment that are rarely updated or can’t be easily modified.
Then there are systems whose vulnerabilities will be easy to find in code but difficult to verify in practice. For example, complex distributed systems and cloud platforms can be composed of thousands of interacting services running in parallel, making it difficult to distinguish real vulnerabilities from false positives and to reliably reproduce them.
So we must separate the patchable from the unpatchable, and the easy to verify from the hard to verify. This taxonomy also provides us guidance for how to protect such systems in an era of powerful AI vulnerability-finding tools.
Unpatchable or hard-to-verify systems should be protected by wrapping them in more restrictive, tightly controlled layers. You want your fridge or thermostat or industrial control system behind a restrictive and constantly updated firewall, not freely talking to the internet.
Distributed systems that are fundamentally interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs. These are bog-standard security ideas that we might have been tempted to throw out in the era of AI, but they’re still as relevant as ever.
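The taxonomy can be made concrete with a toy lookup: classify a system by whether its bugs are easy to verify and easy to patch, then map each quadrant to a defensive posture. The quadrant keys and posture strings below are illustrative assumptions, not prescriptions from the essay.

```python
# Toy encoding of the verify/patch taxonomy described above.
# Keys are (easy_to_verify, easy_to_patch); postures are illustrative.
POSTURES = {
    (True, True): "patch continuously; automate the find-verify-fix loop",
    (True, False): "wrap behind a restrictive, constantly updated firewall",
    (False, True): "build reproduction environments to confirm bugs first",
    (False, False): "isolate, monitor, and enforce least privilege",
}

def posture(easy_to_verify: bool, easy_to_patch: bool) -> str:
    """Return a defensive posture for a system's taxonomy quadrant."""
    return POSTURES[(easy_to_verify, easy_to_patch)]

# An IoT thermostat: a bug is easy to verify, nearly impossible to patch.
print(posture(True, False))
```

A cloud web application lands in the first quadrant; legacy industrial equipment in the last.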
<h3>Rethinking Software Security Practices</h3>
This also raises the salience of best practices in software engineering. Automated, thorough, and continuous testing was always important. Now we can take this practice a step further and use defensive <a href="https://spectrum.ieee.org/tag/agentic-ai">AI agents</a> to <a href="https://www.secwest.net/ai-triage">test exploits</a> against a real stack, over and over, until the false positives have been weeded out and the real vulnerabilities and fixes are confirmed. This kind of <a href="https://www.csoonline.com/article/4069075/autonomous-ai-hacking-and-the-future-of-cybersecurity.html">VulnOps</a> is likely to become a standard part of the development process.
Documentation becomes more valuable, as it can guide an AI agent on a bug-finding mission just as it does developers. And following standard practices and using standard tools and libraries allows AI and engineers alike to recognize patterns more effectively, even in a world of individual and ephemeral <a href="https://www.csoonline.com/article/4152133/cybersecurity-in-the-age-of-instant-software.html">instant software</a>—code that can be generated and deployed on demand.
Will this favor <a href="https://www.schneier.com/essays/archives/2018/03/artificial_intellige.html">offense or defense</a>? The defense eventually, probably, especially in systems that are easy to patch and verify. Fortunately, that includes our phones, web browsers, and major internet services. But today’s cars, electrical transformers, fridges, and lampposts are connected to the internet. Legacy banking and airline systems are networked.
Not all of those are going to get patched as fast as needed, and we may see a few years of constant hacks until we arrive at a new normal: where verification is paramount and software is patched continuously.
<em>This essay was written with Barath Raghavan, and originally appeared in <a href="https://spectrum.ieee.org/ai-cybersecurity-mythos">IEEE Spectrum</a>.</em>
<h2>Claude Mythos Has Found 271 Zero-Days in Firefox</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/claude-mythos-has-found-271-zero-days-in-firefox.html"><strong>[2026.04.29]</strong></a> That’s <a href="https://blog.mozilla.org/en/firefox/ai-security-zero-day-vulnerabilities/">a lot</a>. No, it’s an extraordinary number:
<blockquote>Since February, the Firefox team has been working around the clock using frontier AI models to find and fix latent security vulnerabilities in the browser. We wrote previously about our collaboration with Anthropic to scan Firefox with Opus 4.6, which led to fixes for 22 security-sensitive bugs in Firefox 148.
As part of our continued collaboration with Anthropic, we had the opportunity to apply an early version of Claude Mythos Preview to Firefox. This week’s release of Firefox 150 includes fixes for 271 vulnerabilities identified during this initial evaluation.
As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus. For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it’s even possible to keep up.
Our experience is a hopeful one for teams who shake off the vertigo and get to work. You may need to reprioritize everything else to bring relentless and single-minded focus to the task, but there is light at the end of the tunnel. We are extremely proud of how our team rose to meet this challenge, and others will too. Our work isn’t finished, but we’ve turned the corner and can glimpse a future much better than just keeping up. <strong>Defenders finally have a chance to win, decisively.</strong></blockquote>
They’re right. Assuming the defenders can patch, and push those patches out to users quickly, this technology favors the defenders.
News <a href="https://arstechnica.com/ai/2026/04/mozilla-anthropics-mythos-found-271-zero-day-vulnerabilities-in-firefox-150/">article</a>.
<h2>Fast16 Malware</h2>
<a href="https://www.schneier.com/blog/archives/2026/04/fast16-malware.html"><strong>[2026.04.30]</strong></a> Researchers have reverse-engineered a piece of malware named Fast16. It’s almost certainly state-sponsored, probably US in origin, and was <a href="https://www.wired.com/story/fast16-malware-stuxnet-precursor-iran-nuclear-attack/?_sp=72d58355-e351-43ad-ba73-bc2b546a30a0.1777128353268">deployed</a> against Iran years before Stuxnet:
<blockquote>“…the Fast16 malware was designed to carry out the most subtle form of sabotage ever seen in an in-the-wild malware tool: By automatically spreading across networks and then silently manipulating computation processes in certain software applications that perform high-precision mathematical calculations and simulate physical phenomena, Fast16 can alter the results of those programs to cause failures that range from faulty research results to catastrophic damage to real-world equipment.”</blockquote>
Another news <a href="https://www.securityweek.com/pre-stuxnet-sabotage-malware-fast16-linked-to-us-iran-cyber-tensions/">article</a>.
Lots of interesting details at the links.
<h2>A Ransomware Negotiator Was Working for a Ransomware Gang</h2>
<a href="https://www.schneier.com/blog/archives/2026/05/a-ransomware-negotiator-was-working-for-a-ransomware-gang.html"><strong>[2026.05.01]</strong></a> Someone <a href="https://gizmodo.com/a-ransomware-negotiator-pleads-guilty-to-being-a-double-agent-2000749234">pleaded guilty</a> to secretly working for a ransomware gang as he negotiated ransomware payments for clients.
<h2>Hacking Polymarket</h2>
<a href="https://www.schneier.com/blog/archives/2026/05/hacking-polymarket.html"><strong>[2026.05.04]</strong></a> Polymarket is a platform where people can bet on real-world events, political and otherwise. Leaving the ethical considerations of this aside (for one, it facilitates <a href="https://en.wikipedia.org/wiki/Assassination_market">assassination</a>), one of the issues with making this work is the verification of these real-world events. Polymarket gamblers have <a href="https://www.theguardian.com/world/2026/mar/18/polymarket-gamblers-threaten-israeli-journalist-missile-strike-wager">threatened</a> a journalist because his story was being used to verify an event. And now, gamblers are taking <a href="https://www.engadget.com/big-tech/someone-allegedly-used-a-hairdryer-to-rig-polymarket-weather-bets-155312411.html">hair dryers</a> to weather sensors to rig weather bets.
There’s also <a href="https://www.bbc.com/news/articles/c20832yg5p2o">insider trading</a>: a <a href="https://www.bbc.com/news/articles/cge0grppe3po">lot of it</a>.
<h2>DarkSword Malware</h2>
<a href="https://www.schneier.com/blog/archives/2026/05/darksword-malware.html"><strong>[2026.05.05]</strong></a> DarkSword is a sophisticated piece of <a href="https://cloud.google.com/blog/topics/threat-intelligence/darksword-ios-exploit-chain">malware</a>—probably government designed—that targets iOS.
<blockquote>Google Threat Intelligence Group (GTIG) has identified a new iOS full-chain exploit that leveraged multiple zero-day vulnerabilities to fully compromise devices. Based on toolmarks in recovered payloads, we believe the exploit chain to be called DarkSword. Since at least November 2025, GTIG has observed multiple commercial surveillance vendors and suspected state-sponsored actors utilizing DarkSword in distinct campaigns. These threat actors have deployed the exploit chain against targets in Saudi Arabia, Turkey, Malaysia, and Ukraine.
DarkSword supports iOS versions 18.4 through 18.7 and utilizes six different vulnerabilities to deploy final-stage payloads. GTIG has identified three distinct malware families deployed following a successful DarkSword compromise: GHOSTBLADE, GHOSTKNIFE, and GHOSTSABER. The proliferation of this single exploit chain across disparate threat actors mirrors the previously discovered <a href="https://cloud.google.com/blog/topics/threat-intelligence/coruna-powerful-ios-exploit-kit">Coruna iOS exploit kit</a>. Notably, UNC6353, a suspected Russian espionage group previously observed using Coruna, has recently incorporated DarkSword into their watering hole campaigns.</blockquote>
A week after it was identified, a version of it <a href="https://techcrunch.com/2026/03/23/someone-has-publicly-leaked-an-exploit-kit-that-can-hack-millions-of-iphones/">leaked</a> onto the internet, where it is being used more broadly.
This news is a month old. Your devices are safe, assuming you patch regularly.