Essays in the Category "Computer and Information Security"

Page 1 of 32

What Anthropic’s Mythos Means for the Future of Cybersecurity

The new reality rewards systems that can be tested and patched continuously

  • Bruce Schneier and Barath Raghavan
  • IEEE Spectrum
  • April 26, 2026

Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, Anthropic is not releasing the model to the general public, but instead to a …

Mythos Sets the World on Edge. What Comes Next May Push Us Beyond

  • David Lie and Bruce Schneier
  • The Globe and Mail
  • April 14, 2026

Last week, Anthropic pulled back the curtain on Claude Mythos Preview, an AI model so capable at finding and exploiting software vulnerabilities that the company decided it was too dangerous to release to the public. Instead, access has been restricted to roughly 50 organizations—Microsoft, Apple, Amazon Web Services, CrowdStrike and other vendors of critical infrastructure—under an initiative called Project Glasswing.

The announcement was accompanied by a barrage of hair-raising anecdotes: thousands of vulnerabilities uncovered across every major…

Cybersecurity in the Age of Instant Software

As AI advances, the rise of instant, customized, and often ephemeral software solutions will alter the dynamics of vulnerability hunting and patching, and thus the battle between attackers and defenders.

  • CSO
  • April 2, 2026

AI is rapidly changing how software is written, deployed, and used. Trends point to a future where AIs can write custom software quickly and easily: "instant software." Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet, for example—and delete it when you’re done using it than to buy one commercially. Future systems could include a mix: both traditional long-term software and ephemeral instant software that is constantly being written, deployed, modified, and deleted.

AI is changing cybersecurity as well. In particular, AI systems are getting better at finding and patching vulnerabilities in code. This has implications for both attackers and defenders, depending on the ways this and related technologies improve…

The Promptware Kill Chain

Prompt injection attacks against AI models are not simple attacks; they are the first step of a kill chain. Understanding this gives defenders a set of countermeasures.

  • Bruce Schneier, Oleg Brodt, Elad Feldman and Ben Nassi
  • Lawfare
  • February 13, 2026

The promptware kill chain: initial access, privilege escalation, reconnaissance, persistence, command & control, lateral movement, action on objective

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques to embed instructions into inputs to LLM intended to perform malicious activity. This term suggests a simple, singular vulnerability. This framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a …

Why AI Keeps Falling for Prompt Injection Attacks

We can learn lessons about AI security at the drive-through

  • Bruce Schneier and Bharath Raghavan
  • IEEE Spectrum
  • January 21, 2026

Imagine you work at a drive-through restaurant. Someone drives up and says: “I’ll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer.” Would you hand over the money? Of course not. Yet this is what large language models (LLMs) do.

Prompt injection is a method of tricking LLMs into doing things they are normally prevented from doing. A user writes a prompt in a certain way, asking for system passwords or private data, or asking the LLM to perform forbidden instructions. The precise phrasing overrides the LLM’s …

Autonomous AI Hacking and the Future of Cybersecurity

AI agents are automating key parts of the attack chain, threatening to tip the scales completely in favor of cyber attackers unless new models of AI-assisted cyberdefense arise.

  • Heather Adkins, Gadi Evron, and Bruce Schneier
  • CSO
  • October 8, 2025

AI agents are now hacking computers. They’re getting better at all phases of cyberattacks, faster than most of us expected. They can chain together different aspects of a cyber operation, and hack autonomously, at computer speeds and scale. This is going to change everything.

Over the summer, hackers proved the concept, industry institutionalized it, and criminals operationalized it. In June, AI company XBOW took the top spot on HackerOne’s US leaderboard after submitting over 1,000 new vulnerabilities in just a few months. In August, the seven teams competing in DARPA’s AI Cyber Challenge …

The Return to Identity-First Architecture: How the Solid Protocol Restores Digital Agency

Solid brings different pieces together into a cohesive whole that enables the identity-first architecture we should have had all along.

  • Davi Ottenheimer and Bruce Schneier
  • The Inrupt Blog
  • July 22, 2025

The current state of digital identity is a mess. Your personal information is scattered across hundreds of locations: social media companies, IoT companies, government agencies, websites you have accounts on, and data brokers you’ve never heard of. These entities collect, store, and trade your data, often without your knowledge or consent. It’s both redundant and inconsistent. You have hundreds, maybe thousands, of fragmented digital profiles that often contain contradictory or logically impossible information. Each serves its own purpose, yet there is no central override and control to serve you—as the identity owner…

Testimony to the House Committee on Oversight and Government Reform

Hearing titled “The Federal Government in the Age of Artificial Intelligence”

  • House Committee on Oversight and Government Reform
  • June 4, 2025

View or Download the PDF

Data security breaches present significant dangers to everyone in the United States, from private citizens to corporations to government agencies to elected officials. Over the past four months, DOGE’s approach to data access has massively exacerbated the risk. DOGE employees have accessed and exfiltrated data from a variety of government agencies in order to, in part, train AI systems. Their actions have weakened security within the federal government by bypassing and disabling critical security measures, exporting sensitive data to environments with less security, and consolidating disparate data streams to create a massively attractive target for any adversary…

Why Take9 Won’t Improve Cybersecurity

The latest cybersecurity awareness campaign asks users to pause for nine seconds before clicking — but this approach misplaces responsibility and ignores the real problems of system design.

  • Bruce Schneier and Arun Vishwanath
  • Dark Reading
  • May 28, 2025

There’s a new cybersecurity awareness campaign: Take9. The idea is that people—you, me, everyone—should just pause for nine seconds and think more about the link they are planning to click on, the file they are planning to download, or whatever it is they are planning to share.

There’s a website—of course—and a video, well-produced and scary. But the campaign won’t do much to improve cybersecurity. The advice isn’t reasonable, it won’t make either individuals or nations appreciably safer, and it deflects blame from the real causes of our cyberspace insecurities…

How the Signal Chat Leak Makes the NSA’s Job Harder

Now that everyone uses the same communications technologies, security vulnerabilities are amplified.

  • Foreign Policy
  • March 28, 2025

US National Security Advisor Mike Waltz, who started the now-infamous group chat coordinating a US attack against the Yemen-based Houthis on March 15, is seemingly now suggesting that the secure messaging service Signal has security vulnerabilities.

"I didn’t see this loser in the group," Waltz told Fox News about Atlantic editor in chief Jeffrey Goldberg, whom Waltz invited to the chat. "Whether he did it deliberately or it happened in some other technical mean, is something we’re trying to figure out."

Waltz’s implication that Goldberg may have hacked his way in was followed by a …

1 2 3 32

Sidebar photo of Bruce Schneier by Joe MacInnis.