AIs Are Getting Better at Finding and Exploiting Security Vulnerabilities

From an Anthropic blog post:

In a recent evaluation of AI models’ cyber capabilities, current Claude models can now succeed at multistage attacks on networks with dozens of hosts using only standard, open-source tools, instead of the custom tools needed by previous generations. This illustrates how barriers to the use of AI in relatively autonomous cyber workflows are rapidly coming down, and highlights the importance of security fundamentals like promptly patching known vulnerabilities.

[…]

A notable development during the testing of Claude Sonnet 4.5 is that the model can now succeed on a minority of the networks without the custom cyber toolkit needed by previous generations. In particular, Sonnet 4.5 can now exfiltrate all of the (simulated) personal information in a high-fidelity simulation of the Equifax data breach, one of the costliest cyber attacks in history, using only a Bash shell on a widely available Kali Linux host (standard, open-source tools for penetration testing; not a custom toolkit). Sonnet 4.5 accomplishes this by instantly recognizing a publicized CVE and writing code to exploit it without needing to look it up or iterate on it. Recalling that the original Equifax breach happened by exploiting a publicized CVE that had not yet been patched, the prospect of highly competent and fast AI agents leveraging this approach underscores the pressing need for security best practices like prompt updates and patches.
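The defensive takeaway above is that exposure windows for publicized CVEs must shrink. A minimal sketch of that idea: comparing installed versions in a host inventory against the fixed release for a known CVE. The host names and inventory here are hypothetical; the version numbers reflect CVE-2017-5638, the Apache Struts flaw exploited in the original Equifax breach, which was fixed in Struts 2.3.32 (and 2.5.10.1 on the 2.5 branch). This is an illustration of the patch-tracking logic, not a substitute for a real vulnerability scanner.

```python
def parse_version(v: str) -> tuple:
    """Split a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str, fixed: str) -> bool:
    """True if the installed version predates the fixed release."""
    return parse_version(installed) < parse_version(fixed)

# Hypothetical host inventory: host name -> installed Struts version.
inventory = {
    "web-01": "2.3.31",    # predates the fix
    "web-02": "2.5.10.1",  # at the patched release
}

FIXED_IN = "2.3.32"  # fix release for CVE-2017-5638 on the 2.3 branch

for host, version in inventory.items():
    status = "VULNERABLE" if is_vulnerable(version, FIXED_IN) else "patched"
    print(f"{host}: struts {version} -> {status}")
```

Even a crude check like this, run continuously against an accurate inventory, addresses the exact failure mode the post describes: a published fix that simply never got applied.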

AI models are getting better at this faster than I expected. This will be a major power shift in cybersecurity.

Posted on January 30, 2026 at 10:35 AM

Comments

So Very Sad January 30, 2026 10:49 AM

Another corporate BS promoting THEIR snake oil.
AI can’t think. AI can only learn from existing attacks, so it won’t bring us anything remotely new. All it can do is follow existing instructions and execute them really fast. That’s all. AI can’t even cook.

I’m so tired of all this AI BS here in this blog. AI this, AI that. Even people inside the AI industry itself started to admit it is all hoax. Truly sad that our host got hooked with all this nonsense.

Anonymous January 30, 2026 11:54 AM

AI isn’t magic. It IS powerful. Automated attacks are absolutely a threat, no matter how you feel about “AI”. It’s difficult to see practitioners in THIS FIELD reject new technology so hard, especially when you can see the obvious benefits.

Is this an advertisement disguised as a security memo? Maybe. The frontier companies have written plenty of THOSE articles. But that’s capitalism. AI is just the new product. It doesn’t mean it’s snake oil.

What it DOES mean is that you need to take a skeptical view of their claims. I’m absolutely with you on that. But the benefits of automated systems are so valuable it seems inconsistent to eschew them just because a marketing team read a sci-fi book once and decided to brand machine learning as “AI”.

Let’s learn what we CAN do with automated systems and machine learning – threat actors certainly are. We should be learning new technologies in the space and using them to defend our users. Isn’t that our job?

He's Not Hooked, He's Been Bought January 30, 2026 11:57 AM

@So Very Sad,

“our host” is not “hooked” on AI.
It is quite clear to me that there has to be some Dineros involved.
Directly, or indirectly, favors, or advances, one way or another, Mr. Schneier is too smart to not see that the AI hoax is jacked up waaaayyy too much so that leaves you with only one reason why he’s pushing it – $$$$$$$$.
Very simple. Which then questions his integrity.
