AI Found Twelve New Vulnerabilities in OpenSSL

The title of the post is “What AI Security Research Looks Like When It Works,” and I agree:

In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.

These weren’t trivial findings either. They included CVE-2025-15467, a stack buffer overflow in CMS message parsing that’s potentially remotely exploitable without valid key material, and for which exploits have been quickly developed online. OpenSSL rated it HIGH severity; NIST’s CVSS v3 score is 9.8 out of 10 (CRITICAL, an extremely rare severity rating for such projects). Three of the bugs had been present since 1998-2000, missed for over a quarter century by intense machine and human effort alike. One predated OpenSSL itself, inherited from Eric Young’s original SSLeay implementation in the 1990s. All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google’s.

In five of the twelve cases, our AI system directly proposed the patches that were accepted into the official release.

AI vulnerability finding is changing cybersecurity, faster than expected. This capability will be used by both offense and defense.

More.

Posted on February 18, 2026 at 7:03 AM

Comments

TimH February 18, 2026 7:47 AM

Looks to me like the AI system is checking exhaustively for known needles (flaws) in the haystack, whereas fuzzing is searching the haystack for needles, which is far less efficient.

Clive Robinson February 18, 2026 8:52 AM

@ TimH,

With regards,

“Looks to me like the AI system is checking exhaustively for known needles (flaws) in the haystack”

Think a little further “known” has an implication that,

1, As known, should not have gone in the haystack.
2, If there before “known” should have been patched out of the haystack.

Which means that you are looking for “unknown” and as there are oh so many of those… So,

“fuzzing is searching the haystack for needles, which is far less efficient”

Is in fact on average “the most efficient” way of finding new thus unknown flaws or vulnerabilities.

Clive Robinson February 18, 2026 10:08 AM

@ ALL,

I’ve explained some of the issues to do with finding vulnerabilities as either an attacker or defender, and the maths favours attackers not defenders.

Because attackers can spend their resources on “focused narrow in depth” searches, whilst defenders have to spend their often lesser resources on “unfocused wide in breadth” searches.

There are many ways you can optimize searches, from the mechanical search for “all instances” to the random “all areas” of fuzzing, each being in effect one end of the curve.

Thus modified techniques will outperform both.

A branch of thermodynamics called statistical mechanics is something that can tell us about what is an improved or more optimal search method.

Thus the aim is to take “Known Instances” of attack and form them into “Classes” of attack by abstracting out basic vulnerability features.

This changes a purely “random search” from basic fuzzing to something more directed. So the “drunkards walk” becomes in effect “directed”, enlarging the class in a spiral outward about a known class.

Think of it as a drunkard on a slope, ease of walking makes a general walk too much work so the drunkard tends to head down hill.

This is what some call a “directed drunkards walk”. As such, Brownian motion under the influence of the force of gravity creates a density profile that in effect causes fast moving particles to rise whilst the slow moving fall towards the point of highest gravitational attraction.

However this is a general trend based on statistical properties, not one that is specific to any one type of attack.

Moving the trend to accommodate other types of instances is thus generally more efficient than pure random “fuzzing”.

But pure random fuzzing will find new instances that are in a class of just one member (itself). So it creates new classes rather than new instances, which some would view as more beneficial.
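The class-directed search Clive describes can be sketched numerically. In the toy model below, the input space, the flaw locations, and the mutation width are all invented for illustration: a search that mutates outward around a known instance finds a nearby undiscovered instance of the same class orders of magnitude faster than uniform random sampling does.

```python
import random

rng = random.Random(2026)

SPACE = 1_000_000            # toy input space of a million candidate inputs
KNOWN_FLAW = 741_853         # an Instance already found and understood
NEW_FLAW = KNOWN_FLAW + 137  # an undiscovered Instance of the same Class

def uniform_fuzz(max_tries):
    """Pure random fuzzing: independent uniform draws from the whole haystack."""
    for i in range(1, max_tries + 1):
        if rng.randrange(SPACE) == NEW_FLAW:
            return i
    return None

def class_directed_fuzz(max_tries):
    """Directed search: mutate outward about the known Instance."""
    for i in range(1, max_tries + 1):
        candidate = KNOWN_FLAW + round(rng.gauss(0, 200))
        if candidate == NEW_FLAW:
            return i
    return None

# Directed search routinely succeeds within a few thousand tries; uniform
# sampling needs on the order of SPACE (a million) tries for the same hit.
print(class_directed_fuzz(50_000), uniform_fuzz(50_000))
```

The contrast is the point, not the exact numbers: the directed searcher spends nearly all its samples inside the neighbourhood of the known class, while the uniform searcher spreads them over the whole haystack.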

Clive Robinson February 18, 2026 12:34 PM

@ Dave, Bruce, ALL,

Today the Register has an article on the use of AI for Computer Security that some readers would consider an exercise in “face palming” by LLM users,

Your AI-generated password isn’t random, it just looks that way

Seemingly complex strings are actually highly predictable, crackable within hours

AI security company Irregular looked at Claude, ChatGPT, and Gemini, and found all three GenAI tools put forward seemingly strong passwords that were, in fact, easily guessable.

https://www.theregister.com/2026/02/18/generating_passwords_with_llms/

Not exactly surprising for those that have looked into “Current AI LLM and ML Systems”.

Arguably the “stochastic” element is not even very good just for basic “statistics”.

But that aside, of interest further down the article is,

‘Irregular also said there were no repeating characters in any of the 50 passwords, indicating they were not truly random.’

That is an “Oh so clear indicator” that somebody does not know/understand what they are talking about…

Which throws doubt on the reporting or further back on the journalistic “supply chain”.

Because it’s just not true as stated in the article (note the lack of direct quote marks in the article around it, which suggests that the journalist is the one at fault).
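Whether “no repeating characters in any of the 50 passwords” is evidence of non-randomness is a short birthday-problem calculation. A sketch, assuming (my numbers, not the article’s) 16-character passwords drawn uniformly from the 94 printable ASCII characters: roughly a quarter of truly random passwords contain no repeated character at all, so the absence of repeats in any one password proves nothing, though fifty repeat-free passwords in a row would be a different matter.

```python
from math import prod

ALPHABET = 94      # printable ASCII characters (assumed charset)
LENGTH = 16        # assumed password length
N_PASSWORDS = 50   # sample size reported in the article

# P(a single uniformly random password has no repeated character):
# the classic birthday-problem product (94/94) * (93/94) * ... * (79/94)
p_one = prod((ALPHABET - i) / ALPHABET for i in range(LENGTH))

# P(all fifty independent random passwords are repeat-free)
p_all = p_one ** N_PASSWORDS

print(f"one password repeat-free: {p_one:.3f}")
print(f"all fifty repeat-free:    {p_all:.1e}")
```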

Rontea February 18, 2026 1:24 PM

What AISLE and Claude Opus 4.6 have demonstrated is a perfect example of the dual-use nature of AI in security. Right now, this is a boon for defenders: the ability to find hundreds of vulnerabilities, including critical ones, before attackers can exploit them, is a major shift in the defensive landscape. But make no mistake—capabilities like this will inevitably be adopted by offensive actors as well. Vulnerability discovery at this scale changes the game for both sides. For the moment, defenders have the edge because they can integrate these tools into development pipelines and remediate issues before they reach the wild. The challenge is maintaining that advantage as adversaries catch up.

Dave February 18, 2026 4:11 PM

@Rontea

What do you mean, attackers “catching up”? This AI research shows how miserably far behind defenders have been.

FTA,

“All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google’s.”

Oh really, and how on earth do the researchers know that Google failed to find them? The more likely answer is that Google found them, kept them hidden, and forwarded them to the various three letter agencies in the USA. Which is entirely consistent with what we know about the relationship between Google and the American government.

Hence, trust. People were trusting that Google was operating in the public interest and this research suggests they were not. In the same way the password article linked to by Clive shows the current folly of trusting AI with passwords.

Rontea February 18, 2026 7:56 PM

@Dave

“This AI research shows how miserably far behind defenders have been.”

Look for the keyword advantage

Spellucci February 18, 2026 10:23 PM

@Rontea I agree being able to analyze source code is a boon for defenders. I believe this puts attackers at a disadvantage.

If both the defender and the attacker have access to the source code (open source like OpenSSL) they are in a race to find vulnerabilities. Once a vulnerability is resolved, it does little good for the attacker to know what that vulnerability used to be.

If the defender has access to the source code and the attacker does not (closed source) it will be much more difficult for the attacker to find vulnerabilities than it will be for the defender to do so.

Yes, there are ways of finding vulnerabilities other than analyzing the source code, like fuzzing, but analyzing the source code is a huge advantage.

lurker February 18, 2026 11:21 PM

@Spellucci
“it does little good for the attacker to know what that vulnerability used to be.”

Check the number of successful attacks that occur in the brief interval after patch release and before the victim gets round to patching. AI will find those victims quicker than you can say “Patch Tuesday.”

Clive Robinson February 19, 2026 4:29 AM

@ Spellucci, lurker, ALL,

With regards,

“If both the defender and the attacker have access to the source code (open source like OpenSSL) they are in a race to find vulnerabilities.”

Whilst true, the attacker still has the advantage in that they are looking for one point of vulnerability to drill down on and develop into an attack. That is they can concentrate their resources, backed by a lot of experience and pre-built exploit payloads and tools.

A defender however usually has fewer resources to use than an attacker, and has to spread them thin as their primary role is to “develop the code base”.

Worse they have to find as many potential points of attack as they can, and then spend resources on patching them out at the expense of developing the code…

All before an attacker develops a single vulnerability into a successful attack.

With the sheer number of vulnerabilities out there, it’s a game of probabilities which almost always favours the attacker not the defender.

It’s why we have “attack windows” after a vulnerability is discovered. The length of which is determined more by how the attackers behave than the defenders.

But put simply, with Open Source in Open Repositories, attackers can see what the primary defenders –the developers– are doing… If the attackers analyse the developer/defenders’ activity they will see where a vulnerability is being looked for, and turn it into an exploit before a patch is even available and before the users of the code have time to “patch and reconfigure”.

Which is why it gets complicated.

But that’s not the only issue to consider, you go on to say,

“Once a vulnerability is resolved, it does little good for the attacker to know what that vulnerability used to be.”

Actually far from true.

Vulnerabilities as I’ve frequently noted fall into

“Individual Instances of attack are always in Classes of attack types.”

With,

“The Instance being specific, the “Class” being general.”

Thus finding an original vulnerability gives you a set of Class identifiers to go seek out.

What I’ve generally not spoken about is that the “Class” is a form of “finger print” that reveals not just new vulnerabilities to look for, but also the developer’s style and thus weaknesses…

Current AI systems are “statistics exploiting engines” that are in effect split into two parts. The first is the DNN, which is general, and the second is RAG, which is specific.

For a specific “code base”, any vulnerabilities will be formed from features “general” to a language / tool chain, whilst the developer’s “fist” / style will provide more specific vulnerabilities to look for.

Thinking on this as to how it relates to vulnerability Classes and thus Instances gives the attackers several advantages over the developers/defenders.

It’s something I expect to see more frequently as the cost of using ML drops and can run on rented systems.

But also any DNN developed by defenders will become available to augment, as we’ve seen with the likes of DeepSeek and Meta etc. That is a DNN developed for vibe coding becomes the “general base” for finding vulnerabilities, with RAG giving specific patterns from code bases to see where developer attention is focused and any changes to their patterns…

But consider further the difference between crypt-analysis and traffic-analysis: one is for the specific, the other for the general… And because that “attack window” is always going to be there, even in Closed Source…

Clive Robinson February 19, 2026 4:58 AM

@ Kurtzwell, ALL,

With regards,

“The Singularity is at the horizon”

I’m aware that it’s probably a sarcastic comment but it won’t be for many others…

It’s the old notion of “take me to your leader” and even older “deities” that are both omnipresent and omniscient. Due to a potential failing in the human mind that seeks/fears something better than the individual is.

It has been and still is ruthlessly exploited by those of a non typical mind set that seek power and control over others, which is why we have the “King Game” and the “trusted advisor behind the throne”.

Remember Machiavelli, Cardinal Richelieu and those of the French Revolution, Stalin and similar?

Whilst the “King” is usually a “useful idiot” they,

1, Serve the advisor
2, Form the target that protects the advisor from the masses

It’s the same principle that George Orwell highlighted in his writings, especially the realisation that an essential element of the “King Game” is a “distant enemy” that has to be fought by all.

But in modern times “distance” does not have to be “geographic”. In effect all “isms” have a “distant enemy” orchestrated for the masses to rail against as it provides protection for the imposed leaders for their own failings and thus extra shielding for the advisors.

It’s a dynamic we are seeing play out very publicly currently, in more than one place.

A study of history tells you where this is likely to head off to as,

“The wheel tends to follow the rut in the road, as foot steps tend to trudge the trodden path, and water flows in gullies and rivers.”

Winter February 19, 2026 8:07 AM

@Ray Kurzweil

The Singularity is at the horizon

In my experience, The Singularity has all the features of The End of the Rainbow, including the famous Pot of Gold.

lurker February 19, 2026 1:24 PM

@ALL

There was a recent post by @Clive that seems to be lost in the fog, linking to a story that the singularity would happen on a Tuesday. That story was only speculating on A singularity, when human research, discussion, even speculation about AI would go asymptotic to infinity. Not THE singularity, when machine intelligence and physical ability exceed our ability to control them.

Just passin’ thru February 19, 2026 2:03 PM

Right now, AI is finding source code vulnerabilities. How long before it is used to find vulnerabilities in shipped executables?

I imagine that state actors are already doing this…

Whither Canada? February 19, 2026 2:45 PM

@Clive

Re: kings, advisors

Are you saying that the real power lies with the person behind the curtain and not with the anointed one?

I would appreciate more about the king game.

@all

AI is only doing what people already do. Are we actually ready to learn from what it exposes about others and what it innovates, both good and evil?

In this thread alone we are told a number of ways to look for vulns. Some are probably not widely known, since those that know would lose their advantage if their knowledge became common.

Androids and holograms were “persons” on Star Trek, but AI is property, and we bend over backwards to avoid prosecuting and penalizing owners of property for the bad things that happen when that property is used to do bad things.

Many of the axis concentration camps were built around IG Farben industrial facilities, initially as dorms for slave labour. What happened to all those industrial assets and their managers and owners after the war? Who blocked the prosecution of the corporate entities?

But what happens when your property has algorithmic agency?

Re updates and delays: maybe devices don’t have to be connected all the time? Maybe when they do reconnect they could first be forced to update before doing anything else?

Clive Robinson February 19, 2026 6:46 PM

@ lurker, ALL,

The paperwork says…

With regards,

“There was a recent post by @Clive that seems to be lost in the fog, linking to a story that the singularity would happen on a Tuesday.”

It’s on Cam Pedersen’s blog, and he’s an engineer[1] who pointed out that because people are talking of the singularity more and more frequently, you could “curve fit”[2] the point where the line goes to some measure of infinity, or a pole…

I came across his blog through finding this post,

https://campedersen.com/eigenpute

Called “The King and the Wizard” which is highly relevant to “AI Safety” and the likes of frameworks like the Ralph Loop and Gas Town that give AI “unbound resource agency”.

As he points out in his post “The Singularity will Occur on a Tuesday” from a few days back (10th of Feb),

“I collected five real metrics of AI progress, fit a hyperbolic model to each one independently, and found the one with genuine curvature toward a pole. The date has millisecond precision. There is a countdown.

(I am aware this is unhinged. We’re doing it anyway.)”

So “just for fun”…

Read the whole page at,

https://campedersen.com/singularity

If nothing else it makes the point that only one of the five curves is the one to use and why.

[1] According to a web page up on the web archive, he’s an engineer possessing a large red bushy beard and a bike, which he rides whilst eating burritos in the bay area and telling “dad jokes”…

https://web.archive.org/web/20180928034905/https://depict.com/pages/about

Make of that what you will.

[2] On a graph a point of interest can be defined in one of several ways. The one we get taught first is where two lines cross, and we get told about the “point of origin”, which is assumed to be where the X and Y axes of our graph cross, or where our data plot line/curve starts or crosses through. But another is where a curve becomes a specific gradient such as vertical or horizontal, or a cusp of some kind. These can be calculated in various ways, and Sir Isaac Newton gave us the “method of infinitesimals” we now call Calculus to find such points.
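The kind of curve fit Pedersen describes (“fit a hyperbolic model… with genuine curvature toward a pole”) can be sketched on synthetic data; the model form y = a/(t0 − t) and all the numbers below are illustrative assumptions, not his actual metrics. The trick is that 1/y is linear in t, so an ordinary least-squares line recovers the pole.

```python
import numpy as np

# Synthetic "metric" that blows up at t0 = 10 (both constants are invented)
t = np.arange(0.0, 9.0)          # observation times
a_true, t0_true = 5.0, 10.0
y = a_true / (t0_true - t)       # hyperbolic growth toward a pole

# y = a/(t0 - t)  =>  1/y = t0/a - t/a, i.e. a straight line in t
slope, intercept = np.polyfit(t, 1.0 / y, 1)
t0_est = -intercept / slope      # the pole is where 1/y crosses zero

print(f"estimated pole at t = {t0_est:.3f}")
```

On noiseless data the recovery is exact; on real, noisy series the estimated pole moves around a lot, which is part of why only one of Pedersen’s five curves was usable.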

Clive Robinson February 19, 2026 7:29 PM

@ Just passin’ thru, ALL,

With regards,

“Right now, AI is finding source code vulnerabilities. How long before it is used to find vulnerabilities in shipped executables?”

It already is, but you might not think of it as AI because it does not use “Current AI LLM and ML Systems” but the older “Expert Systems” and “Fuzzy Logic”.

But at its simplest we call it “fuzzing” or “directed fuzzing”.

The theory is,

“Designers or Developers don’t handle user input or business logic correctly and as a result a vulnerability occurs in a run time.”

It’s a reverse view of the “Halting Problem” in that a program is designed “not to halt but does” –ie abnormally– in the case of a service / server, or “should halt but does not” in the case of a filter / application. And all the implications that follow on from that.

Importantly fuzzing finds such failings in any part of the computing stack,

1, Legislation / Regulation
2, Standards
3, Protocols
4, Specifications
5, Implementations
6, Hardware from the ISA gap down to where quantum physics happens.

It’s a very generic “infinite number of monkeys” test tool based on probability. That is if there is a vulnerability of any kind then given infinite testers or infinite test time it will be found.

The fact a single fuzzing test type usually finds a vulnerability very quickly tells a lot about the ICT industry…
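A minimal version of that “monkeys on typewriters” test tool fits in a few lines. The toy parser and its planted flaw below are invented for illustration: the parser trusts a length byte, so a mutated length makes it read past the end of the buffer and “halt abnormally” (here an uncaught IndexError stands in for a crash).

```python
import random

def parse_msg(buf: bytes) -> bytes:
    """Toy length-prefixed parser with a planted flaw: it trusts the
    declared length, so a lying length byte reads past the buffer end."""
    n = buf[0]                          # declared body length
    body = buf[1:1 + n]
    if buf[1 + n] != sum(body) % 256:   # IndexError here when n lies
        raise ValueError("bad checksum")
    return body

def fuzz(target, seed: bytes, rounds: int = 5_000) -> list:
    """Dumb mutational fuzzer: flip random bytes in a valid seed and
    record every input that makes the target halt abnormally."""
    rng = random.Random(1234)
    crashes = []
    for _ in range(rounds):
        data = bytearray(seed)
        for _ in range(rng.randint(1, 3)):          # a few random mutations
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            target(bytes(data))
        except ValueError:
            pass                         # graceful rejection: halts as designed
        except IndexError:
            crashes.append(bytes(data))  # "should not halt abnormally, but does"
    return crashes

seed = b"\x04ABCD" + bytes([sum(b"ABCD") % 256])    # a well-formed message
crashes = fuzz(parse_msg, seed)
print(f"{len(crashes)} crashing inputs found")
```

Even this dumb mutator trips the flaw hundreds of times in a few thousand rounds, which is the point: a single fuzzing test type finding bugs quickly says more about the code under test than about the cleverness of the fuzzer.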

Put the fuzzing tool in a framework to run it automatically and have an AI parse the logs and you have your AI tool,

“to find vulnerabilities in shipped executables?”

To make it more useful put it in an AI agent inside a Ralph Loop within a Gas Town setup (but watch out for the amount of resources it will swallow up).

Celos February 22, 2026 5:58 AM

This works mostly for attackers, not defenders. The problem is coverage. LLM-type AI has absolutely no chance of finding all vulnerabilities, and finding “some” does not cut it. The whole thing is a bogus “success” metric, pushed by the LLM peddlers to keep the hype going. It works because most people do not understand how things actually work, and that what matters is the vulnerabilities left behind, not the ones found.

Here is a scenario: Some LLM finds vulnerabilities in some (obviously not very competently written) code. Then a few variations of the query and context are tried. It finds some more. They all get fixed and the code is rolled out. Small problem: The attackers can try many, many more variations and they have to get lucky just once. Oh, and LLM-type AI has already demonstrated that it can generate working attack code, even if that attack code is typically unreliable and insecure. Which does not matter one bit to an attacker.

All this does is make the arms-race faster and that benefits primarily the attackers.

What we still need, and have always needed in this space, is not a better capability to find vulnerabilities. That is not an approach that will ever be successful. What we need is code with low vulnerability counts and basically no practically useable vulnerabilities. But that requires competent engineering with ample redundancies, care, and an actual understanding of what secure code looks like. You know, approaches similar to what is done in any established engineering discipline. Which will have qualification requirements, regulations and liability. These are also approaches not accessible to “AI”, at least not with any type we have.

And hence amateur-hour in software creation continues while the damage is rising. I think we will have to wait another 100 years until people finally accept that making good software cannot be done on the cheap and will never be easy.
