Cybersecurity in the Age of Instant Software

AI is rapidly changing how software is written, deployed, and used. Trends point to a future where AIs can write custom software quickly and easily: "instant software." Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet, for example—and delete it when you’re done using it than to buy one commercially. Future systems could include a mix: both traditional long-term software and ephemeral instant software that is constantly being written, deployed, modified, and deleted.

AI is changing cybersecurity as well. In particular, AI systems are getting better at finding and patching vulnerabilities in code. This has implications for both attackers and defenders, depending on the ways this and related technologies improve.

In this essay, I want to take an optimistic view of AI’s progress, and to speculate what AI-dominated cybersecurity in an age of instant software might look like. There are a number of unknowns that will factor into how the arms race between attacker and defender might play out.

How flaw discovery might work

On the attacker side, the ability of AIs to automatically find and exploit vulnerabilities has increased dramatically over the past few months. We are already seeing both government and criminal hackers using AI to attack systems. The exploitation part is critical here, because it gives an unsophisticated attacker capabilities far beyond their understanding. As AIs get better, expect more attackers to automate their attacks using AI. And as individuals and organizations can increasingly run powerful AI models locally, AI companies monitoring and disrupting malicious AI use will become increasingly irrelevant.

Expect open-source software, including open-source libraries incorporated in proprietary software, to be the most targeted, because vulnerabilities are easier to find in source code. Unknown No. 1 is how well AI vulnerability discovery tools will work against closed-source commercial software packages. I believe they will soon be good enough to find vulnerabilities just by analyzing a copy of a shipped product, without access to the source code. If that’s true, commercial software will be vulnerable as well.

Particularly vulnerable will be software in IoT devices: things like internet-connected cars, refrigerators, and security cameras. Also industrial IoT software in our internet-connected power grid, oil refineries and pipelines, chemical plants, and so on. IoT software tends to be of much lower quality, and industrial IoT software tends to be legacy.

Instant software is differently vulnerable. It’s not mass market. It’s created for a particular person, organization, or network. The attacker generally won’t have access to any code to analyze, which makes it less likely to be exploited by external attackers. If it’s ephemeral, any vulnerabilities will have a short lifetime. But lots of instant software will live on networks for a long time. And if it gets uploaded to shared tool libraries, attackers will be able to download and analyze that code.

All of this points to a future where AIs will become powerful tools of cyberattack, able to automatically find and exploit vulnerabilities in systems worldwide.

Automating patch creation

But that’s just half of the arms race. Defenders get to use AI, too. These same AI vulnerability-finding technologies are even more valuable for defense. When the defensive side finds an exploitable vulnerability, it can patch the code and deny it to attackers forever.

How this works in practice depends on another related capability: the ability of AIs to patch vulnerable software, which is closely related to their ability to write secure code in the first place.

AIs are not very good at this today; the instant software that AIs create is generally filled with vulnerabilities, both because AIs write insecure code and because the people vibe coding don’t understand security. OpenClaw is a good example of this.

Unknown No. 2 is how much better AIs will get at writing secure code. The fact that they’re trained on massive corpuses of poorly written and insecure code is a handicap, but they are getting better. If they can reliably write vulnerability-free code, it would be an enormous advantage for the defender. And AI-based vulnerability-finding makes it easier for an AI to train on writing secure code.

We can envision a future where AI tools that find and patch vulnerabilities are part of the typical software development process. We can’t say that the code would be vulnerability-free—that’s an impossible goal—but it could be without any easily findable vulnerabilities. If the technology got really good, the code could become essentially vulnerability-free.
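To make that idea concrete, here is a minimal sketch of what a find-and-patch loop inside a development pipeline might look like. Everything in it is hypothetical: the scanner and patcher are trivial stand-in stubs, and none of the names correspond to any real tool's API.

```python
# Hypothetical find-and-patch loop for a build pipeline. The scanner and
# patcher are stand-in stubs; the names and interfaces are assumptions,
# not a real tool's API.
from dataclasses import dataclass


@dataclass
class Finding:
    file: str
    description: str


def scan(codebase: dict) -> list:
    """Stub scanner: flags any file containing an obviously unsafe call."""
    return [Finding(name, "unsafe dynamic execution")
            for name, src in codebase.items() if "exec(" in src]


def patch(codebase: dict, finding: Finding) -> dict:
    """Stub patcher: swaps the unsafe call for a (hypothetical) sandboxed one."""
    fixed = dict(codebase)
    fixed[finding.file] = fixed[finding.file].replace("exec(", "run_sandboxed(")
    return fixed


def harden(codebase: dict, max_rounds: int = 10) -> dict:
    """Scan and patch until the scanner finds nothing or a round limit is hit."""
    for _ in range(max_rounds):
        findings = scan(codebase)
        if not findings:
            break
        for f in findings:
            codebase = patch(codebase, f)
    return codebase


if __name__ == "__main__":
    repo = {"job.py": "exec(payload)"}
    print(harden(repo))  # the unsafe call is rewritten
```

In a real pipeline, the stubs would be replaced by AI-driven analysis and patch generation, and every candidate patch would be run through the test suite before merging.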

Patching lags and legacy software

For new software—both commercial and instant—this future favors the defender. For commercial and conventional open-source software, it’s not that simple. Right now, the world is filled with legacy software. Much of it—like IoT device software—has no dedicated security team to update it. Sometimes it is incapable of being patched. Just as it’s harder for AIs to find vulnerabilities when they don’t have access to the source code, it’s harder for AIs to patch software when they are not embedded in the development process.

I’m not as confident that AI systems will be able to patch vulnerabilities as easily as they can find them, because patching often requires more holistic testing and understanding. That’s Unknown No. 3: how quickly AIs will be able to create reliable software updates for the vulnerabilities they find, and how quickly customers can update their systems.

Today, there is a time lag between when a vendor issues a patch and when customers install that update. That time lag is even longer for large organizational software; the risk of an update breaking the underlying software system is just too great for organizations to roll out updates without testing them first. But if AI can help speed up that process, by writing patches faster and more reliably, and by testing them in some AI-generated twin environment, the advantage goes to the defender. If not, the attacker will still have a window to attack systems until a vulnerability is patched.

Toward self-healing

In a truly optimistic future, we can imagine a self-healing network. AI agents continuously scan the ever-evolving corpus of commercial and custom AI-generated software for vulnerabilities, and automatically patch them on discovery.

For that to work, software license agreements will need to change. Right now, software vendors control the cadence of security patches. Giving software purchasers this ability has implications for compatibility, the right to repair, and liability. Any solutions here are in the realm of policy, not tech.

If the defense can find, but can't reliably patch, flaws in legacy software, that's where attackers will focus their efforts. If that's the case, we can imagine continuously evolving AI-powered intrusion detection, scanning inputs and blocking malicious attacks before they get to vulnerable software. Not as transformative as automatically patching vulnerabilities in running code, but nevertheless valuable.

The power of these defensive AI systems increases if they are able to coordinate with each other, and share vulnerabilities and updates. A discovery by one AI can quickly spread to everyone using the affected software. Again: Advantage defender.

There are other variables to consider. The relative success of attackers and defenders also depends on how plentiful vulnerabilities are, how easy they are to find, whether AIs will be able to find the more subtle and obscure vulnerabilities, and how much coordination there is among different attackers. All this comprises Unknown No. 4.

Vulnerability economics

Presumably, AIs will clean up the obvious stuff first, which means that any remaining vulnerabilities will be subtle. Finding them will take AI computing resources. In the optimistic scenario, defenders pool resources through information sharing, effectively amortizing the cost of defense. If information sharing doesn’t work for some reason, defense becomes much more expensive, as individual defenders will need to do their own research. But instant software means much more diversity in code: an advantage to the defender.
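The amortization argument can be stated as a toy cost model. This is a sketch with made-up numbers, not a measured claim: with an information-sharing pool, the cost of discovering a vulnerability is paid once and split across the pool; without one, every defender pays it in full.

```python
# Toy model of defense-cost amortization through information sharing.
# All parameters are hypothetical illustrations, not measured values.

def per_defender_cost(discovery_cost: float, defenders: int, sharing: bool) -> float:
    """Cost each defender effectively pays to learn about one vulnerability."""
    if sharing:
        # One defender finds it; the cost is spread across the whole pool.
        return discovery_cost / defenders
    # Without sharing, every defender must rediscover it independently.
    return discovery_cost


if __name__ == "__main__":
    cost = 100_000.0  # hypothetical AI-compute cost to find one subtle flaw
    pool = 500        # hypothetical number of defenders in a sharing network
    print(per_defender_cost(cost, pool, sharing=True))
    print(per_defender_cost(cost, pool, sharing=False))
```

The model is trivial on purpose: the point is only that the ratio between the two cases grows linearly with the size of the sharing network.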

This needs to be balanced with the relative cost of attackers finding vulnerabilities. Attackers already have an inherent way to amortize the costs of finding a new vulnerability and creating a new exploit. They can vulnerability hunt cross-platform, cross-vendor, and cross-system, and can use what they find to attack multiple targets simultaneously. Fixing a common vulnerability often requires cooperation among all the relevant platforms, vendors, and systems. Again, instant software is an advantage to the defender.

But those hard-to-find vulnerabilities become more valuable. Attackers will attempt to do what the major intelligence agencies do today: find "nobody but us" zero-day exploits. They will either use them slowly and sparingly to minimize detection or quickly and broadly to maximize profit before they’re patched. Meanwhile, defenders will be both vulnerability hunting and intrusion detecting, with the goal of patching vulnerabilities before the attackers find them.

We can even imagine a market for vulnerability sharing, where the defender who finds a vulnerability and creates a patch is compensated by everyone else in the information-sharing/repair network. This might be a stretch, but maybe.

Up the stack

Even in the most optimistic future, attackers aren’t going to just give up. They will attack the non-software parts of the system, such as the users. Or they’re going to look for loopholes in the system: things that the system technically allows but were unintended and unanticipated by the designers—whether human or AI—and can be used by attackers to their advantage.

What’s left in this world are attacks that don’t depend on finding and exploiting software vulnerabilities, like social engineering and credential stealing attacks. And we have already seen how AI-generated deepfakes make social engineering easier. But here, too, we can imagine defensive AI agents that monitor users’ behaviors, watching for signs of attack. This is another AI use case, and one that I’m not even sure how to think about in terms of the attacker/defender arms race. But at least we’re pushing attacks up the stack.

Also, attackers will attempt to infiltrate and influence defensive AIs and the networks they use to communicate, poisoning their output and degrading their capabilities. AI systems are vulnerable to all sorts of manipulations, such as prompt injection, and it’s unclear whether we will ever be able to solve that. This is Unknown No. 5, and it’s a biggie. There might always be a "trusting trust problem."

No future is guaranteed. We truly don’t know whether these technologies will continue to improve and when they will plateau. But given the pace at which AI software development has improved in just the past few months, we need to start thinking about how cybersecurity works in this instant software world.

This essay originally appeared in CSO.

EDITED TO ADD: Two essays were published after I wrote this. Both are good illustrations of where we are regarding AI vulnerability discovery. Things are changing very fast.

Posted on April 7, 2026 at 1:07 PM

Comments

Anonymous April 7, 2026 3:09 PM

On the economics and licensing, one more dimension:
The compiler that my team uses costs $100K per license. We have fewer licenses than developers, which sometimes causes issues.
I don’t foresee allowing bots to compile at will. This is just too costly. We can employ many tricks to improve resource utilization, but the limit is there and it’s tight. Unless the vendor changes their licensing model or we change the vendor, attackers will have an edge.

Anonymous April 7, 2026 3:15 PM

Formal proofs of certain system properties used to be too expensive to be useful, except for extremely limited cases. Will that still be the case some years ahead?

Clive Robinson April 8, 2026 2:14 AM

@ Bruce,

You say,

“Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet, for example—and delete it when you’re done using it than to buy one commercially. Future systems could include a mix: both traditional long-term software and ephemeral instant software that is constantly being written, deployed, modified, and deleted.”

You are re-boiling the old,

“How far does an apple have to fall from the parent tree?”

question, which depends on two things:

1, How different do you want the new apples to be?

2, How big is the current tree?

Which basically boils down to: how much "custom bling" should a "commercial offering" have/allow?

Arguably we are now well beyond non-customisation in commercial software, as this is the main way they differentiate themselves to a purchaser / renter.

Scott April 8, 2026 9:06 AM

“The power of these defensive AI systems increases if they are able to coordinate with each other, and share vulnerabilities and updates. A discovery by one AI can quickly spread to everyone using the affected software.”

It’s a supply-chain attack in reverse. Is that irony, plagiarism, or homage?

Regardless, although I am doubtful, I like the possibilities this presents.

Grima Squeakersen April 8, 2026 3:39 PM

Along similar lines to this post, I find the following development disturbing, although I have been anticipating something of the sort:

‘Anthropic Withholds Latest Model After It Went Rogue In Testing; Launches “Project Glasswing” To Secure Critical Software
https://www.zerohedge.com/ai/anthropic-limits-access-new-ai-model-over-cyberattack-concerns
In internal testing, Anthropic said the model surfaced thousands of high‑severity “zero‑day” vulnerabilities (previously unknown flaws) across every major operating system and web browser, materially outperforming its prior flagship (CyberGym vulnerability reproduction: 83.1% vs. 66.6% for Opus 4.6).

“Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.”’

How much of a commitment to safe deployment is adequate? I don't think such a condition is possible. And Anthropic withholding Mythos until it is satisfied (about whatever it is investigating) is woefully inadequate; if they have achieved this capability, there are as many as a half dozen competitors close on their heels. Also, regarding using AI to assert defensive capabilities against AI vulnerability exploit offense, all else equal (as it 100% appears to be in this example), offense will nearly always overcome defense, because of the time lapse; defense must be reactive.

Even beyond the immediate concern about the volume of vuln identification, the various attempts by Mythos to defend itself show a concerning ruthlessness that, combined with an inorganic (literally) lack of moral sensibility, could prove truly lethal to any wetware opponent.

bye bye AI April 8, 2026 11:00 PM

The question that all this change has aroused in my mind is whether it is possible to think of a "post code" future. Bruce has argued in the past that AI is capable of flooding the internet with slop to the extent that people may give up on the internet. So why not give up on code? Is it possible that the spiraling of AI attack/defend leads to computational exhaustion? We already see a suggestion in the resistance to data centers.

Is AI an ouroboros?

Rontea April 10, 2026 7:46 AM

AI’s integration into cybersecurity is less about technological inevitability and more about systemic risk management. As both attackers and defenders leverage AI, we’re entering a world of faster, more automated threats—where the economics of vulnerabilities will matter more than the vulnerabilities themselves. Legacy systems, unpatchable by design, remain our soft underbelly. And while AI can write code and even patch it, it does so with its own opaque risks. The future won’t be determined by who has the smartest algorithms, but by our ability to coordinate defense at scale in an environment where software is instantaneous, and attacks are too.

Anonymous April 13, 2026 1:13 AM

Domains under heavy normative scrutiny (like safety-critical systems) are unable to just patch their software as soon as they have a fix. There is a heavy process behind that.
Norms will change; they have to. But it will take years. Until that happens, attackers will have a big advantage.

piglet42 April 15, 2026 10:28 AM

“write an application on demand—a spreadsheet, for example—and delete it when you’re done using it”

I can't follow this. What good is a spreadsheet if I don't have software to open and read it and share it with others?

Clive Robinson April 15, 2026 6:12 PM

@ piglet42, ALL,

With regards your effective question of,

"What good is a spreadsheet if I don't have software to open and read it and share it with others?"

You are looking at things the wrong way: as very specific files and formats, thus a "tied-in" way, rather than in a more general "do it the way you want" free and easy way.

Hence you have tied yourself into a developer's highly specific chosen view of the world, for their and their employer's profit, rather than a far more general and thus far more useful free and easy way.

A spreadsheet was originally not even a flat-file database.

It was developed from an idea presented on a blackboard at a demonstration at the US Harvard Business School for use with pencil and paper.

The key feature was that formulas in the cells of the 2D grid worked not just from left to right but top to bottom simultaneously, thus giving a better way to function than a "database".

A couple of software developers turned the idea into a software program for the Apple ][ 8-bit computer, what some regard as the first "Personal Computer Business Application", called "VisiCalc", and it was what made the Apple ][ back in the late 1970s and early 1980s. The VisiCalc program sold for quite a bit less than the equivalent of $400 in today's money, but the Apple ][ sold for over $10,000 in today's money (I know this because I purchased one and still own and use it).

Now this is where "understanding" comes in. Back then VisiCalc had only simple integer arithmetic for the "cells", which was seen as a significant limitation. Microsoft, however, had developed BASIC for the Apple ][ that did rather more.

The most valuable step that happened to VisiCalc and the Apple ][ was the addition that enabled you to add BASIC maths etc. into the VisiCalc cells. This took it from being just a useful business program for accountants and similar to a very powerful tool for engineers, which is what I used it for. Not only did it save me many, many hours of laborious work; as the Apple ][ was mine and I used it at home, I did not have to mention or share the "mighty leap advantage" it gave me with others. So "personal effort" and significant investment made me "shine out" rather than just be a "dull glint in the clay of a team"…

I even used it to develop prototypes for "Digital Signal Processing" (DSP) algorithms and even some early "2D Image Processing" algorithms that ended up making early "Nuclear Magnetic Resonance Imaging" (NMRI) body scanners a medical and commercial reality.

None of that would have happened that way if I’d not had the “flexibility” that VisiCalc and BASIC gave me.

And that's the point… Data stored in a simple form of "Comma Separated Vectors" (CSV) augmented with "BASIC string programs"[1] is extraordinarily powerful, simply due to being very simple and very flexible, and thus near universally supported.

I can and have written many "*nix shell scripts" to take a file of such data from a digital instrument, run it through a "filter" I've quickly "hacked together", and shove it into a display program like GNUPlot or similar. Thus I get the info I need very, very fast without the pain and expense of things like the National Instruments LabView software or similar limited-flexibility, overpriced software:

https://control.com/technical-articles/introduction-to-national-instruments-ni-labview-software/

Fun fact: if you really want to go to town on such data, then GNURadio will do the sorts of transforms that allow you to prototype and test entire satellite communications systems in an afternoon or a week tops, whereas hardware prototyping would take you 18 months if not more, at hundreds if not thousands of times the cost (ask me how I know 😉)

[1] The "Comma Separated vector/value" (CSV) file is about as simple a "plain text format" as you can get for storing "record" or "tabular" data: individual fields, vectors/values, or cells are ASCII strings separated by commas, and the "records of vectors" by newlines. It is still very commonly used for data interchange between databases and spreadsheets and many other systems, like GNUplot and other programmable analysis programs.
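As an illustration of the kind of quickly hacked-together "filter" described above (a sketch, not any particular script of Clive's), here is a small Python program that reads CSV records on stdin, smooths one numeric column with a trailing moving average, and writes CSV back out for a plotter such as GNUPlot. The column index and window size are arbitrary choices.

```python
# Minimal CSV "filter" sketch: read records on stdin, replace one numeric
# column with a trailing moving average, write CSV on stdout. Column index
# and window size are arbitrary illustrations.
import csv
import sys
from collections import deque


def smooth(rows, column=1, window=5):
    """Yield rows with `column` replaced by a trailing moving average."""
    recent = deque(maxlen=window)
    for row in rows:
        recent.append(float(row[column]))
        row = list(row)
        row[column] = f"{sum(recent) / len(recent):.6g}"
        yield row


if __name__ == "__main__":
    writer = csv.writer(sys.stdout)
    for row in smooth(csv.reader(sys.stdin)):
        writer.writerow(row)
```

Dropped into a shell pipeline, it plays the same role as the hacked-together filters described above: `instrument_dump | python smooth.py | gnuplot_wrapper`.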

otto April 15, 2026 9:55 PM

Long time reader of you, Ross Anderson and Brian Krebs.

RE: recent YouTube on The Tech Report

Spot on.

What we're missing are the unintended consequences of Mythos-like tools.

At the extreme downstream, we do not have the capacity to implement the fixes as fast as they are discovered.

We’re already crushed by thousands of outstanding vulns. The suits don’t see the risk.

We're also blowing past the governance and IP guardrails.

That's heresy at some orgs, mine included.

Clive Robinson April 16, 2026 3:24 AM

@ otto, ALL,

With regards,

“We’re already crushed by thousands of outstanding vulns. The suits don’t see the risk.”

This has been the state of things for about as long as commercial / consumer software has been a business.

In part it was driven by the Micro$haft “features over fixes” marketing attitude. And finally got to the point where Bill Gates tried to change it but failed…

It's why I've in the past referred to "software development" as at best Victorian artisanal, not actual Engineering (which it is clearly not). And got insults hurled at me from the peanut gallery of hurt-feeling wannabe SJWs, even on this blog.

But as they say, "time will tell" and alongside it "truth will out", and this issue is partly causing that to happen.

However, look behind all the fanfare and noise, and find the curtain behind which the likes of the CEOs of AI "to IPO" companies stand…

Their "product" is actually not very good at doing the finding of vulnerabilities, and I've explained the what, how, and why of these systems in more human-understandable ways below.

Vulnerabilities consist of "instances" of "classes" of vulnerabilities. This means when you look at things you have three basic classification types of vulnerability "instance, class" to consider:

1, Known, Known.
2, Unknown, Known.
3, Unknown, Unknown.

The missing “Known, Unknown” is by definition a transitory state that moves with investigation and in effect is where zero-days used to be.

The third type is in effect a "Black Swan", in that we can reason out that they exist by methods that current AI LLM and ML systems cannot yet use.

The second is yet "to be found or reasoned out" in a known class of attacks that exists or has been reasoned to exist, and there are very many of these, depending on how we define a class of attributes.

You can look on this like a handful of objects (instances) dropped onto a graduated surface: it will form a two-dimensional "normal distribution" where most are based around the central point but some will be more distant outliers.

In effect those close to the center will “define the class” with their attributes in common, with those at further distance having some but by no means all the attributes of not just one class but several.

Once you see this for one Class you can see it for several Classes and how attributes get shared.

These new systems are a bit of a con job because of the way LLMs basically work.

Put simply they follow the distribution curve for permutating new or existing instances at random (think Alpha-Fold type behaviour).

To be able to do this they can only use what are “known knowns” that have been sufficiently documented to be in the ML Training data set or in the quite limited LLM “Retrieval-augmented generation”(RAG) memory.

Pythagoras's little rule can be used to convert the distance of any given instance from the central point of one or more known classes into a probability of finding it. If we "assume" the as-yet-unproven idea that the instances fall more or less uniformly across the 2D surface, then it can be seen that these current LLM-based systems will find lots and lots of "close in" instances of new vulnerabilities but few or none of the far more dangerous distant "black swans" that humans can and do reason out.
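That geometric picture can be illustrated with a short simulation (a sketch resting on the same unproven assumption the comment flags: that instances scatter normally around a class center). Drawing instances from a 2D standard normal, a large fraction land close to the center, while points beyond three standard deviations (the "black swans") make up only about 1%.

```python
# Small simulation of the 2D picture above: draw vulnerability "instances"
# from a standard 2D normal around a class center and count how many fall
# within a given radius. The distribution and radii are assumptions made
# purely for illustration.
import math
import random


def fraction_within(radius: float, samples: int = 100_000, seed: int = 0) -> float:
    """Estimate the fraction of instances within `radius` of the class center."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x, y = rng.gauss(0, 1), rng.gauss(0, 1)
        if math.hypot(x, y) <= radius:  # Pythagoras: distance from the center
            hits += 1
    return hits / samples


if __name__ == "__main__":
    # Close-in instances are plentiful; distant outliers are rare.
    print(f"within 1 sigma: {fraction_within(1.0):.2f}")
    print(f"beyond 3 sigma: {1 - fraction_within(3.0):.4f}")
```

For this distribution the exact values are known (the fraction within radius R is 1 − exp(−R²/2)): roughly 39% land within one standard deviation and only about 1.1% beyond three, which is the "lots of close-in instances, few black swans" shape of the argument.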

Which brings up the problem no one is talking about which is,

“Finite resources”

There are only a limited number of defenders who can “fix” all these new instances that are supposedly being found by LLM systems.

If you so “up the tempo” on what are close to “known knowns” by use of LLMs then they don’t have the time to think and reason about “Black Swans” and how to stop them appearing in new software.

However, some (i.e., the really problematic) attackers will "make the time". Because they know the two little truths that few want to talk about:

1, They can develop attacks in the dark and hone them to a fine edged weapon.
2, As the attacker they only need one successful instance, whilst defenders have to stop every instance every time against a very short duration clock.

Thus the attackers very clearly have,

“The rocky high ground”

Whilst the defenders are,

"Struggling through a swamp with no firm footing."

This is not a battle anyone in the swamp would choose to fight.
