Cybersecurity in the Age of Instant Software

AI is rapidly changing how software is written, deployed, and used. Trends point to a future where AIs can write custom software quickly and easily: “instant software.” Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet, for example—and delete it when they’re done using it than to buy one commercially. Future systems could include a mix: both traditional long-term software and ephemeral instant software that is constantly being written, deployed, modified, and deleted.

AI is changing cybersecurity as well. In particular, AI systems are getting better at finding and patching vulnerabilities in code. This has implications for both attackers and defenders, depending on the ways this and related technologies improve.

In this essay, I want to take an optimistic view of AI’s progress, and to speculate what AI-dominated cybersecurity in an age of instant software might look like. There are a number of unknowns that will factor into how the arms race between attacker and defender might play out.

How flaw discovery might work

On the attacker side, the ability of AIs to automatically find and exploit vulnerabilities has increased dramatically over the past few months. We are already seeing both government and criminal hackers using AI to attack systems. The exploitation part is critical here, because it gives an unsophisticated attacker capabilities far beyond their understanding. As AIs get better, expect more attackers to automate their attacks using AI. And as individuals and organizations become increasingly able to run powerful AI models locally, AI companies’ monitoring and disruption of malicious AI use will become increasingly irrelevant.

Expect open-source software, including open-source libraries incorporated in proprietary software, to be the most targeted, because vulnerabilities are easier to find in source code. Unknown No. 1 is how well AI vulnerability discovery tools will work against closed-source commercial software packages. I believe they will soon be good enough to find vulnerabilities just by analyzing a copy of a shipped product, without access to the source code. If that’s true, commercial software will be vulnerable as well.

Particularly vulnerable will be software in IoT devices: things like internet-connected cars, refrigerators, and security cameras. Also industrial IoT software in our internet-connected power grid, oil refineries and pipelines, chemical plants, and so on. IoT software tends to be of much lower quality, and industrial IoT software tends to be legacy.

Instant software is differently vulnerable. It’s not mass market. It’s created for a particular person, organization, or network. The attacker generally won’t have access to any code to analyze, which makes it less likely to be exploited by external attackers. If it’s ephemeral, any vulnerabilities will have a short lifetime. But lots of instant software will live on networks for a long time. And if it gets uploaded to shared tool libraries, attackers will be able to download and analyze that code.

All of this points to a future where AIs will become powerful tools of cyberattack, able to automatically find and exploit vulnerabilities in systems worldwide.

Automating patch creation

But that’s just half of the arms race. Defenders get to use AI, too. These same AI vulnerability-finding technologies are even more valuable for defense. When the defensive side finds an exploitable vulnerability, it can patch the code and deny it to attackers forever.

How this works in practice depends on another related capability: the ability of AIs to patch vulnerable software, which is closely related to their ability to write secure code in the first place.

AIs are not very good at this today; the instant software that AIs create is generally filled with vulnerabilities, both because AIs write insecure code and because the people vibe coding don’t understand security. OpenClaw is a good example of this.

Unknown No. 2 is how much better AIs will get at writing secure code. The fact that they’re trained on massive corpuses of poorly written and insecure code is a handicap, but they are getting better. If they can reliably write vulnerability-free code, it would be an enormous advantage for the defender. And AI-based vulnerability-finding makes it easier for an AI to train on writing secure code.

We can envision a future where AI tools that find and patch vulnerabilities are part of the typical software development process. We can’t say that the code would be vulnerability-free—that’s an impossible goal—but it could be without any easily findable vulnerabilities. If the technology got really good, the code could become essentially vulnerability-free.

Patching lags and legacy software

For new software—both commercial and instant—this future favors the defender. For commercial and conventional open-source software, it’s not that simple. Right now, the world is filled with legacy software. Much of it—like IoT device software—has no dedicated security team to update it. Sometimes it is incapable of being patched. Just as it’s harder for AIs to find vulnerabilities when they don’t have access to the source code, it’s harder for AIs to patch software when they are not embedded in the development process.

I’m not as confident that AI systems will be able to patch vulnerabilities as easily as they can find them, because patching often requires more holistic testing and understanding. That’s Unknown No. 3: how quickly AIs will be able to create reliable software updates for the vulnerabilities they find, and how quickly customers can update their systems.

Today, there is a time lag between when a vendor issues a patch and when customers install it. That time lag is even longer for large organizational software; the risk of an update breaking the underlying software system is just too great for organizations to roll out updates without testing them first. But if AI can help speed up that process, by writing patches faster and more reliably, and by testing them in some AI-generated twin environment, the advantage goes to the defender. If not, the attacker will still have a window to attack systems until a vulnerability is patched.

Toward self-healing

In a truly optimistic future, we can imagine a self-healing network. AI agents continuously scan the ever-evolving corpus of commercial and custom AI-generated software for vulnerabilities, and automatically patch them on discovery.
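The scan-find-patch cycle such a network would run can be sketched in code. This is a toy illustration, not a real implementation: `Service`, `find_vulnerability`, and `generate_and_test_patch` are hypothetical stand-ins I’ve invented for this sketch. In the scenario described, the scanner and patcher would both be AI agents, and a patch would be validated in a twin environment before deployment; here the “scanner” just flags a known-dangerous C idiom and the “patcher” rewrites it.

```python
from dataclasses import dataclass

@dataclass
class Service:
    """One piece of software living on the network."""
    name: str
    code: str          # source (or binary) the agent can analyze
    patched: bool = False

def find_vulnerability(service: Service) -> bool:
    # Stand-in for an AI vulnerability scanner: flag an unbounded copy.
    return "strcpy(" in service.code

def generate_and_test_patch(service: Service) -> str:
    # Stand-in for an AI patcher: rewrite the flagged call. A real system
    # would test the candidate patch in a twin environment before deploying.
    return service.code.replace("strcpy(", "strlcpy(")

def healing_pass(fleet: list[Service]) -> list[str]:
    """One scan-and-patch cycle over every service; returns what was patched."""
    patched = []
    for service in fleet:
        if find_vulnerability(service):
            service.code = generate_and_test_patch(service)
            service.patched = True
            patched.append(service.name)
    return patched

fleet = [
    Service("billing", "strcpy(dst, src);"),
    Service("reports", "snprintf(dst, n, src);"),
]
print(healing_pass(fleet))  # → ['billing']
```

A second `healing_pass` over the same fleet finds nothing to patch, which is the point: once the defense patches a flaw, it is denied to attackers from then on.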

For that to work, software license agreements will need to change. Right now, software vendors control the cadence of security patches. Giving software purchasers this ability has implications for compatibility, the right to repair, and liability. Any solutions here are in the realm of policy, not tech.

If the defense can find, but can’t reliably patch, flaws in legacy software, that’s where attackers will focus their efforts. If that’s the case, we can imagine continuously evolving AI-powered intrusion detection systems, scanning inputs and blocking malicious attacks before they reach vulnerable software. Not as transformative as automatically patching vulnerabilities in running code, but nevertheless valuable.

The power of these defensive AI systems increases if they are able to coordinate with each other, and share vulnerabilities and updates. A discovery by one AI can quickly spread to everyone using the affected software. Again: Advantage defender.

There are other variables to consider. The relative success of attackers and defenders also depends on how plentiful vulnerabilities are, how easy they are to find, whether AIs will be able to find the more subtle and obscure vulnerabilities, and how much coordination there is among different attackers. All this comprises Unknown No. 4.

Vulnerability economics

Presumably, AIs will clean up the obvious stuff first, which means that any remaining vulnerabilities will be subtle. Finding them will take AI computing resources. In the optimistic scenario, defenders pool resources through information sharing, effectively amortizing the cost of defense. If information sharing doesn’t work for some reason, defense becomes much more expensive, as individual defenders will need to do their own research. But instant software means much more diversity in code: an advantage to the defender.

This needs to be balanced with the relative cost of attackers finding vulnerabilities. Attackers already have an inherent way to amortize the costs of finding a new vulnerability and creating a new exploit. They can vulnerability hunt cross-platform, cross-vendor, and cross-system, and can use what they find to attack multiple targets simultaneously. Fixing a common vulnerability often requires cooperation among all the relevant platforms, vendors, and systems. Again, instant software is an advantage to the defender.

But those hard-to-find vulnerabilities become more valuable. Attackers will attempt to do what the major intelligence agencies do today: find “nobody but us” zero-day exploits. They will either use them slowly and sparingly to minimize detection or quickly and broadly to maximize profit before they’re patched. Meanwhile, defenders will be both vulnerability hunting and intrusion detecting, with the goal of patching vulnerabilities before the attackers find them.

We can even imagine a market for vulnerability sharing, where the defender who finds a vulnerability and creates a patch is compensated by everyone else in the information-sharing/repair network. This might be a stretch, but maybe.

Up the stack

Even in the most optimistic future, attackers aren’t going to just give up. They will attack the non-software parts of the system, such as the users. Or they’re going to look for loopholes in the system: things that the system technically allows but were unintended and unanticipated by the designers—whether human or AI—and can be used by attackers to their advantage.

What’s left in this world are attacks that don’t depend on finding and exploiting software vulnerabilities, like social engineering and credential stealing attacks. And we have already seen how AI-generated deepfakes make social engineering easier. But here, too, we can imagine defensive AI agents that monitor users’ behaviors, watching for signs of attack. This is another AI use case, and one that I’m not even sure how to think about in terms of the attacker/defender arms race. But at least we’re pushing attacks up the stack.

Also, attackers will attempt to infiltrate and influence defensive AIs and the networks they use to communicate, poisoning their output and degrading their capabilities. AI systems are vulnerable to all sorts of manipulations, such as prompt injection, and it’s unclear whether we will ever be able to solve that. This is Unknown No. 5, and it’s a biggie. There might always be a “trusting trust problem.”

No future is guaranteed. We truly don’t know whether these technologies will continue to improve and when they will plateau. But given the pace at which AI software development has improved in just the past few months, we need to start thinking about how cybersecurity works in this instant software world.

This essay originally appeared in CSO.

EDITED TO ADD: Two essays were published after I wrote this. Both are good illustrations of where we are with AI vulnerability discovery. Things are changing very fast.

Posted on April 7, 2026 at 1:07 PM

Hong Kong Police Can Force You to Reveal Your Encryption Keys

According to a new law, the Hong Kong police can demand that you reveal the encryption keys protecting your computer, phone, hard drives, etc.—even if you are just transiting the airport.

In a security alert dated March 26, the U.S. Consulate General said that, on March 23, 2026, Hong Kong authorities changed the rules governing enforcement of the National Security Law. Under the revised framework, police can require individuals to provide passwords or other assistance to access personal electronic devices, including cellphones and laptops.

The consulate warned that refusal to comply is now a criminal offense. It also said authorities have expanded powers to take and keep personal electronic devices as evidence if they claim the devices are linked to national security offenses.

Posted on April 7, 2026 at 5:45 AM

New Mexico’s Meta Ruling and Encryption

Mike Masnick points out that the recent New Mexico court ruling against Meta has some bad implications for end-to-end encryption, and security in general:

If the “design choices create liability” framework seems worrying in the abstract, the New Mexico case provides a concrete example of where it leads in practice.

One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.

The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”

Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.

End-to-end encryption protects billions of people from surveillance, data breaches, authoritarian governments, stalkers, and domestic abusers. It’s one of the most important privacy and security tools ordinary people have. Every major security expert and civil liberties organization in the world has argued for stronger encryption, not weaker.

But under the “design liability” theory, implementing encryption becomes evidence of negligence, because a small number of bad actors also use encrypted communications. The logic applies to literally every communication tool ever invented. Predators also use the postal service, telephones, and in-person conversation. The encryption itself harms no one. Like infinite scroll and autoplay, it is inert without the choices of bad actors—choices made by people, not by the platform’s design.

The incentive this creates goes far beyond encryption, and it’s bad. If any product improvement that protects the majority of users can be held against you because a tiny fraction of bad actors exploit it, companies will simply stop making those improvements. Why add encryption if it becomes Exhibit A in a future lawsuit? Why implement any privacy-protective feature if a plaintiff’s lawyer will characterize it as “shielding bad actors”?

And it gets worse. Some of the most damaging evidence in both trials came from internal company documents where employees raised concerns about safety risks and discussed tradeoffs. These were played up in the media (and the courtroom) as “smoking guns.” But that means no company is going to allow anyone to raise concerns ever again. That’s very, very bad.

In a sane legal environment, you want companies to have these internal debates. You want engineers and safety teams to flag potential risks, wrestle with difficult tradeoffs, and document their reasoning. But when those good-faith deliberations become plaintiff’s exhibits presented to a jury as proof that “they knew and did it anyway,” the rational corporate response is to stop putting anything in writing. Stop doing risk assessments. Stop asking hard questions internally.

The lesson every general counsel in Silicon Valley is learning right now: ignorance is safer than inquiry. That makes everyone less safe, not more.

The essay has a lot more: about Section 230, about competition in this space, about the myopic nature of the ruling. Go read it.

Posted on April 6, 2026 at 3:09 PM

US Bans All Foreign-Made Consumer Routers

This is for new routers; you don’t have to throw away your existing ones:

The Executive Branch determination noted that foreign-produced routers (1) introduce “a supply chain vulnerability that could disrupt the U.S. economy, critical infrastructure, and national defense” and (2) pose “a severe cybersecurity risk that could be leveraged to immediately and severely disrupt U.S. critical infrastructure and directly harm U.S. persons.”

More information:

Any new router made outside the US will now need to be approved by the FCC before it can be imported, marketed, or sold in the country.

In order to get that approval, companies manufacturing routers outside the US must apply for conditional approval in a process that will require the disclosure of the firm’s foreign investors or influence, as well as a plan to bring the manufacturing of the routers to the US.

Certain routers may be exempted from the list if they are deemed acceptable by the Department of Defense or the Department of Homeland Security, the FCC said. Neither agency has yet added any specific routers to its list of equipment exceptions.

[…]

Popular brands of router in the US include Netgear, a US company, which manufactures all of its products abroad.

One exception to the general absence of US-made routers is the newer Starlink WiFi router. Starlink is part of Elon Musk’s company SpaceX.

Presumably US companies will start making home routers, if they think this policy is stable enough to plan around. But they will be more expensive than routers made in China or Taiwan. Security is never free, but policy determines who pays for it.

Posted on April 2, 2026 at 1:28 PM

Possible US Government iPhone Hacking Tool Leaked

Wired writes (alternate source):

Security researchers at Google on Tuesday released a report describing what they’re calling “Coruna,” a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers.

[…]

Coruna’s code also appears to have been originally written by English-speaking coders, notes iVerify’s cofounder Rocky Cole. “It’s highly sophisticated, took millions of dollars to develop, and it bears the hallmarks of other modules that have been publicly attributed to the US government,” Cole tells WIRED. “This is the first example we’ve seen of very likely US government tools—based on what the code is telling us—spinning out of control and being used by both our adversaries and cybercriminal groups.”

TechCrunch reports that Coruna is definitely of US origin:

Two former employees of government contractor L3Harris told TechCrunch that Coruna was, at least in part, developed by the company’s hacking and surveillance tech division, Trenchant. The two former employees both had knowledge of the company’s iPhone hacking tools. Both spoke on condition of anonymity because they weren’t authorized to talk about their work for the company.

It’s always super interesting to see what malware looks like when it’s created through a professional software development process. And the TechCrunch article has some speculation as to how the US lost control of it. It seems that an employee of L3Harris’s surveillance tech division, Trenchant, sold it to the Russian government.

Posted on April 2, 2026 at 6:05 AM

Is “Hackback” Official US Cybersecurity Strategy?

The 2026 US “Cyber Strategy for America” document is mostly the same thing we’ve seen out of the White House for over a decade, but with a more aggressive tone.

But one sentence stood out: “We will unleash the private sector by creating incentives to identify and disrupt adversary networks and scale our national capabilities.” This sounds like a call for hackback: giving private companies permission to conduct offensive cyber operations.

The Economist noticed (alternate link) this, too.

I think this is an incredibly dumb idea:

In warfare, the notion of counterattack is extremely powerful. Going after the enemy—its positions, its supply lines, its factories, its infrastructure—is an age-old military tactic. But in peacetime, we call it revenge, and consider it dangerous. Anyone accused of a crime deserves a fair trial. The accused has the right to defend himself, to face his accuser, to an attorney, and to be presumed innocent until proven guilty.

Both vigilante counterattacks and preemptive attacks fly in the face of these rights. They punish people who haven’t been found guilty. It’s the same whether it’s an angry lynch mob stringing up a suspect, the MPAA disabling the computer of someone it believes made an illegal copy of a movie, or a corporate security officer launching a denial-of-service attack against someone he believes is targeting his company over the net.

In all of these cases, the attacker could be wrong. This has been true for lynch mobs, and on the internet it’s even harder to know who’s attacking you. Just because my computer looks like the source of an attack doesn’t mean that it is. And even if it is, it might be a zombie controlled by yet another computer; I might be a victim, too. The goal of a government’s legal system is justice; the goal of a vigilante is expediency.

We don’t issue letters of marque on the high seas anymore; we shouldn’t do it in cyberspace.

Posted on April 1, 2026 at 12:57 PM

A Taxonomy of Cognitive Security

Last week, I listened to a fascinating talk by K. Melton on cognitive security, cognitive hacking, and reality pentesting. The slides from the talk are here, but—even better—Melton has a long essay laying out the basic concepts and ideas.

The whole thing is important and well worth reading, and I hesitate to excerpt. Here’s a taste:

The NeuroCompiler is where raw sensory data gets interpreted before you’re consciously aware of it. It decides what things mean, and it does this fast, automatic, and mostly invisible. It’s also where the majority of cognitive exploits actually land, right in this sweet spot between perception and conscious thought.

This is my term for what Daniel Kahneman called System 1 thinking. If the Sensory Interface is the intake port, the NeuroCompiler is what turns that input into “filtered meaning” before the Mind Kernel ever sees it. It takes raw signal (e.g., photons, sound waves, chemical gradients, pressure) and translates it into something actionable based on binary categories like threat or safe, familiar or novel, trustworthy or suspicious.

The speed is both an evolutionary feature and a modern bug. Processing here is fast enough to get you out of the way of a thrown object before you’ve consciously registered it. But “good enough most of the time” means “predictably wrong some of the time….”

A critical architectural feature: the NeuroCompiler can route its output directly back to the Sensory Interface and out as behavior, skipping the conscious awareness of the Mind Kernel entirely. Reflex and startle responses use this mechanism, making this bypass pathway enormously useful for survival. Yet it leaves a wide-open backdoor. If the layer that holds access to skepticism and deliberate evaluation can be bypassed completely, a host of exploits become possible that would otherwise fail.

That’s just one of the five levels Melton talks about: sensory interface, neurocompiler, mind kernel, the mesh, and cultural substrate.

Melton’s taxonomy is compelling, and her parallels to IT systems are fascinating. I have long said that a genius idea is one that’s incredibly obvious once you hear it, but one that no one has said before. This is the first time I’ve heard cognition described in this way.

Posted on April 1, 2026 at 5:59 AM
