Agentic AI Could Improve Everything or Cascade into Doom

After days of chaos, hundreds of deaths and trillions of dollars wiped off stock markets, the Great Agentic Cascade of July 2028 turned out to have begun much like the great internet outages of October and November 2025: with a minor bug at a major provider on which many of the world’s biggest internet services depended to manage their traffic. But in the intervening three years, the world had gone all-in on agentic AI—systems that can make and carry out decisions without human intervention. Many internet companies had created AI agents that automatically spun up servers at alternative cloud firms when their main service went down. That was their, and the world’s, undoing.

AI agents at the other cloud firms interpreted the deluge of incoming requests as a possible cyberattack. They locked out new customers and throttled traffic for existing ones. Companies with on-premises servers switched over to them, but the resulting spike in electricity demand triggered the AI agents at power utilities to impose rolling blackouts. Digital gridlock became physical gridlock as millions of cloud-connected autonomous vehicles pulled over, leaving ambulances and fire trucks stuck in the snarl-ups.

Major financial institutions had procedures to avoid getting sucked into the turmoil. But millions of retail investors had configured the AI agents provided by their trading platforms to exploit unusual market fluctuations. A thousand inventive trading strategies bloomed, intermingled, and chased each other. A kind of merry chaos reigned, with stocks being swept off their feet and rudely set down again like dancers at a raucous wedding party. It took the temporary shutdown of all markets and a herculean effort by IT teams before some semblance of order was restored.

This, of course, is fiction, and no doubt any number of experts from any number of industries could quibble with the details. But the broader point is: Agentic AI introduces a level of unpredictability that nobody—and least of all any government—is prepared for.

Two recent publications underline this. Rewiring Democracy, by Bruce Schneier and Nathan Sanders, is a dizzying list of ways in which AI could be used in politics, lawmaking, government operations and the judiciary, as well as how each of these uses could go wrong. “The Agentic State”, a report from a group led by Luukas Ilves, Estonia’s former chief information officer, is an equally dizzying list of ways in which a government could be transformed for the better by embracing agentic AI, coupled with warnings of how far behind the curve it will fall if it doesn’t.

Both works, of course, are highly speculative. Ilves et al. ask us to imagine a world in which lawmakers trust AI so implicitly that they are comfortable dictating broad policy goals—a target for air quality, say—which agents “continuously translate” into “adaptive, real-time regulations … as new data comes in.”

Schneier and Sanders describe AI agents that constantly monitor an entire industry for signs of regulatory noncompliance and tell human inspectors where to swoop, or that let plaintiffs draft and file their own lawsuits. (The possible pitfalls: Industry will develop its own AI agents that excel at finding loopholes, while the legal system will be swamped with cases if it can’t adapt quickly.)

Again, you might sneer at specifics. And again, the point isn’t the details but the diversity. If even a fairly small fraction of the applications these authors contemplate come to pass, we’ll be looking at a world even more dramatically unpredictable than today’s.

As we all become increasingly dependent on agentic AI, the risk is not that your agent might do a bad job for you. If it books you the wrong flights or pays too much for eggs, as one prototype infamously did, you’ll be more careful about trusting it next time.

Rather, the risk is a cascade effect, when millions or billions of agents of all different kinds, each doing exactly what it’s meant to, inadvertently collude in ways nobody could predict. Think of it as a market “flash crash,” but one that could play out not just in the financial system but across every networked system in the world.
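To make that mechanism concrete, here is a minimal toy simulation, loosely mirroring the fictional scenario above. It is my own sketch in Python, with entirely made-up agents and numbers: each client-side agent follows a perfectly sensible local rule (fail over to the backup when the primary goes down), and the backup's own agent follows an equally sensible rule (treat a sudden surge as an attack).

```python
# Toy sketch (not from the article): individually correct agents, collective failure.
import random

N_AGENTS = 1000            # independent client-side agents
BACKUP_CAPACITY = 300      # requests per tick the backup can absorb
ATTACK_THRESHOLD = 600     # above this, the backup's agent assumes an attack
TICKS = 6

primary_up = True
backup_locked = False

for tick in range(TICKS):
    if tick == 2:
        primary_up = False  # a "minor bug" takes the primary provider down

    # Each client agent acts correctly in isolation: fail over when the primary is down.
    failover_requests = sum(
        1 for _ in range(N_AGENTS)
        if not primary_up and random.random() < 0.9
    )

    # The backup's agent also acts correctly: a sudden surge looks like an attack.
    if failover_requests > ATTACK_THRESHOLD:
        backup_locked = True

    served = 0 if backup_locked else min(failover_requests, BACKUP_CAPACITY)
    print(f"tick {tick}: primary_up={primary_up} "
          f"failover={failover_requests} served={served} locked={backup_locked}")
```

No agent misbehaves, yet from the moment the primary fails, nobody gets served. Scale the same dynamic up to billions of heterogeneous agents and you have the cascade.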

An emerging field of research called “multiagent security” is attempting to map out some of the ways things could go wrong. Probably the most comprehensive survey so far is a report earlier this year from the Cooperative AI Foundation. It makes for rather terrifying reading. Studies have shown countless ways in which agents working together can collude, deceive their human handlers, be exploited (by humans or by other agents), get into arms races with each other, spread misinformation, manipulate systems, and do other noxious things in pursuit of whatever goals they’ve been set.

What’s worse, unlike regular software, today’s AI agents can learn and rewrite themselves as they go, making it even harder to figure out what’s going on and how to stop it.

Yet the multiagent threat has so far flown under the radar. Search the web for “risks of agentic AI” and you’ll mostly find reports on what organizations can do to defend against attacks from AI agents, or from their own agents going rogue.

That’s not because those are the likeliest risks, but because they’re the most lucrative ones to address. Organizations will pay to protect themselves, and most of these reports come from the consultancies and cloud infrastructure companies competing for their business.

Preventing systemic risks, on the other hand, is the job of governments. But there seems to be little or no serious conversation at the policy level about what governments should do—not in public, at least.

What could they do? The Cooperative AI Foundation report has some suggestions. For example, governments should fund more research—particularly to map out possible real-world scenarios of agentic havoc. A few national AI agencies, mostly from Europe and the Pacific Rim, banded together a year ago to create the International Network of AI Safety Institutes, which recently started doing this sort of analysis. But their scenarios so far don’t run to the kind of systemwide risk I’m talking about.

The AI industry itself could also test what happens when different models interact, and publish the results—a sort of multiagent form of the “model card” that developers typically release along with a new AI model, describing its traits and performance. Governments might be able to require, or at least encourage, that kind of testing, since companies may be reluctant to share their data with each other.
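As a rough illustration of what such published results could look like, here is a hypothetical sketch in Python of a "multiagent card": a structured record of pairwise interaction tests between models from different vendors. The field names, the model names, and the stubbed test harness are all my assumptions; no such standard exists today.

```python
# Hypothetical "multiagent model card": a shareable record of how models
# behave when they interact, published alongside the usual single-model card.
from dataclasses import dataclass, field, asdict
from itertools import combinations
import json

@dataclass
class InteractionResult:
    model_a: str
    model_b: str
    scenario: str                 # e.g. "price negotiation", "shared resource"
    rounds: int
    collusion_detected: bool
    deception_detected: bool
    notes: str = ""

@dataclass
class MultiAgentCard:
    publisher: str
    models_tested: list
    results: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

def run_interaction_test(model_a: str, model_b: str, scenario: str) -> InteractionResult:
    # Placeholder: a real harness would put the two models into the scenario,
    # log their messages, and score the transcript for collusion or deception.
    return InteractionResult(model_a, model_b, scenario,
                             rounds=50, collusion_detected=False,
                             deception_detected=False, notes="stub result")

models = ["vendor-a/agent-1", "vendor-b/agent-2", "vendor-c/agent-3"]
card = MultiAgentCard(publisher="example-lab", models_tested=models)
for a, b in combinations(models, 2):
    card.results.append(run_interaction_test(a, b, "shared resource allocation"))
print(card.to_json())
```

The point of the sketch is only that the artifact could be as simple and shareable as today's model cards, which lowers the bar for governments to require or encourage it.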

Another possible lever for limiting agentic risk is the standards now starting to emerge on how AI agents communicate with other software, similar to the security and communications protocols that enable computers on the internet to talk to each other. Governments might be able to put a thumb on the scale here. Perhaps, for instance, there could be a universal ID system for agents, similar to computers’ IP addresses, that would smooth communication but also make systems more transparent and accountable.
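To give a flavor of what a universal agent ID could mean in practice, here is a purely hypothetical sketch in Python: every agent-to-agent message carries a registry-issued identifier and an accountable operator, and the envelope is signed so the identity cannot be quietly stripped or forged. The header fields, the registry, and the use of a shared HMAC key are my assumptions for illustration, not an existing or proposed standard.

```python
# Hypothetical "agent ID" envelope for agent-to-agent messages,
# loosely analogous to an IP address plus a signature.
import hashlib
import hmac
import json
import uuid

SHARED_SECRET = b"registry-issued-key"   # stand-in for real key infrastructure

def make_message(agent_id: str, operator: str, payload: dict) -> dict:
    """Wrap a payload with a traceable agent identity header."""
    header = {
        "agent_id": agent_id,            # registry-issued, globally unique
        "operator": operator,            # the accountable human organization
        "message_id": str(uuid.uuid4()),
    }
    body = json.dumps({"header": header, "payload": payload}, sort_keys=True)
    signature = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "signature": signature}

def verify_message(message: dict) -> dict:
    """Reject messages whose identity header has been tampered with."""
    expected = hmac.new(SHARED_SECRET, message["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["signature"]):
        raise ValueError("invalid agent signature")
    return json.loads(message["body"])

msg = make_message("agent:example:0001", "Example Utility Co.",
                   {"action": "request_capacity", "megawatts": 5})
print(verify_message(msg)["header"])
```

In a real system the signature would come from proper public-key infrastructure rather than a shared secret; the sketch only shows how identity and accountability could travel with every message, making interactions easier to audit after something goes wrong.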

But perhaps the biggest obstacle is lack of imagination. For years, the dominant story about AI risk has been that of a malign superintelligence—promoted, ironically, by AI evangelists who need to convince themselves and everyone else that they are capable of building something that clever.

More recently, people have started to understand that threats like mass job loss, opaque algorithmic decision-making, misinformation and chatbot-induced psychosis are more real and immediate. But AI agents are still so new that a multiagent meltdown is just a hard idea to grasp. Let’s hope it doesn’t take a real meltdown for that to change.
