Is AI Good for Democracy?

Politicians fixate on the global race for technological supremacy between the US and China. They debate the geopolitical implications of chip exports, the latest model releases from each country, and the military applications of AI. Someday, they believe, advancements in AI might tip the scales in a superpower conflict.

But the most important arms race of the 21st century is already happening elsewhere and, while AI is definitely the weapon of choice, the combatants are distributed across dozens of domains.

Academic journals are flooded with AI-generated papers, and are turning to AI to help review submissions. Brazil’s court system started using AI to triage cases, only to face an increasing volume of cases filed with AI help. Open source software developers are being overwhelmed with code contributions from bots. Newspapers, music, social media, education, investigative journalism, hiring, and procurement are all being disrupted by a massive expansion of AI use.

Each of these is an arms race. Adversaries within a system iteratively seeking an edge against their competition by continuously expanding their use of a common technology.

The beneficiaries of these arms races are US mega-corporations capturing wealth from the rest of us at an unprecedented rate. A substantial fraction of the global economy has reoriented around AI in just the past few years, and that trend is accelerating. In parallel, this industry’s lobbying interests are quickly becoming the object, rather than the subject, of US government power.

To understand these arms races, let’s look at an example of particular interest to democracies worldwide: how AI is changing the relationship between democratic government and citizens. Interactions that used to happen between people and elected representatives are expanding to a massive scale, with AIs taking the roles that humans once did.

In a notorious example from 2017, the US Federal Communications Commission opened a comment platform on the web to get public input on internet regulation. It was quickly flooded with millions of comments fraudulently orchestrated by broadband providers to oppose FCC regulation of their industry. From the other side, a 19-year-old college student responded by submitting millions of comments of his own supporting the regulation. Both sides were using software that was primitive by the standards of today’s AI.

Nearly a decade later, it is getting harder for citizens to tell when they’re talking to a government bot, or when an online conversation about public policy is just bots talking to bots. When constituents leverage AI to communicate better, faster, and more, it pressures government officials to do the same.

This may sound futuristic, but it’s become a familiar reality in the US. Staff in the US Congress are using AI to make their constituent email correspondence more efficient. Politicians campaigning for office are adopting AI tools to automate fundraising and voter outreach. By one 2025 estimate, a fifth of public submissions to the Consumer Financial Protection Bureau were already being generated with AI assistance.

People and organizations are adopting AI here because it solves a real problem that has made mass advocacy campaigns ineffective in the past: quantity has been inversely proportional to both quality and relevance. It’s easy for government agencies to dismiss general comments in favour of more specific and actionable ones. That makes it hard for regular people to make their voices heard. Most of us don’t have the time to learn the specifics or to express ourselves in this kind of detail. AI makes that contextualization and personalization easy. And as the volume and length of constituent comments grow, agencies turn to AI to facilitate review and response.

That’s the arms race. People are using AI to submit comments, which requires those on the receiving end to use AI to wade through the comments received. To the extent that one side does attain an advantage, it will likely be temporary. And yet, there is real harm created when one side exploits another in these adversarial systems. Constituents of democracies lose out if their public servants use AI-generated responses to ignore and dismiss their voices rather than to listen to and include them. The scientific enterprise is weakened if fraudulent papers sloppily generated by AI overwhelm legitimate research.

As we write in our new book, Rewiring Democracy, the arms race dynamic is inevitable. Every actor in an adversarial system is incentivized and, in the absence of new regulation in this fast moving space, free to use new technologies to advance its own interests. Yet some of these examples are heartening. They signal that, even if you face an AI being used against you, there’s an opportunity to use the tech for your own benefit.

But, right now, it’s obvious who is benefiting most from AI. A handful of American Big Tech corps and their owners are extracting trillions of dollars from the manufacture of AI chips, development of AI data centers, and operation of so-called ‘frontier’ AI models. Regardless of which side pulls ahead in each arms race scenario, the house always wins. Corporate AI giants profit from the race dynamic itself.

As formidable as the near-monopoly positions of today’s Big Tech giants may seem, people and governments have substantial capability to fight back. Various democracies are resisting this concentration of wealth and power with tools of anti-trust regulation, protections for human rights, and public alternatives to corporate AI. All of us worried about the AI arms race and committed to preserving the interests of our communities and our democracies should think in both these terms: how to use the tech to our own advantage, and how to resist the concentration of power AI is being exploited to create.

This essay was written with Nathan E. Sanders, and originally appeared in The Times of India.

Posted on February 24, 2026 at 7:06 AM

Comments

Warrick February 24, 2026 7:56 AM

Fundamental to democracy is the ability to discuss freely and hence arrive at some sort of consensus as to action. In an ideal world, this should be backed up by apolitical data and modelling. Attacks against perceived reality worry me greatly…

The BBC recently interviewed a content creator who used AI technology to create fake videos showing “Urban Decay” in the UK: “Why fake AI videos of UK urban decline are taking over social media.” Although the creator claimed not to be publishing for political purposes, the content has received considerable attention across social media platforms, and can drastically impact people’s perceptions of important societal issues.

The takeaway is that the attention economy we have built, even if made completely apolitical in its algorithms, will still encourage extreme positions, to the detriment of democracy.

mark February 24, 2026 8:40 AM

Short answer: NO!
Longer answer: as the old sign says, people screw things up, but to really screw them up takes a computer. Trolls aren’t enough; chatbots can do far more to screw up any dialog.

K.S February 24, 2026 9:05 AM

To me, economic concerns are secondary to AI (and therefore the corporations controlling them) becoming arbiters of what is true. For that purpose, AI can be used in two complementary ways – a) to suppress information by drowning the message with a heckler’s veto; b) by providing slanted or inaccurate information to people that rely on it.

I don’t think individual humans are up to the task of filtering out AI; we are going to drown trying to drink from the AI firehose. We desperately need tools that are: i) fully under the individual’s control; ii) capable of identifying and filtering AI-generated content. Unfortunately, this might mean the death of online anonymity, as I see strongly authenticating authorship as the only way to get there.

Dave February 24, 2026 9:24 AM

The core of this problem is the inherent instability of an adversarial system. This is why authoritarianism is rising in the West. People are turning to authority as a means of dealing with the cognitive overload the technological revolution has induced.

It didn’t have to be this way. Democracy can be reached by discussion and consensus. I believe it is too late for that now, sadly, so I think the end is going to be the destruction of the American dream of Liberty and Equality. What I do know is that if the choice is between technolibertarianism and dictatorship Americans will choose dictatorship every time.

Clive Robinson February 24, 2026 10:46 AM

@ Bruce, ALL,

You and Nathan say,

“Each of these is an arms race. Adversaries within a system iteratively seeking an edge against their competition by continuously expanding their use of a common technology.”

The one thing that has become clear beyond anything else is that,

“AI favours the attacker not the defender.”

Whilst it’s normally easy to see which is which in a two sided battle space, it becomes very quickly more difficult as the number of involved parties goes up.

In short,

“AI is a weapon of dominance”

And it is already being used that way on a massive scale.

The losers are almost always going to be “society” because those who chose to use AI as an attack weapon rather than a defensive system usually care not a jot for the harm they cause, as long as they obtain their objectives (even when they actually harm themselves in the process).

So the answer to the title question of,

“Is AI good for democracy?”

Is currently a resounding “NO” and I fully expect it to remain that way until something more draconian or tyrannical comes along to replace it with worse.

Winter February 24, 2026 10:49 AM

@Dave

The core of this problem is the inherent instability of an adversarial system. This is why authoritarianism is rising in the West

I think the causes are more complex and more simple.

Authoritarianism seems to be on the rise when economic growth levels off.

Especially, people seem to feel the “cake is fixed” or even getting smaller. They now are fixated on getting their part of the spoils by driving out others from having a part.

Driving out people is what authoritarianism is all about.

It won’t work because the “aristocracy” is increasing their share of the spoils in a mad land grab. In the end, nothing will be left for the hoi polloi. See Russia for an example.

Dave February 24, 2026 11:49 AM

@Winter

As I take it, your thesis is that the system is rigged against the little guy, who feels he is being squeezed out of the system and bemoans the loss of opportunity. Logically, though, such individuals should not be attracted to authoritarianism; they should be attracted to some kind of libertarian or even anarchy ideology. They should want to break down barriers to entry not increase them.

I don’t think that authoritarianism has anything to do directly with the amount of economic growth. It is a response to uncertainty; reducing uncertainty is the entire attraction of any form of security, whether it be computer or political. @Bruce addresses this in Liars and Outliers. I wonder if he still feels the same confidence in “anonymous trust” now as he did then.

lurker February 24, 2026 12:46 PM

@Bruce
“Is AI Good for Democracy?”

I assume that’s a rhetorical question that doesn’t expect or deserve an answer. Because neither “AI” nor “democracy” are defined in your discussion, and the meaning and implications of neither are understood by the general population whose lives are being affected by both. Any attempt to supply an answer would simply be demagoguery.

Winter February 24, 2026 12:58 PM

@Dave

Logically, though, such individuals should not be attracted to authoritarianism; they should be attracted to some kind of libertarian or even anarchy ideology. They should want to break down barriers to entry not increase them.

The idea that people would support policies that improve their lot is not quite what is found in reality.

When asked what they prefer, getting it somewhat better, while others get even more, or, not improving or being worse off, but others lose even more, a disconcerting part of the populace prefers others being worse off, even if it damages their own case.

The history of Apartheid in South Africa, one of the most unequal countries, starts with poor blue-collar workers getting behind insane discrimination of colored people while not getting any benefits.

Impossibly Stupid February 24, 2026 1:20 PM

Yes! AI, as it is currently marketed, is fantastic for democracy, as it is currently practiced. The ease with which poorly educated voters living in fear can be controlled should never be underestimated.

But, right now, it’s obvious who is benefiting most from AI.

Uh, is it? I see a frenzy of investment dollars not being turned into profit dollars, and no path to profitability ever. Plus a ton of circular deals that make me question whether even the merchants with a monopoly on supplies are going to end up doing well in this digital gold rush. The only people who seem poised to make money are the ones who know the hype has reached scam/bubble levels and have started skimming off the top. Only time will tell how the system eventually decides to deal with such rampant fraud.

Ted Heise February 24, 2026 3:33 PM

Man, this takes me back to working with Les Earnest (and others) to help EFF by (lightly) editing comments against the DMCA for submission to the notice and comment rule-making process. I can’t imagine trying to do something like that in today’s world!

Clive Robinson February 24, 2026 6:25 PM

@ Impossibly Stupid,

With regards,

“Only time will tell how the system eventually decides to deal with such rampant fraud.”

It’s the USA, under the paid for capitalism system of the neo-cons, it’s not fraud if they are “professional investors”…

So Venture Capitalists are “careful” who they con with their pump-n-dump schemes chasing the golden ticket that lights up and trumps all others.

So not directly open to little guys with 401k’s trying to raise enough to send their children into what they think is a semi-decent education (but in reality US unis are now just another form of “hedge fund fraud”)…

And it’s the little guy’s money, with 30% sliced off the top by financial investment funds, that the VCs get… That they’ve also sliced 30% or more off as fees etc as well…

Is it a crooked game thought up by the same types that caused the 2008 Financial Crisis? Of course it is.

But the legislation they’ve bought from crooked politicians on both sides enables them to wash their hands of blame, whilst walking away with another 50% or more of the little guys money as fees and similar with no liability…

But hey that’s the “American way”.

Clive Robinson February 24, 2026 6:55 PM

@ Winter, Dave,

I’ve explained this before,

“When asked what they prefer, getting it somewhat better, while others get even more, or, not improving or being worse off, but others lose even more, a disconcerting part of the populace prefers others being worse off, even if it damages their own case.”

Search for “Status Gap”; it’s what the “self entitled” crave more than anything else.

They will cheerfully vote for a short, nasty, and brutish life full of pain if they can lord it over everyone else by making their lives even worse.

All the progress mankind has made has been despite the “self entitled” who try desperately to hang onto what they see as “status”.

They would vote to be tortured every day if you got tortured day and night… Thus any idiot that promises to “make them great again” will have the self entitled not just queuing around the block, but also paying to hear about how those others will get their lot…

It’s why ICE is so popular with some, but those some forget two things,

1, The law of diminishing returns.
2, The saying of German Pastor Martin Niemöller from WWII,

“First they came for the Communists
And I did not speak out
Because I was not a Communist

Then they came for the Socialists
And I did not speak out
Because I was not a Socialist

Then they came for the trade unionists
And I did not speak out
Because I was not a trade unionist

Then they came for the Jews
And I did not speak out
Because I was not a Jew

Then they came for me
And there was no one left
To speak out for me”

But the “self entitled” want the first four groups to feel pain and to be extinguished, without thinking when it will be their turn… Because the simple fact is, unless you fight for others, you are explicitly giving consent for it to happen to you in turn.

It really is that simple.

And as I said further up,

“AI is a weapon of dominance”

Especially for those that want to attack society in its various groups.

Just one thing to remember: after WWII, what do you think happened to those assets and companies stolen from minority groups by the National Socialists?

The US simply “divided them up” amongst the “self entitled” and congratulated themselves as being oh so clever to acquire the “spoils of war”.

Just look at how they are now run by the “self entitled”…

Kevin February 24, 2026 7:27 PM

Unfortunately, this may require MORE government rules to control.

Perhaps each person can get their “official email address” registered by the government. Verification using a driver’s license or something. Then each email or blog comment can be verified as coming from a real person. We can auto-filter out all the AI/spam so much easier.

BUT: we have to trust the government to do this so no-one will like it.

lurker February 25, 2026 12:28 AM

@Paul Sagi

Yeah, I noticed that, and assumed that US MSM were already so besotted with AI this and AI that, they wouldn’t want their readers to stop and think about it.

Clive Robinson February 25, 2026 3:24 AM

@ lurker, Bruce, ALL,

Is AGI or GAI actually of worth?

Would be a better question to investigate, as few have dared speak the truth of it…

Firstly, the amount of money being thrown into “Current AI LLM and ML Systems” has, once you remove the baseline, been roughly doubling every year for the past few years. See the spending graph in

https://garymarcus.substack.com/p/turns-out-generative-ai-was-a-scam

This is in no way sustainable, as common sense will tell people if they just sit and think on it for a moment. Because the only way it could be sustained is with ever increasing inflation into hyperinflation. Which, whilst Mr Trump might like that, is more generally considered an economic disaster, as amongst other things it makes work “valueless”, well beyond the “hamster wheel of pain” or “Red Queen’s Race” metaphors.

But the other essential question that should be asked is

“What is the real productivity of the technology in economic terms?”

And the answer is fairly close to zero…

After you remove all the economic activities Current AI LLM and ML systems “can not be used for”, the remaining subset is incredibly small and, whilst not “vanishingly small”, in specific areas it turns out to be productive on less than 2.5% of those rare examples. So the difference to general GDP is as close to unmeasurable as it’s possible to get…

Anyway, read the whole article and make your own judgment,

Turns out Generative AI was a scam

Or at least very very far from what it has been cracked up to be

Breaking news from Shira Ovide at the Washington Post. Gift link here.

Remember how in November the White House Crypto and AI advisor was telling us that Generative AI was contributing half of US GDP growth?

Turns out it wasn’t, as Shira Ovide just reported

None of what Ovide had to say about the overestimation of Generative AI should actually come as a surprise. Generative AI has been inherently unreliable from the start; none of the problems that I warned about over the last half decade has been properly solved. Large language models still hallucinate, and they still make boneheaded errors; they still lack a proper concept of reality. They often produce workslop. A recent survey called The Remote Labor Index found that they could only do 2.5% of human tasks, and that is a massive overestimate, since literally everything that requires physical labor was excluded.

It goes on getting more and more truthfully brutal as the article goes on… And ends with this penultimate paragraph of pain and an ultimate sentence of sarcasm,

When all is said and done, my best guess is that generative AI will have done significantly more harm to society than good. Although there are some practical use cases, such as coding, it is an inherently unreliable technology. It is ripping apart our educational system and our information ecosphere, and flooding the zone with nonconsensual deepfake porn. It is threatening the environment with data centers built on too much speculation. It is leading some people into serious mental health issues. And it may well lay waste to our economy, once banks and investors who bought the hype start to fall.

The countdown to Trump leaving the AI building has begun.

None of which is actually surprising to me, as I’ve kind of been pointing out why it’s going to be of little use for quite some time now, and have explained it from,

1, The technical function aspect.
2, The surveillance business plan aspect.
3, The failed before it starts advertising aspect.
4, The faux value by worthless circular agreements of ridiculous value aspect.

My advice is: if you have any investments in AI, get them out whilst there are still enough idiots to give you back what you put in plus some faux value due to their own “more money than sense” outlook / behaviours.

Otherwise it will be more than your shirt coming off of your back, it will be the skin and flesh right down to the bone at the very least.

These portents of collapse and demise are written on the wall in blood,

https://en.wikipedia.org/wiki/Belshazzar%27s_feast

Don’t whine in the future that you were not told to leave whilst you still could.

Rontea February 25, 2026 3:10 PM

Artificial intelligence is increasingly functioning as a permission structure: not just a tool, but a justification. Companies and governments are deploying AI to automate decisions they were already inclined to make, insulating themselves from scrutiny by pointing to the opacity of algorithms. In practice, AI often legitimizes actions that would face more resistance if taken directly by human actors. This is particularly dangerous because it masks the concentration of power under a veneer of impartial computation. We are witnessing a pattern where the mere invocation of AI provides cover for surveillance initiatives, exploitative business practices, and policy decisions that erode democratic oversight. The real challenge is not just in regulating AI’s capabilities, but in confronting the ways it is used to normalize and accelerate the very behaviors we should be questioning.

silentcomet February 25, 2026 11:36 PM

What stands out is the idea of AI not just as a geopolitical weapon, but as a force reshaping democratic participation itself. From the 2017 Federal Communications Commission comment flooding incident to AI-assisted submissions to the Consumer Financial Protection Bureau, we’re already seeing how scale, automation, and authenticity are colliding in civic spaces, all powered by advanced AI chips.

The real challenge isn’t whether AI is good or bad for democracy — it’s who controls it, who benefits from it, and whether governance can evolve as fast as the technology. If AI amplifies citizen voices responsibly, it strengthens democracy. If it concentrates power and automates manipulation, it weakens it. The outcome depends less on the tools — and more on the rules we build around them, including the ethical deployment of AI chips.
