Will AI Strengthen or Undermine Democracy?

Listen to the Audio on NextBigIdeaClub.com

Below, co-authors Bruce Schneier and Nathan E. Sanders share five key insights from their new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship.

What’s the big idea?

AI can be used both for and against the public interest within democracies. It is already being used in the governing of nations around the world, and there is no escaping its continued use in the future by leaders, policy makers, and legal enforcers. How we wire AI into democracy today will determine if it becomes a tool of oppression or empowerment.

1. AI’s global democratic impact is already profound.

It’s been just a few years since ChatGPT stormed into view, and AI’s influence has already permeated every democratic process in governments around the world:

  • In 2022, an artist collective in Denmark founded the world’s first political party committed to an AI-generated policy platform.
  • Also in 2022, South Korean politicians running for the presidency were the first to use AI avatars to communicate with voters en masse.
  • In 2023, a Brazilian municipal legislator passed the first enacted law written by AI.
  • In 2024, a U.S. federal court judge started using AI to interpret the plain meaning of words in U.S. law.
  • Also in 2024, the Biden administration disclosed more than two thousand discrete use cases for AI across the agencies of the U.S. federal government.

These examples illustrate the diverse uses of AI across citizenship, politics, legislation, the judiciary, and executive administration.

Not all of these uses will create lasting change. Some of these will be one-offs. Some are inherently small in scale. Some were publicity stunts. But each use case speaks to a shifting balance of supply and demand that AI will increasingly mediate.

Legislators need assistance drafting bills and have limited staff resources, especially at the local and state level. Historically, they have looked to lobbyists and interest groups for help. Increasingly, it’s just as easy for them to use an AI tool.

2. The first places AI will be used are where there is the least public oversight.

Many of the use cases for AI in governance and politics have vocal objectors. Some make us uncomfortable, especially in the hands of authoritarians or ideological extremists.

In some cases, politics will be a regulating force to prevent dangerous uses of AI. Massachusetts has banned the use of AI face recognition in law enforcement because of real concerns, voiced by the public, about the technology’s tendency to encode racial bias.

Some of the uses we think might be most impactful are unlikely to be adopted quickly because of legitimate concern about their potential to make mistakes, introduce bias, or subvert human agency. AIs could be assistive tools for citizens, acting as voting proxies that help them weigh in on larger numbers of more complex ballot initiatives, but we know that many will object to anything that verges on giving AIs a vote.

But AI will continue to be rapidly adopted in some aspects of democracy, regardless of how the public feels. People within democracies, even those in government jobs, often have great independence. They don’t have to ask anyone if it’s ok to use AI, and they will use it if they see that it benefits them. The Brazilian city councilor who used AI to draft a bill did not ask for anyone’s permission. The U.S. federal judge who used AI to help him interpret law did not have to check with anyone first. And the Trump administration seems to be using AI for everything from drafting tariff policies to writing public health reports—with some obvious drawbacks.

It’s likely that even the thousands of disclosed AI uses in government are only the tip of the iceberg. These are just the applications that governments have seen fit to share; the ones they think are the best vetted, most likely to persist, or maybe the least controversial to disclose.

3. Elites and authoritarians will use AI to concentrate power.

Many Westerners point to China as a cautionary tale of how AI could empower autocracy, but the reality is that AI provides structural advantages to entrenched power in democratic governments, too. The nature of automation is that it gives those at the top of a power structure more control over the actions taken at its lower levels.

It’s famously hard for newly elected leaders to exert their will over the many layers of human bureaucracies. The civil service is large, unwieldy, and messy. But it’s trivial for an executive to change the parameters and instructions of an AI model being used to automate the systems of government.

The dynamic of AI effectuating the concentration of power extends beyond government agencies. Over the past five years, Ohio has undertaken a wholesale revision of its administrative code using AI. The leaders of that project framed it in terms of efficiency and good governance: deleting millions of words of outdated, unnecessary, or redundant language. The same technology could be applied to advance more ideological ends, like purging all statutory language that places burdens on business, neglects to hold businesses accountable, protects some class of people, or fails to protect others.

Whether you like or despise automating the enactment of those policies will depend on whether you stand with or are opposed to those in power, and that’s the point. AI gives any faction with power the potential to exert more control over the levers of government.

4. Organizers will find ways to use AI to distribute power instead.

We don’t have to resign ourselves to a world where AI makes the rich richer and the elite more powerful. This is a technology that can also be wielded by outsiders to help level the playing field.

In politics, AI gives upstart and local candidates access to skills and the ability to do work at a scale that used to be available only to well-funded campaigns. In the 2024 cycle, Congressional candidates running against incumbents, like Glenn Cook in Georgia and Shamaine Daniels in Pennsylvania, used AI to help themselves be everywhere all at once. They used AI to make personalized robocalls to voters, write frequent blog posts, and even generate podcasts in the candidate’s voice. In Japan, a candidate for Governor of Tokyo used an AI avatar to respond to more than eight thousand online questions from voters.

Outside of electoral politics, labor organizers are also leveraging AI to build power. The Workers Lab is a U.S. nonprofit developing assistive technologies for labor unions, like AI-enabled apps that help service workers report workplace safety violations. The 2023 Writers Guild of America strike serves as a blueprint for organizers: the guild won concessions from Hollywood studios that protect its members against being displaced by AI, while also securing guarantees that writers can use AI as an assistive tool to their own benefit.

5. The ultimate democratic impact of AI depends on us.

If you are excited about AI and see the potential for it to make life, and maybe even democracy, better around the world, recognize that there are a lot of people who don’t feel the same way.

If you are disturbed about the ways you see AI being used and worried about the future that leads to, recognize that the trajectory we’re on now is not the only one available.

The technology of AI itself does not pose an inherent threat to citizens, workers, and the public interest. Like other democratic technologies—voting processes, legislative districts, judicial review—its impacts will depend on how it’s developed, who controls it, and how it’s used.

Constituents of democracies should do four things:

  • Reform the technology ecosystem to be more trustworthy, so that AI is developed with more transparency, more guardrails around exploitative use of data, and public oversight.
  • Resist inappropriate uses of AI in government and politics, like facial recognition technologies that automate surveillance and encode inequity.
  • Responsibly use AI in government where it can help improve outcomes, like making government more accessible to people through translation and speeding up administrative decision processes.
  • Renovate the systems of government vulnerable to the disruptive potential of AI’s superhuman capabilities, like political advertising rules that never anticipated deepfakes.

These four Rs are how we can rewire our democracy in a way that applies AI to truly benefit the public interest.

This essay was written with Nathan E. Sanders, and originally appeared in The Next Big Idea Club.

EDITED TO ADD (11/6): This essay was republished by Fast Company.

Posted on October 31, 2025 at 7:08 AM

Comments

KC October 31, 2025 8:53 AM

Thank you for your deep research, insights, and communication, Bruce and Nathan.

It’s so helpful to have different formats that expand the opportunity to engage with and participate in the material.

I have a copy of the audio and e-book. I will naturally enjoy going through the book more slowly and incrementally. Always so much interesting and meaningful detail.

From the book: Collectively, humans will help shape society’s next phase of transformation—the first to be influenced by AI.

Truly. It’s a new era.

I appreciate thinking of AI uses in the framework of the four Rs. It transforms something that can feel overwhelming into an empowering and human framework.

Rontea October 31, 2025 9:14 AM

Re: the four Rs—Reform, Resist, Responsibly use, and Renovate—as the framework for guiding how AI should integrate into democratic systems.

Reforming the technology ecosystem is essential to ensure transparency and public trust, because without it, citizens will always question the motives of those using AI. Resisting inappropriate uses like mass surveillance or biased facial recognition is crucial to protect individual freedoms and uphold civil rights. At the same time, responsibly using AI in government to enhance accessibility, streamline services, and improve decision-making can strengthen democracy if done with care. Finally, renovating outdated systems—like modernizing political advertising rules and creating safeguards against deepfakes—is vital to prevent AI from being weaponized in harmful ways. Together, these steps create a roadmap for ensuring AI empowers people, rather than concentrating power in the hands of a few.

Clive Robinson October 31, 2025 2:01 PM

@ Bruce,

There is a whole very nasty area that you and Nathan Sanders have either missed or chosen to ignore.

I suspect this is because you both are way too optimistic about AI and who controls it.

In the UK, the current political incumbents supposedly in charge[1] have decided that it’s not just going to be a Police State, but that there will be very significant “Ideological and Economic Oppression”.

Have a hunt for the UK’s HMRC (its equivalent of the IRS) and the DWP, and an AI system they call “The Connect System”.

It was developed by Detica, also known as BAE Systems “Digital Intelligence”,

https://en.wikipedia.org/wiki/BAE_Systems_Digital_Intelligence

For those that do not know, they are a fairly shady organisation that has pretensions to be the UK’s Palantir.

It’s connected into private and personal details like banking systems across the entire EU and several other nations. If pushed for details, the story given is that it’s “to check for tax and benefits cheating”.

https://en.wikipedia.org/wiki/Connect_(computer_system)

However, that is not its intended purpose, just a side effect. What the Connect System does is “profile” people and those around them. Have a read about the known information sources.

It came out of an idea originated by a think tank for the other UK political party after the “Poll Tax Riots” under Margaret Thatcher, in the era of John Major back last century. They had the good sense not to touch it with a barge pole.

However, when the other party got in under an unscrupulous leader[1], the idea got revived and was going to be trialled in Northern Ireland. It was so unpopular they changed it a bit to shift the blame to the local councils, but left it “open” to going back to it.

The cause of the issue is simple: insufficient money is “coming in from the plebs” now that Big Business no longer contributes anything but the little they think they can get away with. Mobile phone operators and the major Internet companies owe many billions, and it was they that created the black hole in UK finances. But politically they are “sacrosanct”, even though they are grossly tax-avoidant and thus unfairly monopolistic entities.

This black hole still exists even after the politicians have created crimes that are near impossible not to commit. These use the UK “Single Justice Procedure”, with little, no, or completely fabricated evidence, to find you guilty, and thus levy a fine equivalent to between two and twelve times a day’s take-home pay of someone on the bottom rung of the UK middle class (~180-1200 USD).

You can see how the Connect System is already being used if you care to look,

https://www.theguardian.com/society/2025/oct/31/woman-flight-italy-did-not-board-child-benefits-stopped

So expect that to be just one of 10,000s of false accusations.

Thus part of the politicians’ desire via the Connect System is to find what you might call “New-Crime”[2] to fine, and further to “blame the victim”: people who, through no fault of their own, are old, physically or mentally disabled, long-term sick or diseased, young, unemployed, or just unfortunate due to others’ actions.

But the one that goes back to the Poll Tax Riots is the change to “Property Tax”. Currently it’s based on property value bands set back last century, and it’s considered not to be “pulling in enough” for local councils, which central government effectively subsidises. For the Chancellor, raising Council Tax will enable the Treasury to cut the subsidies.

But back when “Two Jags” was Deputy Prime Minister, they were talking about putting in place very local Council Tax, almost on a home-by-home basis. How would you be assessed? Simply by what your neighbour spends on credit cards or moves out of their bank account, etc.

So if you are old or infirm and bought your house back last century, you might have to find ~400 USD for each monthly payment. For many, that is already a significant amount of their “supposed” pension or other income. A lot of people are already falling into “destitution” or “hard poverty”. Imagine what happens when they get assessed on having wealthy Champagne-and-Caviar types living close around them. That could easily jump their council tax to ~1000 USD or more for each monthly payment.

The argument made for it is that it would encourage them to sell their home and move somewhere smaller. The problem is that if they do, they will get a new, much higher assessment, and worse, the AI system will assume that the price they get for their home makes them “wealthy” and thus unentitled to any assistance, etc.

Some cannot sell up because their home has been modified for disability or infirmity. Thus any spend on converting their new home will make their local council tax even higher, so in reality, overall, they would probably be a lot worse off, not just short term but long term.

But the Treasury would be “raking it in” so that’s alright…

And of course the “Connect System” will use AI and you won’t be able to challenge it.

So it will become the foundation of a new “Robodebt” scheme based on “Political Mantra”.

But also, all credit card and similar spending will make the mobile phone location-and-time tracking that they currently have to pay for redundant. More importantly, as it’s just “waypoint” tracking, not “journey” tracking, they will be able to argue you were somewhere you were not, any time in the last 12 years, so expect a rise in false arrests, etc.

This is the reality of AI use under “UK Democracy” so far, and it’s only going to get worse.

But one other thing that is almost a certainty is that businesses that pay next to nothing in the UK but earn billions from it will be given “incentives to stay”… So they can hand large “brown envelopes”, or their more sophisticated equivalents, to UK politicians to feather their nests or those of their relatives.

Which brings me back to a past UK Prime Minister[1]. He has been relentlessly pushing the UK ID system, from which his son will apparently get a big fat pile of cash through skimming off contracts and supplying apps, systems, etc.

If people think I’m imagining this stuff I very seriously urge you to go look it up.

Oh and have a look at this,

https://www.youtube.com/watch?v=bfAhNMk9g_k

And ask yourself the question,

“If the Connect System already does what the ex-PM claims, why is another, way more intrusive AI system needed?”

The simple answer is “it’s not”, so it has to be about something else…

The least harmful suggestion is that it’s another waste of taxpayer resources to benefit political party donors.

But more likely it’s “Goose Step 1” towards an authoritarian, probably Fascist, Police State in the UK for the benefit of a few politicians and their cronies.

[1] They are not really; they are in the main “puppets of a previous leader” who has some very distasteful credits to his name, and who is simply advancing his ideas about controlling the nation at financial benefit to both him and his family. In short, the best thing that can be said about him is that he is a “blackmailing crook” who has designs on what he sees as “true power”, especially after the EU realised what he was and kicked his “King of Europe” desires into touch.

[2] New-Crime as a hat tip to George Orwell… Look at the current UK bills going through Parliament, and very shortly at what is new and hidden away in the UK budget, to be read to Parliament in less than four weeks,

https://www.reuters.com/world/uk/uks-reeves-weighing-income-tax-rise-tackle-deficit-guardian-reports-2025-10-23/

https://www.which.co.uk/news/article/autumn-budget-2025-when-it-is-and-what-will-it-contain-aZm9i8u75S5h

lurker October 31, 2025 7:36 PM

If somebody wants to strengthen democracy with AI, they could start not with the voting system or political party structure, but with the Tax system. Perceived unfairness of the tax system is the cause of much of the woe listed above by @Clive.

2000 years ago the Chinese taxed salt and iron pots, and there was dissatisfaction about how much was paid by whom. A major conference at the Han court in 81 BCE[1] discussed the philosophy of taxation in terms that are still valid in today’s cashless society. Even in a cashless society, money is the basis of the economy, not how many horses you own or how many windows are in your house. We have the machinery now to tax money when it changes hands. No other tax is needed.

Point your AI at that for a test case on whether it’s fit to strengthen democracy.

[1] https://en.wikipedia.org/wiki/Discourses_on_Salt_and_Iron

Clive Robinson October 31, 2025 9:50 PM

@ lurker,

With regards,

“Even in a cashless society, money is the basis of the economy…”

Money is merely a token that is an abstraction of “work” both human labour and in terms of energy in time.

It actually has no “real value” just “fiscal value” which changes all the time.

I’ve made this point before about the “intrinsic value” held in a ton of coal. That is, it has “real value” as a source of X hundredweight of carbon or as a source of energy when burnt. That, other physical things being equal, remains constant.

However, the price or “fiscal value” of that ton of coal will change continuously, usually upward. The faster the upward change, the more of a concern it should be, because it generally tracks either the “rate of inflation” or the “increasing scarcity” of the “good”, or both.

A side effect of rising inflation is that at some point people stop saving and start buying, because the banks cannot keep up. Somebody I used to know lived in a well-known African nation at the time they had “hyperinflation”, such that you could see shopkeepers changing the price tickets on items three or four times a day. He used to shop before work for food and the like, as items could be 10% more expensive at the end of the day or just completely sold out.

Believe it or not, from a government perspective hyperinflation could be desirable, as workers were paid at the end of the month. It also created a lot of economic churn.

KC November 1, 2025 9:32 PM

In these important matters, I’m happy to see that Bruce and Nathan are joined by other voices.

Alan Rozenshtein recently gave a lecture – he says a first draft lecture – on the concern that AI could concentrate power in a ‘Unitary Artificial Executive.’

https://www.lawfaremedia.org/article/the-unitary-artificial-executive

The details of the mechanisms by which this could occur, and the solutions he offers, are all food for thought. The fact that he is addressing this is in itself promising. From his conclusion:

But let’s be honest about the obstacles: AI develops faster than law can regulate it. Most legislators and judges don’t understand AI well enough to constrain it. And both parties want presidential power when they control it. […]

When crisis and executive power threaten constitutional governance, lawyers have been the constraint.

And, to the students in the audience, let me say: You will be too. […]

This is a problem we’re still learning to see. But seeing it is the first step.

Continued …

KC November 1, 2025 9:35 PM

In the Q&A following the lecture, an audience member asks how a lawyer using AI may have an advantage over a lawyer who does not.

Professor Rozenshtein’s response (lightly edited):

It’s something that my faculty talks about incessantly, though we haven’t figured out what to do about it […]

In five years, this is a fixed problem. I think in two years it’s a fixed problem. I do think it will soon become the case that if you’re in legal practice and you’re not using AI, you’re just going to lose, right?

Not to mention that your clients will just stop paying you. Because if I were a client and I get a billable hour, right?

The first question I’m asking is how do you use AI? And if I don’t get a really good answer, like a really good answer of how this firm, or how you use AI every moment of every day to save time, I’m paying you 40 cents on the dollar.

[…] like the lawyer with Microsoft Word defeats the lawyer who uses a quill pen. This happens from time to time.

Until the time this is all worked out, may the hallucinations of AI counsel be gently guided back to reality.

ResearcherZero November 2, 2025 12:57 AM

As AI technology sucks in vast amounts of confidential and sensitive information from everywhere and anywhere, it devalues the importance of information in all areas of the living world while concentrating power in the hands of the few and removing power at scale.

AI platforms are trained and developed by exploiting vulnerable groups of people. Hollow, vacuous and empty may be apt descriptions for the assurances of AI and surveillance firms.

There is a lack of legal liability for automated tools and the invasion of privacy. Human rights language is included in official documentation and education as a box ticking exercise without producing any value at all for populations the language refers to.

The technology of artificial intelligence has been weaponized to exacerbate the abuse of power, increase the reach of surveillance, reduce accountability, and marginalize vulnerable people. A new framework is required to protect the public from AI and Surveillance Capitalism.

‘https://www.hks.harvard.edu/centers/carr-ryan/publications/analyzing-charter-rights-and-institutions-tackle-surveillance

Chatbots have been linked to increased risk of isolation, loneliness and mental episodes.

OpenAI recently stated that more than half a million of its users are showing signs of mental distress. Consumers can become increasingly delusional due to use of the technology; not only those with existing mental health conditions are at risk, but also healthy individuals with no previous history of psychological conditions or illness.
https://www.theregister.com/2025/10/08/ai_psychosis/

AI models can be exploited to break their guardrails or used for harmful activities.
https://nypost.com/2025/10/09/business/googles-ex-ceo-eric-schmidt-shares-warns-of-homicidal-ai-models/

The companies developing these tools have already participated in crimes against humanity.
https://al-shabaka.org/briefs/ai-for-war-big-tech-empowering-israels-crimes-and-occupation/

Clive Robinson November 2, 2025 7:42 PM

@ KC, ALL,

With regards,

“an audience member asks how a lawyer using AI may have an advantage over a lawyer who does not.”

It’s why I’ve warned that current AI LLM and ML systems will kill off the upper-middle-class “Professions”.

The reason is they are in most cases very heavily “rules based” where your ability to store knowledge and recall it is paramount.

A current LLM is effectively,

“A database with a defective search engine.”

Hence the ~1/3 of unhelpful results.

But this has two consequences,

1, It’s bad news for those that do not check LLM results.
2, It effectively gives you a very significant boost in productivity if you do check.

But we can realistically expect the “defective search engine” to improve with time…

Similar applies to accountancy.

However, because current LLMs cannot actually reason[1], other professions will be hit less hard.

That said, humans are not particularly good at reasoning, for two reasons,

1, We generally do not know the previous reasoning of others.
2, Often we do not know which way to go to form an efficient chain of reasoning.

An LLM can be reasonably up to date on the reasoning of others. Likewise, an LLM, if fed the appropriate training data, can see and compare other chains of reasoning to know which is likely to be the fastest or involve the minimum work.

So yes, LLMs can be a little like “Renaissance man”, who had knowledge over a broad range of domains often thought at the time to be unrelated, and so could readily take knowledge from one domain and apply it to a different one.

Many high-end professionals today are in effect modern-day “renaissance men”, as the domains they work in are now so broad that they have sub-domains of sub-domains that people specialise in.

The trick is “pattern matching”: that is, being able to reduce a problem to foundational levels and then match on those.

For quite some time now, I’ve objected to the way higher education works, in that people get trained in the “current tools” (as that’s what employers want[2]) rather than in the subject fundamentals that would enable them to easily migrate from tool to tool, making them not just more useful but also more future-proof.

[1] Every so often we get people claiming their AI can reason. It cannot; all it actually does is “pattern match”. So if an issue is presented that can be “pattern matched” to something in the input training data set, then an LLM can “appear” to reason an answer. What it is actually demonstrating is the lack of breadth of knowledge of the person making the claim.

[2] This is also the fundamental reason the Boeing 737 MAX, which has proved so disastrous, came into existence. Boeing had no plans to further extend the life of the 737; however, American Airlines did not want the costs and inconvenience of re-training and certifying pilots, so it blackmailed Boeing into making what was a very ill-thought-out aircraft. Boeing tried to get over the issues with software, and that did not prove to be either safe or effective.

ResearcherZero November 5, 2025 1:11 AM

This may provide some insight as to how the technology may be used or abused.

The Department of Justice may not recover for decades due to the loss of civil servants with many years of experience, leaving the US susceptible to espionage and terrorism.

Changes within the department have already changed how it functions and what it pursues.

‘https://www.cbsnews.com/news/justice-department-whistleblower-says-he-witnessed-officials-undermining-rule-of-law-60-minutes-transcript/

Some 3,000 years of experience in investigating threats to national security has been lost.
https://www.nytimes.com/2025/10/30/opinion/trump-biden-justice-department.html

More than 5,000 employees of the DoJ have been forced out or have resigned.
https://www.ifyoucankeepit.org/p/thousands-of-years-of-law-enforcement

JTC November 15, 2025 8:47 AM

There’s only one fly in the ointment. At least in the United States, politicians don’t really care what constituents think. If you aren’t rich or famous, you have zero political influence, especially as we have only two parties who act more alike than different.
