Four Ways AI Is Being Used to Strengthen Democracies Worldwide

Democracy is colliding with the technologies of artificial intelligence. Judging from the audience reaction at the recent World Forum on Democracy in Strasbourg, the general expectation is that democracy will be the worse for it. We have another narrative. Yes, there are risks to democracy from AI, but there are also opportunities.

We have just published the book Rewiring Democracy: How AI Will Transform Politics, Government, and Citizenship. In it, we take a clear-eyed view of how AI is undermining confidence in our information ecosystem, how the use of biased AI can harm the constituents of democracies, and how elected officials with authoritarian tendencies can use it to consolidate power. But we also give positive examples of how AI is transforming democratic governance and politics for the better.

Here are four such stories unfolding right now around the world, showing how AI is being used by some to make democracy better, stronger, and more responsive to people.

Japan

Last year, then 33-year-old engineer Takahiro Anno was a fringe candidate for governor of Tokyo. Running as an independent candidate, he ended up coming in fifth in a crowded field of 56, largely thanks to the unprecedented use of an authorized AI avatar. That avatar answered 8,600 questions from voters on a 17-day continuous YouTube livestream and garnered the attention of campaign innovators worldwide.

Two months ago, Anno-san was elected to Japan’s upper legislative chamber, again leveraging the power of AI to engage constituents—this time answering more than 20,000 questions. His new party, Team Mirai, is also an AI-enabled civic technology shop, producing software aimed at making governance better and more participatory. The party is leveraging its share of Japan’s public funding for political parties to build the Mirai Assembly app, enabling constituents to express opinions on and ask questions about bills in the legislature, and to organize those expressions using AI. The party promises that its members will direct their questioning in committee hearings based on public input.

Brazil

Brazil is notoriously litigious, with even more lawyers per capita than the US. The courts are chronically overwhelmed with cases and the resultant backlog costs the government billions to process. Estimates are that the Brazilian federal government spends about 1.6% of GDP per year operating the courts and another 2.5% to 3% of GDP issuing court-ordered payments from lawsuits the government has lost.

Since at least 2019, the Brazilian government has aggressively adopted AI to automate procedures throughout its judiciary. AI is not making judicial decisions, but aiding in distributing caseloads, performing legal research, transcribing hearings, identifying duplicative filings, preparing initial orders for signature and clustering similar cases for joint consideration: all things to make the judiciary system work more efficiently. And the results are significant; Brazil’s federal supreme court backlog, for example, dropped in 2025 to its lowest levels in 33 years.
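To make one of these functions concrete, here is a minimal sketch of how clustering similar cases for joint consideration might work in principle. Everything here is an illustrative assumption: the filings are invented, and a production system would use trained language models over full case texts rather than simple word overlap.

```python
# Toy sketch: group similar case filings for joint consideration by
# word-overlap (Jaccard) similarity. Filings and threshold are invented.

def jaccard(a: set, b: set) -> float:
    """Similarity of two word sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def group_similar(filings: list[str], threshold: float = 0.3) -> list[list[str]]:
    """Greedily assign each filing to the first group it sufficiently resembles."""
    groups: list[tuple[set, list[str]]] = []
    for text in filings:
        words = set(text.lower().split())
        for group_words, members in groups:
            if jaccard(words, group_words) >= threshold:
                members.append(text)
                group_words |= words  # grow the group's vocabulary in place
                break
        else:
            groups.append((words, [text]))
    return [members for _, members in groups]

filings = [
    "pension benefit recalculation claim against the federal government",
    "claim against the federal government for pension benefit recalculation",
    "trademark infringement dispute between two private companies",
]
print(group_similar(filings))  # the two pension filings end up grouped together
```

The greedy single-pass grouping keeps the sketch short; a real clustering pipeline would compare all pairs, or embed the documents first.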

While it seems clear that the courts are realizing efficiency benefits from leveraging AI, there is a postscript to the courts’ AI implementation project over the past five-plus years: the litigators are using these tools, too. Lawyers are using AI assistance to file cases in Brazilian courts at an unprecedented rate, with new cases growing by nearly 40% in volume over the past five years.

It’s not necessarily a bad thing for Brazilian litigators to regain the upper hand in this arms race. It has been argued that litigation, particularly against the government, is a vital form of civic participation, essential to the self-governance function of democracy. Other democracies’ court systems should study and learn from Brazil’s experience and seek to use technology to maximize the bandwidth and liquidity of the courts to process litigation.

Germany

Now, we move to Europe and innovations in informing voters. Since 2002, the German Federal Agency for Civic Education has operated a non-partisan voting guide called Wahl-o-Mat. Officials convene an editorial team of 24 young voters (under 26 and selected for diversity) together with experts from science and education to develop a slate of 80 questions. The questions are put to all registered German political parties. The responses are narrowed down to 38 key topics and then published online in a quiz format that voters can use to identify the party whose platform best matches their own views.
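At its heart, this kind of quiz matching is a simple agreement score between a voter's answers and each party's published positions. The sketch below illustrates the idea; the party names and positions are invented placeholders, and the actual Wahl-o-Mat weighting rules differ from this simplified version.

```python
# Toy sketch of voting-guide matching: score how often a voter's
# agree/neutral/disagree answers line up with each party's positions.
AGREE, NEUTRAL, DISAGREE = 1, 0, -1

party_positions = {  # invented placeholder data
    "Party A": [AGREE, DISAGREE, AGREE],
    "Party B": [DISAGREE, DISAGREE, NEUTRAL],
}

def match_scores(voter: list[int], parties: dict[str, list[int]]) -> dict[str, float]:
    """Percentage agreement: full credit for a match, half credit if either side is neutral."""
    scores = {}
    for name, positions in parties.items():
        points = 0.0
        for v, p in zip(voter, positions):
            if v == p:
                points += 1.0
            elif NEUTRAL in (v, p):
                points += 0.5
        scores[name] = 100 * points / len(voter)
    return scores

print(match_scores([AGREE, DISAGREE, NEUTRAL], party_positions))
```

The AI alternatives described next replace this static scoring with a conversational interface over the same underlying party positions.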

In the past two years, outside groups have been innovating alternatives to the official Wahl-o-Mat guide that leverage AI. First came Wahlweise, a product of the German AI company AIUI. Second, students at the Technical University of Munich deployed an interactive AI system called Wahl.chat. This tool was used by more than 150,000 people within the first four months. In both cases, instead of having to read static webpages about the positions of various political parties, citizens can engage in an interactive conversation with an AI system to more easily get the same information contextualized to their individual interests and questions.

However, German researchers studying the reliability of such AI tools ahead of the 2025 German federal election raised significant concerns about bias and “hallucinations”—AI tools making up false information. Acknowledging the potential of the technology to increase voter informedness and party transparency, the researchers recommended adopting scientific evaluations comparable to those used in the Agency for Civic Education’s official tool to improve and institutionalize the technology.

United States

Finally, the US—in particular, California, home to CalMatters, a non-profit, nonpartisan news organization. Since 2023, its Digital Democracy project has been collecting every public utterance of California elected officials—every floor speech, committee comment, and social media post, along with their voting records, legislation, and campaign contributions—and making all that information available in a free online platform.

CalMatters this year launched a new feature that takes this kind of civic watchdog function a big step further. Its AI Tip Sheets feature uses AI to search through all of this data, looking for anomalies, such as a change in voting position tied to a large campaign contribution. These anomalies appear on a webpage that journalists can access to give them story ideas and a source of data and analysis to drive further reporting.
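The flavor of anomaly such a system might surface can be sketched as follows: flag a legislator whose recorded position on a topic flipped after a large contribution from a donor associated with that topic. All names, fields, and thresholds below are invented for illustration; this is not CalMatters' actual pipeline.

```python
# Hypothetical sketch of tip-sheet anomaly flagging: a position flip that
# brackets a large, topically related contribution. All data is invented.
from datetime import date

votes = [  # (legislator, topic, date, position)
    ("Rep. Doe", "oil drilling", date(2025, 2, 1), "no"),
    ("Rep. Doe", "oil drilling", date(2025, 6, 1), "yes"),
]
contributions = [  # (legislator, donor_topic, date, amount)
    ("Rep. Doe", "oil drilling", date(2025, 4, 1), 50_000),
]

def flag_flips(votes, contributions, min_amount=10_000):
    """Return (legislator, topic) pairs where a vote flip brackets a large donation."""
    flags = []
    for who, topic, c_date, amount in contributions:
        if amount < min_amount:
            continue
        before = [p for w, t, d, p in votes if w == who and t == topic and d < c_date]
        after = [p for w, t, d, p in votes if w == who and t == topic and d > c_date]
        if before and after and before[-1] != after[0]:
            flags.append((who, topic))
    return flags

print(flag_flips(votes, contributions))  # prints [('Rep. Doe', 'oil drilling')]
```

A flagged pair is a lead, not a conclusion: it is exactly the kind of evidence-based prompt a human reporter would then investigate.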

This is not AI replacing human journalists; it is a civic watchdog organization using technology to feed evidence-based insights to human reporters. And it’s no coincidence that this innovation arose from a new kind of media institution—a non-profit news agency. As the watchdog function of the fourth estate continues to be degraded by the decline of newspapers’ business models, this kind of technological support is a valuable contribution to help a reduced number of human journalists retain something of the scope of action and impact our democracy relies on them for.

These are just four of many stories from around the globe of AI helping to make democracy stronger. The common thread is that the technology is distributing rather than concentrating power. In all four cases, it is being used to assist people performing their democratic tasks—politics in Japan, litigation in Brazil, voting in Germany and watchdog journalism in California—rather than replacing them.

In none of these cases is the AI doing something that humans can’t perfectly competently do. But in all of these cases, we don’t have enough available humans to do the jobs on their own. A sufficiently trustworthy AI can fill in gaps: amplify the power of civil servants and citizens, improve efficiency, and facilitate engagement between government and the public.

One of the barriers to realizing this vision more broadly is the AI market itself. The core technologies are largely being created and marketed by US tech giants. We don’t know the details of their development: on what material they were trained, what guardrails are designed to shape their behavior, what biases and values are encoded into their systems. And, even worse, we don’t get a say in the choices associated with those details or how they should change over time. In many cases, it’s an unacceptable risk to use these for-profit, proprietary AI systems in democratic contexts.

To address that, we have long advocated for the development of “public AI”: models and AI systems that are developed under democratic control and deployed for public benefit, not sold by corporations to benefit their shareholders. The movement for this is growing worldwide.

Switzerland has recently released the world’s most powerful and fully realized public AI model. It’s called Apertus, and it was developed jointly by public Swiss institutions: the universities ETH Zurich and EPFL, and the Swiss National Supercomputing Centre (CSCS). The development team has made it entirely open source: open data, open code, open weights, and free for anyone to use. No illegally acquired copyrighted works were used in its training. It doesn’t exploit poorly paid human laborers from the global south. Its performance is about where the big corporate models were a year ago, which is more than good enough for many applications. And it demonstrates that it’s not necessary to spend trillions of dollars creating these models. Apertus takes a huge step toward realizing the vision of an alternative to big-tech-controlled corporate AI.

AI technology is not without its costs and risks, and we are not here to minimize them. But the technology has significant benefits as well.

AI is inherently power-enhancing, and it can magnify what the humans behind it want to do. It can enhance authoritarianism as easily as it can enhance democracy. It’s up to us to steer the technology in that better direction. If more citizen watchdogs and litigators use AI to amplify their power to oversee government and hold it accountable, if more political parties and election administrators use it to engage meaningfully with and inform voters and if more governments provide democratic alternatives to big tech’s AI offerings, society will be better off.

This essay was written with Nathan E. Sanders, and originally appeared in The Guardian.

Posted on November 25, 2025 at 7:00 AM • 19 Comments

Comments

Daniel Popescu November 25, 2025 7:27 AM

Good points and examples Bruce, thank you. And I’m going to read the book, soon I hope :).

Now, and this is a big now: the total lack of regulatory and legislative oversight of the AI industry scares me a lot. And I’ll try to stay out of it as long as I can.

I think I’m lucky in this respect, being somewhat technically inclined, but what about the other 95% of the crowd? Scary.

John Rawls November 25, 2025 10:37 AM

This tech luminary appears to have a fairly narrow conception of political power. Which is not surprising given that he tends to confuse constitutional republics with “democracy.” It’s not like he’s a political philosopher.

Lo he spreads his arms wide and happily proclaims to us “Look! Look at all the wonderful things that A.I. is going to do for society! Hooray Utopia… ”

While in the background his message is being contradicted by a Nobel Prize winner and Turing Award recipient named Geoffrey Hinton, who personally spearheaded A.I. projects at Google. In other words, an ostensibly more credible voice has spoken up and warned of mass social disruption.

If workers become displaced in large numbers and tech companies complete their process of state capture then the superficial trappings of “democracy” will not matter.

Just don’t expect the aforementioned tech luminary to mention this, he has speaking engagements that he wishes to keep. Presenting himself as an alleged expert is how he defines his identity and the tech industry has been known to silence dissent.

lurker November 25, 2025 12:18 PM

What is AI doing to improve the lives of the citizens of Burkina Faso? Suriname? Will AI help prevent sea level rise that currently threatens to obliterate the nation of Tuvalu? Not if the latest COP in Belém is any indicator.

Rontea November 25, 2025 1:52 PM

It is encouraging to see how AI is helping strengthen democracies around the world! I love how each country is using it in creative ways, from AI avatars in Japan to watchdog journalism in the US. Technology really can make participation and transparency easier for everyone.

Rontea November 25, 2025 2:00 PM

@Daniel Popescu
“the total lack of regulatory and legislative oversight of the AI industry scares me a lot”

The current total lack of regulatory and legislative oversight in the AI industry is alarming. Without clear guidelines and accountability measures, the risks range from misuse of autonomous systems to the amplification of bias and misinformation. Establishing comprehensive policies and ethical frameworks now is critical to ensure that AI’s rapid development benefits society while mitigating severe potential harms.

Rontea November 25, 2025 2:03 PM

@Morley
“I can’t tell if AI is helping democracy or just politics.”

Perhaps the question is not whether AI serves democracy or politics, but whether it amplifies the values already present in human society. Technology reflects the hands that shape it; it can magnify transparency or manipulation, dialogue or division. AI does not choose sides—human intention does.

ResearcherZero November 26, 2025 3:21 AM

Tools like Digital Democracy are better than handing information to Big Tech. Which governments have been doing with health data and other private and sensitive details.

Just like arms, many tech companies have deals with governments to provide them with surveillance tools and quietly sell surveillance technology overseas through foreign procurement offices. This includes sales to authoritarian governments like China and Iran.

Trump signed an order to pool government scientific data for private companies to access.

‘https://edition.cnn.com/2025/11/24/tech/ai-executive-order

piglet42 November 26, 2025 6:43 AM

Regarding Germany: “In the past two years, outside groups have been innovating alternatives to the official Wahl-o-Mat guide that leverage AI.”

I genuinely don’t understand the point. The Wahl-o-Mat provides reliable and well researched information. What added value do the alternatives provide? “Hallucinations”, great, thanks!

Anyway, I don’t understand what people want from these tools. German politics isn’t that hard to understand: there are six viable parties competing in elections (there are more parties, of course, most of which never win any seats), and their positions are not so mysterious that you need complicated technology to uncover them. Also, contrary to the US, parliamentary elections are non-personalized and non-regionalized. And contrary to some other countries, there is no tactical voting; you simply vote for the party you prefer and that’s it. I do not understand what actual need these AI tools are supposed to fill.

piglet42 November 26, 2025 10:46 AM

“However, German researchers studying the reliability of such AI tools ahead of the 2025 German federal election raised significant concerns about bias and “hallucinations”—AI tools making up false information.”

I read the article and it’s really shocking. The answers given by the AI tools about where the parties stand on important issues were flat out wrong in a quarter to half the cases! That’s exactly what we don’t need. I trust that the people who provided the tools had no intention to mislead, but LLMs just can’t be relied on to provide correct answers about sensitive political issues. Now think of the damage tools like this can inflict in the hands of malicious actors!

Falon November 26, 2025 1:40 PM

In none of these cases is the AI doing something that humans can’t perfectly competently do. But in all of these cases, we don’t have enough available humans to do the jobs on their own.

Oh, did we hit global 0% unemployment? Because if not, we DO have enough available humans and we should train them to do these jobs rather than rushing to a solution that displaces workers unnecessarily.

Trust_No_1 November 27, 2025 2:46 PM

@Falon

Further to your point, I am deeply suspicious about using AI to do work.

If I use AI to work, am I essentially training AI to do those parts of my job where I use AI?

Could it then replace me at a later date to do those functions?

What would be left of my job then?

Clive Robinson November 27, 2025 4:33 PM

@ Trust_No_1, Falon, Bruce, ALL,

Some suspect I am a Luddite when it comes to current AI LLM and ML systems. And although I’m reasonably sure that those replaced by such AI will be those who work in professions such as routine accountancy, law and certain types of medicine… I’m assuming that, given a decade or three, whatever replaces current AI LLM and ML systems will reach into more creative and investigative professions.

But consider something from back in the 1980’s. Back then glass blowing of large vessels –carboys– for storing acid and similar was done by skilled artisans.

Then someone decided to record all the movements etc and build robotic systems that behaved the same as the skilled artisans.

The thing is, it did not replace just one or two glass-blowing artisans; it in effect sent an entire nation of skilled artisans into unemployment.

Which is why,

“If I use AI to work, am I essentially training AI to do those parts of my job where I use AI?”

The answer is “yes”, but it’s way worse than that. What you train up will not be used for just your job, but for those of thousands if not millions of people who do sufficiently similar jobs.

The same will be true for all those similar workers and the result will be a library of job functions available to be selected for next to nothing.

You will not get recompense from the AI company, nor will your employer. The AI company will treat your work as their IP, even though it’s not and never has been paid for.

As I’ve said, some see my view on this as a very bad idea, as me being a Luddite… However I would much rather be someone who “puts the boot in”, or more correctly a wooden clog, which the French call a sabot, which is why “putting the boot in” is actually called sabotage…

During my life I’ve seen many jobs disappear and what replaced them was almost always inferior.

For instance the most productive year for office work was back in 1973 the year before computers started appearing on desks.

Put simply, the old system (boss uses a dictaphone; audio secretary types it up and makes it look neat; the boss proofs it and into the post it goes) was a well-oiled and functioning process.

Then the boss got a computer / terminal on their desk, and not being typists they struggled, not being skilled they mucked things up and had to re-do things to get them right, and then they would fiddle with the text…

The bosses’ productivity sank like a rock dropped in a pond, and audio secretaries and copy typists very quickly became an endangered species. Those company “typing pools” were eviscerated. Similarly, so were stenographers and shorthand secretaries.

But also the technicians that kept the typewriters and audio recorders functioning.

Similarly expensive to manufacture valve/tube radios and TVs that had to be repaired very often got replaced by very cheap transistor based systems that hardly ever went wrong. All those repair technicians were fairly quickly out of work. But also those who manufactured those valve radios and TVs. Because the transistorised production was where labour was cheap in Japan, and later Taiwan, then South Korea.

Thus in the UK and US an entire industry disappeared within a few years.

Having started my higher education training in that period I was uncertain as to if I would even have a job when I finished. Thus I jumped into the specialised fields of communications and computers where I was “ahead of the crowd”. Then I worked up quickly to being a design engineer and jumped jobs and even professions repeatedly.

Worse, I turned hobbies into professions… The downside of this is you lose your hobbies, and as the old saying has it,

“A man needs a shed to potter in.”

Because I kept dropping back into communications and had an interest in locks I eventually dropped into the various fields of “security” where I appear to have got stuck 😉

But the thing is, I do not think that any AI system currently known to mankind could have replaced me, because my skills base and interests were way, way too broad.

And there might be a lesson to dwell on there.

ResearcherZero December 2, 2025 6:37 AM

@Clive Robinson, ALL

Ensuring a wide set of skills will be a necessity to avoid being replaced or replicated.

Unethical behaviour will become a societal norm where automated systems – like Robodebt – allow escape from responsibility. Theft and copying of private works is already a norm.

Without accountability, privacy and transparency – democracy cannot properly function.
As we have already seen with Robodebt, no-one was held to account for the harm caused.
This was despite an inquiry finding those responsible were warned the scheme was illegal.

The Australian government claims that AI will be governed by existing laws. There is a significant backlog for law reform to address privacy rights, consumer protection, automated decision-making in government and accountability, as well as copyright, a digital duty of care, employment, and fair and equitable wages and remuneration for work produced.

‘https://www.unsw.edu.au/news/2025/12/when-machines-decide-who-is-responsible

The government has decided to remove the guardrails from high-risk AI.
https://amplyfi.com/blog/why-australias-ai-laws-fail-the-stress-test-five-threat-categories-assessed/

Government departments have failed to meet reporting requirements for how our data is used and entered into commercial AI products. Existing laws clearly are not fit for purpose. No adequate rules exist to protect our private, sensitive data or to deliver any accountability.

https://www.abc.net.au/news/2025-11-11/ai-could-ultimate-test-australia-democracy-boyer-lectures/105991022

An AI oligarchy for the wealthy, where our property, employment, privacy and data are handed to the Big Four AI tech behemoths, will hand these companies the power to dictate terms.
https://worldwideunionofrobots.substack.com/p/australias-ai-surrender

Agammamon December 6, 2025 3:21 PM

The Japan example is a guy who got a machine trained to lie for him in a deniable way – the LLM tells questioners what they want to hear, and when the politician is elected they can just say it was an error.

That does not sound like it’s strengthening democracy. Quite the opposite: it sounds like it’s allowing grifters easier access to the halls of power.

As an example, there is a reason “proof of work” is so popular – even a little bit of friction greatly reduces the profitability of scams, increasing the security of a system. Here the AI is reducing that friction, making scams more economical.

Ignacio Rivera December 12, 2025 7:48 PM

It shows with absolute clarity that AI, when used transparently and under democratic oversight, is not a risk but a decisive force for strengthening civic oversight and empowering both journalists and citizens. It doesn’t replace people—it amplifies their ability to detect abuses and demand accountability. The example from California makes one thing clear: technology can redistribute power and break old structures, as long as it is designed ethically and with a genuine intention to serve the public good.
