Algorithms Are Coming for Democracy—but It’s Not All Bad

In 2025, AI is poised to change every aspect of democratic politics—but it won’t necessarily be for the worse.

India’s prime minister, Narendra Modi, has used AI to translate his speeches for his multilingual electorate in real time, demonstrating how AI can help diverse democracies to be more inclusive. In South Korea, presidential candidates used AI avatars in electioneering, enabling them to answer thousands of voters’ questions simultaneously. We are also starting to see AI tools aid fundraising and get-out-the-vote efforts. AI techniques are starting to augment more traditional polling methods, helping campaigns get cheaper and faster data. And congressional candidates have started using AI robocallers to engage voters on issues. In 2025, these trends will continue. AI doesn’t need to be superior to human experts to augment the labor of an overworked canvasser, or to write ad copy similar to that of a junior campaign staffer or volunteer. Politics is competitive, and any technology that can bestow an advantage, or even just garner attention, will be used.

Most politics is local, and AI tools promise to make democracy more equitable. The typical candidate has few resources, so the choice may be between getting help from AI tools or getting no help at all. In 2024, a US presidential candidate with virtually zero name recognition, Jason Palmer, beat Joe Biden in a very small electorate, the American Samoan primary, by using AI-generated messaging and an online AI avatar.

At the national level, AI tools are more likely to make the already powerful even more powerful. Human + AI generally beats AI only: The more human talent you have, the more you can effectively make use of AI assistance. The richest campaigns will not put AIs in charge, but they will race to exploit AI where it can give them an advantage.

But while the promise of AI assistance will drive adoption, the risks are considerable. When computers get involved in any process, that process changes. Scalable automation, for example, can transform political advertising from one-size-fits-all into personalized demagoguing—candidates can tell each of us what they think we want to hear. Introducing new dependencies can also lead to brittleness: Exploiting gains from automation can mean dropping human oversight, and chaos results when critical computer systems go down.

Politics is adversarial. Any time one candidate or party uses AI, it invites hacking by those associated with their opponents, perhaps to modify its behavior, eavesdrop on its output, or simply shut it down. The kinds of disinformation weaponized by entities like Russia on social media will be increasingly targeted toward machines, too.

AI is different from traditional computer systems in that it tries to encode common sense and judgment that go beyond simple rules; yet humans have no single ethical system, or even a single definition of fairness. We will see AI systems optimized for different parties and ideologies; one faction will not trust the AIs of a rival faction; and everyone will maintain a healthy suspicion of corporate for-profit AI systems with hidden biases.

This is just the beginning of a trend that will spread through democracies around the world, and probably accelerate, for years to come. Everyone, especially AI skeptics and those concerned about its potential to exacerbate bias and discrimination, should recognize that AI is coming for every aspect of democracy. The transformations won’t come from the top down; they will come from the bottom up. Politicians and campaigns will start using AI tools when they are useful. So will lawyers, and political advocacy groups. Judges will use AI to help draft their decisions because it will save time. News organizations will use AI because it will justify budget cuts. Bureaucracies and regulators will add AI to their already algorithmic systems for determining all sorts of benefits and penalties.

Whether this results in a better democracy, or a more just world, remains to be seen. Keep watching how those in power use these tools, and also how they empower the currently powerless. Those of us who are constituents of democracies should advocate tirelessly to ensure that we use AI systems to better democratize democracy, and not to further its worst tendencies.

This essay was written with Nathan E. Sanders, and originally appeared in Wired.

Posted on December 3, 2024 at 7:00 AM

Comments

Fábio Emilio Costa December 3, 2024 9:04 AM

This is just the beginning of a trend that will spread through democracies around the world, and probably accelerate, for years to come. Everyone, especially AI skeptics and those concerned about its potential to exacerbate bias and discrimination, should recognize that AI is coming for every aspect of democracy. The transformations won’t come from the top down; they will come from the bottom up. Politicians and campaigns will start using AI tools when they are useful. So will lawyers, and political advocacy groups. Judges will use AI to help draft their decisions because it will save time. News organizations will use AI because it will justify budget cuts. Bureaucracies and regulators will add AI to their already algorithmic systems for determining all sorts of benefits and penalties.

And here lies the great problem: those with enough money and enough antidemocratic intentions will do their best (worst?) to basically f$&k democracy up, using AI to “improve” their hate rhetoric.

We have seen this since 2016 here in Brazil with Bolsonaro, the Hate Cabinet, and the Digital Militias that ruined reputations, destroyed people, and put people’s safety at risk.

Clive Robinson December 3, 2024 10:32 AM

It’s important to remember two things about this type of use of current AI based on LLM and ML systems,

1, The avatars/agents are not real or recordings of what was once real (they are “dishonest”).
2, Their use is in fact to make gain at others’ loss or expense (illicit “advantage”).

That is, as “an enterprise” it is gaining a,

“Dishonest Advantage”

Which is the basic definition of the crime of “Fraud”[1] in the UK and many other places.

So the use of such AI for political gain fulfils both requirements of a crime. Which is actually hardly surprising, as according to the UK Government less than a month ago,

“With fraud being the most common crime type in the UK, amounting to around 40% of all crime in England and Wales, these new measures are part of a wider government ambition to reduce fraud and protect potential victims…[2]”

Is “Fraud in Politics” expected to be any less than the “National Average”, or “Considerably More”, than other types of fraud?

If past activities and current ones are anything to go by, the latter is more likely.

Thus the next question is,

“Is it actually a crime?”

To which the answer is “technically Yes” but “As of yet nobody has had action taken against it” (which is unfortunately all too common with the crime of fraud and “new applications” of it).

But I suspect that, as politicians make the legislation, if this type of “fraud” has advantages for them, they will change or enact new legislation so they can continue to take illicit advantage.

Which, I gather, is unfortunately a very common form of legislator action in other countries,

“Where money talks, and all else walks.”

[1] See the UK “Crown Prosecution Service” (CPS) note on fraud, which says,

“Fraud is the act of gaining a dishonest advantage, often financial, over another person. It is now the most commonly experienced crime in England and Wales, with an estimated 3.4 million incidents in the year ending March 2017.”

https://www.cps.gov.uk/crime-info/fraud-and-economic-crime

[2] From,
https://www.gov.uk/government/news/new-failure-to-prevent-fraud-guidance-published

Published 6 November 2024

Martin Stuart Sorrell December 3, 2024 10:42 AM

It’s not all Bad… it’s just mostly Bad.

An odd attempt by a high-profile luminary to superimpose a giant smiley face on our rather bleak future. For some reason, the professional researchers who invented this technology (e.g. Geoffrey Hinton) and who understand the implications through first-hand experience do not share Schneier’s glass-half-full optimism.

And this raises unsettling questions.

vaadu December 3, 2024 1:46 PM

Elon Musk says Grok will soon be capable of summarizing large bills passed by Congress for citizens to better understand.

AI will be able to answer questions like who benefits from this bill, how the bill can be tightened up so it can’t be gamed, or how the bill’s goals can be accomplished while being more budget neutral.

MDK December 3, 2024 3:19 PM

@All

If you haven’t heard about hxxps://www.sanctuary.ai/, they have been doing some interesting and difficult work. The current CEO is the former CEO of D-Wave.

MDK

David Leppik December 3, 2024 7:48 PM

This fails to mention the biggest threat from AI: propaganda. Just about every psychological trick in the book works even if you know it’s being used against you. Even if you know you are seeing an implausible AI deepfake, it still influences your opinion. Seeing a scandalous video or hearing a politician spout shocking opinions will make you question them, even if you know it’s a fake. You can’t unsee it. Similarly seeing enough heroic or patriotic images of a selfish, corrupt politician will make you more trusting, even if you know they are fake.

We’ve seen samples of this in the last US presidential election, and we’re going to see more. AI will let them be personalized and targeted. Facebook (and others) have already profiled emotionally vulnerable people as a category for targeted advertising. Personalized advertising is already difficult to track. There is no reason for this not to increase with improved AI.

ResearcherZero December 4, 2024 2:24 AM

Seeing is Believing: The uncontrolled dissemination of decontextualized visual disinformation.

‘https://academic.oup.com/joc/advance-article/doi/10.1093/joc/jqae045/7908277

Information Warfare has evolved into an element of everyday life.
https://www.forbes.com/sites/alexvakulov/2024/11/19/information-warfare-spreading-chaos-a-guide-to-outsmarting-fake-news/

The fabricated quality of certainty.
https://theconversation.com/wittgenstein-and-the-dangers-of-certainty-76796

“…it is critical to carefully craft and test strategies to not inadvertently erode citizens’ trust in accurate information.”

https://www.nature.com/articles/s41562-024-01884-x

ResearcherZero December 4, 2024 3:08 AM

You have been neglected by the government. I can better represent you and speak for you.

Shadow Representation

‘https://www.tandfonline.com/doi/full/10.1080/00344893.2024.2386987#d1e155

Clive Robinson December 5, 2024 3:47 AM

Why AI legislation will not work.

A right-to-reply, objective view based on events in the past week alone.

There has recently been a lot of talk about “guide rails” for AI. Both the addition of software to encapsulate LLM ML systems, and that which could be made and enforced by legislation and regulation.

As we know, current “guide rails” of the programmed variety are failing to work or, worse, are creating vulnerabilities that can be exploited, because they affect the processing flow.

You would have thought that alone would have been sufficient to cause the more sensible to say that we should not rush, “bull in a china shop”, into the current LLM/ML AI systems, because their “fitness for purpose” is distinctly questionable at best.

But that appears not to be happening. Instead, money appears to be being thrown into “the fire pit” as a sacrifice on the “Bonfires of the Vanities”, as some in the Big Tech AI organisations and questionable financial organisations “up the gas” on “the burn rate”, and the AI subject gets “Hyped Up” beyond any form of common-sense caution.

Which means we have to consider now just how bad any prospective AI-controlling legislation or regulation might be.

To get an idea of this, we need to “step away” from the “AI Hype” and look at other areas that are, in effect, related.

In the UK in the past few days, we have seen some legislation and regulation issues hit the news headlines in respect of this. But as with all such things, they have a background history.

So firstly, back in the 1980s, Margaret Thatcher as PM pushed through Poll Tax legislation that, despite much warning, turned into a significant disaster. However, along with it came a “legal measure” that has remained in one way or another and has been even more of a disaster. This was the alleged “presumption of reliability” of mechanical, thus computational (thus AI), devices in the UK court system,

https://www.benthamsgaze.org/2022/06/30/the-legal-rule-that-computers-are-presumed-to-be-operating-correctly-unforeseen-and-unjust-consequences/

This was significantly abused in the “Post Office Horizon” trials. Hence it is no longer supportable, and it looks like it might get removed (which will prove both costly and devastating for the UK Courts and Tribunals Service).

Secondly, there is a new set of legislation on “healthy eating” in advertising. After years of campaigning about obesity, especially in children, caused by certain foods known to be bad, weak legislation was brought in under Boris Johnson as PM, and it has a “loophole big enough for a camel” issue,

https://www.bbc.co.uk/news/articles/c5ydwnywvxjo

There are a couple of obvious lessons there, and we can expect similar but different issues to appear in any AI legislation.

This is simply because of the same underlying reasons. That is, the people who are going to “lobby” against reasonable control will do so for “short term gain”, without consideration of “long term harm”. But it is also due to the issues surrounding,

“Supposed Individual Rights -v- Actual Societal Responsibility.”

The fact that one of the Nobel laureates on AI thinks we should prioritize “universal basic income”, due to what they feel AI will almost certainly do, suggests we really should step with great care.

Professor Geoffrey Hinton is sometimes referred to as the “Godfather of AI” and,

‘[H]e said the UK government will have to establish a universal basic income to deal with the impact of AI on inequality, as he was “very worried about AI taking lots of mundane jobs”.’

https://www.bbc.co.uk/news/articles/c62r02z75jyo

Even in the short period since, it has been indicated that it’s not just “mundane jobs” that will be significantly adversely affected. In fact, if you are reading this and are under 50 years old, you are probably going to be hardest hit, because,

“[A] new comprehensive Nokia Bell Labs study challenges that prevailing [mundane job] narrative, finding that it is white-collar, highly technical jobs that will likely be those that face more disruption.”

https://www.bell-labs.com/institute/blog/yes-ai-will-upend-many-jobs-just-not-the-ones-you-imagined/

That is, most “middle class” or “degree required” jobs will go the way of the Dodo, as they are “learned knowledge” occupations.

Which in turn will cause a massive economic collapse, simply because it will remove nearly all circulatory wealth from 80–90% of the population. There is then a high likelihood that things will revert, in effect, to the historic “Barons in their Castles”, with lawless “Guard Labour” exerting their will on what will have become an enforced “serf population”.

ResearcherZero December 9, 2024 1:04 AM

@Clive Robinson

There are some techniques that can be more widely applied by social and traditional media.

Reducing animosity between groups:

“the political incentives to inflame and weaponize affective polarization for political gain must be reduced or eliminated”

‘https://journals.sagepub.com/doi/10.1177/23794607241238037

Improving intergroup dialogue…

https://journals.sagepub.com/doi/10.1177/10464964241302071

The two most effective strategies that can help to reduce the divide.
https://www.rochester.edu/newscenter/political-divide-megastudy-antidemocratic-attitudes-partisan-animosity-626562/

Ethics should be the main goal of media outlets’ coverage of elections.
https://www.rutgers.edu/news/how-media-namely-news-ads-and-social-posts-can-shape-election

ResearcherZero December 9, 2024 3:32 AM

@Bruce

Tools capable of more nuance do require a deeper understanding of technique.
Such tools only have power over us when we do not see the strings being pulled.

Overcoming the fear of rejection and the need to feel accepted can either lead us to look within ourselves and see what we can do to change, or it can instead lead us to blame others for the things we ourselves refuse to accept. There are many reasons we may be afraid of being rejected by others. It is often far easier to decide that the other person is at fault than to reexamine our own preconceived ideas and interpretations of events. It is only by reevaluating our own past behaviour that we can learn the truth of events.

Emotions motivate behaviours we often have little control over.

‘https://fsi.stanford.edu/news/transformative-power-anger-under-authoritarian-repression

Abandoning better judgement, we look for “strong” leaders in times of instability.
Economic and personal insecurity thus leaves us vulnerable to even overt manipulation.

https://www.scientificamerican.com/article/explaining-the-global-rise-of-ldquo-dominance-rdquo-leadership/

In the past similar events and methods convinced people to give up their rights.
https://news.berkeley.edu/2024/09/09/fascism-shattered-europe-a-century-ago-and-historians-hear-echoes-today-in-the-u-s/
