AI and the Future of American Politics

Two years ago, Americans anxious about the 2024 presidential election were bracing for a malevolent new election influencer: artificial intelligence. Over the past several years, elections worldwide have offered plenty of warning signs of how AI can be used to propagate misinformation and alter the political landscape, whether by trolls on social media, foreign influencers, or even a street magician. AI is poised to play a more volatile role than ever in America’s next federal election in 2026. We can already see how different groups of political actors are approaching AI. Professional campaigners are using it to accelerate the traditional tactics of electioneering; organizers are using it to reinvent how movements are built; and citizens are using it both to express themselves and to amplify their side’s messaging. Because there are so few rules around AI’s role in politics, and so little prospect of regulatory action, there is no oversight of these activities, and no safeguards against their potentially dramatic impact on our democracy.

The Campaigners

Campaigners—messengers, ad buyers, fundraisers, and strategists—are focused on efficiency and optimization. To them, AI is a way to augment or even replace expensive humans who traditionally perform tasks like personalizing emails, texting donation solicitations, and deciding what platforms and audiences to target.

This is an incremental evolution of the computerization of campaigning that has been underway for decades. For example, the progressive campaign infrastructure group Tech for Campaigns claims it used AI in the 2024 cycle to reduce the time spent drafting fundraising solicitations by one-third. If AI is working well here, you won’t notice the difference between an annoying campaign solicitation written by a human staffer and an annoying one written by AI.

But AI is scaling these capabilities, which is likely to make them even more ubiquitous. This will make the biggest difference for challengers to incumbents in safe seats, who see AI both as a genuinely useful tool and as an attention-grabbing way to get their race into the headlines. Jason Palmer, the little-known Democratic primary challenger to Joe Biden, won the American Samoa primary while extensively leveraging AI avatars for campaigning.

Such tactics were sometimes deployed as publicity stunts in the 2024 cycle; they were firsts that got attention. Pennsylvania Democratic Congressional candidate Shamaine Daniels became the first to use a conversational AI robocaller in 2023. Two long-shot challengers to Rep. Don Beyer used an AI avatar to represent the incumbent in a live debate last October after he declined to participate. In 2026, voters who have seen years of the official White House X account posting deepfaked memes of Donald Trump will be desensitized to the use of AI in political communications.

Strategists are also turning to AI to interpret public opinion data and provide more fine-grained insight into the perspective of different voters. This might sound like AIs replacing people in opinion polls, but it is really a continuation of the evolution of political polling into a data-driven science over the last several decades.

A recent survey by the American Association of Political Consultants found that a majority of their members’ firms already use AI regularly in their work, and more than 40 percent believe it will “fundamentally transform” the future of their profession. If these emerging AI tools become popular in the midterms, it won’t just be a few candidates from the tightest national races texting you three times a day. It may also be the member of Congress in the safe district next to you, and your state representative, and your school board members.

The development and use of AI in campaigning is different depending on what side of the aisle you look at. On the Republican side, Push Digital Group is going “all in” on a new AI initiative, using the technology to create hundreds of ad variants for their clients automatically, as well as assisting with strategy, targeting, and data analysis. On the other side, the National Democratic Training Committee recently released a playbook for using AI. Quiller is building an AI-powered fundraising platform aimed at drastically reducing the time campaigns spend producing emails and texts. Progressive-aligned startups Chorus AI and BattlegroundAI are offering AI tools for automatically generating ads for use on social media and other digital platforms. DonorAtlas automates data collection on potential donors, and RivalMind AI focuses on political research and strategy, automating the production of candidate dossiers.

For now, there seems to be an investment gap between Democratic- and Republican-aligned technology innovators. Progressive venture fund Higher Ground Labs boasts $50 million in deployed investments since 2017 and a significant focus on AI. Republican-aligned counterparts operate on a much smaller scale. Startup Caucus has announced one investment—of $50,000—since 2022. The Center for Campaign Innovation funds research projects and events, not companies. This echoes a longstanding gap in campaign technology between Democratic- and Republican-aligned fundraising platforms ActBlue and WinRed, which has landed the former in Republicans’ political crosshairs.

Of course, not all campaign technology innovations will be visible. In 2016, the Trump campaign vocally eschewed using data to drive campaign strategy and appeared to be falling way behind on ad spending, but was—we learned in retrospect—actually leaning heavily into digital advertising and making use of controversial new mechanisms for accessing and exploiting voters’ social media data through its vendor Cambridge Analytica. The most impactful uses of AI in the 2026 midterms may not be known until 2027 or beyond.

The Organizers

Beyond the realm of political consultants driving ad buys and fundraising appeals, organizers are using AI in ways that feel more radically new.

The hypothetical potential of AI to drive political movements was illustrated in 2022 when a Danish artist collective used an AI model to found a political party, the Synthetic Party, and generate its policy goals. This was more of an art project than a popular movement, but it demonstrated that AIs—synthesizing the expressions and policy interests of humans—can formulate a political platform. In 2025, Denmark hosted a “summit” of eight such AI political agents where attendees could witness “continuously orchestrate[d] algorithmic micro-assemblies, spontaneous deliberations, and impromptu policy-making” by the participating AIs.

The more viable version of this concept lies in the use of AIs to facilitate deliberation. AIs are being used to help legislators collect input from constituents and to hold large-scale citizen assemblies. This kind of AI-driven “sensemaking” may play a powerful role in the future of public policy. Some research has suggested that AI can be as effective as, or more effective than, humans in helping people find common ground on controversial policy issues.

Another movement, for “Public AI,” is focused on wresting AI from the hands of corporations to put people, through their governments, in control. Civic technologists in the national governments of Singapore, Japan, Sweden, and Switzerland are building their own alternatives to Big Tech AI models, for use in public administration and for distribution as a public good.

Labor organizers have a particularly interesting relationship to AI. At the same time that they are galvanizing mass resistance against the replacement or endangerment of human workers by AI, many are racing to leverage the technology in their own work to build power.

Some entrepreneurial organizers have used AI in the past few years as tools for activating, connecting, answering questions for, and providing guidance to their members. In the UK, the Centre for Responsible Union AI studies and promotes the use of AI by unions; they’ve published several case studies. The UK Public and Commercial Services Union has used AI to help their reps simulate recruitment conversations before going into the field. The Belgian union ACV-CVS has used AI to sort hundreds of emails per day from members to help them respond more efficiently. Software companies such as Quorum are increasingly offering AI-driven products to cater to the needs of organizers and grassroots campaigns.

But unions have also leveraged AI for its symbolic power. In the U.S., the Screen Actors Guild held up the specter of AI displacement of creative labor to attract public attention and sympathy, and the ETUC (the European confederation of trade unions) developed a policy platform for responding to AI.

Finally, some union organizers have leveraged AI in more provocative ways. Some have applied it to hacking the “bossware” AI to subvert the exploitative intent or disrupt the anti-union practices of their managers.

The Citizens

Many of the tasks we’ve talked about so far are use cases familiar to anyone working in office and management settings: writing emails, providing user (or voter, or member) support, doing research.

But even mundane tasks, when automated at scale and targeted at specific ends, can be pernicious. AI is not neutral. It can be applied by many actors for many purposes. In the hands of the most numerous and diverse actors in a democracy—the citizens—that has profound implications.

Conservative activists in Georgia and Florida have used a tool named EagleAI to automate challenging voter registration en masse (although the tool’s creator later denied that it uses AI). In a nonpartisan electoral management context with access to accurate data sources, such automated review of electoral registrations might be useful and effective. In this hyperpartisan context, AI merely serves to amplify the proclivities of activists at the extreme of their movements. This trend will continue unabated in 2026.

Of course, citizens can use AI to safeguard the integrity of elections. In Ghana’s 2024 presidential election, civic organizations used an AI tool to automatically detect and mitigate electoral disinformation spread on social media. The same year, Kenyan protesters developed specialized chatbots to distribute information about a controversial finance bill in Parliament and instances of government corruption.

So far, the biggest way Americans have leveraged AI in politics is in self-expression. About ten million Americans have used the chatbot Resistbot to help draft and send messages to their elected leaders. It’s hard to find statistics on how widely adopted tools like this are, but researchers have estimated that, as of 2024, about one in five consumer complaints to the U.S. Consumer Financial Protection Bureau was written with the assistance of AI.

OpenAI operates security programs to disrupt foreign influence operations and maintains restrictions on political use in its terms of service, but this is hardly sufficient to deter use of AI technologies for whatever purpose. And widely available free models give anyone the ability to attempt this on their own.

But this could change. The most ominous sign of AI’s potential to disrupt elections is not the deepfakes and misinformation. Rather, it may be the use of AI by the Trump administration to surveil and punish political speech on social media and other online platforms. The scalability and sophistication of AI tools give governments with authoritarian intent unprecedented power to police and selectively limit political speech.

What About the Midterms?

These examples illustrate AI’s pluripotent role as a force multiplier. The same technology used by different actors—campaigners, organizers, citizens, and governments—leads to wildly different impacts. We can’t know for sure what the net result will be. In the end, it will be the interactions and intersections of these uses that matter, and their unstable dynamics will make future elections even more unpredictable than in the past.

For now, the decisions of how and when to use AI lie largely with individuals and the political entities they lead. Whether or not you personally trust AI to write an email for you or make a decision about you hardly matters. If a campaign, an interest group, or a fellow citizen trusts it for that purpose, they are free to use it.

It seems unlikely that Congress or the Trump administration will put guardrails around the use of AI in politics. AI companies have rapidly emerged as among the biggest lobbyists in Washington, reportedly dumping $100 million toward preventing regulation, with a focus on influencing candidate behavior before the midterm elections. The Trump administration seems open and responsive to their appeals.

The ultimate effect of AI on the midterms will largely depend on the experimentation happening now. Candidates and organizations across the political spectrum have ample opportunity—but a ticking clock—to find effective ways to use the technology. Those that do will have little to stop them from exploiting it.

This essay was written with Nathan E. Sanders, and originally appeared in The American Prospect.

Posted on October 13, 2025 at 7:04 AM

Comments

KC October 13, 2025 11:09 AM

On the other side, the National Democratic Training Committee recently released a playbook for using AI …

Wired: ‘The group is largely targeting smaller campaigns with fewer resources with its AI course, seeking to empower what could be five person teams to work with the “efficiency of a 15 person team.”’

The NDTC site is CHOCK-FULL of a broad array of resources, including the very interesting looking course: “AI for Progressive Campaigns.”

https://traindems.org/course-details/280/ai-for-progressive-campaigns

KC October 13, 2025 11:11 AM

On the Republican side, Push Digital Group is going “all in” on a new AI initiative …

https://pushdigitalgroup.com/

Gosh darn. It looks like the Republicans are continuing to expand their efforts.

Donehue also believes that the Republicans’ party infrastructure isn’t positioned for the current challenges candidates face reaching a diverse electorate in a fractured media environment …

“The only people that can really look at the future are companies,” he said. “Whether it’s Push, Targeted Victory, OnMessage, [FlexPoint Media], Axiom …

Just taking a moment to look at Targeted Victory, they appear to have a pretty substantial team. And quite a hiring spree going on.

https://targetedvictory.com/team/
https://www.linkedin.com/company/targeted-victory/

And they don’t seem shy about AI either.

The Prospecting and Data teams recently leveled up their skills at Snowflake Summit 2025 in San Francisco! They gained valuable insights from some of the leading AI and data experts, bringing cutting-edge strategies and the latest technology back to Targeted Victory.

Clive Robinson October 13, 2025 1:47 PM

@ Bruce, ALL,

You say,

“Jason Palmer, the little-known Democratic primary challenger to Joe Biden, successfully won the American Samoa primary while extensively leveraging AI avatars for campaigning.”

Which indicates there was in effect a “master slave” relationship. Where Mr Palmer thought he was the master and the AI Avatars slaves.

But you do not take it logically further by thinking about who actually controls the AI Avatars and the power balance.

Consider in most cases the “assumed Master” will in fact not be the master, but just a “customer” of the organisation that does actually control the AI Avatars.

Now consider the fact that the “customer” could actually be seen as just a “Puppet” or “photogenic front man”, that is almost entirely reliant on the AI Avatars.

He might be “Paying the Piper” but he is not “playing the music”.

Thus the question arises of,

“Why have lobbyists when you control the slaves?”

That is you could just put a puppet in the seat, and pull the strings via the AI slaves.

It’s something we need to seriously talk about, because without taking precautions now, it is something that will almost certainly happen.

We currently complain about lobbyists writing legislation and regulation for the politicians and civil servants.

Imagine no expensive lobbyists with large brown envelopes… Just “front men” as legislators who are 100% dependent on the AI.

We know the AI can be controlled by others who in effect cannot be seen. In effect the unseen get to use the AI as a very inexpensive “arms length dictator” controlling as many legislators / politicians as they want.

Technically the AI “won’t be the directing mind” but the AI will be controlled, and not by the paying customer. And the AI, and thus the customer, will be utterly subservient to the “unseen hand” over which there is no oversight or restraint by any process.

not important October 14, 2025 4:58 PM

https://www.yahoo.com/news/articles/more-scientists-ai-less-trust-110000886.html

=the report found that scientists expressed less trust in AI than they did in 2024, when it was decidedly less advanced.

in the 2024 iteration of the survey, 51 percent of scientists polled were worried about potential “hallucinations,” a widespread issue in which large language models (LLMs) present completely fabricated information as fact. That number was up to a whopping 64 percent in 2025, even as AI use among researchers surged from 45 to 62 percent.

Anxiety over security and privacy were up 11 percent from last year, while concerns over ethical AI and transparency also ticked up.

more people learn about how AI works, the less they trust it. The opposite was also true — AI’s biggest fanboys tended to be those who understood the least about the tech.

Experts say that users roundly prefer confident LLMs to ones that admit when they can’t find data or deliver an accurate answer — even when that information is totally made up. If a company like OpenAI were to stamp out sloppy hallucinations for good, it would scare off users in droves.=

not important October 14, 2025 5:01 PM

https://www.yahoo.com/news/articles/elon-musk-faces-backlash-over-235000799.html

My attention was on technical not moral issues:

=As the increased demand for electricity has outpaced the growth in supply, it has caused electricity prices to skyrocket for everyday consumers.

Not only that, but these data centers also use huge amounts of water for cooling purposes. Those living nearby have reported a hit to local water supplies as well as symptoms such as fatigue and anxiety related to loud data center operations.

Given the unprecedented pace at which AI has developed, lawmakers and government regulators have struggled to keep up with the rapid technological advancements.

To establish meaningful regulations protective of users and the environment while keeping energy costs affordable for everyday Americans will require a thoughtful, purpose-driven collaboration among the AI industry, government, individuals, and other stakeholders.=

ResearcherZero October 16, 2025 11:02 PM

Those comments do sound a lot like something a critic of technology might say. Perhaps a Luddite born somewhere in the bowels of hell who then emerges to enslave the entire world.

The government should have no business watching what tech companies are doing, Peter Thiel warns. Thiel warned about government overreach, suggested that a critic of technology could be the Antichrist, and criticized the technocratic elite (not including himself, of course).

Palantir is partnering to scale so it can sift through data to find “hidden things”.

https://www.washingtonpost.com/technology/2025/10/10/peter-thiel-antichrist-lectures-leaked/

Federal officials designated by the President or Agency Heads (or their designees) have full and prompt access to all unclassified agency records, data, software systems, and information technology systems. Agency Heads shall review agency regulations governing unclassified data access within 30 days to eliminate or modify rules preventing this goal.
https://fedscoop.com/trump-executive-order-data-sharing-information-silos/

Americans’ data held by the government should be inserted into the following product.
https://www.usaspending.gov/recipient/1ea8a9a4-3726-3491-9040-66950bb67606-P/latest

ResearcherZero October 16, 2025 11:12 PM

Palantir is joining with Snowflake and other AI enabled data enterprise companies to increase interoperability and generate insights and share price rises from all that data.

https://www.techtarget.com/searchDataManagement/news/366632860/Snowflake-Palantir-team-to-simplify-AI-insight-generation

ResearcherZero October 17, 2025 12:47 AM

@not important

The direction things are heading in is about as thoughtful as you can get if you want to squeeze and control everything that is possible from the neck of the golden goose.

Palantir ingests data at scale in a form that AI can understand. It allows organizations to connect all kinds of data sources from a range of different environments to manage all of that data. Foundry can be used to track and trace points of data and perform complex operations including analysis, classification, and translation, and to link this all together over time. Big picture stuff that provides insight into all the areas one might want to exploit.

https://morphsys.substack.com/p/what-does-palantir-really-do-a-deep

Tariffs are helping the lobbying, legal and public affairs swamp generate record profits.
https://www.bloomberg.com/news/features/2025-10-01/businesses-eager-to-beat-tariffs-turn-to-trump-connected-lobbyists

Big Tech is amongst new clients lobbying firms have signed up that are looking to expand.
https://www.politico.com/news/2025/01/29/lobbying-trump-k-street-ballard-00201263

The Trump administration now owns a stake in a number of big businesses looking for favors.
https://finance.yahoo.com/news/trump-administration-now-holds-stakes-023008085.html

ResearcherZero October 18, 2025 1:25 AM

A basic idea behind the current approach is to pump out vast amounts of raw sewage into the public sphere. Once people become accustomed to being up to their necks in the muck, many will become oblivious to the large turds floating past on the surface. There are a few drawbacks to this strategy. The murkiness of the waters will obscure much of the contents that are hidden at a deeper level. Even with a snorkel and a pair of goggles, few will volunteer to risk sticking their heads beneath the surface to get a glimpse at what could be lurking beneath. Detection systems may become clogged with s–t, and venturing in to clear such blockages will inevitably see time and resources wasted freeing those who become bogged down.

Storm surges will become a very real problem due to the erosion and collapse of coastal ecosystems which once acted as a natural barrier to the heated and putrid waters. Will the melting ice of the Antarctic eventually dilute and sweep it all away, or will we all be caught in the deluge and swept away with it too while clinging to our rubber duckies?

Rontea October 23, 2025 6:10 PM

AI’s growing role in American politics is both fascinating and concerning. On one hand, it empowers campaigns, organizers, and citizens to reach audiences and engage in ways that were never possible before. On the other hand, the lack of clear regulations creates serious risks, from voter suppression to state surveillance. As we approach the 2026 midterms, the way AI is used could deeply influence not just election outcomes, but public trust in democracy itself.
