AI and Voter Engagement

Social media has been a familiar, even mundane, part of life for nearly two decades. It can be easy to forget it was not always that way.

In 2008, social media was just emerging into the mainstream. Facebook reached 100 million users that summer. And a singular candidate was integrating social media into his political campaign: Barack Obama. His campaign’s use of social media was so bracingly innovative, so impactful, that it was viewed by journalist David Talbot and others as the strategy that enabled the first-term Senator to win the White House.

Over the past few years, a new technology has become mainstream: AI. But still, no candidate has unlocked AI’s potential to revolutionize political campaigns. Americans have three more years to wait before casting their ballots in another Presidential election, but we can look at the 2026 midterms and examples from around the globe for signs of how that breakthrough might occur.

How Obama Did It

Rereading the contemporaneous reflections of the New York Times’ late media critic, David Carr, on Obama’s campaign reminds us of just how new social media felt in 2008. Carr positions it within a now-familiar lineage of revolutionary communications technologies from newspapers to radio to television to the internet.

The Obama campaign and administration demonstrated that social media was different from those earlier communications technologies, including the pre-social internet. Yes, increasing numbers of voters were getting their news from the internet, and content about the then-Senator sometimes made a splash by going viral. But those were still broadcast communications: one voice reaching many. Obama found ways to connect voters to each other.

In describing what social media revolutionized in campaigning, Carr quotes campaign vendor Blue State Digital’s Thomas Gensemer: “People will continue to expect a conversation, a two-way relationship that is a give and take.”

The Obama team made some earnest efforts to realize this vision. His transition team launched change.gov, the website where the campaign collected a “Citizen’s Briefing Book” of public comment. Later, his administration built We the People, an online petitioning platform.

But the lasting legacy of Obama’s 2008 campaign, as political scientists Hahrie Han and Elizabeth McKenna chronicled, was pioneering online “relational organizing.” This technique enlisted individuals as organizers to activate their friends in a self-perpetuating web of relationships.

Perhaps because of the Obama campaign’s close association with the method, relational organizing has been touted repeatedly as the linchpin of Democratic campaigns: in 2020, 2024, and today. But research by non-partisan groups like Turnout Nation and right-aligned groups like the Center for Campaign Innovation has also empirically validated the effectiveness of the technique for inspiring voter turnout within connected groups.

The Facebook of 2008 worked well for relational organizing. It gave users tools to connect and promote ideas to the people they know: college classmates, neighbors, friends from work or church. But the nature of social networking has changed since then.

For the past decade, according to Pew Research, Facebook use has stalled and lagged behind YouTube, while Reddit and TikTok have surged. These platforms are less useful for relational organizing, at least in the traditional sense. YouTube is organized more like broadcast television, where content creators produce content disseminated on their own channels in a largely one-way communication to their fans. Reddit gathers users worldwide in forums (subreddits) organized primarily around topical interests. The endless feed of TikTok’s “For You” page disseminates engaging content with little ideological or social commonality. None of these platforms shares the essential feature of Facebook circa 2008: an organizational structure that emphasizes connection to the people over whom users have direct social influence.

AI and Relational Organizing

Ideas and messages might spread virally through modern social channels, but they are not where you convince your friends to show up at a campaign rally. Today’s platforms are spaces for political hobbyism, where you express your political feelings and see others express theirs.

Relational organizing works when one person’s action inspires others to do the same. That’s inherently a chain of human-to-human connection. If my AI assistant inspires your AI assistant, no human notices, and no one’s vote changes. But key steps in the human chain can be assisted by AI. Tell your phone’s AI assistant to craft a personal message to one friend—or a hundred—and it can do it.

So if a campaign hits you at the right time with the right message, they might persuade you to task your AI assistant to ask your friends to donate or volunteer. The result can be something more than a form letter; it could be automatically drafted based on the entirety of your email or text correspondence with that friend. It could include references to your discussions of recent events, or past campaigns, or shared personal experiences. It could sound as authentic as if you’d written it from the heart, but scaled to everyone in your address book.
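
To make that concrete, here is a minimal sketch of the kind of tooling involved, using the OpenAI Python SDK purely as an example. The contact list, the model choice, and the helper function are hypothetical, and in practice an assistant would draw on your actual correspondence rather than a hand-written note about each friend.

```python
# Hedged sketch: drafting personalized outreach messages for relational
# organizing. Uses the OpenAI Python SDK as one illustrative API; the
# contacts, model name, and campaign ask below are all made up.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

contacts = [
    {"name": "Sam", "shared_context": "canvassed together in 2022; both care about transit funding"},
    {"name": "Priya", "shared_context": "neighbor; we've talked about school-board races at the park"},
]

def draft_message(contact: dict, ask: str) -> str:
    """Draft a short, personal-sounding note asking one friend to act."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "You write brief, warm, personal messages from me to a friend. "
                        "No marketing language."},
            {"role": "user",
             "content": f"Friend: {contact['name']}\n"
                        f"What we share: {contact['shared_context']}\n"
                        f"Ask them to: {ask}"},
        ],
    )
    return response.choices[0].message.content

for contact in contacts:
    print(draft_message(contact, "volunteer for two hours at Saturday's canvass"))
```

The point is not this particular toolchain but the economics: personalization that once required a volunteer’s time and attention now costs little more than a prompt.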

Research suggests that AI can generate and perform written political messaging about as well as humans. AI will surely play a tactical role in the 2026 midterm campaigns, and some candidates may even use it for relational organizing in this way.

(Artificial) Identity Politics

For AI to be truly transformative of politics, it must change the way campaigns work. And we are starting to see that in the US.

The earliest uses of AI in American political campaigns are, to be polite, uninspiring. Candidates treated AI as just another tool to optimize an endless stream of email and text message appeals, to ramp up political vitriol, to harvest data on voters and donors, or merely as a stunt.

Of course, we have seen the rampant production and spread of AI-powered deepfakes and misinformation. This is already impacting the key 2026 Senate races, which are likely to attract hundreds of millions of dollars in financing. Roy Cooper, Democratic candidate for US Senate from North Carolina, and Abdul El-Sayed, Democratic candidate for Senate from Michigan, were both targeted by viral deepfake attacks in recent months. This may reflect a growing trend in Donald Trump’s Republican party of using AI-generated imagery to build up GOP candidates and assail the opposition.

And yet, in the global elections of 2024, AI was used more memetically than deceptively. So far, conservative and far-right parties seem to have adopted this approach most aggressively. The ongoing rise of Germany’s far-right populist AfD party has been credited to its use of AI to generate nostalgic and evocative (and, to many, offensive) campaign images, videos, and music; seemingly as a result, the party has dominated TikTok. Because most social platforms’ algorithms are tuned to reward media that generates an emotional response, this counts as a double use of AI: to generate content and to manipulate its distribution.

AI can also be used to generate politically useful, though artificial, identities. These identities can fulfill different roles than humans in campaigning and governance because they have differentiated traits. They can’t be imprisoned for speaking out against the state, can be positioned (legitimately or not) as unsusceptible to bribery, and can be forced to show up when humans will not.

In Venezuela, journalists have turned to AI avatars—artificial newsreaders—to report anonymously on issues that would otherwise elicit government retaliation. Albania recently “appointed” an AI to a ministerial post responsible for procurement, claiming that it would be less vulnerable to bribery than a human. In Virginia, both in 2024 and again this year, candidates have used AI avatars as artificial stand-ins for opponents who refused to debate them.

And yet, none of these examples, whether positive or negative, pursue the promise of the Obama campaign: to make voter engagement a “two-way conversation” on a massive scale.

The closest so far to fulfilling that vision anywhere in the world may be Japan’s new political party, Team Mirai. It started in 2024, when an independent Tokyo gubernatorial candidate, Anno Takahiro, used an AI avatar on YouTube to respond to 8,600 constituent questions over a seventeen-day continuous livestream. He collated hundreds of comments on his campaign manifesto into a revised policy platform. While he didn’t win his race, he shot up to a fifth place finish among a record 56 candidates.

Anno was recently elected to the upper house of the federal legislature as the founder of a new party with a 100-day plan to bring his vision of a “public listening AI” to the whole country. In the early stages of that plan, they’ve invested their share of Japan’s 32 billion yen in party grants—public subsidies for political parties—to hire engineers building digital civic infrastructure for Japan. They’ve already created platforms to provide transparency for party expenditures and to use AI to make legislation in the Diet easier to understand, and they are meeting with engineers from US-based Jigsaw Labs (a Google company) to learn from international examples of how AI can be used to power participatory democracy.

Team Mirai has yet to prove that it can get a second member elected to the Japanese Diet, let alone win substantial power, but it is innovating and demonstrating new ways for people to participate in politics through AI, ways we believe are likely to spread.

Organizing with AI

AI could be used in the US in similar ways. Following American federalism’s longstanding model of “laboratories of democracy,” we expect the most aggressive campaign innovation to happen at the state and local level.

D.C. Mayor Muriel Bowser is partnering with MIT and Stanford labs to use the AI-based tool deliberation.io to capture wide-scale public feedback in city policymaking about AI. Her administration said that using AI in this process allows “the District to better solicit public input to ensure a broad range of perspectives, identify common ground, and cultivate solutions that align with the public interest.”

It remains to be seen how central this will become to Bowser’s expected re-election campaign in 2026, but the technology has legitimate potential to be a prominent part of a broader program to rebuild trust in government. This is a trail blazed by Taiwan a decade ago. The vTaiwan initiative showed how digital tools like Pol.is, which uses machine learning to make sense of real-time constituent feedback, can scale participation in democratic processes and radically improve trust in government. Similar AI listening processes have been used in Kentucky, France, and Germany.
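
For readers curious what the underlying machine learning looks like, the heart of a Pol.is-style process is surprisingly simple: participants vote agree or disagree on short statements, and the system clusters them by voting pattern to surface opinion groups and points of consensus. The sketch below is illustrative only, not Pol.is’s actual code, and the vote matrix is invented.

```python
# Hedged sketch of Pol.is-style opinion clustering: project a
# participant-by-statement vote matrix into a low-dimensional space,
# cluster participants, and report each group's strongest agreement.
# The data here is made up for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Rows = participants, columns = statements; +1 agree, -1 disagree, 0 pass.
votes = np.array([
    [ 1,  1, -1,  0,  1],
    [ 1,  1, -1, -1,  1],
    [-1, -1,  1,  1,  0],
    [-1,  0,  1,  1, -1],
    [ 1,  1,  0, -1,  1],
    [-1, -1,  1,  0, -1],
])

coords = PCA(n_components=2).fit_transform(votes)  # low-dimensional "opinion space"
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

for g in sorted(set(groups)):
    members = np.where(groups == g)[0].tolist()
    mean_votes = votes[groups == g].mean(axis=0)
    print(f"Group {g}: participants {members}, "
          f"strongest agreement on statement {int(np.argmax(mean_votes))}")
```

Real deployments add live updating, moderation, and careful presentation of the clusters back to participants, but the statistical core is not much more exotic than this.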

Even if campaigns like Bowser’s don’t adopt this kind of AI-facilitated listening and dialogue, expect it to be an increasingly prominent part of American public debate. Through a partnership with Jigsaw, Scott Rasmussen’s Napolitan Institute will use AI to elicit and synthesize the views of at least five Americans from every Congressional district in a project called “We the People.” The project is timed to coincide with the country’s 250th anniversary in 2026; expect the results to be promoted during the heat of the midterm campaign and to stoke interest in this kind of AI-assisted political sensemaking.

In a year when we celebrate the American republic’s semiquincentennial and continue a decade-long debate about whether Donald Trump and the Republican party remade in his image are fighting for the interests of the working class, representation will be on the ballot in 2026. Midterm election candidates will look for any way they can get an edge. For all the risks it poses to democracy, AI presents a real opportunity, too, for politicians to engage voters en masse while factoring their input into their platform and message. Technology isn’t going to turn an uninspiring candidate into Barack Obama, but it gives any aspirant to office the capability to try to realize the promise that swept him into office.

This essay was written with Nathan E. Sanders, and originally appeared in The Fulcrum.

Posted on November 18, 2025 at 7:01 AM

Comments

anon November 18, 2025 8:34 AM

Barack Obama repeatedly stated: “If you like your doctor, you can keep your doctor. If you like your health care plan, you can keep your health care plan”.

I’m sure every other statement he’s ever made is still 100% true.

Clive Robinson November 18, 2025 12:30 PM

@ Bruce, ALL,

With regards,

“But still, no candidate has unlocked AI’s potential to revolutionize political campaigns.”

Err not true unless you are only talking about that political backwater some call the US 😉

AI has been used in India, and the UK in support of a candidate and actually as a candidate.

I’ve previously drawn your attention to two UK AI Candidates, AI Steve and AI Mark.

One of them, whose creator ran the company behind it, got totally trashed by the electorate,

https://theconversation.com/britains-first-ai-politician-claims-he-will-bring-trust-back-to-politics-so-i-put-him-to-the-test-233403

However, the man behind it, Steve Endacott, claimed to be from the south of England for the Brighton campaign, but was actually living in Rochdale, several hundred miles away, where he had run an AI for a local council seat a short while before.

It was pointed out by some that this unsurprisingly made him somewhat untrustworthy, and thus the AI got trounced.

But worse, his company Neural Voice, which he owns and runs in the Rochdale area, appears to support more than one AI candidate and political party.

That is, in addition, there is “Labour’s AI Mark”.

https://www.independent.co.uk/bulletin/news/virtual-mp-ai-labour-mark-sewards-b2802741.html

You can see more about the company “Neural Voice” on its self-promoting web site,

https://www.neural-voice.ai/ai-steve

Although the web site claims he lives in Sussex, in the south of England, Companies House shows something different,

https://find-and-update.company-information.service.gov.uk/company/15208364

The address given is a small domestic residence,

https://themovemarket.com/tools/propertyprices/8-newcastle-close-drighlington-bradford-bd11-1df

Which is registered as a domestic residence, not a place of business.

French Mailman November 18, 2025 4:31 PM

Example of AI / deepfake used in politics:

There will be municipal elections in France in 2026. In the city of Strasbourg, where the European Parliament is located, one candidate recently posted an AI-generated video to make the streets of the city look dirtier than they really are.

Reference (in French)
https://www.leparisien.fr/elections/municipales/municipales-2026-une-candidate-rn-denonce-la-salete-des-rues-de-strasbourg-avec-des-images-generees-par-ia-31-10-2025-XGJXVAHFPRA3PGHU4DM7CBJOK4.php

P/K November 19, 2025 1:01 AM

How about not using that crap, energy and money wasting AI? Those AI tools for the general public are nothing more than imperialist instruments of those Big Tech companies and if there’s a marginal benefit for the consumer, that doesn’t outweigh all the disadvantages. It’s that blockchain hype on steroids.

Clive Robinson November 19, 2025 4:05 AM

@ P/K, ALL,

With regards you asking,

“How about not using that crap, energy and money wasting AI?”

The simple answer is,

“Because of lunacy and inbred cognitive bias.”

Call it in part the “Frankenstein Effect” that,

“Man can do better with cogs and chains than the time tested laws of nature and evolution”

Especially when history shows such notions are generally, let’s just say, “delusional”.

It’s unclear when the notion of “Artificial Intelligence” started, but as we still have no idea what “Intelligence” actually is, and thus no way to test it definitively or even by empirical observation, it’s at the very least “an always moving target”[1].

Thus the history of AI research is the joke of “always 20 years away”, but also

“How the latest and greatest research always fails to live up to the hype.”

Not that it means the research is pointless. I was around when great things were said about “expert systems” and “fuzzy logic” back in the 1980s. Most assume they just died or faded away. Neither is the case; both are still around, but in “niche applications”.

I use them in “cognitive radios”, “industrial control” and “safety systems” used for remote or unmanned control.

Thus I can confidently predict two things about the current AI LLM and ML systems,

1, They will fail to live up to the hype.
2, They will eventually find use in niche applications.

[1] Some actually believe it is “Mankind trying to play God” –or some other impossible deity– because of the backwards biblical statement of,

“God said, Let us make man in our image, in our likeness” (Genesis 1:26)

Swap man and god in the statement to see the truth of it, confirmed by the fact that “Mankind’s gods” are always only a little better than man thus always just out of reach as Mankind evolves.

Currently as you state with,

“It’s that blockchain hype on steroids.”

And I’ve more generally observed with the behaviour of “Venture Capitalists” looking for the next “mug bait” for “foolish speculators” they can “tax”, as those with less money than sense lose not just their shirts, but get taken entirely to the cleaners.

They have hyped cryptocoins, blockchain, Smart Contracts, Non Fungible Tokens and just about everything else “Web3” with steadily diminishing pay offs. All basically on “shilling up” and creating FOMO.

It’s got so bad that almost the entire US economy, as seen through the various indexes, is based on the churn from this shilling and FOMO hype, which in reality is just drum banging and flag waving nonsense.

Unfortunately, wherever there is “noise, nonsense, or hype” it always attracts crooks and politicians, of which the former are a little more aware than the latter. Their only interest generally is the old,

“What’s in it for me?”

That a century ago Upton Sinclair aptly summed up with,

“It is difficult to get a man to understand something when his salary depends on his not understanding it.”

Probably more true today than when he coined it.

Montecarlo November 19, 2025 10:01 AM

An AI can make phone calls to solicit donations at a much faster rate than any human being, since it can literally talk to thousands of people simultaneously. This is important, since fundraising consumes the majority of many politicians’ time.

Will this allow politicians to spend more time on other things, such as making backroom deals, or will they simply be replaced by AIs? Do human beings, as politicians, bring any value to the table?

Maybe. Deciding which promises to break is critical for any successful politician. Donors are told whatever they wish to hear, and so politicians make a wide array of promises, many of which are in direct conflict. Should promises be weighted in proportion to the size of each donation? This algorithm seems too simplistic, as factors such as the politician’s branding, track record, and ability to distract voters from substantive issues, must also enter into the equation. Also, promises that attract votes, even if they are not directly tied to donations, may have a small residual value. I therefore think AIs must substantially improve before they replace human politicians completely.
