As the US Midterms Approach, AI Is Going to Emerge as a Key Issue Concerning Voters

In December, President Trump signed an executive order that neutered states’ ability to regulate AI, directing his administration both to sue states that try to do so and to withhold federal funds from them. This action pointedly supported industry lobbyists keen to avoid any constraints and consequences on their deployment of AI, while undermining the efforts of consumers, advocates, and industry associations concerned about AI’s harms who have spent years pushing for state regulation.

Trump’s actions have clarified the ideological alignments around AI within America’s electoral factions. They set down lines on a new playing field for the midterm elections, prompting members of his party, the opposition, and all of us to consider where we stand in the debate over how and where to let AI transform our lives.

In a May 2025 survey of likely voters nationwide, more than 70% favored state and federal regulators having a hand in AI policy. A December 2025 poll by Navigator Research found similar results, with a massive net +48% favorability for more AI regulation. Yet despite the overwhelming preference of both voters and his party’s elected leaders—Congress was essentially unanimous in defeating a previous state AI regulation moratorium—Trump has delivered on a key priority of the industry. The order explicitly challenges the will of voters across blue and red states, from California to South Dakota, scrambling political positions around the technology and setting up a new ideological battleground in the upcoming race for Congress.

There are a number of ways that candidates and parties may try to capitalize on this emerging wedge issue before the midterms.

In 2025, much of the popular debate around AI was cast in terms of humans versus machines. Advances in AI and the companies it is associated with, it is said, come at the expense of humans. A new model release with greater capabilities for writing, teaching, or coding means more people in those disciplines losing their jobs.

This is a humanist debate. Making us talk to an AI customer-support agent is an affront to our dignity. Using AI to help generate media sacrifices authenticity. AI chatbots that persuade and manipulate assault our liberty. There is philosophical merit to these arguments, and yet they seem to have limited political salience.

Populism versus institutionalism is a better way to frame this debate in the context of US politics. The MAGA movement is widely understood to be a realignment of American party politics to ally the Republican party with populism, and the Democratic party with defenders of traditional institutions of American government and their democratic norms.

This frame is shattered by Trump’s AI order, which unabashedly serves economic elites at the expense of populist consumer protections. It is part of an ongoing courting process between MAGA and big tech, where the Trump political project sacrifices the interests of consumers and its populist credentials as it cozies up to tech moguls.

We are starting to see populist resistance to this government/big tech alignment emerge on the local scale. People in Maryland, Arizona, North Carolina, Michigan and many other states are vigorously opposing AI datacenters in their communities, based on environmental and energy-affordability impacts. These centers of opposition are politically diverse; both progressives and Trump-supporting voters are turning out in force, influencing their local elected officials to resist datacenter development.

This opposition to the physical infrastructure of corporate AI is so far staying local, but it may yet translate into a national and politically aligned movement that could divide the MAGA coalition.

Any policy discussions about AI should include the individual harms associated with job loss, as employers seek to replace laborers with machines. It should also include the systemic economic risks associated with concentrated and supercharged AI investment, the democratic risks associated with the increased power in monopolistic and politically influential tech companies, and the degradation of civic functions like journalism and education by AI. In order for our free market to function in the public interest, the companies amassing wealth and profiting from AI must be forced to take ownership of, and internalize, these costs.

The political salience of AI will grow to meet the staggering scale of financial investment and societal impact it is already commanding. There is an opportunity for enterprising candidates, of either political party, to take the mantle of opposing AI-linked harms in the midterm elections.

Political solutions start with organizing, and broadening the base of political engagement around these issues beyond the locally salient topic of datacenters. Movement leaders and elected officials in states that have taken action on AI regulation should mobilize around the blatant industry capture, wealth extraction, and corporate favoritism reflected in the Trump executive order. AI is no longer just a policy issue for governments to discuss: it is a political issue that voters must decide on and demand accountability on.

Posted on March 26, 2026 at 7:06 AM • 11 Comments

Comments

TimH March 26, 2026 7:35 AM

Any policy discussions about AI should include

You missed the data centre power and water requirements, plus air pollution (on site gensets) and noise. Data centres are being sneaked in (by not being described as data centres) through planning departments to avoid discussion of environmental impacts.

Not Delusional March 26, 2026 8:10 AM

@TimH,

Agree with you 100%. Plus there’s the load those data centers are creating, or adding to, on the already old and weak power grids across the USA, so more brown-outs or even black-outs should be expected, no?

mark March 26, 2026 11:49 AM

My question is whether a Presidential executive order can LEGALLY force anything on the states – “all other power to the states and the people”.

Winter March 26, 2026 11:51 AM

The Mad Red Hatter has three unbreakable moral principles:
Me, Myself, and I

His actions follow these principles and he will do whatever pays off more to him personally.

As far as his followers (and family, I guess) are concerned, he follows the preachings of Jim Jones, who gave us the saying “Drinking the Kool-Aid”.

MAGA heartland has experienced this in a decreasing standard of living.

A considerable fraction of MAGA will follow him jumping off a cliff, Jim Jones style; another part will count their blessings and try to stay alive. Which will be a liability to the GOP and MAGA.

I would be surprised if the next midterm elections will actually be any more free and fair than those elections organized by Chavez, Maduro, Castro, or Khamenei. If they are held at all, that is.

Clive Robinson March 27, 2026 11:10 AM

@ Bruce,

With regards,

“This is a humanist debate. Making us talk to an AI customer-support agent is an affront to our dignity.”

No it’s way way worse than that.

The game is to “keep the customers away” by amongst other things wasting their time and other resources…

In effect corporations force you to speak to AI that is designed to never allow the customer to get through the system… Thus the customer never gets issues acknowledged, let alone resolved.

That way the corporation saves a lot of money and makes ludicrous claims about how good they are…

It’s the sort of sleazy behaviour you expect from the likes of the big tech companies but now on steroids.

Clive Robinson March 27, 2026 1:07 PM

@ Bruce,

And “synchronicity” continues…

You might want to read,

The Last Gasps of the Rent Seeking Class

Over the past fifty years, the U.S. economy built a giant rent-extraction layer on top of human limitations: things take time, patience runs out, brand familiarity substitutes for diligence, and most people are willing to accept a bad price to avoid more clicks. Trillions of dollars of enterprise value depended on those constraints persisting. (Quote from Citrini Research)

The best anyone can hope for is a free market, with everything properly priced. But for decades, the American market has not been free. It’s used purposefully added friction to exploit a time asymmetry between the business and you. And due to things like call centers, this has been very profitable for the businesses. Cable companies and insurance rely on the fact that your time is more valuable than theirs. They can hire people in India at scale to waste your time. They can use procedure and big data to design protocols to drive you just to the point of frustration at little cost to them. How often do you diligently check Uber a

https://geohot.github.io//blog/jekyll/update/2026/02/26/the-last-gasps-of-the-rent-seeking-class.html

It argues that AI returns part of the power to the customer through Chinese “Open models” and similar, but this comes with quite interesting limitations…

Read and Reason and draw your own conclusions.

Winter March 27, 2026 1:26 PM

@Clive

The Last Gasps of the Rent Seeking Class

Historically, the rent seeking class only disappeared temporarily after the whole economy was a smoldering crater. Examples are the plagues, WWI, and WWII.

Note that the current political class leadership in the US are from the first post-war generation. They are the ones who built the current rent-seeking class.

Clive Robinson March 27, 2026 1:43 PM

@ Bruce, ALL,

As an addition to my two posts above.

This article further amplifies some of the issues for those trying to use “friction” via AI, which, if used correctly, gives an asynchronous advantage to the customer.

But… It in effect makes the likes of OpenAI a “third party service provider” that currently bears the cost… not the Corp or its customer.

It thus raises the notion that this state is not just unstable but will quickly collapse for very basic economic reasons.

Sam Altman’s OpenAI Is Burning Billions — Most Users Pay Nothing — As Anthropic Closes In

The AI race has a financial problem, and it belongs primarily to the company that started it. OpenAI is projected to lose $14 billion in 2026 — nearly triple earlier estimates for 2025 — even as it reports $20 billion in annualised revenue and 900 million weekly ChatGPT users. According to internal OpenAI financial projections first reported by The Information, the company expects cumulative losses of $44 billion between 2023 and the end of 2028, with profitability not arriving until 2029 at the earliest. The numbers tell the story of a business that has built the world’s most recognisable AI product and still cannot find a path to profit.

The reason is structural. Only 5.5% of ChatGPT’s 900 million users pay for a subscription. The other 94.5% access the service for free — while OpenAI bears the compute cost of every single query across that user base. Infrastructure, model training and talent costs are scaling faster than revenue. Industry-wide, AI companies are expected to spend $690 billion in capital expenditure in 2026 alone. As Google prepares to deploy $185 billion on AI infrastructure this year, the scale of capital deployment required to stay competitive is becoming almost incomprehensible — and OpenAI, unlike Google, has no diversified revenue stream to fund it.

https://europeanbusinessmagazine.com/business/sam-altmans-openai-is-burning-billions-most-users-pay-nothing-as-anthropic-closes-in/

Thus the obvious question is,

“What cost to ROI are investors going to carry on OpenAI, and for how long?”

Both the Corps and their customers will “shop around” for the best deal they can get within tolerable convenience limits…

Thus with Open Competition, how many AI companies will actually survive long enough to lose trillions?

And in the same time scale what will happen with what are in effect Open Source AI products that are State Funded out of China and other places?

As the supposed “Chinese Curse” has it,

“May you live in interesting times…”

Rontea March 28, 2026 9:55 AM

Re: “The right to be free from technologies that predict and optimize until the self becomes a simulation.”

We’ve seen this pattern before: short-term political calculations colliding with long-term technological risk. By preempting state oversight, the federal government is effectively creating a regulatory vacuum, and the AI industry will exploit it. Local resistance to AI infrastructure is not just about jobs or surveillance—it’s a form of grassroots risk management in the face of systemic opacity. If history is any guide, by the time we understand the full consequences of these choices, the costs will be baked in, and the public will be left holding the risk without the leverage to mitigate it.

The irony is that it is less absurd to simulate life than to live it.
