AI and the 2024 US Elections

For years now, AI has undermined the public’s ability to trust what it sees, hears, and reads. The Republican National Committee released a provocative ad offering an “AI-generated look into the country’s possible future if Joe Biden is re-elected,” showing apocalyptic, machine-made images of ruined cityscapes and chaos at the border. Fake robocalls purporting to be from Biden urged New Hampshire residents not to vote in the 2024 primary election. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an Iranian group using ChatGPT to generate fake social-media comments.

It’s not altogether clear what damage AI itself may cause, though the reasons for concern are obvious—the technology makes it easier for bad actors to construct highly persuasive and misleading content. With that risk in mind, there has been some movement toward constraining the use of AI, yet progress has been painstakingly slow in the area where it may count most: the 2024 election.

Two years ago, the Biden administration issued a blueprint for an AI Bill of Rights aiming to address “unsafe or ineffective systems,” “algorithmic discrimination,” and “abusive data practices,” among other things. Then, last year, Biden built on that document when he issued his executive order on AI. Also in 2023, Senate Majority Leader Chuck Schumer held an AI summit in Washington that included the centibillionaires Bill Gates, Mark Zuckerberg, and Elon Musk. Several weeks later, the United Kingdom hosted an international AI Safety Summit that led to the serious-sounding “Bletchley Declaration,” which urged international cooperation on AI regulation. The risks of AI fakery in elections have not sneaked up on anybody.

Yet none of this has resulted in changes that would regulate the use of AI in U.S. political campaigns. Even worse, the two federal agencies with a chance to do something about it have punted the ball, very likely until after the election.

On July 25, the Federal Communications Commission issued a proposal that would require political advertisements on TV and radio to disclose if they used AI. (The FCC has no jurisdiction over streaming, social media, or web ads.) That seems like a step forward, but there are two big problems. First, the proposed rules, even if enacted, are unlikely to take effect before early voting starts in this year’s election. Second, the proposal immediately devolved into a partisan slugfest. A Republican FCC commissioner alleged that the Democratic National Committee was orchestrating the rule change because Democrats are falling behind the GOP in using AI in elections. Plus, he argued, this was the Federal Election Commission’s job to do.

Yet last month, the FEC announced that it won’t even try to make new rules against using AI to impersonate candidates in campaign ads through deepfaked audio or video. The FEC also said that it lacks the statutory authority to make rules about misrepresentations using deepfaked audio or video. And it lamented that it lacks the technical expertise to do so, anyway. Then, last week, the FEC compromised, announcing that it intends to enforce its existing rules against fraudulent misrepresentation regardless of the technology used to commit it. Advocates for stronger rules on AI in campaign ads, such as Public Citizen, did not find this nearly sufficient, characterizing it as a “wait-and-see approach” to handling “electoral chaos.”

Perhaps this is to be expected: The freedom of speech guaranteed by the First Amendment generally permits lying in political ads. But the American public has signaled that it would like some rules governing AI’s use in campaigns. In 2023, more than half of Americans polled responded that the federal government should outlaw all uses of AI-generated content in political ads. Going further, in 2024, about half of surveyed Americans said they thought that political candidates who intentionally manipulated audio, images, or video should be prevented from holding office or removed if they had won an election. Only 4 percent thought there should be no penalty at all.

The underlying problem is that Congress has not clearly given any agency the responsibility to keep political advertisements grounded in reality, whether in response to AI or old-fashioned forms of disinformation. The Federal Trade Commission has jurisdiction over truth in advertising, but political ads are largely exempt—again, part of our First Amendment tradition. The FEC’s remit is campaign finance, but the Supreme Court has progressively stripped away its authority. Even where it could act, the commission is often stymied by political deadlock. The FCC has more evident responsibility for regulating political advertising, but only in certain media: broadcast, robocalls, text messages. Worse yet, the FCC’s rules are not exactly robust. It has actually loosened rules on political spam over time, leading to the barrage of messages many receive today. (That said, in February, the FCC did unanimously rule that robocalls using AI voice-cloning technology, like the Biden robocall in New Hampshire, are already illegal under a 30-year-old law.)

It’s a fragmented system, with many important activities falling victim to gaps in statutory authority and a turf war between federal agencies. And as political campaigning has gone digital, it has entered an online space with even fewer disclosure requirements or other regulations. No one seems to agree where, or whether, AI is under any of these agencies’ jurisdictions. In the absence of broad regulation, some states have made their own decisions. In 2019, California was the first state in the nation to prohibit the use of deceptively manipulated media in elections, and has strengthened these protections with a raft of newly passed laws this fall. Nineteen states have now passed laws regulating the use of deepfakes in elections.

One problem that regulators have to contend with is the wide applicability of AI: The technology can simply be used for many different things, each one demanding its own intervention. People might accept a candidate digitally airbrushing their photo to look better, but not doing the same thing to make their opponent look worse. We’re used to getting personalized campaign messages and letters signed by the candidate; is it okay to get a robocall with a voice clone of the same politician speaking our name? And what should we make of the AI-generated campaign memes now shared by figures such as Musk and Donald Trump?

Despite the gridlock in Congress, these are issues with bipartisan interest. This makes it conceivable that something might be done, but probably not until after the 2024 election and only if legislators overcome major roadblocks. One bill under consideration, the AI Transparency in Elections Act, would instruct the FEC to require disclosure when political advertising uses media generated substantially by AI. Critics say, implausibly, that the disclosure is onerous and would increase the cost of political advertising. The Honest Ads Act would modernize campaign-finance law, extending FEC authority to definitively encompass digital advertising. However, it has languished for years because of reported opposition from the tech industry. The Protect Elections From Deceptive AI Act would ban materially deceptive AI-generated content from federal elections, as in California and other states. These are promising proposals, but libertarian and civil-liberties groups are already signaling challenges to all of these on First Amendment grounds. And, vexingly, at least one FEC commissioner has directly cited congressional consideration of some of these bills as a reason for his agency not to act on AI in the meantime.

One group that benefits from all this confusion: tech platforms. When few or no evident rules govern political expenditures online and uses of new technologies like AI, tech companies have maximum latitude to sell ads, services, and personal data to campaigns. This is reflected in their lobbying efforts, as well as the voluntary policy restraints they occasionally trumpet to convince the public they don’t need greater regulation.

Big Tech has demonstrated that it will uphold these voluntary pledges only if they benefit the industry. Facebook once, briefly, banned political advertising on its platform. No longer; now it even allows ads that baselessly deny the outcome of the 2020 presidential election. OpenAI’s policies have long prohibited political campaigns from using ChatGPT, but those restrictions are trivial to evade. Several companies have volunteered to add watermarks to AI-generated content, but they are easily circumvented. Watermarks might even make disinformation worse by giving the false impression that non-watermarked images are legitimate.
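To illustrate how fragile one common watermarking approach is: when the provenance signal travels as file metadata rather than in the pixels themselves, simply re-encoding the image is enough to remove it. Below is a minimal sketch in Python using Pillow; the filenames are hypothetical, and this says nothing about pixel-level statistical watermarks, which are harder to strip but can often be degraded by cropping, rescaling, or re-compression.

```python
# Minimal sketch: removing metadata-based provenance from an image.
# Assumes the watermark travels as metadata (e.g., EXIF/XMP provenance tags);
# filenames are hypothetical.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-encode the image from its pixel data alone, discarding all metadata."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copy pixels only, no metadata
        clean.save(dst_path)                # saved file carries no provenance tags

# strip_metadata("ai_generated.jpg", "laundered.jpg")
```

Ordinary handling, such as screenshots or platform re-compression, often has the same effect without any deliberate effort, which is part of why watermarking alone is a weak defense.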

This important area of public policy should not be left to corporations, yet Congress seems resigned not to act before the election. Schumer hinted to NBC News in August that Congress may try to attach deepfake regulations to must-pass funding or defense bills this month to ensure that they become law before the election. More recently, he has pointed to the need for action “beyond the 2024 election.”

The three bills listed above are worthwhile, but they are just a start. The FEC and FCC should not be left to snipe at each other about which territory belongs to which agency. And the FEC needs more significant, structural reform to reduce partisan gridlock and enable it to get more done. We also need transparency into, and governance of, the algorithmic amplification of misinformation on social-media platforms. That requires limiting the pervasive influence of tech companies and their billionaire investors through stronger lobbying and campaign-finance protections.

Our regulation of electioneering never caught up to AOL, let alone social media and AI. And deceptive videos harm our democratic process, whether they are created by AI or by actors on a soundstage. But the urgent concern over AI should be harnessed to advance legislative reform. Congress needs to do more than stick a few fingers in the dike to control the coming tide of election disinformation. It needs to act more boldly to reshape the landscape of regulation for political campaigning.

This essay was written with Nathan E. Sanders, and originally appeared in The Atlantic.

Posted on September 30, 2024 at 7:00 AM

Comments

JG5 October 5, 2024 12:49 PM

I don’t think that many people understand the threat model of “AI” combined with the panopticon. Knowing everyone’s actions and words (e.g., from the continuous collection of audio from every cell phone that is powered on) is a very powerful dataset. I have said before that “they” will be able to perfectly spoof your voice (e.g., for the grandson in jail, needs bail money scam) and now even your video image. Not sure if it was discussed here, but some of the fake video generators can’t (yet) produce a full 3D model, so asking the person to turn their head to the side produces undefined results. It is just a matter of time until that flaw is corrected.

One of the least discussed aspects was implied by the Cambridge Analytica/Facebook scandal. Every person could be induced to “see” a different candidate via carefully crafted and carefully directed advertising. That is a form of man-in-the-middle attack, where the candidate’s actual words and actions, describing ideas and proposed policies, are intercepted and altered for each recipient, according to estimates of their response to various ideas. The candidate’s own election workers could deepfake various messages using actual candidate video as the input. I am reminded of Mark Twain’s wit: “Believe none of what you hear and only half of what you see.”

‘Fake Jake Tapper’ Introduces CNN Segment on Artificial Intelligence
William Vaillancourt, The Daily Beast, October 4, 2024
https://www.thedailybeast.com/fake-jake-tapper-introduces-cnn-segment-on-ai

Speaking of MIM attacks, Gluegle, Microscam, and Scamazon are all doing them to intercept publisher content and run their own ads on top of it. If you search with Microscam you get MSN and Yahoo results. Gluegle correctly gives you the pointer to the Daily Beast.

https://news.yahoo.com/news/fake-jake-tapper-introduces-cnn-015839329.html

License Plate Readers Are Creating a US-Wide Database of More Than Just Cars
From Trump campaign signs to Planned Parenthood bumper stickers, license plate readers around the US are creating searchable databases that reveal Americans’ political leanings and more.
https://www.wired.com/story/license-plate-readers-political-signs-bumper-stickers/
https://archive.ph/E5YVQ
Matt Burgess and Dhruv Mehrotra, WIRED Security, October 3, 2024

At 8:22 am on December 4 last year, a car traveling down a small residential road in Alabama used its license-plate-reading cameras to take photos of vehicles it passed. One image, which does not contain a vehicle or a license plate, shows a bright red “Trump” campaign sign placed in front of someone’s garage. In the background is a banner referencing Israel, a holly wreath, and a festive inflatable snowman.

Another image taken on a different day by a different vehicle shows a “Steelworkers for Harris-Walz” sign stuck in the lawn in front of someone’s home. A construction worker, with his face unblurred, is pictured near another Harris sign. Other photos show Trump and Biden (including “Fuck Biden”) bumper stickers on the back of trucks and cars across America. One photo, taken in November 2023, shows a partially torn bumper sticker supporting the Obama-Biden lineup.

These images were generated by AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers—all while recording the precise locations of these observations. Newly obtained data reviewed by WIRED shows how a tool originally intended for traffic enforcement has evolved into a system capable of monitoring speech protected by the US Constitution.

Clive Robinson October 5, 2024 11:20 PM

Where to start is the question

AI used this way is simply a tool used as a “force multiplier”.

It’s little different from using a chainsaw rather than an axe to “fell the forest”.

But saying “no chainsaws” will not stop the use of a different force multiplier, such as a form of “band saw”, to bring the forest down.

Thus stopping the use of AI is like putting a “band aid on a broken bone”: it just appears to solve a surface issue, not the deeper, more serious problem.

The actual problem is political extremism and the secrecy of communications that enables individuals with extreme ideas to form groups, and pull others in.

In an open society where communication was direct, person to person, and anonymity difficult, whilst it was possible for extremist groups to develop, the process was both slow and fraught with the risk of exposure and condemnation by society.

In the current E-Comms world, anonymity of individuals and communicating over distance are far easier, and thus extremist views are much easier and faster to promote.

I was reminded of this by,

https://arstechnica.com/tech-policy/2024/10/neo-nazis-head-to-encrypted-simplex-chat-app-bail-on-telegram/

Now that it’s been “made obvious to all” that Telegram gives no secrecy in ordinary or group messaging, those with extreme views are moving to a different, allegedly more secure platform called SimpleX.

The fact that this article has been written appears to have more to do with the spill-out from Twitter becoming X.

As the opening paragraph says,

“Dozens of neo-Nazis are fleeing Telegram and moving to a relatively unknown secret chat app that has received funding from Twitter founder Jack Dorsey.”

The points being,

1, Neo-Nazis (extremists)
2, Telegram (E-Comms)
3, Secret chat (E2E Encryption)
4, Twitter and Jack Dorsey.

As with such things the author is “building up” thus reading it backwards gives the intended message of,

“Jack Dorsey founder of Twitter is paying for encrypted communications for extremists”

To hide behind and thus flourish anonymously.

Much as I do not like the ideals of those people identified, it needs to be pointed out that “Secure Anonymous Communications” is,

“A tool agnostic to use, and it is the intent of the directing mind, as judged by observers, as to whether its use is good or bad.”

History is replete with examples of,

“Good tools used for bad purposes”

And in quite a number of cases the passage of time turns the view of “bad” to “good” in the viewpoint of observers.

What limits extremism of any form is the “sea anchor of society”: whilst bad may grow and spread in the dark, out of sight, only things society will tolerate can grow in the light.

The real problem is society can be both bad and extreme thus intolerant.

History shows that people will,

“knowingly vote en masse for extremists”

all too often with little or no excuse.

Banning AI or any other “force multiplier” will not stop this. But ensuring the “spotlight” clearly illuminates it, what it is being used for, and by whom, will enable society to decide.

Clive Robinson October 6, 2024 5:04 PM

AI and the Power Crunch

I’ve mentioned on the odd occasion that crypto-coins, Web3 NFTs, and AI LLM/ML systems are with little doubt “environmental disasters”, and whilst there will always be deniers, they are a strong contributor to what is (incorrectly but catchily) labelled “Global Warming”.

There are many sarcastic jokes that have the punchline,

“You can have any two (of the three)”.

Well there is the same punchline for AI and its disastrous power requirements… which the smallish island nation of Taiwan is now facing.

Rather than me go through it all again, it’s easier to just link to a recent article from Yale discussing the issue in more depth than a comment box here will accept,

https://e360.yale.edu/features/taiwan-energy-dilemma

But before people say,

“Oh that’s just a little island in the South China Sea”

and before grabbing another overpriced coffee with a name you almost have to say as a rhyme, consider the following cause-and-effect chain:

1, The AI “plans” are a “Global Political Goal” of many nations governments, thus maybe 100 countries are running into this “Power Crunch” disaster.

2, Power generation is a finite resource, thus a “limited commodity”, in any location; as such it is a “slice of the pie” issue.

3, All slice of the pie pricing is controlled by the old economic rule that says the price of a “limited commodity” rises with demand.

4, As it’s a “limited commodity” you can not increase supply to bring the price down.

5, The price anyone pays is also dependent on the bargaining power they have. Governments and Mega Corps have most, individual citizens the least.

6, Therefore the price of AI will be paid for by the citizens through vastly increased energy prices, at probably twice the proportional percentage AI takes of the energy supply.

Have a think on the major effect of “Global Warming”: it’s actually that the highs and lows of atmospheric temperature get ever more extreme. As humans do not like this, because it causes, amongst other things, “premature death”, one of the biggest uses of energy by the average citizen is “moving heat around” their homes.

The causal effect chain says that AI LLM and ML systems are an “existential threat” and provably will kill people, starting with those at the bottom of the socioeconomic ladder and working up.

We saw in Texas what happened with energy pricing, who got hit the hardest, and those who unfortunately died because of it. That was due to a very brief interruption in power supply. Remember that the heating/cooling effect of a home or other object is based on a “percentage of the difference” against time. Texas was lucky because the temperature extreme was short lived “that time”…
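To make the “percentage of the difference against time” point concrete, here is a minimal sketch using Newton’s law of cooling; the time constant and temperatures are assumed, illustrative numbers, not measurements of any real building.

```python
# Minimal sketch of Newton's law of cooling: once heating fails, the indoor
# temperature drifts toward the outdoor temperature at a rate proportional to
# the difference between them. tau_hours is an assumed, illustrative value.
import math

def indoor_temp(t_hours: float, t_start: float, t_outdoor: float,
                tau_hours: float = 40.0) -> float:
    """Indoor temperature t_hours after the power (and heating) fails."""
    return t_outdoor + (t_start - t_outdoor) * math.exp(-t_hours / tau_hours)

# A short outage vs. a long one during a -10 C cold snap (hypothetical numbers):
for hours in (6, 24, 72):
    print(hours, round(indoor_temp(hours, t_start=21.0, t_outdoor=-10.0), 1))
# 6 h: ~16.7 C (uncomfortable); 24 h: ~7.0 C; 72 h: ~-4.9 C (dangerous)
```

The longer and more extreme the excursion, the closer the indoors gets to the outdoors, which is why a brief interruption is survivable and a prolonged one is not.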

But climate change is going to make such temperature excursions not only more frequent, but more extreme thus longer in duration and premature deaths will as a consequence rise.

But it’s not just cold, look at the heat deaths in South and East Europe where a “continental climate” encourages temperature extremes in both winter and summer.

Oh, and do not forget those “fires” all up the US West Coast; they are also exacerbated by Climate Change. They kill people now, and cause cancers that will kill people in the future. The price of medical intervention will also rise as a consequence, and more people will be priced out or triaged out at the bottom of the socioeconomic ladder.

Also do not forget the effects on food production, causing diminished supply and thus other life-reducing issues.

The “butcher’s bill of AI” will be high, and with current AI systems the returns will be of little or no use overall…

ResearcherZero October 7, 2024 2:21 AM

@Bruce

Depending on the estimate, between two-thirds and three-quarters of all money spent on lobbying is spent on behalf of businesses…

“more than three decades of disinvesting in government’s capacity to keep up with skyrocketing numbers of lobbyists and policy institutes, well-organized partisans, and an increasingly complex social and legal context. Instead, policymakers have increasingly turned to the information and analytical capacity provided for them by those with the biggest material and ideological stakes in the outcome. This dependence has created a power asymmetry crisis that has been quietly building for almost four decades.”

https://www.theatlantic.com/politics/archive/2015/03/when-congress-cant-think-for-itself-it-turns-to-lobbyists/387295/

The Library of Congress should create a website that will become the de facto online forum and clearinghouse for all public policy advocacy. Such a website would both level the playing field (it is much cheaper to post a web page than to hire an army of lobbyists to descend on Washington) and increase transparency and accountability (if all positions and arguments are public, everyone knows who is lobbying for what and why).

https://www.brookings.edu/articles/a-better-way-to-fix-lobbying/

ResearcherZero October 8, 2024 4:48 AM

Without regulation, industry will expand to fill the available void. If AI is not properly regulated, and tech companies fail to moderate, that void will increasingly be filled with garbage, potentially leading to garbage as both input and output.

“Russian “influence actors” have amplified stories about migrants entering the U.S. in an attempt to stoke discord, according to the Department of Homeland Security report, and have used generative AI to create fake websites that appeared to be authentic U.S.-based media outlets.”

‘https://www.reuters.com/world/us/russia-iran-china-expected-use-ai-try-influence-us-election-report-says-2024-10-02/

Major tech platforms are failing to respond to deliberate misinformation…

The machine was an old WINVote machine, not used in elections for a decade.
Michael Flynn reposted Behizy’s post about the hack, failing to mention the context.

‘https://www.wired.com/story/trump-supporters-hacking-voting-machine/

“Until recently, election administration was demonstrably nonpartisan, but in many states it has now become a partisan issue.”

https://www.brookings.edu/articles/understanding-democratic-decline-in-the-united-states/

Howard October 8, 2024 9:09 PM

Dishonesty in political ads is not new. It’s older than your great-grandparents.

Calling it “disinformation” or some other modern scare word does not change this.

Steve K October 11, 2024 1:43 PM

I had a frightening thought reading the first part of this article. If a foreign government were to begin collecting the names of thousands or hundreds of thousands of people posting on social media, and then imitate them with AI-generated posts that claim to want to cause harm to candidates, two things happen: 1) the Secret Service and FBI must verify each one, causing a massive manpower overload, and 2) how is a person who was imitated ever going to prove they didn’t post it? The real problem is, with hundreds of thousands of fake threats, will they miss the one real one? This scares the c*ap out of me on so many levels.

Clive Robinson October 12, 2024 4:54 AM

@ SteveK, ALL,

Re : Protective Agency Resources

You raise two points and a conclusion based on them.

The first is,

“the Secret Service and FBI must verify each one causing a massive manpower overload”

Not true, and it’s not the way such agencies work unless “politically motivated” to do so.

They have never had the resources to work that way and they never will. Because there are other less resource intensive ways of achieving the comparable objective.

Your second point is,

“how is a person who was imitated ever going to prove they didn’t post it?”

This has been an issue since we’ve had the ability to make recordings and replay them, or imitate people’s voices, etc.

There is a saying that,

“You can not prove a negative”

Much as I dislike that saying, what you do is prove a positive either directly or indirectly.

You cannot prove that it was not your voice that loudly and repeatedly proclaimed “fire” in a theater, but you can demonstrate you were somewhere else at or around the time claimed.

Whilst that is by no means proof it was not your voice that a hundred or so people heard, it does make it improbable that it came directly from you at that time.

Which brings us to your conclusion of,

“The real problem is with hundreds of thousands of fake threats will they miss the one real one?”

You’ve misunderstood the problem.

The reality is nobody actually cares about “fake threats” unless they can make some advantage out of them, politically or otherwise. Put simply, they are a nuisance, like rubbish blown in the wind.

What people actually care about is “prevention” of real threats.

Primarily because it’s assumed that you will never be forewarned of an actual attack, protection is based on the assumption that an attack is always imminent.

In physical security you have zones that are in effect rings around what is being protected. You look for potential threats in each zone and deflect them. If they can not be deflected then you act more directly to intercept and if required neutralise them.

It’s the principle of segregation, in that if a potential attacker can not get to their chosen target, then generally the target can not be attacked.

However it’s also understood that an attack might as a first step be to “pin the target down” in some way or to “deny access” to an objective.

The defense against this is contingency planning, one part of which is to actively avoid bottlenecks/choke points and, obviously, dead ends.

There is the “PACE” acronym used in all sorts of planning that stands for

(P)rimary
(A)lternate
(C)ontingency
(E)mergency

You can look it up, but put simply they are not just different courses of action; they also include switch indicators, such that all team members stay in sync/step even if isolated from other team members. That is, situational awareness should inform them what course of action is currently in play.

PACE can be general or more specific. General is usually done by repeated training. It’s how the army, for instance, gets soldiers to respond effectively, almost instinctively, in certain ways when they come under fire.
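As a purely hypothetical illustration of those “switch indicators”, a PACE communications plan can be written down as data: each course of action is paired with the observable condition that triggers falling back to the next, so an isolated team member can work out which plan is in effect from what they can observe. The channels and triggers below are invented for illustration.

```python
# Hypothetical PACE communications plan as data: each course of action carries the
# observable "switch indicator" that tells an isolated team member to fall back to
# the next one. Channel names and indicators are invented for illustration.
PACE_PLAN = [
    {"level": "Primary",     "channel": "team radio net",
     "fallback_when": "no radio check answered for 15 minutes"},
    {"level": "Alternate",   "channel": "cell phone / SMS",
     "fallback_when": "no cellular service or no reply within 30 minutes"},
    {"level": "Contingency", "channel": "runner to rally point A",
     "fallback_when": "rally point A unreachable or compromised"},
    {"level": "Emergency",   "channel": "prearranged visual signal",
     "fallback_when": None},  # last resort; nothing to fall back to
]

def current_course(observed_failures: set) -> str:
    """Pick the first course of action whose switch indicator has NOT been observed."""
    for step in PACE_PLAN:
        if step["fallback_when"] not in observed_failures:
            return step["level"]
    return "Emergency"

# Example: the radio indicator has tripped, so everyone independently moves to Alternate.
print(current_course({"no radio check answered for 15 minutes"}))  # -> "Alternate"
```

The point is that the plan itself, not a message from someone else, tells each member when to move to the next course of action.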
