AI and US Election Rules

If an AI breaks the rules for you, does that count as breaking the rules? This is the essential question being taken up by the Federal Election Commission this month, and public input is needed to curtail the potential for AI to take US campaigns (even more) off the rails.

At issue is whether candidates’ use of AI to create deepfaked media for political advertisements should be considered fraud or legitimate electioneering. That is, is it allowable to use AI image generators to create photorealistic images depicting Trump hugging Anthony Fauci? And is it allowable to use dystopic images generated by AI in political attack ads?

For now, the answer to these questions is probably “yes.” These are fairly innocuous uses of AI, not any different than the old-school approach of hiring actors and staging a photoshoot, or using video editing software. Even in cases where AI tools will be put to scurrilous purposes, that’s probably legal in the US system. Political ads are, after all, a medium in which you are explicitly permitted to lie.

The concern over AI is a distraction, but one that can help draw focus to the real issue. What matters isn’t how political content is generated; what matters is the content itself and how it is distributed.

Future uses of AI by campaigns go far beyond deepfaked images. Campaigns will also use AI to personalize communications. Whereas the previous generation of social media microtargeting was celebrated for helping campaigns target groups as small as thousands or even hundreds of voters, the automation offered by AI will allow campaigns to tailor their advertisements and solicitations to the individual.

Most significantly, AI will allow digital campaigning to evolve from a broadcast medium to an interactive one. AI chatbots representing campaigns are capable of responding to questions instantly and at scale, like a town hall taking place in every voter’s living room, simultaneously. Ron DeSantis’ presidential campaign has reportedly already started using OpenAI’s technology to handle text message replies to voters.

At the same time, it’s not clear whose responsibility it is to keep US political advertisements grounded in reality—if it is anyone’s. The FEC’s remit is campaign finance, and even that role has been circumscribed by the Supreme Court’s repeated stripping of its authority. The Federal Communications Commission has much more expansive responsibility for regulating political advertising in broadcast media, as well as political robocalls and text communications. However, the FCC hasn’t done much in recent years to curtail political spam. The Federal Trade Commission enforces truth in advertising standards, but political campaigns have been largely exempted from these requirements on First Amendment grounds.

To further muddy the waters, much of the online space remains loosely regulated, even as campaigns have fully embraced digital tactics. There are still insufficient disclosure requirements for digital ads. Campaigns pay influencers to post on their behalf to circumvent paid advertising rules. And there are essentially no rules beyond the simple use of disclaimers for videos that campaigns post organically on their own websites and social media accounts, even if they are shared millions of times by others.

Almost everyone has a role to play in improving this situation.

Let’s start with the platforms. Google announced earlier this month that it would require political advertisements on YouTube and the company’s other advertising platforms to disclose when they use AI images, audio, and video that appear in their ads. This is to be applauded, but we cannot rely on voluntary actions by private companies to protect our democracy. Such policies, even when well-meaning, will be inconsistently devised and enforced.

The FEC should use its limited authority to stem this coming tide. The FEC’s present consideration of rulemaking on this issue was prompted by Public Citizen, which petitioned the Commission to “clarify that the law against ‘fraudulent misrepresentation’ (52 U.S.C. §30124) applies to deliberately deceptive AI-produced content in campaign communications.” The FEC’s regulation against fraudulent misrepresentation (11 C.F.R. §110.16) is very narrow; it simply restricts candidates from pretending to be speaking on behalf of their opponents in a “damaging” way.

Extending this to explicitly cover deepfaked AI materials seems appropriate. We should broaden the standards to robustly regulate the activity of fraudulent misrepresentation, whether the entity performing that activity is AI or human—but this is only the first step. If the FEC takes up rulemaking on this issue, it could further clarify what constitutes “damage.” Is it damaging when a PAC promoting Ron DeSantis uses an AI voice synthesizer to generate a convincing facsimile of the voice of his opponent Donald Trump speaking his own Tweeted words? That seems like fair play. What if opponents find a way to manipulate the tone of the speech in a way that misrepresents its meaning? What if they make up words to put in Trump’s mouth? Those use cases seem to go too far, but drawing the boundaries between them will be challenging.

Congress has a role to play as well. Senator Klobuchar and colleagues have been promoting both the previously introduced Honest Ads Act and the newly proposed REAL Political Ads Act, which would expand the FEC’s disclosure requirements for content posted on the Internet and create a legal requirement for campaigns to disclose when they have used images or video generated by AI in political advertising. While that’s worthwhile, it focuses on the shiny object of AI and misses the opportunity to strengthen law around the underlying issues. The FEC needs more authority to regulate campaign spending on false or misleading media generated by any means and published to any outlet. Meanwhile, the FEC’s own Inspector General continues to warn Congress that the agency is stressed by flat budgets that don’t allow it to keep pace with ballooning campaign spending.

It is intolerable for such a patchwork of commissions to be left to wonder which, if any of them, has jurisdiction to act in the digital space. Congress should legislate to make clear that there are guardrails on political speech and to better draw the boundaries between the FCC, FEC, and FTC’s roles in governing political speech. While the Supreme Court cannot be relied upon to uphold common sense regulations on campaigning, there are strategies for strengthening regulation under the First Amendment. And Congress should allocate more funding for enforcement.

The FEC has asked Congress to expand its jurisdiction, but no action is forthcoming. The present Senate Republican leadership is seen as an ironclad barrier to expanding the Commission’s regulatory authority. Senate Majority Leader Mitch McConnell has a decades-long history of being at the forefront of the movement to deregulate American elections and constrain the FEC. In 2003, he brought the unsuccessful Supreme Court case against the McCain-Feingold campaign finance reform act (the one that failed before the Citizens United case succeeded).

The most impactful regulatory requirement would be to require disclosure of interactive applications of AI for campaigns—and this should fall under the remit of the FCC. If a neighbor texts me and urges me to vote for a candidate, I might find that meaningful. If a bot does it under the instruction of a campaign, I definitely won’t. But I might find a conversation with the bot—knowing it is a bot—useful to learn about the candidate’s platform and positions, as long as I can be confident it is going to give me trustworthy information.

The FCC should enter rulemaking to expand its authority for regulating peer-to-peer (P2P) communications to explicitly encompass interactive AI systems. And Congress should pass enabling legislation to back it up, giving it authority to act not only on the SMS text messaging platform, but also over the wider Internet, where AI chatbots can be accessed over the web and through apps.

And the media has a role. We can still rely on the media to report on which videos, images, and audio recordings are real and which are fake. Perhaps deepfake technology makes it impossible to verify the truth of what is said in private conversations, but this was always unstable territory.

What is your role? Those who share these concerns could submit a comment to the FEC’s open public comment process before October 16, urging it to use its available authority. We all know government moves slowly, but a show of public interest is necessary to get the wheels moving.

Ultimately, all these policy changes serve the purpose of looking beyond the shiny distraction of AI to create the authority to counter bad behavior by humans. Remember: behind every AI is a human who should be held accountable.

This essay was written with Nathan Sanders, and was previously published on the Ash Center website.

Posted on October 20, 2023 at 7:10 AM

Comments

Clive Robinson October 20, 2023 8:50 AM

@ Bruce, ALL,

Re : Responsibility

“If an AI breaks the rules for you, does that count as breaking the rules?”

Yes, because as the argument with drones on the battlefield points out,

“The bullet still hits you if the gun is fired by a human hand directly or a machine created by human hand.”

In the case of “willful action” the responsibility lies with the directing mind giving the order.

Likewise, in other cases where action has not been directly taken by a “directing mind”, the question of responsibility is not avoided. That is, it moves to whether their negligence / inaction is defensible or not in the eyes of observers.

This generally boils down to,

1, Legislation / Regulation
2, Available Resources
3, Available Time
4, Available Information / knowledge
5, Harms minimisation.

All judged by a supposedly “fully independent” and “impartial” observer against the mores, morals, and ethics of the society/environment they come from (but not of the directing mind).

Clive Robinson October 20, 2023 9:17 AM

@ Bruce, ALL,

Re : ChatBot surveillance

As I’ve pointed out in the past, ChatBots will probably turn out to be the most insidious surveillance tool yet created. And they are only a code line or two away from being the most pernicious of interrogators.

Which brings us to,

“AI chatbots representing campaigns are capable of responding to questions instantly and at scale, like a town hall taking place in every voter’s living room, simultaneously.”

No, not like a “town hall” meeting, but a “gilded cage” interrogation and retraining camp.

The AI, by analysing your words in just a sentence or two, categorizes you by where you are from and how you got there in life, and thus which buttons are most likely to work on you. It then moves into echo-chamber-type tactics to, in effect, give an individual a “cognitive bias”.

If people think this sounds “far fetched”, not really: we already know research has started and is being reported in the public domain. Read the Wired link at the bottom of my comment yesterday, which describes the first part of the process,

https://www.schneier.com/blog/archives/2023/10/friday-squid-blogging-on-squid-intelligence.html/#comment-427821

Clive Robinson October 20, 2023 9:28 AM

@ Bruce,

Today is the 20th of October.

In the essay it says,

“What is your role? Those who share these concerns can submit a comment to the FEC’s open public comment process before October 16

Is the date wrong or has the process closed?

yet another bruce October 20, 2023 9:56 AM

While we are exploring the transitive properties of legal culpability, can someone please clarify whether candidates accept any responsibility when they retweet material that originates outside their campaign organization? At least initially, the most inflammatory deep-fake agitprop is going to be prepared by trolls and then amplified by campaigns.

Generative AI is going to make this kind of agitprop cheaper and more plentiful but trolls have been hard at work for many years with existing tools. Consider things like Project Veritas and 2000 Mules.

Tim Stevens October 20, 2023 10:53 AM

AI in this context is just a technological extension of manually creating fake news/ads. Like the farms in Vietnam paying $0.75 for writing a fake “How to” page, posted to be monetized with click-through ads. A click-through ad to a hotel reservation in Hawaii can apparently generate $25 of revenue. So take AI out of this discussion and decide if the originating person is acting illegally or is liable, with AI just their tool of choice.

Anonymous October 20, 2023 11:27 AM

If an AI breaks the rules for you, does that count as breaking the rules?
Yes. Behind every AI is a human who should be held accountable.

AI will allow digital campaigning to evolve from a broadcast medium to an interactive one.
Interactive entertainment through automated manipulation.

What is your role?
My role is to find out what is the role of technology in my future.

Decypher October 20, 2023 12:38 PM

”not any different than the old-school approach of hiring actors and staging a photoshoot, or using video editing software.”

I’d say that is an incorrect analogy. When hiring an actor, the actor has the decision to do what is right or wrong, like hiring an assassin: both the assassin and the person who did the hiring would go to prison for a murder.

AI is a TOOL, so it would be more like: if you killed someone with a gun (AI), who would be responsible? The human, not the gun.

However, there are safety requirements on guns like a safety so you can’t accidentally pull the trigger. This is where AI is lacking and needs to be held accountable.

mark October 20, 2023 12:41 PM

Any use of AI in elections should be banned. One small use is a misdemeanor, with the fine equivalent to the money spent on the AI. Three uses, or spending more than $5M on a major use, is a felony, and the candidate, or the chief exec of the PAC goes to jail, not probation, for up to five years (minimum six months).

Steve October 20, 2023 7:00 PM

In my fantasy world, if a candidate could be proved to have been lying (as opposed to genuinely mistaken) in a campaign ad, speech, news program appearance, etc., they would be immediately removed from office and barred from ever running again.

Yeah, I know.

I’d also like a pony.

ResearcherZero October 21, 2023 1:54 AM

The process is often closed without an orange pass. Companies and governments will continue to push farther into our private data and we will all struggle to keep them out.

Indexing an increasing lack of transparency from tech companies.

‘https://hai.stanford.edu/news/introducing-foundation-model-transparency-index

LexisNexis, for example, remains a good example of a malicious actor.

“LexID Digital combines a unique identifier, a confidence score and a visualization graph to genuinely understand a user’s unique digital identity across all channels and touchpoints.” …they have a claim called “True Location” which sounds a bit like they attempt to de-anonymize people using VPNs.

“It’s not just eBay scanning your ports, there is allegedly a network of 30,000 websites out there all working for the common aim of harvesting open ports, collecting IP addresses, and User Agents in an attempt to track users all across the web…”

‘https://blog.nem.ec/2020/05/24/ebay-port-scanning/

Pop this quote up on the fridge:

“The overwhelming consensus from privacy experts is that this plan has little to do with protecting privacy and everything to do with protecting market share.”

‘https://www.theguardian.com/technology/2019/mar/17/the-cambridge-analytica-scandal-changed-the-world-but-it-didnt-change-facebook

ResearcherZero October 21, 2023 2:23 AM

Growth Hacking is a Public Relations method to suck in users through various brain-jacking techniques and other incentives to grow new customers (free giveaways, free feature access, financial reward for new customer referrals, bots). Once the user base is growing, extra fees and more aggressive data exploitation are employed to monetize the user base (the customer being the actual product that is for sale).

During this process of growth hacking, the company will profess values of freedom, loyalty, and pledge to protect customer privacy. Exploiting the user is the core objective.

“Some of Thiel’s business interests rely on the FBI and other government agencies as potential revenue sources.”

‘https://www.businessinsider.com/peter-thiel-fbi-informant-charles-johnson-johnathan-buma-chs-genius-2023-10

ResearcherZero October 21, 2023 3:19 AM

“A publisher’s audience is their currency. No matter how they make money from content—be it through advertising, paid subscription or syndication, a publisher’s core asset is audience and audience data.” – AOL

Facebook and Google’s business model is filtering what you see in order to drive clicks. “As a result, people get into these echo chambers.”

It’s manipulative, driving consumption and making people believe things that they don’t want to believe. It couldn’t exist if everyone really understood it clearly.
https://www.pcmag.com/news/how-companies-turn-your-data-into-money

The Machine Is Learning

VC firms invest lots of money in promising-looking services/technology with the expectation they’ll make big money and gain a return on investment in the form of ownership stakes. When the company is bought out or goes public, it’s massive sacks of cash for everybody. Mining for data (money) has never been so profitable.

The data mining techniques that underpin these analyses can be divided into two main purposes; they can either describe the target dataset or they can predict outcomes through the use of machine learning algorithms. …Advances within artificial intelligence only continue to expedite adoption across industries.

‘https://www.datamation.com/big-data/facebook-and-data-mining-is-anything-private/

Winter October 22, 2023 3:57 AM

@Ricky

The average election denier I see . . .

Indeed, when you lose by a wide margin because the majority does not want you, just deny these voters exist.

Kevin October 22, 2023 6:23 PM

Surely the rules should be similar to current advertising rules. (I’m in Australia – not sure if the rules are the same in the USA).

An ad is OK if it’s unbelievable, but if it’s believable it must be factual/correct. So Red Bull can show an ad of someone drinking Red Bull and flying, because the average person KNOWS that it’s a lie. But stating that Red Bull helps you lose weight is believable, and is therefore only allowed if it can be proven.

(There was a case a while back where a campervan salesman told a customer they could put the campervan into cruise control, and go back and make a coffee. Someone did, and crashed, but the courts said the salesman was at fault because his statement may be believed by the average person!)

Can’t we extend this to digital media and political campaigns? Showing a video/picture of Trump’s face on a cartoon body shaking hands with a terrorist is obviously fake, so it would be allowed. But a realistic AI or deep fake showing the same would be interpreted as real by the average person and therefore must be provable or true.

This law could also apply to deep fake porn. If it’s ‘obviously’ a fake – then it’s allowed. But if it’s realistic enough that the average person would believe it, then it’s not allowed.

Think of the Idiots October 22, 2023 10:38 PM

@Kevin

Even our esteemed – volunteer blog spam eliminator missed this context

‘there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities’

I’d say it’s not ordinary at all, and we’d all be much better off admitting the truth rather than letting others profit off it at our expense

PaulBart October 23, 2023 6:55 AM

@Ricky
No, no. China and North Korea have democracy and free and fair elections, they say so.

Winter October 23, 2023 7:13 AM

@PaulBart

China and North Korea have democracy and free and fair elections, they say so.

You do not have to supply evidence for the claim that you are an election denier. We already know.

Gert-Jan October 23, 2023 7:34 AM

… require … to disclose when they use AI images, audio, and video that appear in their ads.

This is totally useless. All ads will include this, if needed. What is that supposed to tell the viewer, or warn the viewer about?

The most impactful regulatory requirement would be to require disclosure of interactive applications of AI for campaigns

I think it would be good to require all interactive applications (regardless of their field of use) to proactively disclose that they are not human.

However, there are safety requirements on guns like a safety so you can’t accidentally pull the trigger. This is where AI is lacking and needs to be held accountable.

I agree with Kevin and Decypher. With the note that the AI may be responsible, but you have to hold the AI producer / manager accountable. And that might actually be very messy, since that might involve anyone from the producer, to anyone else who messed with the AI, up to and including the person deploying the AI.

Chris Becke October 24, 2023 1:52 AM

The idea that a political campaign might use AI to create an interactive experience with their campaign raises, for me, the question: how do LLM AIs deal with doublethink?

So, Ron DeSantis is at war with woke but his own lawyers revealed that woke means admitting that racism exists, so how would a LLM be trained to respond with anti-woke positions without, perhaps, becoming overtly racist?

Either way, as a campaign manager I would not be confident that an LLM would not say something stupid and entirely detrimental to millions.

Clive Robinson October 24, 2023 7:01 AM

@ Chris Becke

Re : AI output being prejudiced.

What you see as potential prejudice is part of what is called bias[1] in the LLM “weights”.

“…so how would a LLM be trained to respond with anti-woke positions without, perhaps, becoming overtly racist?”

As has been found, an LLM’s output is biased not just by what is input to it but by the order in which it is input, no matter how “pure” the input is. So far the solution is to pay people to apply corrections to what appears to be aberrant output from the LLM.

Likewise, it’s been found that having previous LLM output fed back into an LLM as even quite a small fraction of its input causes not just bias but degenerative output, so after just a few turns through, the LLM is effectively beyond recovery even by human corrective input.

So assume that bias or in this case prejudice will come built in to some extent.

But further consider, the response of people to the LLM will almost certainly get fed back in to “adjust” the LLM to the attitudes of those who are daft enough to interact with it.

It’s not hard to see that what the LLM gets back will be a reflection of its output… So it won’t be “pure” input…

Thus I would expect the LLM to become increasingly biased, and thus prejudiced, in a way not unlike what we see with humans who get caught up in online echo chambers.

[1] All LLMs are biased to start off with, as they are “trained” or, as some mistakenly choose to call it, “learn”. The process is to use a form of feedback that “adjusts the weights” used in the MAD inputs to the individual “neurons”. In effect it almost randomly adjusts errors –i.e. effectively bias– until a desired statistical output happens. This hit-or-miss process is in part because the normalisation and nonlinear output of each “neuron” is in effect a one-way function.

Scott November 18, 2023 12:11 PM

How can you write a whole op-ed on regulation of political speech and not mention freedom of speech? Even if you think AI is a big enough threat to deserve an exception to freedom of speech, or that preventing deceptive use of AI is congruent with free speech, the argument needs to be made.

