Deepfakes and the 2024 US Election

Interesting analysis:

We analyzed every instance of AI use in elections collected by the WIRED AI Elections Project (source for our analysis), which tracked known uses of AI for creating political content during elections taking place in 2024 worldwide. In each case, we identified what AI was used for and estimated the cost of creating similar content without AI.

We find that (1) half of AI use isn’t deceptive, (2) deceptive content produced using AI is nevertheless cheap to replicate without AI, and (3) focusing on the demand for misinformation rather than the supply is a much more effective way to diagnose problems and identify interventions.

This tracks with my analysis. People share as a form of social signaling. I send you a meme/article/clipping/photo to show that we are on the same team. Whether it is true, or misinformation, or actual propaganda, is of secondary importance. Sometimes it’s completely irrelevant. This is why fact checking doesn’t work. This is why “cheap fakes”—obviously fake photos and videos—are effective. This is why, as the authors of that analysis said, the demand side is the real problem.

Posted on February 4, 2025 at 7:01 AM • 7 Comments

Comments

R.Cake February 4, 2025 10:21 AM

@Brian, this is the case specifically for AI-generated content. Of course, misinformation really is an issue: it maliciously misleads those who are easily misled. But as the source says, item (2) applies; no AI is necessary to mislead people effectively. All it takes is a talking head delivering a lengthy, barely coherent sermon on a variety of topics while claiming to be whatever kind of expert they claim to be. Nearly any person can do this without worrying about computing power limitations.

Not anonymouse February 4, 2025 5:09 PM

So let me get this straight… AI isn’t an issue because all AI-generated fakes are just as easily generated (and to exactly the same level of believability) without AI?

There are a couple of key points I’m making here: the “just as easily” part and the “same level of believability” part. If it’s in any way easier to do with AI overall, or easier to make something look as real or more real, then you have a possible (soon coming, if not here already) problem: a greater volume of realistic-looking fakes, because making them becomes more accessible. With a greater volume of realistic-looking fakes, more people will believe them, and with more people believing them, it has an effect on society as a whole.

I’m harping on this because “nevertheless cheap to replicate without AI” does not necessarily mean the same thing as “just as easy to make without AI.” Money is not the only way to measure easiness. The amount of actual work required, the speed at which it can be done, or how accessible it is to the common man are other ways to measure it, and they have potentially different outcomes than just measuring money.

You can just say “well, I’m not easily misled, so it’s someone else’s problem,” but the fact is that everyone has biases, even you, smart guy. Therefore everyone has confirmation bias, even you. Everyone more easily sees the truth in what they already think is true, even you. And everyone can be manipulated to some degree, even you. And this builds up over time for everyone: a little manipulation now, a bit more later on, and a bit more after that adds up to quite a bit eventually. For everyone, even for you.

Would you like to be more manipulated or more free? Then be very careful about what you watch, listen to, do, or even think about. Everything has its effect.

Clive Robinson February 4, 2025 10:00 PM

@ Bruce, ALL,

The authors make three points in the part you quote.

But if we look at the first two,

“We find that (1) half of AI use isn’t deceptive, (2) deceptive content produced using AI is nevertheless cheap to replicate without AI,”

It begs the question,

“What is deceptive?”

For years the visual side of music has had “lipsyncing” done by amateurs and experts alike.

Is it actually,

“Deceptive when expected?”

Likewise even the best of models in photographs get “touched up” to remove small oft unimportant marks, blemishes and imperfections.

Likewise you can get apps for your phone that can smooth lines, slim things down and generally make someone look not just more attractive, but younger or less stressed or more confident.

They are all deliberate and willful deception, as technically they are all some form of “misrepresentation or fraud”, but does anyone actually care?

More and more I see “Shock Horror XXX is YYY” where YYY is almost “click-baity” to push product under eyeballs (See UK Daily Mail newspaper as a repeat offender of this behaviour).

Societally we have “white lies”, that is, “little untruths”, such as answers to the age-old “Do stripes make me look fat?”, where the answer being sought is almost certainly not the truth but some sign of comfort or demonstration of affection.

Let’s be honest: even going back to pre-Nixon times in the US, lies were told about candidates to try to vilify them in others’ minds.

Thus should we really care that AI is just a logical progression of vilification and deception that is more than half a century old?

Thus people should try to put themselves, in effect, a decade or so into the future and take a “looking back” point of view.

Will we see what is currently AI deception as just “part of the game”, or as something that “should have been stamped out and obliterated”?

In Russia and other similar authoritarian political systems, where “the leader” had to be in effect omnipotent, in effect a deity, people who had fallen out of favour for some reason would quite literally be “painted out” of the historical record, and thousands were employed in doing so.

Do I actually care if it’s a highly skilled artisan or a “billion dollar brain” computer doing the,

“Wipe out and touch up?”

Surprisingly I suspect most after a little thought would say “No”.

That is it’s not the mechanism but the intent that is of importance.

With “free speech” pushed as far as it has been, legislating against “intent” is going to be hard (see the issues with “defamation” to see why).

Thus I can see the same arguments being used in a decade or more, getting squashed under the wheel as it turns, just each time with the rut a little deeper and the dirt thrown all the more.

The simple fact is,

“You can not legislate against ‘gaming the system’, and ‘boundaries will be pushed’ ever harder.”

Thus are we even interested in the “deception”? I’d argue actually no; what we care about is the ability to hate, dislike, or worse those our cognitive bias takes exception to.

We are by nature for various reasons “tribal” in nature, and not that long ago “tribal” was effectively a “fight to the death”.

As some will know there have been events in the past day in Europe that are quite frankly horrific.

We know there will be people arguing that the “lies and deception” are to blame… But are they really?

That is are they,

“The cause or the excuse?”

As I get older I think it’s the “excuse”, and almost a “loyalty test to the tribe” by those who, for whatever reason, chose to become what they are and thus do what they have done.

Thus we get to the question of “The moral compass” in an individual, be it lacking or overly strong it is a problem for society.

It is thus a question of how people in effect develop a faulty moral compass and so become a threat to society.

Whilst it could be argued that “deception” is how, that does not really hold water as an argument. This has been known politically since the 1990s, with the ideas behind “nudge units” etc. So why do we still do it?

ResearcherZero February 7, 2025 11:28 AM

There is little coverage of the objective of operations. Much of the focus is on entry points and initial access (whether this is your brain or a physical network). The resulting discussions are often external debates unfocused on what an adversary aims to achieve. The specific long-term targeting and success of a single campaign, and the other operations that support the overall mission, receive very little analysis in public reports.

Operations target existing gaps and weaknesses in the structures of societies: doubts, uncertainty, disputes, anxieties, ego, greed, all of the openings that can be leveraged. If an opportunity is to be found, it will certainly be easier to infiltrate in moments of discord or upheaval within the structures and institutions of the targeted population. The best defenses against such moments of peril are the experienced and knowledgeable people who staff the various departments and agencies that normally swing into action to repel the attack and then inspect the damage so that responses can be improved the next time it happens.

Cyber operations are a form of modern political warfare rather than decisive battles. They typically provide a supplementary capability with a different range of effects from the physical destruction of a target. These operations don’t win wars; instead, they support espionage, deception, subversion, and propaganda efforts.

Without personnel able to analyze or detect what actually took place, the objective (and perhaps the success) cannot accurately be determined. This, however, may only be of benefit once the elected representatives have ceased lobbing partisan mortars at their counterparts on the other side of the political battle lines. Until such a time of careful consideration and reflection, free of argy-bargy, both the politicians and their audience will remain mostly ignorant of the entirety of events.

Only when people are in the mood to be receptive, or believe that information is delivered to them by a source they judge credible, accompanied by either experience or events that lend credibility, are they finally ready to accept or dismiss the information presented. Their readiness to then believe or act on that information rests upon their biases.

Finalized intelligence reports are debiased throughout production but recipients are not.

Maintaining animosity between parties ensures emotions and self-interest take priority over accepting information that may lead to mutually beneficial outcomes and solutions. Once an adversary enters a situation where this is no longer possible, the backdoor opens.
