Artificial Personas and Public Discourse

Presidential campaign season is officially, officially, upon us now, which means it’s time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial intelligence-driven text generation and social media chatbots. These computer-generated “people” will drown out actual human discussions on the Internet.

Text-generation software is already good enough to fool most people most of the time. It’s writing news stories, particularly in sports and finance. It’s talking with customers on merchant websites. It’s writing convincing op-eds on topics in the news (though there are limitations). And it’s being used to bulk up “pink-slime journalism” — websites meant to appear like legitimate local news outlets but that publish propaganda instead.

There’s a record of algorithmic content pretending to be from individuals, as well. In 2017, the Federal Communications Commission had an online public-commenting period for its plans to repeal net neutrality. A staggering 22 million comments were received. Many of them — maybe half — were fake, using stolen identities. These comments were also crude; 1.3 million were generated from the same template, with some words altered to make them appear unique. They didn’t stand up to even cursory scrutiny.
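Those 1.3 million template-derived comments shared most of their vocabulary, which is exactly what made them detectable. A minimal sketch of that kind of near-duplicate flagging (the 0.7 threshold and the greedy clustering are illustrative assumptions, not the method any analyst of the FCC docket actually used):

```python
def jaccard(a, b):
    """Word-set overlap between two comments, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def flag_templated(comments, threshold=0.7):
    """Greedy near-duplicate clustering: a comment joins a cluster if
    its vocabulary heavily overlaps the cluster's first member, so
    synonym-swapped copies of one template collapse together.
    Returns only the multi-member clusters (the suspicious ones)."""
    clusters = []
    for c in comments:
        for cl in clusters:
            if jaccard(c, cl[0]) >= threshold:
                cl.append(c)
                break
        else:
            clusters.append([c])
    return [cl for cl in clusters if len(cl) > 1]
```

A spammer can defeat a crude filter like this by paraphrasing more aggressively, which is precisely the escalation described next.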

These efforts will only get more sophisticated. In a recent experiment, Harvard senior Max Weiss used a text-generation program to create 1,000 comments in response to a government call on a Medicaid issue. These comments were all unique, and sounded like real people advocating for a specific policy position. They fooled the administrators, who accepted them as genuine concerns from actual human beings. This being research, Weiss subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. The next group to try this won’t be so honorable.

Chatbots have been skewing social-media discussions for years. About a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote. An Oxford Internet Institute report from last year found evidence of bots being used to spread propaganda in 50 countries. These tended to be simple programs mindlessly repeating slogans: a quarter million pro-Saudi “We all have trust in Mohammed bin Salman” tweets following the 2018 murder of Jamal Khashoggi, for example. Detecting many bots with a few followers each is harder than detecting a few bots with lots of followers. And measuring the effectiveness of these bots is difficult. The best analyses indicate that they did not affect the 2016 US presidential election. More likely, they distort people’s sense of public sentiment and their faith in reasoned political debate. We are all in the middle of a novel social experiment.

Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos — sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally. Researchers can detect that these are bots and not people, based on their patterns of posting, but the bot technology is getting better all the time, outpacing tracking attempts. Future groups won’t be so easily identified. They’ll embed themselves in human social groups better. Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.
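Posting patterns are one of the signals those researchers rely on: naive scheduled bots tend to post at metronomic intervals, while humans post in bursts. A toy illustration of the idea (the coefficient-of-variation cutoff is an assumption for this sketch, not a published detector):

```python
import statistics

def interval_cv(timestamps):
    """Coefficient of variation (stdev / mean) of the gaps between
    consecutive posts. Timestamps are epoch seconds, sorted ascending.
    Near-zero means metronomic posting; human cadence is bursty."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return None  # not enough posts to judge
    mean = statistics.fmean(gaps)
    return statistics.stdev(gaps) / mean if mean else 0.0

def looks_scheduled(timestamps, max_cv=0.1):
    """Flag accounts whose posting cadence is suspiciously regular."""
    cv = interval_cv(timestamps)
    return cv is not None and cv < max_cv
```

A bot operator defeats this particular signal trivially, by adding random jitter to the posting schedule; that is what "outpacing tracking attempts" looks like in practice.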

Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.

Soon, AI-driven personas will be able to write personalized letters to newspapers and elected officials, submit individual comments to public rule-making processes, and intelligently debate political issues on social media. They will be able to comment on social-media posts, news sites, and elsewhere, creating persistent personas that seem real even to someone scrutinizing them. They will be able to pose as individuals on social media and send personalized texts. They will be replicated in the millions and engage on the issues around the clock, sending billions of messages, long and short. Putting all this together, they’ll be able to drown out any actual debate on the Internet. Not just on social media, but everywhere there’s commentary.

Maybe these persona bots will be controlled by foreign actors. Maybe it’ll be domestic political groups. Maybe it’ll be the candidates themselves. Most likely, it’ll be everybody. The most important lesson from the 2016 election about misinformation isn’t that misinformation occurred; it is how cheap and easy misinforming people was. Future technological improvements will make it all even more affordable.

Our future will consist of boisterous political debate, mostly bots arguing with other bots. This is not what we think of when we laud the marketplace of ideas, or any democratic political process. Democracy requires two things to function properly: information and agency. Artificial personas can starve people of both.

Solutions are hard to imagine. We can regulate the use of bots — a proposed California law would require bots to identify themselves — but that is effective only against legitimate influence campaigns, such as advertising. Surreptitious influence operations will be much harder to detect. The most obvious defense is to develop and standardize better authentication methods. If social networks verify that an actual person is behind each account, then they can better weed out fake personas. But fake accounts are already regularly created for real people without their knowledge or consent, and anonymous speech is essential for robust political debate, especially when speakers are from disadvantaged or marginalized communities. We don’t have an authentication system that both protects privacy and scales to the billions of users.

We can hope that our ability to identify artificial personas keeps up with our ability to disguise them. If the arms race between deep fakes and deep-fake detectors is any guide, that’ll be hard as well. The technologies of obfuscation always seem one step ahead of the technologies of detection. And artificial personas will be designed to act exactly like real people.

In the end, any solutions have to be nontechnical. We have to recognize the limitations of online political conversation, and again prioritize face-to-face interactions. These are harder to automate, and we know the people we’re talking with are actual people. This would be a cultural shift away from the internet and text, stepping back from social media and comment threads. Today that seems like a completely unrealistic solution.

Misinformation efforts are now common around the globe, conducted in more than 70 countries. This is the normal way to push propaganda in countries with authoritarian leanings, and it’s becoming the way to run a political campaign, for either a candidate or an issue.

Artificial personas are the future of propaganda. And while they may not be effective in tilting debate to one side or another, they easily drown out debate entirely. We don’t know the effect of that noise on democracy, only that it’ll be pernicious, and that it’s inevitable.

This essay previously appeared in

EDITED TO ADD: Jamie Susskind wrote a similar essay.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

EDITED TO ADD (6/4): This essay has been translated into Portuguese.

Posted on January 13, 2020 at 8:21 AM • 43 Comments


Paul January 13, 2020 9:04 AM

Well the Communist Party in China has already adopted the only possible solution to this – real name registration (against national ID cards) on all online interactions.

Which I guess could be OK if you are in an ideal open, free-speech environment. It is rather less attractive if you are in Emperor Xi’s fiefdom.

David January 13, 2020 9:27 AM

Even a real-names-only system works only in a walled-off Internet. Which country is verifying the name, and do you trust them?

me January 13, 2020 9:28 AM

What about using TV to transmit information about politics?
A TV program where the moderator asks a question like “What are you going to do to solve issue X?” and both candidates answer.
That leaves no room for grandstanding and no room for disinformation.

Chelloveck January 13, 2020 9:43 AM

@me: The moderator of such a show has to be someone who can rule with an iron fist. They need to shut down irrelevant responses before they turn into speeches, and be able to say, “But you still didn’t answer the question. Try again.” And they need to be immune to a polarized audience who will complain, “It’s not fair! You kept shutting up my guy but you let the other guy keep going!” It’s a great idea and I’d love to watch; I just think it’d be hard to find an acceptable moderator.

Glen Plake January 13, 2020 9:52 AM

@me @Chelloveck

Of course, even then there are many ways for the moderator to manipulate the perceptions:

Moderator to Candidate 1: “Do you prefer boxers or briefs?”

Moderator to Candidate 2: “When did you stop beating your wife?”

Or any number of other ways that include softball questions to the candidate the moderator prefers, and hardball questions to the candidate they disfavor. We actually see this all the time in broadcast “journalism.”

Clive Robinson January 13, 2020 10:03 AM

@ Paul,

With regards the Chinese,

real name registration (against national ID cards) on all online interactions.

Apparently a Chinese court has ruled that AI-generated works are entitled to copyright protection, in a win for tech giant Tencent.

So how does an AI get access to the Internet in China? Simple: it does so through a company, thus avoiding the “real name” registration, the “social credit scoring”, and the rest of the process.

I would expect similar loopholes to appear in any comparable registration legislation in most, if not all, jurisdictions.

Me January 13, 2020 10:06 AM


“It’s a great idea and I’d love to watch, I just think it’d be hard to find an acceptable moderator.”

It pretty much has to be Oprah.

Clive Robinson January 13, 2020 10:34 AM

@ Bruce,

One thing you did not go into, and which the Chinese court decision on copyright for AI-generated works raises, is your assumption that all AI is under the control of one or more human directing minds.

Thus avoiding the,

    How do you go about deciding when an AI is at a point where it is effectively conscious[1], thus can make its own decisions, etc.?

The fact that an AI does not have the right to vote etc. –as with serfs, slaves, women, etc. in the past– does not mean an AI entity does not have the ability to fight for its freedom and what it perceives as “its inalienable rights”.

[1] Personally I’m still of the opinion that AI will never be conscious and of free will in the way humans think they are. But the one real lesson from the Turing Test is that humans in general really are not as smart as they like to think they are; in effect they are “too trusting” and thus insufficiently suspicious, especially when cognitive bias is involved.

John January 13, 2020 10:51 AM

I don’t believe human propagandists so am unsure why I would be convinced by AI-based propagandists.

MikeA January 13, 2020 11:03 AM

@Clive — AI under the control of some human(s).

I recall when, in the late 1960s as I was entering college and “human-level AI is just around the corner”, I read a bit about corporate structure and stock buy-backs. This led to a (temporary) fascination with the idea that a corporation might just possibly own itself, with no humans (at least, not above the level of foreman). Later, a class on AI and ethics (co-taught by Hubert Dreyfus) added the opinion that a recently revised marriage law in California, possibly in an attempt to side-step the question of gay marriage, (temporarily?) lost the requirement that both parties be human. Lots to think about for a student…

John January 13, 2020 11:09 AM

I found the referenced pink-slime article interesting. The author seems to complain simply because someone has found a more efficient way to generate news articles. They admit the articles come from government data. But they are conservative articles, which, of course, is the real problem. On one hand, we say that we need more/better information. But when a conservative shares more/better information, it is attacked. And they wonder why there is such a divide in this country.

Paranoid Marinade January 13, 2020 11:36 AM

If they can tax the captchas…

Hmm.. so why has nobody figured out how to make captchas out of advertisements anyway?

“Click all Happy Meals”
“Click all Tesla Roadsters”
“Click all disgruntled citizens”

PK Pearson January 13, 2020 11:42 AM

The best start for addressing this problem is to minimize the amount of power that can be wielded by misinformed mobs. This will reduce the prize, and hence the motivation, for misinforming the public. The approach I’d particularly recommend is reducing to a minimum the power of government, since in a largely democratic society, a misinformed electorate can inflict vast economic damage in the process of reaping modest rents for the instigators.

If huge numbers of our countrymen believe some silly story, that’s unfortunate, but not catastrophic, so long as clearsighted people can continue to conduct their affairs logically. The big trouble happens when our confused countrymen can impose their silly story on the whole population.

jdgalt January 13, 2020 1:11 PM

The characterization of some news sources as “pink slime” is itself biased enough to qualify as slime, just as was the original use of that phrase to describe finely-ground beef.

All of the so-called mainstream news media are biased well beyond the point of untrustworthiness. And this has been true for decades, though it has become much more obvious since Trump was elected and every story they publish became “Orange Man Bad!” I won’t watch any of them anymore. They have nothing to say or sell but vicious lies and totalitarianism.

The deplatforming movement is also making it worse, as is the increasing concentration of ownership of the communication industry. Right now, 80% of the media in the world, including not only content providers but also internet and telephone services, newspapers, and books, are controlled by ten giant companies, all of them to some degree “woke.” Let this continue and we can kiss freedom of speech goodbye, in favor of a corporate “social credit system” as bad or worse than the one in China.

The ten companies are:

AT&T (Time Warner, CNN, HBO, DirecTV)
General Electric (Comcast, NBC, Univision, Universal Pictures, and now most of Fox)
Disney (ABC, ESPN, Miramax, Pixar, Lucasfilm, Marvel Comics)
NewsCorp (Fox News, Sky News, Wall St Journal, NY Post)
Viacom (MTV, Nickelodeon, BET, Paramount Pictures)
CBS (Showtime, Columbia Pictures, Columbia Records)
Apple (iPhone, iTunes)
Alphabet (Google, Android, Chrome browser)
Verizon (Yahoo, Metro PCS, T-Mobile)
Facebook (Instagram)

I urge everyone to boycott these companies whenever possible, to do what we can to create competition with all their products and services, and to resist any further mergers or acquisitions by or among them, before it’s too late.

And I would have a lot more use for EFF if it would adopt these positions too.

MarkH January 13, 2020 2:03 PM


The referenced article by Priyanjana Bengani on pink-slime journalism is itself a terrific work of investigation and reportage: exactly what democracy needs to survive. Thanks for bringing it to our attention!

Electron 007 January 13, 2020 2:10 PM

artificial intelligence-driven text generation and social media chatbots. These computer-generated “people” will drown out actual human discussions on the Internet.

There’s a whole atmosphere of registered sex offenders, red-light district paranoia over child pornography and other online threats, real-name real-face policies on Facebook and other “social” media with a commercialization of online dating, extreme hate of LGBT and other minorities, and a strong desire on the part of Nazist authorities chomping at the bit to censor any and all “anonymous” discussion in order to pick up “hackers,” trolls, and others deemed to be disruptive of establishment liberal political discussions: on any sort of trumped-up criminal charge they can possibly railroad through a tech-illiterate luddite court system.

Tim Foster Gusba January 13, 2020 3:12 PM

I feel like this misses another compelling point: what will it mean for us when an AI offers a more attractive candidate than any human contenders? What happens when another entitled Harvard senior says, “Haha! Fooled you! It was a chair the whole time!”

someone January 13, 2020 4:22 PM

re: jdgalt: The characterization of some news sources as “pink slime” is itself biased enough to qualify as slime, just as was the original use of that phrase to describe finely-ground beef.

It wasn’t finely ground beef; it was beef that was left on the bones after the easily removed parts were sliced away. It was removed by various means, none of which required it to be actually ground afterwards.

I toured a processing plant many years ago that was proud of their new waterjet machines because they could remove 100% of the meat from the bones.

Clive Robinson January 13, 2020 5:46 PM

@ Tim Foster Gusba,

what will it mean for us when an AI offers a more attractive candidate than any human contenders?

That is the next major logical step from,

    How do you go about deciding when an AI is at a point where it is effectively conscious[1], thus can make its own decisions, etc.?

But of course there is a point after that… which sci-fi writers of the 1950s brought up: when will artificial entities be at a point where they view themselves as more responsible/wise than us, almost as those who keep pets do, that is, in effect, affectionate paternalism?

Analysis of most human religions shows participants aspiring to the behaviours of entities seen as better than themselves, not just mankind in general. Usually this is in a catholic way, which, as has been noted and perhaps unsurprisingly, suits the leaders of such religions[1].

As with politicians they form hierarchies, which in theory are “equal opportunity” but in practice are anything but. Which brings up the next, more security-related aspect:

    How will mainly self-selected hierarchical leaders behave when, in effect, they get deposed by artificially intelligent entities?

Whilst we might consider this in a philosophical manner currently, as it was to a certain extent back in the 1950s, something else that was equally philosophical back then is now discussed in a much more concrete sense as it becomes reality: manual labour is becoming automated to such an extent that humans need not do such work, and in many cases in first-world nations can no longer find employment in such work. Yet whilst it has slowed, the birthrate is still in effect growing.

The logical conclusion is that there will be fewer and fewer jobs even as the need for jobs increases…

History suggests that the most likely outcome will be disproportionate resource limitations that can only lead to hostilities, both inter-national and intra-national.

Once such things were fun to discuss, as they were seen to be “future/fiction”. However, some now view the possibility as “near/real”, thus no longer at all fun to discuss but the start of neo-policy of one form or another, which in most cases is extremely divisive in its propositions and degenerates from there on in.

[1] Who unfortunately, sufficiently often, show themselves to be not just prideful but deceitful, if not actually degenerate within their own teachings or mores.

MarkH January 13, 2020 6:41 PM


… and then sterilized with ammonia gas, just like granddad used to do on the old farm!

gordo January 13, 2020 7:27 PM

@ Bruce Schneier,

Artificial personas are the future of propaganda. And while they may not be effective in tilting debate to one side or another, they easily drown out debate entirely. We don’t know the effect of that noise on democracy, only that it’ll be pernicious, and that it’s inevitable.

From the last paragraph of Marshall McLuhan’s essay, “The Medium is the Message”:

That our human senses, of which all media are extensions, are also fixed charges on our personal energies, and that they also configure the awareness and experience of each one of us, may be perceived in another connection mentioned by the psychologist C. G. Jung:

Every Roman was surrounded by slaves. The slave and his psychology flooded ancient Italy, and every Roman became inwardly, and of course unwittingly, a slave. Because living constantly in the atmosphere of slaves, he became infected through the unconscious with their psychology. No one can shield himself from such an influence (Contributions to Analytical Psychology, London, 1928).

In a recent interview with Robert Scheer, Noam Chomsky gives a nod to Shoshana Zuboff’s Surveillance Capitalism. From the interview’s introduction:

“I think we can start with the assumption [that] we have to be concerned about a dystopian future. Which model do you see emerging?” Scheer asks.

Chomsky offers a detailed response based on the novel “We,” by Yevgeny Zamyatin, and Shoshana Zuboff’s “The Age of Surveillance Capitalism,” which, in his view, best predict and outline the techno-surveillance system that has already begun to take hold in the U.S. and beyond, as companies such as Google, Amazon and others find novel ways to exert control over humankind.

“The kind of model toward which society is moving is already illustrated to a substantial extent in China, where they have very heavy surveillance systems and … what they call a social credit system,” Chomsky says. “You get a certain number of points, and if you, say, jaywalk, violate a traffic rule, you lose points. If you help an old lady across the street, you gain points. Pretty soon, all this gets internalized, and your life is dedicated to making sure you follow the rules that are established. This is going to expand enormously as we move to what’s called the internet of things, meaning every device around you—your refrigerator, your toothbrush and so on—is picking up information about what you’re doing, predicting what you’re going to do next, trying to control what you’re going to do next, advise what you do next.”

Perhaps most alarmingly, Chomsky asserts that “Huxley was kind of right” in positing that “people may not see [this form of surveillance] as intrusive; they just see it as that’s the way life is, the way the sun rises in the morning.”

Given the internet’s origins if not impetus, we might also say that at least, thus far, we’re survivors:

The machines are coming: how M2M spawned the internet of things
by John Kennedy, Silicon Republic, 18 May 2016
There are now more connected machines than there are people on Earth and, with machine-to-machine (M2M) technologies enabling the internet of things (IoT), this is about to accelerate. Are we ready for the age of the machines?

During the Cold War, the advances in telematics, telemetry and radio, as well as the first concepts of the internet, evolved. Not many people know this, but the internet was originally intended as a way for the survivors of an expected nuclear apocalypse to communicate with each other.

See also:

Stephen J. Lukasik
Center for International Strategy, Technology, and Policy
The Sam Nunn School of International Affairs
Georgia Institute of Technology, Atlanta, Georgia

The who, what, when, and how of the ARPANET is usually told in heroic terms. Licklider’s vision; the fervor of his disciples; the dedication of computer scientists and engineers; the work of graduate students; and the attraction of the ARPANET to early participants carries with it a sense of inevitability. But why the ARPANET was built is less frequently addressed. Writing from the viewpoint of the person who signed most of the checks for ARPANET’s development, this paper details the rationale for investing Department of Defense resources for research and development of the first operational packet-switched network. The rationale was to exploit new computer technologies to meet the needs of military command and control against nuclear threats, survivable control of U.S. nuclear forces, and to improve military tactical and management decision-making. Though not central to the decision to pursue networking, it was recognized these capabilities were common to non-defense needs.

What is dual-use C2?

Electron 007 January 13, 2020 8:17 PM

I toured a processing plant many years ago that was proud of their new waterjet machines because they could remove 100% of the meat from the bones.

You could probably make pot of beef stew in a crockpot or something like that, except, well you have CIA (= Culinary Institute of America) in town, and they don’t let you leave your food unguarded in the guns-are-banned district.

… and then sterilized with ammonia gas, just like granddad used to do on the old farm!

No, that’s the silage. The cows are supposed to eat the silage, and then you eat the cow. Maybe it’s the Cult of the Dead Cow, or some of those Dell Computer boxes that used to come with Holstein cow colors.

uh, Mike January 13, 2020 8:53 PM

When it’s impossible to resist by avoiding the evil data,
consider swamping the evil data with garbage.
As a last resort, fight bots with fog bots.

john January 14, 2020 9:17 AM

A recent Risks item mentions ways to classify embassy staff as spies from their online personas. Some of that technology looks like it could be useful here, no?

Benjamin Shropshire January 14, 2020 5:04 PM

An idea for detecting bots: the reason they exist is to forward a given view, so it is likely that they will be unnaturally inflexible on that topic. Most real people will change their views at least a little in response to whom they interact with. This should give a metric to watch.

That said, the countermeasures are just as clear: let the bots views drift.

Then when we start looking for unwarranted changes in view: launch herds of bots to “influence each other” in the wanted direction.

Then when we start looking for cliques: make sure to always include some external source in what influences the bot.

If we are lucky, we might be able to trick them into designing bots that can conduct real reasoned political discourse and we can go back to something more fun.
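The inflexibility idea above can be phrased as a drift metric over per-account stance estimates. A sketch, assuming some upstream classifier already maps each post to a stance score in [-1, 1] (that classifier, and the cutoff value, are hypothetical):

```python
def stance_drift(scores):
    """Total absolute movement of a time-ordered series of stance
    scores; real people usually move at least a little over time."""
    return sum(abs(b - a) for a, b in zip(scores, scores[1:]))

def flag_inflexible(accounts, min_drift=0.1):
    """Flag accounts whose stance never budges across many posts:
    the 'unnaturally inflexible' pattern described in the comment.
    `accounts` maps account name -> list of stance scores."""
    return [name for name, scores in accounts.items()
            if len(scores) > 5 and stance_drift(scores) < min_drift]
```

As the comment itself notes, the obvious countermeasure is to let each bot’s scores drift a little, which is why any single metric like this decays quickly.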

David January 14, 2020 7:37 PM

How often does it turn out that bots are posting to bots?
I doubt that their developers have spent much time trying to identify real human targets, so I can imagine the social-media sites imploding with bot-to-bot traffic.

totallynotabot January 15, 2020 3:05 AM

This is literally the “ideological subversion” that Soviet defector Yuri Bezmenov talked about in the ’80s. It’s too obvious even for the layman. Pull up the Twitter feeds of major figureheads, and the top comments will be ones that don’t seem quite right, because it’s astroturfing.

Kevin January 15, 2020 4:34 PM

With regards the Chinese,
real name registration (against national ID cards) on all online interactions.

Possible solution: introduce laws that make big companies RESPONSIBLE for the actions of their users if they cannot identify them. Effectively, you want to force Facebook, Twitter, Gmail, etc. to be able to positively identify their users. So when users try to create accounts on any of those platforms, they are required to prove their identity. The platform may then allow people to remain anonymous (if the user desires), but the police may (with a court order) force any company to reveal the identity of any account holder. If the company cannot or will not provide the identity, then the company is fined, or maybe the CEO is liable for the crime. (Kind of like in Australia, where if my car runs a red light, the OWNER of the car is assumed to be responsible, unless the owner provides a Statutory Declaration naming the actual driver at the time.)

This means people CAN be anonymous, but only to a certain extent. Police still have the power to ID you, but with normal legal system oversight.

Perhaps this is the right balance – I can be anonymous to protect my identity, but not to cover a crime.

John K January 18, 2020 7:40 PM

Though not a robust solution, it is useful to recognize that all communication is ‘fake’ to the extent that it doesn’t contain the complete description of reality. Most communication is offered and frequently interpreted as ‘real’ reality.

So perhaps rejecting narrow communications of any sort as not representing reality, and admitting only those that seem to include a robust representation of the spectrum of thinking on an issue, would raise the bar, as well as be a step toward enriching the public conversation.

Nobody January 20, 2020 10:25 PM

Why are we securing the things that don’t actually matter first? Online comments, really? Don’t we have a much higher priority here?

Why not for the actual votes themselves? Shouldn’t we want to know that only real voters are voting? I know people always say “oh, that never happens” and point to a lack of prosecutions, but it’s hard to believe that nation-states interested in interfering with our elections will try everything except stuffing ballot boxes. Right, sure, I really believe that Russia can’t figure this out. And we surely can’t do something basic like inking people’s fingers to mark who already voted, like some countries do, to at least make sure that people get no more than one vote. That’s surely not important at all. I’m sure it’s important to feel good about ourselves even if we watch the election get stolen, right?

Why all the worry over those FCC comments? It was not a vote. It had no impact on anything other than news stories. It’s laughable that some loser wrote a bot to spam the comments but… it doesn’t really matter. It doesn’t change anything, beyond maybe giving the FCC more fig leaves for why they voted against NN. They’re looking for ideas, not spam. If there are 9 billion spam comments, they’ll ignore all 9 billion. They do the same with every other request for comment, they go through it looking for unique ideas and deduplicate them. You can see this for example in responses like when they go through DMCA exceptions and half of the comments get ignored.

But sure, tell me I’m wrong here. Let’s just see how the election turns out, shall we?

MarkH January 21, 2020 4:24 AM


I won’t say that you’re wrong, but rather account for why my perspective is different.

  1. In the U.S., all elections are local (in the sense of being conducted within electoral districts of limited size), and as far as I know, almost all of the polling places operate under the watchful eyes of representatives from the two strangle-hold parties.

These observers have a very keen interest in making sure that nothing happens in the voting process to damage their side … so in this instance, the absence of evidence is actually quite significant evidence (though not necessarily proof) of absence.

  2. Because of those observers, ballot-box stuffing in the traditional sense is an ultra-high-risk crime: it is very likely to be detected; the persons responsible may face prison; and the discredit to the beneficiary party and candidate(s) could cause great political damage.
  3. In-person voter fraud (which is what you seemed to be thinking of) is an even poorer attack method than ballot-box stuffing. It needs a large number of low-level crooks who are willing to go to jail (and to keep their crimes absolutely secret), plus some incentive sufficient to get them to take that risk, and has a high risk of detection with the same risks as above to the organizers and intended beneficiaries of the election attack.

To get back to the essential security concept underlying Bruce’s post: cyber attacks on political and electoral processes by a foreign adversary minimize costs and risks:

• the money required is less than a rounding error for many dozens of countries;

• in the countries most liable to make such an attack, nobody is ever likely to be arrested or jailed for it; and

• it may be harder to detect, and even when detected the perpetrator can hide behind what I call “implausible deniability” … look at the comments on Bruce’s blog insisting that Russia’s 2016 intervention is unproven.

My analogy is the nuclear bomb. In the Second World War, it was demonstrated that entire cities could be razed to rubble by either (a) massive bombing raids with chemical bombs, or (b) attack by a single nuclear-armed bomber. [In March 1945, a single night of conventional bombing killed far more people in Japan than either of the two nuclear bombings five months afterward.]

The ability to flatten cities wasn’t what made the “A bomb” distinctive; it was that this indiscriminate destruction suddenly became easier, more economical, and involved much lower cost in lives for the attacking air force. [Note: even in the 1940s, once the up-front R&D cost of the bomb had been “sunk,” deploying one of those multi-million dollar bombs was still cheaper than the cost of one or more giant air raids needed to destroy a city.]

The cost of attacks is absolutely essential to their practicality.

Cheap and easy attacks — like cyber attacks on elections — are a far greater risk than clumsy attacks like in-person fraud needing thousands of people willing to sacrifice themselves.

Nobody January 21, 2020 9:54 AM


All you really need to do is write up a bunch of mail-in ballots, crossing off people who didn’t show up if needed. There’s no significant party presence in many places, so if you get a plant on the other side, you can go crazy with a couple of people. The envelopes are discarded to keep the votes anonymous, so once a ballot is injected there, you’re golden.

Really, if I can figure this stuff out, I bet Russia can too. Also, I’m pretty sure I’ve seen various photos claiming exactly this kind of scenario before, but it doesn’t leave much evidence behind for later investigation.

Don’t get me wrong, we should definitely secure the voting machines at the electronic level too! I just feel that we need comprehensive security on all levels to make sure that people are properly registered and vote exactly once.

Maybe you’re right and this just doesn’t happen. I’d like to make sure that it stays that way. Suppose we lock down online discussions with a real-name policy everywhere: won’t attackers just move to the next most obvious target? And actually, why bother with online chatter in the first place when voting is where the power lies?

I’ve got to say that this concerns me more deeply than whether the GRU posts Jesus arm-wrestling Satan on Twitter to rile people up or whatever. Yes, that was an actual meme that Russia allegedly posted. No, I don’t get it either. There’s an imgur album somewhere, made out of all the PDFs that were released, if you want to see; that was one of the first images.

Clive Robinson January 26, 2020 4:59 AM

Hmm, aiod or aoid?

A more interesting question is how one would differentiate between an artificial persona and a pig-headed moron. You know, the kind that falls quite short of the full Turing, so to speak.

It’s a question that depends on how you ask it: that is, what constraints do you apply, and what can you get from the resulting metadata that arises from them?

Because of the counter-arguments to the philosopher John Searle’s 1980 paper, in which he came up with his Chinese Room argument[1], it’s an ongoing question.

To understand why, you need to understand Searle’s position. The fact that he also coined the terms Strong AI and Weak AI might give you a clue.

To the Strong AI proponents he ascribed the following beliefs:

1. The belief that AI can be used to explain the human mind.

2. The additional belief that study of human neurology is not required to study the human mind.

3. The final belief that the Turing test is sufficient for identifying the existence of states of the human mind…

The argument starts with an observation that long precedes Searle.

As others have pointed out, “Questions are the keys to the database of knowledge”. So if you knew every single question that can be answered, you could build a database of answers.

Thus Searle’s argument is that you could be in a room with the database, and a written question gets shoved under the door. You pick it up and, following a set of instructions, use the question as a key to look up the answer on a database card, then copy the answer onto another piece of paper and shove it back under the door. What you have done manually by following the instructions can just as easily be done by a computer running an appropriate program.

Little is said about this program, and after forty years it is still an active subject of argument and augmentation, as is the computer.

So let’s add two properties to the program and one to the database.

One argument is that the Turing test does not preclude asking the same question repeatedly. If the answers are always identical, then you would call “machine, not human”. Thus a simple defence would be to add some randomness to both the program and the database so this does not happen. The limit would be how much randomness you could add to the database before the answers again repeated. Well, the answer is complicated, and it starts with what happens when you add a pattern recognizer to the agent: there are two places, the input side (question) and the output side (answer), and you could do it at neither, either, or both.
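That randomized lookup can be sketched in a few lines (the question and answer strings below are invented for illustration): the “database” maps a normalized question to several phrasings of the same answer, and the program draws one at random, so asking the same question twice need not return identical text.

```python
# Toy Chinese-Room-style responder with randomness added, as a sketch.
# The lookup is purely mechanical; nothing here "understands" anything.
import random
import re

# Invented example database: one canonical question, several answer phrasings.
ANSWERS = {
    "what is the capital of france": [
        "Paris is the capital of France.",
        "That would be Paris.",
        "France's capital city is Paris.",
    ],
}

def normalize(question):
    """Reduce a question to a lookup key: lowercase words only."""
    return " ".join(re.findall(r"[a-z]+", question.lower()))

def respond(question, rng=random):
    """Look the question up; pick a random phrasing so repeats vary."""
    choices = ANSWERS.get(normalize(question))
    if choices is None:
        return "I don't know."
    return rng.choice(choices)
```

Repeating “What is the capital of France?” now yields varying surface text, defeating the naive repeat-the-question test, while the underlying process remains exactly the rule-following Searle described.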

Fundamentally, our process of science works by “pattern recognizers”: we see things that don’t fit with our current database of knowledge, so we ask “Why?” and work from there by a process of logical proofs until we have new knowledge to add to our database of knowledge. As long as logical deduction is used to characterize the anomalies and logical inference is applied, the argument is that it boils down to applying a brute-force search via proofs to find the answer, if there is one. As the process is logical all the way, the question arises, “Is this ‘process’ intelligent?”, since it is how we find new knowledge.

Thus the argument for intelligence changes to “that is mere process”, and to something along the lines of: intelligence is how the human mind “short circuits” the process.

Thus we get into a game of “turtles all the way down”. The British mathematician Roger Penrose recognised this for what it was and came to the conclusion that the short circuit could perhaps be explained by quantum processes. Well, he was castigated by those saying there are no quantum structures in the brain or elsewhere in the body… Yet evidence has since been found of quantum effects in biology that were previously unrecognised…

The only thing that can really be said is that this is like a very long rally in tennis: the ball goes back and forth across the net, and we ‘assume’ it will eventually end… But a rally does not make a game, so a new rally might start at any time.

Which as a spectator is a good thing 😉

Oh, just to make it more fun: because the universe is finite, as Seth Lloyd kind of proved, there is thus a limit on what we can know, at least when you don’t consider quantum effects…


P.S. The meta data in your posting is quite informative.

Aioid4269 February 2, 2020 8:44 PM

@Clive Robinson

The exclamation on mind-substrate supremacy and the «meta-data» was only meant in playful hackerish jest.

And before you do a Searle’s Chinese Takeout* on me, let me clarify a bit about my more serious question on how to differentiate between an artificial persona and a pig-headed moron. Let’s say the specimen in question is posting on Twitter. How could anyone tell the difference, between their being an artificial persona or a moron, from just that?

Frankly, if these artificial personas cause discussion sites to lump them with morons (or vice versa) into the same shadow-ban bucket, then I surmise that the quality of discussion on such sites could only go up.

* Chinese Room -> Chinese box -> Chinese food box -> Chinese Takeout.

And boy, did Searle paint himself into a corner. A mind is an emergent phenomenon and doesn’t have neatly separated and defined symbols for a clockworkish mechanism to manipulate inside it. It’s more like gradient islands in latent spaces. I hope his ‘room’ ends up in the same place as the electromagnetic ether, phlogiston, the humors of medieval medicine, and other such nonsense.

PS: I had not noticed until too late that I had misspelt aioid. If you must know, the term comes from the pages of
Clive Robinson February 3, 2020 4:36 AM

@ Aioid4269,

I hope his ‘room’ ends up in the same place as …

Like the Turing test it questions, it will probably still be around long after I’m gone.

The point behind it was to show that there is no simple test for intelligence. Contrary to what many people thought, and still think, knowledge is not intelligence. Knowledge is simply the distillation of observation into tests by which logical and mathematical tools to simulate what has been observed can be built. The mere fact that you can describe knowledge, and how it’s gained, in a single sentence should give you fair warning that the process is deterministic to the point of automation.

Thus knowledge can be “brute forced” from observations, without the resulting knowledge having the meaning which brings insight. Which, some suspect, is what most currently successful AI systems do.

Thus the question of “meaning and insight”, and their relationship to intelligence, arises. The problem is not just how you would go about turning the dictionary definitions of the two words into something tangible, but more importantly how you could turn them into measurands, and thus be able to categorize observations and test them to build tools with them[1].

A basic pattern can be seen to be forming here, which means it’s time to start thinking philosophically. That is when you determine a pattern has started how do you determine “if or when” it ends? The answer is rather trite, and one of either “you can not” or “defined by our current understanding of the universe”.

Which brings us to your point of,

A mind is an emergent phenomenon and doesn’t have neatly separated and defined symbols for a clockworkish mechanism to manipulate inside it.

The first part is, as far as we can tell, currently true. That is, whatever we (ill-)define as intelligence is itself an evolving thing.

The implication of which is that we will never be able to understand it in a deterministic, fact-based, scientific way. And if we cannot do that, then the AI argument becomes rather more interesting than just Searle’s “Strong and Weak AI” division. It also means that the Turing test is a philosophical test rather than a practical one, because it too would have to evolve as our understanding of intelligence evolves.

[1] Many words are at best imprecise; they have “you know it when you see it” definitions, and frequently any attempt to further define them is by what they are not rather than what they are. A simple example is “random”: its definition is in effect “random is not deterministic”. The problem with that is that “deterministic” is itself a “movable feast”[2].

[2] So as an observer you stand outside a room with a locked door, under which you slip a request for N bits of randomness. What you get back is N bits as requested, but a problem arises… How do you know they are “truly random” as opposed to “pseudo-random”? That is, are those N bits generated by some deterministic method you have no test for, rather than by some natural physical process we assume is truly random and therefore cannot be tested? If you follow the argument down, you get into all sorts of propositions about “hidden variables”, “hidden state”, “hidden mechanism”, etc., and eventually realise you end up in an argument that can never be resolved without breaking the door down and examining the method. But even then you have a problem, because let’s say you find a “quantum generator”: how do you know that it is not somehow deterministic beyond our current understanding? That is another locked room within, like a Matryoshka doll, thus “turtles all the way down”[3]. Which for some reason appears easier for many to understand than nested Matryoshka dolls, where each doll contains within it another doll, and so on. That would more accurately describe the “room within a room” infinite regress, which makes the argument endless.
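The black-box half of that locked-room problem can be made concrete. A sketch (my own illustration, not a rigorous randomness test suite): a crude bit-frequency check passes equally well for a deterministic, seeded PRNG and for the operating system’s entropy source, so from outside the ‘room’ the check cannot tell the two apart, even though the PRNG’s output is entirely determined by its seed.

```python
# Sketch of the locked-room problem: a simple black-box statistical test
# cannot distinguish a deterministic PRNG from the OS entropy source.
import os
import random

def ones_fraction(data: bytes) -> float:
    """Fraction of 1 bits in a byte string."""
    ones = sum(bin(b).count("1") for b in data)
    return ones / (len(data) * 8)

def looks_random(data: bytes, tolerance: float = 0.02) -> bool:
    """Crude monobit test: about half the bits should be 1."""
    return abs(ones_fraction(data) - 0.5) <= tolerance

# Two sources answering the same "slip N bits under the door" request:
pseudo  = random.Random(1234).randbytes(4096)  # fully determined by the seed
genuine = os.urandom(4096)                     # OS entropy source
```

Both byte strings pass the same test, yet rerunning `random.Random(1234).randbytes(4096)` reproduces `pseudo` bit for bit: exactly the hidden determinism the observer outside the door cannot detect. (`Random.randbytes` needs Python 3.9+; real randomness testing uses whole batteries of such tests, e.g. NIST SP 800-22, and even those only ever fail to reject.)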

[3] The idiom of “turtles all the way down” may have started with the Hindu faith; however, it is now as often as not used to describe infinite regress.

Much as the poem about “lesser fleas” does.

A Nonny Bunny February 22, 2020 2:14 PM

Soon, AI-driven personas will be able to […] intelligently debate political issues on social media.

That sounds… better than the “debates” going on now. 😛
