AI-Generated Text and the Detection Arms Race

In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because so many were generated by artificial intelligence. Near as the editors could tell, many submitters pasted the magazine’s detailed story guidelines into an AI and sent in the results. And they weren’t alone. Other fiction magazines have also reported a high number of AI-generated submissions.

This is only one example of a ubiquitous trend: a legacy system that relied on the difficulty of writing and cognition to limit volume is overwhelmed by generative AI, because the humans on the receiving end can't keep up.

This is happening everywhere. Newspapers are being inundated by AI-generated letters to the editor, as are academic journals. Lawmakers are inundated with AI-generated constituent comments. Courts around the world are flooded with AI-generated filings, particularly by people representing themselves. AI conferences are flooded with AI-generated research papers. Social media is flooded with AI posts. In music, open source software, education, investigative journalism and hiring, it’s the same story.

Like Clarkesworld, some of these institutions initially shut down their submission processes. Others have met the offensive of AI inputs with a defensive response of their own, often involving a counteracting use of AI. Academic peer reviewers increasingly use AI to evaluate papers that may have been generated by AI. Social media platforms turn to AI moderators. Court systems use AI to triage and process litigation volumes supercharged by AI. Employers turn to AI tools to review candidate applications. Educators use AI not just to grade papers and administer exams, but as a feedback tool for students.

These are all arms races: rapid, adversarial iteration to apply a common technology to opposing purposes. Many of these arms races have clearly deleterious effects. Society suffers if the courts are clogged with frivolous, AI-manufactured cases. There is also harm if the established measures of academic performance – publications and citations – accrue to those researchers most willing to fraudulently submit AI-written letters and papers rather than to those whose ideas have the most impact. The fear is that, in the end, fraudulent behavior enabled by AI will undermine systems and institutions that society relies on.

Upsides of AI

Yet some of these AI arms races have surprising hidden upsides, and the hope is that at least some institutions will be able to change in ways that make them stronger.

Science seems likely to become stronger thanks to AI, yet it faces a problem when the AI makes mistakes. Consider the example of nonsensical, AI-generated phrasing filtering into scientific papers.

A scientist using AI to assist in writing an academic paper can be a good thing, if the tool is used carefully and with disclosure. AI is increasingly a primary tool in scientific research: for reviewing literature, for programming and for analyzing data. And for many, it has become a crucial support for expression and scientific communication. Pre-AI, better-funded researchers could hire humans to help them write their academic papers. For many authors whose primary language is not English, hiring this kind of assistance has been an expensive necessity. AI provides it to everyone.

In fiction, fraudulently submitted AI-generated works cause harm, both to the human authors now subject to increased competition and to those readers who may feel defrauded after unknowingly reading the work of a machine. But some outlets may welcome AI-assisted submissions with appropriate disclosure and under particular guidelines, and leverage AI to evaluate them against criteria like originality, fit and quality.

Others may refuse AI-generated work, but this will come at a cost. It’s unlikely that any human editor or technology can sustain an ability to differentiate human from machine writing. Instead, outlets that wish to exclusively publish humans will need to limit submissions to a set of authors they trust to not use AI. If these policies are transparent, readers can pick the format they prefer and read happily from either or both types of outlets.

We also don’t see any problem if a job seeker uses AI to polish their resumes or write better cover letters: The wealthy and privileged have long had access to human assistance for those things. But it crosses the line when AIs are used to lie about identity and experience, or to cheat on job interviews.

Similarly, a democracy requires that its citizens be able to express their opinions to their representatives, or to each other through a medium like the newspaper. The rich and powerful have long been able to hire writers to turn their ideas into persuasive prose, and AIs providing that assistance to more people is a good thing, in our view. Here, AI mistakes and bias can be harmful. Citizens may be using AI for more than just a time-saving shortcut; it may be augmenting their knowledge and capabilities, generating statements about historical, legal or policy factors they can’t reasonably be expected to independently check.

Fraud booster

What we don’t want is for lobbyists to use AIs in astroturf campaigns, writing multiple letters and passing them off as individual opinions. This, too, is an older problem that AIs are making worse.

What differentiates the positive from the negative here is not any inherent aspect of the technology, it’s the power dynamic. The same technology that reduces the effort required for a citizen to share their lived experience with their legislator also enables corporate interests to misrepresent the public at scale. The former is a power-equalizing application of AI that enhances participatory democracy; the latter is a power-concentrating application that threatens it.

In general, we believe writing and cognitive assistance, long available to the rich and powerful, should be available to everyone. The problem comes when AIs make fraud easier. Any response needs to balance embracing that newfound democratization of access with preventing fraud.

There’s no way to turn this technology off. Highly capable AIs are widely available and can run on a laptop. Ethical guidelines and clear professional boundaries can help – for those acting in good faith. But there won’t ever be a way to totally stop academic writers, job seekers or citizens from using these tools, either as legitimate assistance or to commit fraud. This means more comments, more letters, more applications, more submissions.

The problem is that whoever is on the receiving end of this AI-fueled deluge can’t deal with the increased volume. What can help is developing assistive AI tools that benefit institutions and society, while also limiting fraud. And that may mean embracing the use of AI assistance in these adversarial systems, even though the defensive AI will never achieve supremacy.

Balancing harms with benefits

The science fiction community has been wrestling with AI since 2023. Clarkesworld eventually reopened submissions, claiming that it has an adequate way of separating human- and AI-written stories. No one knows how long, or how well, that will continue to work.

The arms race continues. There is no simple way to tell whether the potential benefits of AI will outweigh the harms, now or in the future. But as a society, we can influence the balance of harms it wreaks and opportunities it presents as we muddle our way through the changing technological landscape.

This essay was written with Nathan E. Sanders, and originally appeared in The Conversation.

Posted on February 10, 2026 at 7:03 AM

Comments

wiredog February 10, 2026 9:22 AM

“Social media platforms turn to AI moderators.”
At least as a first cut on moderation, this is a good thing. Moderation at scale is a hard-to-impossible task, and while Section 230 of the CDA provides some protection, it has limits. Lack of good moderation has killed a number of sites. So an LLM that can hear the various dog whistles and an image recognizer that can tag probable CSAM are good things. The problem comes with the appeal process, which can also get overwhelmed.

Carl Byor February 10, 2026 10:50 AM

Please observe how the author contrives the impression of balance by selling supposed “upsides” while not stating whether they are completely overwhelmed by the downsides. The subtext is clear: let’s use A.I. to get ourselves out of a hole that we’ve dug with A.I. In other words, the solution is always more A.I.

That, right there, is the basic premise of the tech industry. Regardless of what happens, the answer is always more tech.

And that should tell you everything you need to know about this author and the interests that put the spotlight on him.

mark February 10, 2026 11:43 AM

Actually, Neil shut down Clarkesworld submissions for three or four months. Afterward, he opened it again, but he has developed a method of detection which he does NOT advertise.

From what you wrote, Bruce, expert systems are useful. Chatbots are not. Deepfakes are negatively useful.

Clive Robinson February 10, 2026 12:16 PM

@ Bruce,

Whilst not quite a foregone conclusion, the AI “arms race” was a reasonable thing to expect.

What I suspect caught most people who expected an arms race is the sheer breadth of the battlefront. That is, people had not realised just how “generally” LLMs can be used, because they don’t tend to think about them in the right way.

But the main thing about the Current AI LLM and ML systems is that they are actually not very good; mostly they are,

“Jokers of all trades and masters of none”

And that’s before we start talking about “the memory problem”. Which includes the “garbage bin” training models.

Though they are improving, the chance of an ROI greater than the rate of inflation is not at all likely. And it’s fairly clear that financial interests are “Pivoting out” of Tech stocks because they see that there is nothing of worth to support valuations.

We are back to the crazy notion of “burn rate” and we have previous experience of what that means.

But also it’s because people tend to forget about the deflationary effect of “new technology”: the price for the same capability has a habit of falling way, way faster than people actually understand.

Yes, we can put LLMs on high-end laptops now… Especially if you pick the right model. This has kicked out most of the runway/moat of the likes of Nvidia and OpenAI etc. Those Data Centers people keep talking up are a half decade away at best due to external factors. All this nonsense about nuclear reactors or data centers in space is pretty much “blowing smoke” to try to keep hot air in the balloon.

But more important is the NIMBY effect: people don’t want the proposed Data Centers anywhere near them. Especially if they come with a jerry-built nuclear power station knocked up in a hurry next door. And that’s before we talk about the “environmental impact” of using fossil fuels like coal… Which is most likely how the generators will be run. But communities are realising there is nothing in having a data center near them; they won’t get any jobs there, not even as janitors.

But as indicated, LLMs can now run on high-end laptops. So questions arise as to,

1, What exactly is going to go into these Data Centers anyway?
2, Will the data aggregation, collection and reliable collation be possible?
3, Will the ML side become sufficient to get error rates from over 30% down to a lot less than 5%?

And a whole load more whose answers are not going to keep the current AI Corps in business…

With the big one being,

Will there ever be a realistic return on investment?

Even Nvidia know the answer to that. Which is why that $100 billion circular investment deal is gone, and chances are OpenAI will have to sell itself to Microsoft for 1 cent on the dollar at best in a fairly short period of time.

But the real question people should be thinking about is,

“Does the future of general AI coincide with Current LLM and ML systems?”

The answer to which is “no”.

Which raises,

“Do Current LLM and ML systems actually have a realistic future?”

To which the answer is more complicated, but yes: just as we are still using “Expert Systems” that were new in the 1980s in niche activities, LLMs that are specific to niche areas probably have a future.

These are the realities and people should get used to what they actually mean…

One thing I can predict is that Nvidia has lost exclusivity and thus its valuation is going to sink. Will it stay above the $1 trillion it zoomed through like a rocket?

That is questionable but it may get quite close depending on retail inflation and tech deflation.

But people should consider the hard reality of,

“What will an Open Source model on a high end laptop actually get me?”

And I suspect that it will be like asking, back in the 1970s,

“What will an electronic calculator get me?”

And in the ’80s,

“What will an 8-bit personal computer get me?”

These were times when the technologies were fresh and very expensive… But for any real measure of performance, the price had gone from unaffordable to Xmas-present money in less than a decade.

Now apply that thinking to Current LLM systems…
