AI-Generated Text and the Detection Arms Race

In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because so many were generated by artificial intelligence. As near as the editors could tell, many submitters simply pasted the magazine’s detailed story guidelines into an AI and sent in the results. And they weren’t alone: other fiction magazines have also reported high numbers of AI-generated submissions.

This is only one example of a ubiquitous trend. A legacy system that relied on the difficulty of writing and cognition to limit volume is overwhelmed by generative AI, because the humans on the receiving end can’t keep up.

This is happening everywhere. Newspapers are being inundated with AI-generated letters to the editor, as are academic journals. Lawmakers receive AI-generated constituent comments. Courts around the world are flooded with AI-generated filings, particularly from people representing themselves. AI conferences are swamped with AI-generated research papers, and social media is awash in AI posts. In music, open source software, education, investigative journalism and hiring, it’s the same story.

Some of these institutions, like Clarkesworld initially, shut down their submissions processes. Others have met the flood of AI inputs with defensive responses, often involving a counteracting use of AI. Academic peer reviewers increasingly use AI to evaluate papers that may have been generated by AI. Social media platforms turn to AI moderators. Court systems use AI to triage and process litigation volumes supercharged by AI. Employers turn to AI tools to review candidate applications. Educators use AI not just to grade papers and administer exams, but as a feedback tool for students.

These are all arms races: rapid, adversarial iteration to apply a common technology to opposing purposes. Many of these arms races have clearly deleterious effects. Society suffers if the courts are clogged with frivolous, AI-manufactured cases. There is also harm if the established measures of academic performance – publications and citations – accrue to those researchers most willing to fraudulently submit AI-written letters and papers rather than to those whose ideas have the most impact. The fear is that, in the end, fraudulent behavior enabled by AI will undermine systems and institutions that society relies on.

Upsides of AI

Yet some of these AI arms races have surprising hidden upsides, and the hope is that at least some institutions will be able to change in ways that make them stronger.

Science seems likely to become stronger thanks to AI, but it faces a problem when the AI makes mistakes. Consider the nonsensical, AI-generated phrasing that has filtered into published scientific papers.

A scientist using an AI to assist in writing an academic paper can be a good thing, if it’s used carefully and with disclosure. AI is increasingly a primary tool in scientific research: for reviewing literature, writing code and analyzing data. And for many, it has become a crucial support for expression and scientific communication. Pre-AI, better-funded researchers could hire humans to help them write their academic papers. For many authors whose primary language is not English, hiring this kind of assistance has been an expensive necessity. AI provides it to everyone.

In fiction, fraudulently submitted AI-generated works cause harm, both to the human authors now subject to increased competition and to those readers who may feel defrauded after unknowingly reading the work of a machine. But some outlets may welcome AI-assisted submissions with appropriate disclosure and under particular guidelines, and leverage AI to evaluate them against criteria like originality, fit and quality.

Others may refuse AI-generated work, but this will come at a cost. It’s unlikely that any human editor or technology can sustain an ability to differentiate human from machine writing. Instead, outlets that wish to exclusively publish humans will need to limit submissions to a set of authors they trust to not use AI. If these policies are transparent, readers can pick the format they prefer and read happily from either or both types of outlets.

We also don’t see any problem if a job seeker uses AI to polish their resume or write a better cover letter: The wealthy and privileged have long had access to human assistance for those things. But it crosses the line when AIs are used to lie about identity and experience, or to cheat on job interviews.

Similarly, a democracy requires that its citizens be able to express their opinions to their representatives, or to each other through a medium like the newspaper. The rich and powerful have long been able to hire writers to turn their ideas into persuasive prose, and AIs providing that assistance to more people is a good thing, in our view. Here, AI mistakes and bias can be harmful. Citizens may be using AI for more than just a time-saving shortcut; it may be augmenting their knowledge and capabilities, generating statements about historical, legal or policy factors they can’t reasonably be expected to independently check.

Fraud booster

What we don’t want is for lobbyists to use AIs in astroturf campaigns, writing multiple letters and passing them off as individual opinions. This, too, is an older problem that AIs are making worse.

What differentiates the positive from the negative here is not any inherent aspect of the technology; it’s the power dynamic. The same technology that reduces the effort required for a citizen to share their lived experience with their legislator also enables corporate interests to misrepresent the public at scale. The former is a power-equalizing application of AI that enhances participatory democracy; the latter is a power-concentrating application that threatens it.

In general, we believe writing and cognitive assistance, long available to the rich and powerful, should be available to everyone. The problem comes when AIs make fraud easier. Any response needs to balance embracing that newfound democratization of access with preventing fraud.

There’s no way to turn this technology off. Highly capable AIs are widely available and can run on a laptop. Ethical guidelines and clear professional boundaries can help – for those acting in good faith. But there won’t ever be a way to totally stop academic writers, job seekers or citizens from using these tools, either as legitimate assistance or to commit fraud. This means more comments, more letters, more applications, more submissions.

The problem is that whoever is on the receiving end of this AI-fueled deluge can’t deal with the increased volume. What can help is developing assistive AI tools that benefit institutions and society, while also limiting fraud. And that may mean embracing the use of AI assistance in these adversarial systems, even though the defensive AI will never achieve supremacy.

Balancing harms with benefits

The science fiction community has been wrestling with AI since 2023. Clarkesworld eventually reopened submissions, claiming that it has an adequate way of separating human- and AI-written stories. No one knows how long, or how well, that will continue to work.

The arms race continues. There is no simple way to tell whether the potential benefits of AI will outweigh the harms, now or in the future. But as a society, we can influence the balance of harms it wreaks and opportunities it presents as we muddle our way through the changing technological landscape.

This essay was written with Nathan E. Sanders, and originally appeared in The Conversation.

Posted on February 10, 2026 at 7:03 AM • 5 Comments

LLMs are Getting a Lot Better and Faster at Finding and Exploiting Zero-Days

This is amazing:

Opus 4.6 is notably better at finding high-severity vulnerabilities than previous models and a sign of how quickly things are moving. Security teams have been automating vulnerability discovery for years, investing heavily in fuzzing infrastructure and custom harnesses to find bugs at scale. But what stood out in early testing is how quickly Opus 4.6 found vulnerabilities out of the box without task-specific tooling, custom scaffolding, or specialized prompting. Even more interesting is how it found them. Fuzzers work by throwing massive amounts of random inputs at code to see what breaks. Opus 4.6 reads and reasons about code the way a human researcher would—looking at past fixes to find similar bugs that weren’t addressed, spotting patterns that tend to cause problems, or understanding a piece of logic well enough to know exactly what input would break it. When we pointed Opus 4.6 at some of the most well-tested codebases (projects that have had fuzzers running against them for years, accumulating millions of hours of CPU time), Opus 4.6 found high-severity vulnerabilities, some that had gone undetected for decades.
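For contrast, here is a minimal sketch of the brute-force approach the quote describes: feed a target random byte strings and keep whatever makes it crash. The toy parse_record() target and its planted bug are hypothetical; nothing here comes from the Anthropic post.

```python
# A bare-bones random fuzzer: throw random inputs at a target function
# and record any input that raises an uncaught exception (a "crash").
import os
import random

def parse_record(data: bytes) -> bytes:
    """Hypothetical toy target: a 0x7F tag byte, a length byte, a payload."""
    if data and data[0] == 0x7F:
        length = data[1]           # planted bug: assumes a length byte exists
        return data[2:2 + length]
    return b""

def fuzz(target, iterations: int = 200_000) -> list[bytes]:
    crashes = []
    for _ in range(iterations):
        data = os.urandom(random.randint(0, 8))  # random bytes, random length
        try:
            target(data)
        except Exception:          # any uncaught exception counts as a crash
            crashes.append(data)
    return crashes

print(f"crashing inputs found: {len(fuzz(parse_record))}")
```

The fuzzer understands nothing about the code; it just gets lucky often enough to be useful. That is why a project can absorb millions of CPU-hours of fuzzing and still harbor bugs that a careful reader of the code, human or model, can spot directly.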

The details of how Claude Opus 4.6 found these zero-days are the interesting part; read the whole blog post.

News article.

Posted on February 9, 2026 at 7:04 AM • 8 Comments

I Am in the Epstein Files

Once. Someone named “Vincenzo Iozzo” wrote to Epstein in a 2016 email: “I wouldn’t pay too much attention to this, Schneier has a long tradition of dramatizing and misunderstanding things.” The topic of the email is DDoS attacks, and it is unclear what I am dramatizing and misunderstanding.

Rabbi Schneier is also mentioned, also incidentally, also once. As far as either of us knows, we are not related.

EDITED TO ADD (2/7): There is more context on the Justice.gov website version.

Posted on February 6, 2026 at 3:43 PM • 23 Comments

iPhone Lockdown Mode Protects Washington Post Reporter

404 Media is reporting that the FBI could not access a reporter’s iPhone because it had Lockdown Mode enabled:

The court record shows what devices and data the FBI was able to ultimately access, and which devices it could not, after raiding the home of the reporter, Hannah Natanson, in January as part of an investigation into leaks of classified information. It also provides rare insight into the apparent effectiveness of Lockdown Mode, or at least how effective it might be before the FBI may try other techniques to access the device.

“Because the iPhone was in Lockdown mode, CART could not extract that device,” the court record reads, referring to the FBI’s Computer Analysis Response Team, a unit focused on performing forensic analyses of seized devices. The document is written by the government, and is opposing the return of Natanson’s devices.

The FBI raided Natanson’s home as part of its investigation into government contractor Aurelio Perez-Lugones, who is charged with, among other things, retention of national defense information. The government believes Perez-Lugones was a source of Natanson’s and provided her with various pieces of classified information. While executing a search warrant for his mobile phone, investigators reviewed Signal messages between Perez-Lugones and the reporter, the Department of Justice previously said.

Posted on February 6, 2026 at 7:00 AM • 20 Comments

Backdoor in Notepad++

Hackers associated with the Chinese government used a Trojaned version of Notepad++ to deliver malware to selected users.

Notepad++ said that officials with the unnamed provider hosting the update infrastructure consulted with incident responders and found that it remained compromised until September 2. Even then, the attackers maintained credentials to the internal services until December 2, a capability that allowed them to continue redirecting selected update traffic to malicious servers. The threat actor “specifically targeted Notepad++ domain with the goal of exploiting insufficient update verification controls that existed in older versions of Notepad++.” Event logs indicate that the hackers tried to re-exploit one of the weaknesses after it was fixed but that the attempt failed.
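The control that was evidently missing is ordinary signed-update verification: the client checks a signature over the downloaded package against a publisher key pinned inside the application, so redirected update traffic can’t substitute a backdoored binary. Here is a minimal sketch of the idea using Ed25519 and the pyca/cryptography library; it is illustrative only, not Notepad++’s actual updater code.

```python
# Sketch of signed-update verification with a pinned publisher key.
# Illustrative only; this is not Notepad++'s actual update mechanism.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def verify_update(pinned_key, package: bytes, signature: bytes) -> bool:
    """Accept the package only if it was signed by the pinned key."""
    try:
        pinned_key.verify(signature, package)
        return True
    except InvalidSignature:
        return False

# Demo: the publisher signs a release; the client verifies it against
# the public key shipped inside the application binary.
publisher_key = Ed25519PrivateKey.generate()   # held only by the publisher
pinned_pubkey = publisher_key.public_key()     # baked into the client

release = b"contents of the update package"
sig = publisher_key.sign(release)

assert verify_update(pinned_pubkey, release, sig)                # genuine
assert not verify_update(pinned_pubkey, release + b"\x00", sig)  # tampered
```

With a check like this, redirecting update traffic to a malicious server accomplishes nothing unless the attacker also steals the signing key, a much higher bar than compromising hosting infrastructure.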

Make sure you’re running at least version 8.9.1.

Posted on February 5, 2026 at 7:00 AM • 12 Comments

Microsoft is Giving the FBI BitLocker Keys

Microsoft gives the FBI the ability to decrypt BitLocker-protected drives in response to court orders: about twenty times per year.

It’s possible for users to store those keys on a device they own, but Microsoft also recommends that BitLocker users store their keys on its servers for convenience. While that means someone can access their data if they forget their password, or if repeated failed login attempts lock the device, it also makes them vulnerable to law enforcement subpoenas and warrants.
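You can see which key protectors a BitLocker volume has, including whether a numerical recovery password exists, with Windows’ built-in manage-bde tool. A minimal sketch, wrapping the real CLI in Python; run it from an elevated (administrator) prompt, and C: is just an example volume:

```python
# List BitLocker key protectors for a volume via Windows' built-in
# manage-bde CLI. Requires an elevated (administrator) prompt.
import subprocess

def list_protectors(volume: str = "C:") -> str:
    result = subprocess.run(
        ["manage-bde", "-protectors", "-get", volume],
        capture_output=True,
        text=True,
        check=True,  # raise if manage-bde reports an error
    )
    return result.stdout

if __name__ == "__main__":
    print(list_protectors("C:"))
```

Whether a copy of that recovery password has also been escrowed to your Microsoft account is a separate question; the recovery-key page of your Microsoft account is the place to check if this threat model worries you.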

Posted on February 3, 2026 at 7:05 AM • 14 Comments

Friday Squid Blogging: New Squid Species Discovered

A new species of squid pretends to be a plant:

Scientists have filmed a never-before-seen species of deep-sea squid burying itself upside down in the seafloor—a behavior never documented in cephalopods. They captured the bizarre scene while studying the depths of the Clarion-Clipperton Zone (CCZ), an abyssal plain in the Pacific Ocean targeted for deep-sea mining.

The team described the encounter in a study published Nov. 25 in the journal Ecology, writing that the animal appears to be an undescribed species of whiplash squid. At a depth of roughly 13,450 feet (4,100 meters), the squid had buried almost its entire body in sediment and was hanging upside down, with its siphon and two long tentacles held rigid above the seafloor.

“The fact that this is a squid and it’s covering itself in mud—it’s novel for squid and the fact that it is upside down,” lead author Alejandra Mejía-Saenz, a deep-sea ecologist at the Scottish Association for Marine Science, told Live Science. “We had never seen anything like that in any cephalopods…. It was very novel and very puzzling.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Posted on January 30, 2026 at 5:05 PM • 26 Comments
