Latest Essays

As the US Midterms Approach, AI Is Going to Emerge as a Key Issue Concerning Voters

There is a political divide over AI but few leaders are taking a strong stand. It’s time for that to change

  • Nathan E. Sanders and Bruce Schneier
  • The Guardian
  • March 24, 2026

In December, President Trump signed an executive order that neutered states’ ability to regulate AI, directing his administration both to sue and to withhold funds from states that try to do so. This action pointedly supported industry lobbyists keen to avoid any constraints on, or consequences for, their deployment of AI, while undermining the efforts of consumers, advocates, and industry associations concerned about AI’s harms who have spent years pushing for state regulation.

Trump’s actions have clarified the ideological alignments around AI within America’s electoral factions. They set down lines on a new playing field for the midterm elections, prompting members of his party, the opposition, and all of us to consider where we stand in the debate over how and where to let AI transform our lives…

Japan’s Team Mirai Uses Tech to Bolster Democracy, Not Undermine It

  • Nathan E. Sanders and Bruce Schneier
  • Tech Policy Press
  • March 19, 2026

Japan’s election last month and the rise of the country’s newest and most innovative political party, Team Mirai, illustrate the viability of a different way to do politics.

In this model, technology is used to make democratic processes stronger, instead of undermining them. It is harnessed to root out corruption, instead of serving as a cash cow for campaign donations.

Imagine an election where every voter has the opportunity to opine directly to politicians on precisely the issues they care about. They’re not expected to spend hours becoming policy experts. Instead, an …

Don’t Bet That the Pentagon—or Anthropic—Is Acting in the Public Interest

The lesson here isn’t that one AI company is more ethical than another. It’s that we must renovate our democratic structures

  • Nathan E. Sanders and Bruce Schneier
  • The Guardian
  • March 3, 2026

OpenAI is in and Anthropic is out as a supplier of AI technology for the US defense department. This news caps a week of bluster by the highest officials in the US government toward some of the wealthiest titans of the big tech industry, overhung by the specter of the existential risks posed by a new technology so powerful that the Pentagon claims it is essential to national security. At issue is Anthropic’s insistence that the US Department of Defense (DoD) could not use its models to facilitate “mass surveillance” or “fully autonomous weapons,” provisions the defense secretary Pete Hegseth …

OpenAI Has Shown It Cannot Be Trusted. Canada Needs Nationalized, Public AI

OpenAI, the company behind ChatGPT, lobbied Ottawa for business. All the while, it hid its knowledge of the Tumbler Ridge shooter

  • Nathan E. Sanders and Bruce Schneier
  • The Globe and Mail
  • March 1, 2026

Canada has a choice to make about its artificial intelligence future. The Carney administration is investing $2-billion over five years in its Sovereign AI Compute Strategy. Will any value generated by “sovereign AI” be captured in Canada, making a difference in the lives of Canadians, or is this just a passthrough to investment in American Big Tech?

Forcing the question is OpenAI, the company behind ChatGPT, which has been pushing an “OpenAI for Countries” initiative. It is not the only one eyeing its share of the $2-billion, but it appears to be the most aggressive. OpenAI’s top lobbyist in the region has met with Ottawa officials, including Artificial Intelligence Minister Evan Solomon…

Why Tehran’s Two-Tiered Internet Is So Dangerous

Authoritarian regimes elsewhere are taking note.

  • Foreign Policy
  • February 24, 2026

Iran is slowly emerging from the most severe communications blackout in its history and one of the longest in the world. As part of January’s government crackdown on nationwide citizen protests, the regime implemented an internet shutdown that transcended the standard definition of internet censorship. This was not merely the blocking of social media or foreign websites; it was a total communications shutdown.

Unlike previous Iranian internet shutdowns where Iran’s domestic intranet—the National Information Network (NIN)—remained functional to keep the banking and administrative sectors running, the 2026 blackout …

Rewiring Democracy Now: Switzerland Shows Us an Alternative to Corporate AI

Public AI Must Counterbalance Corporate AI

  • Nathan E. Sanders and Bruce Schneier
  • The Renovator
  • February 21, 2026

This is the second in a multi-part series by Sanders and Schneier going into depth on real-world examples of democratic technologies from their book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship. Their first piece was about the Japanese digital democracy party “Team Mirai.”

Skeptics of AI often point to the many significant, unchecked harms that AI produces in society today. Bosses cut human jobs, even if it means stealing human creativity to build AI replacements—regardless of whether the replacement technology is up to the job. Megacorporations aim to capture all the value created by AI for their shareholders, imperiling the global economy with risky and unsustainable ventures. And AI companies use exorbitant amounts of energy and natural resources to fuel all of this, with no apparent concern for local environmental damage or global climate impacts…

Is AI Good for Democracy?

  • Nathan E. Sanders and Bruce Schneier
  • The Times of India
  • February 18, 2026

Politicians fixate on the global race for technological supremacy between the US and China. They debate the geopolitical implications of chip exports, the latest model releases from each country, and the military applications of AI. Someday, they believe, advancements in AI might tip the scales in a superpower conflict.

But the most important arms race of the 21st century is already happening elsewhere and, while AI is definitely the weapon of choice, combatants are distributed across dozens of domains.

Academic journals are flooded with AI-generated papers, and are turning to AI to help review submissions. Brazil’s …

Why Sky-High Pay for AI Researchers Is Bad for the Future of Science

To ensure that AI advances benefit everyone, scientific institutions must prioritize collaborative, mission-driven structures instead of chasing top talent with astronomical compensation.

  • Nathan E. Sanders and Bruce Schneier
  • Nature
  • February 17, 2026

In 2025, Google, Amazon, Microsoft and Meta collectively spent US$380 billion on building artificial-intelligence tools. That number is expected to surge still higher this year, to $650 billion, to fund the building of physical infrastructure, such as data centers (see go.nature.com/3lzf79q). Moreover, these firms are spending lavishly on one particular segment: top technical talent.

Meta reportedly offered a single AI researcher, who had cofounded a start-up firm focused on training AI agents to use computers, a compensation package of $250 million over four years (see …

The Promptware Kill Chain

Prompt injection attacks against AI models are not simple attacks; they are the first step of a kill chain. Understanding this gives defenders a set of countermeasures.

  • Bruce Schneier, Oleg Brodt, Elad Feldman and Ben Nassi
  • Lawfare
  • February 13, 2026

[Figure: The promptware kill chain: initial access, privilege escalation, reconnaissance, persistence, command and control, lateral movement, action on objective]

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques for embedding malicious instructions into an LLM’s inputs. This term suggests a simple, singular vulnerability, and that framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a …

AI-Generated Text Is Overwhelming Institutions—Setting off a No-Win “Arms Race” with AI Detectors

  • Bruce Schneier and Nathan E. Sanders
  • The Conversation
  • February 5, 2026

This essay also appeared in Harvard Business Review, Japan Today, Scroll.in, and the Seattle Post-Intelligencer.
In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because so many were generated by artificial intelligence. As near as the editors could tell, many submitters pasted the magazine’s detailed story guidelines into an AI and sent in the results. And they weren’t alone. Other fiction magazines have also reported a high number of AI-generated submissions.

This is only one example of a ubiquitous trend. A legacy system relied on the difficulty of writing and cognition to limit volume. Generative AI overwhelms the system because the humans on the receiving end can’t keep up…