Latest Essays

It’s the End of the Web as We Know It

A great public resource is at risk of being destroyed.

  • Judith Donath and Bruce Schneier
  • The Atlantic
  • April 24, 2024

The web has become so interwoven with everyday life that it is easy to forget what an extraordinary accomplishment and treasure it is. In just a few decades, much of human knowledge has been collectively written up and made available to anyone with an internet connection.

But all of this is coming to an end. The advent of AI threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences.

To understand why, you must understand publishing. Its core task is to connect writers to an audience. Publishers work as gatekeepers, filtering candidates and then amplifying the chosen ones. Hoping to be selected, writers shape their work in various ways. This article might be written very differently in an academic publication, for example, and publishing it here entailed pitching an editor, revising multiple drafts for style and focus, and so on…

Backdoor in XZ Utils That Almost Happened

The recent cybersecurity catastrophe that wasn’t reveals an untenable situation, one being exploited by malicious actors.

  • Lawfare
  • April 9, 2024

Last week, the internet dodged a major nation-state attack that would have had catastrophic cybersecurity repercussions worldwide. It’s a catastrophe that didn’t happen, so it won’t get much attention—but it should. There’s an important moral to the story of the attack and its discovery: The security of the global internet depends on countless obscure pieces of software written and maintained by even more obscure unpaid, distractible, and sometimes vulnerable volunteers. It’s an untenable situation, and one that is being exploited by malicious actors. Yet precious little is being done to remedy it…

In Memoriam: Ross Anderson, 1956–2024

  • Communications of the ACM
  • April 9, 2024

Ross Anderson unexpectedly passed away in his sleep on March 28th at his home in Cambridge. He was 67.

I can’t remember when I first met Ross. It was well before 2008, when we created the Security and Human Behavior workshop. It was before 2001, when we created the Workshop on the Economics of Information Security (okay, he created that one, I just helped). It was before 1998, when we first wrote about the insecurity of key escrow systems. In 1996, I was one of the people he brought to the Newton Institute at Cambridge University, for the six-month cryptography residency program he ran (I made a mistake not staying the whole time)—so it was before then as well…

Public AI as an Alternative to Corporate AI

  • New America
  • March 14, 2024

This essay appeared as part of a round table on “Power and Governance in the Age of AI.”

The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public. Given how transformative this technology will be for the world, this is a problem.

To benefit society as a whole, we need an AI public option—not to replace corporate AI but to serve as a counterbalance—as well as stronger democratic institutions to govern all of AI. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete…

Let’s Not Make the Same Mistakes with AI That We Made with Social Media

Social media’s unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies.

  • Nathan E. Sanders and Bruce Schneier
  • MIT Technology Review
  • March 13, 2024

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society…

How Public AI Can Strengthen Democracy

  • Nathan Sanders, Bruce Schneier, and Norman Eisen
  • Brookings
  • March 4, 2024

With the world’s focus turning to misinformation, manipulation, and outright propaganda ahead of the 2024 U.S. presidential election, we know that democracy has an AI problem. But we’re learning that AI has a democracy problem, too. Both challenges must be addressed for the sake of democratic governance and public protection.

Just three Big Tech firms (Microsoft, Google, and Amazon) control about two-thirds of the global market for the cloud computing resources used to train and deploy AI models. They have a lot of the AI talent and the capacity for large-scale innovation, and they face few public regulations for their products and activities…

How the “Frontier” Became the Slogan of Uncontrolled AI

  • Nathan Sanders and Bruce Schneier
  • Jacobin
  • February 28, 2024

Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive the next era of growth, wealth, and human flourishing. It’s a scary metaphor. Throughout American history, the drive for expansion and the very concept of terrain up for grabs—land grabs, gold rushes, new frontiers—have provided a permission structure for imperialism and exploitation. This could easily hold true for AI.

This isn’t the first time the concept of a frontier has been used as a metaphor for AI, or technology in general. As early as 2018, the powerful foundation models powering cutting-edge applications like chatbots…

Building a Cyber Insurance Backstop Is Harder Than It Sounds

Insurers argue that a government backstop would help them cover catastrophic cyberattacks, but it’s not so simple.

  • Bruce Schneier and Josephine Wolff
  • Lawfare
  • February 26, 2024

In the first week of January, the pharmaceutical giant Merck quietly settled its years-long lawsuit over whether or not its property and casualty insurers would cover a $700 million claim filed after the devastating NotPetya cyberattack in 2017. The malware ultimately infected more than 40,000 of Merck’s computers, which significantly disrupted the company’s drug and vaccine production. After Merck filed its $700 million claim, the pharmaceutical giant’s insurers argued that they were not required to cover the malware’s damage because the cyberattack was widely attributed to the Russian government and therefore was excluded from standard property and casualty insurance coverage as a “hostile or warlike act.”…

CFPB’s Proposed Data Rules Would Improve Security, Privacy and Competition

By giving the public greater control over their banking data, the Consumer Financial Protection Bureau’s proposal would deal a blow to data brokers.

  • Barath Raghavan and Bruce Schneier
  • Cyberscoop
  • January 26, 2024

In October, the Consumer Financial Protection Bureau (CFPB) proposed a set of rules that, if implemented, would transform how financial institutions handle personal data about their customers. The rules put control of that data back in the hands of ordinary Americans, while at the same time undermining the data broker economy and increasing customer choice and competition. Beyond these economic effects, the rules have important data security benefits.

The CFPB’s rules align with a key security idea: the decoupling principle. By separating which companies see what parts of our data, and in what contexts, we can gain control over data about ourselves (improving privacy) and harden cloud infrastructure against hacks (improving security). Officials at the CFPB have described the new rules as an attempt to accelerate a shift toward “open banking,” and after an initial comment period on the new rules closed late last year, Rohit Chopra, the CFPB’s director, …

Don’t Talk to People Like They’re Chatbots

AI could make our human interactions blander, more biased, or ruder.

  • Albert Fox Cahn and Bruce Schneier
  • The Atlantic
  • January 17, 2024

For most of history, communicating with a computer has not been like communicating with a person. In their earliest years, computers required carefully constructed instructions, delivered through punch cards; then came a command-line interface, followed by menus and options and text boxes. If you wanted results, you needed to learn the computer’s language.

This is beginning to change. Large language models—the technology undergirding modern chatbots—allow users to interact with computers through natural conversation, an innovation that introduces some baggage from human-to-human exchanges. Early on in our respective explorations of ChatGPT, the two of us found ourselves typing a word that we’d never said to a computer before: “Please.” The syntax of civility has crept into nearly every aspect of our encounters; we speak to this algebraic assemblage as if it were a person—even when we know that …
