Latest Essays

Ten Ways AI Will Change Democracy

In a new essay, Harvard Kennedy School’s Bruce Schneier goes beyond AI-generated disinformation to detail other novel ways in which AI might alter how democracy functions.

  • Harvard Kennedy School Ash Center
  • November 6, 2023

Artificial intelligence will change so many aspects of society, largely in ways that we cannot conceive of yet. Democracy, and the systems of governance that surround it, will be no exception. In this short essay, I want to move beyond the “AI-generated disinformation” trope and speculate on some of the ways AI will change how democracy functions—in both large and small ways.

When I survey how artificial intelligence might upend different aspects of modern society, democracy included, I look at four different dimensions of change: speed, scale, scope, and sophistication. Look for places where changes in degree result in changes of kind. Those are where the societal upheavals will happen…

Decoupling for Security

Decoupling our identities from our data and actions could safeguard our secrets

  • Barath Raghavan and Bruce Schneier
  • IEEE Spectrum
  • November 5, 2023

Whether we like it or not, we all use the cloud to communicate and to store and process our data. We use dozens of cloud services, sometimes indirectly and unwittingly. We do so because the cloud brings real benefits to individuals and organizations alike. We can access our data across multiple devices, communicate with anyone from anywhere, and command a remote data center’s worth of power from a handheld device.

But using the cloud means our security and privacy now depend on cloud providers. Remember: the cloud is just another way of saying “someone else’s computer.” Cloud providers are single points of failure and prime targets for hackers to scoop up everything from proprietary corporate communications to our personal photo albums and financial documents…

Trustworthy AI Means Public AI

  • IEEE Security & Privacy
  • November-December 2023


Back in 1998, Sergey Brin and Larry Page introduced the Google search engine in an academic paper that questioned the ad-based business model of the time. They wrote: “We believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.” Although they didn’t use the word, their argument was that a search engine that could be paid to return particular URLs is fundamentally less trustworthy. “Advertising income often provides an incentive to provide poor quality search results.”…

Who’s Accountable for AI Usage in Digital Campaign Ads? Right Now, No One.

In a new essay, Bruce Schneier and Nathan Sanders argue that AI is poised to dramatically ramp up digital campaigns and outline how accountability measures across platforms, government, and the media can curb risks.

  • Bruce Schneier and Nathan Sanders
  • Harvard Kennedy School Ash Center
  • October 11, 2023

If an AI breaks the rules for you, does that count as breaking the rules? This is the essential question being taken up by the Federal Election Commission this month, and public input is needed to curtail the potential for AI to take US campaigns (even more) off the rails.

At issue is whether candidates using AI to create deepfaked media for political advertisements should be considered fraud or legitimate electioneering. That is, is it allowable to use AI image generators to create photorealistic images depicting Trump hugging Anthony Fauci? And is it allowable to use …

AI Disinformation Is a Threat to Elections—Learning to Spot Russian, Chinese and Iranian Meddling in Other Countries Can Help the US Prepare for 2024

  • The Conversation
  • September 29, 2023

This essay also appeared in Defense One, Fortune, and Scientific American.

Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence.

Countries trying to influence each other’s elections entered a new era in 2016, when the Russians launched a series of social media disinformation campaigns targeting the U.S. presidential election. Over the next seven years, a number of countries—most prominently China and Iran—used social media to influence foreign elections, both in the U.S. and elsewhere in the world. There’s no reason to expect 2023 and 2024 to be any different…

The A.I. Wars Have Three Factions, and They All Crave Power

  • Bruce Schneier and Nathan Sanders
  • The New York Times
  • September 28, 2023

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in A.I. technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns…

Robots Are Already Killing People

The AI boom only underscores a problem that has existed for years.

  • Bruce Schneier and Davi Ottenheimer
  • The Atlantic
  • September 6, 2023

The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so 25-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar …

Nervous About ChatGPT? Try ChatGPT With a Hammer

Once generative AI can use real-world tools, it will become exponentially more capable. Companies and regulators need to get ahead of these rapidly evolving algorithms.

  • Bruce Schneier and Nathan Sanders
  • Wired
  • August 29, 2023

Last March, just two weeks after GPT-4 was released, researchers at Microsoft quietly announced a plan to compile millions of APIs—tools that can do everything from ordering a pizza to solving physics equations to controlling the TV in your living room—into a compendium that would be made accessible to large language models (LLMs). This was just one milestone in the race across industry and academia to find the best ways to teach LLMs how to manipulate tools, which would supercharge the potential of AI more than any of the impressive advancements we’ve seen to date…

Re-Imagining Democracy for the 21st Century, Possibly Without the Trappings of the 18th Century

  • The Conversation
  • August 7, 2023

This essay was also published by Chron, Phys.org, and UPI.


Imagine that we’ve all—all of us, all of society—landed on some alien planet, and we have to form a government: clean slate. We don’t have any legacy systems from the U.S. or any other country. We don’t have any special or unique interests to perturb our thinking.

How would we govern ourselves?

It’s unlikely that we would use the systems we have today. The modern representative democracy was the best form of government that mid-18th-century technology could conceive of. The 21st century is a different place scientifically, technically and socially…

Six Ways That AI Could Change Politics

A new era of AI-powered domestic politics may be coming. Watch for these milestones to know when it’s arrived.

  • Bruce Schneier and Nathan E. Sanders
  • MIT Technology Review
  • July 28, 2023

This essay also appeared in The Economic Times.

ChatGPT was released just nine months ago, and we are still learning how it will affect our daily lives, our careers, and even our systems of self-governance.

But when it comes to how AI may threaten our democracy, much of the public conversation lacks imagination. People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images. We’re on the lookout for foreign governments that spread misinformation because we were traumatized by the 2016 US presidential election. And we worry that AI-generated opinions will swamp the political preferences of real people because we’ve seen political “astroturfing”—the use of fake online accounts to give the illusion of support for a policy—grow for decades…
