Essays: 2023 Archives

AI Could Improve Your Life by Removing Bottlenecks between What You Want and What You Get

  • The Conversation
  • December 21, 2023

Artificial intelligence is poised to upend much of society, removing human limitations inherent in many systems. One such limitation is information and logistical bottlenecks in decision-making.

Traditionally, people have been forced to reduce complex choices to a small handful of options that don’t do justice to their true desires. Artificial intelligence has the potential to remove that limitation. And it has the potential to drastically change how democracy functions.

AI researcher Tantum Collins and I, a public-interest technology scholar…

The Internet Enabled Mass Surveillance. AI Will Enable Mass Spying.

Spying has always been limited by the need for human labor. AI is going to change that.

  • Slate
  • December 4, 2023

Spying and surveillance are different but related things. If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.

Before the internet, putting someone under surveillance was expensive and time-consuming. You had to manually follow someone around, noting where they went, whom they talked to, what they purchased, what they did, and what they read. That world is forever gone. Our phones track our locations. Credit cards track our purchases. Apps track whom we talk to, and e-readers know what we read. Computers collect data about what we’re doing on them, and as both storage and processing have become cheaper, that data is increasingly saved and used. What was manual and individual has become bulk and mass. Surveillance has …

AI and Trust

  • Harvard Kennedy School Belfer Center
  • December 1, 2023

German translation

I trusted a lot today. I trusted my phone to wake me on time. I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car on the way. At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating. And the pilot of the plane I flew. And thousands of other people at the airport and on the plane, any of whom could have attacked me. And all the people who prepared and served my breakfast, and the entire food supply chain—any of them could have poisoned me. When I landed here, I trusted thousands more people: at the airport, on the road, in this building, in this room. And that was all before 10:30 this morning…

Ten Ways AI Will Change Democracy

In a new essay, Harvard Kennedy School’s Bruce Schneier goes beyond AI-generated disinformation to detail other novel ways in which AI might alter how democracy functions.

  • Harvard Kennedy School Ash Center
  • November 6, 2023

Artificial intelligence will change so many aspects of society, largely in ways that we cannot conceive of yet. Democracy, and the systems of governance that surround it, will be no exception. In this short essay, I want to move beyond the “AI-generated disinformation” trope and speculate on some of the ways AI will change how democracy functions—in both large and small ways.

When I survey how artificial intelligence might upend different aspects of modern society, democracy included, I look at four different dimensions of change: speed, scale, scope, and sophistication. Look for places where changes in degree result in changes of kind. Those are where the societal upheavals will happen…

Trustworthy AI Means Public AI

  • IEEE Security & Privacy
  • November-December 2023


Back in 1998, Sergey Brin and Larry Page introduced the Google search engine in an academic paper that questioned the ad-based business model of the time. They wrote: “We believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.” Although they didn’t use the word, their argument was that a search engine that could be paid to return particular URLs is fundamentally less trustworthy. “Advertising income often provides an incentive to provide poor quality search results.”…

Who’s Accountable for AI Usage in Digital Campaign Ads? Right Now, No One.

In a new essay, Bruce Schneier and Nathan Sanders argue that AI is poised to dramatically ramp up digital campaigns and outline how accountability measures across platforms, government, and the media can curb risks.

  • Bruce Schneier and Nathan Sanders
  • Harvard Kennedy School Ash Center
  • October 11, 2023

If an AI breaks the rules for you, does that count as breaking the rules? This is the essential question being taken up by the Federal Election Commission this month, and public input is needed to curtail the potential for AI to take US campaigns (even more) off the rails.

At issue is whether candidates using AI to create deepfaked media for political advertisements should be considered fraud or legitimate electioneering. That is, is it allowable to use AI image generators to create photorealistic images depicting Trump hugging Anthony Fauci? And is it allowable to use …

AI Disinformation Is a Threat to Elections—Learning to Spot Russian, Chinese and Iranian Meddling in Other Countries Can Help the US Prepare for 2024

  • The Conversation
  • September 29, 2023

This essay also appeared in Defense One, Fortune and Scientific American.

Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence.

Countries trying to influence each other’s elections entered a new era in 2016, when the Russians launched a series of social media disinformation campaigns targeting the U.S. presidential election. Over the next seven years, a number of countries—most prominently China and Iran—used social media to influence foreign elections, both in the U.S. and elsewhere in the world. There’s no reason to expect 2023 and 2024 to be any different…

The A.I. Wars Have Three Factions, and They All Crave Power

  • Bruce Schneier and Nathan Sanders
  • The New York Times
  • September 28, 2023

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in A.I. technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns…

Robots Are Already Killing People

The AI boom only underscores a problem that has existed for years.

  • Bruce Schneier and Davi Ottenheimer
  • The Atlantic
  • September 6, 2023

The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so 25-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar …

Nervous About ChatGPT? Try ChatGPT With a Hammer

Once generative AI can use real-world tools, it will become exponentially more capable. Companies and regulators need to get ahead of these rapidly evolving algorithms.

  • Bruce Schneier and Nathan Sanders
  • Wired
  • August 29, 2023

Last March, just two weeks after GPT-4 was released, researchers at Microsoft quietly announced a plan to compile millions of APIs—tools that can do everything from ordering a pizza to solving physics equations to controlling the TV in your living room—into a compendium that would be made accessible to large language models (LLMs). This was just one milestone in the race across industry and academia to find the best ways to teach LLMs how to manipulate tools, which would supercharge the potential of AI more than any of the impressive advancements we’ve seen to date…

Re-Imagining Democracy for the 21st Century, Possibly Without the Trappings of the 18th Century

  • The Conversation
  • August 7, 2023

This essay was also published by Chron, Phys.org, and UPI.

Japanese translation

Imagine that we’ve all—all of us, all of society—landed on some alien planet, and we have to form a government: clean slate. We don’t have any legacy systems from the U.S. or any other country. We don’t have any special or unique interests to perturb our thinking.

How would we govern ourselves?

It’s unlikely that we would use the systems we have today. The modern representative democracy was the best form of government that mid-18th-century technology could conceive of. The 21st century is a different place scientifically, technically and socially…

Six Ways That AI Could Change Politics

A new era of AI-powered domestic politics may be coming. Watch for these milestones to know when it’s arrived.

  • Bruce Schneier and Nathan E. Sanders
  • MIT Technology Review
  • July 28, 2023

This essay also appeared in The Economic Times.

ChatGPT was released just nine months ago, and we are still learning how it will affect our daily lives, our careers, and even our systems of self-governance.

But when it comes to how AI may threaten our democracy, much of the public conversation lacks imagination. People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images. We’re on the lookout for foreign governments that spread misinformation because we were traumatized by the 2016 US presidential election. And we worry that AI-generated opinions will swamp the political preferences of real people because we’ve seen political “astroturfing”—the use of fake online accounts to give the illusion of support for a policy—grow for decades…

Can You Trust AI? Here’s Why You Shouldn’t

  • Bruce Schneier and Nathan Sanders
  • The Conversation
  • July 20, 2023

This essay also appeared in CapeTalk, CT Insider, The Daily Star, The Economic Times, ForeignAffairs.co.nz, Fortune, GayNrd, Homeland Security News Wire, Kiowa County Press, MinnPost, Tech Xplore, UPI, and Yahoo News.

If you ask Alexa, Amazon’s voice assistant AI system, whether Amazon is a monopoly, it responds by saying it doesn’t know. It doesn’t take much to make it lambaste the other tech giants, but it’s silent about its own corporate parent’s misdeeds.

When Alexa responds in this way, it’s obvious that it is putting its developer’s interests ahead of yours. Usually, though, it’s not so obvious whom an AI system is serving. To avoid being exploited by these systems, people will need to learn to approach AI skeptically. That means deliberately constructing the input you give it and thinking critically about its output…

AI Microdirectives Could Soon Be Used for Law Enforcement

And they’re terrifying.

  • Jonathon W. Penney and Bruce Schneier
  • Slate
  • July 17, 2023

Imagine a future in which AIs automatically interpret—and enforce—laws.

All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.

Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow…

Will AI Hack Our Democracy?

  • Harvard Kennedy School Magazine
  • Summer 2023


Back in 2021, I wrote an essay titled “The Coming AI Hackers,” about how AI would hack our political, economic, and social systems. That ended up being a theme of my latest book, A Hacker’s Mind, and is something I have continued to think and write about.

I believe that AI will hack public policy in a way unlike anything that’s come before. It will change the speed, scale, scope, and sophistication of hacking, which in turn will change so many things that we can’t even imagine how it will all shake out. At a minimum, everything about public policy—how it is crafted, how it is implemented, what effects it has on individuals—will change in ways we cannot foresee…

Snowden Ten Years Later

  • RFC 9446
  • July 2023

In 2013 and 2014, I wrote extensively about new revelations regarding NSA surveillance based on the documents provided by Edward Snowden. But I had a more personal involvement as well.

I wrote the essay below in September 2013. The New Yorker agreed to publish it, but The Guardian asked me not to. It was scared of UK law enforcement and worried that this essay would reflect badly on it. And given that the UK police would raid its offices in July 2014, it had legitimate cause to be worried.

Now, ten years later, I offer this as a time capsule of what those early months of Snowden were like…

Artificial Intelligence Can’t Work Without Our Data

We should all be paid for it.

  • Barath Raghavan and Bruce Schneier
  • Politico
  • June 29, 2023

For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI.

Everyone is talking about these new AI technologies—like ChatGPT—and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge and consent, without licensing it, and are pocketing the proceeds…

AI Could Shore Up Democracy—Here’s One Way

  • Bruce Schneier and Nathan Sanders
  • The Conversation
  • June 20, 2023

This essay also appeared in ArcaMax, Big News Network, Biloxi Local News & Events, Chicago Sun-Times, Fast Company, GCN, Government Technology, Inkl, Macau Daily Times, MENAFN, Nextgov, and Yahoo.

It’s become fashionable to think of artificial intelligence as an inherently dehumanizing technology, a ruthless force of automation that has unleashed legions of virtual skilled laborers in faceless form. But what if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most?…

Build AI by the People, for the People

Washington needs to take AI investment out of the hands of private companies.

  • Bruce Schneier and Nathan E. Sanders
  • Foreign Policy
  • June 12, 2023

Artificial intelligence will bring great benefits to all of humanity. But do we really want to entrust this revolutionary technology solely to a small group of U.S. tech companies?

Silicon Valley has produced no small number of moral disappointments. Google retired its “don’t be evil” pledge before firing its star ethicist. Self-proclaimed “free speech absolutist” Elon Musk bought Twitter in order to censor political speech, retaliate against journalists, and ease access to the platform for Russian and Chinese propagandists. Facebook lied about how it enabled Russian interference in the 2016 U.S. presidential election and …

Big Tech Isn’t Prepared for A.I.’s Next Chapter

  • Bruce Schneier and Jim Waldo
  • Slate
  • May 30, 2023

In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn’t just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out. Training speeds have hugely increased, and the size of the models themselves has shrunk to the point that you can create and run them on a laptop. The world of A.I. research has dramatically changed…

Rethinking Democracy for the Age of AI

We need to recreate our system of governance for an era in which transformative technologies pose catastrophic risks as well as great promise.

  • Cyberscoop
  • May 10, 2023

This text is the transcript of a keynote speech delivered at the RSA Conference in San Francisco on April 25, 2023.

There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.

At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies…

Can We Build Trustworthy AI?

AI isn’t transparent, so we should all be preparing for a world where AI is not trustworthy, write two Harvard researchers.

  • Nathan Sanders and Bruce Schneier
  • Gizmodo
  • May 4, 2023

We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.

Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?…

Just Wait Until Trump Is a Chatbot

Artificial intelligence is already showing up in political ads. Soon, it will completely change the nature of campaigning.

  • Nathan E. Sanders and Bruce Schneier
  • The Atlantic
  • April 28, 2023

Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term with President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.

We should expect more of this kind of thing. The applications of AI to political advertising have not escaped campaigners, who are already “pressure testing” possible uses for the technology. In the 2024 presidential-election campaign, you can bank on the appearance of AI-generated personalized fundraising emails, text messages from chatbots urging you to vote, and maybe even some deepfaked campaign …

How Artificial Intelligence Can Aid Democracy

  • Bruce Schneier, Henry Farrell, and Nathan E. Sanders
  • Slate
  • April 21, 2023

Hebrew translation

There’s good reason to fear that A.I. systems like ChatGPT and GPT4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or obsessed by folies à deux relationships with machine personalities that don’t really exist.

These risks may be the fallout of a world where businesses deploy poorly tested A.I. systems in a battle for market share, each hoping to establish a monopoly.

But dystopia isn’t the only possible future. A.I. could advance the public good, not private profit, and bolster democracy instead of undermining it. That would require an A.I. not under the control of a large tech monopoly, but rather developed by government and available to all citizens. This public option is within reach if we want it…

Brace Yourself for a Tidal Wave of ChatGPT Email Scams

Thanks to large language models, a single scammer can run hundreds or thousands of cons in parallel, night and day, in every language under the sun.

  • Bruce Schneier and Barath Raghavan
  • Wired
  • April 4, 2023

Here’s an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly based on the details of the experiment.

But while it’s an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today’s human-run scams aren’t limited by the number of people who respond to the initial email contact. They’re limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that. A decade ago, one type of spam email had become a punchline on every late-night show: “I am the son of the late king of Nigeria in need of your assistance …” Nearly everyone had gotten one or a thousand of those emails, to the point that it seemed everyone must have known they were scams…

How AI Could Write Our Laws

ChatGPT and other AIs could supercharge the influence of lobbyists—but only if we let them

  • Nathan E. Sanders and Bruce Schneier
  • MIT Technology Review
  • March 14, 2023

Nearly 90% of the multibillion-dollar federal lobbying apparatus in the United States serves corporate interests. In some cases, the objective of that money is obvious. Google pours millions into lobbying on bills related to antitrust regulation. Big energy companies expect action whenever there is a move to end drilling leases for federal lands, in exchange for the tens of millions they contribute to congressional reelection campaigns.

But lobbying strategies are not always so blunt, and the interests involved are not always so obvious. Consider, for example, a 2013 …

Why the U.S. Should Not Ban TikTok

The ban would hurt Americans—and there are better ways to protect their data.

  • Bruce Schneier and Barath Raghavan
  • Foreign Policy
  • February 23, 2023

Congress is currently debating bills that would ban TikTok in the United States. We are here as technologists to tell you that this is a terrible idea and the side effects would be intolerable. Details matter. There are several ways Congress might ban TikTok, each with different efficacies and side effects. In the end, all the effective ones would destroy the free internet as we know it.

There’s no doubt that TikTok and ByteDance, the company that owns it, are shady. They, like most large corporations in China, operate at the pleasure of the Chinese government. They collect extreme levels of information about users. But they’re not alone: Many apps you use do the same, including Facebook and Instagram, along with seemingly innocuous apps that have no need for the data. Your data is bought and sold by data brokers you’ve never heard of who have few scruples about where the data ends up. They have digital dossiers on most people in the United States…

Everything Is Hackable

  • Slate
  • February 10, 2023

Every year, an army of hackers takes aim at the tax code.

The tax code is not computer code, but it is a series of rules—supposedly deterministic algorithms—that take data about your income and determine the amount of money you owe. This code has vulnerabilities, more commonly known as loopholes. It has exploits; those are tax avoidance strategies. There is an entire industry of black-hat hackers who exploit vulnerabilities in the tax code: We call them accountants and tax attorneys.

Hacking isn’t limited to computer systems, or even technology. Any system of rules can be hacked. In general terms, a hack is something that a system permits, but that is unanticipated and unwanted by its designers. It’s unplanned: a mistake in the system’s design or coding. It’s clever. It’s a subversion, or an exploitation. It’s a cheat, but only sort of. Just as a computer vulnerability can be exploited over the internet because the code permits it, a tax loophole is "allowed" by the system because it follows the rules, even though it might subvert the intent of those rules…

We Don’t Need to Reinvent Our Democracy to Save It from AI

  • Bruce Schneier and Nathan Sanders
  • Harvard Kennedy School Belfer Center
  • February 9, 2023

When is it time to start worrying about artificial intelligence interfering in our democracy? Maybe when an AI writes a letter to The New York Times opposing the regulation of its own technology.

That happened last month. And because the letter was responding to an essay we wrote, we’re starting to get worried. And while the technology can be regulated, the real solution lies in recognizing that the problem is human actors—and those we can do something about.

Our essay argued that the much heralded launch of the AI chatbot ChatGPT, a system that can generate text realistic enough to appear to be written by a human, poses significant threats to democratic processes. The ability to produce high quality political messaging quickly and at scale, if combined with AI-assisted capabilities to strategically target those messages to policymakers and the public, could become a powerful accelerant of an already sprawling and poorly constrained force in modern democratic life: lobbying…

The Big Idea: Bruce Schneier

  • Whatever
  • February 7, 2023

The world has systems. Systems have rules. Or are they more like guidelines? In today’s Big Idea for A Hacker’s Mind, security expert Bruce Schneier takes a look at systems, how they are vulnerable, and what that fact means for all of us.

BRUCE SCHNEIER:

Hacking isn’t limited to computer systems, or even technology. Any system can be hacked.

What sorts of system? Any system of rules, really.

Think about the tax code. It’s not computer code, but it’s a series of rules—supposedly deterministic algorithms—that take data about your income and determine the amount of money you owe. This code has vulnerabilities, more commonly known as loopholes. It has exploits; those are tax avoidance strategies. And there is an entire industry of black-hat hackers who exploit vulnerabilities in the tax code: we call them accountants and tax attorneys…

Opinion: What Peter Thiel and the ‘Pudding Guy’ revealed

  • CNN
  • February 7, 2023

The Roth IRA is a retirement account allowed by a 1997 law. It’s intended for middle-class investors and has limits on both the investor’s income level and the amount that can be invested.

But billionaire Peter Thiel and others found a hack. As one of the founders of PayPal, Thiel was able—entirely legally—to use an investment of less than $2,000 to buy 1.7 million shares of the company at $0.001 per share, turning it into $5 billion in 20 years—all forever tax-free, according to ProPublica. (Thiel’s spokesperson didn’t respond to ProPublica’s questions about its 2021 report.)…

How ChatGPT Hijacks Democracy

  • Nathan E. Sanders and Bruce Schneier
  • The New York Times
  • January 15, 2023

Launched just weeks ago, ChatGPT is already threatening to upend how we draft everyday communications like emails, college essays and myriad other forms of writing.

Created by the company OpenAI, ChatGPT is a chatbot that can automatically respond to written prompts in a manner that is sometimes eerily close to human.

But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes—not through voting, but through lobbying…
