Latest Essays

Rethinking Democracy for the Age of AI

We need to recreate our system of governance for an era in which transformative technologies pose catastrophic risks as well as great promise.

  • Cyberscoop
  • May 10, 2023

This essay is the transcript of a keynote speech delivered at the RSA Conference in San Francisco on April 25, 2023.

There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.

At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies…

Can We Build Trustworthy AI?

AI isn't transparent, so we should all be preparing for a world where AI is not trustworthy, write two Harvard researchers.

  • Nathan Sanders and Bruce Schneier
  • Gizmodo
  • May 4, 2023

We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.

Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?…

Just Wait Until Trump Is a Chatbot

Artificial intelligence is already showing up in political ads. Soon, it will completely change the nature of campaigning.

  • Nathan E. Sanders and Bruce Schneier
  • The Atlantic
  • April 28, 2023

Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term for President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.

We should expect more of this kind of thing. The applications of AI to political advertising have not escaped campaigners, who are already “pressure testing” possible uses for the technology. In the 2024 presidential-election campaign, you can bank on the appearance of AI-generated personalized fundraising emails, text messages from chatbots urging you to vote, and maybe even some deepfaked campaign …

How Artificial Intelligence Can Aid Democracy

  • Bruce Schneier, Henry Farrell, and Nathan E. Sanders
  • Slate
  • April 21, 2023

Hebrew Translation

There’s good reason to fear that A.I. systems like ChatGPT and GPT-4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or obsessed by folies à deux relationships with machine personalities that don’t really exist.

These risks may be the fallout of a world where businesses deploy poorly tested A.I. systems in a battle for market share, each hoping to establish a monopoly.

But dystopia isn’t the only possible future. A.I. could advance the public good, not private profit, and bolster democracy instead of undermining it. That would require an A.I. not under the control of a large tech monopoly, but rather developed by government and available to all citizens. This public option is within reach if we want it…

Brace Yourself for a Tidal Wave of ChatGPT Email Scams

Thanks to large language models, a single scammer can run hundreds or thousands of cons in parallel, night and day, in every language under the sun.

  • Bruce Schneier and Barath Raghavan
  • Wired
  • April 4, 2023

Here’s an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly with the details of its design.

But while it’s an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today’s human-run scams aren’t limited by the number of people who respond to the initial email contact. They’re limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that. A decade ago, one type of spam email had become a punchline on every late-night show: “I am the son of the late king of Nigeria in need of your assistance …” Nearly everyone had gotten one or a thousand of those emails, to the point that it seemed everyone must have known they were scams…

How AI Could Write Our Laws

ChatGPT and other AIs could supercharge the influence of lobbyists—but only if we let them.

  • Nathan E. Sanders and Bruce Schneier
  • MIT Technology Review
  • March 14, 2023

Nearly 90% of the multibillion-dollar federal lobbying apparatus in the United States serves corporate interests. In some cases, the objective of that money is obvious. Google pours millions into lobbying on bills related to antitrust regulation. Big energy companies expect action whenever there is a move to end drilling leases for federal lands, in exchange for the tens of millions they contribute to congressional reelection campaigns.

But lobbying strategies are not always so blunt, and the interests involved are not always so obvious. Consider, for example, a 2013 …

Why the U.S. Should Not Ban TikTok

The ban would hurt Americans—and there are better ways to protect their data.

  • Bruce Schneier and Barath Raghavan
  • Foreign Policy
  • February 23, 2023

Congress is currently debating bills that would ban TikTok in the United States. We are here as technologists to tell you that this is a terrible idea and the side effects would be intolerable. Details matter. There are several ways Congress might ban TikTok, each with different efficacies and side effects. In the end, all the effective ones would destroy the free internet as we know it.

There’s no doubt that TikTok and ByteDance, the company that owns it, are shady. They, like most large corporations in China, operate at the pleasure of the Chinese government. They collect extreme levels of information about users. But they’re not alone: Many apps you use do the same, including Facebook and Instagram, along with seemingly innocuous apps that have no need for the data. Your data is bought and sold by data brokers you’ve never heard of who have few scruples about where the data ends up. They have digital dossiers on most people in the United States…

Everything Is Hackable

  • Slate
  • February 10, 2023

Every year, an army of hackers takes aim at the tax code.

The tax code is not computer code, but it is a series of rules—supposedly deterministic algorithms—that take data about your income and determine the amount of money you owe. This code has vulnerabilities, more commonly known as loopholes. It has exploits; those are tax avoidance strategies. There is an entire industry of black-hat hackers who exploit vulnerabilities in the tax code: We call them accountants and tax attorneys.

Hacking isn’t limited to computer systems, or even technology. Any system of rules can be hacked. In general terms, a hack is something that a system permits, but that is unanticipated and unwanted by its designers. It’s unplanned: a mistake in the system’s design or coding. It’s clever. It’s a subversion, or an exploitation. It’s a cheat, but only sort of. Just as a computer vulnerability can be exploited over the internet because the code permits it, a tax loophole is “allowed” by the system because it follows the rules, even though it might subvert the intent of those rules…
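The analogy can be made concrete with a toy sketch. The rules, rates, and numbers below are invented for illustration and resemble no real tax code: a deterministic rule computes the tax owed from income data, and a “loophole” emerges because the rule permits something its designers presumably never intended.

```python
# Toy "tax code": a deterministic rule mapping income data to tax owed.
# All rules and rates here are hypothetical, for illustration only.

def tax_owed(income: int, deductions: int) -> int:
    """Flat 30% tax on net income, in whole dollars."""
    taxable = max(income - deductions, 0)
    return taxable * 30 // 100

# Following the spirit of the rules: modest, legitimate deductions.
honest = tax_owed(100_000, 10_000)   # 27000

# The "hack": the rule never checks what a deduction is *for*, so
# relabeling personal spending as deductible expenses is permitted
# by the code even though it subverts the intent of the code.
hacked = tax_owed(100_000, 60_000)   # 12000
```

The exploit isn’t a bug in the function; the code runs exactly as written. The vulnerability is in the rule’s design, which is the point of the analogy.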

We Don’t Need to Reinvent Our Democracy to Save It from AI

  • Bruce Schneier and Nathan Sanders
  • Harvard Kennedy School Belfer Center
  • February 9, 2023

When is it time to start worrying about artificial intelligence interfering in our democracy? Maybe when an AI writes a letter to The New York Times opposing the regulation of its own technology.

That happened last month. And because the letter was responding to an essay we wrote, we’re starting to get worried. And while the technology can be regulated, the real solution lies in recognizing that the problem is human actors—and those we can do something about.

Our essay argued that the much-heralded launch of the AI chatbot ChatGPT, a system that can generate text realistic enough to appear to be written by a human, poses significant threats to democratic processes. The ability to produce high-quality political messaging quickly and at scale, if combined with AI-assisted capabilities to strategically target those messages to policymakers and the public, could become a powerful accelerant of an already sprawling and poorly constrained force in modern democratic life: lobbying…

The Big Idea: Bruce Schneier

  • Whatever
  • February 7, 2023

The world has systems. Systems have rules. Or are they more like guidelines? In today’s Big Idea for A Hacker’s Mind, security expert Bruce Schneier takes a look at systems, how they are vulnerable, and what that fact means for all of us.

BRUCE SCHNEIER:

Hacking isn’t limited to computer systems, or even technology. Any system can be hacked.

What sorts of systems? Any system of rules, really.

Think about the tax code. It’s not computer code, but it’s a series of rules—supposedly deterministic algorithms—that take data about your income and determine the amount of money you owe. This code has vulnerabilities, more commonly known as loopholes. It has exploits; those are tax avoidance strategies. And there is an entire industry of black-hat hackers who exploit vulnerabilities in the tax code: we call them accountants and tax attorneys…
