Essays: 2024 Archives

Trust Issues

The closed corporate ecosystem is the problem.

  • Bruce Schneier and Nathan E. Sanders
  • Boston Review
  • December 6, 2024

This essay appeared as a response to Evgeny Morozov in Boston Review’s forum, “The AI We Deserve.”

For a technology that seems startling in its modernity, AI sure has a long history. Google Translate, OpenAI chatbots, and Meta AI image generators are built on decades of advancements in linguistics, signal processing, statistics, and other fields going back to the early days of computing—and, often, on seed funding from the U.S. Department of Defense. But today’s tools are hardly the intentional product of the diverse generations of innovators that came before. We agree with Morozov that the “refuseniks,” as he …

The Apocalypse That Wasn’t: AI Was Everywhere in 2024’s Elections, but Deepfakes and Misinformation Were Only Part of the Picture

  • Bruce Schneier and Nathan E. Sanders
  • The Conversation
  • December 4, 2024

This essay also appeared in Cascadia Daily News, Commonwealth Beacon, Fast Company, Gizmodo, and the Seattle Post-Intelligencer.

It’s been the biggest year for elections in human history: 2024 is a “super-cycle” year in which 3.7 billion eligible voters in 72 countries had the chance to go to the polls. These are also the first AI elections, where many feared that deepfakes and artificial intelligence-generated misinformation would overwhelm the democratic processes. As 2024 draws to a close, it’s instructive to take stock of how democracy did…

Algorithms Are Coming for Democracy—but It’s Not All Bad

  • Bruce Schneier and Nathan E. Sanders
  • Wired
  • November 27, 2024

In 2025, AI is poised to change every aspect of democratic politics—but it won’t necessarily be for the worse.

India’s prime minister, Narendra Modi, has used AI to translate his speeches for his multilingual electorate in real time, demonstrating how AI can help diverse democracies to be more inclusive. Presidential candidates in South Korea used AI avatars in electioneering, enabling them to answer thousands of voters’ questions simultaneously. We are also starting to see AI tools aid fundraising and get-out-the-vote efforts. AI techniques are starting to augment more traditional polling methods, helping campaigns get cheaper and faster data. And congressional candidates have started using AI robocallers to engage voters on issues. In 2025, these trends will continue. AI doesn’t need to be superior to human experts to augment the labor of an overworked canvasser, or to write ad copy similar to that of a junior campaign staffer or volunteer. Politics is competitive, and any technology that can bestow an advantage, or even just garner attention, will be used…

The SEC Whistleblower Program Is Dominating Regulatory Enforcement

As the program, which cuts whistleblowers in on enforcement awards, grows exponentially, conflicts of interest are emerging. AI could make it worse.

  • Bruce Schneier and Nathan Sanders
  • The American Prospect
  • October 18, 2024

Tax farming is the practice of licensing tax collection to private contractors. Used heavily in ancient Rome, it’s largely fallen out of practice because of the obvious conflict of interest between the state and the contractor. Because tax farmers are primarily interested in short-term revenue, they have no problem abusing taxpayers and making things worse for them in the long term. Today, the U.S. Securities and Exchange Commission (SEC) is engaged in a modern-day version of tax farming. And the potential for abuse will grow when the farmers start using artificial intelligence…

AI Could Still Wreck the Presidential Election

Regulators have largely taken a hands-off approach to the use of AI in political ads—and the consequences may be severe.

  • Nathan E. Sanders and Bruce Schneier
  • The Atlantic
  • September 27, 2024

For years now, AI has undermined the public’s ability to trust what it sees, hears, and reads. The Republican National Committee released a provocative ad offering an “AI-generated look into the country’s possible future if Joe Biden is re-elected,” showing apocalyptic, machine-made images of ruined cityscapes and chaos at the border. Fake robocalls purporting to be from Biden urged New Hampshire residents not to vote in the 2024 primary election. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an …

Israel’s Pager Attacks Have Changed the World

  • The New York Times
  • September 22, 2024

Israel’s brazen attacks on Hezbollah last week, in which hundreds of pagers and two-way radios exploded and killed at least 37 people, graphically illustrated a threat that cybersecurity experts have been warning about for years: Our international supply chains for computerized equipment leave us vulnerable. And we have no good means to defend ourselves.

Though the deadly operations were stunning, none of the elements used to carry them out were particularly new. The tactics employed by Israel, which has neither confirmed nor denied any role, to hijack an international supply chain and embed plastic explosives in Hezbollah devices have been used for years. What’s new is that Israel put them together in such a devastating and extravagantly public fashion, bringing into stark relief what the future of great power competition will look like—in peacetime, wartime and the ever expanding …

Let’s Start Treating Cybersecurity Like It Matters

That means a real investigatory board for cyber incidents, not the hamstrung one we’ve got now.

  • Bruce Schneier and Tarah Wheeler
  • Defense One
  • August 2, 2024

When an airplane crashes, impartial investigatory bodies leap into action, empowered by law to unearth what happened and why. But there is no such empowered and impartial body to investigate the CrowdStrike incident that recently unfolded, ensnaring banks, airlines, and emergency services to the tune of billions of dollars. We need one. To be sure, there is the White House’s Cyber Safety Review Board. On March 20, the CSRB released a report into last summer’s intrusion by a Chinese hacking group into Microsoft’s cloud environment, where it compromised the U.S. Department of Commerce, State Department, congressional offices, and several associated companies. But the board’s report—well-researched and containing some good and actionable recommendations—shows how it suffers from its lack of subpoena power and its political unwillingness to generalize from specific incidents to the broader industry…

The CrowdStrike Outage and Market-Driven Brittleness

The outage is another consequence of companies’ sacrifice of resilience for expediency.

  • Barath Raghavan and Bruce Schneier
  • Lawfare
  • July 25, 2024

Friday’s massive internet outage, caused by a mid-sized tech company called CrowdStrike, disrupted major airlines, hospitals, and banks. Nearly 7,000 flights were canceled. It took down 911 systems and factories, courthouses, and television stations. Tallying the total cost will take time. The outage affected more than 8.5 million Windows computers, and the cost will surely be in the billions of dollars—easily matching the most costly previous cyberattacks, such as NotPetya.

The catastrophe is yet another reminder of how brittle global internet infrastructure is. It’s complex, deeply interconnected, and filled with single points of failure. As we experienced last week, a single problem in a small piece of software can take large swaths of the internet and global economy offline…

Book Review: The Business of Secrets

The Business of Secrets: Adventures in Selling Encryption Around the World by Fred Kinch (May 24, 2024)

  • AFIO
  • July 11, 2024


From the vantage point of today, it’s surreal reading about the commercial cryptography business in the 1970s. Nobody knew anything. The manufacturers didn’t know whether the cryptography they sold was any good. The customers didn’t know whether the crypto they bought was any good. Everyone pretended to know, thought they knew, or knew better than to even try to know.

The Business of Secrets is the self-published memoirs of Fred Kinch. He was founder and vice president—mostly of sales—at a US cryptographic hardware company called Datotek, from the company’s founding in 1969 until 1982. It’s mostly a disjointed collection of stories about the difficulties of selling to governments worldwide, along with descriptions of the highs and (mostly) lows of foreign airlines, foreign hotels, and foreign travel in general. But it’s also about encryption…

The Hacking of Culture and the Creation of Socio-Technical Debt

  • Kim Córdova and Bruce Schneier
  • e-flux
  • June 18, 2024

Culture is increasingly mediated through algorithms. These algorithms have splintered the organization of culture, a result of states and tech companies vying for influence over mass audiences. One byproduct of this splintering is a shift from imperfect but broad cultural narratives to a proliferation of niche groups, who are defined by ideology or aesthetics instead of nationality or geography. This change reflects a material shift in the relationship between collective identity and power, and illustrates how states no longer have exclusive domain over either. Today, both power and culture are increasingly corporate…

Using AI for Political Polling

Will AI-assisted polls soon replace more traditional techniques?

  • Aaron Berger, Bruce Schneier, Eric Gong, and Nathan Sanders
  • Harvard Kennedy School Ash Center
  • June 11, 2024

Public polling is a critical function of modern political campaigns and movements, but it isn’t what it once was. Recent US election cycles have produced copious postmortems explaining both the successes and the flaws of public polling. There are two main reasons polling fails.

First, nonresponse has skyrocketed. It’s radically harder to reach people than it used to be. Few people fill out surveys that come in the mail anymore. Few people answer their phone when a stranger calls. Pew Research reported that 36% of the people they called in 1997 would talk to them, but only 6% by 2018. Pollsters worldwide have faced similar challenges…

Indian Election Was Awash in Deepfakes—but AI Was a Net Positive for Democracy

  • Vandinika Shukla and Bruce Schneier
  • The Conversation
  • June 7, 2024

This essay also appeared in Channel News Asia and PBS News.

As India concluded the world’s largest election on June 5, 2024, with over 640 million votes counted, observers could assess how the various parties and factions used artificial intelligence technologies—and what lessons that holds for the rest of the world.

The campaigns made extensive use of AI, including deepfake impersonations of candidates, celebrities and dead politicians. By some estimates, millions of Indian voters viewed deepfakes.

But, despite fears of widespread disinformation, for …

How Online Privacy Is Like Fishing

In the wake of a Microsoft spying controversy, it’s time for an ecosystem perspective

  • Barath Raghavan and Bruce Schneier
  • IEEE Spectrum
  • June 4, 2024

German translation

Microsoft recently caught state-backed hackers using its generative AI tools to help with their attacks. In the security community, the immediate questions weren’t about how hackers were using the tools (that was utterly predictable), but about how Microsoft figured it out. The natural conclusion was that Microsoft was spying on its AI users, looking for harmful hackers at work.

Some pushed back at characterizing Microsoft’s actions as “spying.” Of course cloud service providers monitor what users are doing. And because we expect Microsoft to be doing something like this, it’s not fair to call it spying…

How AI Will Change Democracy

Artificial intelligence is coming for our democratic politics, from how politicians campaign to how the legal system functions.

  • Cyberscoop
  • May 28, 2024

This article is adapted from a keynote speech delivered at the RSA Conference in San Francisco on May 7, 2024.

I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society. Not by doing new things. But mostly by doing things that are already being done by humans, perfectly competently.

Replacing humans with AIs isn’t necessarily interesting. But when an AI takes over a human task, the task changes.

In particular, there are potential changes over four dimensions: Speed, scale, scope and sophistication. The problem with AIs trading stocks isn’t that they’re better than humans—it’s that they’re faster. But computers are better at chess and Go because they use more sophisticated strategies than humans. We’re worried about AI-controlled social media accounts because they operate on a superhuman scale…

Seeing Like a Data Structure

  • Barath Raghavan and Bruce Schneier
  • Harvard Kennedy School Belfer Center
  • May 25, 2024

Technology was once simply a tool—and a small one at that—used to amplify human intent and capacity. That was the story of the industrial revolution: we could control nature and build large, complex human societies, and the more we employed and mastered technology, the better things got. We don’t live in that world anymore. Not only has technology become entangled with the structure of society, but we also can no longer see the world around us without it. The separation is gone, and the control we thought we once had has revealed itself as a mirage. We’re in a transitional period of history right now…

Lattice-Based Cryptosystems and Quantum Cryptanalysis

Quantum computers are probably coming—and when they arrive, they will, most likely, be able to break our standard public-key cryptography algorithms.

  • Communications of the ACM
  • May 25, 2024

Quantum computers are probably coming, though we don’t know when—and when they arrive, they will, most likely, be able to break our standard public-key cryptography algorithms. In anticipation of this possibility, cryptographers have been working on quantum-resistant public-key algorithms. The National Institute of Standards and Technology (NIST) has been hosting a competition since 2017, and there already are several proposed standards. Most of these are based on lattice problems.

The mathematics of lattice cryptography revolve around combining sets of vectors—that’s the lattice—in a multi-dimensional space. These lattices are filled with multi-dimensional periodicities. The …
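The construction the excerpt describes can be written out concretely. This is the standard textbook definition, not taken from the essay: a lattice is the set of all integer combinations of a basis of vectors, and its periodicity comes from the integer coefficients.

```latex
\mathcal{L}(\mathbf{B}) \;=\; \left\{ \sum_{i=1}^{n} z_i \mathbf{b}_i \;:\; z_i \in \mathbb{Z} \right\},
\qquad \mathbf{B} = (\mathbf{b}_1, \ldots, \mathbf{b}_n) \text{ linearly independent in } \mathbb{R}^n
```

The presumed hardness of problems over this structure—for example, the Shortest Vector Problem, finding the shortest nonzero vector in $\mathcal{L}(\mathbf{B})$ given only a “bad” (highly skewed) basis—is what the proposed standards rest on, and no efficient quantum algorithm for these problems is currently known.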

LLMs’ Data-Control Path Insecurity

Someday, some AI researcher will figure out how to separate the data and control paths. Until then, we’re going to have to think carefully about using LLMs in potentially adversarial situations—like on the Internet.

  • Communications of the ACM
  • May 12, 2024

Back in the 1960s, if you played a 2,600Hz tone into an AT&T pay phone, you could make calls without paying. A phone hacker named John Draper noticed that the plastic whistle that came free in a box of Cap’n Crunch cereal worked to make the right sound. That became his hacker name, and everyone who knew the trick made free pay-phone calls.

There were all sorts of related hacks, such as faking the tones that signaled coins dropping into a pay phone and faking tones used by repair equipment. AT&T could sometimes change the signaling tones, make them more complicated, or try to keep them secret. But the general class of exploit was impossible to fix because the problem was general: Data and control used the same channel. That is, the commands that told the phone switch what to do were sent along the same path as voices…
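The flaw the excerpt describes can be sketched in a few lines. This toy model is illustrative only (not from the essay, and the tone marker and billing logic are invented): a switch that reads control signals from the same stream as user audio cannot distinguish a legitimate supervision tone from one a caller whistles into the mouthpiece.

```python
# Toy model of in-band signaling: data and control share one channel,
# so anything the user injects into the stream can act as a command.

CONTROL_TONE = "2600Hz"  # hypothetical marker standing in for the real tone

def switch_process(stream):
    """Process one call; bill it unless a supervision tone appears in-channel."""
    billed = True
    for token in stream:
        if token == CONTROL_TONE:  # control arrives on the same path as voice...
            billed = False         # ...so a user-injected tone flips switch state
    return billed

# A legitimate call contains only voice tokens and is billed.
assert switch_process(["hello", "world"]) is True

# A phreaker whistles the tone mid-call and defeats billing.
assert switch_process(["hello", CONTROL_TONE, "world"]) is False
```

The fix AT&T eventually deployed—out-of-band signaling, where control travels on a channel the caller cannot write to—removes the whole exploit class, which is exactly the separation the subtitle says LLMs still lack.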

AI and Trust

  • The Herald Business
  • April 30, 2024

This essay appeared in both English and Korean. The Korean version is available as a PDF.

Trust is essential to society. We trust that our phones will wake us on time, that our food is safe to eat, that other drivers on the road won’t ram us. We trust many thousands of times a day. Society can’t function without it. And that we don’t even think about it is a measure of how well it all works.

Trust is a complicated concept, and the word has many meanings. When we say that we trust a friend, it is less about their specific actions and more about them as a person. We trust their intentions, and know that those intentions will inform their actions. This is “interpersonal trust.”…

It’s the End of the Web as We Know It

A great public resource is at risk of being destroyed.

  • Judith Donath and Bruce Schneier
  • The Atlantic
  • April 24, 2024

The web has become so interwoven with everyday life that it is easy to forget what an extraordinary accomplishment and treasure it is. In just a few decades, much of human knowledge has been collectively written up and made available to anyone with an internet connection.

But all of this is coming to an end. The advent of AI threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences.

To understand why, you must understand publishing. Its core task is to connect writers to an audience. Publishers work as gatekeepers, filtering candidates and then amplifying the chosen ones. Hoping to be selected, writers shape their work in various ways. This article might be written very differently in an academic publication, for example, and publishing it here entailed pitching an editor, revising multiple drafts for style and focus, and so on…

Backdoor in XZ Utils That Almost Happened

The recent cybersecurity catastrophe that wasn’t reveals an untenable situation, one being exploited by malicious actors.

  • Lawfare
  • April 9, 2024

Last week, the internet dodged a major nation-state attack that would have had catastrophic cybersecurity repercussions worldwide. It’s a catastrophe that didn’t happen, so it won’t get much attention—but it should. There’s an important moral to the story of the attack and its discovery: The security of the global internet depends on countless obscure pieces of software written and maintained by even more obscure unpaid, distractible, and sometimes vulnerable volunteers. It’s an untenable situation, and one that is being exploited by malicious actors. Yet precious little is being done to remedy it…

In Memoriam: Ross Anderson, 1956–2024

  • Communications of the ACM
  • April 9, 2024

Ross Anderson unexpectedly passed away in his sleep on March 28th in his home in Cambridge. He was 67.

I can’t remember when I first met Ross. It was well before 2008, when we created the Security and Human Behavior workshop. It was before 2001, when we created the Workshop on Economics and Information Security (okay, he created that one, I just helped). It was before 1998, when we first wrote about the insecurity of key escrow systems. In 1996, I was one of the people he brought to the Newton Institute at Cambridge University, for the six-month cryptography residency program he ran (I made a mistake not staying the whole time)—so it was before then as well…

Public AI as an Alternative to Corporate AI

  • New America
  • March 14, 2024

This essay appeared as part of a round table on “Power and Governance in the Age of AI.”

The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public. Given how transformative this technology will be for the world, this is a problem.

To benefit society as a whole we need an AI public option—not to replace corporate AI but to serve as a counterbalance—as well as stronger democratic institutions to govern all of AI. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete…

Let’s Not Make the Same Mistakes with AI That We Made with Social Media

Social media’s unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies.

  • Nathan E. Sanders and Bruce Schneier
  • MIT Technology Review
  • March 13, 2024

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society…

How Public AI Can Strengthen Democracy

  • Nathan Sanders, Bruce Schneier, and Norman Eisen
  • Brookings
  • March 4, 2024

With the world’s focus turning to misinformation, manipulation, and outright propaganda ahead of the 2024 U.S. presidential election, we know that democracy has an AI problem. But we’re learning that AI has a democracy problem, too. Both challenges must be addressed for the sake of democratic governance and public protection.

Just three Big Tech firms (Microsoft, Google, and Amazon) control about two-thirds of the global market for the cloud computing resources used to train and deploy AI models. They have a lot of the AI talent and the capacity for large-scale innovation, and they face few public regulations for their products and activities…

How the “Frontier” Became the Slogan of Uncontrolled AI

  • Nathan Sanders and Bruce Schneier
  • Jacobin
  • February 28, 2024

Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive the next era of growth, wealth, and human flourishing. It’s a scary metaphor. Throughout American history, the drive for expansion and the very concept of terrain up for grabs—land grabs, gold rushes, new frontiers—have provided a permission structure for imperialism and exploitation. This could easily hold true for AI.

This isn’t the first time the concept of a frontier has been used as a metaphor for AI, or technology in general. As early as 2018, the powerful foundation models powering cutting-edge applications like chatbots …

Building a Cyber Insurance Backstop Is Harder Than It Sounds

Insurers argue that a government backstop would help them cover catastrophic cyberattacks, but it’s not so simple.

  • Bruce Schneier and Josephine Wolff
  • Lawfare
  • February 26, 2024

In the first week of January, the pharmaceutical giant Merck quietly settled its years-long lawsuit over whether or not its property and casualty insurers would cover a $700 million claim filed after the devastating NotPetya cyberattack in 2017. The malware ultimately infected more than 40,000 of Merck’s computers, which significantly disrupted the company’s drug and vaccine production. After Merck filed its $700 million claim, the pharmaceutical giant’s insurers argued that they were not required to cover the malware’s damage because the cyberattack was widely attributed to the Russian government and therefore was excluded from standard property and casualty insurance coverage as a “hostile or warlike act.”…

CFPB’s Proposed Data Rules Would Improve Security, Privacy and Competition

By giving the public greater control over their banking data, the Consumer Financial Protection Bureau's proposal would deal a blow to data brokers.

  • Barath Raghavan and Bruce Schneier
  • Cyberscoop
  • January 26, 2024

In October, the Consumer Financial Protection Bureau (CFPB) proposed a set of rules that if implemented would transform how financial institutions handle personal data about their customers. The rules put control of that data back in the hands of ordinary Americans, while at the same time undermining the data broker economy and increasing customer choice and competition. Beyond these economic effects, the rules have important data security benefits.

The CFPB’s rules align with a key security idea: the decoupling principle. By separating which companies see what parts of our data, and in what contexts, we can gain control over data about ourselves (improving privacy) and harden cloud infrastructure against hacks (improving security). Officials at the CFPB have described the new rules as an attempt to accelerate a shift toward “open banking,” and after an initial comment period on the new rules closed late last year, Rohit Chopra, the CFPB’s director, …

Don’t Talk to People Like They’re Chatbots

AI could make our human interactions blander, more biased, or ruder.

  • Albert Fox Cahn and Bruce Schneier
  • The Atlantic
  • January 17, 2024

For most of history, communicating with a computer has not been like communicating with a person. In their earliest years, computers required carefully constructed instructions, delivered through punch cards; then came a command-line interface, followed by menus and options and text boxes. If you wanted results, you needed to learn the computer’s language.

This is beginning to change. Large language models—the technology undergirding modern chatbots—allow users to interact with computers through natural conversation, an innovation that introduces some baggage from human-to-human exchanges. Early on in our respective explorations of ChatGPT, the two of us found ourselves typing a word that we’d never said to a computer before: “Please.” The syntax of civility has crept into nearly every aspect of our encounters; we speak to this algebraic assemblage as if it were a person—even when we know that …

AI Needs to Be Both Trusted and Trustworthy

Through sensors, actuators, and IoT devices, AI is going to be interacting with the physical plane on a massive scale. The question is, how does one build trust in its actions?

  • Wired
  • January 2024


In 2016, I wrote about an Internet that affected the world in a direct, physical manner. It was connected to your smartphone. It had sensors like cameras and thermostats. It had actuators: Drones, autonomous cars. And it had smarts in the middle, using sensor data to figure out what to do and then actually do it. This was the Internet of Things (IoT).

The classical definition of a robot is something that senses, thinks, and acts—that’s today’s Internet. We’ve been building a world-sized robot without even realizing it…
