Latest Essays


Nervous About ChatGPT? Try ChatGPT With a Hammer

Once generative AI can use real-world tools, it will become exponentially more capable. Companies and regulators need to get ahead of these rapidly evolving algorithms.

  • Bruce Schneier and Nathan Sanders
  • Wired
  • August 29, 2023

Last March, just two weeks after GPT-4 was released, researchers at Microsoft quietly announced a plan to compile millions of APIs—tools that can do everything from ordering a pizza to solving physics equations to controlling the TV in your living room—into a compendium that would be made accessible to large language models (LLMs). This was just one milestone in the race across industry and academia to find the best ways to teach LLMs how to manipulate tools, which would supercharge the potential of AI more than any of the impressive advancements we’ve seen to date…

Re-Imagining Democracy for the 21st Century, Possibly Without the Trappings of the 18th Century

  • The Conversation
  • August 7, 2023

This essay was also published by Chron, Phys.org, and UPI.

A Japanese translation is also available.

Imagine that we’ve all—all of us, all of society—landed on some alien planet, and we have to form a government: clean slate. We don’t have any legacy systems from the U.S. or any other country. We don’t have any special or unique interests to perturb our thinking.

How would we govern ourselves?

It’s unlikely that we would use the systems we have today. The modern representative democracy was the best form of government that mid-18th-century technology could conceive of. The 21st century is a different place scientifically, technically and socially…

Six Ways That AI Could Change Politics

A new era of AI-powered domestic politics may be coming. Watch for these milestones to know when it’s arrived.

  • Bruce Schneier and Nathan E. Sanders
  • MIT Technology Review
  • July 28, 2023

This essay also appeared in The Economic Times.

ChatGPT was released just nine months ago, and we are still learning how it will affect our daily lives, our careers, and even our systems of self-governance.

But when it comes to how AI may threaten our democracy, much of the public conversation lacks imagination. People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images. We’re on the lookout for foreign governments that spread misinformation because we were traumatized by the 2016 US presidential election. And we worry that AI-generated opinions will swamp the political preferences of real people because we’ve seen political “astroturfing”—the use of fake online accounts to give the illusion of support for a policy—grow for decades…

Can You Trust AI? Here’s Why You Shouldn’t

  • Bruce Schneier and Nathan Sanders
  • The Conversation
  • July 20, 2023

This essay also appeared in CapeTalk, CT Insider, The Daily Star, The Economic Times, ForeignAffairs.co.nz, Fortune, GayNrd, Homeland Security News Wire, Kiowa County Press, MinnPost, Tech Xplore, UPI, and Yahoo News.

If you ask Alexa, Amazon’s voice assistant AI system, whether Amazon is a monopoly, it responds by saying it doesn’t know. It doesn’t take much to make it lambaste the other tech giants, but it’s silent about its own corporate parent’s misdeeds.

When Alexa responds in this way, it’s obvious that it is putting its developer’s interests ahead of yours. Usually, though, it’s not so obvious whom an AI system is serving. To avoid being exploited by these systems, people will need to learn to approach AI skeptically. That means deliberately constructing the input you give it and thinking critically about its output…

AI Microdirectives Could Soon Be Used for Law Enforcement

And they’re terrifying.

  • Jonathon W. Penney and Bruce Schneier
  • Slate
  • July 17, 2023

Imagine a future in which AIs automatically interpret—and enforce—laws.

All day, every day, you receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.

Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow…

Will AI Hack Our Democracy?

  • Harvard Kennedy School Magazine
  • Summer 2023


Back in 2021, I wrote an essay titled “The Coming AI Hackers,” about how AI would hack our political, economic, and social systems. That ended up being a theme of my latest book, A Hacker’s Mind, and is something I have continued to think and write about.

I believe that AI will hack public policy in a way unlike anything that’s come before. It will change the speed, scale, scope, and sophistication of hacking, which in turn will change so many things that we can’t even imagine how it will all shake out. At a minimum, everything about public policy—how it is crafted, how it is implemented, what effects it has on individuals—will change in ways we cannot foresee…

Snowden Ten Years Later

  • RFC 9446
  • July 2023

In 2013 and 2014, I wrote extensively about new revelations regarding NSA surveillance based on the documents provided by Edward Snowden. But I had a more personal involvement as well.

I wrote the essay below in September 2013. The New Yorker agreed to publish it, but The Guardian asked me not to. It was scared of UK law enforcement and worried that this essay would reflect badly on it. And given that the UK police would raid its offices in July 2014, it had legitimate cause to be worried.

Now, ten years later, I offer this as a time capsule of what those early months of Snowden were like…

Artificial Intelligence Can’t Work Without Our Data

We should all be paid for it.

  • Barath Raghavan and Bruce Schneier
  • Politico
  • June 29, 2023

For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI.

Everyone is talking about these new AI technologies—like ChatGPT—and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge and consent, without licensing it, and are pocketing the proceeds…

AI Could Shore Up Democracy—Here’s One Way

  • Bruce Schneier and Nathan Sanders
  • The Conversation
  • June 20, 2023

This essay also appeared in ArcaMax, Big News Network, Biloxi Local News & Events, Chicago Sun-Times, Fast Company, GCN, Government Technology, Inkl, Macau Daily Times, MENAFN, Nextgov, and Yahoo.

It’s become fashionable to think of artificial intelligence as an inherently dehumanizing technology, a ruthless force of automation that has unleashed legions of virtual skilled laborers in faceless form. But what if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most?…

Build AI by the People, for the People

Washington needs to take AI investment out of the hands of private companies.

  • Bruce Schneier and Nathan E. Sanders
  • Foreign Policy
  • June 12, 2023

Artificial intelligence will bring great benefits to all of humanity. But do we really want to entrust this revolutionary technology solely to a small group of U.S. tech companies?

Silicon Valley has produced no small number of moral disappointments. Google retired its “don’t be evil” pledge before firing its star ethicist. Self-proclaimed “free speech absolutist” Elon Musk bought Twitter in order to censor political speech, retaliate against journalists, and ease access to the platform for Russian and Chinese propagandists. Facebook lied about how it enabled Russian interference in the 2016 U.S. presidential election and …
