Essays in the Category "AI and Large Language Models"

Ten Ways AI Will Change Democracy

In a new essay, Harvard Kennedy School’s Bruce Schneier goes beyond AI-generated disinformation to detail other novel ways in which AI might alter how democracy functions.

  • Harvard Kennedy School Ash Center
  • November 6, 2023

Artificial intelligence will change so many aspects of society, largely in ways that we cannot conceive of yet. Democracy, and the systems of governance that surround it, will be no exception. In this short essay, I want to move beyond the “AI-generated disinformation” trope and speculate on some of the ways AI will change how democracy functions—in both large and small ways.

When I survey how artificial intelligence might upend different aspects of modern society, democracy included, I look at four different dimensions of change: speed, scale, scope, and sophistication. Look for places where changes in degree result in changes of kind. Those are where the societal upheavals will happen…

Trustworthy AI Means Public AI

  • IEEE Security & Privacy
  • November-December 2023

Back in 1998, Sergey Brin and Larry Page introduced the Google search engine in an academic paper that questioned the ad-based business model of the time. They wrote: “We believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.” Although they didn’t use the word, their argument was that a search engine that could be paid to return particular URLs is fundamentally less trustworthy. “Advertising income often provides an incentive to provide poor quality search results.”…

Who’s Accountable for AI Usage in Digital Campaign Ads? Right Now, No One.

In a new essay, Bruce Schneier and Nathan Sanders argue that AI is poised to dramatically ramp up digital campaigns and outline how accountability measures across platforms, government, and the media can curb risks.

  • Bruce Schneier and Nathan Sanders
  • Harvard Kennedy School Ash Center
  • October 11, 2023

If an AI breaks the rules for you, does that count as breaking the rules? This is the essential question being taken up by the Federal Election Commission this month, and public input is needed to curtail the potential for AI to take US campaigns (even more) off the rails.

At issue is whether candidates’ use of AI to create deepfaked media for political advertisements should be considered fraud or legitimate electioneering. That is, is it allowable to use AI image generators to create photorealistic images depicting Trump hugging Anthony Fauci? And is it allowable to use …

AI Disinformation Is a Threat to Elections—Learning to Spot Russian, Chinese and Iranian Meddling in Other Countries Can Help the US Prepare for 2024

  • The Conversation
  • September 29, 2023

This essay also appeared in Defense One, Fortune, and Scientific American.

Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence.

Countries trying to influence each other’s elections entered a new era in 2016, when the Russians launched a series of social media disinformation campaigns targeting the U.S. presidential election. Over the next seven years, a number of countries—most prominently China and Iran—used social media to influence foreign elections, both in the U.S. and elsewhere in the world. There’s no reason to expect 2023 and 2024 to be any different…

The A.I. Wars Have Three Factions, and They All Crave Power

  • Bruce Schneier and Nathan Sanders
  • The New York Times
  • September 28, 2023

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in A.I. technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns…

Robots Are Already Killing People

The AI boom only underscores a problem that has existed for years.

  • Bruce Schneier and Davi Ottenheimer
  • The Atlantic
  • September 6, 2023

The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so 25-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar …

Nervous About ChatGPT? Try ChatGPT With a Hammer

Once generative AI can use real-world tools, it will become exponentially more capable. Companies and regulators need to get ahead of these rapidly evolving algorithms.

  • Bruce Schneier and Nathan Sanders
  • Wired
  • August 29, 2023

Last March, just two weeks after GPT-4 was released, researchers at Microsoft quietly announced a plan to compile millions of APIs—tools that can do everything from ordering a pizza to solving physics equations to controlling the TV in your living room—into a compendium that would be made accessible to large language models (LLMs). This was just one milestone in the race across industry and academia to find the best ways to teach LLMs how to manipulate tools, which would supercharge the potential of AI more than any of the impressive advancements we’ve seen to date…
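
To make the tool-use idea concrete, here is a minimal, purely illustrative Python sketch of the pattern (none of it drawn from the essay or from Microsoft’s compendium): a harness exposes a registry of callable tools, a stand-in model emits a structured call naming a tool and its arguments, and the harness executes the call on the model’s behalf. The tool names, the JSON call format, and the stubbed model are all assumptions for illustration.

    import json

    # Hypothetical tool registry: tool name -> callable.
    TOOLS = {
        "add": lambda a, b: a + b,
        "tv_power": lambda on: f"TV switched {'on' if on else 'off'}",
    }

    def toy_model(prompt):
        # Stand-in for a real LLM: always "decides" to call the add tool.
        # A real model would choose a tool and arguments from the prompt.
        return json.dumps({"tool": "add", "args": {"a": 2, "b": 3}})

    def dispatch(prompt):
        # Parse the model's structured reply, look up the named tool,
        # and execute it with the model-chosen arguments.
        call = json.loads(toy_model(prompt))
        fn = TOOLS[call["tool"]]
        return fn(**call["args"])

    print(dispatch("What is 2 + 3?"))  # -> 5

The point of the pattern is that the model never touches the outside world directly; everything flows through the harness, which is the layer where the companies and regulators the dek mentions would have to place any guardrails.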

Six Ways That AI Could Change Politics

A new era of AI-powered domestic politics may be coming. Watch for these milestones to know when it’s arrived.

  • Bruce Schneier and Nathan E. Sanders
  • MIT Technology Review
  • July 28, 2023

This essay also appeared in The Economic Times.

ChatGPT was released just nine months ago, and we are still learning how it will affect our daily lives, our careers, and even our systems of self-governance.

But when it comes to how AI may threaten our democracy, much of the public conversation lacks imagination. People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images. We’re on the lookout for foreign governments that spread misinformation because we were traumatized by the 2016 US presidential election. And we worry that AI-generated opinions will swamp the political preferences of real people because we’ve seen political “astroturfing”—the use of fake online accounts to give the illusion of support for a policy—grow for decades…

Can You Trust AI? Here’s Why You Shouldn’t

  • Bruce Schneier and Nathan Sanders
  • The Conversation
  • July 20, 2023

This essay also appeared in CapeTalk, CT Insider, The Daily Star, The Economic Times, ForeignAffairs.co.nz, Fortune, GayNrd, Homeland Security News Wire, Kiowa County Press, MinnPost, Tech Xplore, UPI, and Yahoo News.

If you ask Alexa, Amazon’s voice assistant AI system, whether Amazon is a monopoly, it responds by saying it doesn’t know. It doesn’t take much to make it lambaste the other tech giants, but it’s silent about its own corporate parent’s misdeeds.

When Alexa responds in this way, it’s obvious that it is putting its developer’s interests ahead of yours. Usually, though, it’s not so obvious whom an AI system is serving. To avoid being exploited by these systems, people will need to learn to approach AI skeptically. That means deliberately constructing the input you give it and thinking critically about its output…

AI Microdirectives Could Soon Be Used for Law Enforcement

And they’re terrifying.

  • Jonathon W. Penney and Bruce Schneier
  • Slate
  • July 17, 2023

Imagine a future in which AIs automatically interpret—and enforce—laws.

All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.

Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow…
