Essays in the Category "Trust"

AI and Trust

  • Harvard Kennedy School Belfer Center
  • December 1, 2023

German translation

I trusted a lot today. I trusted my phone to wake me on time. I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car on the way. At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating. And the pilot of the plane I flew on. And thousands of other people at the airport and on the plane, any of whom could have attacked me. And all the people who prepared and served my breakfast, and the entire food supply chain—any of them could have poisoned me. When I landed here, I trusted thousands more people: at the airport, on the road, in this building, in this room. And that was all before 10:30 this morning…

Trustworthy AI Means Public AI

  • IEEE Security & Privacy
  • November/December 2023

View or Download in PDF Format

Back in 1998, Sergey Brin and Larry Page introduced the Google search engine in an academic paper that questioned the ad-based business model of the time. They wrote: “We believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.” Although they didn’t use the word, their argument was that a search engine that could be paid to return particular URLs is fundamentally less trustworthy. “Advertising income often provides an incentive to provide poor quality search results.”…

Can You Trust AI? Here’s Why You Shouldn’t

  • Bruce Schneier and Nathan Sanders
  • The Conversation
  • July 20, 2023

This essay also appeared in CapeTalk, CT Insider, The Daily Star, The Economic Times, ForeignAffairs.co.nz, Fortune, GayNrd, Homeland Security News Wire, Kiowa County Press, MinnPost, Tech Xplore, UPI, and Yahoo News.

If you ask Alexa, Amazon’s voice assistant AI system, whether Amazon is a monopoly, it responds by saying it doesn’t know. It doesn’t take much to make it lambaste the other tech giants, but it’s silent about its own corporate parent’s misdeeds.

When Alexa responds in this way, it’s obvious that it is putting its developer’s interests ahead of yours. Usually, though, it’s not so obvious whom an AI system is serving. To avoid being exploited by these systems, people will need to learn to approach AI skeptically. That means deliberately constructing the input you give it and thinking critically about its output…

Can We Build Trustworthy AI?

AI isn't transparent, so we should all be preparing for a world where AI is not trustworthy, write two Harvard researchers.

  • Nathan Sanders and Bruce Schneier
  • Gizmodo
  • May 4, 2023

We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.

Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?…

There’s No Good Reason to Trust Blockchain Technology

  • Bruce Schneier
  • Wired
  • February 6, 2019

Italian translation

In his 2008 white paper that first proposed bitcoin, the anonymous Satoshi Nakamoto concluded with: “We have proposed a system for electronic transactions without relying on trust.” He was referring to blockchain, the system behind bitcoin cryptocurrency. The circumvention of trust is a great promise, but it’s just not true. Yes, bitcoin eliminates certain trusted intermediaries that are inherent in other payment systems like credit cards. But you still have to trust bitcoin—and everything about it.

Much has been written about …

The Risks—and Benefits—of Letting Algorithms Judge Us

  • Bruce Schneier
  • CNN
  • January 6, 2016

China is considering a new “social credit” system, designed to rate everyone’s trustworthiness. Many fear that it will become a tool of social control—but in reality it has a lot in common with the algorithms and systems that score and classify us all every day.

Human judgment is being replaced by automatic algorithms, and that brings with it both enormous benefits and risks. The technology is enabling a new form of social control, sometimes deliberately and sometimes as a side effect. And as the Internet of Things ushers in an era of more sensors and more data—and more algorithms—we need to ensure that we reap the benefits while avoiding the harms…

The Automation of Reputation

  • Bruce Schneier
  • Edge
  • November 5, 2015

This essay is part of a conversation with Gloria Origgi entitled “What is Reputation?” Other participants were Abbas Raza, William Poundstone, Hugo Mercier, Quentin Hardy, Martin Nowak and Roger Highfield, and Kai Krause.

Reputation is a social mechanism by which we come to trust one another, in all aspects of our society. I see it as a security mechanism. The promise and threat of a change in reputation entices us all to be trustworthy, which in turn enables others to trust us. In a very real sense, reputation enables friendships, commerce, and everything else we do in society. It’s old, older than our species, and we are finely tuned to both perceive and remember reputation information, and broadcast it to others…

VW Scandal Could Just Be the Beginning

  • Bruce Schneier
  • CNN
  • September 28, 2015

Portuguese translation by Ricardo R Hashimoto

For the past six years, Volkswagen has been cheating on the emissions testing for its diesel cars. The cars’ computers were able to detect when they were being tested, and temporarily alter how their engines worked so the cars looked much cleaner than they actually were. When they weren’t being tested, they belched out 40 times the legal limit of pollutants. Its CEO has resigned, and the company will face an expensive recall, enormous fines and worse.

Cheating on regulatory testing has a long history in corporate America. It …
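
To make the mechanism described in the excerpt above concrete, here is a minimal, hypothetical sketch of defeat-device-style logic, written in Python. The sensor names, heuristic, and thresholds are invented for illustration; they are not drawn from the actual Volkswagen firmware or from the essay.

    # Hypothetical illustration of a "defeat device": the controller guesses that
    # it is on a laboratory test cycle from its sensor readings and only then runs
    # the engine in its cleanest (and least powerful) mode. All names and numbers
    # here are illustrative assumptions, not Volkswagen's actual logic.
    from dataclasses import dataclass

    @dataclass
    class SensorReadings:
        steering_angle_deg: float  # dynamometer tests involve almost no steering
        speed_kmh: float

    def looks_like_emissions_test(s: SensorReadings) -> bool:
        """Crude heuristic: a steady, scripted drive with no steering input."""
        return abs(s.steering_angle_deg) < 1.0 and 0 <= s.speed_kmh <= 120

    def engine_mode(s: SensorReadings) -> str:
        # Full emissions controls only when the car believes it is being tested;
        # otherwise favor power and fuel economy over pollution control.
        return "low_emissions" if looks_like_emissions_test(s) else "normal"

    if __name__ == "__main__":
        lab = SensorReadings(steering_angle_deg=0.2, speed_kmh=50.0)
        road = SensorReadings(steering_angle_deg=14.0, speed_kmh=90.0)
        print(engine_mode(lab))   # -> low_emissions
        print(engine_mode(road))  # -> normal

The point is not the specific heuristic but the structure: a single conditional separates how the car behaves under observation from how it behaves in ordinary use.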

The Only Way to Restore Trust in the NSA

  • Bruce Schneier
  • The Atlantic
  • September 4, 2013

I’ve recently seen two articles speculating on the NSA’s capability, and practice, of spying on members of Congress and other elected officials. The evidence is all circumstantial and smacks of conspiracy thinking—and I have no idea whether any of it is true or not—but it’s a good illustration of what happens when trust in a public institution fails.

The NSA has repeatedly lied about the extent of its spying program. James R. Clapper, the director of national intelligence, has lied about it to Congress. Top-secret documents provided by Edward Snowden, and reported on by the …

Trust in Man/Machine Security Systems

  • Bruce Schneier
  • IEEE Security & Privacy
  • September/October 2013

View or Download in PDF Format

I jacked a visitor’s badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they’re enabled when you check in at building security. You’re supposed to wear the badge on a chain around your neck at all times and drop it through a slot when you leave.

I kept the badge. I used my body as a shield, and the chain made a satisfying noise when it hit bottom. The guard let me through the gate.

The person after me had problems, though. Some part of the system knew something was wrong, and wouldn’t let her out. Eventually, the guard had to manually override something…
