AI and Trust

This essay appeared in both English and Korean. The Korean version is available as a PDF.

Trust is essential to society. We trust that our phones will wake us on time, that our food is safe to eat, that other drivers on the road won’t ram us. We trust many thousands of times a day. Society can’t function without it. And that we don’t even think about it is a measure of how well it all works.

Trust is a complicated concept, and the word has many meanings. When we say that we trust a friend, it is less about their specific actions and more about them as a person. We trust their intentions, and know that those intentions will inform their actions. This is “interpersonal trust.”

There’s also a less intimate, less personal trust. We might not know someone individually, but we can trust their behavior―not because of their innate morals, but because of the laws they live under and the security technologies that constrain their behavior. It’s about reliability and predictability. This is “social trust.” It scales better than interpersonal trust, and allows for larger and more complex societies. It enables cooperation amongst strangers.

And scale is vital. In today’s society we regularly trust―or not―governments, corporations, organizations, groups. I don’t know the pilot that flew my airplane, but I trust the airline that puts well-trained and well-rested pilots in cockpits on schedule. I don’t know the cooks and waitstaff at a restaurant, but I trust the health codes they work under.

Because of how large and complex society has become, we’ve replaced many of the rituals and behaviors of interpersonal trust with security mechanisms that enforce reliability and predictability―social trust. But because we use the same word for both, we confuse them. We do this all the time, especially with corporations.

We might think of them as friends, when they are actually services. Corporations are not capable of having that kind of relationship.

We’re about to make the same error with AI. We’re going to think of them as our friends when they’re not. And near-term AIs will be controlled by corporations, which will use them towards their profit-maximizing goals. At best, they’ll be useful services. More likely, they’ll spy on us and try to manipulate us.

This is how the Internet already works. Companies spy on us as we use their products and services. Data brokers buy surveillance data from the smaller companies, and assemble dossiers on us. Then they sell that information back to those and other companies, who use it to manipulate our behavior to serve their interests.

We use Internet services as if they are our agents, working on our behalf. In fact, they are double agents, also secretly working for their corporate owners.

It’s going to be no different with AI. And the result will be much worse, for two reasons.

First, AI systems will be more relational. We will converse with them using natural language, so we’ll naturally ascribe human characteristics to them―which will make it easier for these double agents to do their work. Did your chatbot recommend a particular airline because it’s the best deal, or because the AI company got a kickback? When you asked it to explain a political issue, did it bias that explanation towards the company’s interests, or towards those that a political party paid it to promote?

Second, these AIs will be more intimate. One of the promises of generative AI is a personal digital assistant, acting as your advocate with others, and as a butler with you. You’ll want it with you 24/7, learning from everything you do, so it can most effectively work on your behalf. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid―or at least like an animal. It will interact with the whole of your existence, just like another person would. It will use your mannerisms and cultural references. It will have a convincing voice and a confident tone. Its personality will be optimized for you. You will default to thinking of it as a friend. You will forget how powerful the corporation behind the AI is, because you will be fixated on the person you think the AI is.

That’s why we need trustworthy AI: AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else.

The market won’t provide this on its own, any more than it provides the health and safety codes that let you board a plane, eat at a restaurant, or buy a pharmaceutical without worry.

We need AI transparency laws. We need laws regulating AI―and robotic―safety. We need laws that enforce the trustworthiness of AI. That means the ability to recognize when those laws are being broken, and penalties sufficiently large to incent trustworthy behavior. These laws should place restrictions not on the AI itself, but on the people and corporations that build and control them. Otherwise, the regulations would be making the same category error I’ve been talking about.

Because of the intimate nature of AI personal assistants, they’ll need more than just regulation to ensure trustworthiness. These assistants will be trusted agents that need extraordinary access to our information to do their jobs, much like a doctor, lawyer, or accountant. Like those professionals, the assistant―or rather, the company that controls it―should have a legal responsibility to act in our best interest: a fiduciary responsibility.

And we need public AI models. These are systems built by the public for the public. This means openness and transparency paired with a responsiveness to public demands. They should also be available for anyone to run and build on top of―providing the foundation for a free market in AI innovations. This would be a counterbalance to corporate-owned AI.

We can never make AI into our friends. But we can make them into trustworthy services: agents and not double agents.

That’s how we can create the social trust that society needs to thrive.
