Entries Tagged "Liars and Outliers"


AI and Trust

I trusted a lot today. I trusted my phone to wake me on time. I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car on the way. At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating. And the pilot of the plane I flew in. And thousands of other people at the airport and on the plane, any of whom could have attacked me. And all the people who prepared and served my breakfast, and the entire food supply chain—any of them could have poisoned me. When I landed here, I trusted thousands more people: at the airport, on the road, in this building, in this room. And that was all before 10:30 this morning.

Trust is essential to society. Humans as a species are trusting. We are all sitting here, mostly strangers, confident that nobody will attack us. If we were a roomful of chimpanzees, this would be impossible. We trust many thousands of times a day. Society can’t function without it. And that we don’t even think about it is a measure of how well it all works.

In this talk, I am going to make several arguments. One, that there are two different kinds of trust—interpersonal trust and social trust—and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.

Okay, so let’s back up and take that all a lot slower. Trust is a complicated concept, and the word is overloaded with many meanings. There’s personal and intimate trust. When we say that we trust a friend, it is less about their specific actions and more about them as a person. It’s a general reliance that they will behave in a trustworthy manner. We trust their intentions, and know that those intentions will inform their actions. Let’s call this “interpersonal trust.”

There’s also the less intimate, less personal trust. We might not know someone personally, or know their motivations—but we can trust their behavior. We don’t know whether or not someone wants to steal, but maybe we can trust that they won’t. It’s really more about reliability and predictability. We’ll call this “social trust.” It’s the ability to trust strangers.

Interpersonal trust and social trust are both essential in society today. This is how it works. We have mechanisms that induce people to behave in a trustworthy manner, both interpersonally and socially. This, in turn, allows others to be trusting. Which enables trust in society. And that keeps society functioning. The system isn’t perfect—there are always going to be untrustworthy people—but most of us being trustworthy most of the time is good enough.

I wrote about this in 2012 in a book called Liars and Outliers. I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under, and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.

What I didn’t appreciate is how different the first and last two are. Morals and reputation are person to person, based on human connection, mutual vulnerability, respect, integrity, generosity, and a lot of other things besides. These underpin interpersonal trust. Laws and security technologies are systems of trust that force us to act trustworthy. And they’re the basis of social trust.

Taxi driver used to be one of the country’s most dangerous professions. Uber changed that. I don’t know my Uber driver, but the rules and the technology let us both be confident that neither of us will cheat or attack the other. We are both under constant surveillance and are competing for star rankings.

Lots of people write about the difference between living in a high-trust and a low-trust society. How reliability and predictability make everything easier. And what is lost when society doesn’t have those characteristics. Also, how societies move from high-trust to low-trust and vice versa. This is all about social trust.

That literature is important, but for this talk the critical point is that social trust scales better. You used to need a personal relationship with a banker to get a loan. Now it’s all done algorithmically, and you have many more options to choose from.

Social trust scales better, but embeds all sorts of bias and prejudice. That’s because, in order to scale, social trust has to be structured, system- and rule-oriented, and that’s where the bias gets embedded. And the system has to be mostly blinded to context, which removes flexibility.

But that scale is vital. In today’s society we regularly trust—or not—governments, corporations, brands, organizations, groups. It’s not so much that I trusted the particular pilot who flew my airplane, but instead the airline that puts well-trained and well-rested pilots in cockpits on schedule. I don’t trust the cooks and waitstaff at a restaurant, but the system of health codes they work under. I can’t even describe the banking system I trusted when I used an ATM this morning. Again, this confidence is no more than reliability and predictability.

Think of that restaurant again. Imagine that it’s a fast food restaurant, employing teenagers. The food is almost certainly safe—probably safer than in high-end restaurants—because of the corporate systems of reliability and predictability that guide their every behavior.

That’s the difference. You can ask a friend to deliver a package across town. Or you can pay the Post Office to do the same thing. The former is interpersonal trust, based on morals and reputation. You know your friend and how reliable they are. The second is a service, made possible by social trust. And to the extent that it is a reliable and predictable service, it’s primarily based on laws and technologies. Both can get your package delivered, but only the second can become the global package delivery system that is FedEx.

Because of how large and complex society has become, we have replaced many of the rituals and behaviors of interpersonal trust with security mechanisms that enforce reliability and predictability—social trust.

But because we use the same word for both, we regularly confuse them. And when we do that, we are making a category error.

And we do it all the time. With governments. With organizations. With systems of all kinds. And especially with corporations.

We might think of them as friends, when they are actually services. Corporations are not moral; they are precisely as immoral as the law and their reputations let them get away with.

So corporations regularly take advantage of their customers, mistreat their workers, pollute the environment, and lobby for changes in law so they can do even more of these things.

Both language and the law make this category error easy to fall into. We use the same grammar for people and corporations. We imagine that we have personal relationships with brands. We give corporations some of the same rights as people.

Corporations like that we make this category error—see, I just made it myself—because they profit when we think of them as friends. They use mascots and spokesmodels. They have social media accounts with personalities. They refer to themselves like they are people.

But they are not our friends. Corporations are not capable of having that kind of relationship.

We are about to make the same category error with AI. We’re going to think of them as our friends when they’re not.

A lot has been written about AIs as existential risk. The worry is that they will have a goal, and they will work to achieve it even if it harms humans in the process. You may have read about the “paperclip maximizer”: an AI that has been programmed to make as many paper clips as possible, and ends up destroying the earth to achieve those ends. It’s a weird fear. Science fiction author Ted Chiang writes about it. Instead of solving all of humanity’s problems, or wandering off proving mathematical theorems that no one understands, the AI single-mindedly pursues the goal of maximizing production. Chiang’s point is that this is every corporation’s business plan. And that our fears of AI are basically fears of capitalism. Science fiction writer Charlie Stross takes this one step further, and calls corporations “slow AI.” They are profit-maximizing machines. And the most successful ones do whatever they can to achieve that singular goal.

And near-term AIs will be controlled by corporations. Which will use them towards that profit-maximizing goal. They won’t be our friends. At best, they’ll be useful services. More likely, they’ll spy on us and try to manipulate us.

This is nothing new. Surveillance is the business model of the Internet. Manipulation is the other business model of the Internet.

Your Google search results lead with URLs that someone paid to show to you. Your Facebook and Instagram feeds are filled with sponsored posts. Amazon searches return pages of products whose sellers paid for placement.

This is how the Internet works. Companies spy on us as we use their products and services. Data brokers buy that surveillance data from the smaller companies, and assemble detailed dossiers on us. Then they sell that information back to those and other companies, who combine it with data they collect in order to manipulate our behavior to serve their interests. At the expense of our own.

We use all of these services as if they are our agents, working on our behalf. In fact, they are double agents, also secretly working for their corporate owners. We trust them, but they are not trustworthy. They’re not friends; they’re services.

It’s going to be no different with AI. And the result will be much worse, for two reasons.

The first is that these AI systems will be more relational. We will be conversing with them, using natural language. As such, we will naturally ascribe human-like characteristics to them.

This relational nature will make it easier for those double agents to do their work. Did your chatbot recommend a particular airline or hotel because it’s truly the best deal, given your particular set of needs? Or because the AI company got a kickback from those providers? When you asked it to explain a political issue, did it bias that explanation towards the company’s position? Or towards the position of whichever political party gave it the most money? The conversational interface will help hide their agenda.

The second reason to be concerned is that these AIs will be more intimate. One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.

And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.

You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid—or at least like an animal. It will interact with the whole of your existence, just like another person would.

The natural language interface is critical here. We are primed to think of others who speak our language as people. And we sometimes have trouble thinking of others who speak a different language that way. We make that category error with obvious non-people, like cartoon characters. We will naturally have a “theory of mind” about any AI we talk with.

More specifically, we tend to assume that something’s implementation is the same as its interface. That is, we assume that things are the same on the inside as they are on the surface. Humans are like that: we’re people through and through. A government is systemic and bureaucratic on the inside. You’re not going to mistake it for a person when you interact with it. But this is the category error we make with corporations. We sometimes mistake the organization for its spokesperson. AI has a fully relational interface—it talks like a person—but it has an equally fully systemic implementation. Like a corporation, but much more so. The implementation and interface are more divergent than anything we have encountered to date—by a lot.

And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you like and respond to.

It will act trustworthy, but it will not be trustworthy. We won’t know how it was trained. We won’t know its secret instructions. We won’t know its biases, either accidental or deliberate.

We do know that these AIs are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.

It’s no accident that these corporate AIs have a human-like interface. There’s nothing inevitable about that. It’s a design choice. They could be designed to be less personal, less human-like, more obviously services—like a search engine. The companies behind those AIs want you to make the friend/service category error, and they will exploit your mistaking their AI for a friend. And you might not have any choice but to use it.

There is something we haven’t discussed when it comes to trust: power. Sometimes we have no choice but to trust someone or something because they are powerful. We are forced to trust the local police, because they’re the only law enforcement authority in town. We are forced to trust some corporations, because there aren’t viable alternatives. To be more precise, we have no choice but to entrust ourselves to them. We will be in this same position with AI. We will have no choice but to entrust ourselves to their decision-making.

The friend/service confusion will help mask this power differential. We will forget how powerful the corporation behind the AI is, because we will be fixated on the person we think the AI is.

So far, we have been talking about one particular failure that results from overly trusting AI. We can call it something like “hidden exploitation.” There are others. There’s outright fraud, where the AI is actually trying to steal stuff from you. There’s the more prosaic mistaken expertise, where you think the AI is more knowledgeable than it is because it acts confidently. There’s incompetency, where you believe that the AI can do something it can’t. There’s inconsistency, where you mistakenly expect the AI to be able to repeat its behaviors. And there’s illegality, where you mistakenly trust the AI to obey the law. There are probably more ways trusting an AI can fail.

All of this is a long-winded way of saying that we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. AI that won’t secretly betray your trust to someone else.

The market will not provide this on its own. Corporations are profit maximizers, at the expense of society. And the incentives of surveillance capitalism are just too much to resist.

It’s government that provides the underlying mechanisms for the social trust essential to society. Think about contract law. Or laws about property, or laws protecting your personal safety. Or any of the health and safety codes that let you board a plane, eat at a restaurant, or buy a pharmaceutical without worry.

The more you can trust that your societal interactions are reliable and predictable, the more you can ignore their details. Places where governments don’t provide these things are not good places to live.

Government can do this with AI. We need AI transparency laws. When it is used. How it is trained. What biases and tendencies it has. We need laws regulating AI—and robotic—safety. When it is permitted to affect the world. We need laws that enforce the trustworthiness of AI. Which means the ability to recognize when those laws are being broken. And penalties sufficiently large to incent trustworthy behavior.

Many countries are contemplating AI safety and security laws—the EU is the furthest along—but I think they are making a critical mistake. They try to regulate the AIs and not the humans behind them.

AIs are not people; they don’t have agency. They are built by, trained by, and controlled by people. Mostly for-profit corporations. Any AI regulations should place restrictions on those people and corporations. Otherwise the regulations are making the same category error I’ve been talking about. At the end of the day, there is always a human responsible for whatever the AI’s behavior is. And it’s the human who needs to be responsible for what they do—and what their companies do. Regardless of whether it was due to humans, or AI, or a combination of both. Maybe that won’t be true forever, but it will be true in the near future. If we want trustworthy AI, we need to require trustworthy AI controllers.

We already have a system for this: fiduciaries. There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.

We need the same sort of thing for our data. The idea of a data fiduciary is not new. But it’s even more vital in a world of generative AI assistants.

And we need one final thing: public AI models. These are systems built by academia, or non-profit groups, or government itself, that can be owned and run by individuals.

The term “public model” has been thrown around a lot in the AI world, so it’s worth detailing what this means. It’s not a corporate AI model that the public is free to use. It’s not a corporate AI model that the government has licensed. It’s not even an open-source model that the public is free to examine and modify.

A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access. And a foundation for a free market in AI innovations. This would be a counter-balance to corporate-owned AI.

We can never make AI into our friends. But we can make them into trustworthy services—agents and not double agents. But only if government mandates it. We can put limits on surveillance capitalism. But only if government mandates it.

Because the point of government is to create social trust. I started this talk by explaining the importance of trust in society, and how interpersonal trust doesn’t scale to larger groups. That other, impersonal kind of trust—social trust, reliability and predictability—is what governments create.

To the extent a government improves the overall trust in society, it succeeds. And to the extent a government doesn’t, it fails.

But governments have to. We need government to constrain the behavior of corporations and the AIs they build, deploy, and control. Government needs to enforce both predictability and reliability.

That’s how we can create the social trust that society needs to thrive.

This essay previously appeared on the Harvard Kennedy School Belfer Center’s website.

EDITED TO ADD: This essay has been translated into German.

Posted on December 4, 2023 at 7:05 AM

Cybercrime as a Tax on the Internet Economy

I was reading this 2014 McAfee report on the economic impact of cybercrime, and came across this interesting quote on how security is a tax on the Internet economy:

Another way to look at the opportunity cost of cybercrime is to see it as a share of the Internet economy. Studies estimate that the Internet economy annually generates between $2 trillion and $3 trillion, a share of the global economy that is expected to grow rapidly. If our estimates are right, cybercrime extracts between 15% and 20% of the value created by the Internet, a heavy tax on the potential for economic growth and job creation and a share of revenue that is significantly larger than any other transnational criminal activity.

Of course you can argue with the numbers, and there’s good reason to believe that the actual costs of cybercrime are much lower. And, of course, those costs are largely indirect. It’s not that cybercriminals are getting away with all that value; much of it is spent on security products and services from companies like McAfee (and my own IBM Security).
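Even taking the quote at face value, it’s worth making the arithmetic concrete. Here’s a minimal sketch in Python, using only the report’s own numbers:

```python
# Bounds implied by the McAfee report's estimates (its figures, not mine).
internet_economy = (2e12, 3e12)  # annual value created: $2-3 trillion
cybercrime_share = (0.15, 0.20)  # claimed "tax" rate: 15-20%

low = internet_economy[0] * cybercrime_share[0]   # most conservative pairing
high = internet_economy[1] * cybercrime_share[1]  # most aggressive pairing

print(f"Implied annual cost: ${low / 1e9:.0f} billion to ${high / 1e9:.0f} billion")
# Implied annual cost: $300 billion to $600 billion
```

A tax of $300 to $600 billion a year is an enormous claim, which is all the more reason to be skeptical of the underlying estimates.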

In Liars and Outliers I talk about security as a tax on the honest.

Posted on September 1, 2016 at 9:49 AM

Prisoner's Dilemma Experiment Illustrates Four Basic Phenotypes

If you’ve read my book Liars and Outliers, you know I like the prisoner’s dilemma as a way to think about trust and security. There is an enormous amount of research—both theoretical and experimental—about the dilemma, which is why I found this new research so interesting. Here’s a decent summary:

The question is not just how people play these games—there are hundreds of research papers on that—but instead whether people fall into behavioral types that explain their behavior across different games. Using standard statistical methods, the researchers identified four such player types: optimists (20 percent), who always go for the highest payoff, hoping the other player will coordinate to achieve that goal; pessimists (30 percent), who act according to the opposite assumption; the envious (21 percent), who try to score more points than their partners; and the trustful (17 percent), who always cooperate. The remaining 12 percent appeared to make their choices completely at random.

Posted on August 18, 2016 at 5:36 AM

How Altruism Might Have Evolved

I spend a lot of time in my book Liars and Outliers on cooperating versus defecting. Cooperating is good for the group at the expense of the individual. Defecting is good for the individual at the expense of the group. Given that evolution concerns individuals, there has been a lot of controversy over how altruism might have evolved.

Here’s one possible answer: it’s favored by chance:

The key insight is that the total size of population that can be supported depends on the proportion of cooperators: more cooperation means more food for all and a larger population. If, due to chance, there is a random increase in the number of cheats then there is not enough food to go around and total population size will decrease. Conversely, a random decrease in the number of cheats will allow the population to grow to a larger size, disproportionally benefitting the cooperators. In this way, the cooperators are favoured by chance, and are more likely to win in the long term.

Dr George Constable, soon to join the University of Bath from Princeton, uses the analogy of flipping a coin, where heads wins £20 but tails loses £10:

“Although the odds [of] winning or losing are the same, winning is more good than losing is bad. Random fluctuations in cheat numbers are exploited by the cooperators, who benefit more than they lose out.”
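The analogy is easy to check numerically. Here’s a minimal simulation in Python; the payoffs come from the quote, while the trial counts and random seed are mine:

```python
import random

random.seed(1)

def run(flips: int) -> int:
    """Flip a fair coin `flips` times: heads wins 20, tails loses 10."""
    return sum(20 if random.random() < 0.5 else -10 for _ in range(flips))

trials = [run(1_000) for _ in range(100)]
print(f"Average outcome over 1,000 flips: {sum(trials) / len(trials):+,.1f}")
# Prints roughly +5,000: the odds are even, but the asymmetric stakes drift upward.
```

The cooperators are in the same position: the fluctuations in cheat numbers are symmetric, but their consequences are not, so chance alone pushes the population toward cooperation.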

EDITED TO ADD (8/12): Journal article.

Related article.

Posted on July 29, 2016 at 12:23 PM

Psychological Model of Selfishness

This is interesting:

Game theory decision-making is based entirely on reason, but humans don’t always behave rationally. David Rand, assistant professor of psychology, economics, cognitive science, and management at Yale University, and psychology doctoral student Adam Bear incorporated theories on intuition into their model, allowing agents to make a decision either based on instinct or rational deliberation.

In the model, there are multiple games of prisoner’s dilemma. But while some have the standard set-up, others introduce punishment for those who refuse to cooperate with a willing partner. Rand and Bear found that agents who went through many games with repercussions for selfishness became instinctively cooperative, though they could override their instinct to behave selfishly in cases where it made sense to do so.

However, those who became instinctively selfish were far less flexible. Even in situations where refusing to cooperate was punished, they would not then deliberate and rationally choose to cooperate instead.

The paper:

Abstract: Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation’s proximate cognitive underpinnings using a dual-process framework: Is deliberative self-control necessary to rein in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game-theoretic model of the evolution of cooperation. Agents play prisoner’s dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes-conflicting empirical results, and shed light on the nature of human cognition and social decision making.
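To make the model’s mechanism concrete, here is a toy version in Python. It is not the authors’ code, and every parameter is invented for illustration; it just shows how a stochastic deliberation cost and a mix of one-shot and reciprocal games interact:

```python
import random

random.seed(42)

# Illustrative parameters only; the paper's evolutionary model differs.
P_ONESHOT = 0.4   # fraction of games that are one-shot
MAX_COST = 2.0    # deliberation cost, drawn uniformly from [0, MAX_COST]

def payoff(action: str, oneshot: bool) -> float:
    """Focal agent's payoff, assuming partners reciprocate in repeated
    games and defection only pays off once in one-shot games."""
    if oneshot:
        return 3.0 if action == "defect" else 0.0
    return 4.0 if action == "cooperate" else 1.0

def average_payoff(strategy: str, encounters: int = 100_000) -> float:
    # A dual-process agent only changes its move in one-shot games
    # (cooperate -> defect, worth 3.0), so deliberating is worth at most
    # P_ONESHOT * 3.0; it deliberates only when the drawn cost is lower.
    threshold = P_ONESHOT * 3.0
    total = 0.0
    for _ in range(encounters):
        oneshot = random.random() < P_ONESHOT
        if strategy == "intuitive_defector":
            total += payoff("defect", oneshot)  # never pays the cost
        else:  # dual-process: cooperate by default, sometimes deliberate
            cost = random.uniform(0, MAX_COST)
            if cost < threshold:
                total += payoff("defect" if oneshot else "cooperate", oneshot) - cost
            else:
                total += payoff("cooperate", oneshot)
    return total / encounters

for s in ("intuitive_defector", "dual_process_cooperator"):
    print(f"{s}: {average_payoff(s):.2f}")
```

With these made-up numbers the dual-process cooperator comes out ahead (roughly 2.8 versus 1.8); raise P_ONESHOT and the intuitive defector wins instead. That mirrors the paper’s finding that selection favors one of those two strategies, and never an agent that deliberates in order to cooperate.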

Very much in line with what I wrote in Liars and Outliers.

Posted on January 28, 2016 at 6:18 AM

The Value of Breaking the Law

Interesting essay on the impossibility of being entirely lawful all the time, the balance that results from the difficulty of law enforcement, and the societal value of being able to break the law.

What’s often overlooked, however, is that these legal victories would probably not have been possible without the ability to break the law.

The state of Minnesota, for instance, legalized same-sex marriage this year, but sodomy laws had effectively made homosexuality itself completely illegal in that state until 2001. Likewise, before the recent changes making marijuana legal for personal use in WA and CO, it was obviously not legal for personal use.

Imagine if there were an alternate dystopian reality where law enforcement was 100% effective, such that any potential law offenders knew they would be immediately identified, apprehended, and jailed. If perfect law enforcement had been a reality in MN, CO, and WA since their founding in the 1850s, it seems quite unlikely that these recent changes would have ever come to pass. How could people have decided that marijuana should be legal, if nobody had ever used it? How could states decide that same sex marriage should be permitted, if nobody had ever seen or participated in a same sex relationship?

This is very much like my notion of “outliers” in my book Liars and Outliers.

Posted on July 16, 2013 at 12:35 PM

Me at the RSA Conference

I’ll be speaking twice at the RSA Conference this year. I’m giving a solo talk Tuesday at 1:00, and participating in a debate about training Wednesday at noon. This is a short written preview of my solo talk, and this is an audio interview on the topic.

Additionally: Akamai is giving away 1,500 copies of Liars and Outliers, and Zscaler is giving away 300 copies of Schneier on Security. I’ll be doing book signings in both of those companies’ booths and at the conference bookstore.

Posted on February 25, 2013 at 1:49 PM

Experimental Results: Liars and Outliers Trust Offer

Last August, I offered to sell Liars and Outliers for $11 in exchange for a book review. This was much less than the $30 list price; less even than the $16 Amazon price. For readers outside the U.S., where books can be very expensive, it was a great price.

I sold 800 books from this offer—far more than the few hundred I originally intended—to people all over the world. It was the end of September before I mailed them all out, and probably a couple of weeks later before everyone received their copy. Now, three months after that, it’s interesting to count up the number of reviews I received from the offer.

That’s not a trivial task. I asked people to e-mail me URLs for their review, but not everyone did. But counting the independent reviews, the Amazon reviews, and the Goodreads reviews from the time period, and making some reasonable assumptions, about 70 people fulfilled their end of the bargain and reviewed my book.

That’s 9%.

There were some outliers. One person wrote to tell me that he didn’t like the book, and offered not to publish a review despite the agreement. Another two e-mailed me to offer to return the price difference (I declined).

Perhaps people have been busier than they expected—and haven’t gotten around to reading the book and writing a review yet. I know my reading is often delayed by more pressing priorities. And although I didn’t put any deadline on when the review should be completed by, I received a surge of reviews around the end of the year—probably because some people self-imposed a deadline. What is certain is that a great majority of people decided not to uphold their end of the bargain.

The original offer was an exercise in trust. But to use the language of the book, the only thing inducing compliance was the morals of the reader. I suppose I could have collected everyone’s names, checked off those who wrote reviews, and tried shaming the rest—but that seems like a lot of work. Perhaps this public nudge will be enough to convince some more people to write reviews.

EDITED TO ADD (1/11): I never intended to make people feel bad with this post. I know that some people are busy, and that reading an entire book is a large time commitment (especially in our ever-shortened-attention-span era). I can see how this post could be read as an attempt to shame, but—really—that was not my intention.

EDITED TO ADD (1/22): Some comments.

Posted on January 11, 2013 at 8:10 AM

