Liars and Outliers

Liars and Outliers, Bruce Schneier’s most recent security-related text, is an interesting and wide-ranging examination of trust in commerce and broader society. And I do mean wide-ranging—he covers everything from the implications of early mankind’s organization into groups of around 150 individuals (“Dunbar’s number”) to reputation systems such as eBay feedback and Yelp reviews. Liars and Outliers doesn’t hang together quite as well as his previous books, but it’s still a terrific primer for readers who want more insight into the complex world of security and trust.

I had the opportunity to speak with Dr. Schneier about his book. The text of the interview appears below.

Why did you write Liars and Outliers?

For me, writing is exploration. I initially set out to write a book about the human side of security, both the economics and psychology of security. More specifically, how we as a society use security to protect ourselves from individual members of society. It wasn’t until I was most of the way through the book that I realized I was really writing about trust: that security was just the mechanism for society to induce trust.

Why is thinking about trust important right now?

We’re living in an ever more complex, ever more technological world. The amount of trust we need in each other to survive as a society is enormous. Unless we explicitly think about trust—why it is important, how we induce it, what we do about untrustworthy people within our society—we risk getting security wrong, to the detriment of society. To give a specific example, international banking has the potential to collapse society, unless we ensure that those who need to be in positions of trust are indeed trustworthy.

Every author has a core audience they expect to buy a book. Who are your core readers for Liars and Outliers?

I wrote this book for a general audience. Liars and Outliers is not a computer book or even a technology book; it’s more a current affairs book or a sociology book. And as such, my audience is anyone who is interested in the topic. Increasingly, when I write I don’t have a specific type of reader in mind, just someone who is both intelligent and interested in the topics.

Who should read your book, but probably won’t?

I would like more policymakers to read my book. I regularly get comments from readers who say they’ll send the book to their legislator. Unfortunately, legislators—at least in the U.S.—generally have pretty rigid policy notions and not a lot of time for book-length reading. But I think my book would have a positive effect on a number of political debates—not because it gives answers, but because it helps the reader to better understand the questions.

Your position seems to be that we need a fairly strong central government to increase the cost of defection, for both criminal and civil matters. Are there areas in the US where we get this sort of regulation right? How about where we get it wrong, either by being too lax or too restrictive?

Laws are an important part of achieving security and motivating trust. I don’t have strong opinions on the type of government that should enact those laws, though. In order to trust our food system, we need laws that make it illegal to lie about the food you sell, but whether those laws come from a democratically elected legislature or a benevolent despot doesn’t matter. I certainly have opinions about effective and appropriate systems of government, but that’s not the thrust of the book.

In the U.S., we get regulation wrong all the time, often by having too little of it. That is, without laws prohibiting certain behaviors, and effective punishments for those who breach those laws, the only things to prevent those behaviors are reputational effects. And it’s too easy for organizations—mostly large corporations—to sidestep those reputational effects. The lax banking regulations that led to the financial meltdown of 2008 are an obvious example of that. Sometimes, though, we have too much regulation. Our nonsensical counterterrorism policy post-9/11 is an example of that.

Often these regulatory failures occur because we’ve let our political ideology drive policy instead of doing what makes sense; or because technology has changed the balance between attacker and defender, and we haven’t reacted fast enough. As for where we’ve gotten it right: violent crime in the U.S. is probably a good example.

Does technology offer the prospect of delivering trust at a societal level?

Technology enables trust to scale. For example, in the early days of banking, loan officers had to know the customers who received bank loans. Today, a perfect stranger can get a loan from a bank over the Internet. Why? Because the banks use an automatic system of credit scoring to determine who gets loans. That’s a technology-driven trust mechanism. Society has lots of these, and they all help make trust scale. They’re not perfect, of course; we’ve all heard stories of how credit scores didn’t work for a particular individual or how the bank got screwed. But by and large the credit scoring system works.

Right now, I’m in New York City, where there are 10 million people I don’t know. Some of them want to take advantage of me in some way. That I can visit this city, safely—without even thinking twice about it—is a testament to how well technology enables trust to scale.

On the other hand, technology can make trust harder. The bad guys can do more damage, from further away, at less risk to themselves, than ever before. So technology both giveth and taketh away.
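To make the credit-scoring example above concrete, here is a minimal sketch of an automated trust mechanism. This is not any real bank’s model; the features, weights, and approval threshold are all invented for illustration:

```python
# Toy sketch of an automated trust mechanism: a bank extends a loan to a
# stranger based on a score, not on a loan officer knowing the applicant.
# Features, weights, and the threshold below are invented for illustration.

def credit_score(payment_history: float, utilization: float,
                 account_age_years: float) -> float:
    """Combine a few signals into one score in [0, 1]; higher is better."""
    return (
        0.5 * payment_history                       # fraction of bills paid on time
        + 0.3 * (1.0 - utilization)                 # lower credit utilization is better
        + 0.2 * min(account_age_years / 10.0, 1.0)  # longer credit history is better
    )

def approve_loan(score: float, threshold: float = 0.7) -> bool:
    """The institution trusts anyone whose score clears the threshold."""
    return score >= threshold

# A perfect stranger is evaluated by the mechanism, not by a person.
score = credit_score(payment_history=0.95, utilization=0.30, account_age_years=8)
print(f"score={score:.2f} approved={approve_loan(score)}")
```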

In Down and Out in the Magic Kingdom, Cory Doctorow created an interesting universe where your personal wealth depended on your reputation. That’s a fun scenario, but what do you think of existing reputation systems? And how do you see them evolving?

We make use of a lot of reputation systems, both informal and formal. The credit ratings system is essentially a reputation system. Others are the various online recommendation systems: eBay feedback, Amazon reviews, Yelp. They’re all useful, and they allow commerce to scale. The eBay feedback system enables non-merchants to regularly buy and sell to each other remotely—something much harder before eBay was invented.

The problem is that all of these systems can be gamed. If there is value in having a good reputation, then someone who can’t get that reputation naturally has an incentive to try to purchase it. So we have fake positive reputation. A recent New York Times article talked about an Amazon merchant who was trading free merchandise for five-star reviews. The evolution of these systems will primarily involve dealing with these fake reviews. I expect an increasingly complex arms race between those who post fraudulent reviews and those who detect them. Because unless we can all trust these reviewer systems, we won’t use them.
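As a toy illustration of that arms race, the sketch below compares a naive average rating with one that discounts reviews from accounts matching a simple fraud signal. The heuristic and all the numbers are invented; real platforms use far more sophisticated detection:

```python
# Toy sketch of how a reputation system can be gamed, and one naive defense.
# The fraud signal used here (new, single-review accounts) is invented for
# illustration; real detection systems are far more sophisticated.

from dataclasses import dataclass

@dataclass
class Review:
    rating: int                  # 1-5 stars
    reviewer_age_days: int       # age of the reviewer's account
    reviewer_total_reviews: int  # how many reviews that account has posted

def naive_score(reviews: list[Review]) -> float:
    """Plain average: easily inflated by purchased five-star reviews."""
    return sum(r.rating for r in reviews) / len(reviews)

def weighted_score(reviews: list[Review]) -> float:
    """Downweight reviews from brand-new, low-history accounts."""
    def weight(r: Review) -> float:
        if r.reviewer_age_days < 30 and r.reviewer_total_reviews <= 1:
            return 0.1  # likely a throwaway account created to post one review
        return 1.0
    return sum(weight(r) * r.rating for r in reviews) / sum(weight(r) for r in reviews)

# Three genuine mixed reviews plus three purchased five-star reviews.
reviews = [
    Review(3, 900, 40), Review(2, 1200, 15), Review(4, 600, 22),
    Review(5, 5, 1), Review(5, 3, 1), Review(5, 8, 1),
]
print(f"naive average:  {naive_score(reviews):.2f}")     # inflated by the fakes
print(f"weighted score: {weighted_score(reviews):.2f}")  # closer to the genuine signal
```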

You single out corporations as a class of actors in your model. How can individuals trust corporations, specifically public corporations that are required to maximize shareholder value?

Trusting corporations is no different than trusting people. Both maximize their utility. People maximize happiness, and corporations maximize profit. You can trust individuals and corporations either by knowing them well, or by trusting the mechanisms that surround them and encourage trustworthy behavior. I start the book with a story of a plumber who comes to my house to fix a leak. I trust him, even though I don’t know him, because I trust the system that produced his name and house call, as well as the greater societal systems that produced him as an adult member of our society. Corporations are no different; the underlying mechanisms are exactly the same. Right now, I am in a New York hotel. I trust the hotel, and the corporation that owns the hotel, and all other hotels with the same name, both because of what I know about them directly and what I know about the greater economic and social systems that they operate in.

You refer to transparency in decision-making and operations as a means of establishing trust. Science fiction author David Brin wrote The Transparent Society, essentially a thought experiment postulating a world with near-universal surveillance. Do we need to move closer to that type of transparency, where only the most secretive government and corporate entities have any privacy at all?

No. Privacy has enormous personal and social value, and we need to keep it. In general, privacy increases power, and any loss of privacy needs to be viewed using that lens. For example, in the relationship between individuals and the government, the government has more power. So reduced individual privacy reduces individual power and therefore increases the power imbalance between individuals and government, and is bad for liberty. On the other hand, reduced government privacy—open government laws—reduces government power and therefore reduces the power imbalance. This is good for liberty. A similar dynamic exists between individuals and other powerful organizations, such as corporations.

One of the steps you propose for increasing trust is to reduce concentrations of power. How does that idea play out if we need strong institutions to punish breaches of trust?

Reducing concentrations of power is essential because the powerful—whether they be individuals or organizations—can more easily breach trust, and do more damage when they do.

On the other hand, as you rightly said, we often need strong institutions to implement the very security mechanisms we need to induce trust. The way we solve this dilemma is through openness. Police are already allowed to intrude on the most intimate aspects of our lives to solve crimes, but we require them to first go to a judge and convince him or her that the intrusion is necessary—that’s the warrant process—and to disclose to the person intruded upon that the intrusion occurred. What is dangerous is secret courts and police that can operate with impunity.

You point out that, even though maintaining trust in a complex, interconnected society is a hard problem, we seem to do a pretty good job of it as a society. Are you optimistic or pessimistic about trust in the future?

I’m both. I’m optimistic because, over the long run, we tend to do very well. We’ve muddled through a tremendous amount of technological and societal change over the past few millennia. We often get it wrong in the short run, but eventually we get it right. Think about it this way: by any measure, we have much more freedom, more trust, and more liberty than we did even 100 years ago.

In the near term, though, there’s more call for pessimism. For example, the damage done by post-9/11 policies is very difficult to overcome. We still haven’t fixed the trust problems that caused the financial crisis of 2008. And at least in the U.S., we have a government completely incapable of tackling any difficult policy problems.

We also have dangers as a result of fast technological change. Untrustworthy people can now do more damage, more quickly, than ever before. How to deal with this is an open problem that I don’t know the solution to. But I believe the value of trust is greater than the risks of trust, and society will continue to muddle along. We get trust both right and wrong, but in the long run we get it more right than wrong.
