Review of Liars and Outliers
It’s no secret that I’m a huge fan of Bruce Schneier and his work. So when he offered readers a chance to buy his book at a discount in exchange for a review, I jumped at it. This review fulfills the obligation I took on.
Every once in a while, you learn something that recontextualizes the world for you, and you start looking at everything through a new lens. After reading Liars and Outliers, I’ve been framing the systems I interact with in terms of cooperation, defection, and the pressures applied to prevent defection.
At a certain level of abstraction, many human interactions, taken in aggregate, look like Prisoner’s Dilemmas (or other game-theoretic games where the global optimum is at odds with personal optima). When you go to the grocery store, you (along with everybody else) have a choice between paying for your goods—cooperating—or walking out—defecting. Paying is good for everybody, because it helps ensure that the grocery store will continue to serve the area; walking out gets you free groceries, but the costs are passed on to other customers, and if too many people steal, the store might close. The fact that most people don’t steal groceries is what allows stores to continue operating. These defectors, as Schneier calls people who make the selfish choice over the societally optimal one, are the titular Liars and Outliers, and society functions because they are outliers.
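To make that incentive structure concrete, here’s a minimal Python sketch of the grocery-store dilemma. The payoff numbers and the closing threshold are my own illustrative assumptions, not anything from the book:

```python
# Toy model of the grocery-store dilemma (numbers are my own, not Schneier's).
# Each shopper either pays (cooperates) or steals (defects); thefts are
# recouped from paying customers through higher prices, and past a tipping
# point the store closes and nobody gets groceries at all.

VALUE = 10.0    # what the groceries are worth to a shopper
PRICE = 8.0     # sticker price when nobody steals
CLOSE_AT = 0.3  # hypothetical fraction of defectors at which the store gives up

def payoff(i_defect: bool, defector_fraction: float) -> float:
    if defector_fraction >= CLOSE_AT:
        return 0.0                       # store closed: everyone loses
    # Losses from theft are spread across the remaining paying customers.
    surcharge = PRICE * defector_fraction / (1 - defector_fraction)
    if i_defect:
        return VALUE                     # free groceries, costs externalized
    return VALUE - PRICE - surcharge     # pay, plus a share of others' thefts

# Whatever everyone else does (short of closing the store), defecting
# is individually better...
assert payoff(True, 0.1) > payoff(False, 0.1)
# ...but a society of cooperators beats a society of defectors.
assert payoff(False, 0.0) > payoff(True, 0.5)
```

The only point of the toy model is the shape of the payoffs: defection dominates for each individual, yet universal cooperation beats universal defection.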
Schneier first lays out how society depends upon trust. We need to act in the group’s interest instead of our own, but we also need to be able to trust that others will do the same. Though you might not think about it, this trust is implicit nearly everywhere in society. If the grocery store can’t trust that shoppers will pay for their goods, it won’t operate. If you don’t trust that a distant merchant will actually deliver the goods you paid for, you’re not going to buy anything from them. If you don’t trust that Level3 isn’t going to spy on your communications and sell them to others, you won’t use the Internet.
But we don’t explicitly trust each person, company, and organization we deal with; we trust the societal systems that surround them.
Schneier then traces in detail how, as society has become more connected and complex, both the kinds of trust we need and the mechanisms for achieving that trust have changed. In smaller societies where people all know each other, morals and reputation sufficed to discourage defection and exclude defectors. But as societies grew, it became harder to pass around knowledge of who was trustworthy and why, and we started to need institutions—the police, for example—to enable trust again. And rather than implying a lack of trust, security systems—other tools and systems that help reduce defection—allow us to trust that others will act in the group interest. These four pressures—moral pressure, reputational pressure, institutional pressure, and security systems—are what enable trust in society.
Moral pressures work well in small groups and sometimes in larger ones. Schneier points out that society applies almost no pressure to vote beyond the moral pressure we feel. There’s almost never a discernible, direct benefit from voting, since each person’s vote is just one vote, and elections are (almost) never that close. Yet still, we vote. Likewise, there is little pressure except moral pressure that prevents us from littering. However, these pressures are vulnerable to the Bad Apple Effect: at a certain tipping point, the moral pressure breaks down. If there’s already litter on the ground, what’s the point in doing the work to find a trash can? This effect can also be used for good: when utilities report each customer’s energy consumption as a percentile score relative to their neighbors (keeping the comparison group fairly small), average energy consumption goes down. But moral pressure often fails at larger scales, morals conflict with each other, and morals are vulnerable to manipulation (cf. every Sob Story Guy on the T).
Slightly larger groups can use reputational pressure. If you buy a product from a merchant and the product is a lemon, you can tell your friends to avoid that merchant in the future. Reputational pressure works well in moderately sized groups but can fail at larger scales: if a particular car dealership is shady and only occasionally knowingly sells a lemon, it is difficult for those who bought the lemon to communicate this to others who are looking to buy from the dealership later. Branding is an attempt to scale reputational pressure—it associates a product or service with an identity that can accrue a reputation among consumers. And technological systems, like online reviews of products and businesses, are another way that reputational pressure can scale. But in any situation where interpersonal ties are looser and the benefits of defecting can outweigh the reputational consequences, reputational pressure doesn’t suffice.
Institutions come into play when the informal pressures of morals and reputation fail. Enforced rules, with punishments meted out by an agent of the institution—be it a government, school, or any other organization—are the core of institutional pressure. While customers might be able to damage the reputation of a store that defrauds them, they can also call the police to report the fraud and have a formal punishment imposed on the store. In my favorite example in the book, criminal organizations severely punish those who defect (from the organization, while cooperating with society at large). It’s important that punishments be large enough to be an effective deterrent against defection. For instance, if parking fines are very low, illegal parking might become normal, with fines regarded as just a cost of doing business. Importantly, the institution making and enforcing the rules must be seen as accountable to its constituents, so that they feel moral incentives to follow the laws and support the institution. Institutional pressures are far from perfect, and the problems with laws are numerous and well documented, including loopholes and unintended consequences.
The final enabler of trust is security systems, which fill in where the other societal pressures can’t prevent defection. There are several different types. Defensive systems are what come to mind when you think “security”: locks, bulletproof vests, vaults, and so on. But that’s just one piece. Other types of security systems focus on making defection harder (e.g. encryption makes it harder to steal information), detecting defection (e.g. burglar alarms), tracing defection after the fact (forensic systems), and recovering from defection. In general, security systems directly affect how difficult it is to defect at the moment of defection, rather than merely shifting the risk–reward calculation. A fence around a yard, even if it’s not tall or locked, is a simple security system, as is a guard in a prison. Security systems can also act as, and augment, other societal pressures: online reviews are an example of a security system enforcing reputational pressure. But security systems aren’t a panacea. DRM is a security system that prevents users from defecting from publishers’ wishes, but it requires invasive intrusion into the user’s computer and imposes significant costs on otherwise simple distribution technology. Some security systems are effectively impossible to implement, as with preventing doping in sports. Adding technological security to other systems can mean that, to break the system, you need only break the technology. And too much security restricts societal freedom.
Having described societal pressures in detail, Schneier then discusses how these pressures affect trust in the real world. Competing interests inherently mean that following one group’s interest (for instance, society at large) can mean defecting from another group (for instance, a criminal syndicate). People also weigh numerous other competing interests when deciding whether to defect from a group, including self-interest, competing morals, self-preservation, and specific relationships. These interests can reinforce or conflict with one another, and the ways they affect our interactions with society are complex and discussed in detail.
Here, Schneier turns specifically to the actions of organizations, starting with organizations in general and moving on to corporations and public institutions. By and large, organizations don’t feel moral pressure, though they can have interests that function like morals. (“Don’t be evil” is held up as one such interest.) They very much feel reputational pressure, but they can also engage in advertising and public relations and thus significantly influence their own reputations, which isn’t an option most people have. It’s more difficult to apply institutional pressure to organizations, so most penalties are financial; you can’t put an organization in jail. Many corporations are organized across many jurisdictions, making them difficult or impossible to regulate. And security systems within organizations are generally designed to protect against the people inside them.
Further discussion of trust in organizations is heavily informed by the principal–agent problem and the unintended consequences of that relationship. A fishing company might have a corporate policy against overfishing, but the organization also wants to sell as many fish as possible. A manager in the organization thereby faces competing pressures: they want to avoid overfishing, because both society and the company have declared it bad, but if they’re judged and compensated based on the number of fish caught, they have financial and reputational incentives to catch more.
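As a companion to the toy model above, here’s a similarly hedged sketch of that fishing example. Again, every number is invented purely for illustration:

```python
# Toy principal-agent model (made-up numbers): the manager's bonus rewards
# catch volume, while the long-term cost of overfishing is diffused across
# the whole company, so the agent's optimum diverges from the principal's.

SUSTAINABLE_CATCH = 100   # tons the fishery can bear
REVENUE_PER_TON = 300     # company revenue per ton sold
BONUS_PER_TON = 50        # the manager is compensated on volume
OVERFISH_COST = 400       # long-term cost to the company per excess ton
MANAGERS = 20             # the overfishing cost is shared; the bonus is not

def manager_payoff(catch: int) -> float:
    excess = max(0, catch - SUSTAINABLE_CATCH)
    return catch * BONUS_PER_TON - excess * OVERFISH_COST / MANAGERS

def company_payoff(catch: int) -> float:
    excess = max(0, catch - SUSTAINABLE_CATCH)
    return catch * (REVENUE_PER_TON - BONUS_PER_TON) - excess * OVERFISH_COST

assert manager_payoff(120) > manager_payoff(100)   # the agent gains by overfishing
assert company_payoff(120) < company_payoff(100)   # the principal loses
```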
Cover-ups and whistleblowing are two specific outcomes of these competing interests: the former involves continuing to defect from society in order to cooperate with the organization, and the latter is the opposite. “Cover your ass” and “I’ll be gone, you’ll be gone” (a phrase bankers used when discussing the potential consequences of issuing worthless mortgage-backed securities) are two other personal responses to organizational pressures, where personal interests may not align with organizational ones.
Institutions are a notable type of organization specifically because their purpose is to enforce societal pressures. That is, institutions are themselves agents of society that additionally have their own interests. Take the TSA, which is tasked with protecting airplanes, yet whose actions often seem arbitrary. Its interests center on self-preservation, reputation (if there’s an airplane attack, that’s bad; if people get in car accidents instead, that’s okay), and relative power (if it gets bigger, it can do more and is likelier to survive). Regulatory capture also affects institutions and can help cause failures like those in the financial industry or at Deepwater Horizon. Large corporations can also act in the place of institutions, but they are not subject to oversight from their users and are motivated only by profit.
The book ends with more discussion of how societal pressures fail together and how societies misapply them. We overemphasize rare, spectacular risks and downplay common, mundane ones. This helps explain how the TSA was created and the way it operates, but it also explains much else about the ways we do things and apply pressure. If you get incentives wrong in other ways, cooperation becomes difficult, or there may be active incentives to defect: No Child Left Behind led teachers in certain areas, fearing for their jobs or swayed by other incentives, to help their students cheat on the tests. Security systems weren’t in place to prevent this, since they hadn’t been necessary before.
The book also discusses how technology affects trust. The modern world would not exist as we know it without technology, especially the communication and data-storage technology that brings us virtually together. But technology puts more people into contact with each other, at higher speed, in more complex ways. Defectors can do more damage, and larger groups can be slower to adapt their behavior to new attacks on the system. Defectors can also learn about a system and disseminate that knowledge more quickly, enabling larger-scale defection before society has a chance to disrupt them (the smart cow problem). The decreased cost of organizing also means that organizations can be largely headless and very difficult to sanction; Wikileaks is the canonical example of a technology-enabled organization defecting against governments’ interests that is proving impossible to shut down. And with more technology and more complexity come more societal wicked problems, with hundreds of potential inputs and outcomes.
Of course, I’ve skipped over some important points in the book, and this review necessarily glosses over many key details. I strongly recommend reading it in its entirety. It’s extremely well researched, and it synthesizes ideas from many fields of study into a cohesive, understandable framework for the critical role of trust in society and the systems we use to enable that trust.
And lastly, since I’ve now reviewed the book, I have cooperated, not defected, and hopefully the grand experiment has worked. Thanks, Bruce!