High-Tech Cheats in a World of Trust

I can put my cash card into an ATM anywhere in the world and take out a fistful of local currency, while the corresponding amount is debited from my bank account at home. I don’t even think twice: regardless of the country, I trust that the system will work.

The whole world runs on trust. We trust that people on the street won’t rob us, that the bank we deposited money in last month will return it this month, that the justice system will punish the guilty and exonerate the innocent. We trust that the food we buy won’t poison us, and that the people we let in to fix our boiler won’t murder us.

My career has taken me from cryptography to information security, then to broader security technology, and on to the economics and psychology of security. Most recently, I have become interested in how we induce trustworthy behaviour.

Society is, after all, an interdependent system that requires widespread cooperation to function. People need to act in ways that are expected of them, to be consistent and compliant. And not just individuals, but organisations and systems.

But in any cooperative system, there is an alternative, parasitic strategy available: cheating. A parasite obtains the benefits of widespread cooperation without bearing its costs, exploiting the trust of everyone else. There are, and always will be, robbers, crooked banks and judges who take bribes. So how do we keep the parasites to a small enough minority that they don’t ruin everything for everyone?

Remember the prisoner’s dilemma, the game theory scenario framed by Merrill Flood and Melvin Dresher at the US RAND Corporation think tank in 1950, which shows why two individuals might not cooperate even when it appears to be in their best interest to do so. The paradox is that it is in our collective interest to be trustworthy and cooperate, while it is in each individual’s self-interest to be parasitic and defect, or cheat. If too many defect, society stops functioning: the crime rate soars, international banking collapses and judicial rulings become available for sale to the highest bidder. No one would trust anyone, because there wouldn’t be enough trust to go around.
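To make the paradox concrete, here is a minimal sketch of the one-shot dilemma in Python. The payoff numbers are illustrative assumptions of mine, not figures from the text; any values in which the temptation to defect beats the reward for mutual cooperation, which in turn beats the punishment for mutual defection, tell the same story.

```python
# A toy model of the one-shot prisoner's dilemma described above.
# The payoff values are illustrative assumptions, not figures from the essay:
# temptation (5) > reward (3) > punishment (1) > sucker's payoff (0).

PAYOFFS = {  # (my move, other's move) -> (my payoff, other's payoff)
    ("cooperate", "cooperate"): (3, 3),   # reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),   # sucker's payoff vs temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),   # punishment for mutual defection
}

def best_response(other_move: str) -> str:
    """Return the move that maximises my own payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, other_move)][0])

# Whatever the other player does, defecting pays more for me individually...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet if we both follow that logic, we each end up with 1 instead of 3.
assert sum(PAYOFFS[("defect", "defect")]) < sum(PAYOFFS[("cooperate", "cooperate")])
```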

The way to solve this is to put our thumb on the scales. If we can increase the benefits of cooperation or the costs of defection, we can induce people to act in the group interest—because it is also in their self-interest. In my book Liars and Outliers I call such mechanisms societal pressures. A bank’s reputation in the community is a societal pressure. So is the lock on the ATM that keeps criminals out.
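Continuing the hypothetical numbers from the sketch above, a societal pressure can be modelled as an expected cost attached to defecting: a fine, a damaged reputation, a lock that has to be defeated. Once that assumed cost is large enough, the individually rational choice flips to cooperation.

```python
# The same toy payoffs, with a societal pressure modelled as an expected cost
# of defecting (a fine, lost reputation, the effort of defeating a lock).
# All numbers are assumed for illustration.

BASE = {  # (my move, other's move) -> my payoff, before any societal pressure
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_response(other_move: str, defection_cost: float) -> str:
    """My best move against a fixed opponent once defecting carries an extra cost."""
    def payoff(my_move: str) -> float:
        penalty = defection_cost if my_move == "defect" else 0.0
        return BASE[(my_move, other_move)] - penalty
    return max(("cooperate", "defect"), key=payoff)

# With no pressure at all, defection still dominates.
assert best_response("cooperate", defection_cost=0.0) == "defect"

# Push the expected cost of defecting above the temptation premium and
# cooperating becomes the individually rational choice as well.
assert best_response("cooperate", defection_cost=3.0) == "cooperate"
assert best_response("defect", defection_cost=3.0) == "cooperate"
```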

This problem isn’t new, nor is it unique to us. Since all complex systems must deal with the problems that parasites cause, it is not surprising that we have a complex interplay of societal pressures. The most basic are moral systems, which regulate our own behaviour, and reputational systems, which we use to regulate each other’s. Most of us try not to treat others unfairly because it makes us feel bad and because we know they will treat us badly in return. Most of us don’t steal because we would feel guilty, and because there are consequences when we are caught. We recognise that it is in our long-term self-interest not to act in our immediate self-interest.

Morals and reputation worked well enough for primitive lifestyles, but they began to fail as societies grew larger. Trust is personal and intimate among people who know each other, and morals and reputation are easily limited to an in-group. Institutional systems, such as laws, formalised reputation, and security technologies allowed societal pressure to scale up as we expanded into ever-larger groups.

So my naive trust in ATMs turns out to rest on many complex things: the moral inclinations of most of the people who build and operate transfer systems; the fact that a financial institution with a reputation for cheating would probably lose its customers; the myriad banking laws and regulations that exist to punish fraudsters; and the knowledge that the various security measures underpinning ATMs, bank transfers and banking in general will work properly even if some of those involved would prefer to cheat me.

This trust isn’t absolute, of course. Not every societal pressure affects everyone equally. Some people care more about their reputations, others are naturally law-abiding and still others are better at picking locks. But the goal isn’t total compliance, only to limit the scope of defection. Criminals still target ATMs, and the occasional rogue bank employee steals money from accounts. But for the most part, societal pressures keep the damage defectors do small enough for the system to remain intact.

But sometimes the scope of defection grows too great and underlying systems come crashing down. Overfishing has destroyed breeding stocks in many places. Crime and corruption have devastated some countries. The international banking system almost collapsed in 2008. In general, though, societal pressures maintain a delicate balance between cooperation and defection: too little pressure and the scope of defection becomes too great; too much and security becomes too costly.

This balance isn’t static: technological change disrupts it all the time. Some changes relate to defecting, as when ATM-based “card skimmers” make it easier for criminals to steal my codes and empty my bank account. Others relate to security, with computerised auditing technology making it more difficult for fraudulent transactions to go through the system unnoticed. Still others are unrelated to either: cheap telecoms make it easier to interconnect bank networks globally. Like societal pressures, all of these change the prisoner’s dilemma calculations.

Life becomes dangerously insecure when new technologies, innovations and ideas increase the scope of defection. Defectors innovate. New attacks become possible. Existing attacks become easier, cheaper, more reliable or more devastating. More people may defect, simply because it’s easier to. In response, society must also innovate, to reduce the scope of defection and restore the balance. This dynamic is as old as civilisation.

Global banking, terrorists with nuclear weapons, genetic engineering, bioweapons, pandemics: we now have such dangerous systems that a few defectors can wreak havoc so great that reactive rebalancing might not be enough. Worse still, by the time society realises that the scope of defection has grown and that societal pressures need to be strengthened, irreversible damage may already have been done.

To add to the complexity, not all defectors are bad. Neither cooperation nor defection relates to any absolute standard of morality. It is defectors who are in the vanguard of change, such as those who helped escaped slaves in the US South before the civil war. It is defectors who agitate to overthrow repressive regimes in the Middle East, and defectors who fuel the Occupy movement. Without them, society stagnates.

How to achieve this balance is at the core of many of our policy debates about the internet. Anonymity is essential to freedom and liberty, and saves the lives of dissidents everywhere; yet it also protects criminals. Copyright both protects and stifles innovation. Balance is also central to debates about air security, terrorism in general, and protecting economies against financial fraud. The big challenge will be to work out how to provide more societal pressure to deal with the threats of technology, while at the same time applying less, to ensure an open, free and evolving society.

Categories: Theory of Security, Trust
