Entries Tagged "societal security"


Research on Human Honesty

New research from Science: “Civic honesty around the globe”:

Abstract: Civic honesty is essential to social capital and economic development, but is often in conflict with material self-interest. We examine the trade-off between honesty and self-interest using field experiments in 355 cities spanning 40 countries around the globe. We turned in over 17,000 lost wallets with varying amounts of money at public and private institutions, and measured whether recipients contacted the owner to return the wallets. In virtually all countries citizens were more likely to return wallets that contained more money. Both non-experts and professional economists were unable to predict this result. Additional data suggest our main findings can be explained by a combination of altruistic concerns and an aversion to viewing oneself as a thief, which increase with the material benefits of dishonesty.

I am surprised, too.

Posted on July 5, 2019 at 6:15 AM

Are We Becoming More Moral Faster Than We're Becoming More Dangerous?

In The Better Angels of Our Nature, Steven Pinker convincingly makes the point that by pretty much every measure you can think of, violence has declined on our planet over the long term. More generally, “the world continues to improve in just about every way.” He’s right, but there are two important caveats.

One, he is talking about the long term. The trend lines are uniformly positive across the centuries and mostly positive across the decades, but go up and down year to year. While this is an important development for our species, most of us care about changes year to year—and we can’t make any predictions about whether this year will be better or worse than last year in any individual measurement.

The second caveat is both more subtle and more important. In 2013, I wrote about how technology empowers attackers. By this measure, the world is getting more dangerous:

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious… and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.

This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.

Pinker’s trends are based both on increased societal morality and better technology, and both are based on averages: the average person with the average technology. My increased attack capability trend is based on those two trends as well, but on the outliers: the most extreme person with the most extreme technology. Pinker’s trends are noisy, but over the long term they’re strongly linear. Mine seem to be exponential.
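If that characterization is even roughly right, the long-run question is less whether the curves differ than when they diverge: for any positive growth rate, an exponential eventually overtakes any linear trend. A toy sketch in Python, with entirely made-up coefficients, just to show the shape of the race:

```python
import math

# Illustrative only: a slow linear "societal improvement" trend versus an
# exponential "attacker capability" trend. The coefficients are invented;
# the point is that, for any positive growth rate, the exponential curve
# eventually crosses the linear one.

def societal_trend(t: float) -> float:
    return 100.0 + 2.0 * t        # steady linear improvement (arbitrary units)

def attacker_capability(t: float) -> float:
    return math.exp(0.08 * t)     # exponentially growing destructive reach

t = 0.0
while attacker_capability(t) < societal_trend(t):
    t += 1.0
print(f"With these made-up parameters, the curves cross around year {t:.0f}.")
```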

When Pinker expresses optimism that the overall trends he identifies will continue into the future, he’s making a bet. He’s betting that his trend lines and my trend lines won’t cross. That is, that our society’s gradual improvement in overall morality will continue to outpace the potentially exponentially increasing ability of the extreme few to destroy everything. I am less optimistic:

But the problem isn’t that these security measures won’t work—even as they shred our freedoms and liberties—it’s that no security is perfect.

Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We’ll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.

As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of someone in the group doing it approaches certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn’t kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then, someone goes off and destroys us anyway?
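The arithmetic behind that claim is simple: if each of N people independently has some small probability p of triggering a catastrophe in a given year, the chance that at least one of them does is 1 - (1 - p)^N, which climbs toward certainty as N grows. A minimal sketch, with purely hypothetical numbers:

```python
def p_at_least_one(p: float, n: int) -> float:
    """Probability that at least one of n independent actors acts."""
    return 1.0 - (1.0 - p) ** n

# Purely hypothetical per-person, per-year probability; the exact value
# doesn't matter -- only that it is nonzero and the group keeps growing.
p = 1e-9
for n in (10**6, 10**8, 10**9, 8 * 10**9):
    print(f"group size {n:>13,}: {p_at_least_one(p, n):.4f}")
```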

Clearly we’re not at the point yet where any of these disaster scenarios have come to pass, and Pinker rightly expresses skepticism when he says that historical doomsday scenarios have so far never come to pass. But that’s the thing about exponential curves; it’s hard to predict the future from the past. So either I have discovered a fundamental problem with any intelligent individualistic species and have therefore explained the Fermi Paradox, or there is some other factor in play that will ensure that the two trend lines won’t cross.

Posted on January 4, 2017 at 7:42 AM

Psychological Model of Selfishness

This is interesting:

Game theory decision-making is based entirely on reason, but humans don’t always behave rationally. David Rand, assistant professor of psychology, economics, cognitive science, and management at Yale University, and psychology doctoral student Adam Bear incorporated theories on intuition into their model, allowing agents to make a decision either based on instinct or rational deliberation.

In the model, there are multiple games of prisoner’s dilemma. But while some have the standard set-up, others introduce punishment for those who refuse to cooperate with a willing partner. Rand and Bear found that agents who went through many games with repercussions for selfishness became instinctively cooperative, though they could override their instinct to behave selfishly in cases where it made sense to do so.

However, those who became instinctively selfish were far less flexible. Even in situations where refusing to cooperate was punished, they would not then deliberate and rationally choose to cooperate instead.

The paper:

Abstract: Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation’s proximate cognitive underpinnings using a dual-process framework: Is deliberative self-control necessary to rein in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game-theoretic model of the evolution of cooperation. Agents play prisoner’s dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes-conflicting empirical results, and shed light on the nature of human cognition and social decision making.
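A rough sketch of the model's core mechanic, as I read the abstract (the parameter values and payoff assumptions here are invented, not the paper's): each agent carries a generalized intuition plus a threshold for how much it is willing to pay to deliberate; deliberation reveals whether the game is one-shot or reciprocal and lets the agent tailor its move.

```python
import random

class Agent:
    """Toy dual-process agent: an intuitive default plus an optional,
    costly deliberation step that tailors the move to the game type."""

    def __init__(self, intuition: str, deliberation_threshold: float):
        self.intuition = intuition                # "cooperate" or "defect"
        self.threshold = deliberation_threshold   # max cost worth paying to deliberate

    def choose(self, game_is_one_shot: bool) -> str:
        cost = random.random()                    # stochastically varying deliberation cost
        if cost < self.threshold:
            # Deliberation reveals the game type: defect when the game is
            # one-shot, cooperate when reciprocity makes cooperation pay.
            return "defect" if game_is_one_shot else "cooperate"
        return self.intuition                     # otherwise act on generalized intuition

# The two strategies the paper reports selection favoring, depending on
# the level of reciprocity:
dual_process = Agent(intuition="cooperate", deliberation_threshold=0.3)     # intuitive cooperator
intuitive_defector = Agent(intuition="defect", deliberation_threshold=0.0)  # never deliberates

for name, agent in [("dual-process cooperator", dual_process),
                    ("intuitive defector", intuitive_defector)]:
    moves = [agent.choose(game_is_one_shot=random.random() < 0.5) for _ in range(10_000)]
    print(f"{name}: cooperation rate {moves.count('cooperate') / len(moves):.2f}")
```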

Very much in line with what I wrote in Liars and Outliers.

Posted on January 28, 2016 at 6:18 AM

Security Trade-offs in the Longbow vs. Crossbow Decision

Interesting research: Douglas W. Allen and Peter T. Leeson, “Institutionally Constrained Technology Adoption: Resolving the Longbow Puzzle,” Journal of Law and Economics, v. 58, Aug 2015.

Abstract: For over a century the longbow reigned as undisputed king of medieval European missile weapons. Yet only England used the longbow as a mainstay in its military arsenal; France and Scotland clung to the technologically inferior crossbow. This longbow puzzle has perplexed historians for decades. We resolve it by developing a theory of institutionally constrained technology adoption. Unlike the crossbow, the longbow was cheap and easy to make and required rulers who adopted the weapon to train large numbers of citizens in its use. These features enabled usurping nobles whose rulers adopted the longbow to potentially organize effective rebellions against them. Rulers choosing between missile technologies thus confronted a trade-off with respect to internal and external security. England alone in late medieval Europe was sufficiently politically stable to allow its rulers the first-best technology option. In France and Scotland political instability prevailed, constraining rulers in these nations to the crossbow.

It’s nice to see my security interests intersect with my D&D interests.

Posted on January 22, 2016 at 6:44 AM

Joseph Stiglitz on Trust

Joseph Stiglitz has an excellent essay on the value of trust, and the lack of it in today’s society.

Trust is what makes contracts, plans and everyday transactions possible; it facilitates the democratic process, from voting to law creation, and is necessary for social stability. It is essential for our lives. It is trust, more than money, that makes the world go round.

At the end, he touches briefly on the security mechanisms necessary to restore it:

I suspect there is only one way to really get trust back. We need to pass strong regulations, embodying norms of good behavior, and appoint bold regulators to enforce them. We did just that after the roaring ’20s crashed; our efforts since 2007 have been sputtering and incomplete. Firms also need to do better than skirt the edges of regulations. We need higher norms for what constitutes acceptable behavior, like those embodied in the United Nations’ Guiding Principles on Business and Human Rights. But we also need regulations to enforce these norms – a new version of trust but verify. No rules will be strong enough to prevent every abuse, yet good, strong regulations can stop the worst of it.

This, of course, is what my book Liars and Outliers is about.

Posted on December 30, 2013 at 9:55 AM

The Effect of Money on Trust

Money reduces trust in small groups, but increases it in larger groups. Basically, the introduction of money allows society to scale.

The team devised an experiment where subjects in small and large groups had the option to give gifts in exchange for tokens.

They found that there was a social cost to introducing this incentive. When all tokens were “spent”, a potential gift-giver was less likely to help than they had been in a setting where tokens had not yet been introduced.

The same effect was found in smaller groups, who were less generous when there was the option of receiving a token.

“Subjects basically latched on to monetary exchange, and stopped helping unless they received immediate compensation in the form of an intrinsically worthless object [a token].

“Using money does help large societies to achieve larger levels of co-operation than smaller societies, but it does so at a cost of displacing norms of voluntary help that is the bread and butter of smaller societies, in which everyone knows each other,” said Prof Camera.

But he said that this negative result was not found in larger anonymous groups of 32; instead, co-operation increased with the use of tokens.

“This is exciting because we introduced something that adds nothing to the economy, but it helped participants converge on a behaviour that is more trustworthy.”

He added that the study reflected monetary exchange in daily life: “Global interaction expands the set of trade opportunities, but it dilutes the level of information about others’ past behaviour. In this sense, one can view tokens in our experiment as a parable for global monetary exchange.”

Posted on September 5, 2013 at 1:57 PM

Violence as a Source of Trust in Criminal Societies

This is interesting:

If I know that you have committed a violent act, and you know that I have committed a violent act, we each have information on each other that we might threaten to use if relations go sour (Schelling notes that one of the most valuable rights in business relations is the right to be sued—this is a functional equivalent).

Abstract of original paper; full paper is behind a paywall.

Posted on July 22, 2013 at 6:36 AM

The Value of Breaking the Law

Interesting essay on the impossibility of being entirely lawful all the time, the balance that results from the difficulty of law enforcement, and the societal value of being able to break the law.

What’s often overlooked, however, is that these legal victories would probably not have been possible without the ability to break the law.

The state of Minnesota, for instance, legalized same-sex marriage this year, but sodomy laws had effectively made homosexuality itself completely illegal in that state until 2001. Likewise, before the recent changes making marijuana legal for personal use in WA and CO, it was obviously not legal for personal use.

Imagine if there were an alternate dystopian reality where law enforcement was 100% effective, such that any potential law offenders knew they would be immediately identified, apprehended, and jailed. If perfect law enforcement had been a reality in MN, CO, and WA since their founding in the 1850s, it seems quite unlikely that these recent changes would have ever come to pass. How could people have decided that marijuana should be legal, if nobody had ever used it? How could states decide that same sex marriage should be permitted, if nobody had ever seen or participated in a same sex relationship?

This is very much like my notion of “outliers” in my book Liars and Outliers.

Posted on July 16, 2013 at 12:35 PM

