Entries Tagged "Liars and Outliers"


Cybercrime as a Tax on the Internet Economy

I was reading this 2014 McAfee report on the economic impact of cybercrime, and came across this interesting quote on how security is a tax on the Internet economy:

Another way to look at the opportunity cost of cybercrime is to see it as a share of the Internet economy. Studies estimate that the Internet economy annually generates between $2 trillion and $3 trillion, a share of the global economy that is expected to grow rapidly. If our estimates are right, cybercrime extracts between 15% and 20% of the value created by the Internet, a heavy tax on the potential for economic growth and job creation and a share of revenue that is significantly larger than any other transnational criminal activity.
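As a quick back-of-the-envelope check of the quote's arithmetic (my calculation, not the report's), 15–20% of a $2–3 trillion Internet economy implies:

```python
# Implied annual cybercrime cost range from the McAfee quote:
# 15% of the low estimate ($2T) up to 20% of the high estimate ($3T).
low = 0.15 * 2e12    # ~$300 billion
high = 0.20 * 3e12   # ~$600 billion
print(f"implied annual cybercrime cost: ${low/1e9:.0f}B to ${high/1e9:.0f}B")
```

A $300-600 billion range, which is exactly why the absolute numbers in these reports attract so much argument.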

Of course you can argue with the numbers, and there’s good reason to believe that the actual costs of cybercrime are much lower. And, of course, those costs are largely indirect costs. It’s not that cybercriminals are getting away with all that value; it’s largely spent on security products and services from companies like McAfee (and my own IBM Security).

In Liars and Outliers I talk about security as a tax on the honest.

Posted on September 1, 2016 at 9:49 AM

Prisoner's Dilemma Experiment Illustrates Four Basic Phenotypes

If you’ve read my book Liars and Outliers, you know I like the prisoner’s dilemma as a way to think about trust and security. There is an enormous amount of research—both theoretical and experimental—about the dilemma, which is why I found this new research so interesting. Here’s a decent summary:

The question is not just how people play these games—there are hundreds of research papers on that—but instead whether people fall into behavioral types that explain their behavior across different games. Using standard statistical methods, the researchers identified four such player types: optimists (20 percent), who always go for the highest payoff, hoping the other player will coordinate to achieve that goal; pessimists (30 percent), who act according to the opposite assumption; the envious (21 percent), who try to score more points than their partners; and the trustful (17 percent), who always cooperate. The remaining 12 percent appeared to make their choices completely at random.
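The decision rules behind these four types can be sketched in code. This is an illustrative toy—the payoff values, function names, and tie-breaking are my assumptions, not the study's methodology—applied to a standard prisoner's dilemma payoff table:

```python
# Toy sketch of the four phenotypes' decision rules (my illustration, not
# the study's code). Actions: 0 = cooperate, 1 = defect.
# PAYOFF[me][partner] = my points, with classic prisoner's dilemma values.
import random

PAYOFF = [[3, 0],   # I cooperate: partner cooperates -> 3, defects -> 0
          [5, 1]]   # I defect:    partner cooperates -> 5, defects -> 1

def choose(phenotype):
    """Return 0 (cooperate) or 1 (defect) for a given behavioral type."""
    if phenotype == "optimist":    # chases the highest payoff on the table
        return max((PAYOFF[a][b], a) for a in (0, 1) for b in (0, 1))[1]
    if phenotype == "pessimist":   # maximizes the worst case (maximin)
        return max((min(PAYOFF[a]), a) for a in (0, 1))[1]
    if phenotype == "envious":     # maximizes my payoff minus my partner's
        return max((PAYOFF[a][b] - PAYOFF[b][a], a)
                   for a in (0, 1) for b in (0, 1))[1]
    if phenotype == "trustful":    # always cooperates
        return 0
    return random.choice((0, 1))   # the residual random type

for p in ("optimist", "pessimist", "envious", "trustful"):
    print(p, "->", "cooperate" if choose(p) == 0 else "defect")
```

Note that in a strict prisoner's dilemma every self-regarding rule defects—defection dominates—which is why the researchers had to run many different game types to tell the phenotypes apart.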

Posted on August 18, 2016 at 5:36 AM

How Altruism Might Have Evolved

I spend a lot of time in my book Liars and Outliers on cooperating versus defecting. Cooperating is good for the group at the expense of the individual. Defecting is good for the individual at the expense of the group. Given that evolution concerns individuals, there has been a lot of controversy over how altruism might have evolved.

Here’s one possible answer: it’s favored by chance:

The key insight is that the total size of population that can be supported depends on the proportion of cooperators: more cooperation means more food for all and a larger population. If, due to chance, there is a random increase in the number of cheats then there is not enough food to go around and total population size will decrease. Conversely, a random decrease in the number of cheats will allow the population to grow to a larger size, disproportionally benefitting the cooperators. In this way, the cooperators are favoured by chance, and are more likely to win in the long term.

Dr George Constable, soon to join the University of Bath from Princeton, uses the analogy of flipping a coin, where heads wins £20 but tails loses £10:

“Although the odds [of] winning or losing are the same, winning is more good than losing is bad. Random fluctuations in cheat numbers are exploited by the cooperators, who benefit more than they lose out.”
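The arithmetic behind the analogy is easy to check (this is my illustration of the coin flip, not Constable's model): even odds with asymmetric stakes give a positive expected value, which is the sense in which random fluctuations favor the cooperators.

```python
# Constable's coin-flip analogy: heads wins £20, tails loses £10.
# The odds are even, but winning outweighs losing.
import random

expected = 0.5 * 20 + 0.5 * (-10)   # = +5 per flip
random.seed(1)
trials = 100_000
avg = sum(20 if random.random() < 0.5 else -10 for _ in range(trials)) / trials
print(f"expected value per flip: {expected:+.0f}")
print(f"simulated average over {trials} flips: {avg:+.2f}")
```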

EDITED TO ADD (8/12): Journal article.

Related article.

Posted on July 29, 2016 at 12:23 PM

Psychological Model of Selfishness

This is interesting:

Game theory decision-making is based entirely on reason, but humans don’t always behave rationally. David Rand, assistant professor of psychology, economics, cognitive science, and management at Yale University, and psychology doctoral student Adam Bear incorporated theories on intuition into their model, allowing agents to make a decision either based on instinct or rational deliberation.

In the model, there are multiple games of prisoner’s dilemma. But while some have the standard set-up, others introduce punishment for those who refuse to cooperate with a willing partner. Rand and Bear found that agents who went through many games with repercussions for selfishness became instinctively cooperative, though they could override their instinct to behave selfishly in cases where it made sense to do so.

However, those who became instinctively selfish were far less flexible. Even in situations where refusing to cooperate was punished, they would not then deliberate and rationally choose to cooperate instead.

The paper:

Abstract: Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation’s proximate cognitive underpinnings using a dual-process framework: Is deliberative self-control necessary to rein in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game-theoretic model of the evolution of cooperation. Agents play prisoner’s dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes-conflicting empirical results, and shed light on the nature of human cognition and social decision making.
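The agent the abstract describes can be sketched very simply. This is my own simplification, not the paper's code: each agent carries a generalized intuition plus a deliberation-cost threshold, deliberates only when the randomly drawn cost of thinking falls below that threshold, and only then tailors its play to whether the game is one-shot or repeated.

```python
# Minimal sketch of a dual-process agent (my simplification of the model
# described in the abstract, not Bear and Rand's actual implementation).
import random

def play(intuition, cost_threshold, game_is_repeated, rng):
    """Return 'cooperate' or 'defect' for one game.

    intuition:      the agent's default action when it doesn't deliberate
    cost_threshold: the maximum deliberation cost the agent will pay
    """
    deliberation_cost = rng.random()          # stochastically varying cost
    if deliberation_cost <= cost_threshold:   # cheap enough to deliberate
        # Deliberation is sensitive to the game type: cooperate when
        # reciprocity makes it pay, defect in anonymous one-shot games.
        return "cooperate" if game_is_repeated else "defect"
    return intuition                          # fall back on the gut response

rng = random.Random(0)
# An "intuitive cooperator" that sometimes deliberates its way to defection:
print(play("cooperate", 0.5, game_is_repeated=False, rng=rng))
```

The paper's key result falls out of which (intuition, threshold) pairs selection favors: intuitive defectors who never pay to deliberate, or intuitive cooperators who occasionally deliberate their way to defection in one-shot games.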

Very much in line with what I wrote in Liars and Outliers.

Posted on January 28, 2016 at 6:18 AM

The Value of Breaking the Law

Interesting essay on the impossibility of being entirely lawful all the time, the balance that results from the difficulty of law enforcement, and the societal value of being able to break the law.

What’s often overlooked, however, is that these legal victories would probably not have been possible without the ability to break the law.

The state of Minnesota, for instance, legalized same-sex marriage this year, but sodomy laws had effectively made homosexuality itself completely illegal in that state until 2001. Likewise, before the recent changes making marijuana legal for personal use in WA and CO, it was obviously not legal for personal use.

Imagine if there were an alternate dystopian reality where law enforcement was 100% effective, such that any potential law offenders knew they would be immediately identified, apprehended, and jailed. If perfect law enforcement had been a reality in MN, CO, and WA since their founding in the 1850s, it seems quite unlikely that these recent changes would have ever come to pass. How could people have decided that marijuana should be legal, if nobody had ever used it? How could states decide that same sex marriage should be permitted, if nobody had ever seen or participated in a same sex relationship?

This is very much like my notion of “outliers” in my book Liars and Outliers.

Posted on July 16, 2013 at 12:35 PM

Me at the RSA Conference

I’ll be speaking twice at the RSA Conference this year. I’m giving a solo talk Tuesday at 1:00, and participating in a debate about training Wednesday at noon. This is a short written preview of my solo talk, and this is an audio interview on the topic.

Additionally: Akamai is giving away 1,500 copies of Liars and Outliers, and Zscaler is giving away 300 copies of Schneier on Security. I’ll be doing book signings in both of those companies’ booths and at the conference bookstore.

Posted on February 25, 2013 at 1:49 PM

Experimental Results: Liars and Outliers Trust Offer

Last August, I offered to sell Liars and Outliers for $11 in exchange for a book review. This was much less than the $30 list price; less even than the $16 Amazon price. For readers outside the U.S., where books can be very expensive, it was a great price.

I sold 800 books from this offer—many more than the few hundred I originally intended—to people all over the world. It was the end of September before I mailed them all out, and probably a couple of weeks later before everyone received their copy. Now, three months after that, it’s interesting to count up the number of reviews I received from the offer.

That’s not a trivial task. I asked people to e-mail me URLs for their review, but not everyone did. But counting the independent reviews, the Amazon reviews, and the Goodreads reviews from the time period, and making some reasonable assumptions, about 70 people fulfilled their end of the bargain and reviewed my book.

That’s 9%.

There were some outliers. One person wrote to tell me that he didn’t like the book, and offered not to publish a review despite the agreement. Another two e-mailed me to offer to return the price difference (I declined).

Perhaps people have been busier than they expected—and haven’t gotten around to reading the book and writing a review yet. I know my reading is often delayed by more pressing priorities. And although I didn’t put any deadline on when the review should be completed, I received a surge of reviews around the end of the year—probably because some people self-imposed a deadline. What is certain is that a great majority of people decided not to uphold their end of the bargain.

The original offer was an exercise in trust. But to use the language of the book, the only thing inducing compliance was the morals of the reader. I suppose I could have collected everyone’s names, checked off those who wrote reviews, and tried shaming the rest—but that seems like a lot of work. Perhaps this public nudge will be enough to convince some more people to write reviews.

EDITED TO ADD (1/11): I never intended to make people feel bad with this post. I know that some people are busy, and that reading an entire book is a large time commitment (especially in our ever-shortened-attention-span era). I can see how this post could be read as an attempt to shame, but—really—that was not my intention.

EDITED TO ADD (1/22): Some comments.

Posted on January 11, 2013 at 8:10 AM

