Psychological Model of Selfishness

This is interesting:

Decision-making in game theory is based entirely on reason, but humans don't always behave rationally. David Rand, assistant professor of psychology, economics, cognitive science, and management at Yale University, and psychology doctoral student Adam Bear incorporated theories of intuition into their model, allowing agents to make decisions either by instinct or by rational deliberation.

In the model, there are multiple games of prisoner's dilemma. But while some have the standard set-up, others introduce punishment for those who refuse to cooperate with a willing partner. Rand and Bear found that agents who went through many games with repercussions for selfishness became instinctively cooperative, though they could override their instinct to behave selfishly in cases where it made sense to do so.

However, those who became instinctively selfish were far less flexible. Even in situations where refusing to cooperate was punished, they would not then deliberate and rationally choose to cooperate instead.
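A minimal sketch of how punishment can flip the one-shot incentive described above. The payoff values and the `fine` parameter are illustrative assumptions, not the paper's actual numbers:

```python
# One-shot prisoner's dilemma payoffs to the row player (illustrative values):
# both cooperate: 2; cooperate vs. defector: -1; defect vs. cooperator: 3; both defect: 0.
PAYOFF = {("C", "C"): 2, ("C", "D"): -1, ("D", "C"): 3, ("D", "D"): 0}

def best_response(partner_action, fine=0.0):
    """Best action against a given partner, when defecting against a willing
    cooperator is punished by subtracting `fine` from the defector's payoff."""
    def score(action):
        p = PAYOFF[(action, partner_action)]
        if action == "D" and partner_action == "C":
            p -= fine  # punishment only targets defection against a cooperator
        return p
    return max(("C", "D"), key=score)

# Without punishment, defection dominates; with a large enough fine,
# cooperating with a willing partner becomes the rational choice.
print(best_response("C", fine=0.0))  # -> D
print(best_response("C", fine=2.0))  # -> C
```

With `fine=0` the defector earns 3 against a cooperator's 2, so defection wins; once the fine exceeds the temptation gap (3 - 2 = 1), cooperation is the better reply, which is the pressure that makes agents in the repercussion games instinctively cooperative.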

The paper:

Abstract: Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation's proximate cognitive underpinnings using a dual-process framework: Is deliberative self-control necessary to rein in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game-theoretic model of the evolution of cooperation. Agents play prisoner's dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes-conflicting empirical results, and shed light on the nature of human cognition and social decision making.
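The abstract's setup can be sketched as a toy expected-payoff calculation. This is a simplified stand-in for Bear and Rand's model, not their actual implementation: `p` is the probability a game is reciprocal, `b` and `c` the benefit and cost of cooperation, `d` a fixed (rather than stochastic) deliberation cost, and reciprocal games are assumed to pay off only when both parties cooperate:

```python
def fitness(strategy, x_dual, p, b=4.0, c=1.0, d=0.2):
    """Expected payoff of a strategy in a population where a fraction
    x_dual are dual-process cooperators and the rest intuitive defectors.

    "ID": intuitive defector -- always defects, never pays to deliberate.
    "DC": dual-process agent -- intuitively cooperates in reciprocal games,
          pays d to deliberate and defect in one-shot games.
    Simplification: a reciprocal game pays b - c to each player only if
    both cooperate, and 0 otherwise (cooperation breaks down).
    """
    def pair_payoff(me, other):
        reciprocal = b - c if me == "DC" and other == "DC" else 0.0
        one_shot = -d if me == "DC" else 0.0  # everyone defects; DC paid to check
        return p * reciprocal + (1 - p) * one_shot

    return (x_dual * pair_payoff(strategy, "DC")
            + (1 - x_dual) * pair_payoff(strategy, "ID"))

# High reciprocity: dual-process cooperators outearn intuitive defectors...
print(fitness("DC", x_dual=1.0, p=0.9) > fitness("ID", x_dual=1.0, p=0.9))  # True
# ...but with no repeated interaction, deliberation is pure cost and defectors win.
print(fitness("DC", x_dual=1.0, p=0.0) < fitness("ID", x_dual=1.0, p=0.0))  # True
```

Even this crude version reproduces the abstract's dichotomy: selection can favor dual-process cooperators only when reciprocal interactions are frequent enough that the deliberation cost pays for itself, and it never favors agents who deliberate in order to cooperate, since deliberation here only ever converts cooperation into defection.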

Very much in line with what I wrote in Liars and Outliers.

Posted on January 28, 2016 at 6:18 AM • 8 Comments

Comments

keiner • January 28, 2016 6:40 AM

...can't wait for the day this groundbreaking knowledge is introduced into high-frequency trading algorithms and the stock exchanges pull the power plugs to punish antisocial buy/sell behaviour....

paul • January 28, 2016 9:02 AM

The two named strategies complement each other. Since the only time that the intuitive cooperator changes its stripes is when it defects, the intuitive defector makes sense. Unless the payoff matrix gets radically changed.

David Leppik • January 28, 2016 1:48 PM

Presumably this would have co-evolved with the desire to punish selfishness. Punishment (which can be as subtle as a nasty look) is how we communicate which social norms people actually care about, versus the ones we merely give lip service to.

Jesse Thompson • January 28, 2016 4:05 PM

This strikes at the meat of a moral code I put together in 2004.

The problem with terms (that I will call "misnomers") like "selfishness" and "selflessness" is that nobody even takes the time to define what "Self" means.

So *almost* everyone winds up presuming it means the INSTANT gratification of the individual subject animal. You are either optimizing for a snapshot of positive emotional state of this animal, or if you ever "sacrifice" an instant hit of gratification with any justification whatever then you are called ugly words like "selfless".

In my view, "Self" is a lot more complicated and thankfully much more inclusive as a result.

Put simply, Self is a hierarchy of cooperating super-organisms. From the molecules that make up a protein, to the proteins that make up a cell, to the cells that make up an organ, to the organs that make up a body, to the bodies who overlap to constitute human cultures and organizations, from your household to your extended family to your neighborhood to your church to your gaming guild to your political party to your country to your species to the entire ecosystem.

The "primate animal" from the first illustration either exists as a super-group of smaller elements (limbs, organs, etc.) or as a member of larger ones (book club, romantic relationship, bowling league). But when you stop myopically optimizing for the short-term well-being of the animal, and begin optimizing for the balanced well-being of the animal's parts and the animal's larger associations, then, in my view, *absolutely all* of what we intuit as morality comes to heel at our feet.

"Karma" is largely encapsulated in the fact that the positive fate of people like you (same family in preference to different, same nation in preference to different, same species in preference to different, while race and sex are demonstrably false out-groupings), or of people who share an association with you, improves the positive fate of the shared organization itself, which translates into positive outcomes for you directly. It may also improve your reputation in the eyes of any who witness the event, which works further to improve outcomes for you directly.

This is even biomimetic of the apparently "selfless" behavior of many cells and organs in your body, which will often put themselves at severe risk in order to maintain the body's own safety.

And finally, it dovetails with a different element of my code which measures identity as limited and defined not by a person's body, nor even by their memories, but by their *intentions and desires*. According to my code, you actually ARE defined by what change you want to see in the world (this helps Gandhi's quote directly result in self-actualization, btw.. ;3). This helps explain how easily you can harmonize with your human associations, because by SHARING goals you also directly share essence of being.

And finally, this lends a direct call to immortalization through what are often inaccurately called "selfless" acts of heroism. Risking or even sacrificing your own continued existence can be justified in any case where it is clear that the absolute most powerful way you can see your will be done is via that sacrifice.

It also offers an exact definition for love (unidirectional, romantic or platonic, one to one or one to many), as nothing more than adopting the desires and goals (and thus identity) of the person you love in among your own desires and goals with preferentially high rank.

I feel as though all of these descriptions of self, motivation and cooperation are put in far simpler terms, and terms far easier to emulate in a computer, than I had initially found them in the philosophical descriptions of other people. I wouldn't mind seeing some research done on the back of my philosophy, though, to find out how neural networks mindful of the well-being of those with harmonizing intentions would fare in tests such as prisoner's dilemma and others.

DY • January 29, 2016 11:24 AM

Isn't this just the same old distinction between "hot" and "cold" cognition? Please see Robert Frank's results and papers.

RadioStar • January 29, 2016 12:46 PM

We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers.

There have been many studies along these lines, but this is a very good one, as it attempts to focus on rationality engaged with empathy, versus operating by pure "instinct" or "intuition"...

I see a few problems with the study, however, some of which are relatively common to many such studies:

The common factors

1. Human beings are incredibly complex, so creating a truly meaningful data set for comparison in such studies is very difficult to do realistically.

Here I am specifically referring to the complexity of human psychological and emotional makeups, which bears directly on people's capacity to operate by conscious reasoning as opposed to pure intuition, or instinct.

2. These are deeply subjective areas of testing, far more so than I believe is typically taken into account, or even understood by anyone today. That subjectivity absolutely and deeply impacts the results of such studies.

3. The terminology and processes involved in the study are themselves very poorly understood, partly because of the immense difficulty of subjectivity (which certainly includes expectations affecting outcomes, among many other key factors), but also because these are fluid, debatable, and poorly defined terms, as is our understanding of the underlying processes. That said, this study did largely attempt to keep those factors relatively confined.


Specific problems & comments with the study:

[NOT that this is a poor study, nor a study which has produced no meaningful data, at all.]

1. I would hazard a guess here :-) that ordinarily people are not very conscious, and by that I specifically mean that they do not ordinarily consciously deliberate. How dare I present such a guess. However, consider, is it really true that someone who deeply and regularly consciously "deliberates" (reasons) would come to the conclusion that they should only do so for entirely selfish purposes?

...

Recall John Nash, of "A Beautiful Mind," and his coming to an understanding of the "selfish" benefit of cooperation in economic models, for a more entertaining consideration.

More realistically, if you are a person who genuinely is devoted to consciously deliberating your words and actions at a very deep level, then you are inevitably going to come to the conclusion that reasoning and empathy necessarily go hand in hand, because rarely are purely selfish aims brought to productive ends.

And by "productive", I certainly do mean "selfishly productive" there.

In dumb-ass terms: if you are only using your skills of conscious reasoning for temporary selfish gain, then you are not really very deep in conscious reasoning at all, because almost invariably the long-term, wider picture means that cooperation is usually the best path to selfish gain in any economic model.

...

So you are effectively talking here about people who do utilize conscious deliberation, but only on a very shallow level. That this is a common enough element to be found in a sample group is not at all surprising, nor should it be to any reader here.

These are obviously casual assertions. I do not cite studies, nor the reams of anecdotal evidence, but present them as a consideration.

2. It is extremely rare, I would posit, that people come to the mindset of deep conscious deliberation about their everyday words and actions. This is simply not what we see going on. No small part of this is because of the subjectivity issue, which certainly comes into play here and engages enormous bias in such logic. No small part of it is because deep deliberation is so rarely a requirement for anyone: typically people simply need to satisfy very simple goal models of dubious quality ... sex, a roof, food, drugs, personal security, and so on. Praise is a major factor here, but merely involves these other factors. I include "personal security" because the anxieties people deal with are so core to their everyday motivating processes.

The subjectivity angle brings into view all of the known biases discovered and well proven over the past twenty years.

This does not mean I would suggest such reasoning never takes place; it certainly does. But such reasoning typically takes place in extremely isolated scenarios which require longer-term deliberation and longer-term goal satisfaction.

So, for instance, it is routine for architects in all sorts of fields to engage regularly in longer-term models which rely on long-term benefits. But on an everyday selfish level, such long-term planning is transparent and largely driven by unconscious processes and physical desires. This even includes the selfish desire to, for instance, satisfy personal anxieties about stability or obtain meaningful peer praise by producing some long-term "product".

3. The unconscious and conscious processes here are extremely poorly understood, and rarely is it necessary for people to become conscious of those unconscious processes. This "obviously" is deeply tied to points 1 and 2. :-)

Why do you do what you do? Why did you do what you did? Do you really know why? Do you generally even know *what* you did and *what* you said, from a truly objective as opposed to subjective angle? Usually, no. People think they say and do things for one reason, even over the years, when they really say and do it all for another entirely. If it ever comes under conscious light at all, it is a fraud: justification for their own selves, not meaningful, accurate understanding.

Accuracy in self-understanding is not required for daily life.

Right now. In the vast majority of circumstances.

4. Empathy and conscious deliberation are not the same thing. They can be related; ultimately, they would be tied. The reason is that "conscious deliberation" engages many processes; it is not simply doing math or performing a calculation on paper. We put ourselves in scenarios and live them, and that engages deep psychological and emotional processes.

But such engagement is not what you will find with shallow reasoning which obtains short term goals. It is unnecessary for that.

Further, that method human beings use of putting themselves in scenarios engages deep unconscious processes.

(Which is one reason why I can state this: while this is extremely common behavior, it is not behavior which is commonly conscious.)

On a more mechanical level, take the act of practicing tennis. You consciously commit to practice, but you are learning unconsciously as well. Eventually you are engaging immediate reactive impulses, emotional impulses one could certainly call them.

It at least appears as if the conscious trains the unconscious. (Though where does the conscious truly begin, and is it not ultimately like reading a book in the dark with a penlight?)

In activities which engage other human beings, this goes far, far deeper than the mere mechanical motions that tennis requires. It does require true empathic emotion. But getting there and training those responses are something else altogether.

Chris Montano • January 29, 2016 2:31 PM

I found this pretty interesting from the perspective of the implications of these findings for social decision-making. I am not interested in joining the deliberation over the validity of the study. However, I find it interesting to ask, "If this is accurate, what does this look like in application to day-to-day life?"

If I were to apply these findings at face value, I understand the study to imply that:
1. There are "cooperative" and "uncooperative" tendencies that are a product of evolution. This implies that these traits are genetically based and therefore require multi-generational environmental pressure to change (maybe). There is no DNA that responds to "reason."
2. I do not need to enlist the aid of naturally cooperative people. It is their baseline tendency. All I need do is ask.
3. There is no rationale whereby a naturally uncooperative individual will be made to become cooperative. Why waste the energy presenting a well-reasoned and structured argument?
4. The more I argue my case rationally, the more likely the naturally cooperative individuals are to derive a rationale by which they may choose to become uncooperative. Therefore: drive group decisions requiring a high degree of cooperation very quickly. It keeps the cooperators cooperating. Long, drawn-out discussions and negotiations only increase the probability that I will lose cooperation, without any corresponding chance of gaining cooperation from the uncooperative.
5. If I am building a company culture based upon trust and cooperation, identify this tendency rapidly in the interview process and hire accordingly. Do not even consider hiring uncooperative talent in hopes they become cooperative.
6. If for some reason, I find an "uncooperative" has been employed, then find a way to move them out of the company quickly. They will not become cooperative.
7. If a “cooperative” employee has an exception, they remain fundamentally cooperative, so forgive and forget.
8. When conducting agreements in business, cooperation can manifest exceptions over time, and lack of cooperation is unlikely to ever change. Therefore, regardless of a person's tendency, tight contracts will protect against a cooperation exception and also against persistently non-cooperative counterparties. However, during negotiations, quicker is better. The longer I negotiate, the more difficult the uncooperative person will become, and the cooperative person may go into "exception mode." Hell, bag all of this and just use a blockchain contract and it doesn't matter if someone in business is cooperative or not.
9. Pick “cooperative” friends. Stick with them even if they have lapses. That’s part of the friendship deal. Since it is their fundamental tendency, they will likely return to their old selves.
10. Cut loose personal relationships with “uncooperative” and give up hoping for change. (It is not my business to change anyone anyway.)

Coyne Tibbets • January 30, 2016 7:58 AM

IMO

Civilization requires a high degree of cooperation between its component individuals; that is its defining characteristic.

That cooperation necessarily requires altruism, in that the members must sacrifice some of their temporary personal desires for the benefit of the whole. Of course, the payback is high: we would have almost none of the benefits we have today if it weren't for that civilization.

Those who are selfish never learn that lesson. A person who has learned the benefits of altruism can, at need, choose to be temporarily selfish, satisfying their personal desires at the expense of another. But a person who never learned the benefits of altruism cannot conceive of a short-term sacrifice as anything but a permanently unrequited personal loss.

Worse, lacking an understanding of altruism themselves, they cannot imagine that any other person would be altruistic. So even if an attempt is made to teach them the benefits of altruism, they are incapable of understanding that their partner would be altruistic; they expect only betrayal.

(I think this is a source of much criminality.)

I don't think the results of the test should be a surprise to anyone.
