Interlude for Behavioral Economics

post by Scott Alexander (Yvain) · 2012-07-06T20:12:51.125Z · LW · GW · Legacy · 53 comments

The so-called “rational” solutions to the Prisoner's Dilemma and Ultimatum Game are suboptimal to say the least. Humans have various kludges added by both nature and nurture to do better, but they're not perfect and they're certainly not simple. They leave entirely open the question of what real people will actually do in these situations, a question which can only be addressed by hard data.

As in so many other areas, our most important information comes from reality television. The Art of Strategy discusses a US game show “Friend or Foe” where a team of two contestants earned money by answering trivia questions. At the end of the show, the team used a sort-of Prisoner's Dilemma to split their winnings: each team member chose “Friend” (cooperate) or “Foe” (defect). If one player cooperated and the other defected, the defector kept 100% of the pot. If both cooperated, each kept 50%. And if both defected, neither kept anything (this is a significant difference from the standard dilemma, where a player is a little better off defecting than cooperating if her opponent defects).
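
To make that difference concrete, here is a minimal sketch of the two payoff rules. The $100 pot and the numbers in the standard matrix are illustrative values of my own, not from the show or the book:

```python
def friend_or_foe_payoff(my_choice, their_choice, pot=100.0):
    """Payoff rule from the show ("C" = Friend, "D" = Foe):
    mutual cooperation splits the pot, a lone defector takes it all,
    and both a betrayed cooperator and mutual defectors get nothing."""
    if my_choice == "C" and their_choice == "C":
        return pot / 2
    if my_choice == "D" and their_choice == "C":
        return pot
    return 0.0  # (C, D) and (D, D) both pay the chooser zero

def standard_pd_payoff(my_choice, their_choice):
    """A conventional Prisoner's Dilemma matrix (illustrative numbers):
    defecting against a defector (1) still beats cooperating against
    one (0), so defection strictly dominates."""
    table = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    return table[(my_choice, their_choice)]
```

In the show's version, "Foe" only weakly dominates "Friend": against a defecting opponent you get nothing either way, which plausibly matters for how real contestants play.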

Players chose “Friend” about 45% of the time. Significantly, this number stayed roughly constant regardless of the size of the pot: players were no more likely to cooperate when splitting small amounts of money than large ones.

Players seemed to want to play “Friend” if and only if they expected their opponents to do so. This is not rational, but it accords with the “Tit-for-Tat” strategy hypothesized to be the evolutionary solution to Prisoner's Dilemma. This played out on the show in a surprising way: players' choices started off essentially random, but as the show went on and contestants who had seen previous episodes began participating, they started basing their decisions on observable characteristics of their opponents. For example, in the first season women cooperated more often than men, so by the second season players cooperated more often when their opponent was a woman - regardless of whether they themselves were men or women.
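
For reference, here is a minimal sketch of the distinction (the function names are mine): classic Tit-for-Tat responds to the opponent's observed previous move in a repeated game, whereas the pattern described here conditions on a prediction of the opponent's current move in a one-shot game.

```python
def tit_for_tat(opponent_history):
    """Classic Tit-for-Tat for an iterated dilemma: cooperate on the
    first round, then copy whatever the opponent did last round."""
    if not opponent_history:
        return "C"
    return opponent_history[-1]

def tit_for_expected_tat(predicted_opponent_move):
    """What contestants seem to do in the one-shot game: cooperate if
    and only if they predict the opponent will cooperate this time."""
    return "C" if predicted_opponent_move == "C" else "D"
```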

Among the superficial characteristics used, the only one to reach statistical significance according to the study was age: players below the median age of 27 played “Foe” more often than those above it (65% vs. 39%, p < .001). Other, nonsignificant tendencies were for men to defect more than women (53% vs. 46%, p = .34) and for black people to defect more than white people (58% vs. 48%, p = .33). These nonsignificant tendencies became important because the players themselves attributed significance to them: for example, by the second season women were playing “Foe” 60% of the time against men but only 45% of the time against women (p < .01), presumably because women were perceived as more likely to play “Friend” back; also during the second season, white people played “Foe” 75% of the time against black people but only 54% of the time against other white people.

(This risks self-fulfilling prophecies. If I am a black man playing a white woman, I expect she will expect me to play “Foe” against her, and she will “reciprocate” by playing “Foe” herself. Therefore, I may choose to “reciprocate” against her by playing “Foe” myself, even if I wasn't originally intending to do so, and other white women might observe this, thus creating a vicious cycle.)

In any case, these attempts at coordinated play worked, but only imperfectly. By the second season, 57% of pairs chose the same option - either (C, C) or (D, D).

Art of Strategy included another great Prisoner's Dilemma experiment. In this one, the experimenters spoiled the game: they told both players that they would be deciding simultaneously, but in fact, they let Player 1 decide first, and then secretly approached Player 2 and told her Player 1's decision, letting Player 2 consider this information when making her own choice.

Why should this be interesting? From the previous data, we know that humans play “tit-for-expected-tat”: they will generally cooperate if they believe their opponent will cooperate too. We can come up with two hypotheses to explain this behavior. First, this could be a folk version of Timeless Decision Theory or Hofstadter's superrationality: a belief that their own decision literally determines their opponent's decision. Second, it could be based on a belief in fairness: if I think my opponent cooperated, it's only decent that I do the same.

The “researchers spoil the setup” experiment can distinguish between these two hypotheses. If people believe their choice determines that of their opponent, then once they know their opponent's choice they no longer have to worry and can freely defect to maximize their own winnings. But if people want to cooperate to reward their opponent, then learning that their opponent cooperated for sure should only increase their willingness to reciprocate.
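
A minimal sketch of what each hypothesis predicts the second player will do in the three possible conditions (my own distillation of the argument above, not something from the source):

```python
# Predicted second-player move under each hypothesis, keyed by what she is
# told about the first player's move ("unknown" = the ordinary, unspoiled game).
predictions = {
    # Folk superrationality: my move and theirs feel linked only while
    # theirs is unknown; once it is revealed, just maximize and defect.
    "folk superrationality": {"unknown": "C", "told C": "D", "told D": "D"},
    # Fairness/reciprocity: reward a known cooperator, punish a known
    # defector; confirmed cooperation should only make cooperating easier.
    "fairness":              {"unknown": "C", "told C": "C", "told D": "D"},
}
```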

The results: If you tell the second player that the first player defected, 3% still cooperate (apparently 3% of people are Jesus). If you tell the second player that the first player cooperated.........only 16% cooperate. When the same researchers in the same lab didn't tell the second player anything, 37% cooperated.

This is a pretty resounding victory for the “folk version of superrationality” hypothesis. 21% of people wouldn't cooperate if they heard their opponent had defected, wouldn't cooperate if they heard their opponent had cooperated, but would cooperate if they didn't know which of those two their opponent had played.
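
The 21% figure is just the gap between the uninformed condition and the told-they-cooperated condition; a back-of-the-envelope version (treating the three conditions as draws from the same population and ignoring sampling error):

```python
coop_told_defected = 0.03    # cooperate even after hearing "opponent defected"
coop_told_cooperated = 0.16  # cooperate after hearing "opponent cooperated"
coop_uninformed = 0.37       # cooperate when told nothing

# Anyone willing to cooperate when the opponent's move is known should at
# least cooperate on hearing "cooperated", so roughly 16% cooperate under
# full information. The extra cooperators in the uninformed condition are
# people who cooperate only while the opponent's move is uncertain.
only_under_uncertainty = coop_uninformed - coop_told_cooperated
print(f"{only_under_uncertainty:.0%}")  # -> 21%
```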

Moving on to the Ultimatum Game: very broadly, the first player usually offers between 30 and 50 percent, and the second player tends to accept. If the first player offers less than about 20 percent, the second player tends to reject it.

As with the Prisoner's Dilemma, the amount of money at stake doesn't seem to matter. This is really surprising! Imagine you played an Ultimatum Game for a billion dollars. The first player proposes $990 million for herself, $10 million for you. On the one hand, this is a 99-1 split, just as unfair as $99 versus $1. On the other hand, ten million dollars!

Although tycoons have yet to donate a billion dollars to use for Ultimatum Game experiments, researchers have done the next best thing and flown out to Third World countries where even $100 can be an impressive amount of money. In games played in Indonesia for a pot worth about a sixth of the average yearly income there, players still rejected unfair offers. In fact, at these levels the first player tended to propose fairer deals than at lower stakes - maybe because it would be a disaster if her offer got rejected.

It was originally believed that results in the Ultimatum Game were mostly independent of culture.  Groups in the US, Israel, Japan, Eastern Europe, and Indonesia all got more or less the same results. But this elegant simplicity was, like so many other things, ruined by the Machiguenga Indians of eastern Peru. They tend to make offers around 25%, and will accept pretty much anything.

One more interesting finding: people who accept low offers in the Ultimatum Game have lower testosterone than those who reject them.

There is a certain degenerate form of the Ultimatum Game called the Dictator Game. In the Dictator Game, the second player doesn't have the option of vetoing the first player's distribution. In fact, the second player doesn't do anything at all; the first player distributes the money, both players receive the amount of money the first player decided upon, and the game ends. A perfectly selfish first player would take 100% of the money in the Dictator Game, leaving the second player with nothing.

In a meta-analysis of 129 papers covering over 41,000 individual games, the average amount the first player gave the second player was 28.35%. 36% of first players took everything, 17% divided the pot equally, and 5% gave everything to the second player, nearly doubling our previous estimate of what percent of people are Jesus.

The meta-analysis checks many different results, most of which are insignificant, but a few stand out. Subjects playing the dictator game “against” a charity are much more generous; up to a quarter give everything. When the experimenter promises to “match” each dollar given away (e.g. the dictator gets $100, but if she gives it to the second player, the second player gets $200), the dictator gives much more (somewhat surprising, as this might be an excuse to keep $66 for yourself and get away with it by claiming that both players still got equal money). On the other hand, if the experimenters give the second player a free $100, so that they start off richer than the dictator, the dictator compensates by not giving them nearly as much money.
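
To spell out the arithmetic behind that parenthetical (a sketch; the $100 pot comes from the example above, the rest is my own working): under dollar-matching, keeping about $66 and giving about $33 is exactly the point where both players end up with the same amount.

```python
pot = 100.0

def payoffs_with_matching(amount_given):
    """Dictator keeps (pot - given); the experimenter matches each dollar
    given away, so the second player receives 2 * given."""
    return pot - amount_given, 2 * amount_given

# Payoffs are equal when pot - x = 2x, i.e. x = pot / 3:
x = pot / 3
print(payoffs_with_matching(x))  # -> roughly (66.67, 66.67): keep ~$66 and
                                 #    truthfully claim both got equal money
```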

Old people give more than young people, and non-students give more than students. People from “primitive” societies give more than people from more developed societies, and the more primitive the society, the stronger the effect.  The most important factor, though? As always, sex. Women both give more and get more in dictator games.

It is somewhat inspiring that so many people give so much in this game, but before we become too excited about the fundamental goodness of humanity, Art of Strategy mentions a great experiment by Dana, Cain, and Dawes. The subjects were offered a choice: either play the Dictator Game with a second player for $10, or take $9, in which case the second subject is sent home and never even learns what the experiment was about. A third of participants took the second option.

So generosity in the Dictator Game isn't always about wanting to help other people. It seems to be about knowing, deep down, that some anonymous person who probably doesn't even know your name and who will never see you again would be disappointed in you. Remove the little problem of the other person knowing what you did, and people will not only keep the money, but even pay the experimenter a dollar to keep the whole thing quiet.

53 comments

Comments sorted by top scores.

comment by Kaj_Sotala · 2012-07-06T12:13:06.128Z · LW(p) · GW(p)

At the end of the show, the team used a sort-of Prisoner's Dilemma to split their winnings: each team member chose “Friend” (cooperate) or “Foe” (defect). If one player cooperated and the other defected, the defector kept 100% of the pot. If both cooperated, each kept 50%. And if both defected, neither kept anything (this is a significant difference from the standard dilemma, where a player is a little better off defecting than cooperating if her opponent defects).

A very interesting episode of a similar game show, Golden Balls (where "Split" = "Friend", "Steal" = "Foe").

As Bruce Schneier comments:

This is the weirdest, most surreal round of "Split or Steal" I have ever seen. The more I think about the psychology of it, the more interesting it is. I'll save my comments for the comments, because I want you to watch it before I say more. Really.

Replies from: tomcatfish, fortyeridania
comment by Alex Vermillion (tomcatfish) · 2020-07-13T07:31:01.238Z · LW(p) · GW(p)

Wow. I also will not give anything away, but I agree that this is an insane round of this game. There are two agents with very different modeling processes trying to achieve the best outcome for themselves, but (I don't know if this applies only to me or to others), unlike a normal PD, we are not a participant so we don't know the processes of any of the agents, which makes it very enjoyable. This round is a testament to something, that is for sure.

comment by fortyeridania · 2012-07-06T15:39:39.475Z · LW(p) · GW(p)

Steve Landsburg also blogged about this show (video clip included).

comment by andreacasalotti · 2012-07-07T11:37:32.048Z · LW(p) · GW(p)

Regarding the Dana, Cain and Dawes experiment, the abstract says: "Over two studies, we found that about one third of participants were willing to exit a $10 dictator game and take $9 instead. " One third is less than "the majority of participants" stated by you. A fisherman's tale?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-07-07T17:45:57.817Z · LW(p) · GW(p)

Good catch. I got the numbers out of Art of Strategy, then searched for the study online. Either it's a slightly different study than the one cited in the book with slightly different results, or I'm transmitting an error from there.

comment by bentarm · 2012-07-06T12:55:09.316Z · LW(p) · GW(p)

Steve Landsburg has an interesting point about versions of the Dictator Game (and several other similar games) in which people have the option to "destroy" some or all of the money if they don't like the offer. In particular, he recently commented on the so-called "Destructor Game" (pdf of paper here).

In this game, participants were given the option of deciding whether or not to take away some of the money that the experimenter had given to some of the other participants. When they chose to do so, the experimenters concluded that they were indulging a taste for destruction. As Landsburg sensibly points out, nothing was actually destroyed in any of these experiments. Money was simply transferred from an anonymous subject back to the experimenter.

The reason I bring this up here is that it seems like as good a place as any to get an answer to the next question - has anyone ever actually done such an experiment in which the goods to be destroyed were actually destroyed? (You can imagine giving everyone candy bars, and giving the participants the option to take someone else's candy bars and throw them into some stinking garbage heap.) My instinct is that Landsburg is right, and people would be less likely to engage in destructive behaviour if they were destroying actual goods instead of just paper money, but I would be interested to see if this has ever actually been studied. Does anyone know?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-07-07T17:48:08.938Z · LW(p) · GW(p)

I've never heard of a "destroy the money" experiment, but the fact that most economists before Landsburg didn't think of this, and I didn't think of it, and my sources didn't think of it, makes me skeptical that the average participant in the Dictator Game is thinking of it.

I'm also reminded of stories about medical malpractice lawsuits, where juries will sometimes award really big sums of money even when they're not sure whether the doctor was guilty, on the grounds that hospitals/clinics/insurers are large faceless institutions and probably have so much money they won't miss a little. I would expect players to treat the researcher (presumably working off grant money from a big research university) the same way juries treat hospitals.

Replies from: JenniferRM
comment by JenniferRM · 2012-07-17T23:51:11.174Z · LW(p) · GW(p)

On the theme of point destruction, there's a reasonably big literature on a variation where players can destroy rewards to reduce the rewards of other players, with further variations in who can see which cooperative acts and in the size of the group. Dunbar's number sometimes makes an appearance. I imagine you've heard of this, and if you haven't hopefully this comment will add it to your arsenal of game theory. It would be awesome if it made an appearance later in your sequence :-)

The general hand-wavy upshot is that for humans (assuming you're in a situation where large scale cooperation and positive externalities are actually possible and valuable to you) the best situation is to be in a large-ish group where people can at least see defectors after the act of defecting, and can also see other people's punishment behavior, and can punish both outright defectors and also "punish non-punishers". So far as I'm aware, you don't need recourse to more recursion than that. You don't have to get totally silly with punishing of non-punishers of non-punishers of defectors. There are elements of the literature here, here, and here, if anyone wants entry points. The first is most accessible :-)

comment by TraderJoe · 2012-07-06T08:11:32.214Z · LW(p) · GW(p)

The results: If you tell the second player that the first player defected, 3% still cooperate (apparently 3% of people are Jesus). If you tell the second player that the first player cooperated.........only 16% cooperate.

Is there really anything exceptional in the 3% figure? 3% of people facing a player who chose "Foe" preferred to transfer money from the game show owners to that player. 97% preferred the game show owners to keep the money. If anything, 3% is below what I would have expected. More surprising [IMO] is the fact that 16% co-operate when they know that it costs them to do so. I have no idea what that 16% were thinking.

Replies from: PeterisP, Vaniver, Alexei, ArisKatsaris, Eneasz
comment by PeterisP · 2012-07-06T22:22:34.147Z · LW(p) · GW(p)

The participants don't know the rules, and have been given a hint that they don't know the rules - the host said that the choices would be independent/hidden, but then tells you the other contestant's choice. So they can easily assume there's a chance that the host is lying, or might then give the first contestant a chance to switch his choice, etc.

Replies from: drnickbone
comment by drnickbone · 2012-07-07T22:01:00.507Z · LW(p) · GW(p)

This is a good catch, and a fair criticism of the "deliberately spoil the experiment" design.

A better design would be to put the contestants in adjacent rooms, but to allow the second contestant to "accidentally" overhear the first (e.g. speaking loudly, through thin walls). Then the experimenter enters the second contestant's room and asks them whether they want to co-operate or defect.

comment by Vaniver · 2012-07-06T22:35:21.008Z · LW(p) · GW(p)

I have no idea what that 16% were thinking.

My guess is those people were willing to pay to reward the other player for cooperating. (That is, they gain psychic value from the other person's gain, and knowing it was the result of their actions.)

comment by Alexei · 2012-07-06T21:46:01.032Z · LW(p) · GW(p)

I have no idea what that 16% were thinking.

I think you can apply TDT of sorts: if I was in the other person's position, I would want them to cooperate. Coupled with the fact that the roles were selected randomly, you could essentially make a precommitment: if another person and I are in this situation, I'll cooperate no matter what. I think that doesn't change your expected value, but it does reduce variance.

Replies from: army1987
comment by A1987dM (army1987) · 2012-07-07T08:01:44.250Z · LW(p) · GW(p)

BTW, lots of LWers said they'd give money to Omega in the Counterfactual mugging.

comment by ArisKatsaris · 2012-07-09T10:02:27.933Z · LW(p) · GW(p)

More surprising [IMO] is the fact that 16% co-operate when they know that it costs them to do so. I have no idea what that 16% were thinking.

I'd be thinking that I'd like to do the honorable/right thing. There exist non-monetary costs in defecting; those include a sense of guilt. That's the difference from a True Prisoner's Dilemma, where you actually prefer defecting if you know the other person cooperated.

Replies from: complexmeme
comment by complexmeme · 2012-07-09T19:35:50.004Z · LW(p) · GW(p)

That last "if you know the other person cooperated" is unnecessary, in a True Prisoner's Dilemma each player prefers defecting in any circumstance.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-07-11T12:09:44.112Z · LW(p) · GW(p)

That last "if you know the other person cooperated" is unnecessary, in a True Prisoner's Dilemma each player prefers defecting in any circumstance.

Not quite: e.g. If you're playing True Prisoner's Dilemma against a copy of yourself, you prefer cooperating, because you know your choice and your copy's choice will be identical, but you don't know what the choice will be before you actually make it.

If you don't know for sure that they'll be identical, but there's some other logical connection that makes it, say, 99% certain they'll be identical (e.g. your copies were not created at that particular moment, but a month ago, and were allowed to read different random books in the meantime), then one would argue you're still better off preferring cooperation.

Replies from: complexmeme
comment by complexmeme · 2012-07-12T15:34:28.555Z · LW(p) · GW(p)

Given the context, I was assuming the scenario being discussed was one where the two players' decisions are independent, and where no one expects they may be playing against themselves.

You're right that the game changes if a player thinks that their choice influences (or, arguably, predicts) their opponent's choice.

comment by Eneasz · 2012-07-09T16:43:49.067Z · LW(p) · GW(p)

I have no idea what that 16% were thinking.

If you were playing against yourself, would you co-operate?

comment by CronoDAS · 2012-07-06T08:29:14.062Z · LW(p) · GW(p)

I wonder what people do in this Ultimatum Game "variant":

Player A and B have a contest of some sort (for example, they might run a race, or play a game of checkers, or whatever), and the winner of the contest gets to be the one who makes the proposal in the Ultimatum Game.

The game theory is the same, the social context is quite different...

Replies from: bentarm, sixes_and_sevens
comment by bentarm · 2012-07-06T13:36:12.637Z · LW(p) · GW(p)

I managed to find this. There is a noticeable tendency for proposers to keep more of the money if they have earned it. This is especially pronounced in the Dictator Game, but also exists in the Ultimatum Game.

comment by sixes_and_sevens · 2012-07-06T09:21:05.796Z · LW(p) · GW(p)

Although I can't recall where I got it from, and Google is failing me, I'm pretty sure there's a body of experimental evidence along these lines, showing that the second player is overwhelmingly more likely to accept an unfair split if the roles are designated in the way you describe.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2012-07-06T16:54:32.442Z · LW(p) · GW(p)

I don't have time to find an example right now, but I have some experience in this field and just want to affirm sixes_and_sevens' assertion.

comment by ScottMessick · 2012-07-06T20:34:16.511Z · LW(p) · GW(p)

But this elegant simplicity was, like so many other things, ruined by the Machiguenga Indians of eastern Peru.

Wait, is this a joke, or have the Machiguenga really provided counterexamples to lots of social science hypotheses?

Replies from: KPier, army1987, ShardPhoenix
comment by KPier · 2012-07-07T04:14:18.816Z · LW(p) · GW(p)

He also says:

As in so many other areas, our most important information comes from reality television.

I'm guessing both are a joke.

Replies from: knb
comment by knb · 2012-07-08T04:24:06.988Z · LW(p) · GW(p)

Yeah, I also took it as a joke.

comment by A1987dM (army1987) · 2012-07-07T07:32:34.001Z · LW(p) · GW(p)

I took the “like so many other things” to only apply to “was ruined”, not to “was ruined by the Machiguenga”...

comment by ShardPhoenix · 2012-07-07T04:42:15.624Z · LW(p) · GW(p)

I think he means that many elegant, simple hypotheses have obscure counterexamples, not that the Machiguenga Indians are typically one of those counterexamples.

Replies from: DaFranker
comment by DaFranker · 2012-07-07T05:15:15.737Z · LW(p) · GW(p)

Is there not already a past sequence/post dealing with the creation of such ambiguities when there are multiple plausible implicit statements inferable from an inexact syntactical construction? I thought I saw something along those lines somewhere yesterday, but I can't seem to find it by just retracing my steps.

Replies from: shokwave
comment by shokwave · 2012-07-07T08:30:32.740Z · LW(p) · GW(p)

I genuinely can't tell if this is intentional.

comment by bentarm · 2012-07-06T12:42:11.889Z · LW(p) · GW(p)

Old people give more than young people, and non-students give more than students

These two could both be explained by rich people giving more than poor people. Is that the case?

So generosity in the Dictator Game doesn't seem to be about wanting to help other people. It seems to be about knowing, deep down, that some anonymous person who probably doesn't even know your name and who will never see you again is disappointed in you.

We hardly need this experiment to know that people don't tend to arbitrarily give each other money for no particular reason - when was the last time you received an anonymous envelope full of cash in the mail?

Replies from: JenniferRM
comment by JenniferRM · 2012-07-18T00:10:24.416Z · LW(p) · GW(p)

I know several people who went through phases of leaving little "prizes" sprinkled around the world in the hopes that random strangers would discover them, collect the prizes, and think better of the world for this reason. I have never personally received anonymous cash in the mail, but it wouldn't entirely surprise me if it happened some day.

Replies from: handoflixue
comment by handoflixue · 2012-07-18T20:04:55.486Z · LW(p) · GW(p)

It seems more likely if people have some way of getting your mailing address without directly asking for it, but I can understand that this would quite possibly have negative consequences too >.>

comment by drnickbone · 2012-07-07T21:17:21.485Z · LW(p) · GW(p)

Players seemed to want to play “Friend” if and only if they expected their opponents to do so. This is not rational, but it accords with the “Tit-for-Tat” strategy hypothesized to be the evolutionary solution to Prisoner's Dilemma.

Same comment as on your previous article in the series. Tit-for-Tat co-operates with a player who co-operated last time, not with a partner that it anticipates will co-operate this time.

It is reputational systems which reward correct prediction (co-operate if and only if you predict that the other player will co-operate this time). That is because the reputational damage from defecting against a co-operator is large: the co-operator gains sympathy; the defector risks punishment or reduced co-operation from other observers. Whereas if a person who is generally known to co-operate defects against another defector, there is generally not a reputational hit (indeed there is probably a slight uplift to reputation for predicting correctly and not letting the defector get away with it).

Super-rational players co-operate if and only if the other player is super-rational. If this was the strategy that humans in fact followed (i.e. there were ways in which super-rational players could reliably recognize each other) then co-operation would be pretty near universal among humans in PDs. But it isn't.

The empirical evidence (from this show, and other studies) is that humans play a reputational strategy rather than pure Tit-for-Tat or super-rational strategy. It appears to be what humans do, and there is a fairly convincing case it is what we're adapted to do.

EDIT: The other evidence you quote in your article is very interesting though:

The results: If you tell the second player that the first player defected, 3% still cooperate (apparently 3% of people are Jesus). If you tell the second player that the first player cooperated.........only 16% cooperate. When the same researchers in the same lab didn't tell the second player anything, 37% cooperated.

That suggests a mixture between reputational and super-rational strategies with a bit of "pure co-operate" thrown in as well. If there were a pure super-rational strategy then no-one would co-operate after hearing for sure that the other player had already co-operated. (This is unless they both knew for sure going into the game that the other player was super-rational; then they could both commit to co-operate regardless; it is equivalent in that case to counterfactual mugging, or to Newcomb with transparent boxes). Whereas if there were a pure reputational strategy, then knowing that the other player had co-operated would increase the probability of co-operating, not reduce it. Interesting.

I'm wondering if there are any game-theory models which predict a mixed equilibrium between super-rational and reputation, and whether the equilibrium allows a small % of "pure co-operators" into the mix as well?

Replies from: Strange7, wedrifid
comment by Strange7 · 2012-07-08T05:18:21.462Z · LW(p) · GW(p)

Pure co-operate can be a reasonable strategy, even with foreknowledge of the opponent's defection in this round, if you think your opponent is playing something close to tit-for-tat and expect to play many more rounds with them.

comment by wedrifid · 2012-07-07T22:26:26.481Z · LW(p) · GW(p)

Same comment as on your previous article in the series. Tit-for-Tat co-operates with a player who co-operated last time, not with a partner that it anticipates will co-operate this time.

Agree again. Yvain is misusing terms and misrepresenting evolutionary strategies. This sequence is vastly overrated.

comment by Giles · 2012-07-10T18:40:45.271Z · LW(p) · GW(p)

Players seemed to want to play “Friend” if and only if they expected their opponents to do so.

Does this mean that a significant fraction of players actually prefer the (C, C) outcome to the (D, C) outcome? What would happen if you pretended the game was PD but if there was a (D, C) result you offered the defector a (secret) chance to change their move to C? Would a lot of them accept that offer?

Actually, I'm not sure whether the extra move needs to be secret or whether it can be announced in the original rules.

Replies from: handoflixue
comment by handoflixue · 2012-07-18T20:07:18.702Z · LW(p) · GW(p)

I may have to test that variant. I occasionally work Prisoner's Dilemma style situations in to my games, as it's a very easy way to learn about the players :)

comment by CronoDAS · 2012-07-06T08:18:05.425Z · LW(p) · GW(p)

So people on "Friend or Foe" turned into CliqueBots?;)

Replies from: JenniferRM, Luke_A_Somers
comment by JenniferRM · 2012-07-17T23:56:51.783Z · LW(p) · GW(p)

For those not catching the reference: CliqueBots

comment by Luke_A_Somers · 2012-07-06T16:21:18.179Z · LW(p) · GW(p)

Kind of, with some of the cliques being self-destructive.

comment by AlexMennen · 2012-07-06T02:59:46.359Z · LW(p) · GW(p)

by the second season women were playing “Foe” 60% of the time against women but only 45% of the time against men (p<.01) presumably because women were perceived to be more likely to play “Friend” back

This looks backwards.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-07-06T03:06:54.754Z · LW(p) · GW(p)

Thanks, fixed.

comment by Martin Čelko (martin-celko) · 2022-08-12T21:46:00.161Z · LW(p) · GW(p)

All this seems to suggest that in competitive games people aim to get as much money as possible in every decision?

The more "primitive" people just don't know the value of money. 

It's like giving candy to someone who has very little utility for it.

Cooperation suggests merely that some people might have more built up tolerance for loss. 

It does not seem to indicate any lack of greed. 

comment by name99 · 2021-03-05T17:54:02.049Z · LW(p) · GW(p)

"5% give everything to the second player, nearly doubling our previous estimate of what percent of people are Jesus"

I wonder how much "windfall" or similar circumstances around the money change how one responds. In my recent history I've had two windfall gains, one an inheritance and one the money that was being handed out as the "everyone gets a check" part of covid relief. In both cases I was happy to give the money to family who needed it more than me. 

I raise this because I don't think of myself as Jesus (not even by Scott's fairly undemanding tithing / rational altruism standards). I think the dispositive thing was really that this was windfall money; I don't think of "normal revenue" money in the same way. Would I consider money won while playing one of these games as windfall or as earned? I suspect it might be very fragile to the precise framing of the experiment...

comment by Giles · 2012-07-10T18:28:49.431Z · LW(p) · GW(p)

the dictator gives much more (somewhat surprising, as this might be an excuse to keep $66 for yourself and get away with it by claiming that both players still got equal money)

I don't think it's surprising - the modified version of the game increases the amount of fuzziness that each dollar buys, but doesn't increase the pain associated with spending that dollar. So the player will spend more dollars before the pain overtakes the fuzzy.

Thanks for the great write-up.

comment by johnswentworth · 2012-07-06T23:11:48.976Z · LW(p) · GW(p)

I don't think the "spoil the setup" experiment distinguishes TDT from the belief in fairness. Just because the second person's decision comes after the first doesn't mean it has no effect on the first. It's very much like Newcomb's problem in that regard, and one of the main points of TDT was to account for that effect. Depending on the details of the rewards and how strongly you think the other player's decisions correlate with your own, it may make sense to precommit to cooperation even if you're told the other person's choice. And if it makes sense to precommit to cooperation, that's what TDT will do (unless I'm missing something).

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-07-07T17:46:57.188Z · LW(p) · GW(p)

Do you have an alternate explanation for why so many fewer people cooperated in the "spoil the setup" experiment than in ordinary experiments?

Replies from: johnswentworth
comment by johnswentworth · 2012-07-08T02:01:08.462Z · LW(p) · GW(p)

The superrationality explanation still makes sense. If the other player's choice is known, then symmetry is broken, so the superrational agent should defect.

Other than that, I'm not really sure what you mean by "explanation". The "folk version of superrationality" sounds plausible, but the underlying causes of the experimental results still feel pretty mysterious. Demystifying them is well beyond my capability, but it's certainly an interesting question.

comment by beoShaffer · 2012-07-06T03:19:56.379Z · LW(p) · GW(p)

As with the previous entries in the sequence, I like the article but strongly suggest that you add links between sequence entries.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-07-06T03:33:08.718Z · LW(p) · GW(p)

Thanks. I'll do that when I'm done with the whole thing, so that I don't have to keep going back and adding new "Next In Sequence" posts when I post new articles.

Replies from: beoShaffer
comment by beoShaffer · 2012-07-06T03:52:19.720Z · LW(p) · GW(p)

That works.

comment by Lysandre Terrisse · 2024-10-27T10:14:37.058Z · LW(p) · GW(p)

People from “primitive” societies give more than people from more developed societies, and the more primitive the society, the stronger the effect.

 

I am not sure what you intended to say here, but the word "primitive" definitely looks like a red flag. As I don't think I am the only one to believe this, I would ask you to please change the wording or delete this sentence.