Business Insider: "They Finally Tested The 'Prisoner's Dilemma' On Actual Prisoners — And The Results Were Not What You Would Expect"
post by chaosmage · 2013-07-24T12:44:05.763Z
Article at http://www.businessinsider.com/prisoners-dilemma-in-real-life-2013-7#ixzz2ZxwzT6nj; seems relevant to a lot of the discussion here.
There have been studies suggesting that people who consider themselves relatively successful are less cooperative than people who consider themselves relatively unsuccessful. The study referenced in that article seems to bear this out.
So if you want the other party to cooperate, should you attempt to give that party the impression it has been relatively unsuccessful, at least if that party is human?
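For context on why the result reads as surprising: in the standard one-shot analysis, defection strictly dominates cooperation. Here is a minimal sketch of that dominance argument, using illustrative textbook payoffs (T=5, R=3, P=1, S=0 are assumptions here, not the study's actual incentives):

```python
# One-shot prisoner's dilemma payoffs for the row player (illustrative values,
# satisfying the standard ordering T > R > P > S).
PAYOFF = {
    ("C", "C"): 3,  # both cooperate: reward R
    ("C", "D"): 0,  # you cooperate, they defect: sucker's payoff S
    ("D", "C"): 5,  # you defect, they cooperate: temptation T
    ("D", "D"): 1,  # both defect: punishment P
}

# Whatever the other player does, defecting scores strictly higher...
for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]

# ...yet mutual defection leaves both worse off than mutual cooperation.
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
```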
11 comments
Comments sorted by top scores.
comment by JoshuaZ · 2013-07-24T14:40:39.310Z
Since prison culture often emphasizes punishment and retribution for not cooperating with fellow prisoners, the fact that they often cooperate is not surprising. My guess is that if one did this with people who had just been convicted of crimes but had not yet been in prison for a long time, one would see less cooperation.
comment by Nornagest · 2013-07-24T18:53:49.287Z
I already said this on Facebook, but I might as well paraphrase it here. Social feedback loops in American prisons are supposed to be pretty tight: unless the researchers went to some trouble to blind this after the fact (and it doesn't look like they did), it would be obvious to other inmates whether participants had cooperated or defected. That's especially true in the non-iterated version, where the smaller result space makes individual choices easy to infer, so the nominally one-shot game carries ongoing social consequences, which is the opposite of how it's supposed to work.
On top of that, the social dynamics are a little different from most PD-like problems we encounter in that there's an adversarial relationship baked in: usually you're pitted against someone with whom you have no special kinship, but here we have outsiders setting members of a ready-made insider group against each other. (The original formulation of the problem does have this feature, but it's usually been ignored in analysis.)
comment by Qiaochu_Yuan · 2013-07-24T20:35:23.768Z
Before reading this article, and based only on its title, I predicted (on PredictionBook, with confidence 80%) that the result would be that prisoners cooperated "surprisingly" often, based both on the phrase "not what you would expect" and based on vague general things I guessed about prison culture. Thanks for the calibration exercise!
The calibration exercise aside, I don't think this is particularly relevant to PD discussions on LessWrong, which I thought were more about "true" PDs (e.g. us vs. aliens, us vs. a paperclip maximizer). The incentives in a PD between humans are not easily deducible from the setup of the experiment alone.
comment by Oscar_Cunningham · 2013-07-24T17:11:52.586Z
comment by NancyLebovitz · 2013-07-24T15:41:57.070Z
So if you want the other party to cooperate, should you attempt to give that party the impression it has been relatively unsuccessful, at least if that party is human?
Maybe you should give the impression that you're both relatively unsuccessful.
comment by earthwormchuck163 · 2013-07-24T13:57:36.890Z
So if you want the other party to cooperate, should you attempt to give that party the impression it has been relatively unsuccessful, at least if that party is human?
I don't think so. It seems more likely to me that the common factor between increased defection rate and self-perceived success is more consequentialist thinking. This leads to perceived success via actual success, and to defection via thinking "defection is the dominant strategy, so I'll do that".
comment by telms · 2013-07-30T04:18:34.072Z
It's my understanding that, in a repeated series of PD games, the best strategy in the long run is "tit-for-tat": cooperate by default, but retaliate with defection whenever someone defects against you, and keep defecting until the original defector returns to cooperation mode. Perhaps the prisoners in this case were generalizing a cooperative default from multiple game-like encounters and treating this particular experiment as just one more of these more general interactions?
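A minimal sketch of the strategy telms describes, in an iterated match; the payoff numbers are the same illustrative ones as above, not from any particular tournament:

```python
def tit_for_tat(opponent_moves):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return "D"

# Payoffs as (my score, their score), same illustrative numbers as above.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def match(strat_a, strat_b, rounds=20):
    """Play an iterated PD and return both players' total scores."""
    moves_a, moves_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(moves_b)  # each strategy sees only the opponent's history
        b = strat_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(match(tit_for_tat, tit_for_tat))    # (60, 60): mutual cooperation sustained
print(match(tit_for_tat, always_defect))  # (19, 24): TFT loses only the first round
```

Against a defector, TFT gives up only the first round and then punishes for the rest of the match, which is roughly why it performed so well in Axelrod's tournaments.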
comment by ThisSpaceAvailable · 2013-08-07T01:48:33.532Z
Well, to be precise, researchers found that tit-for-tat was the best given the particular set-up; there's no strategy that is better than every other strategy in every set-up. If everyone has a fixed choice (either "always defect" (AD) or "always cooperate" (AC)), then the best strategy is AD. If there are enough TFT players, however, they will increase each other's scores, and TFT will be more successful than AD. The more iterations there are, the more advantage TFT gives. However, if all of the players are TFT or AC, then AC will be just as good as TFT.

If you have an evolutionary situation between AD, AC, and TFT where complexity is punished, "all TFT" isn't an equilibrium: you'll get mutations to AC, which will out-compete TFT due to its lower complexity, until there are enough AC players that AD becomes viable, at which point TFT will start to have an advantage again. "All AD" is an equilibrium, because once you reach that point AC is inferior, and an incremental increase in TFT due to mutation cannot take hold. If you have all AC, then AD will start to proliferate. If you have AC and AD but no TFT, then eventually AD will take over.
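A rough replicator-dynamics sketch of the population story above, assuming the same illustrative payoffs and omitting the mutation and complexity-cost effects the comment describes (those would need extra machinery):

```python
# Replicator-style sketch of the AC / AD / TFT dynamics described above.
# Payoffs per iterated match of ROUNDS rounds, using the illustrative
# one-shot values T=5, R=3, P=1, S=0 (assumptions, not from the comment).
ROUNDS = 20

def match_payoff(a, b):
    """Row player's total score when strategy a meets strategy b."""
    table = {
        ("AC", "AC"): 3 * ROUNDS, ("AC", "AD"): 0, ("AC", "TFT"): 3 * ROUNDS,
        ("AD", "AC"): 5 * ROUNDS, ("AD", "AD"): 1 * ROUNDS,
        ("AD", "TFT"): 5 + (ROUNDS - 1),  # one exploit, then mutual defection
        ("TFT", "AC"): 3 * ROUNDS, ("TFT", "AD"): 0 + (ROUNDS - 1),
        ("TFT", "TFT"): 3 * ROUNDS,
    }
    return table[(a, b)]

STRATS = ("AC", "AD", "TFT")

def step(shares):
    """One generation of replicator dynamics: reproduce in proportion to fitness."""
    fitness = {s: sum(shares[t] * match_payoff(s, t) for t in STRATS)
               for s in STRATS}
    mean = sum(shares[s] * fitness[s] for s in STRATS)
    return {s: shares[s] * fitness[s] / mean for s in STRATS}

# Start mostly-AD with a cluster of TFT: TFT can invade and take over.
shares = {"AC": 0.0, "AD": 0.8, "TFT": 0.2}
for _ in range(100):
    shares = step(shares)
print({s: round(shares[s], 3) for s in STRATS})
```

With these numbers, a TFT share above roughly 3% is enough to invade a population of defectors; shorter matches raise that threshold, illustrating the comment's point that no strategy wins in every set-up.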
comment by BlueSun · 2013-07-29T21:28:11.657Z
Tim Harford has some relevant comments:
What does that tell us – that prisoners take care of each other? Or that they fear reprisals?
Probably not reprisals: they were promised anonymity. It’s really not clear what this result tells us. We knew already that people often co-operate, contradicting the theoretical prediction. We also know, for instance, that economics students co-operate more rarely than non-economists – perhaps because they’ve been socialised to be selfish people, or perhaps because they just understand the dilemma better.
You think the prisoners just didn’t understand the nature of the dilemma?
That’s possible. The students were much better educated and most students had played laboratory games before. Maybe the prisoners co-operated because they were too confused to betray each other.
That seems speculative.
It is speculative, but consider this: the researchers also looked at a variant game in which one player has to decide whether to stay silent or confess, and then the other player decides how to respond. If you play first in this game you would be well-advised to stay silent, because people typically reward you for that. In this sequential game, it was the students, not the prisoners, who were more likely to co-operate with each other by staying silent. So the students were just as co-operative as prisoners but their choice of when to co-operate with each other made more logical sense.
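To put rough numbers on Harford's sequential-game point: with the illustrative payoffs used above, staying silent first pays off whenever the second mover reciprocates often enough. The reciprocation rates below are made-up parameters, not figures from the study, and the sketch assumes a second mover who answers confession with confession:

```python
# Sequential game from the first mover's point of view, illustrative payoffs:
# R = 3 (silence reciprocated), S = 0 (silence exploited),
# P = 1 (confess first, second mover confesses back).
R, S, P = 3, 0, 1

def expected_if_silent(p_reciprocate):
    """Expected payoff of staying silent first."""
    return p_reciprocate * R + (1 - p_reciprocate) * S

# Break-even here is p = 1/3: above that, silence-first beats confessing.
for p in (0.2, 0.5, 0.8):  # hypothetical reciprocation rates
    print(f"p={p}: silent-first={expected_if_silent(p):.2f} vs confess-first={P}")
```

The study's actual finding is about *who* reciprocates (students did, in the sequential variant), which this sketch simply takes as a free parameter.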