Fairness vs. Goodness
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-22T20:22:00.000Z
It seems that back when the Prisoner's Dilemma was still being worked out, Merrill Flood and Melvin Dresher tried a 100-round iterated PD on two smart but unprepared subjects, Armen Alchian of UCLA and John D. Williams of RAND.
The kicker being that the payoff matrix was asymmetric, with mutual cooperation awarding JW twice as many points as AA:
| (AA, JW) | JW: D | JW: C |
|----------|----------|----------|
| AA: D | (0, 0.5) | (1, -1) |
| AA: C | (-1, 2) | (0.5, 1) |
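To make the asymmetry concrete, here is a minimal Python sketch of the payoff bookkeeping (the strategies below are hypothetical illustrations, not the players' actual move sequences):

```python
# Payoffs from the matrix above, keyed by (AA's move, JW's move).
PAYOFFS = {
    ("D", "D"): (0, 0.5),
    ("D", "C"): (1, -1),
    ("C", "D"): (-1, 2),
    ("C", "C"): (0.5, 1),
}

def totals(aa_moves, jw_moves):
    """Sum each player's points over a sequence of paired moves."""
    aa = sum(PAYOFFS[(a, j)][0] for a, j in zip(aa_moves, jw_moves))
    jw = sum(PAYOFFS[(a, j)][1] for a, j in zip(aa_moves, jw_moves))
    return aa, jw

ROUNDS = 100

# Pure mutual cooperation: JW ends with twice AA's score.
print(totals("C" * ROUNDS, "C" * ROUNDS))  # (50.0, 100)

# AA's 'fairness' logic: if AA defects every fifth round while JW
# keeps cooperating, the totals come out exactly even.
aa = "".join("D" if i % 5 == 4 else "C" for i in range(ROUNDS))
print(totals(aa, "C" * ROUNDS))  # (60.0, 60)
```

Note that the even split requires JW to keep playing C while AA skims points, which is exactly what JW refused to do.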
The resulting 100 iterations, with a log of comments written by both players, make for fascinating reading.
JW spots the possibilities of cooperation right away, while AA is slower to catch on.
But once AA does catch on to the possibilities of cooperation, AA goes on throwing in an occasional D... because AA thinks the natural meeting point for cooperation is a fair outcome, where both players get around the same number of total points.
JW goes on trying to enforce (C, C) - the option that maximizes total utility for both players - by punishing AA's attempts at defection. JW's log shows comments like "He's crazy. I'll teach him the hard way."
Meanwhile, AA's log shows comments such as "He won't share. He'll punish me for trying!"
I confess that my own sympathies lie with JW, and I don't think I would have played AA's game in AA's shoes. This would seem to indicate that I'm more of a utilitarian than a fair-i-tarian. Life doesn't always hand you fair games, and the best we can do for each other is play them positive-sum.
Though I might have been somewhat more sympathetic to AA, if the (C, C) outcome had actually lost him points, and only (D, C) had made it possible for him to gain them back. For example, this is also a Prisoner's Dilemma:
| (AA, JW) | JW: D | JW: C |
|----------|----------|----------|
| AA: D | (-2, 2) | (2, 0) |
| AA: C | (-5, 6) | (-1, 4) |
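A quick check of the claim, in the same style as the sketch above (again a hypothetical illustration, not anything the subjects played):

```python
# Payoffs for the modified game, keyed by (AA's move, JW's move).
PAYOFFS2 = {
    ("D", "D"): (-2, 2),
    ("D", "C"): (2, 0),
    ("C", "D"): (-5, 6),
    ("C", "C"): (-1, 4),
}

# Over 100 rounds of mutual cooperation, AA sinks to -100 while JW
# climbs to +400; AA actively loses points by cooperating.
print(tuple(100 * p for p in PAYOFFS2[("C", "C")]))  # (-100, 400)

# Each (D, C) round nets AA +2 versus -1 for a (C, C) round, so AA
# must defect at least one round in three against a cooperating JW
# just to break even.
```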
Theoretically, of course, utility functions are invariant up to positive affine transformation, so a utility's absolute sign is not meaningful. But this is not always a good metaphor for real life.
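For concreteness, the invariance in question is the standard one: for any constants $a > 0$ and $b$,

$$u_i'(x) = a\,u_i(x) + b$$

represents the same preferences as $u_i$, and choosing $b$ freely can shift any cell of the matrix to either side of zero without changing the game.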
Of course what we want in this case, societally speaking, is for JW to slip AA a bribe under the table. That way we can maximize social utility while letting AA go on making a profit. But if AA starts out with a negative number in (C, C), how much do we want AA to demand in bribes - from our global, societal perspective?
The whole affair makes for an interesting reminder of the different worldviews that people invent for themselves - seeming so natural and uniquely obvious from the inside - to make themselves the heroes of their own stories.
21 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Vladimir_Nesov · 2009-02-22T20:32:11.000Z · LW(p) · GW(p)
You are missing a bracket in the cost matrices.
comment by Johnicholas · 2009-02-22T20:37:41.000Z · LW(p) · GW(p)
I am surprised that you didn't mention utility functions that are sensitive to relative income (or relative wealth).
comment by Daniel_Reeves2 · 2009-02-22T20:50:27.000Z · LW(p) · GW(p)
how much do we want AA to demand in bribes - from our global, societal perspective?
Enough to get both fair and max-welfare, of course.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-22T20:52:59.000Z · LW(p) · GW(p)
Nesov, fixed.
John, that would be an interesting middle ground of sorts - the trouble being that from a social perspective, you probably do want as much wealth generated as possible, if it's any sort of wealth that can be reinvested.
Reeves, if both players play (C, C) and then divide up the points evenly at the end, isn't that sort of... well... communism?
↑ comment by grawk1 · 2013-03-08T07:59:32.772Z · LW(p) · GW(p)
Eliezer, I'm surprised at you! While your personal political inclinations may be libertarian/capitalist, you don't get to end the discussion by saying "I am a Blue, and this is an idea of the hated Greens, so it must be wrong."
As you've said, "politics is the mind-killer." That wasn't an honest attempt at engaging with the idea; it was a cached thought that let you reject it without even turning on your brain in the first place.
↑ comment by Osuniev · 2013-03-08T09:18:06.828Z · LW(p) · GW(p)
Reeves, if both players play (C, C) and then divide up the points evenly at the end, isn't that sort of... well... communism?
Is this wrong for any reason other than cached thoughts, though? (Probably yes, but you didn't explain it.)
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-08T15:31:35.150Z · LW(p) · GW(p)
Because it's rejecting the premise of a perfectly good experiment, not because it would be a bad idea in real life. Also, there are difficulties with having canonical utility measures across agents, but that's a separate point.
comment by Daniel_Reeves2 · 2009-02-22T21:17:37.000Z · LW(p) · GW(p)
isn't that ... communism?
Setting aside the point that in communism you don't get the max-welfare outcome, you could view communism as a highly unfair mechanism because it gives high wealth-producers a large negative payoff (taxes away most of their wealth) and gives low wealth-producers a big positive payoff. In that sense laissez-faire is exactly fair, giving everyone zero payoff.
This all depends on where you draw the boundaries around the mechanism though.
(Also, you've swept under the rug the problem of equilibrium play in an iterated prisoner's dilemma. It's not as simple as tit-for-tat, of course.)
comment by nazgulnarsil3 · 2009-02-22T21:20:00.000Z · LW(p) · GW(p)
god damn communists. always on about income inequality instead of trying to maximize the amount everyone gets. I always refer to Mind the Gap by Paul Graham in these cases.
comment by Vladimir_Nesov · 2009-02-22T22:14:38.000Z · LW(p) · GW(p)
Shouldn't fairness, if it is a concern, just lead to different utilities, so that players should still play (C,C), but on new payoffs, to get the most fair outcome?
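One standard way to make that concrete (a sketch assuming a Fehr-Schmidt-style inequity-aversion model, which the comment itself doesn't name):

```python
def inequity_averse(own, other, envy=0.5, guilt=0.25):
    """Fehr-Schmidt-style utility: the subjective payoff drops when
    the other player earns more (envy) or less (guilt) than you."""
    return own - envy * max(other - own, 0) - guilt * max(own - other, 0)

# AA's payoffs from the original matrix, rewritten through this lens.
PAYOFFS = {
    ("D", "D"): (0, 0.5),
    ("D", "C"): (1, -1),
    ("C", "D"): (-1, 2),
    ("C", "C"): (0.5, 1),
}
for moves, (aa, jw) in PAYOFFS.items():
    print(moves, inequity_averse(aa, jw))
```

With these (purely illustrative) parameters, (D, C) is worth 0.5 to AA while (C, C) is worth only 0.25, so AA's occasional defections come out as rational play on the transformed payoffs.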
comment by Yvain2 · 2009-02-22T22:50:00.000Z · LW(p) · GW(p)
Wealth redistribution in this game wouldn't have to be communist. Depending on how you set up the analogy, it could also be capitalist.
Call JW the capitalist and AA the worker. JW is the one producing wealth, but he needs AA's help to do it. Call the under-the-table wealth redistribution deals AA's "salary".
The worker can always cooperate, in which case he makes some money but the capitalist makes more.
Or he can threaten to defect unless the capitalist raises his salary - he's quitting his job or going on strike for higher pay.
(To perfect the analogy with capitalism, make two changes. First, the capitalist makes zero without the worker's cooperation. Second, the worker makes zero in all categories, and can only make money by entering into deals with the capitalist. But now it's not a Prisoner's Dilemma at all - it's the Ultimatum Game.)
IANAGT, but I bet the general rule for this class of game is that the worker's salary should depend a little on how much the capitalist can make without workers, how much the worker can make without capitalists, and what the marginal utility structure looks like - but mostly on their respective stubbornness and how much extra payoff having the worker's cooperation gives the capitalist.
In the posted example, AA's "labor" brings JW from a total of 50 to a total of 100. Perhaps if we ignore marginal utilities and they're both equally stubborn, and they both know they're both equally stubborn and so on, JW will be best off paying AA 25 for his cooperation, leading to the equal 75 - 75 distribution of wealth?
[nazgul, a warning. I think I might disagree with you about some politics. Political discussions in blogs are themselves prisoner's dilemmas. When we all cooperate and don't post about politics, we are all happy. When one person defects and talks about politics, he becomes happier because his views get aired, but those of us who disagree with him get angry. The next time you post a political comment, I may have to defect as well and start arguing with you, and then we're going to get stuck in the (D,D) doldrums.]
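A quick check of the 75-75 arithmetic above, using the first payoff matrix:

```python
ROUNDS = 100

jw_alone = 0.5 * ROUNDS  # (D, D) every round: JW totals 50 on his own
jw_coop, aa_coop = 1.0 * ROUNDS, 0.5 * ROUNDS  # (C, C): 100 and 50

surplus = jw_coop - jw_alone  # AA's cooperation is worth 50 to JW
bribe = surplus / 2           # split the gains evenly: 25

print(aa_coop + bribe, jw_coop - bribe)  # 75.0 75.0: the even split
```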
comment by michael_webster2 · 2009-02-23T02:47:45.000Z · LW(p) · GW(p)
Without a theory of focal points, it is very hard to analyze this interaction.
One point, however, is very clear: the slight change away from symmetry reveals two agents arguing over what should be focal, without the slightest interest in a Nash equilibrium.
comment by nazgulnarsil3 · 2009-02-23T03:47:24.000Z · LW(p) · GW(p)
AA is willing to pay in order to achieve a more egalitarian outcome. In other words: AA is willing to pay money in order to force others to be more like him.
A desire to change the payoff matrix itself is my point: one monkey gets the banana and the other monkey cries justice. Justice is formalized fairness. I can easily envision that AA would also pay in order to alter the payoff matrix.
So let's set up another trial of this with an added meta-dilemma: in each case the disadvantaged member of the trial can forfeit another 5 points in order to alter the payoff matrix itself in an egalitarian direction. The caveat is that the advantaged person can pay an additional 5 points to stop them. Or make it so the disadvantaged player can contribute any number of points, and the other has to contribute an equal number to block the change. What sort of equilibrium would result here?
comment by Wei_Dai2 · 2009-02-23T16:25:45.000Z · LW(p) · GW(p)
This is fascinating. JW plays C in the last round, even though AA just played D in the next-to-last round. What explains that? Maybe JW's belief in his own heroic story is strong enough to make him sacrifice his self-interest?
Theoretically, of course, utility functions are invariant up to positive affine transformation, so a utility's absolute sign is not meaningful. But this is not always a good metaphor for real life.
So you're suggesting that real life has some additional structure which is not representable in ordinary game theory formalism? Can you think of an extension to game theory which can represent it? (Mathematically, not just metaphorically.)
comment by Nick_Tarleton · 2009-02-24T16:56:12.000Z · LW(p) · GW(p)
Wei: The utility of never playing is never stated, so it's natural to assume it's zero in either case, and the signs are meaningful. I don't know if that counts as extending the formalism.
comment by Michael_Howard · 2009-02-24T19:29:52.000Z · LW(p) · GW(p)
Fascinating comment log, especially given how smart the players were.
I can't find it online, but back in the "OMG the Japanese are eating us for breakfast" days, a study asked ordinary Americans: would you rather...
a) Our economy grows 2% next year and the Japanese economy 4%.
b) Both economies grow 1%.
Most picked (a).
comment by Otus2 · 2009-02-24T20:43:57.000Z · LW(p) · GW(p)
I was very confused by the commentary on the page linked, suggesting JW was the smart one. Sure, AA was a bit slow to start, but he did the same thing I would have done - tried to balance the score. The only thing I don't understand is his pattern... I would have defected once after every four CCs.
To modify the example Michael Howard mentioned above: if you had to choose between (1%, 4%) and (2%, 2%) to be randomly assigned growth rates for your economy and another country's, which would you choose? I'd choose 2% for both, though the expected growth would be less that way.
comment by Michael_Howard · 2009-02-24T21:57:49.000Z · LW(p) · GW(p)
Most picked (a).
Ack, brain fell out! I meant they picked (b).
Otus, I'd pick your first option. I don't think the safety of (2%, 2%) is worth the 0.5% expected value loss. If the risks were huge, say (-20%, 30%) vs. (2%, 2%), I'd probably play safe.
↑ comment by Luke_A_Somers · 2013-03-08T14:21:35.456Z · LW(p) · GW(p)
Mapping from growth rates onto utility is not as simple as adding. Relative growth provides strategic advantage.