# What should superrational players do in asymmetric games?

post by badger · 2014-01-24T07:42:05.108Z · LW · GW · Legacy · 17 comments

Rereading Hofstadter's essays on superrationality prompted me to wonder what strategies superrational agents would want to commit to in asymmetric games. In symmetric games, everyone can agree on the outcome they'd like to jointly achieve, leaving the decision-theoretic question of whether the players can commit or not. In asymmetric games, life becomes murkier. There are typically many Pareto-efficient outcomes, and we enter the wilds of cooperative game theory and bargaining solutions trying to identify the right one. While, say, the Nash bargaining solution is appealing on many levels, I have a hard time connecting the logic of superrationality to any particular solution. Recently though, I found some insight in "Cooperation in Strategic Games Revisited" by Adam Kalai and Ehud Kalai (working paper version and three-page summary version) for the special case of two-player games with side transfers.

Just to make sure everyone's on common ground, the prototypical game examined in the argument for superrationality is the prisoners' dilemma:

| Alice \ Bob | Cooperate | Defect |
| --- | --- | --- |
| **Cooperate** | 10 / 10 | 0 / 12 |
| **Defect** | 12 / 0 | 4 / 4 |

The unique dominant-strategy equilibrium is *(Defect, Defect)*. However, Hofstadter argues that "superrational" players would recognize the symmetry in reasoning processes between each other and thus conclude that cooperating is in their interest. The argument is not in favor of unconditional cooperation. Instead, the reasoning is closer to "I cooperate if and only if I expect you to cooperate if and only if I cooperate". Many bits have been devoted to formalizing this reasoning in timeless decision theory and other variants.

The symmetry in the prisoners' dilemma makes it easy to pick out *(Cooperate, Cooperate)* as the action profile each player ideally wants to see happen. Consider instead the following skewed prisoners' dilemma:

| Alice \ Bob | Cooperate | Defect |
| --- | --- | --- |
| **Cooperate** | 2 / 18 | 0 / 12 |
| **Defect** | 12 / 0 | 4 / 4 |

The *(Cooperate, Cooperate)* outcome still has the highest total benefit, but *(Defect, Defect)* is also Pareto-efficient. With this asymmetry, it seems reasonable for Alice to *Defect*, even as someone who would cooperate in the original prisoners' dilemma. Suppose however players can also agree to transfer utility between themselves on a 1-to-1 basis (like if they value cash equally and can make side-payments). Then, *(Cooperate, Cooperate)* with a transfer between 2 and 14 from Bob to Alice dominates *(Defect, Defect)*. The size of the transfer is still up in the air, although a transfer of 8 (leaving both with a payoff of 10) is appealing since it takes us back to the original symmetric game. I feel confident suggesting this as an outcome the players should commit to if possible.
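As a quick sanity check on that transfer range, here is a minimal sketch (the variable names are mine, not from the post):

```python
# Payoffs (Alice, Bob) in the skewed prisoners' dilemma above.
cc = (2, 18)  # both cooperate
dd = (4, 4)   # both defect

# A transfer t from Bob to Alice makes (Cooperate, Cooperate) strictly
# better than (Defect, Defect) for both players iff:
#   cc[0] + t > dd[0]   (Alice)   and   cc[1] - t > dd[1]   (Bob)
t_min = dd[0] - cc[0]              # Alice requires t > 2
t_max = cc[1] - dd[1]              # Bob requires t < 14
t_symmetric = (cc[1] - cc[0]) / 2  # 8: equalizes payoffs at 10 each
print(t_min, t_max, t_symmetric)   # 2 14 8.0
```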

While the former game could be symmetrized in a nice way, what about more general games where payoffs could look even more askew or strategy sets could be completely different?

Let *A* be the payoff matrix for Alice and *B* be the payoff matrix for Bob in any given game. Kalai and Kalai point out that the game (*A*, *B*) can be decomposed into the sum of two games:

(*A*, *B*) = ( (*A* + *B*)/2, (*A* + *B*)/2 ) + ( (*A* − *B*)/2, (*B* − *A*)/2 )

where payoffs are identical in the first game (the *team game*) and zero-sum in the second (the *advantage game*). Consider playing these games separately. In the team game, Alice and Bob both agree on the action profile that maximizes their payoff with no controversy. In the advantage game, preferences are exactly opposed, so each can play their maximin strategy, again with no controversy. Of course, the rub is the team game strategy profile could be very different from the advantage game strategy profile.
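For concreteness, the decomposition can be checked numerically on the skewed prisoners' dilemma above (a minimal sketch; the matrix layout is my own):

```python
# Skewed prisoners' dilemma; rows are Alice's actions (Cooperate, Defect),
# columns are Bob's actions (Cooperate, Defect).
A = [[2, 0], [12, 4]]    # Alice's payoffs
B = [[18, 12], [0, 4]]   # Bob's payoffs

team = [[(a + b) / 2 for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
adv  = [[(a - b) / 2 for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Alice's payoff is team + adv and Bob's is team - adv, so the sum of the
# two games reproduces the original game exactly.
for i in range(2):
    for j in range(2):
        assert team[i][j] + adv[i][j] == A[i][j]
        assert team[i][j] - adv[i][j] == B[i][j]

print(team)  # [[10.0, 6.0], [6.0, 4.0]]
print(adv)   # [[-8.0, -6.0], [6.0, 0.0]]
```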

Suppose Alice and Bob could commit to playing each game separately. Kalai and Kalai define the payoffs each gets between the two games as

coco(*A*, *B*) = maxmax( (*A* + *B*)/2 )·(1, 1) + minimax( (*A* − *B*)/2 )·(1, −1)

where *coco* stands for *cooperative/competitive*, maxmax is the largest entry of the team game, and minimax is the (possibly mixed-strategy) value of the zero-sum advantage game. We don't actually have two games to be played separately, so the way to achieve these payoffs is for Alice and Bob to actually play the team game actions and hypothetically play the advantage game. Transfers then even out the gains from the team game results and add in the hypothetical advantage game results. Even though the original game might be asymmetric, this simple decomposition allows players to cooperate exactly where interests are aligned and compete exactly where interests are opposed.
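A sketch of this computation, restricted for simplicity to games where the advantage game has a pure saddle point (in general its value may require mixed strategies):

```python
def coco_value(A, B):
    """Coco value of a two-player game with transferable utility.

    Sketch only: assumes the advantage game has a pure saddle point;
    in general its minimax value can require mixed strategies.
    """
    rows, cols = range(len(A)), range(len(A[0]))
    team = [[(A[i][j] + B[i][j]) / 2 for j in cols] for i in rows]
    adv = [[(A[i][j] - B[i][j]) / 2 for j in cols] for i in rows]
    team_value = max(max(row) for row in team)
    maximin = max(min(row) for row in adv)                     # Alice's guarantee
    minimax = min(max(adv[i][j] for i in rows) for j in cols)  # Bob's cap on Alice
    assert maximin == minimax, "no pure saddle point; mixed strategies needed"
    return team_value + maximin, team_value - maximin

# The skewed prisoners' dilemma from above lands exactly on the
# symmetric 10/10 outcome suggested earlier:
print(coco_value([[2, 0], [12, 4]], [[18, 12], [0, 4]]))  # (10.0, 10.0)
```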

For example, consider two hot dog vendors. There are 40 potential customers at the airport and 100 at the beach. If both choose the same location, they split the customers there evenly. Otherwise, the vendor at each location sells to everyone at that place. Alice turns a profit of $2 per customer, while Bob turns a profit of $1 per customer. Overall this yields the payoffs:

| Alice \ Bob | Airport | Beach |
| --- | --- | --- |
| **Airport** | 40 / 20 | 80 / 100 |
| **Beach** | 200 / 40 | 100 / 50 |

The game decomposes into the team game:

| Alice \ Bob | Airport | Beach |
| --- | --- | --- |
| **Airport** | 30 / 30 | 90 / 90 |
| **Beach** | 120 / 120 | 75 / 75 |

and the advantage game:

| Alice \ Bob | Airport | Beach |
| --- | --- | --- |
| **Airport** | 10 / -10 | -10 / 10 |
| **Beach** | 80 / -80 | 25 / -25 |

The maximizing strategy profile for the team game is *(Beach, Airport)* with payoffs *(120, 120)*. The maximin strategy profile for the advantage game is *(Beach, Beach)* with payoffs *(25, -25)*. In total, this game has a coco-value of *(145, 95)*, which would be realized by Alice selling at the beach, Bob selling at the airport, and Alice transferring *55* to Bob. Alice generates most of the profits in this situation, but Bob has to be compensated for his credible threat to start selling at the beach too.
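These numbers can be verified directly (a sketch; the matrix indices follow the tables above):

```python
# Hot dog vendor game; rows/columns = (Airport, Beach).
A = [[40, 80], [200, 100]]   # Alice's payoffs
B = [[20, 100], [40, 50]]    # Bob's payoffs

team = [[(a + b) / 2 for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
adv  = [[(a - b) / 2 for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

team_value = max(max(row) for row in team)   # 120.0, at (Beach, Airport)
adv_value = max(min(row) for row in adv)     # 25.0, a pure saddle point here
coco = (team_value + adv_value, team_value - adv_value)
print(coco)  # (145.0, 95.0)

# At (Beach, Airport) the raw payoffs are (200, 40), so realizing the
# coco value requires Alice to transfer 200 - 145 = 55 to Bob.
transfer = A[1][0] - coco[0]
print(transfer)  # 55.0
```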

The bulk of the Kalai and Kalai article is extending the coco-value to incomplete information settings. For instance, each vendor might have some private information about the weather tomorrow, which will affect the number of customers at the airport and the beach. The Kalais prove that being able to publicly observe the payoffs for the chosen actions is sufficient for agents to commit themselves to the coco-value ex-ante (before receiving any private information) and that being able to publicly observe all hypothetical payoffs from alternative action profiles is sufficient for commitment even after agents have private information.

The Kalais provide an axiomatization of the coco-value, showing it is the payoff pair that uniquely satisfies all of the following:

- *Pareto optimality*: The sum of the values is maximal.
- *Shift invariance*: Increasing a player's payoff by a constant amount in each cell increases their value by the same amount.
- *Payoff dominance*: If one player always gets more than the other in each cell, that player can't get a smaller value for the game.
- *Invariance to redundant strategies*: Adding a new action that is a convex combination of the payoffs of two other actions can't change the value.
- *Monotonicity in actions*: Removing an action from a player can't increase their value for the game.
- *Monotonicity in information*: Giving a player strictly less information can't increase their value for the game.

The coco-value is also easily computable, unlike Nash equilibria in general. I'm hard-pressed to think of any more I could want from it (aside from easy extensions to bigger classes of games). Given its simplicity, I'm surprised it wasn't hit upon earlier.

## 17 comments

Comments sorted by top scores.

## comment by Benya (Benja) · 2014-01-24T16:05:36.635Z · LW(p) · GW(p)

> I'm hard-pressed to think of any more I could want from [the coco-value] (aside from easy extensions to bigger classes of games).

Invariance to affine transformations of players' utility functions. This solution requires that both players value outcomes in a common currency, plus the physical ability to transfer utility in this currency outside the game (unless there are two outcomes o_1 and o_2 of the game such that A(o_1) + B(o_1) = A(o_2) + B(o_2) = max_o A(o) + B(o), and such that A(o_1) >= A's coco-value >= A(o_2), in which case the players can decide to play the convex combination of these two outcomes that gives each player their coco-value, but this only solves the utility transfer problem, it doesn't make the solution invariant under affine transformations).
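Benja's mixing construction can be illustrated with hypothetical numbers (all values below are invented for illustration, not taken from the post):

```python
# Hypothetical outcomes o1, o2 that both maximize the total payoff A(o) + B(o).
A_o1, B_o1 = 150, 90    # payoffs at o1 (total 240)
A_o2, B_o2 = 130, 110   # payoffs at o2 (total 240)
coco_A = 145            # Alice's coco-value, with A_o1 >= coco_A >= A_o2

# Probability p of playing o1 so that Alice's expected payoff is her coco-value:
p = (coco_A - A_o2) / (A_o1 - A_o2)
assert abs(p * A_o1 + (1 - p) * A_o2 - coco_A) < 1e-9

# Because both outcomes have the same total, Bob's expectation then
# automatically equals his coco-value as well:
coco_B = (A_o1 + B_o1) - coco_A
assert abs(p * B_o1 + (1 - p) * B_o2 - coco_B) < 1e-9
print(p)  # 0.75
```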

Replies from: ThisSpaceAvailable, Vaniver, badger

## ↑ comment by ThisSpaceAvailable · 2014-01-27T04:58:56.070Z · LW(p) · GW(p)

Invariance of the players' utility functions by the same affine transformation, or by independent transformations?

Replies from: Benja

## ↑ comment by Benya (Benja) · 2014-01-27T11:44:19.802Z · LW(p) · GW(p)

Independent.

## ↑ comment by Vaniver · 2014-01-24T17:17:43.658Z · LW(p) · GW(p)

> Invariance to affine transformations of players' utility functions.

This is done by the transfer function between the players, since if I redefine my utility to be 10 times its previous value, then it takes only one of your utility to give me 10, and 10 of my utility to give you one.

Now, of course, you want to lie about the transfer function instead of your utility; "no, I don't like dollars you've given me as much as dollars I've earned myself."

## ↑ comment by badger · 2014-01-24T16:20:52.210Z · LW(p) · GW(p)

Oh, I definitely agree. I meant it's hard to hope for anything more inside environments with transferable/quasilinear utility. It's a big assumption, but I've resigned myself to it somewhat since we need it for most of the positive results in mechanism design.

## comment by ThisSpaceAvailable · 2014-01-27T04:48:31.851Z · LW(p) · GW(p)

If mixed strategies are permitted, then even without utility transfers, the players can improve on the (d,d) outcome in the asymmetric case. For instance, with the first asymmetric PD given above, if Alice cooperates 45% of the time, and Bob cooperates 70% of the time, Alice will have an average payoff of 5.91, while Bob will have a payoff of 7.95.
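Those expected payoffs check out (a quick sketch of the calculation):

```python
# Skewed prisoners' dilemma; rows/columns = (Cooperate, Defect).
A = [[2, 0], [12, 4]]    # Alice's payoffs
B = [[18, 12], [0, 4]]   # Bob's payoffs

p_alice = [0.45, 0.55]   # Alice cooperates 45% of the time
p_bob = [0.70, 0.30]     # Bob cooperates 70% of the time

ev_alice = sum(p_alice[i] * p_bob[j] * A[i][j] for i in range(2) for j in range(2))
ev_bob = sum(p_alice[i] * p_bob[j] * B[i][j] for i in range(2) for j in range(2))
print(round(ev_alice, 2), round(ev_bob, 2))  # 5.91 7.95
```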

## comment by DanielLC · 2014-01-24T19:42:13.569Z · LW(p) · GW(p)

I feel like it's not even clear in that prototypical example. Utility isn't directly transferrable. While you can argue for (cooperate, cooperate) whether the utility is measured in (one human, one hundred paperclips) or (one hundred humans, one paperclip), if both of those situations come up, (cooperate, cooperate) on both is not Pareto efficient. I'm thinking it might be good to normalize using the a priori probability of it coming up. For example, if you have a 50% chance of each of the above possibilities coming up, and you know only one will happen, the obvious thing to do is to maximize humans if you get the option with more humans, and paperclips if you get the one with more paperclips. It's what you'd do if you were going to do both once.

## comment by Vaniver · 2014-01-24T17:15:20.900Z · LW(p) · GW(p)

> Then, (Cooperate, Cooperate) with a transfer between 2 and 14 from Bob to Alice dominates (Defect, Defect).

I would use terminology like "is Pareto efficient," which refers to *outcomes*, rather than *dominate*, because dominate refers to *strategies*. Note also that dominance requires it be *superior*, not just not inferior, to all other outcomes; so Bob would have to commit to cooperating and transferring *more* than 2 units of utility for Alice to consider cooperating.

## ↑ comment by badger · 2014-01-24T17:54:08.531Z · LW(p) · GW(p)

My two cents on usage: "Dominates" in that context is standard in economics, although I should have qualified it as "Pareto dominates". We can distinguish between at least three levels of dominance: every agent strictly preferring one outcome over another, at least one agent strictly preferring it and all others weakly preferring it, and every agent weakly preferring it. I think common usage is to call the first "strict Pareto dominance", the second "weak Pareto dominance", and the third doesn't come up. Unqualified, "Pareto dominance" probably means "weak Pareto dominance", but that's by no means standard. Also, I interpret "between 2 and 14" as "in the open interval (2, 14)".

Thanks for the input on possible confusions! Dominance does get heavily overloaded depending on what's being ordered.

Replies from: Vaniver

## ↑ comment by Vaniver · 2014-01-24T21:04:19.960Z · LW(p) · GW(p)

> Also, I interpret "between 2 and 14" as "in the open interval (2, 14)".

Ah, okay. I generally picture that interval as closed unless someone says "strictly between," but I should have thought through my objection and said "wait, this disappears if the interval is open."

## comment by **[deleted]** · 2014-01-24T15:54:19.200Z · LW(p) · GW(p)

If you want to see this right now, go to a local mainstreaming school where children with and without disabilities are educated together. Watch how children and adults without traumatic head injuries interact with children who do.

Or, watch a human playing with a border collie dog, or an octopus.

Replies from: cousin_it

## comment by Luke_A_Somers · 2014-01-24T14:30:22.504Z · LW(p) · GW(p)

If you can trade cash conditionally on behavior, it's trivial to fix the prisoners' dilemma. Each of you offers to pay the other a fee to cooperate, then you cooperate and you 'both pay', which balances out. PD is only hard when you can't do that.

Replies from: badger, Benja

## ↑ comment by badger · 2014-01-24T15:57:39.078Z · LW(p) · GW(p)

Unless I'm missing something, the promise of cash transfers alone doesn't fix the prisoners' dilemma. If Alice expects Bob to cooperate and pay her $100 iff both cooperate, Alice's best response is to cooperate and then pay Bob nothing. The standard logic of best responses makes everything unravel back to defection.

To fix incentives with conditional transfers, players need to commit to them. If they can do that though, why not just commit to cooperation directly? On decision-theoretic grounds alone, I don't see why one would be easier than another. External commitment devices like escrow (through a trusted third party or cryptography) do rely on cash.

We agree on what the final payoffs in the PD should be, whether or not transfers make it easier. The point of this post is what payoffs agents should commit to in general two-person games with transfers, however they accomplish that commitment.

## ↑ comment by Benya (Benja) · 2014-01-24T15:38:11.896Z · LW(p) · GW(p)

...so? What you say is true but seems entirely irrelevant to the question of what the superrational outcome in an asymmetric game should be.

Replies from: shminux

## ↑ comment by shminux · 2014-01-24T17:58:02.726Z · LW(p) · GW(p)

I think the point is that in PD symmetry+precommitment => cooperation, and asymmetry + precommitment = symmetry (this is the "trivial fix"), so asymmetry + precommitment => cooperation.

Replies from: Luke_A_Somers

## ↑ comment by Luke_A_Somers · 2014-01-25T12:30:55.478Z · LW(p) · GW(p)

Kinda. Benja pointed out the asymmetry + precommitment = symmetry part; I pointed out the symmetry + precommitment part. It just seems that precommitment + free trade makes all sorts of problems rather easy. Not all, of course - just who do you precommit to equalizing with, say, and of course how do you calibrate your relative utilities.