Freaky unfairness
post by benelliott · 2011-01-12T10:08:16.966Z · LW · GW · Legacy · 10 comments
In Freaky Fairness the author discusses multi-player games in which all players have access to each other's source code. He gives an algorithm called Freaky Fairness and claims that 'all players run Freaky Fairness' is both a Strong Nash Equilibrium and a Pareto Optimum. Unfortunately, as far as I can tell, this claim is false.
For reference, the Freaky Fairness algorithm works as follows (a rough code sketch is given after the list):
- Calculate the security values in mixed strategies for all subsets of players.
- Divide all other players into two groups: those whose source code is an exact copy of Freaky Fairness (friends), and everyone else (enemies).
- If there are no enemies: build a Shapley value from the computed security values of coalitions; play my part in the outcome that yields the highest total sum in the game; give up some of the result to others so that the resulting allocation agrees with the Shapley value.
- If there are enemies: play my part in the outcome that brings the total payoff of the coalition of all enemies down to their security value.
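Here is that rough sketch in Python. It is only my reading of the description above, not the original author's code, and the helpers security_value and shapley_value are hypothetical placeholders standing in for the game-specific computations.

```python
# Structural sketch of the Freaky Fairness decision rule, as I read the
# description above. The helpers below are placeholders, not real code.
from itertools import combinations

def security_value(coalition, game):
    """Placeholder: value the coalition can guarantee itself in mixed strategies."""
    raise NotImplementedError

def shapley_value(char_function, players):
    """Placeholder: Shapley value built from the coalition security values."""
    raise NotImplementedError

def freaky_fairness(players, source_codes, my_source, game):
    # Friends run an exact copy of this source; everyone else is an enemy.
    friends = {p for p in players if source_codes[p] == my_source}
    enemies = set(players) - friends

    if not enemies:
        # Build the characteristic function from security values, play my part
        # in the payoff-maximising joint outcome, and give up winnings so the
        # final allocation matches the Shapley value.
        v = {frozenset(c): security_value(c, game)
             for r in range(len(players) + 1)
             for c in combinations(sorted(players), r)}
        return "best joint outcome, with side payments", shapley_value(v, players)
    # Otherwise: play my part in the joint outcome that holds the coalition of
    # all enemies down to its security value.
    return "punish enemies down to their security value", None
```

The interesting work is of course hidden inside the placeholders; the point of the sketch is just the friends/enemies branch.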
Now consider the following simple game for three players (a short code sketch of the payoff rule follows the list):
- Each player chooses one player to nominate.
- They can nominate themselves but they don't have to.
- If a player gets nominated by at least two players (possibly including themselves), then they win the game and receive $300.
- If every player gets nominated exactly once, then nobody gets anything.
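Concretely, the payoff rule can be written in a few lines of Python (the player labels 0, 1 and 2 are mine; nothing else is assumed beyond the rules above):

```python
# Minimal model of the nomination game. Each entry of `nominations` is the
# player nominated by player 0, 1 and 2 respectively; returns each player's
# dollar payoff.
def payoffs(nominations):
    counts = [nominations.count(p) for p in range(3)]
    return [300 if counts[p] >= 2 else 0 for p in range(3)]

print(payoffs([0, 0, 2]))  # players 0 and 1 back player 0 -> [300, 0, 0]
print(payoffs([0, 1, 2]))  # everyone nominates themselves -> [0, 0, 0]
```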
Conventional competitive game theory notes that each player has a (weakly) dominant strategy of nominating themselves, and so nobody gets anything. Since the game potentially offers a $300 prize, there is room for improvement here. Let's see how Freaky Fairness does:
- The empty coalition and all coalitions with only one player have a security value of 0.
- All coalitions with at least two players have a security value of $300 (this is independent of whether you use the alpha method, the beta method or mixed strategies to calculate security values).
- The Shapley values work out at $100 to each player (this is trivial by symmetry but can be worked out the long way if you feel like it).
- If all players run Freaky Fairness then they will choose one of their number, nominate them, and that player will then share out the money evenly (a code sketch checking these numbers follows the list).
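Here is that sanity check, a brute-force computation for this particular game. One simplifying assumption on my part: security values are computed over pure strategies only, which happens to be enough here since a coalition of two can already guarantee the prize without randomising.

```python
from itertools import combinations, permutations, product

PLAYERS = (0, 1, 2)

def payoffs(nominations):
    # Dollar payoff to each player, given the nominations made by players
    # 0, 1 and 2 respectively.
    counts = [nominations.count(p) for p in PLAYERS]
    return [300 if counts[p] >= 2 else 0 for p in PLAYERS]

def security_value(coalition):
    # Largest total the coalition can guarantee itself whatever the outsiders
    # do (pure-strategy maximin; a simplification, see above).
    coalition = tuple(sorted(coalition))
    outsiders = tuple(p for p in PLAYERS if p not in coalition)
    if not coalition:
        return 0
    best = 0
    for ours in product(PLAYERS, repeat=len(coalition)):
        worst = None
        for theirs in product(PLAYERS, repeat=len(outsiders)):
            noms = [None] * 3
            for player, choice in zip(coalition + outsiders, ours + theirs):
                noms[player] = choice
            total = sum(payoffs(noms)[p] for p in coalition)
            worst = total if worst is None else min(worst, total)
        best = max(best, worst)
    return best

def shapley_values():
    # Shapley value of the characteristic function built from the security
    # values: average marginal contribution over all 6 player orderings.
    v = {frozenset(c): security_value(c)
         for r in range(4) for c in combinations(PLAYERS, r)}
    shap = {p: 0.0 for p in PLAYERS}
    for order in permutations(PLAYERS):
        seen = frozenset()
        for p in order:
            shap[p] += (v[seen | {p}] - v[seen]) / 6
            seen = seen | {p}
    return shap

for size in (1, 2, 3):
    # size 1: all 0; sizes 2 and 3: all 300
    print({c: security_value(c) for c in combinations(PLAYERS, size)})
print(shapley_values())  # {0: 100.0, 1: 100.0, 2: 100.0}
```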
If any one player deviates then the other two will split the money between themselves to ensure that this one player gets nothing; so far so good. Unfortunately, if two players deviate, there is nothing the remaining Freaky Fairness player can do to stop them from each walking away with $150, which is more than the $100 they get by sticking to Freaky Fairness.
Why does the proof given in the original article fail here? The proof assumes that the Shapley method gives every coalition at least its security value, which it fails to do in this case. This is not a problem that can be fixed by a better method of sharing: there is no way to share a quantity of money between three people such that every pair of them gets all of it.
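To spell that last point out: any split (x1, x2, x3) of the $300 that gave every two-player coalition its $300 security value would need x1 + x2 ≥ 300, x1 + x3 ≥ 300 and x2 + x3 ≥ 300. Adding these gives 2(x1 + x2 + x3) ≥ 900, i.e. at least $450 to share out, when only $300 is on the table.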
Freaky Fairness is still a very impressive algorithm. If we define a class of 'nice games' as games where there exists a way of sharing out the winnings such that every coalition receives at least its security value, then Freaky Fairness works on all these games so long as Shapley does (where Shapley fails, you can easily modify FF to use an allocation that does give every coalition its security value). We now just need to consider the class of 'nasty games', such as the one above. (For those more familiar with co-operative game theory: nice games are those with a non-empty core, nasty games are those without one.)
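Whether a given game is nice or nasty in this sense is itself a small linear programming problem. As an illustration (my own sketch using scipy, not anything from the original post), here is a feasibility check on the characteristic function of the nomination game:

```python
# Check whether an allocation exists giving every coalition at least its
# security value, i.e. whether the core is non-empty.
from itertools import combinations
from scipy.optimize import linprog

players = (0, 1, 2)
v = {frozenset(s): (300 if len(s) >= 2 else 0)
     for r in range(4) for s in combinations(players, r)}

# Inequalities: the total paid to each proper coalition S is at least v(S),
# written as -sum_{i in S} x_i <= -v(S) for linprog.
A_ub, b_ub = [], []
for r in (1, 2):
    for S in combinations(players, r):
        A_ub.append([-1.0 if p in S else 0.0 for p in players])
        b_ub.append(-v[frozenset(S)])

# Equality: the whole prize, v(N) = 300, is shared out.
res = linprog(c=[0.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[v[frozenset(players)]],
              bounds=[(None, None)] * 3)
# For this game the LP is infeasible, so this prints the 'nasty' branch.
print("nice game (non-empty core)" if res.success else "nasty game (empty core)")
```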
Note: I was quite surprised that none of the smart people in the comments section of the original article noticed this, which leads me to think it's quite possibly wrong. Any criticism is welcome.
10 comments
Comments sorted by top scores.
comment by cousin_it · 2011-01-12T12:05:46.980Z · LW(p) · GW(p)
You're right and I was unbelievably stupid. (And I even was aware of the setup you mentioned, but failed to connect it to FF!) Added an update to the post. Thanks! Please do more of this.
↑ comment by benelliott · 2011-01-12T16:12:50.999Z · LW(p) · GW(p)
If it's any consolation, I've found that the nasty games actually don't have Nash Equilibria if you view algorithms as strategies, so FF is as good as it gets.
↑ comment by cousin_it · 2011-01-12T21:11:29.024Z · LW(p) · GW(p)
Could you give a proof of that?
↑ comment by benelliott · 2011-01-12T22:17:39.700Z · LW(p) · GW(p)
By the definition of a nasty game, in any possible mixed (or pure) strategy equilibrium at least one subset S of players must be receiving less than their security value. Therefore, if they were to each pick the following algorithm:
- Play my part in the strategy that gives S our security value.
- Share any pay-offs necessary to ensure everybody in S gets more than they would have done before.
They will all get more than they got before.
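For instance, in the nomination game above, 'all three players run Freaky Fairness' leaves the pair {1, 2} with $200 in total, below their $300 security value; by jointly switching to the algorithm just described they can collect $300 and split it $150 each, making both strictly better off than on $100.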
I hope I didn't confuse anyone by just saying Nash Equilibrium without specifying that it had to be strong (which are the kind of Nash Equilibria that are referred to in the original post). The theorem that Nash Equilibria always exist only applies to Weak Nash Equilibria.
Incidentally, at the time I made the above post my proof was hideously long and complicated; I am slightly ashamed of how long it took me to spot this one.
↑ comment by DanielLC · 2011-01-14T00:45:24.409Z · LW(p) · GW(p)
Do you mean Strong Nash Equilibria? I've read that it's been proven that every possible game has at least one Nash Equilibrium.
↑ comment by benelliott · 2011-01-14T07:18:47.205Z · LW(p) · GW(p)
Yes, I do mean that, although I'm not sure the proof extends to games with infinite numbers of strategies.
comment by Wei Dai (Wei_Dai) · 2011-01-12T18:30:03.482Z · LW(p) · GW(p)
I don't think the counterexample is really a counterexample to cousin_it's claim. Quoting from Wikipedia:
The Nash equilibrium defines stability only in terms of unilateral deviations. In cooperative games such concept is not convincing enough. Strong Nash equilibrium allows for deviations by every conceivable coalition [4]. Formally, a Strong Nash equilibrium is a Nash equilibrium in which no coalition, taking the actions of its complements as given, can cooperatively deviate in a way that benefits all of its members [5]. However, the Strong Nash concept is some times perceived too "strong" in that the environment allows for unlimited private communication. In fact, Strong Nash equilibrium has to be Pareto efficient. As a result of these requirements, Strong Nash almost never exists.
The counterexample shows that 'all players run Freaky Fairness' is not a Strong Nash equilibrium but that wasn't the claim...
↑ comment by benelliott · 2011-01-12T18:35:28.328Z · LW(p) · GW(p)
"any coalition of players that decides to deviate (collectively or individually) cannot win total payoff greater than their group security value"
This phrase was what caused me to assume that a strong Nash Equilibrium was meant.
↑ comment by Wei Dai (Wei_Dai) · 2011-01-12T19:02:31.943Z · LW(p) · GW(p)
Ok, after re-reading his post, I agree he probably did mean "strong Nash Equilibrium". Can you make a note in your post that both of you are really talking about the Strong Nash equilibrium? As far as I can tell, 'all players run Freaky Fairness' is actually a (plain) Nash equilibrium so the current wording is rather confusing.
↑ comment by benelliott · 2011-01-12T19:36:08.878Z · LW(p) · GW(p)
Changed it, and I think you are correct about it being an ordinary Nash Equilibrium (one of infinitely many).