# In the Pareto-optimised crowd, be sure to know your place

post by Stuart_Armstrong · 2011-01-07T18:12:03.561Z · LW · GW · Legacy · 12 comments

tldr: In a population playing independent two-player games, Pareto-optimal outcomes are only possible if there is an agreed universal scale of value relating each player's utility, and the players then act to maximise the scaled sum of all utilities.

In a previous post, I showed that if you are about to play a bargaining game with someone when the game's rules are initially unknown, then the best plan is not to settle on a standard result like the Nash Bargaining Solution or the Kalai-Smorodinsky Bargaining Solution (see this post). Rather, it is to decide in advance how much your respective utilities are worth relative to each other, and then maximise their sum. Specifically, if you both have (representatives of) utility functions u_{1} and u_{2}, then you must pick a θ>0 and maximise u_{1}+θu_{2} (with certain extra measures to break ties). This result also applies if the players are to play a series of known independent games in sequence. But how does this extend to more than two players?
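The two-player policy can be sketched in a few lines. This is a minimal illustration, not anything from the original post: the feasible outcomes and the value of θ are assumed for the example.

```python
# Sketch of the pre-agreed policy: given a finite set of feasible outcomes,
# each a (u1, u2) pair, pick the one maximising u1 + theta * u2.
# (Tie-breaking, which the post mentions, is ignored here.)

def weighted_sum_choice(outcomes, theta):
    """Return the feasible outcome maximising u1 + theta * u2."""
    return max(outcomes, key=lambda o: o[0] + theta * o[1])

# Assumed example outcomes; with theta = 0.5, player 1's utility
# counts double relative to player 2's.
outcomes = [(3.0, 1.0), (2.0, 4.0), (1.0, 5.0)]
print(weighted_sum_choice(outcomes, 0.5))  # -> (2.0, 4.0)
```

Changing θ shifts which outcome wins: with θ = 2, the same set yields (1.0, 5.0), since player 2's utility now carries more weight.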

Consider the case where there are three players (named imaginatively 1, 2 and 3), and that they are going to pair off in each of the possible pairs (12, 23 and 31) and each play a game. The utility gains from each game are presumed to be independent. Then each of the pairs will choose factors θ_{12}, θ_{23} and θ_{31}, and seek to maximise u_{1}+θ_{12}u_{2}, u_{2}+θ_{23}u_{3} and u_{3}+θ_{31}u_{1} respectively. Note here that I am neglecting tie-breaking and such; the formal definitions needed will be given in the proof section.

A very interesting situation comes up when θ_{12}θ_{23}θ_{31}=1. In that case, there is an universal scale of "worth" for each of the utilities: it's as if the three utilities are pounds, dollars and euros. Once you know the exchange rate from pounds to dollars (θ_{12}), and from dollars to euros (θ_{23}), then you know the exchange rate from euros to pounds (θ_{31}=1/(θ_{12}θ_{23})). We'll call these situations transitive.
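The currency analogy translates directly into a check on the three factors. The exchange rates below are assumed numbers for illustration:

```python
def is_transitive(theta12, theta23, theta31, tol=1e-9):
    """Exchange-rate consistency: theta12 * theta23 * theta31 == 1."""
    return abs(theta12 * theta23 * theta31 - 1.0) < tol

# Consistent rates: pounds->dollars = 2, dollars->euros = 0.5,
# so euros->pounds must be 1/(2 * 0.5) = 1.
print(is_transitive(2.0, 0.5, 1.0))   # True
print(is_transitive(2.0, 0.5, 1.2))   # False: the rates are inconsistent
```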

Ideally we'd want the outcomes to be Pareto-optimal for the three utilities. Then the major result is:

**The outcome utilities are Pareto-optimal if and only if the θ are transitive.**

What if there are not three, but hundreds, or thousands of players, playing games with each other? If we assume that the utilities derived from each game are independent, then we get a similar result: given a sequence of players 1, 2,..., n such that each plays a game with the next, and player n also plays a game with player 1, then we still get:

**The outcome utilities are Pareto-optimal if and only if the θ are transitive (i.e. θ_{12}θ_{23}...θ_{n-1 n}θ_{n1}=1).**

What this means is that Pareto-optimal outcomes are only possible if there is a universal (linear) scale of value relating the utilities of any player linked to another by direct or indirect trade links. The only important factor is the weight of your utility in the crowd. This result should be easy to extend to games with more than two players, but that's beyond the scope of this post.

The rest of this post is a proof of the result, and may be skipped by those not of a technical bent.

*Proof:*

Apologies if this result is known or trivial; I haven't seen it before myself.

Here, we will always assume that the set of possible outcomes is convex (mixed solutions are always possible) and that the Pareto-optimal outcome set is smooth (no corners) and contains no straight line segments. What these conditions mean is that if O is the set of Pareto-optimal outcomes, then each point in O has a well-defined tangent and normal vector. Further, the slope of the tangent never stays constant as we move around O. Because O is the upper-right boundary of a convex set, this means that each point in O is uniquely determined by the slope of its tangent vector - equivalently, by the slope of its normal vector. The normal vector is particularly useful, for the point in O that maximises the utility u_{1}+θu_{2} is the point that has normal vector (1, θ). This means that each point in O is uniquely determined by the value of θ. This is illustrated in the following diagram. Here θ=2/3, the purple set is the set of possible outcomes, the blue lines are the sets of constant x+(2/3)y, and the red normal vector (1, 2/3) is drawn from the maximising outcome point (1, 2.5):
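The θ-to-point correspondence is easy to see on a concrete frontier. The sketch below uses an assumed outcome set (the unit quarter-circle, not the purple set from the diagram): on the frontier x² + y² = 1, the point maximising u_{1} + θu_{2} is exactly the point whose outward normal is proportional to (1, θ).

```python
import math

def frontier_point(theta, r=1.0):
    """Point on the quarter-circle frontier x^2 + y^2 = r^2 (x, y >= 0)
    that maximises x + theta * y. Its outward normal is (x, y),
    which is proportional to (1, theta)."""
    scale = r / math.sqrt(1.0 + theta * theta)
    return (scale, theta * scale)

p = frontier_point(2.0 / 3.0)
# Distinct thetas pick distinct frontier points, so theta
# uniquely determines the outcome, as the post argues.
print(p, p[1] / p[0])  # the ratio y/x recovers theta = 2/3
```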

Now assume we have θ_{12}, θ_{23} and θ_{31} as above. The utility outcomes for the three games (given by maximising u_{1}+θ_{12}u_{2}, u_{2}+θ_{23}u_{3} and u_{3}+θ_{31}u_{1}) will be designated o_{12}=(x_{1},y_{2}), o_{23}=(x_{2},y_{3}) and o_{31}=(x_{3},y_{1}), with the index corresponding to the players.

We now consider a change to the utilities by adding the vectors v_{12}=(x_{12},y_{12}), v_{23} and v_{31} to each of these outcomes. We want the changes to be Pareto-optimal, and to remain within the set of possible outcomes for each game. The first condition can be phrased as requiring that the changes to each utility be positive; i.e. that x_{12}+y_{31} (the change to player 1's utility), x_{23}+y_{12} and x_{31}+y_{23} all be positive. The second condition requires that o_{12}+v_{12} is not to the right of the tangent line through o_{12}. Since the normal to this tangent line is (1,θ_{12}), this is equivalent to requiring that the dot product of v_{12} and (1,θ_{12}) be negative, hence that x_{12}+θ_{12}y_{12}, x_{23}+θ_{23}y_{23} and x_{31}+θ_{31}y_{31} all be negative.

Then using all six inequalities above, we can see that:

x_{12} ≤ -θ_{12}y_{12} ≤ θ_{12}x_{23} ≤ -θ_{12}θ_{23}y_{23} ≤ θ_{12}θ_{23}x_{31} ≤ -θ_{12}θ_{23}θ_{31}y_{31} ≤ θ_{12}θ_{23}θ_{31}x_{12}

Now if the change is actually going to be a Pareto-improvement, one of the inequalities will have to be a strict inequality, giving x_{12} < θ_{12}θ_{23}θ_{31}(x_{12}). This is obviously impossible when θ_{12}θ_{23}θ_{31} = 1, so in that circumstance, there cannot be any Pareto-improvements. This demonstrates half of the implication.

Now let T be the sixth root of θ_{12}θ_{23}θ_{31}. If θ_{12}θ_{23}θ_{31} > 1 (equivalent to T > 1), then the following vectors (strictly) obey all the above inequalities:

v_{12}=(θ_{12}θ_{23}θ_{31}ε, -Tθ_{23}θ_{31}ε), v_{23}=(T^{2}θ_{23}θ_{31}ε, -T^{3}θ_{31}ε), v_{31}=(T^{4}θ_{31}ε, -T^{5}ε).

Since these obey the inequalities strictly and the outcome sets are smooth, for sufficiently small ε, these provide a Pareto improvement. If θ_{12}θ_{23}θ_{31} < 1 (equivalent to T < 1), then the following vectors work in the same way:

v_{12}=(-θ_{12}θ_{23}θ_{31}ε, Tθ_{23}θ_{31}ε), v_{23}=(-T^{2}θ_{23}θ_{31}ε, T^{3}θ_{31}ε), v_{31}=(-T^{4}θ_{31}ε, T^{5}ε).
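Both cases can be checked numerically. This is a verification sketch, not part of the original proof; the θ values plugged in are assumed for illustration:

```python
def improvement_vectors(t12, t23, t31, eps=1e-3):
    """The displacement vectors from the post; signs flip for the T < 1 case."""
    T = (t12 * t23 * t31) ** (1.0 / 6.0)
    s = 1.0 if T > 1.0 else -1.0
    v12 = (s * t12 * t23 * t31 * eps, -s * T * t23 * t31 * eps)
    v23 = (s * T**2 * t23 * t31 * eps, -s * T**3 * t31 * eps)
    v31 = (s * T**4 * t31 * eps, -s * T**5 * eps)
    return v12, v23, v31

def checks(t12, t23, t31):
    """True if all six inequalities hold strictly: a feasible Pareto improvement."""
    (x12, y12), (x23, y23), (x31, y31) = improvement_vectors(t12, t23, t31)
    pareto = [x12 + y31, x23 + y12, x31 + y23]                       # must be > 0
    feasible = [x12 + t12 * y12, x23 + t23 * y23, x31 + t31 * y31]   # must be < 0
    return all(p > 0 for p in pareto) and all(f < 0 for f in feasible)

print(checks(2.0, 1.5, 0.5))   # True: product = 1.5 > 1
print(checks(0.5, 0.8, 1.0))   # True: product = 0.4 < 1
```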

This proof can be extended to n players in a similar fashion.

When the outcome sets contain straight line segments, we need a tie-breaking system for cases when the "utility indifference lines" are parallel to these segments; apart from that, the result goes through. If the outcome sets contain corners, it remains true that there are no Pareto improvements when θ_{12}θ_{23}θ_{31}=1. But it is possible to have θ_{12}θ_{23}θ_{31}≠1 and be Pareto-optimal *as long as the outcomes are on a corner*. However, in these circumstances, there will always be values θ'_{12}, θ'_{23} and θ'_{31}, giving the same outcomes as θ_{12}, θ_{23} and θ_{31} and with θ'_{12}θ'_{23}θ'_{31}=1. This can be seen by replacing the corner with a series of limiting smooth curves.

## 12 comments

Comments sorted by top scores.

## comment by Perplexed · 2011-01-07T20:03:22.808Z · LW(p) · GW(p)

This result should be easy to extend to games with more than two players

I suspect that there are difficulties when more than two players are involved in the decision making, and a coalition between a majority can leave a minority powerless.

For example, suppose that three players iteratively play a game in which they vote as to how to divide up a jackpot. Four choices:

- Split evenly between 1 and 2
- Split evenly between 2 and 3
- Split evenly between 3 and 1
- Split evenly among all three players.

Majority rule. If no proposed split receives a majority, then nobody gets anything.

There are three stable bargaining solutions - each of which is Pareto-optimal but forces one of your θ factors to be zero. There needs to be some added structure to the problem to force the three-way split as the unique solution. As I described it, the three-way split is *not even* a solution.

ETA: In the 3-way split option, assume that Omega increases the jackpot by 5%, so that total payoff is maximized by the split. But it is still better to split two ways, if you can find a partner.

## comment by Stuart_Armstrong · 2011-01-07T21:09:29.613Z · LW(p) · GW(p)

Yes, coalitions and things like that can happen. But the result still applies - every Pareto-optimal solution is a weighting of utilities and a maximising of the sum. They don't have to be fair or nice, though.

If the players all have utilities that scale linearly with money for the jackpot, then all four of your choices correspond to valuing everyone's utilities equally and maximising the sum; it's just that, because the solution space is a flat plane, we need an extra "tie-break" method for deciding which to go for.

## comment by Perplexed · 2011-01-07T21:44:11.976Z · LW(p) · GW(p)

every Pareto optimal solution is a weighing of utilities and a maximizing of the sum.

all four of your choices correspond to valuing everyone's utilities equally and maximizing the sum;

Not true after my edit. The three-way split is the unique maximum when all players are valued equally; but the two-way splits are also Pareto optimal, are each favored by a majority of players, and are *bargaining solutions* (for some bargaining protocols).

But, now that I look at it, these two-way splits are also scaled utility maxima - for example, using a (49, 49, 2) scaling.
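A quick numerical check of the (49, 49, 2) claim (payoffs assumed as in the edited example: jackpot 1 for a two-way split, 1.05 for the three-way split):

```python
# Per-player payoffs under each voting outcome (assumed values).
splits = {
    "12": (0.5, 0.5, 0.0),
    "23": (0.0, 0.5, 0.5),
    "31": (0.5, 0.0, 0.5),
    "123": (0.35, 0.35, 0.35),   # 1.05 jackpot split three ways
}
w = (49, 49, 2)  # the scaling from the comment above
scores = {k: sum(wi * ui for wi, ui in zip(w, v)) for k, v in splits.items()}
print(max(scores, key=scores.get))  # "12": the 1-2 split maximises this weighting
```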

You have me convinced that every point on the Pareto frontier is a maximum of some weighted sum of utilities. But you don't yet have me convinced that, in every case, all of the weights can be non-zero.

ETA: I am a bit more convinced after re-reading the last paragraph of your posting.

## comment by Stuart_Armstrong · 2011-01-07T22:40:57.458Z · LW(p) · GW(p)

No, you are right. For certain setups (say when the set of outcomes is a perfect circle/sphere), then if one player gets everything they want, this only happens if the weight of the other player's utility is zero.

I didn't write up that case, because there I would have to let the thetas be zero and infinity, a subtlety that would distract from the main point.

## comment by Psychohistorian · 2011-01-08T07:05:56.928Z · LW(p) · GW(p)

Two concerns.

First, and I don't mean this disrespectfully, why should we care? Trying to generate Pareto optimality through a series of games played by randomly but fully paired-off players, with no potential for trade, doesn't resemble anything I can think of. This is particularly true because players of games have little interest in Pareto optimality; the very concept of "game" implies "competition," and the concept of "Pareto optimality" implies "cooperation." Indeed, this assumes that trade is impossible, or else making game outcomes Pareto-optimal is redundant. While I may in some sense prefer Pareto-optimal outcomes for games, what most players are assumed to care about is maximizing their own utility, which is largely unrelated to Pareto optimality. Perhaps my imagination is insufficient, but this doesn't seem applicable to anything.

Second, the =1 seems arbitrary, because there are no units. So long as none of the factors are zero, the product will be some number that could be scaled to 1. I don't think this undermines your point; it just seems odd. Furthermore, it's interesting to note that there is always a consistent scale in which the product equals zero, even if there is another where it equals 1. I'm not sure if this is significant; again it just seems odd.

## comment by Stuart_Armstrong · 2011-01-08T10:51:39.262Z · LW(p) · GW(p)

Who said there is no potential for trade? Each game could be exactly that - a trade - where the "game" is simply dividing up the gain from trade.

The product of the three theta cannot be scaled. First of all, the theta are unitless (as a ratio between two utility functions), so you can't scale them by changing their units. You can change individual thetas by scaling u_{1}, u_{2} or u_{3}, but this will not change their product: scaling u_{1}, for instance, will scale θ_{12} one way and θ_{31} the opposite way, cancelling out. The =1 is well defined.

## comment by Psychohistorian · 2011-01-09T02:19:05.442Z · LW(p) · GW(p)

If trade is permitted, and there are no transaction costs, all outcomes are Pareto-optimal. That's what I was objecting to.

I think you might be worried more about maximizing expected (or average) utility than Pareto optimality. The two are largely unrelated.

Again, I may be misunderstanding something, but the requirement that there be a universal scale for utility seems to require that individuals be interchangeable, if the scale is dependent on observable characteristics (e.g. someone who has $200 has 2 utils of happiness). Unless this scale is somehow contingent on measuring subjective utility, you're saying that Pareto optimality should be possible only when players have an agreed-upon set of preferences.

Even if I'm confused on that point, I still fail to see how this hypothetical relates to anything in reality.

## comment by Stuart_Armstrong · 2011-01-09T10:43:26.627Z · LW(p) · GW(p)

If people are Pareto optimal in a trade, then yes, that trade is Pareto optimal.

However if there are three trades between three people (12, 23 and 31) and each trade is Pareto-optimal, that does not mean that the set of three trades as a whole is Pareto-optimal. It's generally possible to change these trades and get a better outcome for everyone (which is kinda what a market with price signals tries to do).

The only cases where you cannot do this is precisely where there is an agreed upon weighting of everyone's utility function etc...

The individuals need not be interchangeable, the weighting need not be fair. But Pareto-optimality is only possible in this situation (when the outcome set is smooth).

## comment by Will_Sawin · 2011-01-07T18:53:52.346Z · LW(p) · GW(p)

If we assume that all theoretical choices are possible with positive probability, then the set should be, in fact, smooth, or at least differentiable, as it would be impossible for two different weightings to produce the same exact outcome. This makes the result nicer.

I proved an essentially analogous result, viewing it as a justification of utilitarianism.

I think the fact that Pareto is equivalent to utility-weighting is well-known in economics. It's one of the consequences of the separating hyperplane theorem. I have not seen it applied to bargaining, because economists mostly think in terms of actual rather than optimal bargaining. Perhaps it has been, though.

## comment by Stuart_Armstrong · 2011-01-07T21:03:23.218Z · LW(p) · GW(p)

It's one of the consequences of the separating hyperplane theorem

Cool. Do you have a reference?

## comment by Will_Sawin · 2011-01-07T22:08:17.921Z · LW(p) · GW(p)

No. I learned it in classes I took years ago.

## comment by Jayson_Virissimo · 2011-01-07T18:28:58.910Z · LW(p) · GW(p)

Rather, it is to decide in advance how much your respective utilities are worth relative to each other...

How would the players go about this, exactly? It sounds to me like you are assuming a can opener.