Fair Division of Black-Hole Negentropy: an Introduction to Cooperative Game Theory
post by Wei Dai (Wei_Dai) · 2009-07-16T04:17:44.081Z · LW · GW · Legacy · 37 comments
Non-cooperative game theory, as exemplified by the Prisoner's Dilemma and commonly referred to as just "game theory", is well known in this community. But cooperative game theory seems to be much less well known. Personally, I had barely heard of it until a few weeks ago. Here's my attempt to give a taste of what cooperative game theory is about, so you can decide whether it might be worth your while to learn more about it.
The example I'll use is the fair division of black-hole negentropy. It seems likely that for an advanced civilization, the main constraining resource in the universe is negentropy. Every useful activity increases entropy, and since the entropy of the universe as a whole never decreases, the excess entropy produced by civilization has to be dumped somewhere. A black hole is the only physical system we know whose entropy grows quadratically with its mass, which makes it ideal as an entropy dump. (See http://weidai.com/black-holes.txt where I go into a bit more detail about this idea.)
Let's say there is a civilization consisting of a number of individuals, each the owner of some matter with mass m_i. They know that their civilization can't produce more than (∑ m_i)^2 bits of total entropy over its entire history, and the only way to reach that maximum is for every individual to cooperate and eventually contribute his or her matter into a common black hole. A natural question arises: what is a fair division of the (∑ m_i)^2 bits of negentropy among the individual matter owners?
Fortunately, cooperative game theory provides a solution, known as the Shapley Value. There are other proposed solutions, but the Shapley Value is well accepted due to its desirable properties, such as "symmetry" and "additivity". Instead of going into the theory, I'll just show you how it works. The idea is this: take a sequence of players, and consider the marginal contribution of each player to the total value as he or she joins the coalition in that sequence. Each player is given an allocation equal to his or her average marginal contribution over all possible sequences.
So in the black-hole negentropy game, suppose there are two players, Alice and Bob, with masses A and B. There are then two possible sequences, {Alice, Bob} and {Bob, Alice}. In {Alice, Bob}, Alice's marginal contribution is just A^2. When Bob joins, the total value becomes (A+B)^2, so his marginal contribution is (A+B)^2 - A^2 = B^2 + 2AB. Similarly, in {Bob, Alice}, Bob's MC is B^2, and Alice's is A^2 + 2AB. Alice's average marginal contribution, and hence her allocation, is therefore A^2 + AB, and Bob's is B^2 + AB.
What happens when there are N players? The math is not hard to work out, and the result is that player i gets an allocation equal to m_i (m_1 + m_2 + … + m_N). Seems fair, right?
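For concreteness, here is a minimal Python sketch of the same procedure: it brute-forces every join order, averages each player's marginal contributions, and checks the closed-form answer for the quadratic negentropy game (the masses are just made-up illustration values):

```python
from itertools import permutations
from math import factorial

def shapley_values(masses, value):
    """Shapley values for a game whose payoff depends only on total mass contributed.

    Each player's share is their marginal contribution to `value`, averaged
    over all orders in which players could join the coalition.
    """
    n = len(masses)
    totals = [0.0] * n
    for order in permutations(range(n)):
        coalition_mass = 0.0
        for player in order:
            before = value(coalition_mass)
            coalition_mass += masses[player]
            totals[player] += value(coalition_mass) - before
    return [t / factorial(n) for t in totals]

masses = [1.0, 2.0, 3.5]
print(shapley_values(masses, lambda m: m ** 2))   # [6.5, 13.0, 22.75]
print([m_i * sum(masses) for m_i in masses])      # same: m_i * (m_1 + ... + m_N)
```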
ETA: At this point, the interested reader can pursue two paths to additional knowledge. You can learn more about the rest of cooperative game theory, or compare other approaches to the problem of fair division, for example welfarist and voting-based. Unfortunately, I don't know of a good online resource or textbook for systematically learning cooperative game theory. If anyone does, please leave a comment. For the latter path, a good book is Hervé Moulin's Fair Division and Collective Welfare, which includes a detailed discussion of the Shapley Value in chapter 5.
ETA2: I found that the website of Martin J. Osborne and Ariel Rubinstein offers their game theory textbook for free (after registration), and it contains several chapters on cooperative game theory. The site also has several other books that might be of relevance to this community. A more comprehensive textbook on cooperative game theory seems to be Introduction to the Theory of Cooperative Games. A good reference is Handbook of Game Theory with Economic Applications.
37 comments
Comments sorted by top scores.
comment by nerzhin · 2009-07-16T17:32:37.682Z · LW(p) · GW(p)
I was really confused by this article, so I looked up Shapley value and re-thought the example. This is repetitive, but maybe someone will find it useful.
Society wants to dump matter in a black hole. It (society as a whole) gets a payoff of (total matter dumped)^2 for doing so. If Alice agrees to dump 3 kg, the payoff is 9. Then Bob agrees to dump 2 kg, and the total societal payoff jumps to 25.
In some sense Alice contributed 9 and Bob 16, but it doesn't seem fair to reward them that way. We don't want order of contribution to matter that much. So you imagine doing it in the opposite order. In that case, Alice's marginal contribution is 21 and Bob's is 4.
Then you average over all possible orderings. In this example, Alice gets (9+21)/2 = 15, and Bob gets (16 + 4)/2 = 10.
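A quick Python sketch of the same arithmetic (using the 3 kg and 2 kg figures from above):

```python
u = lambda m: m ** 2                      # societal payoff from total mass dumped
A, B = 3, 2                               # Alice's and Bob's contributions in kg
alice = (u(A) + (u(A + B) - u(B))) / 2    # average of Alice's marginal contributions
bob = (u(B) + (u(A + B) - u(A))) / 2      # average of Bob's marginal contributions
print(alice, bob)                         # 15.0 10.0
```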
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-16T13:05:43.731Z · LW(p) · GW(p)
I would also advise that you copy from Wikipedia some of the basic criteria satisfied (only) by Shapley.
The black hole example may be the obvious one to a certain kind of mind, but one should perhaps point out that this is also a unique way of dividing value that could apply to e.g. a dynamically formed corporation with many internal projects in the absence of top-down management division, and that if you lacked this unique way of dividing value, such a distributed corporation might fail due to disputes over how to divide value in which every agent thinks of themselves as the "critical final ingredient" which creates most of the value and deserves most of the money.
I instantly promoted this post based on its obvious extreme importance, but given the current rating, you might need further exposition to get beyond the "obvious to Eliezer" level.
↑ comment by Strange7 · 2011-02-14T12:10:30.869Z · LW(p) · GW(p)
That applies to a complex organization, where contributions are dissimilar, but my (admittedly limited) understanding of the physics involved suggests that black holes are not picky eaters. Mass is mass.
Given that the mass of the hole itself can be measured, as well as the mass of any given contribution, wouldn't it be possible to treat a domesticated singularity as a publicly-traded corporation, awarding shares in the total negentropy extracted proportional to the initial contributions?
↑ comment by A1987dM (army1987) · 2014-03-01T09:16:30.348Z · LW(p) · GW(p)
That applies to a complex organization, where contributions are dissimilar, but my (admittedly limited) understanding of the physics involved suggests that black holes are not picky eaters. Mass is mass.
See No-hair theorem.
comment by Wei Dai (Wei_Dai) · 2018-05-09T18:47:46.158Z · LW(p) · GW(p)
But cooperative game theory seems to be much less well known.
I found a paper that explains why:
So why, despite these advantages, is cooperative game theory currently dominated by noncooperative theory as applied to economics? Perhaps one answer is that the characteristic function, by assumption, rules out externalities—situations in which a coalition’s payoff depends on what other coalitions are doing. Yet, interactions between coalitions are at the very heart of economics, e.g., bargaining between unions and management, competition between companies, and trade between nations.
comment by Vladimir_Nesov · 2009-07-16T10:58:58.748Z · LW(p) · GW(p)
The procedure for "fair division" is presented as more or less arbitrary. At least some rationale should be given explicitly. Black holes weaken the post; I think it'll be better if you take them out right away and rename the post (links won't break).
↑ comment by Wei Dai (Wei_Dai) · 2009-07-16T11:41:01.724Z · LW(p) · GW(p)
You're the second person to not like the black hole example. It happens to be something I was thinking about, and found a solution for in cooperative game theory. Maybe it's not ideal for pedagogy, but it does have the advantage of being a "real-world" example.
For other examples, as well as the theoretical considerations that led to Shapley Value, see chapter 5 of the book I linked to, or the Wikipedia entry on Shapley Value.
↑ comment by Richard_Kennaway · 2009-07-16T12:26:21.288Z · LW(p) · GW(p)
it does have the advantage of being a "real-world" example.
Harvesting negentropy by dropping mass into black holes is not a real world example, it is a far future sci-fi example.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-16T12:59:10.496Z · LW(p) · GW(p)
I'd have to disagree with that. It's a far-future science example and an obvious default one given the current model of physics for anyone who understands thermodynamics. Okay, it may take some explaining before it's equally obvious to everyone, but it is, in fact, obvious. Especially if you want value that scales nonlinearly with material. I find it difficult to put into mere words just how obvious this is - it far exceeds the obviousness of, say, using gold to talk about inflation, when the choice of those particular atoms is completely arbitrary in a grand physical sense.
↑ comment by Wei Dai (Wei_Dai) · 2009-07-17T00:24:35.101Z · LW(p) · GW(p)
Yes, in some sense it's an obvious default, which seems to go largely unrecognized (even by those who understand thermodynamics), maybe due to a bias towards thinking that value scales linearly with material. But I don't want to claim too much. There are a number of caveats I didn't mention in my post:
- Some space-time geometries may have better entropy dumps than black holes. In an open universe without dark energy, for example, the cosmological background temperature goes to 0 and negentropy is essentially infinite.
- Why make negentropy the object of fair division, instead of value created from using up negentropy, which might not be a linear function of it?
- Why should individuals own matter? If they don't, then our intuition about what constitutes fair division would change drastically.
↑ comment by Douglas_Knight · 2009-07-16T17:32:03.932Z · LW(p) · GW(p)
may take some explaining before it's equally obvious to everyone
That doesn't seem like a particularly useful meaning of "obvious."
The real issue is that most people don't believe in the future. Do you want that as a prereq for game theory? Does it have positive propaganda value? (I'd guess that it has negative propaganda value, but I'd also guess that the people complaining are incorrect about how distracting it is.)
↑ comment by HalFinney · 2009-07-16T18:21:55.316Z · LW(p) · GW(p)
Obligatory "obvious" joke: http://www.basicjokes.com/djoke.php?id=803
↑ comment by Richard_Kennaway · 2009-07-16T18:40:49.905Z · LW(p) · GW(p)
Beat me to it. Link.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-16T18:57:08.388Z · LW(p) · GW(p)
See also: "Trivial".
comment by Liron · 2009-07-16T06:44:59.461Z · LW(p) · GW(p)
My intuitive proposal for m_i's negentropy is to take a fraction of the total entropy equal to m_i's fraction of the total mass, and I see this is equal to the Shapley value. Does this continue to be the case for other negentropy(mass) functions?
(Regardless, Shapley's algorithm seems more fair than mine.)
↑ comment by Wei Dai (Wei_Dai) · 2009-07-16T21:00:01.499Z · LW(p) · GW(p)
That's a good question, and the answer turns out to be no. For example, suppose negentropy(mass) = mass^3 instead of mass^2. Then in sequence {Alice, Bob}, Alice's MC = A^3 and Bob's MC = 3 A^2 B + 3 A B^2 + B^3. Average MC for Alice is A^3 + 3/2 A^2 B + 3/2 A B^2.
In the proportional approach you suggest, Alice's allocation would be (A+B)^3 * A/(A+B) = A (A+B)^2 = A^3 + 2 A^2 B + A B^2.
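Here is a quick symbolic check of both formulas, as a sketch assuming sympy is available:

```python
import sympy as sp

A, B = sp.symbols("A B", positive=True)
u = lambda m: m ** 3                      # cubic negentropy, as in this example

# Alice's Shapley share: average marginal contribution over both orderings
shapley_alice = sp.expand((u(A) + (u(A + B) - u(B))) / 2)
# Alice's share under proportional allocation
proportional_alice = sp.expand(sp.cancel(u(A + B) * A / (A + B)))

print(shapley_alice)        # A**3 + 3*A**2*B/2 + 3*A*B**2/2
print(proportional_alice)   # A**3 + 2*A**2*B + A*B**2
```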
The difference is troubling, and I'll have to think about what is going on.
↑ comment by SarahSrinivasan (GuySrinivasan) · 2009-07-16T23:13:16.329Z · LW(p) · GW(p)
First, by phrasing the question as negentropy(mass) you have assumed we're talking about agents contributing some quantities of a fungible good, and a transform on the total of the good which yields utility. Proportional allocation doesn't make any sense at all (AFAICT) if the contributions aren't of a fungible good.
But let's take those assumptions and run with them. Alice and Bob will contribute A and B of a fungible good. The total contribution will be A+B, which will yield u(A+B) utility. How much of the utility should be credited to Alice, and how much to Bob?
Shapley says Alice's share of the credit should be u(A)/2 + u(A+B)/2 - u(B)/2. Proportional allocation says instead u(A+B) A/(A+B). When precisely are these equal? *algebra*
[u(A) - u(B)] / u(A+B) = (A - B) / (A + B)
Well now that's a lot of structure! I'm having trouble phrasing it as a simple heuristic that resonates intuitively with me... I don't feel like I understand as a matter of some more basic principle why proportional allocation only makes sense when this holds. Can anyone help me out by translating the functional equation into some pithy English?
Wait, maybe this will help: Alice, Bob, and Eve.
2/6 u(A) + 1/6 (u(A+B) - u(B)) + 1/6 (u(A+E) - u(E)) + 2/6 (u(A+B+E) - u(B+E)) = u(A+B+E) * A / (A+B+E)
*algebra*
[(2u(A) - 2u(B+E)) + (u(A+B) - u(E)) + (u(A+E) - u(B))] / u(A+B+E) = [(2A - (B+E)) + (A - E) + (A - B)] / (A+B+E)
Is this the best structure for understanding it? I'm not sure, 'cause I still don't intuit what's going on, but it seems pretty promising to me.
(Edited to rearrange equations and add:) If we want proportional allocation to work, then comparing the difference in contributions between any two agents to the magnitude of the total contribution should be the same as comparing the difference in utility derivable from the agents alone to the magnitude of the total utility derivable from the agents together.
Sounds pretty but I'm not sure I intuit why it rather than another principle should hold here.
↑ comment by cousin_it · 2009-07-17T10:35:39.293Z · LW(p) · GW(p)
Differentiating your first equation by B at B=0 after rearranging the terms a bit, we get a differential equation:
u'(x) = 2u(x)/x - u'(0)
Solving the equation using the first method from this page yields
u(x) = C_1 x + C_2 x^2
This gives the necessary and sufficient condition for the Shapley value to coincide with proportional allocation for two participants. The result also holds for three or more participants because the Shapley value is linear with respect to the utility function. So yeah, Wei Dai just got lucky with that example.
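One can also verify symbolically that this family satisfies the two-player condition above (a sketch assuming sympy; C1 and C2 are the arbitrary constants from the general solution):

```python
import sympy as sp

A, B, C1, C2 = sp.symbols("A B C1 C2", positive=True)
u = lambda x: C1 * x + C2 * x ** 2        # general solution of the ODE

# Two-player condition for Shapley = proportional allocation:
#   [u(A) - u(B)] / u(A+B) == (A - B) / (A + B)
lhs = (u(A) - u(B)) / u(A + B)
rhs = (A - B) / (A + B)
print(sp.simplify(lhs - rhs))             # 0, so every u = C1*x + C2*x**2 works
```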
(I'd nearly forgotten how to do this stuff. Thanks for the exercise!)
↑ comment by Wei Dai (Wei_Dai) · 2009-07-17T12:49:12.079Z · LW(p) · GW(p)
Good work. :)
We're left with the question: for a utility function that is neither linear nor quadratic, which is fairer, proportional allocation or the Shapley Value? One of the fairness properties satisfied by the Shapley Value, but apparently not by proportional allocation, is "Equal Impact", defined as follows (from page 161 of Moulin's book):
The impact of removing agent j on agent i's share is the same as that of removing agent i on agent j's share
With the cubic example, Bob's impact on Alice with Shapley Value is 3/2 A^2 B + 3/2 A B^2, and with proportional allocation it's 2 A^2 B + A B^2. So, is Equal Impact essential to fairness? I'm not sure...
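For what it's worth, those two impact figures can be reproduced symbolically (a minimal sketch, assuming sympy):

```python
import sympy as sp

A, B = sp.symbols("A B", positive=True)
u = lambda m: m ** 3

# x's share when the other player y is present, under each rule
shapley = lambda x, y: (u(x) + (u(x + y) - u(y))) / 2
prop = lambda x, y: u(x + y) * x / (x + y)

# Impact of removing the other player (alone, each player just gets u of their own mass)
print(sp.expand(shapley(A, B) - u(A)))   # 3*A**2*B/2 + 3*A*B**2/2
print(sp.expand(shapley(B, A) - u(B)))   # same: Equal Impact holds for Shapley
print(sp.expand(prop(A, B) - u(A)))      # 2*A**2*B + A*B**2
print(sp.expand(prop(B, A) - u(B)))      # A**2*B + 2*A*B**2, so not equal
```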
↑ comment by cousin_it · 2009-07-17T13:28:37.919Z · LW(p) · GW(p)
I'm gonna ramble a bit to clear up my own thoughts, apologies if this sounds obvious...
The Shapley value is the only value operator defined on all coalitional games that satisfies some intuitive axioms. (They're all very natural and don't include "Equal Impact".) Proportional allocation isn't defined on all coalitional games, only a subset of them where you arbitrarily chose some numbers as players' "contributions". (A general coalitional game doesn't come with that structure - it's just a set of 2^N numbers that specifies the payoff for each possible cooperating clique.) After this arbitrary step, proportional allocation does seem to satisfy the same natural axioms that the Shapley value does. But you can't expand it to all coalitional games coherently, because otherwise the intuitive axioms would force your "contribution" values to become Shapley values.
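To illustrate that general form, here is a minimal sketch (the particular payoffs are arbitrary): a coalitional game represented as nothing but its 2^N numbers, with the Shapley value computed directly from them and no notion of individual "contributions" anywhere:

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Shapley values for a game given only by its characteristic function v,
    which maps each frozenset of players (each cooperating clique) to a payoff."""
    shares = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            shares[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: s / factorial(len(players)) for p, s in shares.items()}

# A two-player example: the "2^N numbers" are simply listed explicitly
v = {frozenset(): 0, frozenset("A"): 1, frozenset("B"): 4, frozenset("AB"): 9}
print(shapley("AB", v))   # {'A': 3.0, 'B': 6.0}
```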
In general, I see no reason to use proportional allocation over the Shapley value. If each player suffers a loss of utility proportional to their individual contribution, or any other side effect with an arbitrary cost function, just include it in the SV calculation.
↑ comment by Wei Dai (Wei_Dai) · 2009-07-19T11:34:18.899Z · LW(p) · GW(p)
Ok, I think I can articulate a reason for my doubt about the Shapley Value. One nice property for a fair division method to have is that the players can't game the system by transferring their underlying contributions to one another. That is, Alice and Eve shouldn't be able to increase their total negentropy allocation (at Bob's expense) by transferring matter from one to the other ahead of time. Proportional allocation satisfies this property, but Shapley Value doesn't (unless it happens to coincide with proportional allocation).
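Here is a small numeric illustration of that, reusing the cubic utility from above (a rough sketch; the masses are arbitrary): two colluding players can raise their combined Shapley share by redistributing mass between themselves, while their combined proportional share stays fixed.

```python
from itertools import permutations
from math import factorial

def shapley(masses, u):
    n = len(masses)
    shares = [0.0] * n
    for order in permutations(range(n)):
        total = 0.0
        for p in order:
            shares[p] += u(total + masses[p]) - u(total)
            total += masses[p]
    return [s / factorial(n) for s in shares]

u = lambda m: m ** 3
# Masses in the order (Alice, Eve, Bob); Alice and Eve collude against Bob
unsplit = shapley([2.0, 0.0, 1.0], u)   # Eve has transferred everything to Alice
split = shapley([1.0, 1.0, 1.0], u)     # mass shared evenly between Alice and Eve
print(unsplit[0] + unsplit[1])          # 17.0
print(split[0] + split[1])              # 18.0: redistributing mass changed their total
# Proportional allocation gives 27 * (2/3) = 18.0 to the pair in both cases
```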
↑ comment by cousin_it · 2009-07-20T09:37:50.080Z · LW(p) · GW(p)
If such transfer of resources is allowed, your share of negentropy must depend only on your contribution, the total contribution and the number of players. If we further assume that zero contribution implies zero share, it's straightforward to prove (by division in half, etc.) that proportional allocation is the only possible scheme.
This still isn't very satisfying. John Nash would have advised us to model the situation with free transfer as a game within some larger class of games and apply some general concept like the Shapley value to make the answer pop out. But I'm not yet sure how to do that.
↑ comment by Perplexed · 2011-02-14T13:25:43.724Z · LW(p) · GW(p)
If such transfer of resources is allowed, your share of negentropy must depend only on your contribution, the total contribution and the number of players. If we further assume that zero contribution implies zero share, it's straightforward to prove (by division in half, etc.) that proportional allocation is the only possible scheme.
One difficulty with proportional allocation is deciding how to measure contributions. Do you divide proportionately to mass contributed, or do you divide proportionately to negentropy contributed?
↑ comment by Perplexed · 2011-02-14T13:24:51.187Z · LW(p) · GW(p)
In Shapley value, a coalition of Alice and Eve against Bob is given equal weight with the other two possible two-against-one coalitions. Yes, Shapley does permit 'gaming' in ways that proportional allocation does not, but it treats all possible 'gamings' (or coalition structures) equally.
↑ comment by Wei Dai (Wei_Dai) · 2009-07-17T14:43:04.559Z · LW(p) · GW(p)
According to Moulin, there are several different sets of axioms that can be used to uniquely derive the Shapley Value, and Equal Impact is among them (it can be used to derive Shapley Value by itself, if I understand correctly).
The problem with all of those sets of axioms is that each set seems to include at least one axiom that isn't completely intuitive. For example, using the terminology in the Wikipedia article, we can use Symmetry, Additivity and Null Player, and while Symmetry and Null Player seem perfectly reasonable, I'm not so sure about Additivity.
comment by jimrandomh · 2009-07-16T04:39:19.938Z · LW(p) · GW(p)
If you want to talk about future societies creating black holes as entropy sinks, fine. But if you want to talk about game theory, you can't talk about black holes at the same time without confusing things very badly.
comment by Wei Dai (Wei_Dai) · 2009-07-31T01:41:46.744Z · LW(p) · GW(p)
I updated this post with references to a couple of textbooks on cooperative game theory.
comment by timtyler · 2009-07-16T06:31:56.376Z · LW(p) · GW(p)
In an article like this, it might help to say what the terms "cooperative game theory" and "non-cooperative game theory" mean - or at least provide a reference. These are esoteric technical terms with a counter-intuitive meaning.
↑ comment by Wei Dai (Wei_Dai) · 2009-07-16T07:58:45.514Z · LW(p) · GW(p)
I assume that the reader is already familiar with "non-cooperative game theory". It's what everyone already knows, usually just by the name "game theory". The typical definition of "cooperative game theory" isn't very helpful for understanding the difference between the two, which is why I'm providing an example instead.
But here's Wikipedia's definition of "cooperative game":
A cooperative game is a game where groups of players ("coalitions") may enforce cooperative behaviour, hence the game is a competition between coalitions of players, rather than between individual players. An example is a coordination game, when players choose the strategies by a consensus decision-making process.
↑ comment by wedrifid · 2009-12-07T06:31:07.644Z · LW(p) · GW(p)
The example given here gives me the impression that 'cooperative game theory' is just non-cooperative game theory where each agent is a coalition rather than an individual. Unless this is the intended meaning, I suggest the definition is misleading.
comment by Marcello · 2009-07-16T07:09:10.392Z · LW(p) · GW(p)
Let’s say there is a civilization consisting of a number of individuals, each the owner of some matter with mass m_i. They know that their civilization can’t produce more than (∑ m_i)^2 bits of total entropy over its entire history
Typo: (∑ m_i)^2 should be m_i^2
↑ comment by Wei Dai (Wei_Dai) · 2009-07-16T08:00:26.832Z · LW(p) · GW(p)
No, that's not a typo... it's supposed to be the square of the sum of the individual masses.