Book report: Theory of Games and Economic Behavior (von Neumann & Morgenstern)
post by Nisan · 2020-05-11T09:47:00.773Z · score: 37 (13 votes) · LW · GW · 4 comments
I finally read a game theory textbook. Von Neumann and Morgenstern's "Theory of Games and Economic Behavior" basically started the field of game theory. I'll summarize the main ideas and my opinions.
This was also the book that introduced the VNM theorem about decision-theoretic utility. They presented it as an improvement on "indifference curves", which apparently was how economists thought about preferences back then.
2-player zero-sum games
We start with 2-player zero-sum games, where the payoffs to the players sum to 0 in every outcome. (We could just as well consider constant-sum games, where the sum of payoffs is the same in every outcome.) Maximizing your reward is the same as minimizing your opponent's reward. An example is Rock, Paper, Scissors.
Suppose player 1 moves first with strategy x, and then player 2 moves with knowledge of x, choosing strategy y, and then the players get their rewards u(x, y), −u(x, y). Then player 2 should play the y that minimizes u(x, y), and player 1 should play the x that maximizes min_y u(x, y). If instead player 2 moves first, then player 1 should play the x that maximizes u(x, y) and player 2 should play the y that minimizes max_x u(x, y).
The payoff to player 1 will be the "maximin" value max_x min_y u(x, y) or the "minimax" value min_y max_x u(x, y), depending on whether they move first or second. And moving second is better:
max_x min_y u(x, y) ≤ min_y max_x u(x, y)
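As a quick numerical check of this inequality (my own illustration, not from the book), here are the pure-strategy maximin and minimax values for Rock, Paper, Scissors:

```python
# Payoffs to player 1 in Rock, Paper, Scissors
# (rows: player 1's move, columns: player 2's move).
U = [[ 0, -1,  1],   # Rock
     [ 1,  0, -1],   # Paper
     [-1,  1,  0]]   # Scissors

# Whoever moves second can react to the first mover's choice,
# so moving second is (weakly) better.
maximin = max(min(row) for row in U)        # player 1 moves first
minimax = min(max(col) for col in zip(*U))  # player 1 moves second
print(maximin, minimax)  # -1 1
```

Over pure strategies the gap is strict (−1 versus 1): whoever commits first simply loses.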
If the action space is finite and we allow mixed strategies, then by the Kakutani fixed point theorem equality obtains. (The book proves this with elementary linear algebra instead.) In Rock, Paper, Scissors, the maximin value is 0, which you get by playing randomly.
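To make the mixed-strategy claim concrete, here is a small linear program for player 1's maximin value in Rock, Paper, Scissors. This is the standard LP formulation, not the book's elementary argument, and it assumes scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Payoff matrix for player 1 (rows: P1's move, columns: P2's move).
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

# Variables: mixed strategy x (3 probabilities) and the value v.
# Maximize v subject to (x^T A)_j >= v for every column j.
c = [0, 0, 0, -1]                          # linprog minimizes, so use -v
A_ub = np.hstack([-A.T, np.ones((3, 1))])  # v - (x^T A)_j <= 0
b_ub = np.zeros(3)
A_eq = [[1, 1, 1, 0]]                      # x is a probability vector
b_eq = [1]
bounds = [(0, 1)] * 3 + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:3], res.x[3]
print(np.round(x, 3), round(v, 6))  # uniform mixing achieves value 0
```

The optimal mixed strategy comes out uniform, with value 0, matching the claim that playing randomly secures the maximin value.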
VNM then argue that if both players must move simultaneously, they should play the maximin/minimax strategy. Their argument is kind of sketchy: They say that since player 1 can secure at least the maximin reward for themselves, and since player 2 can prevent player 1 from getting more than the maximin reward, it's rational for player 1 to play the strategy that guarantees them the maximin reward (17.8.2). They claim they aren't assuming knowledge of rationality of all players, but I think they're smuggling it in.
Also, the minimax strategies form a Nash equilibrium (which they call a "saddle point"). But their argument for playing minimax doesn't rely on this fact directly.
Despite the sketchiness, I agree that you should play maximin in 2-player zero-sum games, unless you have reason to believe your opponent won't play maximin.
n-player zero-sum games
Next we move on to n-player zero-sum games. We allow coalitions, coordination, and side payments, so Nash equilibrium is no longer relevant. An example is the 3-player game where you point at another player; if two players point at each other, they each get 1 point and the third player gets −2; otherwise, everyone gets 0. We pass through four layers of abstraction:

First we define a game in extensive form: There are n players and a finite sequence of moves. A move may be chosen by a player, or be determined randomly with fixed probabilities. A player makes a limited observation of the previous moves before moving. The number of turns, the identity of the player who gets to move, and the allowed observations can all depend on previous moves. All players get numerical rewards at the end.

A game can be expressed in normal form, where each player chooses a strategy without making any observations, and then the reward is a function of the strategies chosen.

The characteristic function for a game is defined as follows: If we divide the players into two coalitions and allow players within a coalition to coordinate, then we get a 2-player game. Let v(S) be the minimax value for coalition S. It's the total reward of all coalition members if they coordinate on an optimal strategy.
Note that this requires the coalition to have access to a shared source of randomness that the other coalition can't see (in order to implement mixed strategies). They also need to coordinate on which minimax strategy to execute, if there's more than one.
 A solution to a game is not a single outcome but a set of outcomes, defined as follows:
An imputation is a vector α = (α_1, …, α_n) of rewards for each player such that ∑_i α_i = 0 and α_i ≥ v({i}) for all players i. Every imputation is the result of some choice of strategies.
For imputations α, β, we say that α dominates β (α ≻ β) if there's a nonempty coalition S satisfying:
α_i > β_i for every i ∈ S, and
∑_{i ∈ S} α_i ≤ v(S).
The last condition means that the coalition can secure enough collective reward to guarantee the imputation for everyone in the coalition. (This is where we assume the players can make side payments.) Domination isn't necessarily transitive.
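To make the definition concrete, here is a small check of domination in the 3-player pointing game (my own sketch, not from the book), assuming the characteristic function that gives a lone player −2 and a cooperating pair 2:

```python
from itertools import combinations

def v(S):
    # Assumed characteristic function of the pointing game: a lone
    # player loses 2; a pair pointing at each other gains 2 in total.
    return {1: -2, 2: 2, 3: 0}[len(S)]

def dominates(alpha, beta):
    """alpha dominates beta if some nonempty coalition S has every
    member strictly better off under alpha, and alpha is effective
    for S: sum(alpha[i] for i in S) <= v(S)."""
    coalitions = [S for k in (1, 2, 3) for S in combinations(range(3), k)]
    return any(all(alpha[i] > beta[i] for i in S) and
               sum(alpha[i] for i in S) <= v(S)
               for S in coalitions)

print(dominates((1, 1, -2), (0, 0, 0)))  # True: players 0 and 1 pair up
print(dominates((0, 0, 0), (1, 1, -2)))  # False: no effective coalition
```

The first call succeeds via the coalition of players 0 and 1, who each strictly prefer 1 to 0 and can jointly secure v = 2 by pointing at each other.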
A solution (nowadays called a von Neumann–Morgenstern stable set) is a set V of imputations such that no element of V dominates another, and every imputation outside of V is dominated by an element of V. Solutions aren't necessarily unique, and we'll see later that they might not even exist. A solution to the pointing game described above is:
{(1, 1, −2), (1, −2, 1), (−2, 1, 1)}
This represents the outcome where two players always point at each other, but the solution doesn't determine which two players those are. Surprisingly, this is not the only solution.
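The two stable-set conditions can be spot-checked numerically. This sketch (my own, under the assumed pointing-game characteristic function) verifies internal stability exactly for the set of permutations of (1, 1, −2), and external stability on randomly sampled imputations:

```python
import random
from itertools import combinations

def v(S):  # assumed characteristic function of the pointing game
    return {1: -2, 2: 2, 3: 0}[len(S)]

COALITIONS = [S for k in (1, 2, 3) for S in combinations(range(3), k)]

def dominates(a, b):
    return any(all(a[i] > b[i] for i in S) and
               sum(a[i] for i in S) <= v(S)
               for S in COALITIONS)

V = [(1, 1, -2), (1, -2, 1), (-2, 1, 1)]

# Internal stability: no element of V dominates another.
internal = not any(dominates(a, b) for a in V for b in V if a != b)

# External stability (sampled): random imputations
# (each coordinate >= v({i}) = -2, coordinates summing to 0)
# should all be dominated by some element of V.
random.seed(0)
samples = []
while len(samples) < 500:
    a1, a2 = random.uniform(-2, 4), random.uniform(-2, 4)
    if -(a1 + a2) >= -2:            # keep a3 within imputation bounds
        samples.append((a1, a2, -(a1 + a2)))
external = all(any(dominates(b, s) for b in V) for s in samples)

print(internal, external)
```

Random sampling of course only spot-checks external stability; the exact argument is that any imputation with two coordinates below 1 is dominated by the element of V paying those two players 1 each.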
n-player non-zero-sum games
Finally, we can remove the constant-sum condition. So we get non-zero-sum games like the Prisoner's Dilemma, whose characteristic function is:
v({1}) = v({2}) = 0 (each player's maximin or BATNA corresponds to mutual defection, with zero reward), and
v({1, 2}) = 2 (if the players cooperate, they get 1 reward each).
An n-player non-zero-sum game is simply an (n+1)-player zero-sum game where the extra player has only one strategy and can't make side payments. It turns out that you can equivalently define the solutions without referring to this extra player; you just need to change the definition of imputation to require ∑_i α_i ≤ v({1, …, n}). It turns out that the solutions are all Pareto-optimal. So a solution to Prisoner's Dilemma is one where both players always cooperate, but then there's an undetermined side payment:
{(t, 2 − t) : 0 ≤ t ≤ 2}
What does it mean?
VNM say that a solution is a "standard of behavior". I think it looks kind of like an epistemic state. If V is a solution, then we can imagine it being common knowledge that the outcome will belong to V. If V is not a solution, then either there's an imputation in V that we can deduce will not occur, or there's a coalition that can force an outcome outside of V; either way, V is not a consistent epistemic state.
The theory doesn't say anything about the "negotiating, bargaining, higgling, contracting and recontracting" (61.2.2) that picks out a particular solution or outcome. Morgenstern really wanted to discover a dynamic theory (Rellstab 1992), and such a theory would probably be a model of negotiation. But this book presents only a static theory, a theory of equilibria. The authors speculate that a dynamic theory would talk about a single imputation-under-consideration changing over time (4.8.2–3).
I don't know what progress has been made toward a dynamic theory in the last 75 years, but it seems like we now have the tools to build one: Logical induction tells us how epistemic states change over time, and the Löbian handshake is a mechanism for forming coalitions in opensource game theory.
Disappointment
Unfortunately, the authors had a lot of trouble finding solutions. This is understandable because it involves a search over the powerset of a manifold. The authors found all solutions to all 3-player zero-sum games. But they didn't find all solutions to all 4-player zero-sum games; they say there's a "bewildering variety" of them (35.3.1). The more players there are, the more complicated it gets.
They find families of solutions for some games of n players, but they aren't exhaustive. They describe how to decompose games into smaller games, but most games aren't decomposable. They suggest modeling markets by taking n to be large, but they don't actually do this. They show that all solutions are closed, but otherwise they prove nothing about solutions in general.
They even fail to prove that every game has a solution, and in fact (Lucas 1968) gives an example of a 10-player game with no solution. I imagine a dynamic theory would say that in this game, the players fail to converge to an equilibrium in the limit of infinite computation.
Fun bits
The book requires only an elementary background in mathematics. It's a bit wordy. Von Neumann came up with pretty much all the results (Rellstab 1992).
There are applications to Poker and 3-person markets that I found interesting.
There are at least two easter eggs in the text. Fig. 4 in (8.3.2) contains a hidden elephant because von Neumann's wife liked elephants (Morgenstern 1976). And I'm pretty sure there's a hidden allusion to the Devil in (56.10.4): This paragraph is about the extra, fictional player in a non-zero-sum game, who wants to minimize the rewards of the real players. Real players can collude with the fictional player to make everyone as a whole worse off, and in exchange they are enriched by side payments from the fictional player.
Soon after writing this book, von Neumann went on to build nuclear bombs. This feels profound, but I can't say why exactly.
References
W. F. Lucas. "A game with no solution". Bull. Amer. Math. Soc., Volume 74, Number 2 (1968). 237–239. https://projecteuclid.org/euclid.bams/1183529519
Oskar Morgenstern. "The Collaboration between Oskar Morgenstern and John von Neumann on the Theory of Games". Journal of Economic Literature, Vol. 14, No. 3 (Sep. 1976). pp. 805–816. https://www.jstor.org/stable/2722628 (paywalled)
John von Neumann, Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press (1944).
Urs Rellstab. "New Insights into the Collaboration between John von Neumann and Oskar Morgenstern on the Theory of Games and Economic Behavior". History of Political Economy (1992), 24 (Supplement). pp. 77–93. https://doi.org/10.1215/00182702-24-Supplement-77 (paywalled)
4 comments
If interested in the subject, I'd recommend William Poundstone's Prisoner's Dilemma, which I recently finished. The book has three parts, which interweave in chronological order and are closely related.
The first is a colorful history of John von Neumann, the interesting polymath that he was. It covers his personal background, professional history, and the circumstances surrounding his introduction of game theory.
The second portion is a broader history of game theory, particularly its relation to Cold War era events and strategy. The United States government paid for the research leading to its advent, and the intersection of science and mid-20th-century politics is quite interesting. Much information regarding the nuclear project and related developments, which the OP reviewer mentioned finding interesting, can be found here. Game theory was in fact quite consequential. There is also discussion of other game theorists and of the debate and development of the subject.
Lastly, the book describes game theory itself: various scenarios, games, and the development of each. This includes those mentioned above and more.
While the above is a brief synopsis, if you're looking to broaden your knowledge of the field with a more colorful read, I would explore Poundstone's title.
When I read your summary (and follow-up post), I get the impression that you are suggesting it might be reasonable to study the book, follow up on its ideas, or spend time looking to improve upon its shortcomings. It seems to me that paying attention to a 75-year-old textbook (and drawing the attention of others to it) only makes sense for its historical value or if you manage to tease out some timeless lessons. Hm, or maybe to let people know that "no, it doesn't make sense for you to read this". But my impression is that neither of these was the goal? If not, what was the aim of the post then?
Why not just read something up-to-date instead?
Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations (Yoav Shoham and Kevin Leyton-Brown, 2010), which I read and can recommend (freely available online)
Game Theory (Maschler, Zamir, Solan, 2013), which I didn't read, but it should be good and has a 2020 edition
Thanks for the references! I now know that I'm interested specifically in cooperative game theory, and I see that Shoham & Leyton-Brown has a chapter on "coalitional game theory", so I'll take a look.
Related to that: An interesting take (not only) on cooperative game theory is Schelling's The Strategy of Conflict (from 1960, resp. the second edition from 1980, but I am not aware of sufficient follow-up research on the ideas presented there). And there might be some useful references in CLR's sequence on Cooperation, Conflict, and Transformative AI.