Logical Inductors Converge to Correlated Equilibria (Kinda)
post by Diffractor · 2018-05-30T20:23:54.000Z
Logical inductors of "similar strength", playing against each other in a repeated game, will converge to correlated equilibria of the one-shot game, for the same reason that players who react to the past plays of their opponent converge to correlated equilibria. In fact, this proof is essentially just the proof from Calibrated Learning and Correlated Equilibrium by Foster and Vohra (1997), adapted to a logical inductor setting.
However, there are two complications. The first complication is large differences in power between the logical inductors. To be specific, imagine a logical inductor playing matching pennies against another logical inductor of great strength, which can simulate what the first inductor's highest-expected-value action will be (though not its exploration steps). The mismatched outcomes are worth 1 to the first inductor, the outcome {heads, heads} is assigned some small utility $\delta > 0$, and {tails, tails} is assigned 0 utility; the predictor is rewarded for matching. This is essentially a Death-in-Damascus scenario. The unique Nash equilibrium involves randomizing at approximately 50/50 odds, but in repeated plays of this game with density-zero exploration, the best move is to just play heads to get the $\delta$ utility, because the inductor can't fool its opponent except on days where it explores, which happen with asymptotic frequency 0. So sufficiently large differences in power between the inductors can lead to results that aren't Nash or correlated equilibria.
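As a quick sanity check on the numbers above, here is a minimal sketch. The exact payoff values are illustrative choices of mine ($\delta = 0.1$ for {heads, heads}, 1 for mismatches, and a predictor paid 1 for matching), not something pinned down by the setup; the script solves the 2x2 indifference conditions for the one-shot mixed Nash equilibrium and compares the Nash payoff with what the inductor gets by simply playing heads against a perfect predictor.

```python
import numpy as np

# Assumed payoffs (illustrative). Rows/columns: 0 = heads, 1 = tails.
delta = 0.1
U_inductor = np.array([[delta, 1.0],   # (H,H) = delta, (H,T) = 1
                       [1.0,   0.0]])  # (T,H) = 1,     (T,T) = 0
U_predictor = np.array([[1.0, 0.0],    # predictor is paid 1 for matching
                        [0.0, 1.0]])

# Fully mixed Nash equilibrium of a 2x2 game via indifference conditions:
# the predictor mixes p = P(heads) to make the inductor indifferent,
# the inductor mixes q = P(heads) to make the predictor indifferent.
a, b = U_inductor[0], U_inductor[1]
p = (b[1] - a[1]) / ((a[0] - a[1]) - (b[0] - b[1]))  # solves a.(p,1-p) = b.(p,1-p)
c, d = U_predictor[:, 0], U_predictor[:, 1]
q = (d[1] - c[1]) / ((c[0] - c[1]) - (d[0] - d[1]))

print(f"Nash mix: inductor plays heads with prob {q:.3f}, predictor with prob {p:.3f}")
print(f"Inductor's Nash payoff: {p * delta + (1 - p) * 1.0:.3f}")
print(f"Payoff from always playing heads against a perfect predictor: {delta}")
```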
As such, in order to guarantee correlated equilibria, we will need to make a "no subjective dependencies" assumption. More precisely, in the limit, conditioning on your own move can't shift your probabilities of your opponent's move. The matching-pennies-against-a-powerful-predictor game above fails this, because the expected utility of the agent, conditional on it playing "tails", must limit to $\delta$ (if it were less than $\delta$, the conditional utility would go up because of exploration steps where it fools the opponent; so the limiting behavior is playing "tails" with frequency limiting to 0, with roughly a $\delta$ fraction of those plays of "tails" being ones where it successfully fools the opponent due to exploration). So, conditional on playing "tails", the probability of the opponent playing "tails" is about $1 - \delta$. Conditional on playing "heads", the probability of the opponent playing "tails" is 0.
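To make the "no subjective dependencies" condition concrete, here is a small sketch (the function names are my own) that tests whether a joint-frequency matrix over (my move, opponent's move) is subjectively independent, i.e. whether every row is proportional to every other row, so that conditioning on my own move doesn't shift the distribution over the opponent's move. The second matrix is a rough stand-in for the limiting frequencies of the predictor game above, and fails the test.

```python
import numpy as np

def conditional_rows(joint):
    """Each row renormalized to sum to 1: P(opponent's move | my move)."""
    joint = np.asarray(joint, dtype=float)
    return joint / joint.sum(axis=1, keepdims=True)

def is_subjectively_independent(joint, tol=1e-6):
    """True iff all rows of the joint matrix lie in a 1-d subspace,
    i.e. P(opponent | my move) is the same for every one of my moves."""
    cond = conditional_rows(joint)
    return bool(np.all(np.abs(cond - cond[0]) < tol))

# A product distribution: subjectively independent.
independent = np.outer([0.7, 0.3], [0.4, 0.6])
print(is_subjectively_independent(independent))   # True

# Rough limiting frequencies from the predictor example
# (rows: my heads/tails, columns: opponent heads/tails).
predictor_game = np.array([[0.98,  0.0],
                           [0.002, 0.018]])
print(is_subjectively_independent(predictor_game))  # False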
The second complication is that all Nash equilibria are correlated equilibria, so maybe logical inductors converge to Nash equilibria in most cases. In general, I don't expect this, because it's possible for a game to have a unique Nash equilibrium such that playing best responses to the prediction of the opponent doesn't converge to that equilibrium, because the equilibrium is unstable (small divergences from the equilibrium mean that the incentive gradients point further away from the equilibrium in all directions). My next direction after this will be to attempt to adapt one of these proofs to the logical inductor setting.
For the moment, an (admittedly rather contrived) example of the game of Chicken shows that it's possible to converge to a correlated equilibrium that isn't a Nash equilibrium, and there are intuitive reasons to expect this to happen generally. It's still a Nash equilibrium of the infinitely repeated game, however.
On to the proof.
#Notation
$Pl_i$ is player $i$. There are finitely many players. (The more obvious letters are taken already.)
$A_i$ is the finite set of actions for $Pl_i$. We will assume that all players only have finitely many actions available.
$O := \prod_i A_i$ is the set of joint outcomes.
$o_{j,k}$ is a specific outcome, given by player 1 selecting action $a_j$, and everyone else in the game selecting the joint move indexed by $k$. It will be convenient to view the game as a 2-player game, with player 1 vs. player "everyone else", whose $K$ joint moves are indexed by $k$.
$a_j$ is the $j$'th action for player 1.
$P$ is some assignment of prices to all possible outcomes of the game, interpreted as a matrix (rows indexed by player 1's actions, columns by everyone else's joint moves), as in the previous post.
$[0,1]^{|O|}$ is the space of price matrices. A point in this space is denoted by $x$, while $P^x$ is the corresponding matrix.
$\Delta O$ is the set of all price matrices that actually correspond to a probability distribution. The prices of the logical inductor will converge to this set.
$\eqsim_t$ denotes two quantities limiting to each other as the variable $t$ limits to infinity.
$\mathbb{1}_t(o_{j,k})$ is an indicator variable that's 1 when outcome $o_{j,k}$ occurs on turn $t$, and 0 otherwise.
$F_t$ is the matrix that corresponds to the empirical frequency, up to time $t$, of the various joint outcomes; its $(j,k)$ entry is written $F_t(o_{j,k})$.
$F_t(a_j)$ is the empirical frequency up to time $t$ of player 1 taking action $a_j$, obtained by summing over row $j$ in the matrix $F_t$.
$C$ is a "fuzzy cell". It is a fuzzy subset of $[0,1]^{|O|}$, implemented by some expressible feature that takes the prices of the sentences describing the outcomes in $O$ as input, and outputs a number in $[0,1]$.
$C(t)$ is the number given by taking the prices of the logical inductor at time $t$, and feeding them into the expressible feature that corresponds to the fuzzy cell $C$. It yields the degree of membership, in the fuzzy set $C$, of the point that corresponds to the prices. By a slight abuse of notation, $C(x)$ for an arbitrary point $x$ can also be defined similarly.
$\mathcal{C}$ is a set of fuzzy cells over $[0,1]^{|O|}$, subject to six restrictions.
- There are finitely many fuzzy cells.
- All cells are convex.
- There are two special cells, defined by the sum over all entries of the price matrix being less than $1 - \epsilon$, and greater than $1 + \epsilon$, respectively, as implemented by continuous indicator functions.
- All the other cells must have a "central point" that lies in $\Delta O$.
- All these other cells must have all points in their support be within $\epsilon$ of that "central point".
- The membership functions of all the cells must sum to 1 at every point.
(This may seem a bit arbitrary, but it's because logical inductors can't reference a set characterized by a discontinuous indicator function, so we had to use fuzzy sets. Condition 2 is needed to set up a lemma, condition 3 is convenient to separate out and ignore times where the logical inductor probabilities haven't approximately converged yet, and conditions 4 and 5 show up near the end of the proof.)
$\mathcal{C}_j$ is the set of fuzzy cells in $\mathcal{C}$ (except for the two which characterize incoherent probabilities) which contain points $x$ (i.e. $C(x)$ is nonzero) for which the action $a_j$ is an evidential best response.
$\mathcal{C}^{ind}_j$ is defined as above, except that the cells are subject to the extra restriction that they contain points which correspond to matrices that are subjectively independent of player 1. This just means that all the row vectors lie in a 1-dimensional subspace, so the probability distribution over what everyone else does is independent of what player 1 does. For these matrices, the causal and evidential notions of best response coincide.
$B_j$ is the set of points in $[0,1]^K$ which are probability distributions, and for which $a_j$ is a causal best response. This set is convex for all actions. All subjectively independent points in the cells of $\mathcal{C}^{ind}_j$ have a corresponding point in $B_j$, obtained by taking an arbitrary row vector and renormalizing it to sum to 1.
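To keep the notation straight, here is a minimal Python sketch (names and the toy payoff matrix are my own choices) of the empirical objects used below: the frequency matrix $F_t$ built from a play history, its row-sum marginal $F_t(a_j)$, and a membership check for $B_j$, i.e. whether an action $a_j$ is a causal best response to a fixed distribution over everyone else's moves.

```python
import numpy as np

def empirical_matrix(history, n_actions, n_other):
    """F_t: empirical frequency of each joint outcome o_{j,k} in a play history,
    given as a list of (j, k) pairs (player 1's action, everyone else's joint move)."""
    F = np.zeros((n_actions, n_other))
    for j, k in history:
        F[j, k] += 1.0
    return F / len(history)

def action_marginal(F, j):
    """F_t(a_j): frequency of player 1 taking action a_j, the sum of row j."""
    return F[j].sum()

def is_causal_best_response(j, dist_over_others, payoff):
    """Membership in B_j: is a_j a best response, for the given payoff matrix
    (rows = player 1's actions), to a fixed distribution over everyone else's moves?"""
    expected = payoff @ dist_over_others
    return bool(np.isclose(expected[j], expected.max()))

# Toy example with an assumed 2x2 payoff matrix for player 1.
payoff = np.array([[3.0, 0.0],
                   [2.0, 2.0]])
history = [(0, 0), (0, 0), (1, 1), (0, 1)]
F = empirical_matrix(history, 2, 2)
print(F, action_marginal(F, 0))
# Conditional distribution over everyone else's moves, given player 1 played action 0:
cond = F[0] / action_marginal(F, 0)
print(is_causal_best_response(0, cond, payoff))
```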
#Proof
To begin with, the set of points $F_t$ for all $t$ (these are the empirical frequencies) is confined to the bounded space $\Delta O$ of all possible joint probability distributions, so by Bolzano-Weierstrass, a convergent subsequence exists.
Consider such a subsequence that converges to some limiting point/joint distribution $F_\infty$. That is, $\lim_{n \to \infty} F_{t_n} = F_\infty$.
Now, fix some action $a_j$ with $F_\infty(a_j) > 0$, which will, from this point on, be assumed to be the action taken in the specific outcome $o_{j,k}$.
To restate the definition of $F_t(o_{j,k})$...
$$F_t(o_{j,k}) = \frac{1}{t}\sum_{s \le t} \mathbb{1}_s(o_{j,k})$$
Because the sum over all cells of $C(s)$ is 1, for all $s$,
$$F_t(o_{j,k}) = \frac{1}{t}\sum_{s \le t} \sum_{C \in \mathcal{C}} C(s)\,\mathbb{1}_s(o_{j,k})$$
Now, the sums can be interchanged to yield
$$F_t(o_{j,k}) = \sum_{C \in \mathcal{C}} \frac{1}{t}\sum_{s \le t} C(s)\,\mathbb{1}_s(o_{j,k})$$
The two special fuzzy cells that pick out sufficiently incoherent probabilities contain the inductor's prices on only finitely many turns, and by Lemma 6 from the last post, all cells outside of $\mathcal{C}_j$ have the inner term limiting to 0.
Let $t_C := \sum_{s \le t} C(s)$ be the (fuzzy) number of times up to $t$ where the probabilities were in $C$, and let $F_t(C, o_{j,k}) := \frac{1}{t}\sum_{s \le t} C(s)\,\mathbb{1}_s(o_{j,k})$ be the fraction of the time up to $t$ where the probabilities were in $C$ and the outcome $o_{j,k}$ happened.
Putting all these equalities and asymptotic equalities together, we get
$$F_t(o_{j,k}) \eqsim_t \sum_{C \in \mathcal{C}_j} F_t(C, o_{j,k}) = \sum_{C \in \mathcal{C}_j} \frac{t_C}{t} \cdot \frac{1}{t_C}\sum_{s \le t} C(s)\,\mathbb{1}_s(o_{j,k})$$
Now, let $P^C_{j,k}$ be the $(j,k)$'th entry in the matrix corresponding to the center of the cell $C$. This associates a cell with a canonical probability distribution over everyone's joint moves, and reads out the probability of $o_{j,k}$ happening.
Remember that $o_{j,k}$ is our fixed joint outcome and remains the same, but the cell $C$ that we're reading that coordinate out of varies within the sum.
The empirical fraction of $o_{j,k}$ instances that occur while the probabilities are in $C$, namely $\frac{1}{t_C}\sum_{s \le t} C(s)\,\mathbb{1}_s(o_{j,k})$, must limit to be within $\epsilon$ of $P^C_{j,k}$, by convexity of $C$, calibration of logical inductors, and the cell itself having size $\epsilon$. Thus, by fixing an epsilon/cell size ahead of time, we get that
$$F_t(o_{j,k}) \eqsim_t \sum_{C \in \mathcal{C}_j} \frac{t_C}{t}\left(P^C_{j,k} \pm \epsilon\right)$$
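The calibration step can be illustrated with a quick simulation. The forecaster here is calibrated by construction (outcomes are drawn with probability equal to the quoted price), which is an assumption of the simulation rather than something logical inductors need handed to them; the point is just that, restricted to the days whose price falls in a small cell around a center $p$, the empirical frequency of the outcome comes out within about the cell radius of $p$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "prices" for one outcome over many days, and outcomes drawn so that
# the forecaster is calibrated by construction.
T = 200_000
prices = rng.uniform(0.0, 1.0, size=T)
happened = rng.uniform(0.0, 1.0, size=T) < prices

center, radius = 0.37, 0.02          # a "cell" of size ~radius around its center point
in_cell = np.abs(prices - center) < radius

cell_fraction = in_cell.mean()                # t_C / t
freq_in_cell = happened[in_cell].mean()       # empirical frequency inside the cell
print(f"t_C/t = {cell_fraction:.3f}")
print(f"empirical frequency in cell = {freq_in_cell:.3f}  (cell center = {center})")
```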
In the limit as $t$ goes to infinity, because $a_j$ is taken with nonzero frequency in the limit, the conditional frequency
$$F_\infty(o_{j,k} \mid a_j) := \frac{F_\infty(o_{j,k})}{F_\infty(a_j)}$$
is well-defined. Furthermore, as previously established, the same decomposition applies to $F_t(a_j) = \sum_{k'} F_t(o_{j,k'})$, so
$$F_t(a_j) \eqsim_t \sum_{C \in \mathcal{C}_j} \frac{t_C}{t} \sum_{k'} \left(P^C_{j,k'} \pm \epsilon\right)$$
In the limit of infinitely many cells and $\epsilon$ shrinking to 0,
$$F_\infty(o_{j,k} \mid a_j) = \lim_{t \to \infty} \frac{\sum_{C \in \mathcal{C}_j} \frac{t_C}{t} P^C_{j,k}}{\sum_{C \in \mathcal{C}_j} \frac{t_C}{t} \sum_{k'} P^C_{j,k'}}$$
Now, in general, we can't go much further than this (I think). However, if we consider the special case where player 1 has roughly equal or greater computational power than all the others, so that all cells outside of $\mathcal{C}^{ind}_j$ have an asymptotic $\frac{t_C}{t}$ of 0 (i.e., in the limit, the probability distribution over everyone else's moves is independent of player 1's moves), then we can conclude that the limiting point is a correlated equilibrium. The reasoning goes as follows:
Consider the $K$-length vector whose $k$'th entry is given by $F_\infty(o_{j,k} \mid a_j)$. This vector corresponds to the limiting distribution over everyone else's moves, conditional on the inductor taking move $a_j$. Because of what $F_\infty(o_{j,k} \mid a_j)$ equals in the limit, and because of all cells outside of $\mathcal{C}^{ind}_j$ limiting to 0, this vector is approximated, in the limit, by the vector
$$\frac{\sum_{C \in \mathcal{C}^{ind}_j} \frac{t_C}{t} P^C_{j,\cdot}}{\sum_{C \in \mathcal{C}^{ind}_j} \frac{t_C}{t} \sum_{k'} P^C_{j,k'}}$$
where $P^C_{j,\cdot}$ denotes row $j$ of the center of cell $C$.
As previously mentioned, because all these cells are in $\mathcal{C}^{ind}_j$, and (in the limit as $\epsilon$ goes to 0) their center points are also subjectively independent of player 1, each cell center point is associated with a canonical vector in $B_j$, specifically $\frac{P^C_{j,\cdot}}{\sum_{k'} P^C_{j,k'}}$ (we can consider the cells as never having a center point for which an action has probability 0, so this is always well-defined).
Now, we can consider some weighting of all of these vectors, where the weight on the vector for cell $C$ is
$$w_C := \frac{\frac{t_C}{t} \sum_{k'} P^C_{j,k'}}{\sum_{C' \in \mathcal{C}^{ind}_j} \frac{t_{C'}}{t} \sum_{k'} P^{C'}_{j,k'}}$$
It is fairly easy to verify that this sums up to one across all cells $C$. Because $\sum_C w_C \frac{P^C_{j,\cdot}}{\sum_{k'} P^C_{j,k'}}$ is a weighting of a bunch of vectors in $B_j$, which is a convex set, the resulting vector must be in $B_j$, the set of distributions over the actions of others for which $a_j$ is a causal best response. The factor $\sum_{k'} P^C_{j,k'}$ cancels out of the top and bottom, leaving the vector
$$\frac{\sum_{C \in \mathcal{C}^{ind}_j} \frac{t_C}{t} P^C_{j,\cdot}}{\sum_{C' \in \mathcal{C}^{ind}_j} \frac{t_{C'}}{t} \sum_{k'} P^{C'}_{j,k'}}$$
which, as previously established, is the vector giving the probability distribution over everyone else's moves conditional on player 1 making the move $a_j$.
Therefore, conditional on player 1 making move $a_j$, the distribution over opponent moves was such that $a_j$ was a causal best response.
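The convex-combination step can be checked numerically. The cell-center rows and time fractions below are hypothetical numbers of my own: form the weighted average of the renormalized rows using the weights $w_C$ from above, and confirm that it coincides with the directly computed conditional vector, i.e. that the row-sum factor really does cancel.

```python
import numpy as np

# Hypothetical cell centers: row j of each center matrix (unnormalized sub-distributions
# over everyone else's moves), together with the fraction of time t_C/t spent in each cell.
rows = np.array([[0.10, 0.20, 0.10],
                 [0.05, 0.25, 0.05],
                 [0.15, 0.10, 0.15]])
time_fracs = np.array([0.5, 0.3, 0.2])

row_sums = rows.sum(axis=1)

# Direct computation of the conditional vector from the proof:
direct = (time_fracs[:, None] * rows).sum(axis=0) / (time_fracs * row_sums).sum()

# Same vector as a convex combination of the renormalized rows (points of B_j),
# with weights w_C proportional to (t_C/t) times the row sum of C's center:
weights = time_fracs * row_sums
weights = weights / weights.sum()
combined = (weights[:, None] * (rows / row_sums[:, None])).sum(axis=0)

print(direct, combined, np.allclose(direct, combined))
```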
This same argument can be repeated for all players to conclude that they play a correlated equilibrium if they are all bad enough at predicting each other that all players think they can change their own action without affecting the probability distribution over everyone else's actions.
#Correlated or Nash?
Admittedly, since all Nash equilibria are correlated equilibria, this isn't quite strong enough. As previously mentioned, there are games where it is possible to show that a wide range of strategies do not converge to a Nash equilibrium, and I'll work on that case soon.
However, there is an (admittedly sketchy) argument that logical inductors plausibly converge to correlated equilibria that aren't Nash equilibria in general.
Consider two logical inductors going up against each other in a repeated game of Chicken. Logical inductor A has a trader with a very large starting budget that buys conditional contracts forcing down the expected utility of "swerve", but only on even-numbered days. Logical inductor B is the same, but for odd-numbered days.
Eventually, both logical inductors will learn that the other inductor is very likely to go straight on the corresponding $\overline{\mathbb{P}}$-generable subsequence of even/odd days, and converge to swerving on those days. This probably continues in a self-enforcing way, even after the traders run out of budget, because inductor A exploring into going straight on odd-numbered days experiences ruin, and vice-versa, so they both learn to get out of the way when the other one is going straight. This is a correlated equilibrium, because the long-run frequencies over all joint outcomes don't converge to any of the three Nash equilibria of the one-shot game.
However, it is a Nash equilibrium of the infinitely repeated game, by the Folk Theorem.
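Here is a quick check that the play pattern just described really is a correlated (but not Nash) equilibrium of one-shot Chicken. The payoff numbers are an illustrative standard-shape Chicken matrix of my own choosing: put probability 1/2 on each of (straight, swerve) and (swerve, straight), verify the correlated-equilibrium incentive constraints, and note that the distribution isn't a product of its marginals, so it doesn't correspond to any mixed strategy profile.

```python
import numpy as np

# Standard-shape Chicken payoffs (illustrative numbers); actions: 0 = swerve, 1 = straight.
U1 = np.array([[2.0, 1.0],
               [3.0, 0.0]])   # row player's payoffs
U2 = U1.T                     # symmetric game: column player's payoffs

# Joint distribution learned in the story above: alternate (straight, swerve) / (swerve, straight).
mu = np.array([[0.0, 0.5],
               [0.5, 0.0]])

def is_correlated_eq(mu, U1, U2, tol=1e-9):
    # For each recommended action with positive probability, no unilateral deviation
    # improves the conditional expected payoff.
    for a in range(2):                      # row player recommended action a
        if mu[a].sum() > 0:
            for b in range(2):
                if (U1[b] - U1[a]) @ mu[a] > tol:
                    return False
    for a in range(2):                      # column player recommended action a
        if mu[:, a].sum() > 0:
            for b in range(2):
                if (U2[:, b] - U2[:, a]) @ mu[:, a] > tol:
                    return False
    return True

print("correlated equilibrium:", is_correlated_eq(mu, U1, U2))
is_product = np.allclose(mu, np.outer(mu.sum(axis=1), mu.sum(axis=0)))
print("product of its marginals (mixed-strategy profile):", is_product)
```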
Admittedly there's no proof, but it seems very plausible that this behavior of superstitiously learning to defend/swerve out of the way on $\overline{\mathbb{P}}$-generable subsequences happens in general when logical inductors play each other, and, once established and learned, these patterns are stable.
#Comments
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-07-04T11:34:30.000Z
Regarding the “no subjective dependencies” assumption: I might be missing something obvious, but do you have a proof that this assumption is true in some non-trivial cases? For example, when the players are perfectly symmetric?
Also, you say "the second complication is that all Nash equilibria are correlated equilibria, so maybe logical inductors converge to Nash equilibria in most cases." In what sense is that a "complication"?