In Logical Time, All Games are Iterated Games

post by abramdemski · 2018-09-20T02:01:07.205Z · LW · GW · 10 comments

Contents

  Logical Time
  Iterated Games
  Toy Models
10 comments

Logical Time

The main purpose of this post is to introduce the concept of logical time. The idea was mentioned in Scott's post, Bayesian Probability is for things that are Space-like Separated from You. The term was coined during a conference call with Daniel Demski, Alex Mennen, and perhaps Corey Staten and Evan Lloyd -- I don't remember exactly who was there, or who first used it. Logical time is an informal concept which serves as an intuition pump for thinking about logical causality and phenomena in logical decision theory; don't take it too seriously. In particular, I am not interested in anybody trying to formally define logical time (aside from formal approaches to logical causality). Still, it seems like useful language for communicating decision-theory intuitions.

Suppose you are playing chess, and you consider moving your bishop. You play out a hypothetical game which results in your loss in several moves. You decide not to move your bishop as a result of this. The hypothetical game resulting in your loss still exists within logic. You are logically later than it, in that the game you actually play depends on what happened in this hypothetical game.

Suppose you're stuck in the desert in a Parfit's Hitchhiker problem. Paul Ekman is reading your face, deciding whether you're trustworthy. Paul Ekman does this based on experience, meaning that the computation which is you bears a strong similarity to other computations. This similarity can be used to predict you fairly reliably, based on your facial expressions. What creates this similarity? According to the logical time picture, there is a logical fact much earlier in logical time which governs the connection between facial expressions and behavior.

To the extent that agents are trying to predict the future, they can be thought of as trying to place themselves later in logical time than the events which they're trying to predict. Two agents trying to predict each other are competing to see who can be later in logical time. This is not necessarily wise; in games like chicken, there is a sense in which you want to be earlier in logical time.

Traditional game theory, especially Nash equilibria, relies on what amounts to loopy logical causality to allow each agent to be after the other in logical time. Whether this is bad depends on your view on logical time travel. Perhaps there is a sense in which logical time can be loopy, due to prediction (which is like logical time travel). Perhaps logical time can't be loopy, and this is a flaw in the models used by traditional game theory.

Iterated Games

In logical time, all games are iterated games. An agent tries to forecast what happens in the decision problem it finds itself in by comparing it to similar decision problems which are small enough for it to look at. This puts it later in logical time than the small examples. "Similar games" includes the exact same game, but in which both players have had less time to think.

This means it is appropriate to use iterated strategies. Agents who are aware of logical time can play tit-for-tat in single-shot Prisoner's Dilemma, and so, can cooperate with each other.
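
To make this slightly more concrete, here is a toy sketch (the code and names are mine, purely illustrative, not a formal proposal): each strategy takes the opponent's strategy and a "thinking budget", and reciprocates whatever the opponent does in the same game with a smaller budget -- a logically earlier round.

```python
# Toy sketch: tit-for-tat across "logical iterations" of a one-shot
# Prisoner's Dilemma. The logically earlier rounds are the same game
# played with a smaller thinking budget.

def logical_tft(opponent, budget):
    """Cooperate at the base case; otherwise copy what the opponent
    does in the logically earlier (smaller-budget) game."""
    if budget == 0:
        return "C"
    return opponent(logical_tft, budget - 1)

def always_defect(opponent, budget):
    return "D"

print(logical_tft(logical_tft, 5))    # C -- two copies cooperate
print(logical_tft(always_defect, 5))  # D -- defection in the logical past is punished
```

Two copies of this strategy cooperate at every budget, while a defector found in the logical past gets defected against -- the tit-for-tat pattern, without any literal repetition of the game.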

Iterated games are different in character from single-shot games. The folk theorem shows that almost any outcome is possible in iterated play (in a certain sense). This makes it difficult to avoid very bad outcomes, such as nearly always defecting in the prisoner's dilemma, despite the availability of much better equilibria such as tit-for-tat. Intuitively, this is because (as Yoav Shoham et al. point out in If multi-agent learning is the answer, what is the question?) it is difficult to separate "teaching behavior" from "learning behavior": as in the tit-for-tat strategy, it is generally wise to adopt behavior designed to shape the incentive gradient of the other player, in addition to improving your own score. Unfortunately, it is difficult to say what it means to pursue these two objectives simultaneously.
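
A quick illustration of the "teaching" half (my own toy numbers and code, not from Shoham et al.): against tit-for-tat, the policy that plays the single-shot dominant move loses badly to the policy that has been "taught" to cooperate.

```python
# Iterated Prisoner's Dilemma against tit-for-tat. Payoffs are the
# row player's, with the usual ordering T > R > P > S.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def score_against_tft(policy, rounds=100):
    """Total score of a fixed policy (a function of tit-for-tat's
    current move) over an iterated game against tit-for-tat."""
    total, tft_move = 0, "C"        # tit-for-tat opens by cooperating
    for _ in range(rounds):
        move = policy(tft_move)
        total += PAYOFF[(move, tft_move)]
        tft_move = move             # tit-for-tat mirrors our last move
    return total

print(score_against_tft(lambda last: "D"))  # 104: the single-shot dominant move
print(score_against_tft(lambda last: "C"))  # 300: cooperating, as tit-for-tat 'teaches'
```

Tit-for-tat reshapes the incentive gradient so that the myopically dominant move is no longer the best policy; the hard part is designing learners that both respond to and exert this kind of pressure.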

The subfield most related to this problem is multi-agent learning. Sadly, as discussed in the Shoham et al. paper cited parenthetically in the preceding paragraph, multi-agent learning typically avoids the difficulty by focusing on learning single-shot equilibria via iterated play. Learning single-shot equilibria in an iterated setting is a somewhat weird thing to be doing (hence the title of the paper). However, it is understandable that people might avoid such a difficult problem. The folk theorem illustrates a severe equilibrium selection issue, meaning that traditional tools have little to say about rational play.

One might imagine learning to play single-shot games by playing them over and over. But, what can you do to learn iterated games? You might imagine that you jump up a level again, experiencing the iterated version repeatedly to discover the optimal iterated strategy. However, iterating the game more doesn't really escape the iterated setting; there is no further level!

(You might think meta-iteration involves making the other player forget what it learned in iterated play so far, so that you can re-start the learning process, but that doesn't make much sense if you retain your own knowledge; and if you don't, you can't be learning!)

Toy Models

We can make pictures of logical time using phenomena which we understand more fully. One such picture is based on proofs. If we imagine a theorem prover proving every theorem in some order (such as an ordering based on proof length), we can think of logical time as time-of-proof. We can formulate counterfactuals consistent with this notion of logical time. (As I mentioned before, a picture of logical time is just a picture of logical causality / logical counterfactuals -- the notion of logical time adds nothing, formally.)
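
Here is a toy rendering of that picture (the MIU rewrite system and the code are just my illustration, not part of any formal proposal): run a breadth-first search over derivations, and read off each theorem's "logical time" as the length of its shortest derivation.

```python
from collections import deque

def successors(s):
    """One application of each MIU rewrite rule (Hofstadter's toy system)."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                       # rule 1: xI -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                     # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])   # rule 3: xIIIy -> xUy
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])         # rule 4: xUUy -> xy
    return out

def time_of_proof(axiom="MI", horizon=4):
    """Map each derivable string to the length of its shortest derivation."""
    logical_time = {axiom: 0}
    frontier = deque([axiom])
    while frontier:
        s = frontier.popleft()
        if logical_time[s] >= horizon:
            continue
        for t in successors(s):
            if t not in logical_time:
                logical_time[t] = logical_time[s] + 1   # first proved here
                frontier.append(t)
    return logical_time

for theorem, t in sorted(time_of_proof().items(), key=lambda kv: kv[1]):
    print(t, theorem)
```

Everything proved at step n sits in the logical past of everything first proved at step n+1; counterfactuals can then be phrased in terms of interventions on this ordering.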

We can examine logical time travel in this kind of model by constructing predictors using stronger logics, which allows a predictor to find shorter proofs. This creates decision-theoretic puzzles, because the agent with the weaker logic can't recognize the loopiness of the situation; it thinks it cannot influence the predictor, because (according to its weaker logic) the predictor has a short proof-length and is therefore earlier in logical time. We, on the other hand, can recognize that agents who act as if they control the predictor could do better in the decision problem.

This weirdness seems to only be possible because of the "two dimensional logical time" which exists in this toy model, in which we can vary both proof length and logical strength. One agent has access to arbitrarily long proofs via oracles, and so is "later" in the length dimension; the other has a stronger logic, and so is "later" in the strength dimension.

However, we can collapse the two dimensions into one via logical induction. Logical induction eventually learns to predict what stronger logics would predict, so computation time and logical strength are more or less the same.

You might expect that the loopy scenario in which an agent and a predictor accurately predict each other becomes impossible in logical induction, but, it does not. Logical-induction agents can predict each other well by examining what similar agents do in similar situations. As a result, LIDT agents (agents which make decisions using logical induction) converge to playing correlated equilibria with each other, more or less. (This result ignores the iterated aspect of the games, just like the multi-agent learning approaches I was complaining about earlier; despite learning from all the nearby copies within logic, the LIDT agents think only of the utility for their one decision, which paradoxically results in poorer outcomes even for that decision. Asymptotic decision theory does better, but no nice results for game theory have come from it so far.)

So long as an agent eventually settles down to making some reliable pattern of decisions in a situation, there will be relatively young logical inductors which have learned enough to accurately forecast the decisions made by logical-induction agents who reason using much more computational power.

We can think of the purely logical case, with its apparent two dimensions of logical time, as being a degenerate extreme version of the phenomenon in logical induction. In logical induction, the early predictions may be quite accurate, but they are fallible; they always run the risk of being wrong, since we're in the process of learning. In the pure logical case, we also run the risk of being wrong: using a stronger logic to make predictions runs the risk of introducing inconsistencies. This is easy to forget, since we are accustomed to the assumption that we can easily add axioms to a consistent system to get a stronger one.

An early predictor predicting a late agent must give up on some accuracy -- a prediction which relies on anything other than actually running the computation to be predicted has some chance of failure. This breaks the loopiness in logical time; the late agent always adds some small amount of information, even if its action is predicted with high reliability.

10 comments

Comments sorted by top scores.

comment by Gordon Seidoh Worley (gworley) · 2018-09-20T19:06:46.082Z · LW(p) · GW(p)

I realize you're not interested in defining logical time, or presumably in the philosophical issues underlying time that it would connect to, but for what it's worth I think "logical" is a weird thing to call this way of looking at time. Then again, this approach to time is almost entirely absent from analytical philosophy as far as I have read, so it lacks a good name at all. I've instead been calling things that look like this functional theories of time, by analogy to functionalism, since they treat time by what it does rather than what it's made of -- often because metaphysically it's made of nothing, instead being a way of making sense of causality as observed from within the universe. That is, functional theories of time treat time as an observable phenomenon rather than a description of physics, and make no assumption that time cashes out metaphysically to anything that looks like time as it is experienced. Husserl, Heidegger, and Merleau-Ponty all wrote things suggesting we think about time in this way.

Replies from: abramdemski
comment by abramdemski · 2018-09-22T01:09:45.838Z · LW(p) · GW(p)

I agree that "functional time" makes sense, but somehow, I like "logical time" better. It brings out the paradox: logical truth is timeless, but any logical system must have a proof ordering, which gives rise to a notion of time based on what follows from what.

comment by JenniferRM · 2018-10-10T16:18:09.254Z · LW(p) · GW(p)

(You might think meta-iteration involves making the other player forget what it learned in iterated play so far, so that you can re-start the learning process, but that doesn't make much sense if you retain your own knowledge; and if you don't, you can't be learning!)

If I were doing meta-iteration my thought would be to maybe turn the iterated game into a one-shot game of "taking the next step from a position of relative empirical ignorance and thereby determining the entire future".

So perhaps make up all the plausible naive hunches that I or my opponent might naively believe (update rules, prior probabilities, etc.), then explore the combinatorial explosion of imaginary versions of us playing the iterated game starting from these hunches. Then adopt the hunch(es) that maximize some criterion and play the first real move that that hunch suggests.

This would be like adopting tit-for-tat in iterated PD *because that seems to win tournaments*.

After adopting this plan your in-game behavior is sort of simplistic (just sticking to the initial hunch that tit-for-tat would work) even though many bits of information about the opponent are actually arriving during the game.

If I try to find analogies in the real world here it calls to mind martial arts practice with finite training time. You go watch a big diverse MMA tournament first. Then you notice that grapplers often win. Meta-iteration has finished and then your zeroth move is to decide to train as a grappler during the limited time before you fight for the first time ever. Then in the actual game you don't worry too much about the many "steps" in the game where decision theory might hypothetically inject itself. Instead, you just let your newly trained grappling reflexes operate "as trained".

Note that I don't think this is even close to optimal! (I think "Bruce Lee" beats this strategy pretty easily?) However, if you squint you could argue that this rough model of meta-iteration is what humans mostly do for games of very high importance. Arguably, this is because humans have neurons that are slow to rewire for biological reasons rather than epistemic reasons...

However, when offered the challenge that "meta-iteration can't be made to make sense", this is what pops into my head :-)

When I try to think of a more explicitly computational model of meta-iteration-compatible gaming my attention is drawn to Core War. If you consider the "players of Core War" to be the human programmers, their virtue is high quality programming and they only make one move: the program they submit. If you consider the "players of Core War" to be the programs themselves their virtues are harder to articulate but speed of operation is definitely among them.

comment by dranorter · 2018-09-26T06:16:36.495Z · LW(p) · GW(p)

I think it's worth mentioning that part of the original appeal of the term (which made us initially wary) was the way it matches intuitively with the experience of signaling behavior. Here's the original motivating example. Imagine that you are in the Parfit's Hitchhiker scenario and Paul Ekman has already noticed that you're lying. What do you do? You try to get a second chance. But it won't be enough to simply re-state that you'll pay him. Even if he doesn't detect the lie this time around, you're the same person who had to lie only a moment ago. What changed? Well, you want to signal that what's changed is that some logical time has passed. A logically earlier version of you got a ride from a logically earlier Ekman but didn't pay. But then Ekman put effort into remembering the logical past and learning from it. A logically more recent version of you wasn't expecting this, and perished in the desert. Given that both you and Ekman know these things, what you need to do in order to survive is to show that you are in the logical future of those events, and learned from them. Not only that, but you also want to show that you won't change your mind during the ride back to civilization. There will be time to think during the car ride, and thinking can be a way of getting into the logical future. You want to demonstrate that you're fully in the logical future of the (chronologically yet-to-be-made) decision to pay.

This might be an easy problem if the hitchhiker and Ekman shared the same concept of logical time (and knew it). Then it would be similar to proving you remembered the literal past; you could describe a trivial detail or an agreed-upon signal. However, agents are not necessarily incentivized (or able) to use a shared imaginary iterated version of whatever game they're playing. To me it seems like one of the real questions the logical time terminology brings up is, when, and to what extent, will agents be incentivized to use compatible notions of logical time?

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-02-08T21:49:47.304Z · LW(p) · GW(p)

subfield most related to this problem is multi-agent learning. Sadly, as discussed in the Shoham et al paper I cited parenthetically in the preceding paragraph, multi-agent learning typically avoids th


404 not found

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-02-08T21:48:52.386Z · LW(p) · GW(p)

kind of model by constructing predictors using stronger logics, which allows a predictor to

Link broken 

comment by migueltorrescosta · 2018-09-20T08:53:26.598Z · LW(p) · GW(p)

Thank you for your post abramdemski!

I failed to understand why you can't arrive at a solution for the Single-Shot game via Iterated Play without memory of the previous game. In order to clarify my ideas let me define two concepts first:

Iterated Play with memory: We repeatedly play the game knowing the results of the previous games.

Iterated Play without memory: We repeatedly play the game, while having no memory of the previous play.

The distinction is important: With memory we can at any time search all previous games and act accordingly, allowing for strategies such as Tit-for-Tat and other history-dependent strategies. Without memory we can still learn (for example, by applying some sort of Bayesian update to our probability estimates of each move being played), whilst not having access to the previous games before each move. That way we can "learn" how to best play the single-shot version of the game by iterated play.
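
A minimal sketch of this kind of memoryless learning (the code and payoff numbers are illustrative, assuming a game of chicken): between rounds the learner keeps only aggregate counts -- an estimate of the opponent's move frequencies -- never a transcript of past games, and best-responds to its current estimate.

```python
import random

# Row player's payoffs in a game of chicken (illustrative numbers).
PAYOFF = {("swerve", "swerve"): 2, ("swerve", "straight"): 1,
          ("straight", "swerve"): 3, ("straight", "straight"): 0}

def best_response(p_straight):
    """Best reply given only an estimated frequency of 'straight'."""
    ev = {m: p_straight * PAYOFF[(m, "straight")]
             + (1 - p_straight) * PAYOFF[(m, "swerve")]
          for m in ("swerve", "straight")}
    return max(ev, key=ev.get)

straight_count, rounds_seen = 1, 2      # uniform prior counts, Beta(1,1)-style
for _ in range(1000):
    my_move = best_response(straight_count / rounds_seen)
    their_move = random.choice(["swerve", "straight"])  # stand-in opponent
    straight_count += their_move == "straight"
    rounds_seen += 1

print(best_response(straight_count / rounds_seen))  # learned single-shot reply
```

The learner converges to a best response to the opponent's empirical move frequency -- a single-shot solution -- without ever conditioning its strategy on game history.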

Does what I said above need any clarification, and is there any failure in its logic?

Best Regards, Miguel

Replies from: abramdemski
comment by abramdemski · 2018-09-21T20:24:53.323Z · LW(p) · GW(p)

If you have no memory, how can you learn? I recognize that you can draw a formal distinction, allowing learning without allowing the strategies being learned to depend on the previous games. But, you are still allowing the agent itself to depend on the previous games, which means that "learning" methods which bake in more strategy will perform better. For example, a learning method could learn to always go straight in a game of chicken by checking to see whether going straight causes the other player to learn to swerve. IE, it doesn't seem like a principled distinction.
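
To illustrate with a sketch (mine, using the same toy chicken payoffs as the comment above; for simplicity the exploiting method simply bakes in going straight, rather than explicitly testing its effect first):

```python
# Same illustrative chicken payoffs and memoryless best-responder as
# above, redefined so this sketch is self-contained.
PAYOFF = {("swerve", "swerve"): 2, ("swerve", "straight"): 1,
          ("straight", "swerve"): 3, ("straight", "straight"): 0}

def best_response(p_straight):
    ev = {m: p_straight * PAYOFF[(m, "straight")]
             + (1 - p_straight) * PAYOFF[(m, "swerve")]
          for m in ("swerve", "straight")}
    return max(ev, key=ev.get)

straight_count, rounds_seen = 1, 2
teacher_score = learner_score = 0
for _ in range(1000):
    learner_move = best_response(straight_count / rounds_seen)
    teacher_score += PAYOFF[("straight", learner_move)]  # baked-in 'straight'
    learner_score += PAYOFF[(learner_move, "straight")]
    straight_count += 1    # the memoryless learner only ever sees 'straight'
    rounds_seen += 1

print(teacher_score, learner_score)  # the 'teacher' ends up far ahead
```

The memoryless learner dutifully converges to swerving, and the method that baked in a teaching strategy collects the high payoff every round.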

Furthermore, I don't see the motivation for trying to do well in a single-shot game via iterated play. What kind of situation is it trying to model? This is discussed extensively in the paper I mentioned in the post, "If multi-agent learning is the answer, what is the question?"

Replies from: dranorter
comment by dranorter · 2018-09-26T07:27:24.347Z · LW(p) · GW(p)

The definition may not be principled, but there's something that feels a little bit right about it in context. There are various ways to "stay in the logical past" which seem similar in spirit to migueltorrescosta's remark, like calculating your opponent's exact behavior but refusing to look at certain aspects of it. The proposal, it seems, is to iterate already-iterated games by passing more limited information of some sort between the possibly-infinite sessions. (Both your and the opponent's memory gets limited.) But if we admit that Miguel's "iterated play without memory" is iterated play, well, memory could be imperfect in varied ways at every step, giving us a huge mess instead of well-defined games and sessions. But, that mess looks more like logical time at least.

Not having read the linked paper yet, the motivation for using iterated or meta-iterated play is basically to obtain a set of counterfactuals which will be relevant during real play. Depending on the game, it makes sense that this might be best accomplished by occasionally resetting the opponent's memory.

Replies from: abramdemski
comment by abramdemski · 2018-09-26T07:52:43.871Z · LW(p) · GW(p)

I have been thinking a bit about evolutionarily stable equilibria, now. Two things seem interesting (perhaps only as analogies, not literal applications of the evolutionarily stable equilibria concept):

  • The motivation for evolutionary equilibria involves dumb selection, rather than rational reasoning. This cuts the tricky knots of recursion. It also makes the myopic learning, which only pays attention to how well things perform in a single round, seem more reasonable. Perhaps there's something to be said about rational learning algorithms needing to cut the knots of recursion somehow, such that the evolutionary equilibrium concept holds a lesson for more reflective agents.
  • The idea of evolutionary stability is interesting because it mixes the game and the metagame together a little bit: the players should do what is good for them, but the resulting solution should also be self-enforcing, which means consideration is given to how the solution shapes the future dynamics of learning. This seems like a necessary feature of a solution.