# UDT as a Nash Equilibrium

post by cousin_it · 2018-02-06T14:08:30.211Z · LW · GW · 17 comments

I realized today that UDT doesn't really need the assumption that other players use UDT. In any game where all players have the same utility function, "everyone using UDT" is a Nash equilibrium that gives everyone their highest possible expected utility. So you can just use it unilaterally.

That covers such cases as the Absent-Minded Driver, Psy-Kosh's problem, and Wei's coordination problem. Notably, it doesn't cover the Prisoner's Dilemma, because the players assign different utilities to the same outcome.

(Also this shows how Von Neumann-Morgenstern expected utility maximization is basically a restriction of UDT to single player games with perfect recall. For imperfect recall (AMD) or multiple players (Psy-Kosh) you need the full version.)

You can actually push game theory a bit further, and allow games where players have different utility functions, as long as some subset of players can jointly enforce their highest possible expected utilities. UDT doesn't support that kind of decision-making, but really it should be a no-brainer too...?
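The claim in the first paragraph can be checked by brute force on a toy example. The three-player common-payoff game below, and its payoff numbers, are made-up illustrations (not from the post): the jointly optimal profile, which is what "everyone using UDT" picks out, admits no profitable unilateral deviation.

```python
from itertools import product

# Toy common-payoff game: three players each pick action 0 or 1, and all
# receive the same (hypothetical) payoff. Full coordination on 1 is best;
# coordination on 0 is a worse equilibrium; mixed profiles pay nothing.
def common_payoff(actions):
    return {0: 1.0, 1: 0.0, 2: 0.0, 3: 3.0}[sum(actions)]

players = 3
profiles = list(product([0, 1], repeat=players))

# "Everyone using UDT" corresponds to the jointly optimal profile.
best = max(profiles, key=common_payoff)

# Check it is a Nash equilibrium: no unilateral deviation improves the payoff.
def is_nash(profile):
    for i in range(players):
        for a in (0, 1):
            dev = list(profile)
            dev[i] = a
            if common_payoff(tuple(dev)) > common_payoff(profile):
                return False
    return True

print(best, common_payoff(best), is_nash(best))  # → (1, 1, 1) 3.0 True
```

Note that the all-0 profile is also a Nash equilibrium here, which is exactly the equilibrium-selection worry raised in the comments below.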

## 17 comments


## comment by abramdemski · 2018-02-07T10:14:21.080Z · LW(p) · GW(p)

This seems close to Jessica's observation here:

https://agentfoundations.org/item?id=853

Unilateral decisions based on Nash equilibria can land in poor equilibria, and so fail to be globally optimal. So, more like UDT 1.0 as opposed to UDT 1.1. You can fail stag hunt.

Your last paragraph sounds like it is in the direction of cooperative oracles (which are motivated by the need to select equilibria, i.e., moving from UDT1.0 to UDT1.1, but in a way compatible with general game-theoretic reasoning involving agents with various utility functions):

https://agentfoundations.org/item?id=1468

Replies from: cousin_it

## ↑ comment by cousin_it · 2018-02-07T12:26:52.721Z · LW(p) · GW(p)

I think the approach in my first paragraph solves Wei's coordination problem (which was the motivation for UDT1.1) assuming some kind of tie-breaking, which is common enough. And the generalization described in the last paragraph solves stag hunt as well.

It's a bit worrying that perfectly symmetric PD is left out. But maybe we should leave it to be solved with bargaining later, not treat it as an edge case of conflict-free decision making.

Replies from: abramdemski

## ↑ comment by abramdemski · 2018-02-10T23:18:55.373Z · LW(p) · GW(p)

It seems to me that "assuming some kind of tie breaking" is *basically* UDT1.1, is it not?

I agree that the generalization solves stag hunt. Do you think cooperative oracles are a good formalization of the generalization you intend? (Also, stag hunt involves differing utility functions, but they're not really essential to the problem, unlike prisoner's dilemma. I generally imagine a same-utility-function version for that reason, so what it's pointing at is coordination problems which don't involve asymmetry / tie-breaking.)

Replies from: cousin_it

## ↑ comment by cousin_it · 2018-02-11T09:18:52.370Z · LW(p) · GW(p)

It seems to me that "assuming some kind of tie breaking" is basically UDT1.1, is it not?

Tie-breaking is just a technical assumption used in all decision theories (even VNM). It's not the essence of UDT1.1 at all.

Imagine a variant of Wei's coordination game where both agents prefer the (1,2) outcome to the (2,1) outcome. It's not a tie, but UDT1 still can't solve it, because there's no obvious proof that the first agent returning 1 would imply any particular utility. You need to use the fact that the two agents receive different observations, and optimize the global strategy (input-output map). UDT1.1 supplies that, and so does my approach in the post.
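This variant can be sketched in a few lines. The payoff numbers are assumptions for illustration: say (1,2) is worth 2 to both agents, (2,1) is worth 1, and matching outputs are worth 0. UDT1.1 iterates over whole observation-to-action mappings, so both agents compute the same optimal map and land on (1,2):

```python
from itertools import product

obs = ["A", "B"]   # the two agents' distinct observations
acts = [1, 2]

# Hypothetical common payoffs for the asymmetric variant:
# both agents prefer (1, 2) to (2, 1); matching outputs score nothing.
def utility(out_a, out_b):
    return {(1, 2): 2, (2, 1): 1}.get((out_a, out_b), 0)

# UDT1.1: optimize over entire input->output mappings,
# then each agent applies the winning map to its own observation.
mappings = [dict(zip(obs, choice)) for choice in product(acts, repeat=len(obs))]
best_map = max(mappings, key=lambda m: utility(m["A"], m["B"]))

print(best_map)  # → {'A': 1, 'B': 2}
```

UDT1, by contrast, only conditions on the action of instances sharing its own observation, so fixing agent A's output proves nothing definite about the joint utility.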

(And then Stuart published ADT and Eliezer published FDT, which both mystifyingly dropped that idea and went back to UDT1...)

Do you think cooperative oracles are a good formalization of the generalization you intend?

Unlike cooperative oracles, I'm not trying to solve bargaining. I think game theory will never be reduced to decision theory, that's like going back from Einstein to Newton. The contribution of my post is more about finding the natural boundaries of decision theory within game theory, and arguing that PD shouldn't be included.

Replies from: abramdemski, abramdemski

## ↑ comment by abramdemski · 2018-02-12T23:30:34.128Z · LW(p) · GW(p)

I think game theory will never be reduced to decision theory, that's like going back from Einstein to Newton. The contribution of my post is more about finding the natural boundaries of decision theory within game theory, and arguing that PD shouldn't be included.

You said something in the post that I'm going to assume is closely related:

(Also this shows how Von Neumann-Morgenstern expected utility maximization is basically a restriction of UDT to single player games with perfect recall. For imperfect recall (AMD) or multiple players (Psy-Kosh) you need the full version.)

I think I have two points which may shift you on this:

- If agents are using reflective oracles, which is to a certain extent a natural toy model of agents reasoning about each other (since it solves the grain of truth problem, allowing us to represent Bayesians who can reason about other agents in the same way they reason about everything, rather than in the game-theoretic way where there's a special thing you do in order to reason about agents), then AIXI-like constructions will play Nash equilibria. IE, Nash equilibria are then just a consequence of maximizing expected utility.
- There's a sense in which correlated equilibria are what you get if you want game theory to follow from individual rationality axioms rather than generalize them; this is argued in *Correlated Equilibrium as an Expression of Bayesian Rationality* by Aumann.

## ↑ comment by cousin_it · 2018-02-13T01:00:46.589Z · LW(p) · GW(p)

Yeah, that's similar to how Benya explained reflective oracles to me years ago. It made me very excited about the approach back then. But at some point I realized that to achieve anything better than mutual defection in the PD, the oracle needs to have a "will of its own", pulling the players toward Pareto optimal outcomes. So I started seeing it as another top-down solution to game theory, and my excitement faded.

Maybe not much point in trying to sway my position now, because there are already people who believe in cooperative oracles and more power to them. But this also reminds me of a conversation I had with Patrick several months before the Modal Combat paper came out. Everyone was pretty excited about it then, but I kept saying it would lead to a zoo of solutions, not some unique best solution showing the way forward. Years later, that's how it played out.

We don't have any viable attack on game theory to date, and I can't even imagine what it could look like. In the post I tried to do the next best thing and draw a line: these problems are amenable to decision theory and these aren't. Maybe if I get it just right, one day it will show me an opening.

Replies from: abramdemski

## ↑ comment by abramdemski · 2018-02-13T18:49:32.705Z · LW(p) · GW(p)

Yeah, I also put a significant probability on the "there's going to be a zoo of solutions" model of game theory. I suppose I've recently been more optimistic than usual about non-zoo solutions.

## ↑ comment by abramdemski · 2018-02-12T23:13:27.421Z · LW(p) · GW(p)

Imagine a variant of Wei's coordination game where both agents prefer the (1,2) outcome to the (2,1) outcome. It's not a tie, but UDT1 still can't solve it, because there's no obvious proof that the first agent returning 1 would imply any particular utility. You need to use the fact that the two agents receive different observations, and optimize the global strategy (input-output map). UDT1.1 supplies that, and so does my approach in the post.

Ok. I thought by tie-breaking you were implying equilibrium selection. I don't understand how your approach in the post is doing anything more than tie-breaking, then. I still don't understand any difference between what you're indicating and Jessica's post.

Unlike cooperative oracles, I'm not trying to solve bargaining.

What about agents with *access* to cooperative oracles, but which are conventionally rational (VNM rational) with respect to the probabilities provided by the cooperative oracle? This means they're stuck in Nash equilibria (rather than doing anything nicer via bargaining), but the Nash equilibria are (something close to) Pareto-optimal in the set of Nash equilibria. This means in particular that you get UDT1.1 equilibrium selection; but, also, you get reasonable generalizations of it to agents with different utility functions.

Is *that* similar to what you meant in your final paragraph?

## ↑ comment by cousin_it · 2018-02-13T00:26:01.308Z · LW(p) · GW(p)

Slightly offtopic to your questions (which I'll try to answer in the other branch), but I'm surprised we seem to disagree on some simple stuff...

In my mind UDT1.1 isn't about equilibrium selection:

1) Here's a game that requires equilibrium selection, but doesn't require UDT1.1 (can be solved by UDT1). Alice and Bob are placed in separate rooms with two numbered buttons each. If both press button 1, both win 100 dollars. If both press button 2, both win 200 dollars. If they press different buttons, they get nothing.

2) Here's a game that has only one equilibrium, but requires UDT1.1 (can't be solved by UDT1). Alice and Bob are placed in separate rooms with two numbered buttons each. The experimenter tells each of them which button to press (maybe the same, maybe different). If they both obey, both win 500 dollars. If only one obeys, both win 100 dollars. Otherwise nothing.

Maybe we understand UDT1 and UDT1.1 differently? I'm pretty sure I'm following Wei's intent, where UDT1.1 simply fixes the bug in UDT1's handling of observations.
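The distinction between the two games can be checked by brute force. The sketch below (assuming uniform independent instructions in problem 2, which the comment leaves unspecified) enumerates pure-strategy Nash equilibria of each common-payoff game: problem 1 has two, problem 2 has exactly one.

```python
from itertools import product

def equilibria(strategies, payoff):
    """Pure-strategy Nash equilibria of a two-player common-payoff game."""
    eqs = []
    for sa, sb in product(strategies, repeat=2):
        u = payoff(sa, sb)
        if all(payoff(d, sb) <= u for d in strategies) and \
           all(payoff(sa, d) <= u for d in strategies):
            eqs.append((sa, sb))
    return eqs

# Problem 1: a strategy is just a button press.
def pay1(a, b):
    return {(1, 1): 100, (2, 2): 200}.get((a, b), 0)

# Problem 2: a strategy maps the instruction (1 or 2) to a button press;
# instructions are assumed uniform and independent.
strats2 = [dict(zip((1, 2), c)) for c in product((1, 2), repeat=2)]

def pay2(sa, sb):
    total = 0
    for ia, ib in product((1, 2), repeat=2):
        obeyed = (sa[ia] == ia) + (sb[ib] == ib)
        total += {2: 500, 1: 100, 0: 0}[obeyed]
    return total / 4

print(len(equilibria([1, 2], pay1)))   # problem 1: two equilibria
print(len(equilibria(strats2, pay2)))  # problem 2: one equilibrium (both obey)
```

So problem 1 poses an equilibrium-selection question with no observation-handling difficulty, while problem 2 has nothing to select and is hard for a different reason.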

Replies from: abramdemski

## ↑ comment by abramdemski · 2018-02-13T23:09:22.084Z · LW(p) · GW(p)

The title of the UDT1.1 post is "explicit optimization of global strategy". The key paragraph:

The fix is straightforward in the case where every agent already has the same source code and preferences. UDT1.1, upon receiving input X, would put that input aside and first iterate through all possible input/output mappings that it could implement and determine the logical consequence of choosing each one upon the executions of the world programs that it cares about. After determining the optimal S* that best satisfies its preferences, it then outputs S*(X).

Since optimal global strategies are also Nash equilibria in the framework of Jessica's post, we can think of global policy selection as equilibrium selection (at least to the extent that we buy that framework). Your top-level post also seems to buy this connection (?).

I think your problem #1 superficially doesn't sound like it requires UDT1.1, because it is very plausible that UDT1 can solve it based on a particular structure of correlations between Alice and Bob's actions. But, actually, I suspect we need UDT1.1 to have very good guarantees; UDT1 is solving it via assumptions about correlation structure, not via some proposed mechanism which would systematically believe in such correlations.

I'm unclear on why you're saying problem #2 requires UDT1.1. It is better to obey, unless you think obeying negatively correlates with your other copy obeying. Is that the source of difficulty you're pointing at? We need UDT1.1 not to select equilibrium, but to ensure that we're in any equilibrium at all?

Replies from: cousin_it

## ↑ comment by cousin_it · 2018-02-14T00:54:12.299Z · LW(p) · GW(p)

Ah, I see. You're thinking of both theories in a math-intuition-based setting ("negatively correlates with your other copy" etc). I prefer to use a crisp proof-based setting, so we can discern what we *know* about the theories from what we *hope* they would do in a more fuzzy setting.

UDT1 receives an observation X and then looks for provable facts of the form "if all my instances receiving observation X choose to take a certain action, I'll get a certain utility".

UDT1.1 also receives an observation X, but handles it differently. It looks for provable facts of the form "if all my instances receiving various observations choose to use a certain mapping from observations to actions, I'll get a certain utility". Then it looks up the action corresponding to X in the mapping.

In problem 2, a UDT1 player who's told to press button 1 will look for facts like "if everyone who's told to press button 1 complies, then utility is 500". But there's no easy way to prove such a fact. The utility value can only be inferred from the actions of both players, who might receive different observations. That's why UDT1.1 is needed - to fix UDT1's bug with handling observations.

The crisp setting makes it clear that UDT1.1 is about making more equilibria reachable, not about equilibrium selection. A game can have several equilibria, all of them reachable without UDT1.1, like my problem 1. Or it can have one equilibrium but require UDT1.1 to reach it, like my problem 2.
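A toy rendering of that provability gap, fixing the instruction pair to (1, 2) for illustration (an assumption for the example, not part of the original problem statement): UDT1's conditional pins down only the instances that saw "1", so several utilities remain consistent with it, while every full map UDT1.1 considers yields one definite utility.

```python
from itertools import product

# Crisp version of problem 2, restricted to the single world where
# player A is told 1 and player B is told 2 (an illustrative assumption).
def utility(press_a, press_b, instr_a, instr_b):
    obeyed = (press_a == instr_a) + (press_b == instr_b)
    return {2: 500, 1: 100, 0: 0}[obeyed]

# UDT1, told "1", fixes only the behavior of instances that saw "1".
# Player B saw "2", so B's press is left free: no single utility follows.
udt1_possible = {utility(1, b_press, 1, 2) for b_press in (1, 2)}
print(udt1_possible)  # two consistent values, 100 and 500

# UDT1.1 conditions on a full instruction->press map used by both instances,
# so each candidate map yields exactly one provable utility.
maps = [dict(zip((1, 2), c)) for c in product((1, 2), repeat=2)]
udt11_values = {tuple(m.values()): utility(m[1], m[2], 1, 2) for m in maps}
print(udt11_values)  # the "obey" map (1, 2) provably yields 500
```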

Of course, when we move to a math-intuition-based setting, the difference might become more fuzzy. Maybe UDT1 will solve some problems it couldn't solve before, or maybe not. The only way to know is by formalizing math intuition.

## comment by ESRogs · 2018-02-06T18:37:14.784Z · LW(p) · GW(p)

I realized today that UDT doesn't really need the assumption that other players use UDT. In any multiplayer game with imperfect recall where all players have the same utility function, "everyone using UDT" is a Nash equilibrium that gives everyone their highest possible expected utility. So you can just use it unilaterally.

Not sure if I'm misunderstanding, but wouldn't it be more accurate to say that you're *justified* in assuming the other players use UDT, not that you don't have to assume it?

If the other players aren't using UDT, it's not necessarily your best option, right? (E.g. in Psy-Kosh's problem, if the other players have all decided to say "yea" for some reason, you don't want to unilaterally say "nay.")

Replies from: cousin_it

## comment by ESRogs · 2018-02-06T18:23:10.457Z · LW(p) · GW(p)

In any multiplayer game with imperfect recall where all players have the same utility function, "everyone using UDT" is a Nash equilibrium that gives everyone their highest possible expected utility.

Is it important that the players have imperfect recall? Does it work if they have perfect recall as well (in which case you're just pointing out that perfect recall is not required)?

Replies from: cousin_it

## comment by SquirrelInHell · 2018-02-07T13:56:14.634Z · LW(p) · GW(p)

I realized today that UDT doesn't really need the assumption that other players use UDT.

Was there ever such an assumption? I recall a formulation in which the possible "worlds" include everything that feeds into the decision algorithm, and it doesn't matter if there are any games and/or other players inside of those worlds (their treatment is the same, as are corresponding reasons for using UDT).

Replies from: cousin_it

## ↑ comment by cousin_it · 2018-02-07T14:55:29.446Z · LW(p) · GW(p)

Yeah, it's a bit subtle and I'm not sure it even makes sense. But the idea goes something like this.

Most formulations of UDT are self-referential: "determine the logical consequences of *this algorithm* behaving so-and-so". That automatically takes into account all other instances of *this algorithm* that happen to exist in the world, as you describe. But in this post I'm trying to handwave a non-self-referential version: "If you're playing a game where everyone has the same utility function, follow the simplest Nash equilibrium maximizing everyone's expected utility, no matter how your beliefs change during the game". That can be seen as an individually rational decision! The other players don't have to be isomorphic to you, as long as they are rational enough and have no incentive to cheat you.

That goes against something I've been telling people for years - that UDT cannot be used in real life, because the self-referential version requires proving detailed theorems about other people's minds. The idea in this post *can* be used in real life. The fact that it can't handle PD is a nice sanity check, because cooperating in PD requires proving detailed theorems to prevent cheating, while the problems I'm solving have no incentives to cheat in the first place.