How to formalize predictors
post by cousin_it · 2018-06-28T13:08:11.549Z · LW · GW · 7 comments
In Newcomb's Problem, what information does the predictor get if you flip a coin? Here are some options:
1) "Bob has 50:50 odds of one-boxing or two-boxing"
2) "Bob will one-box" (chosen randomly with the same odds as your decision)
3) "Bob will one-box" (guaranteed to be the same as your decision)
I think (1) is a poor formalization, because the game tree becomes unreasonably huge, and some strategies of the predictor (like "fill the box unless the probability of two-boxing is exactly 1") leave no optimal strategy for the player.
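As a minimal sketch of that last point (assuming the usual payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one), here is what the player's expected payoff looks like against that predictor strategy under (1):

```python
# Sketch: why "fill the box unless P(two-box) is exactly 1" leaves no optimal strategy.
# Assumed payoffs: $1,000,000 in the opaque box, $1,000 in the transparent one.

def expected_payoff(p_two_box: float) -> float:
    """Player's expected payoff when the predictor sees the mixing probability (option 1)."""
    big_box = 1_000_000 if p_two_box < 1.0 else 0  # predictor fills unless P(two-box) == 1
    return (1 - p_two_box) * big_box + p_two_box * (big_box + 1_000)

for p in [0.0, 0.5, 0.9, 0.99, 0.999, 1.0]:
    print(f"P(two-box) = {p:<6} expected payoff = {expected_payoff(p):,.0f}")

# The payoff climbs toward 1,001,000 as p -> 1, then drops to 1,000 at p = 1:
# the supremum is never attained, so no strategy of the player is optimal.
```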
And (3) seems like a poor formalization because it makes the predictor work too hard. Now it must predict all possible sources of randomness you might use, not just your internal decision-making.
That leaves us with (2). Basically we allow the predictor to "sample" your decision at any information set in the game. That means we can add some extra nodes to the player's information sets and get rid of the predictor entirely, ending up with a single-player game. Newcomb's Problem and Counterfactual Mugging can be easily analyzed in this way, leading to the same answers we had before (one-box, pay up). It also gives a crisp formalization of the transparent-boxes Newcomb problem, where we sample the player's decision at the information set of "seeing both boxes filled".
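Here's a rough sketch of that single-player reduction for Newcomb's Problem under (2), again with the assumed payoffs from above; the "sample" is modeled as an independent draw from the player's own mixed strategy, so the whole problem collapses to optimizing over P(two-box):

```python
# Sketch of option (2) for Newcomb's Problem: the prediction is a fresh, independent
# sample of the player's decision, so the game reduces to a single-player optimization.

def expected_payoff_option2(p_two_box: float) -> float:
    """Expected payoff when the prediction is an independent sample of the player's decision."""
    p = p_two_box
    box_filled = 1 - p                       # probability the predictor's sample says "one-box"
    payoff_if_filled = (1 - p) * 1_000_000 + p * 1_001_000
    payoff_if_empty = (1 - p) * 0 + p * 1_000
    return box_filled * payoff_if_filled + (1 - box_filled) * payoff_if_empty

best = max([i / 100 for i in range(101)], key=expected_payoff_option2)
print(best, expected_payoff_option2(best))   # 0.0 1000000.0 -- pure one-boxing is optimal
```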
I think this might end up being useful for bounded versions of Counterfactual Mugging, which are confusing to everyone right now. But it also feels good to nail down my understanding of the simple version.
Comments
comment by Shmi (shminux) · 2018-06-28T16:10:53.387Z · LW(p) · GW(p)
To me the answer depends on what kind of predictor it is. Some options below, in the order of increasing power.
a) Mind reader: Can it predict *only* Bob's mind and not the rest of the world? Then it's 1 before the flip and 2 after, since in this case the predictor's power is the same as Bob's. That's the classic way to draw the game of rock-paper-scissors against someone who can read your mind.
b) Laplace demon: Can it accurately predict the movements of Bob's coin-flipping hand, the currents in the air, and other local classical factors that determine how the coin lands? Then it's 3. Unless Bob's coin flip involves quantum randomness, in which case it's 2.
c) Demiurge: Can it accurately predict the whole of the universe because it has seen it run from the Big Bang to heat death, and now is simply replaying the tape? Then it is 3.
Also, any good links to the current state of research into "bounded versions of Counterfactual Mugging, which are confusing to everyone right now"?
↑ comment by cousin_it · 2018-06-28T16:36:27.857Z · LW(p) · GW(p)
Yeah, see this post and the links from it for examples. I don't understand most of it, to be honest. For this post I was just trying to figure out the cleanest theory.
↑ comment by Shmi (shminux) · 2018-06-29T00:58:41.526Z · LW(p) · GW(p)
Had a brief look and gave up. Maybe my math skills are not up to snuff, or the inferential distance is too large, or both.
↑ comment by cousin_it · 2018-06-29T07:57:28.438Z · LW(p) · GW(p)
I just came up with another argument for why (2) is nicer than (1): the predictor can use (2) multiple times and get knowledge that's similar to (1), but without the infinite precision that makes (1) problematic.
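A toy illustration of that point (the function name and sample counts are purely illustrative): the predictor draws independent samples of the player's decision and uses the observed frequency, which only ever pins down the mixing probability up to sampling error:

```python
# Sketch: using option (2) repeatedly approximates option (1)'s probability estimate,
# but only to finite precision -- no "exactly 1" conditions are possible.
import random

def estimate_two_box_probability(player_mix: float, n_samples: int) -> float:
    """Predictor draws n independent samples of the player's decision and takes the frequency."""
    samples = [random.random() < player_mix for _ in range(n_samples)]
    return sum(samples) / n_samples

random.seed(0)
for n in [10, 100, 10_000]:
    print(n, estimate_two_box_probability(0.5, n))

# The estimate converges to 0.5, but a predictor strategy like "fill the box unless the
# probability of two-boxing is exactly 1" can't be implemented from finitely many samples.
```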
As for (3), it can also be translated into single-player game theory by doing another kind of modification on the game tree, and it gives the same answers as (2) in Newcomb's Problem and Counterfactual Mugging. But (2) is more in line with the rest of game theory, where players can have private randomness, and it's what you get when you replace predictors with amnesia [LW · GW] or Nash equilibrium prediction [LW · GW].
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-01-30T11:58:07.484Z · LW(p) · GW(p)
Like Shminux, I say: Why not all three? You've just very helpfully pointed out that there are different ways in which someone can be predictable. So going forward, when I contemplate decision theory hypotheticals in which someone is predictable, I'll make sure to specify which of these three kinds is in effect. Usually I'll consider all three kinds, to see if it changes the results.
comment by Caspar Oesterheld (Caspar42) · 2021-02-02T16:50:21.128Z · LW(p) · GW(p)
As I mentioned elsewhere, I don't really understand...
>I think (1) is a poor formalization, because the game tree becomes unreasonably huge
What game tree? Why represent these decision problems as any kind of trees, or game trees in particular? At least some problems of this type can be represented efficiently, using various methods to represent functions on the unit simplex (including decision trees)... Also: is this decision-theoretically relevant? That is, are you saying that a good decision theory doesn't have to deal with (1) because it is cumbersome to write out (some) problems of this type? But *why* is this decision-theoretically relevant?
>some strategies of the predictor (like "fill the box unless the probability of two-boxing is exactly 1") leave no optimal strategy for the player.
Well, there are less radical ways of addressing this. E.g., expected-utility-type theories just assign a preference order to the set of available actions. We could be content with that and accept that in some cases there is no optimal action, as long as our decision theory ranks the available options in the right order... Or we could restrict attention to problems where an optimal strategy exists despite this dependence.
>And (3) seems like a poor formalization because it makes the predictor work too hard. Now it must predict all possible sources of randomness you might use, not just your internal decision-making.
For this reason, I always assume that predictors in my Newcomb-like problems are compensated appropriately and don't work on weekends! Seriously, though: what does "too hard" mean here? Is this just the point that it is in practice easy to construct agents that cannot be realistically predicted in this way when they don't want to be predicted? If so: I find that at least somewhat convincing, though I'd still be interested in developing theory that doesn't hinge on this ability.
↑ comment by cousin_it · 2021-02-03T11:36:15.259Z · LW(p) · GW(p)
I guess I just like game theory. "Alice chooses a box and Bob predicts her action" can be viewed as a game with Alice and Bob as players, or with only Alice as a player and Bob as the shape of the game tree, but in any case it seems that option (2) from the post leads to games where solutions/equilibria always exist, while (1) doesn't. Also see my other comment about amnesia; it's basically the same argument. It's fine if it's not a strong argument for you.