Newcomb's Problem standard positions

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-06T17:05:23.522Z

Marion Ledwig's dissertation summarizes much of the existing thinking that's gone into Newcomb's Problem.

(For the record, I myself am neither an evidential decision theorist, nor a causal decision theorist in the current sense.  My view is not easily summarized, but it is reflectively consistent without need of precommitment or similar dodges; my agents see no need to modify their own source code or invoke abnormal decision procedures on Newcomblike problems.)

22 comments


comment by Vladimir_Nesov · 2009-04-06T18:07:23.547Z · LW(p) · GW(p)

Why is your view not easily summarized? From what I see, the solution satisfying all of the requirements looks rather simple, without even a need to define causality and the like. I may write it up at some point in the coming months, after some lingering confusions (not crucial to the main point) are resolved.

Basically, all the local decisions come from the same computation that would be performed to set the most general precommitment for all possible states of the world. The expected utility maximization is defined only once, on the global state space, and then the actual actions only retrieve the global solution, given encountered observations. The observations don't change the state space over which the expected utility optimization is defined (and don't change the optimal global solution or preference order on the global solutions), only what the decisions in a given (counterfactual) branch can affect. Since the global precommitment is the only thing that defines the local agents' decisions, the "commitment" part can be dropped, and the agents' actions can just be defined to follow the resulting preference order.

I admit, it'd take some work to write that up understandably, but it doesn't seem to involve difficult technical issues.
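A minimal sketch of this global-policy picture, for a toy world with two possible observations (the observation names, actions, and utility numbers are purely illustrative assumptions, not a formalization of the above):

```python
from itertools import product

# Toy illustration: the agent maps every possible observation to an action
# in advance (the "most general precommitment"), optimizing expected
# utility once over whole policies rather than per-branch.
observations = ["obs_A", "obs_B"]   # hypothetical observation histories
actions = ["one_box", "two_box"]

# Assumed global utility of a complete policy; the numbers are made up
# purely so that the example has a unique optimum.
def global_utility(policy):
    payoff = {"one_box": 10, "two_box": 1}
    return sum(payoff[policy[o]] for o in observations)

# Optimize once over the global space of policies...
policies = [dict(zip(observations, acts))
            for acts in product(actions, repeat=len(observations))]
best_policy = max(policies, key=global_utility)

# ...then each local decision merely retrieves the precomputed answer.
def act(observation):
    return best_policy[observation]

print(act("obs_A"))  # one_box
```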

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-04-07T03:03:49.904Z · LW(p) · GW(p)

I think your summary is understandable enough, but I don't agree that observations should never change the optimal global solution or preference order on the global solutions, because observations can tell you which observer you are in the world, and different observers can have different utility functions. See my counter-example in a separate comment at http://lesswrong.com/lw/90/newcombs_problem_standard_positions/5u4#comments.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-07T10:49:36.736Z · LW(p) · GW(p)

From the global point of view, you only consider different possible experiences that imply different possible situations. Nothing changes, because everything is determined from the global viewpoint. If you want to determine certain decisions in response to certain possible observations, you also specify that globally, and set it in stone. Whatever happens to you, you can (mathematically speaking) consider it in advance, as an input sequence to your cognitive algorithm, and prepare the plan of action in response. The fact that you participate in a certain mind-copying experiment is also data to which you respond in a certain way.

This is of course not for human beings; it is for something that holds much more strongly to reflective consistency. And in that setting, changing preferences is unacceptable.

comment by cousin_it · 2009-04-06T19:13:31.128Z · LW(p) · GW(p)

With my math background, the thesis read like total gibberish to me. Tons and tons of not even wrong, like the philosophical tomes written on the unexpected hanging paradox before the logical contradiction due to self-reference was pointed out.

But one passage stood out as meaningful:

the predictor just has to be a little bit better than chance for Newcomb's problem to arise... One doesn't need a good psychologist for that. A friend who knows the decision maker well is enough.

This passage is instructively wrong. To screw with such an Omega, just ask a different friend who knows you equally well, take their judgement and do the reverse. (Case 3 "Terminating Omega" in my post.) This indicates the possibility that the problem statement may be a self-contradictory lie, just like the setup of the unexpected hanging paradox. Of course, the amount of computation needed to bring out the contradiction depends on how much mystical power you award to Omega.

I apologize for getting on my high horse here. This discussion should come to an end somehow.

Replies from: dclayh, Nick_Tarleton
comment by dclayh · 2009-04-06T21:12:27.455Z · LW(p) · GW(p)

This passage is instructively wrong. To screw with such an Omega, just ask a different friend who knows you equally well, take their judgement and do the reverse.

I think this reply is also illuminating: the stated goal in Newcomb's problem is to maximize your financial return. If your goal is to make Omega have predicted wrongly, you are solving a different problem.

I do agree that the problem may be subtly self-contradictory. Could you point me to your preferred writeup of the Unexpected Hanging Paradox?

Replies from: cousin_it, whpearson
comment by cousin_it · 2009-04-06T21:48:33.811Z · LW(p) · GW(p)

Uh, Omega has no business deciding what problem I'm solving.

Could you point me to your preferred writeup of the Unexpected Hanging Paradox?

The solution I consider definitively correct is outlined on the Wikipedia page, but it is simple enough to express here. The judge actually says "you can't deduce the day you'll be hanged, even if you use this statement as an axiom too". This phrase is self-referential, like the phrase "this statement is false". Although not all self-referential statements are self-contradictory, this one turns out to be. The proof of self-contradiction simply follows the prisoner's reasoning. This line of attack seems to have been first rigorously formalized by Fitch in "A Goedelized formulation of the prediction paradox" (I can't find the full text online). And that's all there is to it.
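As a toy illustration of that elimination argument (just the prisoner's informal backward induction, not Fitch's formalization):

```python
# The prisoner's backward induction over a five-day week: a day survives
# only if some later day is still possible on its morning; otherwise the
# hanging would be deducible that morning, contradicting the judge.
days = [1, 2, 3, 4, 5]
possible = set(days)
for d in reversed(days):
    later_day_still_possible = any(x > d for x in possible)
    if not later_day_still_possible:
        possible.discard(d)

print(possible)  # set(): every day gets eliminated, so the judge's
                 # statement, taken as an axiom, cannot be satisfied
```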

Replies from: dclayh
comment by dclayh · 2009-04-06T22:09:13.516Z · LW(p) · GW(p)

Uh, Omega has no business deciding what problem I'm solving.

No, but if you're solving something other than Newcomb's problem, why discuss it on this post?

Replies from: cousin_it
comment by cousin_it · 2009-04-06T22:18:12.294Z · LW(p) · GW(p)

I'm not solving it in the sense of utility maximization. I'm solving it in the sense of demonstrating that the input conditions might well be self-contradictory, using any means available.

Replies from: dclayh
comment by dclayh · 2009-04-06T22:41:50.971Z · LW(p) · GW(p)

Okay, yes, I see what you're trying to do, and the comment is retracted.

comment by whpearson · 2009-04-06T21:36:58.015Z · LW(p) · GW(p)

Maximising your financial return entails making Omega's prediction wrong: if you can get it to predict that you one-box when you actually two-box, you maximise your financial return.

Replies from: Eliezer_Yudkowsky, dclayh
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-06T21:38:48.526Z · LW(p) · GW(p)

Well, it had better not be predictable that you're going to try that. I mean, at the point where Omega realizes, "Hey, this guy is going to try an elaborate clever strategy to get me to fill box B and then two-box," it's pretty much got you pegged.

Replies from: ciphergoth, whpearson
comment by Paul Crowley (ciphergoth) · 2009-04-06T23:43:48.349Z · LW(p) · GW(p)

That's not so - the "elaborate clever strategy" does include a chance that you'll one-box. What does the payoff matrix look like from Omega's side?

comment by whpearson · 2009-04-06T21:46:57.165Z · LW(p) · GW(p)

I never said it was an easy thing to do. I just meant that such a situation is the maximum, if it is reachable. Whether it is reachable depends upon the implementation of Omega in the real world.

comment by dclayh · 2009-04-06T22:36:22.253Z · LW(p) · GW(p)

My point is merely that getting Omega to predict wrong is easy (flip a coin). Getting an expectation value higher than $1 million is what's hard (and likely impossible, if Omega is much smarter than you, as Eliezer says above).

comment by Nick_Tarleton · 2009-04-06T19:28:02.277Z · LW(p) · GW(p)

To screw with such an Omega, just ask a different friend who knows you equally well, take their judgement and do the reverse.

I believe this can be made consistent. Your first friend will predict that you will ask your second friend. Your second friend will predict that you will do the opposite of whatever they say, and so won't be able to predict anything. If you ever choose, you'll have to fall back on something consistent, which your first friend will consequently predict.

If you force 2F to make some arbitrary prediction, though, then if 1F can predict 2F's prediction, 1F will predict you'll do the opposite. If 1F can't do that, he'll do whatever he would do if you used a quantum randomizer (I believe this is usually said to be not putting anything in the box).

Replies from: cousin_it, cousin_it
comment by cousin_it · 2009-04-06T20:15:43.459Z · LW(p) · GW(p)

You have escalated the mystical power of Omega - surely it's no longer just a human friend who knows you well - supporting my point about the quoted passage. If your new Omegas aren't yet running full simulations (a case resolved by indexical uncertainty) but rather some kind of coarse-grained approximations, then I should have enough sub-pixel and off-scene freedom to condition my action on 2F's response with neither 1F nor 2F knowing it. If you have some other mechanism of how Omega might work, please elaborate: I need to understand an Omega to screw it up.

comment by cousin_it · 2009-04-06T19:54:46.849Z · LW(p) · GW(p)

To determine exactly how to screw with your Omega, I need to understand what it does. If it's running something less than a full simulation, something coarse-grained, I can exploit it: condition on a sub-pixel or off-scene detail. (The full simulation scenario is solved by indexical uncertainty.) In the epic thread no one has yet produced a demystified Omega that can't be screwed with. Taboo "predict" and explain.

comment by jslocum · 2012-02-21T03:03:31.439Z · LW(p) · GW(p)

I've devised some additional scenarios that I have found to be helpful in contemplating this problem.

Scenario 1: Omega proposes Newcomb's problem to you. However, there is a twist: before he scans you, you may choose one of two robots to perform the box opening for you. Robot A will only open the $1M box; robot B will open both.

Scenario 2: You wake up and suddenly find yourself in a locked room with two boxes, and a note from Omega: "I've scanned a hapless citizen (not you), predicted their course of action, and placed the appropriate amount of money in the two boxes present. Choose one box or two, and then you may go."

In scenario 1, both evidential and causal decision theories agree that you should one-box. In scenario 2, they both agree that you should two-box. Now, if we replace the robots with your future self and the hapless citizen with your past self, S1 becomes "what should you do prior to being scanned by Omega" and S2 reverts to the original problem. So, treating the possibility of fooling Omega as negligible, it can be seen that maximizing the payout from Newcomb's problem is really about finding a way to cause your future self to one-box.
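A rough way to see the agreement in each scenario, assuming a perfectly accurate scan and the usual $1,000 / $1,000,000 payoffs (the numbers and function names below are only illustrative):

```python
# Scenario 1: the robot is chosen *before* the scan, so the choice
# causally determines the prediction. Both CDT and EDT evaluate:
def s1_payoff(robot):
    prediction = "one_box" if robot == "A" else "two_box"  # scan reflects the robot
    big_box = 1_000_000 if prediction == "one_box" else 0
    return big_box if robot == "A" else big_box + 1_000

print(s1_payoff("A"), s1_payoff("B"))  # 1000000 1000 -> pick robot A

# Scenario 2: the boxes were filled from a scan of someone else, so their
# contents are independent of (and uncorrelated with) your choice. For
# either possible contents, taking both boxes adds $1,000:
for big_box in (0, 1_000_000):
    print(big_box, "vs", big_box + 1_000)  # two-boxing dominates
```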

What options are available, to either rational agents or humans, for exerting causal power on their future selves? A human might make a promise to themselves (interesting question: is a promise a precommitment or a self-modification?), ask another person (or other agent) to provide disincentives for two-boxing (e.g. "Hey, Bob, I bet you I'll one-box. If I win, I get $1; if you win, you get $1M."), or find some way of modifying the environment to prevent their future self from two-boxing (e.g. drop the second box down a well). A general rational agent has similar options: modify itself into something that will one-box, and/or modify the environment so that one-boxing is the best course of action for its future self.

So now we have two solutions, but can we do better? If rational agent 'Alpha' doesn't want to rely on external mechanisms to coerce its future self's behavior, and also does not want to introduce a hack into its source code, what general solution can it adopt that solves this general class of problem? I have not yet read the Timeless Decision Theory paper; I think I'll ponder this question before doing so, and see if I encounter any interesting thoughts.

comment by Wei Dai (Wei_Dai) · 2009-04-07T02:53:22.816Z · LW(p) · GW(p)

It's not clear that reflective consistency is feasible for human beings.

Consider the following thought experiment. You’re about to be copied either once (with probability .99) or twice (with probability .01). After that, one of your two or three instances will be randomly selected to be the decision-maker. He will get to choose from the following options, without knowing how many copies were made:

A: The decision-maker will have a pleasant experience. The other(s) will have unpleasant experience(s).

B: The decision-maker will have an unpleasant experience. The other(s) will have pleasant experience(s).

Presumably, you’d like to commit your future self to pick option B. But without some sort of external commitment device, it’s hard to see how you can prevent your future self from picking option A.
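To make the tension concrete, suppose that before copying you weight every future instance's experience equally (an added assumption, purely for illustration):

```python
# Expected number of instances having a pleasant experience, from the
# pre-copying perspective: 2 instances with probability .99, 3 with .01.
def expected_pleasant(option):
    # A: only the decision-maker is pleasant; B: everyone else is.
    pleasant_counts = {"A": {2: 1, 3: 1}, "B": {2: 1, 3: 2}}
    return 0.99 * pleasant_counts[option][2] + 0.01 * pleasant_counts[option][3]

print(expected_pleasant("A"))  # 1.0
print(expected_pleasant("B"))  # 1.01 -> B is better before copying, yet the
                               # selected decision-maker personally does
                               # better under A, hence the inconsistency
```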

Replies from: cousin_it
comment by cousin_it · 2009-04-07T14:34:36.603Z · LW(p) · GW(p)

Why so complicated? Just split into two selves and play Prisoner's Dilemma with each other. A philosophically-inclined person could have major fun with this experiment, e.g. inventing some sort of Agentless Decision Theory, while mathematically-inclined people enjoy the show from a safe distance.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-04-07T20:20:38.651Z · LW(p) · GW(p)

I structured my thought experiment that way specifically to avoid superrationality-type justifications for playing Cooperate in PD.

comment by BenRayfield · 2011-01-02T21:24:40.786Z · LW(p) · GW(p)

This is my solution to Newcomb's Paradox.

Causal decision theory is a subset of evidential decision theory. We have much evidence that information flows from past to future. If we observe new evidence that information flows in the other direction, or that the world works differently than we think in a way that allows Omega (or anyone else) to repeatedly react to the future before it happens, then we should give more weight to the non-causal parts of decision theory. Depending on what we observe, our thoughts can move gradually between the various types of decision theory, using evidential decision theory as the meta-algorithm to choose the weighting of the other algorithms.

Observations are all we have. They may show that information flows from past to future, or that Omega predicts accurately, or some combination. In this kind of decision theory, we estimate the size of the evidence for each kind of decision theory.

The evidence for causal decision theory is large, but it can be estimated as the log base 2 of the number of synapses in a human brain (10^15) multiplied by Dunbar's number (http://en.wikipedia.org/wiki/Dunbar%27s_number), which is 150. The evidence may be more, but that is a limit on how advanced a thing any group of people, of any size, can learn (without changing how we learn). That result is around 57 bits.

The game played in Newcomb's Paradox has 2 important choices, one-boxing and two-boxing, so I used log base 2. Combining the evidence from all previous games and other ways Newcomb's Paradox is played, if the evidence that Omega is good at predicting builds up to exceed 57 bits, then in choices related to that, I would be more likely to one-box. If there have only been 56 observations, and in all of them two-boxing lost or one-boxing won, then I would be more likely to two-box, because there are more observations that information flows from past to future and that Omega doesn't know what I will do.
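A minimal sketch of that rule, assuming (as above) that each observed game where Omega predicted correctly contributes one bit of evidence; the names and structure are only illustrative:

```python
import math

# Estimated upper bound on the evidence behind ordinary causal reasoning:
# log2(synapses in a human brain * Dunbar's number) ~= 57 bits.
CAUSAL_EVIDENCE_BITS = math.log2(10**15 * 150)  # ~57.06

def choose(correct_omega_predictions: int) -> str:
    """Compare the evidence for Omega's accuracy (one bit per observed
    correct prediction) against the ~57-bit causal-evidence estimate."""
    if correct_omega_predictions > CAUSAL_EVIDENCE_BITS:
        return "one-box"
    return "two-box"

print(round(CAUSAL_EVIDENCE_BITS, 2))  # 57.06
print(choose(56))  # two-box
print(choose(58))  # one-box
```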

The Newcomb threshold of 57 bits is only an estimate for a specific Newcomb problem. For each choice, we should reconsider the evidence for the different kinds of decision theory, so we can learn to win Newcomb games more often than we lose.