My Expected Value Approach to Newcomb's Problem

post by Peter Wildeford (peter_hurford) · 2011-08-05T07:24:50.985Z · LW · GW · Legacy · 18 comments


Related to: Newcomb's Problem and Regret of Rationality and Newcomb's Problem: A Problem for Causal Decision Theories.

 

From the list of standard problems in rationality that have been talked to death but still don't have a strong consensus, allow me to re-present Newcomb's Problem:

A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game.  In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.

Box A is transparent and contains a thousand dollars.
Box B is opaque, and contains either a million dollars, or nothing.

You can take both boxes, or take only box B.

And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.

Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars.  (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)

Before you make your choice, Omega has flown off and moved on to its next game.  Box B is already empty or already full.

Omega drops two boxes on the ground in front of you and flies off.

Do you take both boxes, or only box B?

 

This problem is famous not only because the answer is extensively controversial, but also because people who think they should one-box (take only box B) or two-box (take both box A and box B) are almost always very certain of their answer.  I'm one of those people -- I'm very certain I should one-box.

Since I missed out on earlier Newcomb's discussion, I'd like to explore my approach to the problem here.

 

In Newcomb's Problem, there are only four possible outcomes, and they end up in this tree:

Omega predicts you will one-box:

- You one-box: you get $1,000,000.
- You two-box: you get $1,001,000.

Omega predicts you will two-box:

- You one-box: you get $0.
- You two-box: you get $1,000.

 

We can use this knowledge to create an expected value formula for both options (where P is the chance Omega predicts your choice correctly):

EV(One-box) = P*$1000000 + (1-P)*$0

EV(Two-box) = P*$1000 + (1-P)*$1001000

 

If we set the two equations equal to each other, we can solve for the value of P that would make both strategies equally viable in expected value.  That value is 50.05%.  So as long as we are confident that Omega has a better than 50.05% chance of predicting our choice, the expected value formula says we can expect to earn more from one-boxing.
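
For concreteness, here's a quick Python sketch of that comparison (the payoffs come straight from the outcome tree above, and the break-even probability is just the algebraic solution of setting the two formulas equal):

```python
# P is the probability that Omega predicts your choice correctly.

def ev_one_box(p):
    return p * 1_000_000 + (1 - p) * 0

def ev_two_box(p):
    return p * 1_000 + (1 - p) * 1_001_000

# Solve p*1000000 = p*1000 + (1-p)*1001000 for p.
break_even = 1_001_000 / 2_000_000   # = 0.5005, i.e. 50.05%

for p in (0.4, break_even, 0.6, 0.99):
    print(f"P = {p:.4f}: one-box EV = ${ev_one_box(p):,.0f}, "
          f"two-box EV = ${ev_two_box(p):,.0f}")
```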

 

But what of the dominant strategy approach?  Clearly, if Omega predicted one-box, you earn more by two-boxing; and if Omega predicted two-box, you also earn more by two-boxing.  Does this not count as a viable approach, and a worthy defense of two-boxing?

However, following this logic amounts to betting that Omega will predict incorrectly... and whenever P is above 50.05%, that is a losing bet.

 

Speaking of likelihood, chances are much better that I'm being way too naive about this than that I've actually stumbled upon something workable, but that is what discussion is for.

 

18 comments


comment by cousin_it · 2011-08-05T10:52:33.915Z · LW(p) · GW(p)

You're relying on the fact that you have uncertainty about Omega's prediction, which is really an accidental feature of the problem, not shared by other problems in the same vein.

Imagine a variant where both boxes are transparent and you can see what's inside, but the contents of the boxes were still determined by Omega's prediction of your future decision. (I think this formulation is due to Gary Drescher.) I'm a one-boxer in that variant too, how about you? Also see Parfit's Hitchhiker, where the predictor's decision depends on what you would do if you already knew the predictor decided in your favor, and Counterfactual Mugging, where you already know that your decision cannot help the current version of you (but you'd precommit to it nonetheless).

The most general solution to such problems that we currently know is Wei Dai's UDT. Informally it goes something like this: "choose your action so that the fact of your choosing it in your current situation logically implies the highest expected utility (weighted over all a priori possible worlds before you learned your current situation) compared to all other actions you could take in your current situation".
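
Very roughly, and only as an illustration of the informal statement above (this is not Wei Dai's actual formalism; the helper udt_choose and the world representation are invented for the sketch), scoring each action by the expected utility it would imply across a priori possible worlds might look like this:

```python
# A rough illustrative sketch, not UDT proper: each "world" is a pair
# (prior probability, outcome function), where outcome_fn(action) returns
# the utility you'd receive in that world if your decision procedure
# output `action` -- including any predictors in that world reacting to it.

def udt_choose(actions, worlds):
    def expected_utility(action):
        return sum(prob * outcome_fn(action) for prob, outcome_fn in worlds)
    return max(actions, key=expected_utility)

# Newcomb with a (hypothetically) perfect predictor: Omega's filling of
# box B is itself a function of the action your procedure outputs.
worlds = [(1.0, lambda a: 1_000_000 if a == "one-box" else 1_000)]
print(udt_choose(["one-box", "two-box"], worlds))   # -> one-box
```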

Replies from: Bongo, handoflixue, Manfred, Dr_Manhattan
comment by Bongo · 2011-08-05T13:07:12.228Z · LW(p) · GW(p)

Extremely counterfactual mugging is the simplest such variation IMO. Though it has the same structure as Parfit's Hitchhiker, it's better because issues of trust and keeping promises don't come into it. Here it is:

Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.

Omega asks you to pay him $100. Do you pay?
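
A minimal sketch of the two dispositions' ex-ante payoffs, assuming a perfect predictor (the helper name is purely illustrative). The tension, of course, is that once you are actually asked, paying looks like a pure $100 loss, yet only the paying disposition ever collects the $1000:

```python
def ex_ante_payoff(would_pay_if_asked: bool) -> int:
    if would_pay_if_asked:
        # Omega predicts a payer and awards $1000; with a perfect
        # predictor the "pay $100" branch never actually happens.
        return 1000
    else:
        # Omega predicts a refuser and asks for $100; you refuse, net $0.
        return 0

print(ex_ante_payoff(True), ex_ante_payoff(False))   # 1000 0
```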

comment by handoflixue · 2011-08-05T20:35:01.097Z · LW(p) · GW(p)

Imagine a variant where both boxes are transparent and you can see what's inside

That seems like a weird situation. Given two boxes, both of them with money, I'd take both. Given instead that one box is empty, I'd just take the one with money. So I'd default to doing whatever Omega didn't predict, barring me being told in advance about the situation and precommitting to one-boxing.

Replies from: cousin_it
comment by cousin_it · 2011-08-05T20:41:07.040Z · LW(p) · GW(p)

Sorry for not making things clear from the start. In Gary's version of the transparent boxes problem, Omega doesn't predict what you will do, it predicts what you would do if both boxes contained money. Your actions in the other case are irrelevant to Omega. Would you like to change your decision now?

Replies from: handoflixue
comment by handoflixue · 2011-08-05T21:41:48.612Z · LW(p) · GW(p)

So, basically, I know that if I take both boxes, and both boxes have money, I'm either in a simulation or Omega was wrong? In that case, precommitting to one-boxing seems sensible.

Replies from: orthonormal
comment by orthonormal · 2011-08-06T18:02:39.679Z · LW(p) · GW(p)

Drescher then goes on to consider the case where you know that Omega has a fixed 99% chance of implementing this algorithm, and a 1% chance of instead implementing the opposite of this algorithm, and argues that you should still one-box in that case if you see the million.

comment by Manfred · 2011-08-05T17:57:48.918Z · LW(p) · GW(p)

"choose your action so that the fact of your choosing it in your current situation logically implies the highest expected utility (weighted over all apriori possible worlds before you learned your current situation) compared to all other actions you could take in your current situation".

That sounds awkward. Would you say it's equivalent to "choose globally winning strategies, not just locally winning actions?"

Replies from: cousin_it
comment by cousin_it · 2011-08-05T20:46:56.478Z · LW(p) · GW(p)

As far as I know, you understand UDT and can answer that question yourself :-) But to me your formulation sounds a little vague. If a newbie tries to use it to solve Counterfactual Mugging, I think he/she may get confused about the intended meaning of "global".

Replies from: Manfred
comment by Manfred · 2011-08-05T21:00:36.946Z · LW(p) · GW(p)

I still don't know if I understand UDT :D

And yeah, "globally winning" probably should have been replaced with "optimal," since the "local" means something specific about payoff matrices and I don't want to imply the corresponding "global."

comment by Dr_Manhattan · 2011-08-05T15:45:19.423Z · LW(p) · GW(p)

Imagine a variant where both boxes are transparent and you can see what's inside, but the contents of the boxes were still determined by Omega's prediction of your future decision. (I think this formulation is due to Gary Drescher.) I'm a one-boxer in that variant too, how about you?

Of course you are, Omega says so! Short of being infinitely confident in Omega's abilities (and my understanding of them), I'd reach for both boxes. Are you predicting I will see the million dissolve into thin air?

Are you trying to pre-commit in order to encourage potential Omegas to hand over the money? Is there more extensive discussion of this variant?

Replies from: cousin_it, novalis
comment by cousin_it · 2011-08-05T20:29:35.934Z · LW(p) · GW(p)

I'd reach for both boxes. Are you predicting I will see the million dissolve into thin air?

No. Based on your comment, I'm predicting you won't see the million in the first place.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2011-08-05T22:40:58.504Z · LW(p) · GW(p)

Isn't the problem definition that I see both boxes full?

Replies from: JGWeissman
comment by JGWeissman · 2011-08-05T22:49:04.137Z · LW(p) · GW(p)

No, the problem definition is that if Omega predicts that, upon seeing both boxes full, you will take just Box B, then you will see both boxes full; otherwise you will see just Box A full.
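
As a sketch of that rule (treating Omega's prediction as an input we can simply hand to a function; the names here are purely illustrative):

```python
def fill_boxes(predicts_one_box_when_seeing_both_full: bool):
    """Omega's rule in the transparent-boxes variant, as described above."""
    box_a = 1_000   # transparent, always contains $1000
    # Box B depends only on what you would do *if you saw both boxes full*.
    box_b = 1_000_000 if predicts_one_box_when_seeing_both_full else 0
    return box_a, box_b

print(fill_boxes(True))    # (1000, 1000000) -> you see both boxes full
print(fill_boxes(False))   # (1000, 0)       -> you see only Box A's $1000
```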

comment by novalis · 2011-08-05T16:41:50.781Z · LW(p) · GW(p)

Yes, there is a more extensive discussion, in Good and Real. Basically, you act for the sake of what would be the case if you had acted that way.

In this case, there is actually a plausible causal reason to one-box: you could be the instance that Omega is simulating in order to make its prediction. But even if not, there are all sorts of cases where we act for the sake of what would be the case if we did. Good and Real discusses this extensively.

comment by Alexei · 2011-08-05T07:39:26.869Z · LW(p) · GW(p)

You should see Eliezer Yudkowsky's TDT paper for how to handle all problems of this sort.

Replies from: peter_hurford, Antisuji
comment by Peter Wildeford (peter_hurford) · 2011-08-05T08:13:34.281Z · LW(p) · GW(p)

It's a lot of pages, but it looks like just the resource I was looking for. Thanks.

comment by Antisuji · 2011-08-05T17:50:37.599Z · LW(p) · GW(p)

Is it wrong that my first reaction to that paper is, "Yudkowsky should learn LaTeX"?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-08-05T20:50:54.563Z · LW(p) · GW(p)

Most definitely. :-)