Nate Soares on the Ultimate Newcomb's Problem

post by Rob Bensinger (RobbBB) · 2021-10-31T19:42:01.353Z · LW · GW · 20 comments

Nate Soares's distilled exposition of the Ultimate Newcomb's Problem [LW · GW], plus a quick analysis of how different decision theories perform, copied over from a recent email exchange (with Nate's revisions):


You might be interested in Eliezer's "Ultimate Newcomb's Problem", which is a rare decision problem where EDT and CDT agree, but disagree with FDT. In this variant, the small box contains $1k, and the big box is transparent and contains a number X whose primality you are unsure of. Omega will pay you $1M if the number in the big box is prime, and puts a prime number in that box iff they predicted you would take only the big box. Meanwhile, a third actor, Omicron, chooses a number at random each day and will pay you $2M if their randomly selected number is composite; today they happen to have selected the number X.

The causal decision theorist takes both boxes, reasoning that all they can control is whether they get an extra $1k, so they might as well. The evidential decision theorist takes both boxes, reasoning that this makes X composite, which pays more than making X prime by taking one box (the extra $1k being inconsequential). The functional decision theorist takes one box, reasoning that on days when they're going to get paid by Omicron, the number in the big box and the number chosen by Omicron will not coincide, but recognizing that their decision about whether or not to one-box has no effect on the probability that Omicron pays them.

As for who performs better, for clarity assume that Omega makes the number in the big box coincide with the number chosen by Omicron whenever possible, and write p for the probability that Omicron chooses a composite number. Then CDT will always see a composite number (and it will match Omicron's in the p fraction of the time when Omicron's is also composite); EDT will see a number with the same primality as Omicron's number (that matches in the p fraction of cases where Omicron's number is composite, and differs in the (1-p) fraction of cases where Omicron's number is prime); FDT will always see a prime number (that matches Omicron's in the (1-p) fraction where Omicron's is prime). The payouts, then, will be p·$2M + $1k for CDT; p·($2M + $1k) + (1-p)·$1M for EDT; and p·$2M + $1M for FDT; a clear victory in terms of expected utility for FDT. (Exercise: a similar ranking holds when Omega only sometimes matches Omicron's number conditional on that being possible.)
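
These payouts are easy to sanity-check numerically. Here is a minimal sketch (mine, not part of the original email) encoding each theory's behavior under the stated assumption that Omega matches Omicron's number whenever possible:

```python
# Expected payouts, assuming Omega matches Omicron's number whenever
# doing so is consistent with its prediction.
M, K = 1_000_000, 1_000  # $1M and $1k

def payouts(p):  # p = P(Omicron picks a composite number)
    # CDT two-boxes, so Omega puts a composite number in the box:
    # never the $1M, always the $1k, plus $2M on composite days.
    cdt = p * 2 * M + K
    # EDT two-boxes iff the numbers match. Composite day: Omega matches,
    # EDT takes $2M + $1k. Prime day: Omega picks a *different* prime,
    # EDT one-boxes and collects $1M from Omega.
    edt = p * (2 * M + K) + (1 - p) * M
    # FDT one-boxes, so Omega always puts a prime: $1M every day,
    # plus $2M from Omicron on composite days.
    fdt = p * 2 * M + M
    return cdt, edt, fdt

for p in (0.1, 0.5, 0.9):
    cdt, edt, fdt = payouts(p)
    assert fdt > edt >= cdt
    print(f"p={p}: CDT={cdt:,.0f} EDT={edt:,.0f} FDT={fdt:,.0f}")
```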

20 comments

comment by Vanessa Kosoy (vanessa-kosoy) · 2021-11-05T08:54:46.117Z · LW(p) · GW(p)

What's special about this compared to transparent Newcomb with noise? (for which CDT and EDT also fail)

Replies from: selffriend, Pattern
comment by Selffriend (selffriend) · 2021-11-16T03:00:22.096Z · LW(p) · GW(p)

Hi Vanessa, what is a transparent Newcomb with noise? Any reference or link? Many thanks in advance.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-11-16T13:41:38.730Z · LW(p) · GW(p)

Hi! Transparent Newcomb means you consider Newcomb's problem, but the box is transparent, so the agent knows whether it's empty or full. We then need to specify which counterfactual Omega predicts (agent seeing an empty box, agent seeing a full box, or some combination of both), but for our present purpose it doesn't matter. EDT is undefined because, e.g., if you see a full box in the relevant counterfactual then you cannot condition on two-boxing. This can be circumvented by adding a little noise, i.e. a small probability of Omega mispredicting.
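
As a toy illustration of the noise point (a sketch under assumed details: Omega predicts the agent's choice upon seeing a full box, errs with probability eps, fills the box iff it predicts one-boxing, and the prior over the agent's disposition is 0.5):

```python
eps = 0.01
M, K = 1_000_000, 1_000
table = {}  # (saw_full_box, action) -> [probability, probability-weighted payoff]
for disp, p_disp in (("one", 0.5), ("two", 0.5)):
    for wrong, p_err in ((False, 1 - eps), (True, eps)):
        pred = {"one": "two", "two": "one"}[disp] if wrong else disp
        full = pred == "one"  # Omega fills the box iff it predicts one-boxing
        payoff = (M if full else 0) + (K if disp == "two" else 0)
        cell = table.setdefault((full, disp), [0.0, 0.0])
        cell[0] += p_disp * p_err
        cell[1] += p_disp * p_err * payoff

for action in ("one", "two"):
    prob, mass = table.get((True, action), [0.0, 0.0])
    if prob > 0:
        print(f"E[payoff | full box, {action}-box] = {mass / prob:,.0f}")

# With eps = 0, the event (full box, two-box) has probability zero and the
# conditional is undefined; with eps > 0 both conditionals exist, two-boxing
# looks better given a full box, and EDT sides with CDT against FDT.
```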

comment by Pattern · 2021-11-10T15:16:38.415Z · LW(p) · GW(p)

You might be interested in Eliezer's "Ultimate Newcomb's Problem", which is a rare decision problem where EDT and CDT agree, but disagree with FDT.
Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-11-10T15:49:26.873Z · LW(p) · GW(p)

Yes, in noisy transparent Newcomb EDT and CDT also agree but disagree with FDT.

comment by Pattern · 2021-11-01T18:55:53.782Z · LW(p) · GW(p)

I figured that the right answer is the following (and that FDT would also reason this way):

If I choose to take the big box only, I only get $1M.

If I don't take the big box only, then that number is composite, so I get $2M.

One way to not take the big box only is to take both boxes, thus netting $2M + $1,000.


Separately, there's the option of factoring / primality-testing the number. (I may be unsure of its primality, but for less than $1,000 I should be able to get more sure.) (If there's enough time to decide, I could take the small box, use the money in it to get more info about that number, and then go back and decide whether I'm going to take the other box.)
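
For what it's worth, a check of the sort described here is cheap; a sketch (my addition) using the deterministic Miller-Rabin witness set, which is valid for any 64-bit integer:

```python
def is_prime(n):
    """Deterministic Miller-Rabin for 64-bit integers (witnesses 2..37)."""
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    if n in small:
        return True
    if any(n % p == 0 for p in small):
        return False
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for a in small:  # these witnesses suffice for all 64-bit n
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

print(is_prime(1_000_000_007))  # True
```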


Edited to add:

If the two numbers weren't the same, then you could (as a quick compositeness check; see the sketch after this list):

  • divide the larger by the smaller
  • find the greatest common factor
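
A sketch of those two checks (my illustration, with hypothetical numbers):

```python
import math

a, b = 91, 119  # say, the box's number and Omicron's number
g = math.gcd(a, b)
if 1 < g < min(a, b):
    print(f"common factor {g}: both numbers are composite")
elif g == min(a, b) and g > 1 and a != b:
    print(f"{g} divides {max(a, b)}: the larger number is composite")
# Note: these checks can prove compositeness but can never prove primality.
```
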
Replies from: So8res
comment by So8res · 2021-11-01T19:34:06.294Z · LW(p) · GW(p)

The difference between your reasoning and the reasoning of FDT is that your reasoning treats the equality of the number in the big box and the number chosen by Omicron as robust, whereas the setup of the problem indicates that while the number in the big box is sensitive to your action, the number chosen by Omicron is not. As such, FDT says you shouldn't imagine them covarying; when you imagine changing your action you should imagine the number in the big box changing while the number chosen by Omicron stays fixed. And indeed, as illustrated in the expected utility calculation in the OP, FDT's reasoning is "correct" in the sense of winning more utility (in all cases, and in expectation).

Replies from: Pattern
comment by Pattern · 2021-11-01T20:04:50.379Z · LW(p) · GW(p)

The consequences of not having enough time to think.

winning more utility

more money.


EDIT: It's not clear what effect the time restriction has. 'Not enough time to factor this number' could still be a lot of time, or it could be very little.

comment by StefanHex (Stefan42) · 2021-11-01T16:56:44.804Z · LW(p) · GW(p)

This scenario seems impossible, as in contradictory / not self-consistent. I cannot say exactly why it breaks, but at least these two statements seem inconsistent:

today they [Omicron] happen to have selected the number X

and

[Omega puts] a prime number in that box iff they predicted you will take only the big box

Both of these statements have implications for X, and they cannot both always be true. The number cannot both be random and be chosen by Omega/you, can it?

From another angle, the statement

FDT will always see a prime number

demonstrates that something fishy is going on. The "random" number X that Omicron has chosen, which is in the box and seen by FDT, is "always prime". Then it is not a random number?

Edit: See my reply below; the contradiction is that Omega cannot predict EDT's behaviour when Omicron chose a prime number. Omega's decision depends on EDT's decision, and EDT's decision depends on Omega's decision (via the "do the numbers coincide" link). On days where Omicron chooses a prime number, this cyclic dependence leads to a contradiction / Omega cannot predict correctly.

Replies from: oskar-mathiasen, Stefan42
comment by Oskar Mathiasen (oskar-mathiasen) · 2021-11-01T18:04:21.479Z · LW(p) · GW(p)

The fact that the two numbers are equal is not always true; it just happens to be true on this day.

Replies from: Insub
comment by Insub · 2021-11-01T21:49:55.453Z · LW(p) · GW(p)

Yes, that was my reasoning too. The situation presumably goes:

  1. Omicron chooses a random number X, either prime or composite
  2. Omega simulates you, makes its prediction, and decides whether X's primality is consistent with its prediction
  3. If it is, then:
    1. Omega puts X into the box
    2. Omega teleports you into the room with the boxes and has you make your choice
  4. If it's not, then...? I think the correct solution depends on what Omega does in this case.
    1. Maybe it just quietly waits until tomorrow and tries again? In which case no one is ever shown a case where the box does not contain Omicron's number. If this is how Omega is acting, then I think you can act as though your choice affects Omicron's number, even though that number is technically random on this particular day.
    2. Maybe it just picks its own number, and shows you the problem anyway. I believe this was the assumption in the post. (Both variants are sketched below.)
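
A rough simulation of the two variants (my own construction; the 2-to-99 range and the one-boxing agent are arbitrary choices):

```python
import random

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def shown_day(one_boxer, retry):
    """Return (box_number, omicron_number) for a day the agent is shown."""
    while True:
        omicron = random.randrange(2, 100)
        if is_prime(omicron) == one_boxer:  # Omega can match consistently
            return omicron, omicron
        if not retry:  # variant 2: Omega picks its own number of the right kind
            pool = [n for n in range(2, 100) if is_prime(n) == one_boxer]
            return random.choice(pool), omicron
        # variant 1: Omega quietly waits until tomorrow (loop redraws)

random.seed(0)
for retry in (True, False):
    days = [shown_day(one_boxer=True, retry=retry) for _ in range(10_000)]
    rate = sum(b == o for b, o in days) / len(days)
    print("variant", 1 if retry else 2, f"match rate: {rate:.2f}")
```

Under variant 1, every day the agent is actually shown has matching numbers; under variant 2, the match rate is just the base rate of Omicron drawing a number of the needed kind.
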
comment by StefanHex (Stefan42) · 2021-11-01T17:22:36.226Z · LW(p) · GW(p)

I think I found the problem: Omega is unable to predict your action in this scenario, i.e. the assumption "Omega is good at predicting your behaviour" is wrong / impossible / inconsistent.

Consider a day where Omicron (randomly) chose a prime number (Omega knows this). Now an EDT agent is on their way to the room with the boxes, and Omega has to put a prime or composite number into the box, predicting the EDT agent's action.

If Omega makes X prime (i.e. the numbers coincide), then EDT two-boxes, and Omega has failed in its prediction.

If Omega makes X composite (i.e. the numbers don't coincide), then EDT one-boxes, and Omega has again failed in its prediction.

Edit: To clarify, EDT's policy is two-box if Omega and Omicron's numbers coincide, one-box if they don't.

Replies from: So8res
comment by So8res · 2021-11-01T17:58:37.759Z · LW(p) · GW(p)

If the agent is EDT and Omicron chooses a prime number, then Omega has to choose a different prime number. Fortunately, for every prime number there exists a distinct prime number.

EDT's policy is not "two-box if both numbers are prime or both numbers are composite"; it's "two-box if both numbers are equal". EDT can't (by hypothesis) figure out in the allotted time whether the number in the box (or the number that Omicron chose) is prime. (It can readily verify the equality of the two numbers, though, and this equality is what causes it -- erroneously, in my view -- to believe it has control over whether it gets paid by Omicron.)
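
A quick enumeration (a sketch, with Omicron drawing 7 as an example) makes this explicit: only a different prime gives Omega a consistent prediction.

```python
omicron = 7  # a prime day; EDT's policy: two-box iff the numbers are equal

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for box in (7, 11, 8):  # matching prime, different prime, composite
    action = "two-box" if box == omicron else "one-box"
    # Omega's constraint: the box number is prime iff it predicted one-boxing.
    consistent = is_prime(box) == (action == "one-box")
    print(f"box={box}: EDT {action} -> {'consistent' if consistent else 'contradiction'}")
# Only box=11 (a different prime) lets Omega predict correctly.
```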

comment by JBlack · 2021-11-01T10:57:22.555Z · LW(p) · GW(p)

Why is the evaluation using a condition that isn't part of the problem? Isn't it trivial to construct other evaluation assumptions that yield different payouts?

Strictly speaking, there is no single payout from this problem. It's underspecified, and is actually an infinite family of problems.

Replies from: So8res
comment by So8res · 2021-11-01T17:53:34.623Z · LW(p) · GW(p)

Why is the evaluation using a condition that isn't part of the problem?

For clarity. The fact that the ordinal ranking of decision theories remains the same regardless of how you fill in the unspecified variables is left (explicitly) as an exercise.

Replies from: acgt, JBlack
comment by acgt · 2022-07-25T15:39:44.339Z · LW(p) · GW(p)

This doesn't seem true, at least in the sense of strict ranking? In the EDT case: if Omega's policy is to place a prime in Box 1 whenever Omicron chooses a composite number (instead of matching Omicron when possible), then it predicts the EDT agent will choose only Box 1, so this is a stable equilibrium. And since Omega also always places a different prime whenever Omicron chooses a prime, EDT never sees matching numbers and so always one-boxes; its expected earnings are therefore no less than FDT's.
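
If I'm reading the proposed policy right, the arithmetic behind this (my check, using p for the probability of a composite Omicron number, as in the OP) is:

```python
M = 1_000_000
for p in (0.1, 0.5, 0.9):  # p = P(Omicron's number is composite)
    # Under the "always place a non-matching prime" policy, EDT always
    # one-boxes: $1M from Omega, plus $2M from Omicron on composite days.
    edt = M + p * 2 * M
    fdt = M + p * 2 * M    # FDT's payout from the OP's calculation
    assert edt == fdt      # the ranking collapses to a tie, not a strict win
```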

comment by JBlack · 2021-11-02T00:09:17.361Z · LW(p) · GW(p)

The variables with no specified value in the template given aren't the problem. The fact that the template has the form that it does is the problem. That form is unjustified.

The only information we have about Omega's choices is that choosing the same number as Omicron is sometimes possible. Assuming that its probability is the same - or even nonzero - for all decision theories is unjustified, because Omega knows what decision theory the agent is using and can vary their choice of number.

For example, it is compatible with the problem description that Omega never chooses the same number as Omicron if the agent is using CDT. Evaluating how well CDT performs in this scenario is then logically impossible, because CDT agents never enter this scenario.

Like many extensions, variations, and misquotings of well-known decision problems, this one opens up far too many degrees of freedom.

Replies from: So8res
comment by So8res · 2021-11-02T04:53:54.631Z · LW(p) · GW(p)

I agree that the problem is not fully specified, and that this is a common feature of many decision problems in the literature. On my view, the ability to notice which details are missing and whether they matter is an important skill in analyzing informally-stated decision problems. Hypothesizing that the alleged circumstances are impossible, and noticing that the counterfactual behavior of various agents is uncertain, are important parts of operating FDT at least on the sorts of decision problems that appear in the literature.

At a glance, it looks to me like the omitted information is irrelevant to all three decision algorithms under consideration, and doesn't change the ordinal ranking of payouts (except to collapse the rankings in some edge cases). That said, I completely agree that the correct answer to various (other, afaict) decision problems in the literature is to cry foul and point to a specific piece of decision-relevant underspecification.

Replies from: JBlack
comment by JBlack · 2021-11-04T02:27:06.126Z · LW(p) · GW(p)

The omitted information seems very relevant. An EDT agent decides to do the action maximizing

Sum P(outcomes | action) U(outcomes, action).

With omitted information, the agent can't compute the P() expressions and so their decision is undetermined. It should already be obvious from the problem setup that something is wrong here: equality of Omega and Omicron's numbers is part of the outcomes, and so arguing for an EDT agent to condition on that is suspicious to say the least.
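
A tiny illustration of this point (a sketch; the function and names are hypothetical):

```python
def edt_choice(actions, outcomes, P, U):
    """Return the action maximizing sum_o P(o | a) * U(o, a)."""
    return max(actions, key=lambda a: sum(P(o, a) * U(o, a) for o in outcomes))

# Until Omega's matching policy is specified, there is no concrete P to
# pass in, and the argmax above cannot be evaluated.
```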

Replies from: So8res
comment by So8res · 2021-11-04T21:14:31.196Z · LW(p) · GW(p)

The claim is not that the EDT agent doesn't know the mechanism that fills in the gap (namely, Omega's strategy for deciding whether to make the numbers coincide). The claim is that it doesn't matter what mechanism fills the gap, because for any particular mechanism EDT's answer would be the same. Thus, we can figure out what EDT does across the entire class of fully-formal decision problems consistent with this informal problem description without worrying about the gaps.