Newcomb's Lottery Problem
post by Heighn · 2022-01-27T16:28:11.609Z
Inspired by, and a variant of, The Ultimate Newcomb's Problem.
In front of you are two boxes, box A and box B. You can either take only box B (one-boxing) or both boxes (two-boxing). Box A visibly contains $1,000. A number X is visibly displayed on box B; X is guaranteed to be equal to or larger than 1 and smaller than or equal to 1000. If X is composite, box B contains $1,000,000; if X is prime, box B contains $0. You observe X = 226. Omega, the superintelligence, has predicted your move in this game: if it predicted you will one-box, it chose X to be composite; otherwise, it made X prime. Omega is known to be correct in her predictions 99% of the time, and to be completely honest.
The Having Fun With Decision Theory Lottery has randomly picked a number Y, which is guaranteed to fall in the same range as X. Y is displayed on a screen visible to you. The HFWDT Lottery is organized by Omega - but, again, Y is picked at random and therefore completely independently of X. If both X and Y are prime, the HFWDT Lottery gives you $4,000,000. Otherwise, it gives you $0. You observe Y = 536.
Do you one-box or two-box?
Newcomb's Lottery Problem 2: Everything is the same as before, except the HFWDT Lottery prize is now $8,000,000. Do you one-box or two-box?
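To make the rules concrete, here is a minimal Python sketch of the payoff structure described above. The function and variable names are illustrative, not part of the problem statement, and the lottery prize defaults to Problem 1's $4,000,000 (pass 8_000_000 for Problem 2).

```python
def is_prime(n: int) -> bool:
    """Trial division; fine for numbers up to 1000."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))


def payout(action: str, x: int, y: int, lottery_prize: int = 4_000_000) -> int:
    """Total money received, given the displayed numbers X and Y.

    action: "one-box" (take only box B) or "two-box" (take both boxes).
    Box B holds $1,000,000 iff X is composite (i.e. Omega predicted one-boxing)
    and $0 if X is prime. X = 1 is neither prime nor composite; the post does
    not cover that case, so it is treated as $0 here (an assumption).
    """
    box_a = 1_000
    box_b = 1_000_000 if (x > 1 and not is_prime(x)) else 0
    total = box_b + (box_a if action == "two-box" else 0)
    if is_prime(x) and is_prime(y):
        total += lottery_prize  # the HFWDT Lottery pays only if both numbers are prime
    return total


# The situation as observed: X = 226 (composite), Y = 536 (composite).
print(payout("one-box", 226, 536))   # 1000000
print(payout("two-box", 226, 536))   # 1001000
```

Note that the sketch only encodes the payoffs for a fixed, already-chosen X; the decision-theoretic question is about how your choice correlates with Omega's choice of X.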
Comments sorted by top scores.
comment by JBlack · 2022-01-28T08:17:19.026Z
I take both boxes for a whole bunch of reasons. Included among them are the facts that Omega is terrible at setting up these problems, and also that she is a very poor predictor.
I have no idea who the faceless mob are who "know" that Omega is 99% accurate in her predictions and completely honest, but they're wrong.
comment by Dagon · 2022-01-27T18:03:14.147Z
I took the original "ultimate" post as mostly a joke - there didn't seem to be any interesting theoretical implications beyond the standard Newcomb's problem interactions between causality and decision theory. This doesn't seem to make the joke any funnier, nor demonstrate any confusions not already identified by simpler thought experiments.
What am I missing? (edit: this comment came out way more negative than I intended, sorry! This question is legitimate, and I'd like someone to ELI5 what new conundrum this adds to decision-theory or modeling of decision causality).
Boring analysis:
before you play the game, but after you learn that you will play the game - the EV of making Omega predict you'll one-box is $1,000 (or $1,001,000 if you can make Omega mis-predict), because you can never win the lottery. Making Omega predict you'll two-box is worth $1,000 + $4M × 168/998 (there are 168 primes in the range 2..999) ≈ $674,346. For Problem 2, that's ≈ $1,347,693. So Problem 2 is simple: just two-box and let everyone know it.
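A minimal sketch of this arithmetic in Python, under the comment's own assumptions (Y uniform over 2..999, Omega's prediction taken as fixed):

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))


candidates = range(2, 1000)                       # the comment's range "2..999"
n_primes = sum(is_prime(n) for n in candidates)   # 168
p_y_prime = n_primes / len(candidates)            # 168 / 998

for prize in (4_000_000, 8_000_000):              # Problem 1 and Problem 2 lottery prizes
    # If Omega predicted two-boxing, X is prime and box B is empty:
    # you get box A ($1,000) plus the lottery prize if Y also happens to be prime.
    ev = 1_000 + prize * p_y_prime
    print(f"{n_primes} primes; prize ${prize:,}: EV ${int(ev):,}")
# -> 168 primes; prize $4,000,000: EV $674,346
# -> 168 primes; prize $8,000,000: EV $1,347,693
```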
Problem 1 hinges, like most Newcomb problems, on whether Omega is WRONG in your specific case. Precommitting to one-box, then actually 2-boxing is optimal, and perhaps possible in the world where Omega broadcasts her prediction in advance. It'll depend on the specifics of why she has a 99% success rate.
In the situation given, where you already see the composite X and Y, two-box if you can.
↑ comment by Heighn · 2022-01-27T18:13:22.000Z
I was mostly just having fun, and find almost every new problem I see fun. I figured others might like it. You don't - so be it.
The range is a thousand numbers btw, it includes 1 and 1000, but whatever.
I don't see how precommitting to one thing and then doing the other, thereby fooling Omega is possible. In problem 1, one-boxing is the rational choice.
↑ comment by Dagon · 2022-01-27T20:04:10.121Z
[ epistemic status: commenting for fun, not seriously objecting. I like these posts, even if I don't see how they further our understanding of decisions ]
> The range is a thousand numbers btw, it includes 1 and 1000
> ...
> larger than 1 and smaller than or equal to 1000.
We're both wrong. It includes 1000 but not 1. Agreed with the "whatever" :)
> I don't see how precommitting to one thing and then doing the other, thereby fooling Omega is possible
That's the problem with underspecified thought experiments. I don't see how Omega's prediction is possible. The reasons for 99% accuracy matter a lot. If she just kills people if they're about to challenge her prediction, then one-boxing in 1 and two-boxing in 2 is right. If she's only tried it on idiots who think their precommitment is binding, and yours isn't, then tricking her is right in 1 and still publicly two-box in 2.
BTW, I think you typo'd your description of one- and two-boxing. Traditionally, it's "take box B or take both", but you write "take box A or take both".
↑ comment by Jiro · 2022-01-31T16:16:23.046Z
I think that, by definition, if you precommitted to something you have to do it. A "nonbinding precommitment" isn't a precommitment despite the grammatical structure of that phrase, just like a "squashed circle" isn't a circle.
(I do separately think Omega is impossible. Predicting someone's actions in full generality, when they're reacting to one of your own actions, implicates the Halting Problem.)
↑ comment by Dagon · 2022-01-31T18:51:25.608Z
Yeah, I should have used more words. The "publicly state and behave as if precommitted sufficient to make Omega predict you will one-box, but then actually two-box" is what I meant. "Fake precommit" may be better than "nonbinding precommit" as a descriptor.
And, as you say, I don't believe Omega is possible in our current world. Which means the thought experiment is of limited validity, except as an exploration of decision theory and theoretical causality.
↑ comment by Heighn · 2022-01-27T20:41:22.407Z
> [ epistemic status: commenting for fun, not seriously objecting. I like these posts, even if I don't see how they further our understanding of decisions ]
Cool. I apologize if I came off a bit snarky earlier. Thanks for commenting! I read Eliezer's post and was thinking about how to make a problem I like (even) more, and this was the result. Just for fun, mostly :)
> We're both wrong. It includes 1000 but not 1. Agreed with the "whatever" :)
Well, I defined the range. I can't really be wrong, haha ;) But I get your point: with prime and composite, >= 2 would make more sense.
> That's the problem with underspecified thought experiments. I don't see how Omega's prediction is possible. The reasons for 99% accuracy matter a lot. If she just kills people if they're about to challenge her prediction, then one-boxing in 1 and two-boxing in 2 is right. If she's only tried it on idiots who think their precommitment is binding, and yours isn't, then tricking her is right in 1 and still publicly two-box in 2.
The accuracy is something I need to learn more about at some point, but it should (I think) simply be read as "Whatever choice I make, there's 0.99 probability Omega predicted it."
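A minimal sketch of what that reading implies for Problem 1, taking the 0.99 to apply to whichever choice is actually made (Y = 536 is composite, so the lottery pays nothing in every branch; how this interacts with the already-observed X = 226 is exactly the point under dispute):

```python
P_CORRECT = 0.99  # "whatever choice I make, there's 0.99 probability Omega predicted it"

# One-box: box B holds $1,000,000 iff Omega predicted one-boxing (X composite).
ev_one_box = P_CORRECT * 1_000_000 + (1 - P_CORRECT) * 0

# Two-box: box A's $1,000 is guaranteed; box B holds $1,000,000 only if Omega
# mispredicted (probability 0.01).
ev_two_box = 1_000 + (1 - P_CORRECT) * 1_000_000

print(ev_one_box, ev_two_box)  # 990000.0 11000.0 -> one-boxing comes out ahead
```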
> BTW, I think you typo'd your description of one- and two-boxing. Traditionally, it's "take box B or take both", but you write "take box A or take both".
Thanks Dagon! Fixing it.