Comments sorted by top scores.
comment by JBlack · 2024-04-16T07:43:47.823Z · LW(p) · GW(p)
It makes sense to very legibly one-box even if Omega is far from a perfect predictor. Make sure that Omega has lots of reliable information predicting that you will one-box.
Then actually one-box, because you don't know what information about you Omega might be drawing on. Successfully bamboozling Omega gets you an extra $1,000, while unsuccessfully trying to bamboozle Omega loses you $999,000. If you can't be 99.9% sure that you will succeed, it's not worth trying.
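To make that break-even figure explicit, here's a minimal sketch of the arithmetic (assuming the standard $1,000,000 / $1,000 Newcomb payoffs; the function name is just illustrative):

```python
# Expected gain from trying to "bamboozle" Omega (two-boxing while hoping
# Omega still predicts one-boxing), relative to honestly one-boxing.
# Standard Newcomb payoffs assumed: $1,000,000 opaque box, $1,000 transparent box.
BIG, SMALL = 1_000_000, 1_000

def ev_of_bamboozling(p_success: float) -> float:
    # p_success: probability Omega still predicts one-boxing despite the attempt.
    # Success nets the extra $1,000; failure forfeits the million and leaves
    # only the $1,000, i.e. a $999,000 loss relative to one-boxing.
    return p_success * SMALL - (1 - p_success) * (BIG - SMALL)

print(ev_of_bamboozling(0.999))  # ~0: the break-even point
print(ev_of_bamboozling(0.99))   # ~-9,000: already not worth trying
```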
Replies from: Celarix
↑ comment by Celarix · 2024-04-20T02:14:22.706Z · LW(p) · GW(p)
The thing about Newcomb's problem for me was always the disparity between the two boxes, one holding $1,000,000 and the other $1,000. I'd rather not risk losing $999,000 for a chance at an extra $1,000! I could just one-box for real, take the million, then put it in an index fund and wait for it to go up by 0.1%.
I do understand that the question really comes into play when the amounts vary and Omega's success rate is lower - if I could one-box for $500 or two-box for $1,500 total, and Omega is observed to be wrong 25% of the time, that would be a different play.
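Plugging those alternate numbers in (a rough sketch that reads "one-box for $500 and two-box for $1,500 total" as a $500 opaque box plus a $1,000 transparent box, with Omega correct 75% of the time):

```python
# EV comparison under the alternate payoffs from the comment above:
# opaque box $500, transparent box $1,000, Omega correct 75% of the time.
BIG, SMALL, P_CORRECT = 500, 1_000, 0.75

ev_one_box = P_CORRECT * BIG                                      # 0.75 * 500 = 375
ev_two_box = P_CORRECT * SMALL + (1 - P_CORRECT) * (BIG + SMALL)  # 750 + 375 = 1125

print(ev_one_box, ev_two_box)  # with these stakes, two-boxing has the higher EV
```

That flip in expected value is presumably what makes it "a different play."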
comment by Dagon · 2024-04-16T02:07:23.986Z · LW(p) · GW(p)
I'm not sure I follow why Aumann's agreement theorem is relevant here - the survey doesn't involve rational agents, agents with mutual knowledge of their rationality, or agents with the same priors. It makes sense to one-box ONLY if you calculate EV in a way that assigns a significant probability to causality violation (your decision somehow affecting Omega's previously-committed behavior).
Replies from: JBlack
↑ comment by JBlack · 2024-04-20T03:59:37.720Z · LW(p) · GW(p)
It makes sense to one-box ONLY if you calculate EV in a way that assigns a significant probability to causality violation
It only makes sense to two-box if you believe that your decision is causally isolated from history in every way that Omega can discern. That is, that you can "just do it" without it being possible for Omega to have predicted that you will "just do it" any better than chance. Unfortunately this violates the conditions of the scenario (and everyday reality).
Replies from: Dagon
↑ comment by Dagon · 2024-04-20T15:17:30.875Z · LW(p) · GW(p)
It only makes sense to two-box if you believe that your decision is causally isolated from history in every way that Omega can discern.
Right. That's why CDT is broken. I suspect from the "disagree" score that people didn't realize that I do, in fact, assert that causality is upstream of agent decisions (Omega's included) and that "free will" is an illusion.