Newcomb's Problem In One Paragraph

post by Chris_Leong · 2018-07-10


A TLDR of my recent post on Newcomb's Problem. If you have libertarian free will, then Omega cannot predict your decisions from the past with near-perfect accuracy without backwards causation, in which case you'd obviously one-box. Otherwise, we can assume determinism. Then "What decision should I make?" is misleading, as it assumes the same "I" across different possible decisions. But given the precise state of an individual, only one decision is possible. Multiple possible decisions imply multiple slightly different versions of you, so the question becomes: "Which (version, decision) pair achieves the best outcome?" Clearly, it is whichever version gets the million. Further, "How can my decision affect the past?" becomes "Why does Omega make a different prediction for a different version of me?", which isn't hard to answer.
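As a minimal sketch of this framing (not from the original post; the payoff numbers are the standard Newcomb values, and the code assumes Omega's prediction deterministically tracks the version):

```python
# Sketch of the "(version, decision) pair" framing of Newcomb's Problem.
# Assumption: determinism, so a version's disposition fixes its decision,
# and Omega's prediction reads the version. Payoffs are the usual ones.

PAYOFFS = {
    # (prediction, decision): payoff in dollars
    ("one-box", "one-box"): 1_000_000,  # opaque box filled, taken alone
    ("one-box", "two-box"): 1_001_000,  # filled box plus the $1,000 box
    ("two-box", "one-box"): 0,          # empty opaque box taken alone
    ("two-box", "two-box"): 1_000,      # empty box plus the $1,000 box
}

def outcome(version: str) -> int:
    """Under determinism the version fixes the decision, and Omega's
    prediction tracks the version, so prediction == decision."""
    prediction = version  # Omega predicts from the version's state
    return PAYOFFS[(prediction, version)]

best = max(["one-box", "two-box"], key=outcome)
print(best, outcome(best))  # -> one-box 1000000
```

The off-diagonal entries of the payoff table are what the two-boxer's dominance argument appeals to, but under the determinism assumption they are unreachable: no version both gets predicted as a one-boxer and two-boxes, which is the point of the reframing.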
