Newcomb's Problem In One Paragraph
post by Chris_Leong · 2018-07-10T07:10:17.321Z · score: 8 (4 votes) · LW · GW
A TLDR of my recent post [LW · GW] on Newcomb's problem. If you have libertarian free will, then Alpha cannot predict your decisions from the past with near-perfect accuracy without backwards causation; and if there is backwards causation, you'd obviously one-box. Otherwise, we can assume determinism. Then "What decision should I make?" is misleading, because it assumes the same "I" across different possible decisions. Given the precise state of the individual, only one decision is possible; multiple possible decisions imply multiple slightly different versions of you. So the question becomes, "Which (version, decision) pair achieves the best outcome?" Clearly it is whichever version gets the million. Further, "How can my decision affect the past?" becomes, "How can a (version, decision) pair affect things prior to the decision?", which isn't exactly hard to answer.
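The (version, decision) framing can be sketched in a few lines of code. This is a minimal illustration, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a perfect predictor; the post itself doesn't fix these numbers, and the function names are mine.

```python
def payoff(decision: str) -> int:
    """Payoff for a (version, decision) pair under determinism.

    Assumes a perfect predictor: the opaque box contains $1,000,000
    iff this version of the agent one-boxes.
    """
    opaque = 1_000_000 if decision == "one-box" else 0
    transparent = 1_000
    if decision == "one-box":
        return opaque  # take only the opaque box
    return opaque + transparent  # take both boxes

# Evaluate each (version, decision) pair, rather than asking what a
# fixed "I" should choose:
pairs = {d: payoff(d) for d in ("one-box", "two-box")}
best = max(pairs, key=pairs.get)  # the version that gets the million
```

Under these assumptions the one-boxing version ends up with $1,000,000 and the two-boxing version with $1,000, so the best pair is the one-boxer, matching the conclusion above.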