Comments sorted by top scores.
comment by Vaniver · 2020-07-31T19:11:07.189Z
You might be interested in Cheating Death in Damascus, particularly section 3.1, which deals with CDT's instability, and which points towards a more satisfying version of counterfactual reasoning.
comment by Shmi (shminux) · 2020-08-01T01:06:26.545Z
People argue about Newcomb's paradox because they implicitly bury the freedom of choice of both the agent and the predictor in various unstated assumptions. The counterfactual approach is a prime example of this. For a free-will-free treatment of Newcomb's problem and other decision theory puzzles, see my old post.
comment by fuego · 2020-08-03T18:13:46.517Z
Really well written and thought out.
> Indeed, if both Alice and Omega know that Alice's decision-making will tell her to use the 1-boxer strategy, then Alice will know she will gain $1M

and

> In both cases, her counterfactual optimization urged her to be a 2-boxer

feel like the crux to me.
If Alice were the kind of person who listens to counterfactual optimization, then neither Alice nor Omega would know that she will 1-box. There is a contradiction buried in there.
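A minimal sketch of that contradiction, as I read it (my own illustration, not from the post; the payoff numbers and function names are assumptions): if Omega's prediction tracks Alice's actual disposition, only the two "diagonal" outcomes are reachable, and the case the 2-boxing counterfactual leans on never actually obtains.

```python
# Hypothetical Newcomb payoffs with a predictor that knows Alice's disposition.
# The "counterfactual" comparison holds the box contents fixed while varying the
# action, which pairs a disposition with an action that contradicts it.

def omega_fills_opaque_box(disposition: str) -> int:
    # Perfect predictor: puts $1M in the opaque box iff Alice is disposed to 1-box.
    return 1_000_000 if disposition == "1-box" else 0

def payoff(disposition: str, action: str) -> int:
    opaque = omega_fills_opaque_box(disposition)
    transparent = 1_000
    return opaque + (transparent if action == "2-box" else 0)

# Outcomes consistent with Omega knowing Alice's disposition:
print(payoff("1-box", "1-box"))  # 1000000
print(payoff("2-box", "2-box"))  # 1000

# The 2-boxing argument compares against this case, where a 1-box disposition
# is paired with a 2-box action -- the combination the comment says cannot
# coexist with both Alice and Omega knowing what she will do:
print(payoff("1-box", "2-box"))  # 1001000 (not actually reachable)
```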