post by [deleted]

This is a link post.

Comments sorted by top scores.

comment by Tristan Cook · 2023-11-03T13:41:05.782Z · LW(p) · GW(p)

If we can find a problem where EDT clearly and irrevocably gives the wrong answer, we should not give it any credence

I think this is potentially an overly strong criterion for decision theories - we should probably restrict the problems to something like a fair problem class, else we end up with no decision theory receiving any credence.

I also think "wrong answer" is doing a lot of work here. Caspar Oesterheld writes:

However, there is no agreed-upon metric to compare decision theories, no way to assess even for a particular problem whether one decision theory (or its recommendation) does better than another. (This is why the CDT-versus-EDT-versus-other debate is at least partly a philosophical one.) In fact, it seems plausible that finding such a metric is “decision theory-complete” (to butcher another term with a specific meaning in computer science). By that I mean that settling on a metric is probably just as hard as settling on a decision theory and that mapping between plausible metrics and plausible decision theories is fairly easy.

Replies from: Heighn, Heighn
comment by Heighn · 2023-11-03T13:59:49.276Z · LW(p) · GW(p)

Btw, thanks for your comment! I edited my post with respect to fair problems.

comment by Heighn · 2023-11-03T13:56:57.888Z · LW(p) · GW(p)

I think this is potentially an overly strong criterion for decision theories - we should probably restrict the problems to something like a fair problem class, else we end up with no decision theory receiving any credence.

Good point, I should have mentioned that in my article. (Note that XOR Blackmail is definitely a fair problem - not that you are claiming otherwise.)

I also think "wrong answer" is doing a lot of work here.

I agree, at least in part. This is why I picked XOR Blackmail: it has such an obvious right answer. That's an intuition, but the same is true for some of the points made in favor of The Evidentialist's Wager in the first place.

comment by JBlack · 2023-11-03T10:25:11.206Z · LW(p) · GW(p)

Yes, I agree with the main point of this post. The argument presents a false dichotomy by basing its conclusion only on the options CDT and EDT, when in fact both are wrong.

On another note, there is a variation of Newcomb's problem analogous to the XOR blackmail problem. The usual statement of the game conditions doesn't say anything about who gets to play the game, and if everyone gets to play then you should obviously one-box.

Suppose you know instead that Omega is miserly and almost all of the people who one-box don't get offered the opportunity to play - let's say every two-boxer gets to play but only 0.01% of one-boxers. Should you still choose to one-box if presented with the opportunity of playing?

Replies from: Heighn
comment by Heighn · 2023-11-03T11:12:30.721Z · LW(p) · GW(p)

Thanks for the comment!

The argument presents a false dichotomy by basing its conclusion only on the options CDT and EDT, when in fact both are wrong.

I wouldn't say there's a false dichotomy: the argument works fine if you also have credence in, e.g., FDT. It just says that altruistic, morally motivated agents should favor EDT over CDT. (However, as I have attempted to demonstrate, two premises of the argument don't hold up.)

Suppose you know instead that Omega is miserly and almost all of the people who one-box don't get offered the opportunity to play - let's say every two-boxer gets to play but only 0.01% of one-boxers. Should you still choose to one-box if presented with the opportunity of playing?

Interesting. No, because a 0.01% probability of winning $1,000,000 gives me an expected $100, whereas two-boxing gives me $1,000.
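For concreteness, here is a minimal sketch of that expected-value comparison, assuming a perfectly accurate Omega and the selection rates stated above (the variable names are purely illustrative):

```python
# Minimal sketch of the expected-value comparison above, assuming a
# perfectly accurate predictor and the stated selection rates.

P_PLAY_IF_ONE_BOXER = 0.0001   # only 0.01% of one-boxers are offered the game
P_PLAY_IF_TWO_BOXER = 1.0      # every two-boxer is offered the game
OPAQUE_BOX = 1_000_000         # filled only if Omega predicts one-boxing
TRANSPARENT_BOX = 1_000        # always contains $1,000

# Ex-ante expected winnings for each disposition:
ev_one_boxer = P_PLAY_IF_ONE_BOXER * OPAQUE_BOX        # 0.0001 * 1,000,000 = 100
ev_two_boxer = P_PLAY_IF_TWO_BOXER * TRANSPARENT_BOX   # 1.0 * 1,000 = 1,000

print(f"Expected value as a one-boxer: ${ev_one_boxer:,.0f}")  # $100
print(f"Expected value as a two-boxer: ${ev_two_boxer:,.0f}")  # $1,000
```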