Yes, that's right; I regret calling it a problem instead of just a "scenario".
As a follow-up, though, I would say that the standard Newcomb's Problem is (essentially) functionally equivalent to:
“Omega scans your brain. If it concludes that you would two-box in Newcomb’s Problem, it hands you at most $1,000 and flies off. If it concludes that you would one-box in Newcomb’s Problem, it hands you at least $1,000,000 and flies off.”
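To spell out the equivalence: under the usual payoffs, and on the simplifying assumption that Omega's prediction is in fact correct, the standard problem pays out as

$$
\begin{aligned}
\text{predicted one-boxer, takes the opaque box only:}&\quad \$1{,}000{,}000\\
\text{predicted two-boxer, takes both boxes:}&\quad \$1{,}000 + \$0 = \$1{,}000
\end{aligned}
$$

so the agent's predicted disposition fixes the payout before any box is opened, which is what the "scan and hand over the money" phrasing makes explicit (the "at most"/"at least" hedges just cover the possibility that Omega's prediction is wrong).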
I think I see where you're coming from with the inverse problem feeling "cheaty". It's unlike other decision problems in that it isn't really a dilemma: two-boxing is clearly the best option. I used the word "problem" instinctively, but perhaps I should have called it the "Inverse Newcomb Scenario" or something similar instead.
However, the fact that it isn't a real "problem" doesn't change the conclusion. I admit that the inverse scenario is not as interesting as the standard problem, but what matters is that it's just as likely, and clearly favours two-boxers. FDT agents have a pre-commitment to being one-boxers, and that would work well if the universe actually complied and provided them with the scenario they have prepared for (which is what the paper seems to assume). What I tried to show with the inverse scenario is that it's just as likely that their pre-commitment to one-boxing will be used against them.
Both Newcomb's Problem and the Inverse Scenario are "unfair" to one of the theories, which is why I think the proper performance measure is the total money from going through both, and by that measure CDT comes out on top.