Comments
Sadly, we seem to make no progress in any direction. Thanks for trying.
What do you mean? It could have created and run a copy, for instance, but anyhow, there would be no causal link. That's probably the whole point of the 2-boxer majority.
I can see a rationale behind one-boxing, and it might even be a standoff, but why almost no one here seems to see the point of 2-boxing, and the amazing overconfidence, is beyond me.
I mean, the actual token, the action, the choice, the act of my choosing does not determine the contents. It's Omega's belief (however obtained) that this algorithm is such-and-such that led it to fill the boxes accordingly.
Is it not rather Omega's undisclosed method that determines the contents? That seems to make all the difference.
You have the same choice as me: Take one box or both. (Or, if you assume there are no choices in this possible world because of determinism: It would be rational to 2-box, because I, the thief, do 2-box, and my strategy is dominant)
You've mentioned 'backwards causality' which isn't assumed in our one-box solution to Newcomb.
Only to rule it out as a solution. No problem here.
How comfortable are you with the assumption of determinism?
In general, very. Concerning Newcomb, I don't think it's essential, and as far as I recall, it isn't mentioned in the original problem.
you need to tell me where you think logical and winning diverge
I'll try again: I think you can show with simple counterexamples that winning is neither necessary nor sufficient for being logical (your term for my rational, if I understand you correctly).
Here we go: it's not necessary, because you can be unlucky. Your strategy might be best, but you might lose as soon as luck is involved. It's not sufficient, because you can be lucky. You can win a game even if you're not perfectly rational.
1-boxing seems to be a variant of the second case: instead of (bad) luck, the game is rigged.
So what is your point? That no backwards causation is involved is assumed in both cases. If this scenario is for dialectical purposes, it fails: it is equally clear, if not clearer, that my actual choice has no effect on the contents of the boxes.
For what it's worth, let me reply with my own story:
Omega puts the two boxes in front of you, and says the usual. Just as you’re about to pick, I come along, grab both boxes, and run. I do this every time Omega confronts someone with his boxes, and I always do as well as a two-boxer and better than a one-boxer. You have the same choice as me: Just two-box. Why won’t you?
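To spell out the payoffs behind the thief story, here is a minimal sketch assuming the standard Newcomb amounts ($1,000 in the transparent box, $1,000,000 in the opaque box iff one-boxing was predicted); the names and numbers are illustrative, not from the thread:

```python
# Toy comparison for the thief story, using the standard Newcomb payoffs
# (assumed): $1,000 always in the transparent box, $1,000,000 in the opaque
# box iff Omega predicted the player would one-box.

TRANSPARENT = 1_000
OPAQUE_IF_ONE_BOX_PREDICTED = 1_000_000

def box_contents(predicted_one_box: bool) -> tuple[int, int]:
    """Return (transparent, opaque) contents given Omega's prediction."""
    return TRANSPARENT, (OPAQUE_IF_ONE_BOX_PREDICTED if predicted_one_box else 0)

for predicted_one_box in (True, False):
    transparent, opaque = box_contents(predicted_one_box)
    one_boxer = opaque                # takes only the opaque box
    two_boxer = transparent + opaque  # takes both boxes
    thief = transparent + opaque      # grabs both boxes before anyone chooses
    print(f"predicted one-box={predicted_one_box}: "
          f"one-boxer={one_boxer}, two-boxer={two_boxer}, thief={thief}")
```

Holding Omega's prediction fixed, the thief (and the two-boxer) always ends up with at least as much as the one-boxer; the one-boxer position quoted elsewhere in the thread is that the prediction is not in fact fixed independently of what the player plans to do.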
so the best way to make sure the one box has the goodies in it is to plan to actually take only that box.
If we rule out backwards causation, then why on earth should this be true???
May I suggest again that defining rational as winning may be the problem?
I never said I could add anything new to the discussion. The problem is: judging by the comments so far, nobody here can, either. And since most experts outside this community agree on 2-boxing (or am I wrong about this?), my original question stands.
I deny that 1-boxing nets more money, ceteris paribus.
Agree completely.
But the crucial difference is: in the one-shot case, the box is already filled or not.
I agree with everything you say in this comment, and still find 2-boxing rational. The reason still seems to be: you can consistently win without being rational.
My thinking goes like this: The difference is that you can make a difference. In the advance or iterated case, you can causally influence your future behaviour, and so the prediction too. In the original case, you cannot (where backwards causation is forbidden on pain of triviality). Of course that's the oldest reply. But it must be countered, and I don't see how.
That's too cryptic for me. Where's the connection to your first comment?
As I said in reply to byrnema, I don't dispute that wanting to be the kind of person who 1-boxes in iterated games or in advance is rational, but one-shot? I don't see it. What's the rationale behind it?
Right, but to the 2-boxer, exactly this information seems to point to 2-boxing! If the game is rigged against you, so what? Take both boxes. You cannot lose, and there's a small chance the conman erred.
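To make the "small chance the conman erred" arithmetic explicit, here is a minimal expected-value sketch; the error rate eps and the function name are assumptions for illustration, and the payoffs are the standard Newcomb amounts:

```python
# Expected value of each choice as a function of an assumed predictor error
# rate eps (symmetric errors; standard payoffs as above). Illustrative only.

def expected_value(one_box: bool, eps: float) -> float:
    if one_box:
        # The opaque box holds $1,000,000 unless the predictor erred.
        return (1 - eps) * 1_000_000
    # The two-boxer always gets the $1,000 and, if the predictor erred,
    # the $1,000,000 as well.
    return 1_000 + eps * 1_000_000

for eps in (0.0, 0.01, 0.1, 0.4995, 0.5):
    print(eps, expected_value(True, eps), expected_value(False, eps))
```

On this calculation one-boxing has the higher expectation whenever eps is below roughly 0.4995; the two-boxer's point in the thread is that this comparison misses the mark once the boxes are already filled, since conditional on the actual contents taking both is better by $1,000 either way.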
Mhm. I'm still far from convinced. Is this my fault? Am I at all right in assuming that 1-boxing is heavily favored in this community? And that this is a minority belief among experts?
If the solution were just to see that optimizing our decision algorithm is the right thing to do, then the crucial difference between the original problem and the variant where Omega tells you he will play this game with you some time in the future would seem to disappear. Hardly anyone denies that 1-boxing is the rational choice in the latter case. There must be more to this.
Hi LessWrongers,
I'm aware that Newcomb's problem has been discussed a lot around here. Nonetheless, I'm still surprised that 1-boxing seems to be the consensus view here, contrary to the consensus view elsewhere. Can someone point to the relevant knockdown argument? (I found Newcomb's Problem and Regret of Rationality, but the only argument therein seems to be that 1-boxers get what they want, and that's what makes 1-boxing rational. Now, getting what one wants seems to be neither necessary nor sufficient, because you should get it because of your rational choice, not because the predictor rigged the situation?!)
Many thanks for any links, corrections and help!