Hypothetical Paradoxes
post by Psychohistorian · 2009-09-19T06:28:06.637Z · 34 comments
When we form hypotheticals, they must use entirely consistent and clear language, and avoid hiding complicated operations behind simple assumptions. In particular, with respect to decision theory, hypotheticals must employ a clear and consistent concept of free will, and they must make all information that is available to the theorizer also available to the decider in the question. Failure to do either of these can make a hypothetical meaningless or self-contradictory if properly understood.
Newcomb's problem and the Smoking Lesion both fail on these counts. I will argue that hidden assumptions in both problems imply internally contradictory concepts of free will, and thus both hypotheticals are incomprehensible and irrelevant when used to contradict decision theories.
And I'll do it without math or programming! Metatheory is fun.
Newcomb's problem, insofar as it is used as a refutation of causal decision theory, relies on convenient ignorance and a paradoxical concept of free will, though it takes some thinking to see why, because the concept of naive free will is such an innate part of human thought. In order for Newcomb's to work, there must exist some thing or set of things ("A") that very closely (even perfectly) link "Omega predicts Y-boxing" with "Decider takes Y boxes." If there is no A, Omega cannot predict your behaviour. The existence of A is a fact necessary for the hypothetical and the decision maker should be aware of it, even if he doesn't know anything about how A generates a prediction.
Newcomb's problem assumes two contradictory things about A. It assumes that, for the purpose of Causal Decision Theory, A is irrelevant and completely separated from your actual decision process; it assumes you have some kind of free will such that you can decide to two-box without this decision having been reflected in A. It also assumes that, for purposes of the actual outcome, A is quite relevant; if you decided to two-box, your decision will have been reflected in A. This contradiction is the reason the problem seems complicated. If CDT were allowed to consider A, as it should be, it would realize:
(B), "I might not understand how it works, but my decision is somehow bound to the prediction in such a way that however I decide will have been predicted. Therefore, for all intents and purposes, even though my decision feels free, it is not, and, insofar as it feels free, deciding to one-box will cause that box to be filled, even if I can't begin to comprehend *how*."
"I should one-box" follows rather clearly from this. If B is false, and your decision is *not* bound to the prediction, then you should two-box. To let the theorizer know that B is true, but to forbid the decider from using such knowledge is what makes Newcomb's being a "problem." Newcomb's assumes that CDT operates with naive free will. It also assumes that naive free will is false and that Omega accurately employs purely deterministic free will. It is this paradox of simultaneously assuming naive free will *and* deterministic will that makes Necomb's problem a problem. CDT does not appear to be bound to assume naive free will, and therefore it seems capable of treating your "free" decision as causal, which it seems that it functionally must be.
The Smoking Lesion problem relies on the same trick in reverse. There is, by necessary assumption, some C such that C causes smoking and cancer, but smoking does not actually cause cancer. The decider is utterly forbidden from thinking about what C is and how C might influence the decision under consideration. The *decision to smoke* very, very strongly predicts *being a smoker.*1 Indeed, given that there is no question of being able to afford or find cigarettes, the outcome of the decision to smoke is precisely what C predicts. The desire to smoke is essential to the decision to smoke - under the hypothetical, if there were no desire, the decider would always decide not to smoke; if there is a desire and a low enough risk of cancer, the decider will always decide to smoke. Thus, the desire appears to correspond significantly (perhaps perfectly) with C, but Evidential Decision Theory is arbitrarily prevented from taking this into account. This is despite the fact that C is so well understood that we can say with absolute certainty that the correlation between smoking and cancer is completely explained by it.
The problem forces EDT to assume that C operates deterministically on the decision, and that the decision is naively free. It requires that the decision to smoke both is and is not correlated with the desire to smoke - if it were correlated, EDT would consider this and significantly adjust the odds of getting cancer conditional on deciding to smoke *given* that there is a desire to smoke. Forcing the decider to assume a paradox proves nothing, so TSL fails to refute an evidential decision theory that actually uses all of the evidence given to it.
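(Again purely for concreteness, with made-up probabilities: a rough sketch of how an EDT that conditions on the already-known desire stops treating its own choice as evidence about C.)

```python
# Toy model, with assumed probabilities, of the screening-off point above:
# C -> desire -> decision to smoke, and C -> cancer.

P_C = 0.2                                # assumed prior probability of C
P_DESIRE = {True: 0.95, False: 0.05}     # P(desire | C), P(desire | not C)
P_SMOKE_GIVEN_DESIRE = 0.9               # choice depends on C only via desire
P_SMOKE_GIVEN_NO_DESIRE = 0.0            # no desire -> never decides to smoke

def p_smoke(c):
    return (P_DESIRE[c] * P_SMOKE_GIVEN_DESIRE
            + (1 - P_DESIRE[c]) * P_SMOKE_GIVEN_NO_DESIRE)

def posterior_c(likelihood_c, likelihood_not_c):
    num = likelihood_c * P_C
    return num / (num + likelihood_not_c * (1 - P_C))

# EDT that ignores the known desire: the choice looks like strong evidence.
p_c_if_smoke = posterior_c(p_smoke(True), p_smoke(False))              # ~0.83
p_c_if_not_smoke = posterior_c(1 - p_smoke(True), 1 - p_smoke(False))  # ~0.04

# EDT that uses all the evidence: given the desire, smoking is independent of
# C in this model, so the choice no longer shifts the probability of C.
p_c_given_desire = posterior_c(P_DESIRE[True], P_DESIRE[False])        # ~0.83
# P(C | desire, smoke) = P(C | desire, don't smoke) = p_c_given_desire

print(p_c_if_smoke, p_c_if_not_smoke, p_c_given_desire)
```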
Both TSL and Newcomb's exploit our intuitive understanding of free will to assume paradoxes, then use these unrecognized paradoxes to undermine a decision strategy. As these problems force the decider to secretly assume a paradox, it is little surprise that they generate convoluted and problematic outputs. This suggests that the problem lies not in these decision theories, but in the challenge of fully and accurately translating our language into the decision maker's decision theory.
Newcomb's, TSL, Counterfactual Mugging, and the Absent-Minded Driver all have another larger, simpler problem, but it is practical rather than conceptual, so I'll address it in a subsequent post.
1 - In the TSL version EY used in the link I provided, C is assumed to be "a gene that causes a taste for cigarettes." Since the decider already *knows* they have a taste for cigarettes, Evidential Decision Theory should take this into account. If it does, it should assume that C is present (or present with high probability), and then the decision to smoke is obvious. Thus, the hypothetical I'm addressing is a more general version of TSL where C is not specified, only the existence of an acausal correlation is assumed.
34 comments
Comments sorted by top scores.
comment by JGWeissman · 2009-09-19T17:42:35.095Z · LW(p) · GW(p)
You appear to be reasoning by placing yourself in the place of CDT and EDT when faced with these problems. If instead you worked with formal descriptions of how these decision theories work, and in particular how they handle counterfactuals, you would see that the problems are not "assuming" they will make mistakes, but simply presenting situations in which they do make mistakes.
Replies from: Psychohistorian↑ comment by Psychohistorian · 2009-09-22T02:23:45.209Z · LW(p) · GW(p)
Each problem contains an assumption that does not get translated into the relevant decision theory.
In Newcomb's, it seems that any realistic interpretation of "causal" makes your choice causal, which would make CDT work correctly, as it would one-box. Pretending that your choice's causation can't be considered causal is fighting a straw man; our minds don't realize this because of the dichotomous interpretation of free will.
In TSL, the decision invokes a paradox: it requires us to assume the decider is naively free, while also requiring us to assume that there is some factor substantially and materially influencing the decision. This is not possible. EDT cannot interpret this problem correctly because it is inherently contradictory. It requires you to assume an endogenous variable is exogenous and then blames your decision theory when you don't.
Even if the decision theories are relatively set in stone, the mechanism for translating English statements into WFF within those theories does not appear to be so, and I think that is the problem.
Replies from: JGWeissman↑ comment by JGWeissman · 2009-09-22T17:53:30.392Z · LW(p) · GW(p)
In Newcomb's problem, the agent's state before being introduced to the problem causes both Omega's prediction and the agent's choice. The CDT way of computing counterfactuals is to set a value for the agent's choice and look at the effect on events which are caused by that choice, but not to treat that choice as Bayesian evidence of events which caused that choice. Thus CDT sees the agent's initial state and Omega's prediction as constant regardless of the agent's decision. Yes, there is information that is not represented by CDT, but that is a problem with CDT, which cannot account for the correlation of the agent's choice and Omega's prediction.
In TSL, the presence of a lesion causes a greater probability that the agent will get cancer, and that the agent will smoke. The EDT way of computing counterfactuals is to set a value for the agent's decision and to look at the effect on events that are caused by that choice, and to treat the choice as Bayesian evidence of events that cause that choice. To EDT it appears that not smoking reduces the probability of having the lesion, and therefore of having cancer. The fact that EDT cannot represent that whether the agent has cancer is independent of its choice is a problem with EDT.
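Concretely, the contrast between the two procedures looks like this (a rough sketch only; the probabilities and utilities are made up):

```python
# Rough sketch of the two counterfactual procedures described above, on the
# Smoking Lesion graph: lesion -> smoke, lesion -> cancer. Numbers assumed.

P_LESION = 0.2
P_SMOKE_GIVEN_LESION = {True: 0.9, False: 0.2}
P_CANCER_GIVEN_LESION = {True: 0.8, False: 0.05}
U = {"smoke": 10, "cancer": -100}        # assumed utilities

def p_lesion_given_smoke(smoke):
    def like(lesion):
        p = P_SMOKE_GIVEN_LESION[lesion]
        return p if smoke else 1 - p
    num = like(True) * P_LESION
    return num / (num + like(False) * (1 - P_LESION))

def cdt_value(smoke):
    # Intervene on the choice: the probability of the lesion stays at its prior.
    p_cancer = (P_LESION * P_CANCER_GIVEN_LESION[True]
                + (1 - P_LESION) * P_CANCER_GIVEN_LESION[False])
    return (U["smoke"] if smoke else 0) + p_cancer * U["cancer"]

def edt_value(smoke):
    # Condition on the choice: it counts as evidence about the lesion.
    p_lesion = p_lesion_given_smoke(smoke)
    p_cancer = (p_lesion * P_CANCER_GIVEN_LESION[True]
                + (1 - p_lesion) * P_CANCER_GIVEN_LESION[False])
    return (U["smoke"] if smoke else 0) + p_cancer * U["cancer"]

for choice in (True, False):
    print("smoke" if choice else "abstain",
          round(cdt_value(choice), 2), round(edt_value(choice), 2))
# CDT: smoking always adds U["smoke"], so it smokes (correctly, here).
# EDT: smoking raises P(lesion) and hence P(cancer), so it abstains -- which
# is exactly the failure being attributed to EDT.
```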
In neither case did I mention free will. The discussion of counterfactuals may describe part of causal free will. Naive free will was never even approached. You should consider the possibility that the free will equivocation you describe is not the reason people believe that CDT and EDT fail at these problems.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-22T19:02:20.202Z · LW(p) · GW(p)
(note: I'm not Psychohistorian)
In TSL, the presence of a lesion causes a greater probability that the agent will get cancer, and that the agent will smoke. The EDT way of computing counterfactuals is to set a value for the agent's decision and to look at the effect on events that are caused by that choice, and to treat the choice as Bayesian evidence of events that cause that choice. To EDT it appears that not smoking reduces the probability of having the lesion, and therefore of having cancer. The fact that EDT cannot represent that whether the agent has cancer is independent of its choice is a problem with EDT.
To better clarify what I said in my other comment, imagine we do TSL with a curveball. Let's say that instead the gene will make you smoke. That is, everyone with the gene will certainly smoke. In other words, everyone with the gene will reach the decision to smoke.
The gene will also give the person cancer. And you still retain the sensation of making a decision about whether you will smoke.
In that case, it most certainly does appear as if I'm choosing whether to have the gene. If I choose to smoke -- hey, the gene's power overtook me! If I don't, well, then I must not have had it all along.
This appears to me isomorphic to Newcomb's problem, which makes sense, given that EDT wins there.
But then, in the original TSL problem, why shouldn't I take into account the fact that my reasoning would be corrupted by the gene? ("Hey, don't worry man, your future cancer is already a done deal! Doesn't matter if you light up now! Come on, it's fun!" "Hey, that's just the gene talking! I'm a rational person not corrupted by that logic!")
Replies from: JGWeissman↑ comment by JGWeissman · 2009-09-24T22:13:15.602Z · LW(p) · GW(p)
This appears to me isomorphic to Newcomb's problem, which makes sense, given that EDT wins there.
It is not isomorphic when you apply Timeless Decision Theory, and add the node that represents the result of your decision theory. This node correlates with Omega's prediction in Newcomb's problem, but the corresponding node in strengthened TSL does not correlate with the presence of the gene. Counterfactually altering that node does not change whether you have the gene.
But the scenario does not really make sense. If everyone with the gene will smoke, and everyone who uses EDT chooses not to smoke, then you could eliminate the gene from a population by teaching everyone to use EDT. I think the ultimate problem is that a scenario that dictates the agent's choice regardless of their decision theory is a scenario in which decision theories cannot be faithfully executed; that is, the scenario denies that the agent's innards can implement any decision theory that produces a different choice.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-24T22:35:59.180Z · LW(p) · GW(p)
Counterfactually altering that node does not change whether you have the gene.
But the scenario does not really make sense
Under the (strange) stipulations I gave, altering the node does alter the gene. The fact that it doesn't make sense is a result of the situation's parallel with Newcomb's problem, which, as Psychohistorian argues, requires an equally nonsensical scenario.
I think the ultimate problem is that a scenario that dictates the agent's choice regardless of their decision theory is a scenario in which decision theories cannot be faithfully executed; that is, the scenario denies that the agent's innards can implement any decision theory that produces a different choice.
But this problem arises just the same in Newcomb's problem! If Omega perfectly predicts your choice, then you can't do anything but that choice, and the problem is equally meaningless: your choice is dictated irrespective of your decision theory. Just as we could eliminate the gene by teaching EDT, we could make Omega always fill box B with money by teaching TDT.
Replies from: JGWeissman↑ comment by JGWeissman · 2009-09-24T23:04:36.990Z · LW(p) · GW(p)
Just as we could eliminate the gene by teaching EDT, we could make Omega always fill box B with money by teaching TDT.
It makes sense that you can alter Omega's prediction by altering the agent's decision theory because the decision theory is examined in making the prediction. This does not correspond to the smoking genes. The inheritance of genes that do not cause a person to have a particular decision theory (or are correlated with genes that do) is not correlated with a person having a decision theory. And if you are postulating that the smoking gene also causes the person to have a particular decision theory, then you have a fully general counterargument against any decision theory: just suppose it is caused by the same gene that causes cancer.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-29T19:31:00.626Z · LW(p) · GW(p)
Sorry, I missed this when you first posted it.
It makes sense that you can alter Omega's prediction by altering the agent's decision theory because the decision theory is examined in making the prediction.
But you can't alter Omega's prediction at the point where you enter the problem, just like you can't alter the presence of the gene at the point where you enter the TSL problem. (Yes, redundancy, I know, but it flows better.)
The inheritance of genes that do not cause a person to have a particular decision theory (or are correlated with genes that do) is not correlated with a person having a decision theory ...
Well, under the altered TSL problem I posited, the gene does cause a particular decision theory (or at least, limits you to those decision theories that result in a decision to smoke).
And if you are postulating that the smoking gene also causes the person to have a particular decision theory, then you have a fully general counterargument against any decision theory: just suppose it is caused by the same gene that causes cancer.
And I also have a fully general counterargument against any decision theory in Newcomb's problem too! It (the decision theory) was caused by the same observation by Omega that led it to choose what to put in the second box.
Bringing this back to the original topic: Psychohistorian appears correct to say that the problems force you to make contradictory assumptions.
Replies from: JGWeissman↑ comment by JGWeissman · 2009-09-29T20:11:42.753Z · LW(p) · GW(p)
You are presenting a symmetry between the two cases by ignoring details. If you look at which events cause which, you can see the differences.
And I also have a fully general counterargument against any decision theory in Newcomb's problem too! It (the decision theory) was caused by the same observation by Omega that led it to choose what to put in the second box.
So, you can make a Newcomb like problem (Omega makes a decision based on its prediction of your decision in a way that it explains to you before making the decision) in which TDT does not win?
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-29T20:28:11.677Z · LW(p) · GW(p)
You are presenting a symmetry between the two cases by ignoring details. If you look at which events cause which, you can see the differences.
I don't see it. Would you mind pointing out the obvious for me?
So, you can make a Newcomb like problem (Omega makes a decision based on its prediction of your decision in a way that it explains to you before making the decision) in which TDT does not win?
The modified smoking lesion problem I just gave. TDT reasons (parallel to the normal smoking lesion) that "I have the gene or I don't, so it doesn't matter what I do". But strangely, everyone who doesn't smoke ends up not getting cancer.
Replies from: JGWeissman↑ comment by JGWeissman · 2009-09-29T21:16:05.661Z · LW(p) · GW(p)
The modified smoking lesion problem is not based on Omega making predictions. If you tried to come up with such an example that stumps TDT, you would run into the asymmetries between Omega's predictions and the common cause gene.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-29T21:24:58.412Z · LW(p) · GW(p)
The modified smoking lesion problem is not based on Omega making predictions
It still maps over. You just replace "Omega predicts one or two box" with "you have or don't have the gene". "Omega predicts one box" corresponds to not having the gene.
Replies from: JGWeissman↑ comment by JGWeissman · 2009-09-29T21:42:33.904Z · LW(p) · GW(p)
If it maps over, why does TDT one box in Newcomb's problem and smoke in the modified smoking lesion problem?
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-29T21:54:02.911Z · LW(p) · GW(p)
I meant that something plays the functional role of Omega. There is a dissimilarity, but not enough to make it irrelevant. The point that Psychohistorian and I are making is that the problems have subtly contradictory premises, which I think the examples (including modified TSL) show. Because the premises are contradictory, you can assume away a different one in each case.
In the original TSL, TDT says "hey, it's decided anyway whether I have cancer, so my choice doesn't affect my cancer". But in Newcomb's problem, TDT says, "even though Omega has decided the contents of the box, my choice affects my reward".
comment by wedrifid · 2009-09-19T16:13:09.598Z · LW(p) · GW(p)
Newcomb's problem relies on convenient ignorance and a paradoxical concept of free will
Newcomb's problem does no such thing. Except where otherwise specified, hypotheticals are to be considered to occur in the same reductionist universe that we inhabit. When reading the problem as provided I am told that I am ('you are') shown two boxes and given a choice. I am always subject to the standard laws of physics. There is nothing in the problem to suggest that applying a naive conception of free will would be any less naive in the circumstance of the hypothetical than it is anywhere else.
There is nothing wrong with Newcomb's problem. If someone gets confused, imagines paradoxes or makes a poor decision then that is a problem with either the decision theory they are using or in their application thereof. If a particular description of the problem included details about how a decision theory should handle the problem then that part would have to be investigated to see if the problems described by Psycho apply.
Replies from: Psychohistorian↑ comment by Psychohistorian · 2009-09-19T17:34:38.304Z · LW(p) · GW(p)
This is correct; that sentence did not fit in well with the rest of the point and has been amended accordingly. The assumptions necessary to use Newcomb's to refute CDT are paradoxical; the hypothetical itself is not, though we are very much prone to think of it incorrectly, because naive free will is so basic to our intuition.
comment by [deleted] · 2009-09-19T14:01:24.658Z · LW(p) · GW(p)
I confess, when I first wrote out the CDT calculations for Newcomb's Paradox, I assumed that the prediction 'caused' the choice, and got one-boxing as a result.
Then I got confused, because I had heard by then that CDT suggests two-boxing.
Now I'm working on getting a copy of Causality so I can figure out if the network formalism still supports the prediction not being causally binding on the outcome.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2009-09-19T17:27:24.619Z · LW(p) · GW(p)
I confess, when I first wrote out the CDT calculations for Newcomb's Paradox, I assumed that the prediction 'caused' the choice, and got one-boxing as a result.
Then I got confused, because I had heard by then that CDT suggests two-boxing.
I like to say that the framework of CDT is not capable of understanding the statement of Newcomb's problem. But I'm not sure anyone agrees when I say it.
comment by Stuart_Armstrong · 2009-09-24T18:26:06.838Z · LW(p) · GW(p)
Newcomb's problem can be modelled by using the correlated decision principle and viewing yourself and Omega's simulation of you as being the correlated decision-makers.
This modification may even make it amenable to CDT, but I'm not sure about that.
comment by wedrifid · 2009-09-19T16:13:26.811Z · LW(p) · GW(p)
There is a flaw in the Newcomb's Problem description in our wiki. The clause "Omega has never been wrong" does not imply that Omega has ever been right either. It needs an extra phrase or so. Who is able to fix that?
Replies from: JGWeissman↑ comment by JGWeissman · 2009-09-19T17:19:56.982Z · LW(p) · GW(p)
I have modified the article to specify that Omega has played the game many times.
Anyone can make these edits, just sign in or create an account and an edit tab will appear for all articles while you are logged in.
comment by SilasBarta · 2009-09-19T16:15:11.217Z · LW(p) · GW(p)
I will admit I've had similar thoughts on these problems:
On the smoking lesion, my reaction to the problem was, "Having an urge to smoke is evidence that I have the gene, but having a strong enough urge that I actually go through with smoking is even stronger evidence. Therefore, my decision to smoke determines whether I have the gene."
Similarly, on Newcomb's problem, let's say I shift between thinking I should one-box and two-box. That implies that when I temporarily settle on "one box", there should be money in the box, but then as I shift to "two box", the money somehow goes away, all of which implies a causal influence on the box, after Omega has decided to put money in it.
Btw, I suggested a different causal graph for looking at Newcomb's problem.
Replies from: Peter_de_Blanc↑ comment by Peter_de_Blanc · 2009-09-19T16:34:49.409Z · LW(p) · GW(p)
let's say I shift between thinking I should one-box and two-box. That implies that when I temporarily settle on "one box", there should be money in the box, but then as I shift to "two box", the money somehow goes away
Huh? If you end up two-boxing, then there's no money in the box even when you're thinking of one-boxing.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-19T16:44:49.807Z · LW(p) · GW(p)
But at the time you plan to one-box, you should believe there's money. But then when you plan to two-box, you should switch to believing there's no money. But why should you update your beliefs when all that changed was your intent?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-09-19T16:58:35.966Z · LW(p) · GW(p)
But at the time you plan to one-box, you should believe there's money.
Maybe you shouldn't. Money is there if you actually take one box, not if you merely plan to. After you do take one box, or after you precommit to taking one box in a way that you know you won't alter, only then you should believe there is money.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-21T18:44:30.299Z · LW(p) · GW(p)
Okay, then let me put it this way: up until I perform the final, irreversible act of choosing one-box or two-box, I can make estimates of what conclusion I will arrive at.
For a simple example, if I'm going to compute 12957 + 19234, then before I work out the addition, I might first estimate it as being between 30,000 and 40,000. As I get closer, I might provisionally estimate it to a precision of one integer.
I can do these same kinds of estimations of what I will decide is the best option for me. I can think, "as of now, two-boxing looks like a good choice, and what I'll probably do (p=.52), but I need to consider some other stuff first". And during that estimation, I estimate that, when I two-box, there will be no money in box B (because Omega has foreseen this line of reasoning). As I converge on the final decision, this estimate of what I will do alters -- but why should that alter my beliefs about the box?
The only situation consistent with these predicates is one in which I really do switch between worlds with box B money and those without, as I contemplate my decision -- which is the real sticking point for me.
Replies from: Johnicholas↑ comment by Johnicholas · 2009-09-21T18:54:02.204Z · LW(p) · GW(p)
Why should your beliefs about your own future actions alter your beliefs about what is in the box?
One might also ask "Why should your beliefs about your own future actions alter your beliefs about Omega's past actions?"
Your future actions are coupled to Omega's past actions through the problem's premise - Omega is supposed to be very skilled at predicting you. This might be unusual, but it's not that odd.
Suppose someone may or may not have a gene causing a taste for cilantro, and they are tasting cilantro. As they get information about their own tastes, they also revise their estimate of their own ancestry. Is that paradoxical?
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-21T19:06:26.829Z · LW(p) · GW(p)
Suppose someone may or may not have a gene causing a taste for cilantro, and they are tasting cilantro. As they get information about their own tastes, they also revise their estimate of their own ancestry. Is that paradoxical?
That's not analogous, because you don't choose (directly) whether you like cilantro. In contrast, I am choosing what box to take, which means I'm choosing what Omega's past decision was. That can make sense, but only if you can accept that you shift between e.g. Everett branches as your computation unfolds.
Replies from: Vladimir_Nesov, Johnicholas↑ comment by Vladimir_Nesov · 2009-09-21T20:12:08.016Z · LW(p) · GW(p)
Omega's actions depend on, refer to, or are parameterized by your behavior. Behaviour here is a mathematical object specifying the meaning of, for example, the statement "it's how he'll act in that situation". This phrase can be meaningfully spoken at any time, independently of whether it's possible for you to be in this situation in the world in which this phrase is spoken. This is the sense in which your decisions are timeless: it's not so much you that is living in Platonia, as the referents of the questions about your behavior, future or past or hypothetical. These referents behave in a certain lawful way that is related to your behavior.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-22T21:42:37.070Z · LW(p) · GW(p)
Sorry for going back to the basics on this, but: what does my choice actually mean, then? If there is some mathematical object defining "how I'll act in a given situation", what does my choice mean in terms of that object? Am I e.g. simply learning the output of that object?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-09-27T16:37:51.991Z · LW(p) · GW(p)
Rather, the predictions are learning about your action. Look for what happens in physics, not for what "happens" in Platonia. That you'll believe something means that there are events in place that'll cause the belief to appear, and these events can be observed. When Omega reconstructs your decision, it answers a specific question about a property of the real world. The meaning of this question is your actual decision. This decision is connected to Omega's mind by processes that propagate evidence, mostly from common sources. Mathematical semantics is just a way of representing properties of this process, for example the fact that a correctly predicted algorithm will behave the same as the original algorithm of which this one is a prediction.
The trouble with decision-making is that you care about the effect of your choice, and so about the effect of predictions (or future reconstructions) of your choice. It's intuitively confusing that you can affect the past this way. This is a kind of problem of free will for the evidence about your future decisions found in the past. Just as it's not immediate to see how you can have free will in a deterministic world, it's not immediate how the evidence about your future decisions can have the same kind of free will. Even more so, the confusion is deepened by the fact that the evidence about your future action, while having free will, is bound to perform exactly the same action as you yourself will in the future.
Since both you and predictions about you seem to entertain a kind of free will, it's tempting to say that you are in fact all of them. But only the real you is most accurate, and others are reconstructed under the goal of getting as close as possible to the original.
↑ comment by Johnicholas · 2009-09-21T19:31:31.991Z · LW(p) · GW(p)
The analogy is between having imperfect information of your future choice (while choosing), and imperfect information of your own tastes (while tasting).
None of this Newcomb's problem stuff is relevant to quantum physics; even if we were living in a non-quantum, Newtonian world, we would have all the same experiences related to this problem.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-21T19:50:12.343Z · LW(p) · GW(p)
The analogy is between having imperfect information of your future choice (while choosing), and imperfect information of your own tastes (while tasting).
That still doesn't work. In making my choice (per AnnaSalamon's drawing of Eliezer_Yudkowsky's TDT causal model for Newcomb's problem), I get to disconnect my decision-making process from its parents (parents not shown in AnnaSalamon's drawing because they'd be disconnected anyway). I do not disconnect the influence of my genes when learning whether I like cilantro.
Moreover, while I can present myself with reasons to change my mind, I cannot (knowingly) feed myself relevant evidence that I do not like cilantro, arbitrarily changing the probability of a given past ancestry.
None of this Newcomb's problem stuff is relevant to quantum physics; even if we were living in a non-quantum, Newtonian world, we would have all the same experiences related to this problem.
Yes, the Everett branch concept isn't necessary, but still, the weirdness of the implications of the situation does indeed apply to whatever physical laws contain it.
comment by PhilGoetz · 2009-09-19T21:11:16.305Z · LW(p) · GW(p)
Newcomb's problem assumes two contradictory things about A. It assumes that, for the purpose of Causal Decision Theory, A is irrelevant and completely separated from your actual decision process; it assumes you have some kind of free will such that you can decide to two-box without this decision having been reflected in A. It also assumes that, for purposes of the actual outcome, A is quite relevant; if you decided to two-box, your decision will have been reflected in A.
Yes. Thank you. The Newcomb's problem statement assumes that you both do and do not have free will.
EDIT: I suppose that to someone like Eliezer, who can see Newcomb's problem as simply a problem that his decision logic must get right, rather than imagining himself inside the problem and asking what he would do, it isn't necessary to assume free will. But if you see it that way, it isn't a paradox. It's hardly even interesting.