Can anyone explain to me why CDT two-boxes?

post by Andreas_Giger · 2012-07-02T06:06:36.198Z · LW · GW · Legacy · 136 comments

I have read lots of LW posts on this topic, and everyone seems to take this for granted without giving a proper explanation. So if anyone could explain this to me, I would appreciate that.

This is a simple question that is in need of a simple answer. Please don't link to pages and pages of theorycrafting. Thank you.

 

Edit: Since posting this, I have come to the conclusion that CDT doesn't actually play Newcomb. Here's a disagreement with that statement:

If you write up a CDT algorithm and then put it into a Newcomb's problem simulator, it will do something. It's playing the game; maybe not well, but it's playing.

And here's my response:

The thing is, an actual Newcomb simulator can't possibly exist because Omega doesn't exist. There are tons of workarounds, like using coin tosses as a substitute for Omega and ignoring the results whenever the coin was wrong, but that is something fundamentally different from Newcomb.

You can only simulate Newcomb in theory, and it is perfectly possible to just not play a theoretical game, if you reject the theory it is based on. In theoretical Newcomb, CDT doesn't care about the rule of Omega being right, so CDT does not play Newcomb.

If you're trying to simulate Newcomb in reality by substituting Omega with someone who has only empirically been proven right, you substitute Newcomb with a problem that consists of little more than simple calculation of priors and payoffs, and that's hardly the point here.

 

Edit 2: Clarification regarding backwards causality, which seems to confuse people:

Newcomb assumes that Omega is omniscient; more importantly, this means that the decision you make right now determines whether Omega has already put money in the box or not. Obviously this is backwards causality, and therefore not possible in real life, which is why Nozick doesn't spend too much ink on it.

But if you rule out the possibility of backwards causality, Omega can only make its prediction of your decision based on all your actions up to the point where it has to decide whether to put money in the box or not. In that case, if you take two people who have so far always acted (decided) identically, but one will one-box while the other one will two-box, Omega cannot make different predictions for them. And no matter what prediction Omega makes, you don't want to be the one who one-boxes.

 

Edit 3: Further clarification on the possible problems that could be considered Newcomb:

There are four types of Newcomb problems:

  1. Omniscient Omega (backwards causality) - CDT rejects this case, which cannot exist in reality.
  2. Fallible Omega, but still backwards causality - CDT rejects this case, which cannot exist in reality.
  3. Infallible Omega, no backwards causality - CDT correctly two-boxes. To improve payouts, CDT would have to have decided differently in the past, which is not decision theory anymore.
  4. Fallible Omega, no backwards causality - CDT correctly two-boxes. To improve payouts, CDT would have to have decided differently in the past, which is not decision theory anymore.

That's all there is to it.

 

Edit 4: Excerpt from Nozick's "Newcomb's Problem and Two Principles of Choice":

Now, at last, to return to Newcomb's example of the predictor. If one believes, for this case, that there is backwards causality, that your choice causes the money to be there or not, that it causes him to have made the prediction that he made, then there is no problem. One takes only what is in the second box. Or if one believes that the way the predictor works is by looking into the future; he, in some sense, sees what you are doing, and hence is no more likely to be wrong about what you do than someone else who is standing there at the time and watching you, and would normally see you, say, open only one box, then there is no problem. You take only what is in the second box. But suppose we establish or take as given that there is no backwards causality, that what you actually decide to do does not affect what he did in the past, that what you actually decide to do is not part of the explanation of why he made the prediction he made. So let us agree that the predictor works as follows: He observes you sometime before you are faced with the choice, examines you with complicated apparatus, etc., and then uses his theory to predict on the basis of this state you were in, what choice you would make later when faced with the choice. Your deciding to do as you do is not part of the explanation of why he makes the prediction he does, though your being in a certain state earlier, is part of the explanation of why he makes the prediction he does, and why you decide as you do.

I believe that one should take what is in both boxes. I fear that the considerations I have adduced thus far will not convince those proponents of taking only what is in the second box. Furthermore I suspect that an adequate solution to this problem will go much deeper than I have yet gone or shall go in this paper. So I want to pose one question. I assume that it is clear that in the vaccine example, the person should not be convinced by the probability argument, and should choose the dominant action. I assume also that it is clear that in the case of the two brothers, the brother should not be convinced by the probability argument offered. The question I should like to put to proponents of taking only what is in the second box in Newcomb's example (and hence not performing the dominant action) is: what is the difference between Newcomb's example and the other two examples which make the difference between not following the dominance principle, and following it?

136 comments

Comments sorted by top scores.

comment by [deleted] · 2012-07-02T06:47:31.743Z · LW(p) · GW(p)

CDT acts to physically cause nice things to happen. CDT can't physically cause the contents of the boxes to change, and fails to recognize the non-physical dependence of the box contents on its decision, which is a result of the logical dependence between CDT and Omega's CDT simulation. Since CDT believes its decision can't affect the contents of the boxes, it takes both in order to get any money that's there. Taking both boxes is in fact the correct course of action for the problem CDT thinks it's facing, in which a guy may have randomly decided to leave some money around for it. CDT doesn't think that it will always get the $1 million; it is capable of representing a background probability that Omega did or didn't do something. It just can't factor out a part of that uncertainty, the part that's the same as its uncertainty about what it will do, into a causal link that points from the present to the past (or from a timeless platonic computation node to both the present and the CDT sim in the past, as TDT does).

Or to put it in a different light, people who talked about causal decision theories historically were pretty vague, but basically said that causality was that thing by which you can influence the future but not the past or events outside your light cone, so when we build more formal versions of CDT, we make sure that's how it reasons and we keep that sense of the word causality.
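
To make the contrast concrete, here is a rough Python sketch; it is a toy calculation only, with assumed payoffs, an assumed 0.99 predictor accuracy, and illustrative function names, not a formal statement of either theory:

    # Illustrative sketch: CDT treats the probability that the opaque box
    # holds $1,000,000 as fixed, since its action cannot causally change it,
    # while a calculation that honours the dependence described above
    # conditions that probability on the action itself.
    M, K = 1_000_000, 1_000
    accuracy = 0.99   # assumed predictor accuracy, for illustration

    def cdt_eu(action, p_million):
        # p_million is whatever prior CDT holds; it is the same for both actions.
        return p_million * M + (K if action == "two-box" else 0)

    def dependence_eu(action):
        # Here the probability of the $1,000,000 tracks the action via the prediction.
        p_million = accuracy if action == "one-box" else 1 - accuracy
        return p_million * M + (K if action == "two-box" else 0)

    for a in ("one-box", "two-box"):
        print(a, cdt_eu(a, p_million=0.5), dependence_eu(a))
    # CDT ranks two-boxing higher for any fixed p_million; the dependence
    # calculation ranks one-boxing higher whenever accuracy is above ~0.5.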

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T06:53:28.144Z · LW(p) · GW(p)

Thank you, you just confirmed what I posted as a reply to "see", which is that CDT doesn't play in Newcomb at all.

Replies from: jsalvatier
comment by jsalvatier · 2012-07-02T08:39:16.209Z · LW(p) · GW(p)

I don't think the way you're phrasing that is very useful. If you write up a CDT algorithm and then put it into a Newcomb's problem simulator, it will do something. It's playing the game; maybe not well, but it's playing.

Perhaps you could say, "'CDT' is poorly named; if you follow the actual principles of causality, you'll get an algorithm that gets the right answer" (I've seen people make a claim like that). Or "you can think of CDT as reframing the problem as an easier one that it knows how to play, but is substantially different and thus gets the wrong answer". Or something else like that.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T08:55:00.975Z · LW(p) · GW(p)

The thing is, an actual Newcomb simulator can't possibly exist because Omega doesn't exist. There are tons of workarounds, like using coin tosses as a substitute for Omega and ignoring the results whenever the coin was wrong, but that is something fundamentally different from Newcomb.

You can only simulate Newcomb in theory, and it is perfectly possible to just not play a theoretical game, if you reject the theory it is based on. In theoretical Newcomb, CDT doesn't care about the rule of Omega being right, so CDT does not play Newcomb.

If you're trying to simulate Newcomb in reality by substituting Omega with someone who has only empirically been proven right, you substitute Newcomb with a problem that consists of little more than simple calculation of priors and payoffs, and that's hardly the point here.

Replies from: philh
comment by philh · 2012-07-02T11:55:58.740Z · LW(p) · GW(p)

If Omega is fallible (e.g. human), CDT still two-boxes even if Omega empirically seems to be wrong one time in a million.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T14:24:00.274Z · LW(p) · GW(p)

Fallible does not equal human. A human would still determine whether to put money in the box or not based only on the past, not on the future, and at that point the problem becomes "if you've been CDT so far, you won't get the $1,000,000, no matter what you do in this instance of the game."

Replies from: shminux
comment by shminux · 2012-07-03T02:17:38.605Z · LW(p) · GW(p)

Suppose that Omega is wrong with probability p<1 (this is a perfectly realistic and sensible case). What does (your interpretation of) CDT do in this case, and with what probability?

Here is my EDT calculation:

  • calculate EV(2-box) = p(1-box prediction | 2-box) × 1,001,000 + p(2-box prediction | 2-box) × 1,000 = 1,001,000p + 1,000(1-p)

  • calculate EV(1-box) = p(1-box prediction | 1-box) × 1,000,000 + p(2-box prediction | 1-box) × 0 = 1,000,000(1-p)

  • pick the larger of the two (which is 1-box if p < 999,000/2,000,000 ≈ 50%, 2-box otherwise).

Thus one should 1-box even if Omega is slightly better than chance.
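
The same calculation as a quick Python sketch (payoffs assumed to be the standard $1,000,000 and $1,000; the break-even point works out to p = 0.4995):

    # Expected values as a function of p, the probability that Omega's
    # prediction is wrong.
    def ev_one_box(p):
        return (1 - p) * 1_000_000          # a wrong prediction leaves the box empty

    def ev_two_box(p):
        return p * 1_001_000 + (1 - p) * 1_000

    for p in (0.0, 0.1, 0.49, 0.51, 0.9):
        better = "1-box" if ev_one_box(p) > ev_two_box(p) else "2-box"
        print(p, ev_one_box(p), ev_two_box(p), better)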

comment by Desrtopa · 2012-07-04T14:10:47.146Z · LW(p) · GW(p)

Omniscient Omega doesn't entail backwards causality, it only entails omniscience. If Omega can extrapolate how you would choose boxes from complete information about your present, you're not going to fool it no matter how many times you play the game.

Imagine a machine that sorts red balls from green balls. If you put in a red ball, it spits it out of Terminal A, and if you put in a green ball it spits it out of Terminal B. If you showed a completely colorblind person how you could predict which terminal a ball would get spit out of before putting it into the machine, it might look to them like backwards causality, but only forwards causality is involved.

If you know that Omega can predict your actions, you should condition your decisions on the knowledge that Omega will have predicted you correctly.

Humans are predictable enough in real life to make this sort of reasoning salient. For instance, I have a friend who, when I ask her questions such as "you know what happened to me?" or "You know what I think is pretty cool?" or any similarly open ended question, will answer "Monkeys?" as a complete non sequitur, more often than not (it's functionally her way of saying "no, go on.") However, sometimes she will not say this, and instead say something like "No, what?" A number of times, I have contrived situations where the correct answer is "monkeys," but only asked the question when I predicted that she would not say "monkeys." So far, I have predicted correctly every time; she has never correctly guessed "monkeys."

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-04T14:39:12.458Z · LW(p) · GW(p)

Omniscient Omega doesn't entail backwards causality, it only entails omniscience. If Omega can extrapolate how you would choose boxes from complete information about your present, you're not going to fool it no matter how many times you play the game.

I agree if you say that a more accurate statement would have been "omniscient Omega entails either backwards causality or the absence of free will."

I actually assign a rather high probability to free will not existing; however discussing decision theory under that assumption is not interesting at all.

Regardless of the issue of free will (which I don't want to discuss because it is obviously getting us nowhere), if Omega makes its prediction solely based on your past, then your past suddenly becomes an inherent part of the problem. This means that two-boxing-You either has a different past than one-boxing-You and therefore plays a different game, or that Omega makes the same prediction for both versions of you, in which case two-boxing-You wins.

Replies from: Desrtopa
comment by Desrtopa · 2012-07-04T15:04:41.012Z · LW(p) · GW(p)

Two-boxing-you is a different you than one-boxing-you. They make different decisions in the same scenario, so something about them must not be the same.

Omega doesn't make its decision solely based on your past, it makes the decision based on all information salient to the question. Omega is an omniscient perfect reasoner. If there's anything that will affect your decision, Omega knows about it.

If you know that Omega will correctly predict your actions, then you can draw a decision tree which crosses off the outcomes "I choose to two box and both boxes contain money," and "I choose to one box and the other box contains no money," because you can rule out any outcome that entails Omega having mispredicted you.

Probability is in the mind. The reality is that either one or both boxes already contain money, and you are already going to choose one box or both, in accordance with Omega's prediction. Your role is to run through the algorithm to determine what is the best choice given what you know. And given what you know, one boxing has higher expected returns than two boxing.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-04T16:33:14.202Z · LW(p) · GW(p)

Omega doesn't make its decision solely based on your past, it makes the decision based on all information salient to the question. Omega is an omniscient perfect reasoner. If there's anything that will affect your decision, Omega knows about it.

Omega cannot have the future as an input; any knowledge Omega has about the future is a result of logical reasoning based upon its knowledge of the past.

If you know that Omega will correctly predict your actions

You cannot know this, unless you (a) consider backwards causality, which is wrong, or (b) consider absence of free will, which is uninteresting.

You can also not know that Omega will correctly predict your choice with p≠0.5. At best, you can only know that Omega predicts you to one-box/two-box with p=whatever.

Replies from: wedrifid, TrE, Desrtopa
comment by wedrifid · 2012-07-04T16:44:04.816Z · LW(p) · GW(p)

If you know that Omega will correctly predict your actions

You cannot know this, unless you (a) consider backwards causality, which is wrong, or (b) consider absence of free will, which is uninteresting.

Yes you can. Something existing that can predict your actions in no way precludes free will. (I suppose definitions of "free will" could be constructed such that predicting negates it, in which case you can still be predicted, don't have free will and the situation is exactly as interesting as it was before.)

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-04T17:33:36.212Z · LW(p) · GW(p)

Let us assume a repeated game where an agent is presented with a decision between A and B, and Omega observes that the agent chooses A in 80% and B in 20% of the cases.

If Omega now predicts the agent to choose A in the next instance of the game, then the probability of the prediction being correct is 80% - from Omega's perspective as long as the roll hasn't been made, and from the agent's perspective as long as no decision has been made. However, once the decision has been made, the probability of the prediction being correct from the perspective of the agent is either 100% (A) or 0% (B).

If, instead, Omega is a ten-sided die with 8 A-sides and 2 B-sides, then the probability of the prediction being correct is 68% - from Omega's perspective, and from the agent's perspective as long as no decision has been made. However, once the decision has been made, the probability of the prediction being correct from the perspective of the agent is either 80% (A) or 20% (B).

If the agent knows that Omega makes the prediction before the agent makes the decision, then the agent cannot make different decisions without affecting the probability of the prediction being correct, unless Omega's prediction is a coin toss (p=0.5).

The only case where the probability of Omega being correct is unchangeable with p≠0.5 is the case where the agent cannot make different decisions, which I call "no free will".
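
A quick check of these numbers in Python (using the 80%/20% frequencies assumed above):

    # The agent chooses A with probability 0.8 and B with probability 0.2.
    p_a = 0.8

    # Case 1: Omega predicts A (the agent's more frequent choice) every time.
    p_correct_prediction = p_a                          # 0.8

    # Case 2: "Omega" is a die that independently predicts A with probability 0.8.
    p_correct_die = p_a * 0.8 + (1 - p_a) * 0.2         # 0.8*0.8 + 0.2*0.2 = 0.68

    print(p_correct_prediction, p_correct_die)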

Replies from: FAWS
comment by FAWS · 2012-07-04T18:49:17.091Z · LW(p) · GW(p)

You are using the wrong sense of "can" in "cannot make different decisions". The every day subjective experience of "free will" isn't caused by your decisions being indeterminate in an objective sense, that's the incoherent concept of libertarian free will. Instead it seems to be based on our decisions being dependent on some sort of internal preference calculation, and the correct sense of "can make different decisions" to use is something like "if the preference calculation had a different outcome that would result in a different decision".

Otherwise results that are entirely random would feel more free than results that are based on your values, habits, likes, memories and other character traits, i.e. the things that make you you. Not at all coincidentally this is also the criterion for whether it makes sense to bother thinking about the decision.

You yourself don't know the result of the preference calculation before you run it, otherwise it wouldn't feel like a free decision. But whether Omega knows the result in advance has no impact on that at all.

comment by TrE · 2012-07-04T19:50:30.947Z · LW(p) · GW(p)

So apparently you have not followed my advice to consider free will. I really recommend that you read up on this because it seems to cause a significant part of our misunderstanding here.

comment by Desrtopa · 2012-07-04T17:26:58.997Z · LW(p) · GW(p)

You cannot know this, unless you (a) consider backwards causality, which is wrong, or (b) consider absence of free will, which is uninteresting.

You can have "free will" in the sense of being able to do what you want within the realm of possibility, while your wants are set deterministically.

If I offer most people a choice between receiving a hundred dollars, or being shot in the head, I can predict with near certainty that they will choose the hundred dollars, because I know enough about what kind of agents they are. Any formulation of "free will" which says I should not be able to do this is simply wrong. If I were making the same offer to Queebles (a species which hates money and loves being shot in the head), I would predict the reverse. Omega, having very complete information and perfect reasoning, can predict in advance whether you will one-box or two-box.

You can also not know that Omega will correctly predict your choice with p≠0.5. At best, you can only know that Omega predicts you to one-box/two-box with p=whatever.

You can predict that Kasparov will beat you in a chess match without knowing the specific moves he'll make. If you could predict all the moves he'd make, you could beat him in a chess match, but you can't. Similarly, if you could assign nonequal probabilities to how Omega would fill the boxes irrespective of your own choice, then you could act on those probabilities and beat Omega more than half the time, so that would entail a p≠0.5. probability of Omega predicting your choice.

If you play chess against a perfect chess playing machine, which has solved the game of chess, then you can predict in advance that if you decide to play black, black will lose, and if you decide to play white, white will lose, because you know that the machine is playing on a higher level than you. And if you play through Newcomb's problem with Omega, you can predict that if you one box, both boxes will contain money, and if you two box, only one will. Omega is on a higher level than you, the game has been played, and you already lost.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-04T17:45:30.856Z · LW(p) · GW(p)

The reason why you lose in chess is because you will make the wrong moves, and the reason why you will make the wrong moves is that there are much too many of them to make it likely enough that you will find the right ones by chance. This is not the case in a game that consists of only two different moves.

If I offer most people a choice between receiving a hundred dollars, or being shot in the head, I can predict with near certainty that they will choose the hundred dollars, because I know enough about what kind of agents they are.

What if you also tell them that you've made a prediction about them, and if your prediction is correct, they will get the money and not be shot even if their decision was to get shot? (If your prediction was wrong, the same happens as in your original game.)

What if you were in that very situation, with Omega, whose predictions are always right, holding the money and the gun? Could you make a distinction between the choices offered to you?

Replies from: Desrtopa
comment by Desrtopa · 2012-07-04T18:34:02.153Z · LW(p) · GW(p)

The reason why you lose in chess is because you will make the wrong moves, and the reason why you will make the wrong moves is that there are much too many of them to make it likely enough that you will find the right ones by chance. This is not the case in a game that consists of only two different moves.

In a game with two moves, you want to model the other person, and play one level higher than that. So if I take the role of Omega and put you in Newcomb's problem, and you think I'll expect you to two box because you've argued in favor of two boxing, then you expect me to put money in only one box, so you want to one box, thereby beating your model of me. But if I expect you to have thought that far, then I want to put money in both boxes, making two boxing the winning move, thereby beating my model of you. And you expect me to have thought that far, you want to play a level above your model of me and one box again.

If humans followed this kind of recursion infinitely, it would never resolve and you couldn't do better than maximum entropy in predicting the other person's decision. But people don't do that, humans tend to follow very few levels of recursion when modeling others (example here, you can look at the comments for the results.) So if one person is significantly better at modeling the other, they'll have an edge and be able to do considerably better than maximum entropy in guessing the other person's choice.

Omega is a hypothetical entity who models the universe perfectly. If you decide to one box, his model of you decides to one box, so he plays a level above that and puts money in both boxes. If you decide to two box, his model of you decides to two box, so he plays a level above that and only puts money in one box. Any method of resolving the dilemma that you apply, his model of you also applies; if you decide to flip a coin, his model of you also decides to flip a coin, and because Omega models the whole universe perfectly, not just you, the coin in his model shows the same face as the coin you actually flip. This does essentially require Omega to be able to fold up the territory and put it in his pocket, but it doesn't require any backwards causality. Real life Newcomblike dilemmas involve predictors who are very reliable, but not completely infallible.

What if you also tell them that you've made a prediction about them, and if your prediction is correct, they will get the money and not be shot even if their decision was to get shot? (If your prediction was wrong, the same happens as in your original game.)

What if you were in that very situation, with Omega, whose predictions are always right, holding the money and the gun? Could you make a distinction between the choices offered to you?

I could choose either, knowing that the results would be the same either way. Either I choose the money, in which case Omega has predicted that I will choose the money, and I get the money and don't get shot, or I choose the bullet, in which case, Omega has predicted that I choose the bullet, and I will get the money and not get shot. In this case, you don't need Omega's perfect prediction to avoid shooting the other person, you can just predict that they'll choose to get shot every time, because whether you're right or wrong they won't get shot, and if you want to shoot them, you should always predict that they'll choose the money, because predicting that they'll choose the money and having them choose the bullet is the only branch that results in shooting them. Similarly, if you're offered the dilemma, you should always pick the money if you don't want to get shot, and the bullet if you do want to get shot. It's a game with a very simple dominant strategy on each side.
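
Enumerating the four branches of that modified game in a short sketch (the rules are as described above; the outcome function is just an illustration) makes the dominant strategies explicit:

    # Rules as described: if the prediction matches the choice, the player
    # gets the money and is not shot; if the prediction is wrong, the
    # original game applies and the player simply gets what they chose.
    def outcome(prediction, choice):
        if prediction == choice:
            return "money, not shot"
        return "money, not shot" if choice == "money" else "shot"

    for prediction in ("money", "bullet"):
        for choice in ("money", "bullet"):
            print(prediction, choice, "->", outcome(prediction, choice))
    # Only the branch (prediction = "money", choice = "bullet") ends in a shooting.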

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-04T19:22:24.453Z · LW(p) · GW(p)

In a game with two moves, you want to model the other person

I don't see why you think this would apply to Newcomb. Omega is not an "other person"; it has no motivation, no payoff matrix.

I could choose either, knowing that the results would be the same either way.

Really? If your decision theory allows you to choose either option, then how could Omega possibly predict your decision?

Replies from: Desrtopa, Vladimir_Nesov
comment by Desrtopa · 2012-07-04T19:48:13.859Z · LW(p) · GW(p)

I don't see why you think this would apply to Newcomb. Omega is not an "other person"; it has no motivation, no payoff matrix.

Whatever its reasons, Omega wants to set up the boxes so that if you one box, both boxes have money, and if you two box, only one box has money. It can be said to have preferences insofar as they lead to it using its predictive powers to try to do that.

I can't play at a higher level than Omega's model of me. Like playing against a stronger chess player, I can only predict that they will win. Any step where I say "It will stop here, so I'll do this instead," it won't stop there, and Omega will turn out to be playing at a higher level than me.

Really? If your decision theory allows you to choose either option, then how could Omega possibly predict your decision?

Because on some level my choice is going to be nonrandom (I am made of physical particles following physical rules,) and if Omega is an omniscient perfect reasoner, it can determine my choice in advance even if I can't.

But as it happens, I would choose the money, because choosing the money is a dominant strategy for anything up to absolute certainty in the other party's predictive abilities, and I'm not inclined to start behaving differently as soon as I theoretically have absolute certainty.

comment by Vladimir_Nesov · 2012-07-04T19:26:46.411Z · LW(p) · GW(p)

If your decision theory allows you to choose either option

What you actually choose is one particular option (you may even strongly suspect in advance which one; and someone else might know it even better). "Choice" doesn't imply lack of determinism. If what you choose is something definite, it could as well be engraved on a stone tablet in advance, if it was possible to figure out what the future choice turns out to be. See Free will (and solution).

comment by wedrifid · 2012-07-02T09:09:02.708Z · LW(p) · GW(p)

This is a simple question that is in need of a simple answer.

Because $1,000 is greater than $0, $1,001,000 is greater than $1,000,000 and those are the kind of comparisons that CDT cares about.

Please don't link to pages and pages of theorycrafting. Thank you.

You haven't seemed to respond to the 'simple' thus far and have instead defied it aggressively. That leaves you either reading the theory or staying confused.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T09:36:24.141Z · LW(p) · GW(p)

Because $1,000 is greater than $0, $1,001,000 is greater than $1,000,000 and those are the kind of comparisons that CDT cares about.

... which means that CDT doesn't play Newcomb, because there are no $1,001,000 or $0 in Newcomb.

You haven't seemed to respond to the 'simple' thus far and have instead defied it aggressively. That leaves you either reading the theory or staying confused.

I have never said that CDT one-boxes in Newcomb, and I don't feel very confused right now. In fact the very first reply I got fully answered my question. I have edited my top level post, please refer to that if you feel the need to discuss this further.

Replies from: wedrifid
comment by wedrifid · 2012-07-02T09:49:20.858Z · LW(p) · GW(p)

and I don't feel very confused right now

We can see that. That's why neither simple explanations nor theorycrafting have all that much chance of being understood.

I have edited my top level post, please refer to that if you feel the need to discuss this further.

If it seemed like the top level post wasn't going to be downvoted below visibility I would have to respond to it in order to prevent muddled thinking from spreading.

comment by shokwave · 2012-07-02T12:45:35.613Z · LW(p) · GW(p)

If you ask a mathematician to find 0x + 1 for x = 3, they will answer 1. If you then ask the mathematician to find the 10th root of the factorial of the eighth Mersenne prime, multiplied by zero, plus one, they will answer 1. You may protest they didn't actually calculate the eighth Mersenne prime, find its factorial, or calculate the tenth root of that, but you can't deny they gave the right answer.

If you put CDT in a room with a million dollars in Box A and a thousand dollars in Box B (no Omega, just the boxes), and give it the choice of either A or both, it will take both, and walk away with one million and one thousand dollars. If you explain this whole Omega thing to CDT, then put it in the room, it will notice that it doesn't actually need to calculate the eighth Mersenne prime, etc, because when Omega leaves you are effectively multiplying by zero - all the fancy simulating is irrelevant because the room is just two boxes that may contain money, and you can take both.

Yes, CDT doesn't think it's playing Newcomb's Puzzle, it thinks it's playing "enter a room with money".

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T14:07:37.994Z · LW(p) · GW(p)

You're completely right, except that (assuming I understand you correctly) you're implying CDT only thinks it's playing "room with money", while in reality it would be playing Newcomb.

And that's the issue; in reality Newcomb cannot exist, and if in theory you think you're playing something, you are playing it.

Does that make sense?

Replies from: shokwave
comment by shokwave · 2012-07-02T16:22:10.211Z · LW(p) · GW(p)

Perfect sense. Theorising that CDT would lose because it's playing a different game is uninteresting as a thought experiment; if I theorise that any decision theory is playing a different game it will also lose; this is not a property of CDT but of the hypothetical.

Let's turn to the case of playing in reality, as it's the interesting one.

If you grant that Newcomb paradoxes might exist in reality, then there is a real problem: CDT can't distinguish between free money boxes and Newcomb paradoxes, so when it encounters a Newcomb situation it underperforms.

If you claim Newcomb cannot exist in reality, then this is not a problem with CDT. I (and hopefully others, though I shan't speak for them) would accept that this is not a problem with CDT if it is shown that Newcomb's is not possible in real life - but we are arguing against you here because we think Newcomb is possible. (Okay, I did speak for them).

I disagree on two points: one, I think a simulator is possible (that is, Omega's impossibility comes from other powers we've given it, we can remove those powers and weaken Omega to a fits-in-reality definition without losing prediction), and two, I don't think the priors-and-payoffs approach to an empirical predictor is correct (for game-theoretic reasons which I can explicate if you'd like, but if it's not the point of contention it would only distract).

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T17:00:35.299Z · LW(p) · GW(p)

CDT can't distinguish between free money boxes and Newcomb paradoxes

No, CDT can in fact distinguish very well. It always concludes that the money is there, and it is always right, because it never encounters Newcomb.

we think Newcomb is possible.

To clarify: You are talking about actual Newcomb with an omniscient being, yes? Because in that case, I think several posters have already stated they deem this impossible, and Nozick agrees.

If you're talking about empirical Newcomb, that certainly is possible, but it is impossible to do better than CDT without choosing differently in other situations, because if you've acted like CDT in the past, Omega is going to assume you are CDT, even if you're not.

I disagree on two points: one, I think a simulator is possible (that is, Omega 's impossibility comes from other powers we've given it, we can remove those powers and weaken Omega to a fits-in-reality definition without losing prediction)

I agree on the "we can remove those powers and weaken Omega to a fits-in-reality definition without losing prediction" part, but this will change what the "correct" answer is. For example, you could substitute Omega with a coin toss and repeat the game if Omega is wrong. This is still a one-time problem, because Omega is a coin and therefore has no memory, but CDT, which would two-box in empirical Newcomb, one-boxes in this case and takes the $1,000,000.

and two, I don't think the priors-and-payoffs approach to an empirical predictor is correct (for game-theoretic reasons which I can explicate if you'd like, but if it's not the point of contention it would only distract).

I don't think this is the point of contention, but after we've settled that, I would be interested in hearing your line of thought on this.

Replies from: Emile
comment by Emile · 2012-07-02T17:35:01.898Z · LW(p) · GW(p)

To clarify: You are talking about actual Newcomb with an omniscient being, yes? Because in that case, I think several posters have already stated they deem this impossible, and Nozick agrees.

How about the version where agents are computer programs, and Omega runs a simulation of the agent facing the choice, observes its behavior, and fills the boxes accordingly?

I see no violation of causality in that version.
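
A minimal sketch of that version, treating agents as deterministic, argument-free programs (the function names and payoff amounts are illustrative):

    # Omega simulates the agent program once, fills the boxes based on the
    # simulated choice, then the agent chooses for real. No backwards
    # causality: the prediction happens strictly before the real decision.
    def run_newcomb(agent):
        predicted = agent()                                   # Omega's simulation
        opaque = 1_000_000 if predicted == "one-box" else 0   # box is filled (or not)
        actual = agent()                                      # the real decision
        return opaque if actual == "one-box" else opaque + 1_000

    print(run_newcomb(lambda: "one-box"))   # 1000000
    print(run_newcomb(lambda: "two-box"))   # 1000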

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T17:41:34.890Z · LW(p) · GW(p)

If you are a computer program that can be simulated, then the problem also becomes trivial, because either the simulation can be incorrect, in which case Omega is not omniscient, or the simulation cannot be incorrect, in which case you don't have a choice.

Replies from: Emile
comment by Emile · 2012-07-02T18:06:39.063Z · LW(p) · GW(p)

If the simulation is correct, a program that chooses to one-box will get $1,000,000, and a program that chooses to two-box will get $1,000. I wouldn't call that "not having a choice".

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T20:33:13.887Z · LW(p) · GW(p)

So if a program is programmed to print zeroes on a screen, and another program is programmed to print ones, you would say both programs chose their number?

I hope you don't, because that would be an insane statement. However if you disagree with this, I fail to see how you could be a computer program that can always be correctly simulated, but still has a choice.

Replies from: JenniferRM, Emile, TrE
comment by JenniferRM · 2012-07-02T22:40:26.418Z · LW(p) · GW(p)

You seem to be fighting the hypothetical, but I don't know if you're doing it out of mistrust or because some background would be helpful. I'll assume helpful background would be helpful... :-)

A program could be designed to (1) search for relevant sensory data within a larger context, (2) derive a mixed strategy given the input data, (3) get more bits of salt from local thermal fluctuations than log2(number of possible actions), (4) drop the salt into a pseudo-random number generator over its derived mixed strategy, and (5) output whatever falls out as its action. This rough algorithm seems strongly deterministic in some ways, and yet also strongly reminiscent of "choice" in others.
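
A rough sketch of such a program (the uniform strategy and the os.urandom entropy source are stand-ins for steps 1-3, not anything specified above):

    import os
    import random

    def derive_mixed_strategy(sensory_data, possible_actions):
        # Placeholder for steps (1)-(2): a real agent would derive the
        # mixture from the sensory data; here it is simply uniform.
        return {a: 1 / len(possible_actions) for a in possible_actions}

    def act(sensory_data, possible_actions):
        strategy = derive_mixed_strategy(sensory_data, possible_actions)  # steps (1)-(2)
        salt = int.from_bytes(os.urandom(8), "big")                       # step (3): stand-in for thermal noise
        rng = random.Random(salt)                                         # step (4): seed a PRNG with the salt
        actions, weights = zip(*strategy.items())
        return rng.choices(list(actions), weights=list(weights))[0]       # step (5): sample and output an action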

This formulation reduces the "magic" of Omega to predicting the relatively fixed elements of the agent (ie, steps 1, 2, and 4) which seems roughly plausible as a matter of psychology and input knowledge and so on, and also either (A) knowing from this that the strategy that will be derived isn't actually mixed so the salt is irrelevant, or else (B) having access/control of the salt in step 3.

In AI design, steps 1 and 2 are under the programmer's control to some degree. Some ways of writing the program might make the AI more or less tractable/benevolent/functional/wise and it seems like it would be good to know which ways are likely to produce better outcomes before any such AI is built and achieves takeoff rather than after. Hence the interest in this thought experiment as an extreme test case. The question is not whether step 3 is pragmatically possible for an imaginary Omega to hack in real life. The question is how to design steps 1 and 2 in toy scenarios where the program's ability to decide how to pre-commit and self-edit is the central task, so that harder scenarios can be attacked as "similar to a simpler solved problem".

If you say "Your only choices are flipping a coin or saying a predetermined answer" you're dodging the real question. You can be dragged back to the question by simply positing "Omega predicts the coin flip, what then?" If there's time and room for lots and lots of words (rather than just seven words) then another way to bring attention back to the question is to explain about fighting the hypothetical, try to build rapport, see if you can learn to play along so that you can help advance a useful intellectual project.

If you still "don't get it", then please, at least don't clog up the channel. If you do get it, please offer better criticism. Like, if you know of a different but better thought experiment where effectively-optimizing self-modifying pre-commitment is the central feature of study, that would be useful.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T10:08:54.723Z · LW(p) · GW(p)

I don't fight any hypothetical. If backwards causality is possible, one-boxing obviously wins.

But backwards causality cannot exist in reality, and therefore my decision cannot affect Omega's prediction of that decision. I would be very surprised if the large majority of LW posters would disagree with that statement; most of them seem to just ignore this level of the problem.

A program could be designed to (1) search for relevant sensory data within a larger context, (2) derive a mixed strategy given the input data, (3) gets more bits of salt from local thermal fluctuations than log2(number of possible actions), (4) drop the salt into a pseudo-random number generator over its derived mixed strategy, and (5) output whatever falls out as its action. This rough algorithm seems strongly deterministic in some ways, and yet also strongly reminiscent of "choice" in others. This formulation reduces the "magic" of Omega to predicting the relatively fixed elements of the agent (ie, steps 1, 2, and 4) which seems roughly plausible as a matter of psychology and input knowledge and so on, and also either (A) knowing from this that the strategy that will be derived isn't actually mixed so the salt is irrelevant, or else (B) having access/control of the salt in step 3.

In this example, the correct solution would not be to "choose" to one-box, but to choose to adopt a strategy that causes you to one-box before Omega makes its prediction, and therefore before you know you're playing Newcomb. This is not Newcomb anymore, this is a new problem. In this new problem, CDT will decide to adopt a strategy that causes it to one-box (it will precommit).

Replies from: wedrifid
comment by wedrifid · 2012-07-03T10:27:32.112Z · LW(p) · GW(p)

In this new problem, CDT will decide to adopt a strategy that causes it to one-box (it will precommit).

Similarly, if a CDT agent is facing no immediate decision problem but has the capability to self-modify, it will modify itself into an agent that implements a new decision theory (call it, for example, CDT++). The self-modified agent will then behave as if it implements a Reflective Decision Theory (UDT, TDT, etc) for the purpose of all influence over the universe after the time of self-modification, but like CDT for the purpose of all influence before the time of self-modification. This means roughly that it will behave as if it had made all the correct 'precommitments' at that time. It'll then cooperate against equivalent agents in prisoner's dilemmas and one-box on future Newcomb's problems unless Omega says "Oh, and I made the prediction and filled the boxes back before you self-modified away from CDT, I'm just showing them to you now".

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T10:40:54.289Z · LW(p) · GW(p)

A CDT agent will do this, if it can be proven that it cannot make worse decisions after the modification than if it had not modified itself. I actually tried to find literature on this a while back, but couldn't find any, so I assigned a very low probability to the possibility that this could be proven. Seeing how you seem to be familiar with the topic, do you know of any?

Replies from: wedrifid
comment by wedrifid · 2012-07-03T11:08:24.053Z · LW(p) · GW(p)

A CDT agent will do this, if it can be proven that it cannot make worse decisions after the modification than if it had not modified itself. I actually tried to find literature on this a while back, but couldn't find any, so I assigned a very low probability to the possibility that this could be proven. Seeing how you seem to be familiar with the topic, do you know of any?

I am somewhat familiar with the topic but note that I am most familiar with the work that has already moved past CDT (ie. considers CDT irrational and inferior to a reflective decision theory along the lines of TDT or UDT). Thus far nobody has got around to formally writing up a "What CDT self modifies to" paper that I'm aware of (I wish they would!). It would be interesting to see what someone coming from the assumption that CDT is sane could come up with. Again I'm unfamiliar with such attempts but in this case that is far less evidence about such things existing.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T11:24:16.796Z · LW(p) · GW(p)

I wasn't asking for a concrete alternative for CDT. If anything, I'm interested in a proof that such a decision theory can possibly exist. Because trying to find an alternative when you haven't proven this seems like a task with a very low chance of success.

Replies from: wedrifid
comment by wedrifid · 2012-07-03T11:34:24.132Z · LW(p) · GW(p)

I wasn't asking for a concrete alternative for CDT.

I wasn't offering alternatives - I was looking specifically at what CDT will inevitably self modify into (which is itself not optimal - just what CDT will do). The mention of alternatives was to convey to you that what I say on the subject and what I refer to would require making inferential steps that you have indicated you aren't likely to make.

Incidentally, proving that CDT will (given the option) modify into something else is a very different thing than proving that there is a better alternative to CDT. Either could be true without implying the other.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T11:50:12.738Z · LW(p) · GW(p)

Either could be true without implying the other.

That is true, and if you cannot prove that such a decision theory exists, then CDT modifying itself is not the necessarily correct answer to meta-Newcomb, correct?

comment by Emile · 2012-07-02T22:27:13.270Z · LW(p) · GW(p)

Consider programs that, given the description of a situation (possibly including a chain of events leading to it) and a list of possible actions, returns one of the actions. It doesn't seem to be a stretch of language to say that such programs are "choosing", because the way those programs react to their situation can be very similar to the way humans react (consider: finding the shortest path between two points; playing a turn-based strategy game, etc.).

Whether programs that are hard-coded to always return a particular answer "choose" or not is a very boring question of semantics, like "does a tree falling in the forest make a sound if no-one is around to hear it".

Given a description of Newcomb's problem, a well-written program will one-box, and a badly-written one will two-box. The difference between the two is not trivial.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T10:19:22.940Z · LW(p) · GW(p)

Given a description of Newcomb's problem, a well-written program will one-box, and a badly-written one will two-box. The difference between the two is not trivial.

I see your point now, and I agree with the quoted statement. However, there's a difference between Newcomb, where you make your decision after Omega made its prediction, and "meta-Newcomb", where you're allowed to precommit before Omega makes its prediction, for example by choosing your programming. In meta-Newcomb, I don't even have to consider being a computer program that can be simulated; I can just give my good friend Epsilon, who always does exactly what he is told, a gun and tell him to shoot me if I lie, then tell Omega I'm going to one-box, and then Omega would make its prediction. I would one-box, get $1,000,000 and, more importantly, not get shot.

This is a decision that CDT would make, given the opportunity.

Replies from: Emile, wedrifid
comment by Emile · 2012-07-03T13:17:30.841Z · LW(p) · GW(p)

there's a difference between Newcomb, where you make your decision after Omega made its prediction, and "meta-Newcomb", where you're allowed to precommit before Omega makes its prediction, for example by choosing your programming.

I agree that meta-Newcomb is not the same problem, and that in meta-newcomb CDT would precommit to one-box.

However, even in normal Newcomb, it's possible to have agents that behave as if they had precommitted when they realize precommitting would have been better for them. More specifically, in pseudocode:

    def take_decision(information_about_world, actions, utility_of):
        # For each candidate action, calculate the utility that an agent
        # which always returned that action would have got (as judged by the
        # supplied world model utility_of, added here so the sketch runs),
        # then return the action with the highest such utility.
        return max(actions, key=lambda action: utility_of(information_about_world, action))

There are some subtleties, notably about how to take the information about the world into account, but an agent built along this model should one-box on problems like Newcomb's, while two-boxing in cases where Omega decides by flipping a coin.

(such an agent, however, doesn't cooperate with itself in the prisoner's dilemma; you need a better agent for that)
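
For instance, feeding the sketch above two assumed world models (utility_accurate_omega and utility_coin_omega are illustrative stand-ins, using the usual payoffs and a fair coin):

    # With an accurate predictor the prediction tracks whichever constant
    # action is being evaluated; with a coin-flip "Omega" it does not.
    def utility_accurate_omega(world, action):
        opaque = 1_000_000 if action == "one-box" else 0
        return opaque + (1_000 if action == "two-box" else 0)

    def utility_coin_omega(world, action):
        return 0.5 * 1_000_000 + (1_000 if action == "two-box" else 0)

    actions = ["one-box", "two-box"]
    print(take_decision(None, actions, utility_accurate_omega))   # one-box
    print(take_decision(None, actions, utility_coin_omega))       # two-box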

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T13:25:01.386Z · LW(p) · GW(p)

You are 100% correct. However, if you say "it's possible to have agents that behave as if they had precommitted", then you are not talking about what's the best decision to make in this situation, but what's the best decision theory to have in this situation, and that is, again, meta-Newcomb, because the decision of which decision theory you're going to follow is a decision you have to make before Omega makes its prediction. Switching to this decision theory after Omega makes its prediction doesn't work, obviously, so this is not a solution for Newcomb.

comment by wedrifid · 2012-07-03T10:48:20.649Z · LW(p) · GW(p)

I can just give my good friend Epsilon, who always does exactly what he is told, a gun and tell him to shoot me if I lie, then tell Omega I'm going to one-box, and then Omega would make its prediction. I would one-box, get $1,000,000 and, more importantly, not get shot.

When I first read this I took it literally, as using Epsilon directly as a lie detector. That had some interesting potential side effects (like death) for a CDT agent. On second reading I take it to mean "Stay around with the gun until after everything is resolved and if I forswear myself, kill me". As a CDT agent you need to be sure that Epsilon will stay with the gun until you have abandoned the second box. If Epsilon just scans your thoughts, detects whether you are lying and then leaves, then CDT will go ahead and take both boxes anyway. (It's mind-boggling to think of agents that couldn't even manage cooperation with themselves with $1m on the line and a truth oracle right there to help them!)

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T11:40:10.384Z · LW(p) · GW(p)

Yeah, I meant that Epsilon would shoot if you two-box after having said you would one-box. In the end, "Epsilon with a gun" is just a metaphor for / specific instance of precommitting, as is "computer program that can choose its programming".

comment by TrE · 2012-07-03T07:21:59.312Z · LW(p) · GW(p)

Have you also already thought about free will?

comment by Viliam_Bur · 2012-07-02T08:40:10.260Z · LW(p) · GW(p)

This is a simple question that is in need of a simple answer.

And this is an Open Thread, which exists precisely for this kind of question.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T08:58:04.748Z · LW(p) · GW(p)

If it's worth saying, but not worth its own post, even in Discussion, it goes here.

It is worth its own post.

You're not helping.

Downvoted.

comment by TimS · 2012-07-02T13:39:32.814Z · LW(p) · GW(p)

Newcomb's problem has sequential steps - that's the key difference between it and problems like Prisoner's Dilemma. By the time the decision-agent is faced with the problem, the first step (where Omega examines you and decides how to seed the box) is already done. Absent time travel, nothing the agent does now will affect the contents of the boxes.

Consider the idea of the hostage exchange - the inherent leverage is in favor of the person who receives what they want first. It takes fairly sophisticated analysis to decide that what happened before should affect what happens after because it appears that there is no penalty for ignoring what happened before. (ie the hostage taker should release the hostage after being paid - but they already have the money).

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T14:33:43.011Z · LW(p) · GW(p)

But Omega figuring out your decision is time travel. That's the whole point of Newcomb, and why you need a "timeless" decision theory to one-box.

As soon as you're talking about reality (hostages, empirical evidence, no time travel, ...) you're talking about weak Newcomb, which is not an issue here. Also, Newcomb becomes a very different problem if you repeat it, similar to PD.

Replies from: TimS
comment by TimS · 2012-07-02T14:49:08.841Z · LW(p) · GW(p)

Newcomb's problem is not particularly interesting if one assumes the mechanism is time travel. If Omega really (1) wants to reduce the amount it spends and (2) can send information backward in time (ie time travel), no decision theory can do well. The fact that Eliezer's proposed decision theory is called "timeless" doesn't actually mean anything - and it hasn't really been formalized anyway.

In short, try thinking about the problem with time travel excluded. What insights there are to gain from the problem are most accessible from that perspective.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T15:00:09.634Z · LW(p) · GW(p)

If Omega really (1) wants to reduce the amount it spends and (2) can send information backward in time (ie time travel), no decision theory can do well.

This statement is clearly false. Any decision theory that gives time-travelling Omega enough reason to believe that you will one-box will do well. I don't think this is possible without actually one-boxing, though.

You can substitute "timeless" with "considering violation of causality, for example time travel". "Timeless" is just shorter.

In short, try thinking about the problem with time travel excluded.

Without time travel, this problem either ceases to exist, or becomes a simple calculation.

Replies from: Randaly
comment by Randaly · 2012-07-02T15:07:51.772Z · LW(p) · GW(p)

No; Timeless Decision Theory does not violate causality. It is not a physical theory, which postulates new time-travelling particles or whatever; almost all of its advocates believe in full determinism, in fact. (Counterfactual mugging is an equivalent problem.)

Newcomb's Problem has never included time travel. Every standard issue was created for the standard, non-time travel version. In particular, if one allows for backward causation (ie for one's decision to causally affect what's in the box) then the problem becomes trivial.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T15:25:52.120Z · LW(p) · GW(p)

No; Timeless Decision Theory does not violate causality.

I didn't say (or mean) that it violated causality. I meant it assigned a probability p>0 to violation of causality being possible. I may be wrong on this, since I only read enough about TDT to infer that it isn't interesting or relevant to me.

Newcomb's Problem has never included time travel.

Actual Newcomb includes an omniscient being, and omniscience is impossible without time travel / violation of causality.

If you say that Omega makes its prediction purely based on the past, Newcomb becomes trivial as well.

Replies from: wedrifid, Randaly, wedrifid
comment by wedrifid · 2012-07-02T15:49:40.032Z · LW(p) · GW(p)

I meant it assigned a probability p>0 to violation of causality being possible.

It intrinsically says nothing about causality violation at all. Zero is not a probability, and lack-of-infinite-certainty issues are independent of the decision theory. The decision theory just works with whatever your map contains.

comment by Randaly · 2012-07-02T15:38:44.030Z · LW(p) · GW(p)

Actual Newcomb doesn't include an omniscient being; I quote from Wikipedia:

However, the original discussion by Nozick says only that the Predictor's predictions are "almost certainly" correct, and also specifies that "what you actually decide to do is not part of the explanation of why he made the prediction he made".

Except that this is false, so never mind.

Also, actual knowledge of everything aside from the Predictor is possible without time travel. It's impossible in practice, but this is a thought experiment. You "just" need to specify the starting position of the system, and the laws operating on it.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T16:27:58.493Z · LW(p) · GW(p)

Well, the German Wikipedia says something entirely different, so may I suggest you actually read Nozick? I have posted a paragraph from the paper in question here.

Translation from German Wiki: "An omniscient being..."

What does this tell us? Exactly, that we shouldn't use Wikipedia as a source.

Replies from: Randaly
comment by Randaly · 2012-07-02T16:43:11.546Z · LW(p) · GW(p)

Oops, my apologies.

comment by wedrifid · 2012-07-02T15:31:26.391Z · LW(p) · GW(p)

If you say that Omega makes its prediction purely based on the past, Newcomb becomes trivial as well.

Omega makes its prediction purely based on the past (and present).

That being the case, which decision would you say is trivially correct? Based on what you have said so far I can't predict which way your decision would go.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T16:39:52.882Z · LW(p) · GW(p)

Ruling out backwards causality, I would two-box, and I would get $1,000 unless Omega made a mistake.

No, I wouldn't rather be someone who one-boxes in Newcomb, because if Omega makes its predictions based on the past, this would only lead to me losing $1,000, because Newcomb is a one-time problem. I would have to choose differently in other decisions for Omega to change its prediction, and that is something I'm not willing to do.

Of course, if I'm allowed to communicate with Omega, I would try to convince it that I'll be one-boxing (while still two-boxing), and if I can increase the probability of Omega predicting that I will one-box enough to justify actually precommitting to one-boxing (by use of a lie detector or whatever), then I would do that.

However, in reality I would probably get some satisfaction out of proving Omega wrong, so the payoff matrix may not be that simple. I don't think this is in any way relevant to the theoretical problem, though.

comment by see · 2012-07-02T06:27:08.259Z · LW(p) · GW(p)

CDT calculates it this way: At the point of decision, either the million-dollar box has a million or it doesn't, and your decision now can't change that. Therefore, if you two-box, you always come out ahead by $1,000 over one-boxing.
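
In code, that dominance calculation looks roughly like this (a minimal sketch; the dollar amounts are the usual illustrative ones, and the function name is mine):

    MILLION = 1_000_000
    THOUSAND = 1_000

    def payoff(opaque_has_million, two_box):
        # the opaque box's contents are treated as already fixed;
        # the choice only adds (or forgoes) the visible $1,000
        opaque = MILLION if opaque_has_million else 0
        return opaque + (THOUSAND if two_box else 0)

    for already_full in (True, False):
        assert payoff(already_full, True) == payoff(already_full, False) + THOUSAND
    # in either fixed state, two-boxing comes out exactly $1,000 ahead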

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T06:49:30.985Z · LW(p) · GW(p)

and your decision now can't change that

So what you're saying is that CDT refuses the whole setup and then proceeds to solve a completely different problem, correct?

Replies from: see, wedrifid, fubarobfusco
comment by see · 2012-07-02T07:26:46.625Z · LW(p) · GW(p)

Well, Nozick's formulation in 1969, which popularized the problem in philosophy, went ahead and specified that "what you actually decide to do is not part of the explanation of why he made the prediction he made".

Which means smuggling a theory of unidirectional causality into the very setup itself, which explains how it winds up being called "Newcomb's Paradox" instead of Newcomb's Problem.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T07:42:09.695Z · LW(p) · GW(p)

That is not a specification, it is a supposition. It is the same supposition CDT makes (rejection of backwards causality) and leads to the same result of not playing Newcomb.

It's like playing chess and saying "dude, my rook can go diagonal, too!"

At that point, you're not playing chess anymore.

comment by wedrifid · 2012-07-02T09:00:36.348Z · LW(p) · GW(p)

So what you're saying is that CDT refuses the whole setup and then proceeds to solve a completely different problem, correct?

No.

comment by fubarobfusco · 2012-07-02T07:11:43.307Z · LW(p) · GW(p)

No, it's just not aware that it could be running inside Omega's head.

Replies from: drethelin, Andreas_Giger
comment by drethelin · 2012-07-02T07:19:30.933Z · LW(p) · GW(p)

Another way of putting it is that CDT doesn't model entities as modeling it.

comment by Andreas_Giger · 2012-07-02T07:16:32.422Z · LW(p) · GW(p)

What it is aware of is highly irrelevant.

  1. Newcomb has a payoff matrix.
  2. CDT refuses this payoff matrix and substitutes it with its own.
  3. Therefore CDT solves a different problem.

Which of (1,2,3) do you disagree with?

Replies from: Randaly, MinibearRex
comment by Randaly · 2012-07-02T07:51:18.567Z · LW(p) · GW(p)

CDT doesn't change the payoffs. If it takes the single box and there is money in it, it still receives a million dollars; if it also takes the other box, it will receive 10,000 additional dollars. These are the standard payoffs of Newcomb's Problem.

What you are assuming is that your decision affects Omega's prediction. While it is nice that your intuition is so strong, CDTers disagree with this claim, as your decision has no causal impact on Omega's prediction.

This formulation of Newcomb's Problem may clarify the wrong intuition:

Suppose that the boxes are transparent, i.e. you can already see whether or not there's a million dollars present. Suppose you see that there is a million dollars present; then you have no reason not to grab an additional $10,000 - after all, per GiveWell, that probably lets you save ~8 lives. You wouldn't want to randomly toss away 8 lives, would you? And the million is already present, you can see it with your own eyes, it's there no matter what you do. If you take both boxes, the money won't magically vanish - the payoff matrix is that, if the money's there, you get it, end of story.

But suppose there isn't a million dollars there. Are you really going to walk away empty-handed, when you know, for sure, that you won't be getting a million dollars? After all, the money's already not there; Omega will not be paying a million dollars, no matter what happens. So your choice is between $0 and $10,000 - again, are you really going to toss away those ~8 lives for nothing, no reason, no gain?

This is equivalent to the standard formulation: you may not be able to see the million, but it is already either present or not present.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T07:59:49.909Z · LW(p) · GW(p)

You're not talking about Newcomb. In Newcomb, you don't get any "additional" $1000; those are the only dollars you get, because the $1,000,000 does magically vanish if you take the "additional" $1000.

The payoff matrix for Newcomb is as follows:

  • You take two boxes, you get $n>0.
  • You take one box, you get $m>n.
Replies from: TrE, Randaly
comment by TrE · 2012-07-02T10:31:08.209Z · LW(p) · GW(p)

CDT, then, isn't aware of the payoff matrix. It reasons as follows: Either Omega put money in boxes A and B, or only in box B. If Omega put money in both boxes, I'm better off taking both boxes. If Omega put money only in box B, I should also take both boxes instead of only box A. CDT doesn't deal with the fact that which of these two games it's playing depends on what it will choose to do in each case.

comment by Randaly · 2012-07-02T14:20:56.543Z · LW(p) · GW(p)

No, this is false. CDT is the one using the standard payoff matrix, and you are the one refusing to use the standard payoff matrix and substituting your own.

In particular: the money is either already there, or not already there. Once the game has begun, the Predictor is powerless to change things.

The standard payoff matrix for Newcomb is therefore as follows:

  • Omega predicts you take two boxes, you take two boxes, you get $n>0.
  • Omega predicts you take two boxes, you take one box, you get 0.
  • Omega predicts you take one box, you take one box, you get $m>n.
  • Omega predicts you take one box, you take two boxes, you get $m+n>m.

The problem becomes trivial if, as you are doing, you refuse to consider the second and fourth outcomes. However, you are then not playing Newcomb's Problem.
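
For concreteness, here is that matrix written out (a sketch only; n and m are the abstract amounts above with m > n > 0, and the concrete values are placeholders):

    # The four rows of the standard matrix: n is the always-available amount,
    # m the predicted-one-box prize, with m > n > 0. Values are placeholders.
    n, m = 1_000, 1_000_000

    payoffs = {
        ("predicts two-box", "take two"): n,
        ("predicts two-box", "take one"): 0,
        ("predicts one-box", "take one"): m,
        ("predicts one-box", "take two"): m + n,
    }
    for (prediction, action), amount in payoffs.items():
        print(f"{prediction:17} | {action:9} | ${amount:,}")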

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T14:52:07.728Z · LW(p) · GW(p)

No, only then am I playing Newcomb. What you're playing is weak Newcomb, where you assign a probability of x>0 for Omega being wrong, at which point this becomes simple math where CDT will give you the correct result, whatever that may turn out to be.

Replies from: Randaly
comment by Randaly · 2012-07-02T15:10:40.822Z · LW(p) · GW(p)

No, you are assuming that your decision can change what's in the box, which everybody agrees is wrong: the problem statement is that you cannot change what's in the million-dollar box.

Also, what you describe as "weak Newcomb" is the standard formulation: Nozick's original problem stated that the Predictor was "almost always" right. CDT still gives the wrong answer in simple Newcomb, as its decision cannot affect what's in the box.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T16:17:47.802Z · LW(p) · GW(p)

Nozick's original problem stated that the Predictor was "almost always" right.

That's not the "original problem", that's just the fleshed-out introduction to "Newcomb's Problem and Two Principles of Choice" where he talks about aliens and other stuff that has about as much to do with Newcomb as prisoners have to do with the Prisoner's Dilemma. Then after outlining some common intuitive answers, he goes on a mathematical tangent and later returns to the question of what one should do in Newcomb with this paragraph:

Now, at last, to return to Newcomb's example of the predictor. If one believes, for this case, that there is backwards causality, that your choice causes the money to be there or not, that it causes him to have made the prediction that he made, then there is no problem. One takes only what is in the second box. Or if one believes that the way the predictor works is by looking into the future; he, in some sense, sees what you are doing, and hence is no more likely to be wrong about what you do than someone else who is standing there at the time and watching you, and would normally see you, say, open only one box, then there is no problem. You take only what is in the second box. But suppose we establish or take as given that there is no backwards causality, that what you actually decide to do does not affect what he did in the past, that what you actually decide to do is not part of the explanation of why he made the prediction he made. So let us agree that the predictor works as follows: He observes you sometime before you are faced with the choice, examines you with complicated apparatus, etc., and then uses his theory to predict on the basis of this state you were in, what choice you would make later when faced with the choice. Your deciding to do as you do is not part of the explanation of why he makes the prediction he does, though your being in a certain state earlier, is part of the explanation of why he makes the prediction he does, and why you decide as you do. I believe that one should take what is in both boxes. I fear that the considerations I have adduced thus far will not convince those proponents of taking only what is in the second box. Furthermore I suspect that an adequate solution to this problem will go much deeper than I have yet gone or shall go in this paper. So I want to pose one question. I assume that it is clear that in the vaccine example, the person should not be convinced by the probability argument, and should choose the dominant action. I assume also that it is clear that in the case of the two brothers, the brother should not be convinced by the probability argument offered. The question I should like to put to proponents of taking only what is in the second box in Newcomb's example (and hence not performing the dominant action) is: what is the difference between Newcomb's example and the other two examples which make the difference between not following the dominance principle, and following it?

And yes, I think I can agree with him on this.

comment by MinibearRex · 2012-07-02T07:30:38.587Z · LW(p) · GW(p)

CDT is a solution to Newcomb's problem. It happens to be wrong, but it isn't solving a completely separate problem. It's going about solving Newcomb's problem in the wrong way.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T07:34:48.245Z · LW(p) · GW(p)

I assume this means that you disagree with 3?

Edit: You're just contradicting me without responding to any of my arguments. That doesn't seem very reasonable, unless your aim is to never change your opinion no matter what.

Replies from: jsalvatier, MinibearRex
comment by jsalvatier · 2012-07-02T08:29:06.181Z · LW(p) · GW(p)

I think people may be confused by your word choice.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T08:35:44.298Z · LW(p) · GW(p)

What are you referring to? I'd like to avoid confusion if possible.

Replies from: jsalvatier
comment by jsalvatier · 2012-07-02T08:41:35.836Z · LW(p) · GW(p)

I think people are finding phrases "CDT is solving a separate problem" and "CDT refuses to play this game and plays a different one" jarring. See my other response. Edit: people might also find your tone adversarial in a way that's off-putting.

Replies from: wedrifid, Andreas_Giger
comment by wedrifid · 2012-07-02T09:02:29.765Z · LW(p) · GW(p)

I think people are finding phrases "CDT is solving a separate problem" and "CDT refuses to play this game and plays a different one" jarring. See my other response. Edit: people might also find your tone adversarial in a way that's off-putting.

Jarring, wrong and adversarial. Not a good combination.

comment by Andreas_Giger · 2012-07-02T08:51:57.791Z · LW(p) · GW(p)

Yes, I saw your other reply, thank you for that.

comment by MinibearRex · 2012-07-03T06:05:27.188Z · LW(p) · GW(p)

I do disagree with 3, though I disagree (mostly connotatively) with 1 and 2 as well.

The arguments you refer to were not written at the time I wrote my previous response, so I'm not sure what your point in the "Edit" is.

Nevertheless, I'll write my response to your argument now.

In theoretical Newcomb, CDT doesn't care about the rule of Omega being right, so CDT does not play Newcomb.

You are correct when you say that CDT "doesn't care" about Omega being right. But that doesn't mean that CDT agents don't know that Omega is going to be right. If you ask a CDT agent to predict how they will do in the game, they will predict that they will earn far less money than someone who one-boxes. There is no observable fact that a one-boxer and a two-boxer will disagree on (at least in this sense). The only disagreement the two will have is about the counterfactual statement "if you had made a different choice, that box would/would not have contained money".

That counterfactual statement is something that different decision theories implicitly give different views on. Its truth or falsity is not in the problem; it's part of the answer. CDT agents don't rule out the theoretical possibility of a predictor who can accurately predict their actions. CDT just says that the counterfactual which one-boxers use is incorrect. This is wrong, but CDT is just giving a wrong answer to the same question.

comment by Randaly · 2012-07-02T14:56:48.832Z · LW(p) · GW(p)

I am not sure what you mean by "substitute Newcomb with a problem that consists of little more than simple calculation of priors and payoffs". If you mean that the decision algorithm should choose the option correlated with the highest payoffs, then that's Evidential Decision Theory, and it fails on other problems, e.g. the Smoking Lesion.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T15:18:09.122Z · LW(p) · GW(p)

If Omega makes its prediction based on the past instead of the future, CDT two-boxes and gets $1,000. However, that is a result not of the decision CDT is making, but of the decisions it has made in the past. If Omega plays this game with e.g. TDT, and you substitute TDT with CDT without Omega noticing, CDT two-boxes and takes $1,001,000. Vice versa, if you substitute CDT with TDT, it gets nothing.

If Omega makes its prediction based on the future, CDT assigns a probability of 0 to being in that situation, which is correct, since this is purely theoretical.
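
A sketch of that swap scenario, assuming (as described above) that Omega's prediction is a function of the agent it examined in the past rather than of the choice made afterwards; the agent labels and amounts are illustrative only:

    MILLION, THOUSAND = 1_000_000, 1_000

    def box_contents(examined_agent):
        # assumed behaviour: Omega expects TDT to one-box and CDT to two-box
        return MILLION if examined_agent == "TDT" else 0

    def chooses_two_boxes(actual_agent):
        return actual_agent == "CDT"

    def payout(examined_agent, actual_agent):
        opaque = box_contents(examined_agent)   # fixed before the swap happens
        return opaque + (THOUSAND if chooses_two_boxes(actual_agent) else 0)

    print(payout("TDT", "CDT"))   # 1001000: CDT swapped in after a TDT was examined
    print(payout("CDT", "TDT"))   # 0: TDT swapped in after a CDT was examined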

comment by shminux · 2012-07-02T19:17:40.875Z · LW(p) · GW(p)

Here is my take on the whole thing, fwiw.

The issue is assigning probability to the outcome (Omega predicted the player one-boxing whereas the player two-boxed), as it is the only one where two-boxing wins. Obviously any decision theorist who assigns a non-zero probability to this outcome hasn't read the problem statement carefully enough, specifically the part that says that Omega is a perfect predictor.

EDT calculates the expected utility by adding, for all outcomes (probability of outcome given specific action)*payoff of the outcome. In the Newcomb case the contentious outcome has zero probability because "perfect predictor" means that a player never two-boxes unless Omega predicted so.

CDT does not calculate (probability of a certain outcome given a specific action), but rather (probability that, if the action were performed, the outcome happens). In the Newcomb case this (probability of a $1,001,000 payout were the player to two-box) can be (mis-)interpreted as certainty because "boxes already contain the prizes". This statement is in contradiction with the "perfect predictor" clause.

In other words, the argument "Because your choice of one or two boxes can't causally affect the Predictor's guess, causal decision theory recommends the two-boxing strategy." is not really a CDT argument, but a misunderstanding of the problem statement. The suggested option ($M+$T) in SEP is actually never an option.
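
One way to render the two calculations side by side (a sketch under the textbook reading, not necessarily the reading argued for above; the prior p is my own placeholder):

    MILLION, THOUSAND = 1_000_000, 1_000

    def payoff(predicted_one_box, take_two):
        return (MILLION if predicted_one_box else 0) + (THOUSAND if take_two else 0)

    # EDT: weight outcomes by P(outcome | action). With a perfect predictor the
    # prediction always matches the action, so the mismatch outcomes get probability 0.
    edt_one_box = payoff(True, False)   # 1,000,000
    edt_two_box = payoff(False, True)   # 1,000

    # Naive CDT reading: the contents are already fixed with some prior p,
    # and the action cannot move that prior.
    p = 0.5   # placeholder prior that the money is already there
    cdt_one_box = p * payoff(True, False) + (1 - p) * payoff(False, False)
    cdt_two_box = p * payoff(True, True) + (1 - p) * payoff(False, True)
    # for any fixed p the gap is exactly $1,000 in favour of two-boxing
    assert cdt_two_box == cdt_one_box + THOUSAND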

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T20:22:11.403Z · LW(p) · GW(p)

The issue is assigning probability to the outcome (Omega predicted player one-boxing whereas player two-boxed), as it is the only one where two-boxing wins.

No, because two-boxing also wins if Omega predicts that you will two-box, and therefore always wins if your decision doesn't alter Omega's prediction of that very decision. CDT would two-box because n + 1000 > n for both n = 0 and n = 1,000,000.

But, because Newcomb can't exist, CDT can never choose anything in Newcomb.

Other than that, your post seems pretty accurate.

Replies from: JonathanLivengood, shminux
comment by JonathanLivengood · 2012-07-02T20:42:15.198Z · LW(p) · GW(p)

I'm still not at all sure what you mean when you say that Newcomb can't exist. Could you say a bit more about what exactly you think cannot exist?

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T21:10:08.691Z · LW(p) · GW(p)

Newcomb assumes that Omega is omniscient, which more importantly means that the decision you make right now determines whether Omega has put money in the box or not. Obviously this is backwards causality, and therefore not possible in real life, which is why Nozick doesn't spend too much ink on this.

But if you rule out the possibility of backwards causality, Omega can only make his prediction of your decision based on all your actions up to the point where it has to decide whether to put money in the box or not. In that case, if you take two people who have so far always acted (decided) identical, but one will one-box while the other one will two-box, Omega cannot make different predictions for them. And no matter what prediction Omega makes, you don't want to be the one who one-boxes.

Replies from: JonathanLivengood
comment by JonathanLivengood · 2012-07-02T21:17:25.136Z · LW(p) · GW(p)

Newcomb assumes that Omega is omniscient ...

No, it doesn't. Newcomb's problem assumes that Omega has enough accuracy to make the expected value of one-boxing greater than the expected value of two-boxing. That is all that is required in order to give the problem the air of paradox.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T08:54:57.209Z · LW(p) · GW(p)

Read Nozick instead of making false statements.

There's four types of Newcomb-like problems:

  • Omniscient Omega (backwards causality) - CDT rejects this case, which cannot exist in reality.
  • Fallible Omega, but still backwards causality - CDT rejects this case, which cannot exist in reality.
  • Infallible Omega, no backwards causality - CDT correctly two-boxes. To improve payouts, CDT would have to have decided differently in the past, which is not decision theory anymore.
  • Fallible Omega, no backwards causality - CDT correctly two-boxes. To improve payouts, CDT would have to have decided differently in the past, which is not decision theory anymore.

That's all there is to it.

Replies from: JonathanLivengood
comment by JonathanLivengood · 2012-07-03T18:12:18.426Z · LW(p) · GW(p)

This will be my last comment on this thread. I've read Nozick. I've also read much of the current literature on Newcomb's problem. While Omega is sometimes described as a perfect predictor, assuming that Omega is a perfect predictor is not required in order to get an apparently paradoxical result. The reason is that given no backwards causation (more on that below) and as long as Omega is good enough at predicting, CDT and EDT will recommend different decisions. But both approaches are derived from seemingly innocuous assumptions using good reasoning. And that feature -- deriving a contradiction from apparently safe premisses through apparently safe reasoning -- is what makes something a paradox.

Partisans will argue for the correctness or incorrectness of one or the other of the two possible decisions in Newcomb's problem. I have not given any argument. And I'm not going to give one here. For present purposes, I don't care whether one-boxing or two-boxing is the correct decision. All I'm saying is what everyone who works on the problem agrees about. In Newcomb problems, CDT chooses two boxes and that choice has a lower expected value than taking one box. EDT chooses one box, which is strange on its face, since the decision now is presumed to have no causal relevance to the prediction. Yet, EDT recommends the choice with the greater expected value.

The usual story assumes that there is no backwards causation. That is why Nozick asks the reader (in the very passage you quoted, which you really should read more carefully) to: "Suppose we establish or take as given that there is no backwards causality, that what you actually decide to do does not affect what [the predictor] did in the past, that what you actually decide to do is not part of the explanation of why he made the prediction he made." If we don't follow Nozick in making this assumption -- if we assume that there is backwards causation -- CDT does not "reject the case" at all. If there is backwards causation and CDT has that as an input, then CDT will agree with EDT and recommend taking one box. The reason is that in the case of backwards causation, the decision now is causally relevant to the prediction in the past. That is precisely why Nozick ignores backwards causation, and he is utterly explicit about it in the first three sentences of the passage you quoted. So, there is good reason to consider only the case where you know (or believe) that there is no backwards causation because in that case, CDT and EDT paradoxically come apart.

But neither CDT nor EDT excludes any causal structure. CDT and EDT are possible decision theories in worlds with closed time-like curves. They're possible decision theories in worlds that have physical laws that look nothing like our own physical laws. CDT and EDT are theories of decision, not theories of physics or metaphysics.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T18:47:52.085Z · LW(p) · GW(p)

If we don't follow Nozick in making this assumption -- if we assume that there is backwards causation -- CDT does not "reject the case" at all. If there is backwards causation and CDT has that as an input, then CDT will agree with EDT and recommend taking one box.

I consider CDT with "there is backwards causality" as an input something that isn't CDT anymore; however I doubt disputing definitions is going to get us anywhere and it doesn't seem to be the issue anyway.

The reason is that given no backwards causation (more on that below) and as long as Omega is good enough at predicting, CDT and EDT will recommend different decisions. But both approaches are derived from seemingly innocuous assumptions using good reasoning. And that feature -- deriving a contradiction from apparently safe premisses through apparently safe reasoning -- is what makes something a paradox.

The reason a CDT agent two-boxes is that Omega makes its prediction based on the fact that the agent is a CDT agent, and therefore no money will be in the box. The reason an EDT agent one-boxes is that Omega makes its prediction based on the fact that the agent is an EDT agent, and therefore money will be in the box. Both decisions are correct.*

This becomes a paradox only if your premise is that a CDT agent and an EDT agent are in the same situation, but if the decision theory of the agent is what Omega bases its prediction on, then they aren't in the same situation.

(*If the EDT agent could two-box, then it should do that; however an EDT agent that has been predicted by Omega to one-box cannot choose to two-box.)

comment by shminux · 2012-07-02T20:54:22.677Z · LW(p) · GW(p)

I don't see a problem with a perfect predictor existing; I see a statement like "one can choose something other than what Omega predicted" as a contradiction within the problem's framework. I suppose the trick is to have an imperfect predictor and see if it makes sense to take the limit (prediction accuracy -> 100%).

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T09:47:58.135Z · LW(p) · GW(p)

It's not a matter of accuracy; it's a matter of whether you consider backwards causality or not. Please read this post of mine.

comment by Dorikka · 2012-07-02T12:34:30.111Z · LW(p) · GW(p)

It seems like you hadn't understood what was going on in this problem until very recently, when people explained it to you, and now you've come up with an answer to the problem that most people familiar with the subject material are objecting to.

How high is the prior for your hypothesis, that your posterior is still high after so much evidence pointing the other way?

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T14:05:38.750Z · LW(p) · GW(p)

Rather high. I noticed that the average LW poster is not very well versed in mathematical logic in general and game theory in particular; for example, about 80-90% of the posts in this thread are nonsensical, resulting in a large number of insane strategies in this tournament, where most people didn't think further than whether to defect only on the very last turn or on the last two turns, without doing any frickin' math.

I understand Newcomb very well. When I posted my original question, I had the hypothesis that people here didn't understand CDT. It turned out that they understood CDT, but not the distinction between actual Newcomb (Omega) and weak Newcomb (empirical evidence), and didn't realize that the first problem couldn't exist in causal space, a.k.a. reality, and that the second problem is a simple calculation based on priors; if people disagree on whether to one-box or two-box in weak Newcomb, that is a result of different priors, not of different algorithms.

Replies from: JonathanLivengood
comment by JonathanLivengood · 2012-07-02T19:07:19.711Z · LW(p) · GW(p)

The claim that actual Newcomb or something suitably close cannot exist in reality is very strong. I wonder how you propose to think about research by Haynes et al. (pdf), which suggests that yes/no decisions may be predicted from brain states as long as 7-10 seconds before the decision is made and with accuracy ranging from 54% to 59% depending on the brain region used (assuming I'm reading their Supplementary Figure 6 correctly). In Newcomb's problem, the predictor doesn't need to do much better than chance in order for two-boxing to have lower expectation than one-boxing; in particular, the accuracy obtained by Haynes et al. is good enough.

So ... do you really mean to say that it is impossible that the sort of predictions Haynes et al. make could be done in real time in advance of the choice that a person makes in a Newcomb problem?
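
For what it's worth, the accuracy threshold is easy to work out (a quick check, assuming the usual $1,000,000 / $1,000 payoffs):

    MILLION, THOUSAND = 1_000_000, 1_000

    def ev_one_box(a):   # a = probability the predictor is right
        return a * MILLION

    def ev_two_box(a):   # the million shows up only when the predictor is wrong
        return THOUSAND + (1 - a) * MILLION

    threshold = (MILLION + THOUSAND) / (2 * MILLION)   # where the two expectations cross
    print(threshold)                              # 0.5005
    print(ev_one_box(0.54) > ev_two_box(0.54))    # True: 54% accuracy already suffices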

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-02T20:59:46.509Z · LW(p) · GW(p)

Huh? It's not as if my current brain state was influenced by a decision I'm going to make in ten seconds; it's the decision I make right now that is influenced by my brain state from 10 seconds ago.

So I don't see your point; a good friend of mine could make a far more accurate prediction than 60%. Hell, you could.

Replies from: JonathanLivengood
comment by JonathanLivengood · 2012-07-02T21:13:30.374Z · LW(p) · GW(p)

Then let me just re-iterate, I don't see what about Newcomb you think is impossible.

The Newcomb set-up is just the following:

Predictor tells you that you are going to play a game in which you pick one box or two. Predictor tells you the payouts for those choices under two scenarios: (1) that Predictor predicts you will choose one box and (2) that Predictor predicts you will choose two boxes. Predictor also tells you its success rate (or you are allowed to learn this empirically). Predictor then looks at something about you (behavior, brain states, writings, whatever) and predicts whether you will take one box or two boxes. After the prediction is made and the payouts determined, you make your decision.
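
As a toy simulation, that sequence looks something like this (purely illustrative; the function names are mine, and the standard amounts are assumed):

    MILLION, THOUSAND = 1_000_000, 1_000

    def play(predict, decide):
        prediction = predict()                               # Predictor examines you and commits first
        opaque = MILLION if prediction == "one-box" else 0   # the payouts are now fixed
        choice = decide()                                    # only then do you choose
        return opaque + (THOUSAND if choice == "two-box" else 0)

    # e.g. a predictor that simply inspects the very procedure you will later run:
    decide = lambda: "one-box"
    print(play(predict=decide, decide=decide))   # 1000000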

Indeed, your decision now does not affect your brain states in the past. Nor does your decision now affect Predictor's prediction, though your past brain states might depending on how the scenario is realized. And that's kind of the whole point. What you decide now doesn't affect the payouts. So, for CDT, you should take both boxes.

Notice, though, that the problem is not pressing unless the expected value of choosing one box is greater than the expected value of choosing two boxes. That is, the problem is not pressing if Predictor's accuracy is too low.

Now, you have been claiming that Newcomb is impossible. But in your comment here, you seem to be saying that it is really easy to set up. So, I don't know what you are trying to say cannot exist.

Replies from: Alejandro1
comment by Alejandro1 · 2012-07-03T00:22:10.586Z · LW(p) · GW(p)

Predictor tells you that you are going to play a game in which you pick one box or two. Predictor tells you the payouts for those choices under two scenarios: (1) that Predictor predicts you will choose one box and (2) that Predictor predicts you will choose two boxes. Predictor also tells you its success rate (or you are allowed to learn this empirically). Predictor then looks at something about you (behavior, brain states, writings, whatever) and predicts whether you will take one box or two boxes. After the prediction is made and the payouts determined, you make your decision.

I prefer the formulation in which Predictor first looks at something about you (without you knowing), makes its prediction and sets the boxes, then presents you with the boxes and tells you the rules of the game (and ideally, you have never heard of it before). Your setting allows you to sidestep the thorny issues peculiar to Newcomb if you have strong enough pre-commitment faculties.

Replies from: JonathanLivengood, TheOtherDave
comment by JonathanLivengood · 2012-07-03T01:12:05.470Z · LW(p) · GW(p)

Both ways of setting it up allow pre-commitment solutions. The fact that one might not have thought about such solutions in time to implement them is not relevant to the question of which decision theory one ought to implement.

Why do you think it matters whether you have heard of the game before or whether you know that Predictor (Omega ... whatever) is looking?

I admit that your way of setting it up makes a Haynes-style realization of the problem unlikely. But so what? My way of setting it up is still a Newcomb problem. It has every bit of logical / decision theoretical force as your way of setting it up. The point of the problem is to test decision theories. CDT (without pre-commitment) is going to take two boxes on your set-up and also on mine. And that should be enough to say that Newcomb problems are possible. (Nomologically possible, not just metaphysically possible or logically possible.)

Replies from: Alejandro1
comment by Alejandro1 · 2012-07-03T21:27:53.042Z · LW(p) · GW(p)

What I meant was simply this: if I am told of the rules first, before the prediction is made, and I am capable of precommitment (by which I mean binding my future self to do in the future what I choose for it now), then I can win with CDT. I can reason "if I commit to one-box, Omega will predict I will one-box, so the money will be there", which is a kind of reasoning CDT allows. I thought the whole point of Newcomb is to give an example where CDT loses and we are forced to use a more sophisticated theory.

I am puzzled by you saying "Both ways of setting it up allow pre-commitment solutions." If I have never heard of the problem before being presented with the boxes, then how can I precommit?

I confess I thought this was obvious, so the fact that both you and Dave jumped on my statement makes me suspect we have some miscommunication, or that I have some "unknown unknown" misconception on these issues.

Replies from: TheOtherDave, JonathanLivengood
comment by TheOtherDave · 2012-07-03T22:00:43.407Z · LW(p) · GW(p)

I agree that if I know the rules, I can reason "if I commit to one-box, Omega will predict I will one-box, so the money will be there", and if I don't know the rules, I can't reason that way (since I can't know the relationship between one-boxing and money).

It seems to me that if I don't know the rules, I can similarly reason "if I commit to doing whatever I can do that gets me the most money, then Omega will predict that I will do whatever I can do that gets me the most money. If Omega sets up the rules such that I believe doing X gets me the most money, and I can do X, then Omega will predict that I will do X, and will act accordingly. In the standard formulation, unpredictably two-boxing gets me the most money, but because Omega is a superior predictor I can't unpredictably two-box. Predictably one-boxing gets me the second-most money. Because of my precommitment, Omega will predict that upon being informed of the rules I will one-box, and the money will be there. "

Now, I'm no kind of decision theory expert, so maybe there's something about CDT that precludes reasoning in this way. So much the worse for CDT if so, since this seems like an entirely straightforward way to reason.

Incidentally, I don't agree to the connotations of "jumped on."

Replies from: Alejandro1
comment by Alejandro1 · 2012-07-03T22:15:39.842Z · LW(p) · GW(p)

Checking the definition, it seems that "jump on" is more negative than I thought it was. I just meant that both of you disagreed in a similar way and fairly quickly; I didn't feel reprimanded or attacked.

I do not understand at all the reasoning that follows "if I don't know the rules". If you are presented with the two boxes out of the blue and explained the rules then for the first time, there is no commitment to make (you have to decide in the moment) and the prediction has been made before, not after.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-03T22:26:15.751Z · LW(p) · GW(p)

The best time to plant a tree is twenty years ago. The second-best time is now.

Similarly, the best time to commit to always doing whatever gets me the most utility in any given situation is at birth, but there's no reason I shouldn't commit to it now. I certainly don't have to wait until someone presents me with two boxes.

Replies from: Alejandro1
comment by Alejandro1 · 2012-07-03T22:39:32.267Z · LW(p) · GW(p)

Sure, I can and should commit to doing "whatever gets me the most utility", but this is general and vague. And the detailed reasoning that follows in your parent comment is something I cannot do now if I have no conception of the problem. (In case it is not clear, I am assuming in my version that before being presented with the boxes and explained the rules, I am an innocent person who has never thought of the possibility of my choices being predicted, etc.)

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-03T23:37:04.348Z · LW(p) · GW(p)

Consider the proposition C: "Given a choice between A1 and A2, if the expected value of A1 exceeds the expected value of A2, I will perform A1."

If I am too innocent to commit to C, then OK, maybe I'm unable to deal with Newcombe-like problems.
But if I can commit to C, then... well, suppose I've done so.

Now Omega comes along, and for reasons of its own, it decides it's going to offer me two boxes, with some cash in them, and the instructions: one-box for N1, or two-box for N2, where N1 > N2. Further, it's going to put either N2 or N1+N2 in the boxes, depending on what it predicts I will do.

So, first, it must put money in the boxes.
Which means, first, it must predict whether I'll one-box or two-box, given those instructions.

Are we good so far?

Assuming we are: so OK, what is Omega's prediction?

It seems to me that Omega will predict that I will, hypothetically, reason as follows:
"There are four theoretical possibilities. In order of profit, they are:
1: unpredictably two-box (nets me N1 + N2)
2: predictably one-box (nets me N1)
3: predictably two-box (nets me N2)
4: unpredictably one-box (nets me nothing)

So clearly I ought to pick 1, if I can.
But can I?
Probably not, since Omega is a very good predictor. If I try to pick 1, I will likely end up with 3. Which means the expected value of picking 1 is less than the expected value of picking 2.
So I should pick 2, if I can.
But can I?
Probably, since Omega is a very good predictor. If I try to pick 2, I will likely end up with 2.
So I will pick 2."

And, upon predicting that I will pick 2, Omega will put N1 + N2 in the boxes.

At this point, I have not yet been approached, am innocent, and have no conception of the problem.

Now, Omega approaches me, and what do you know: it was right! That is in fact how I reason once I'm introduced to the problem. So I one-box.

At this point, I would make more money if I two-box, but I am incapable of doing so... I'm not the sort of system that two-boxes. (If I had been, I most likely wouldn't have reached this point.)

If there's a flaw in this model, I would appreciate having it pointed out to me.
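
For what it's worth, here is the model above as a few lines of code (a sketch only; the reasoning is collapsed into a single deterministic function, which is exactly the simplification being proposed):

    N1, N2 = 1_000_000, 1_000   # the abstract amounts above, N1 > N2; values are placeholders

    def hypothetical_reasoning(predictor_is_good):
        # option 1 ("unpredictably two-box", N1 + N2) is unattainable against a good
        # predictor, so the reasoning settles on option 2 ("predictably one-box", N1)
        return "one-box" if predictor_is_good else "two-box"

    prediction = hypothetical_reasoning(True)    # Omega runs the reasoning in advance
    in_boxes = (N1 + N2) if prediction == "one-box" else N2
    choice = hypothetical_reasoning(True)        # later, the agent reasons the same way
    payout = in_boxes if choice == "two-box" else in_boxes - N2
    print(prediction, choice, payout)            # one-box one-box 1000000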

comment by JonathanLivengood · 2012-07-03T23:14:32.091Z · LW(p) · GW(p)

I agree with what Dave says in his comment, which is basically that you could have a very generic pre-commitment strategy.

But suppose you couldn't come up with a very generic pre-commitment strategy or that it is really implausible that you could come up with a pre-commitment solution at all before hearing the rules of the game. Would that mean that there are no pre-commitment solutions? No. You've already identified a pre-commitment solution. We only seem to disagree about how important it is that the reasoner be able to discover a pre-commitment solution quickly enough to implement it.

What I am saying is that the logic of the problem does not depend on the reasoning capacity of the agent involved. Good reasoning is good reasoning whether the agent can carry it out or not.

Also, sorry if I jumped too hard.

comment by TheOtherDave · 2012-07-03T01:41:40.006Z · LW(p) · GW(p)

Can you expand on the difference here?

If I have a "strong enough" precommitment to do whatever gets me the most money, and Predictor privately sets the payoffs for (1box, 2box) such that 1box > 2box, then Predictor can predict that upon being told the payoffs, I will 1box, and Predictor will therefore set the boxes as for the 1box scenario. Conversely, if Predictor privately sets the payoffs such that 2boxing gets me the most money, then Predictor can predict that upon being told the payoffs I'll 2box.

So what thorny issues does this delayed specification of the rules prevent me from sidestepping?

Replies from: Alejandro1
comment by Alejandro1 · 2012-07-03T21:28:04.262Z · LW(p) · GW(p)

See my response to Jonathan.

comment by TrE · 2012-07-03T13:29:05.455Z · LW(p) · GW(p)

What, exactly, is your goal in this conversation? What would an explanation of why CDT two-boxes have to look like in order for you to accept it?

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T14:10:28.266Z · LW(p) · GW(p)

We've already established that some of the disagreement comes from whether Newcomb includes backwards causality or not, with most posters agreeing that Newcomb including backwards causality is not realistic or interesting (see the excerpt from Nozick that I edited into my top level post) and the focus instead shifting onto weak (empirical) Newcomb, where Omega makes its predictions without looking into the future.

Right now, most posters also seem to be of the opinion that the answer to Newcomb is not to just one-box, but to precommit to one-boxing before Omega can make its decision, for example by choosing a different decision theory before encountering Newcomb. I argued that this is a separate problem ("meta-Newcomb") that is fundamentally different from both Newcomb and weak Newcomb. Whether a CDT agent should change strategies (precommit) in meta-Newcomb seems to depend on whether such a strategy can be proven never to perform worse than CDT in non-Newcomb problems.

The last sentence is my personal assessment; the rest should be general consensus by now.

comment by Kindly · 2012-07-02T23:44:30.284Z · LW(p) · GW(p)

You don't need to perfectly simulate Omega to play Newcomb. I am not Omega, but I bet that if I had lots of money and decided to entertain my friends with a game of Newcomb's boxes, I would be able to predict their actions with better than 50.1% accuracy.

Clearly CDT (assuming for the sake of the argument that I'm friends with CDT) doesn't care about my prediction skills, and two-boxes anyway, earning a guaranteed $1000 and a 49.9% chance of a million, for a total of $500K in expectation.

On the other hand, if one of my friends one-boxes, then he gets a 50.1% chance of a million, for a total of $501K in expectation.

Not quite as dramatic a difference, but it's there.
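
The arithmetic behind those numbers, for anyone who wants to check it (exact fractions to avoid rounding; the 50.1% figure is the one assumed above):

    from fractions import Fraction

    MILLION, THOUSAND = 1_000_000, 1_000
    accuracy = Fraction(501, 1000)   # the predictor is right 50.1% of the time

    ev_two_box = THOUSAND + (1 - accuracy) * MILLION   # the million shows up only on a misprediction
    ev_one_box = accuracy * MILLION

    print(ev_two_box)   # 500000
    print(ev_one_box)   # 501000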

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T09:15:59.003Z · LW(p) · GW(p)

It's not a question of whether Omega is fallible or not; it's a question of whether Omega's prediction (no matter how incorrect) depends on the decision you are going to make (backwards causality), or only on decisions you have made in the past (no backwards causality). The first case is uninteresting since it cannot occur in reality, and in the second case it is always better to two-box, no matter the payouts or the probability of Omega being wrong.

  • If Omega is 100% sure you're one-boxing, you should two-box.
  • If Omega is 75% sure you're one-boxing, you should two-box.
  • If Omega is 50% sure you're one-boxing, you should two-box.
  • If Omega is 25% sure you're one-boxing, you should two-box.
  • If Omega is 0% sure you're one-boxing, you should two-box.
Replies from: Kindly, wedrifid
comment by Kindly · 2012-07-03T12:23:01.169Z · LW(p) · GW(p)

What if Omega makes an identical copy of you, puts the copy in an identical situation, and uses the copy's decision to predict what you will do? Is "whatever I decide to do, my copy will have decided the same thing" a valid argument?
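
One way to picture the copy proposal (purely schematic, assuming the player runs a deterministic decision procedure; everything here is a stand-in, not a claim about how such a copy would actually be made):

    MILLION, THOUSAND = 1_000_000, 1_000

    def my_decision_procedure():
        # stand-in for whatever deterministic program the player actually runs
        return "one-box"

    prediction = my_decision_procedure()   # the copy, run in the identical situation
    opaque = MILLION if prediction == "one-box" else 0
    choice = my_decision_procedure()       # the original, facing the same situation
    assert choice == prediction            # "whatever I decide, my copy will have decided the same"
    print(opaque + (THOUSAND if choice == "two-box" else 0))   # 1000000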

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T12:54:01.568Z · LW(p) · GW(p)

No, because if Omega tells you that, then you have information that your copy doesn't, which means that it's not an identical situation; and if Omega doesn't tell you, then you might just as well be the copy itself, meaning that either you can't be predicted or you're not playing Newcomb.

If Omega tells both of you the same thing, it lies to one of you; and in that case you're not playing Newcomb either.

Replies from: Kindly
comment by Kindly · 2012-07-03T21:29:03.195Z · LW(p) · GW(p)

Could you elaborate on this?

and if Omega doesn't tell you, then you might just as well be the copy itself, meaning that either you can't be predicted or you're not playing Newcomb.

That's certainly the situation I have in mind (although certainly Omega can tell both of you "I have made a copy of the person that walked into this room to simulate; you are either the copy or the original" or something to that effect). But I don't see how either one of "you can't be predicted or you're not playing Newcomb" makes sense.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-04T11:54:38.422Z · LW(p) · GW(p)

If you're the copy that Omega bases its prediction of the other copy on, how does Omega predict you?

comment by wedrifid · 2012-07-03T09:18:21.140Z · LW(p) · GW(p)

If Omega is 100% sure you're one-boxing, you should two-box.

Unless you like money, in which case you should one box.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T09:26:01.406Z · LW(p) · GW(p)

If Omega is 100% sure you're one-boxing, you can one-box and get $1,000,000 or you can two-box and get $1,001,000. You cannot make the argument that one-boxing is better in this case unless you argue that your decision affects Omega's prediction, and that would be backwards causality. If you think backwards causality is a possibility, that's fine and you should one-box; but then you still have to agree that under the assumption that backwards causality cannot exist, two-boxing wins.

Replies from: wedrifid
comment by wedrifid · 2012-07-03T10:12:28.006Z · LW(p) · GW(p)

If you think backwards causality is a possibility, that's fine and you should one-box; but then you still have to agree that under the assumption that backwards causality cannot exist, two-boxing wins.

Backwards causality cannot exist. I still take one box. I get the money. You don't. Your reasoning fails.

On a related note: The universe is (as far as I know) entirely deterministic. I still have free will.

Replies from: Vladimir_Nesov, Andreas_Giger
comment by Vladimir_Nesov · 2012-07-03T10:32:38.768Z · LW(p) · GW(p)

Backwards causality cannot exist.

It's not completely clear what "backward causality" (or any causality, outside the typical contexts) means, so maybe it can exist. Better to either ignore the concept in this context (as it doesn't seem relevant) or taboo/clarify it.

Replies from: wedrifid
comment by wedrifid · 2012-07-03T10:54:42.034Z · LW(p) · GW(p)

It's not completely clear what "backward causality" (or any causality, outside the typical contexts) means, so maybe it can exist. Better to either ignore the concept in this context (as it doesn't seem relevant) or taboo/clarify it.

The meaning of what Andreas was saying was sufficiently clear. He means "you know, stuff like flipping time travel and changing the goddamn past". Trying to taboo causality and sending everyone off to read Pearl would be a distraction. Possibly a more interesting distraction than another "CDT one boxes! Oh, um.... wait... No, Newcomb's doesn't exist. Err... I mean CDT two boxes and it is right to do so so there!" conversation but not an overwhelmingly relevant one.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-07-03T11:17:12.812Z · LW(p) · GW(p)

He means "you know, stuff like flipping time travel and changing the goddamn past".

We are in a certain sense talking about determining the past; the distinction is between shared structure (as in, the predictor has your source code) and time machines. The main problem seems to be unwillingness to carefully consider the meaning of implausible hypotheticals, and continued distraction by the object-level dispute doesn't seem to help.

("Changing" vs. "determining" point should probably be discussed in the context of the future, where implausibility and fiction are less of a distraction.)

comment by Andreas_Giger · 2012-07-03T10:29:24.302Z · LW(p) · GW(p)

If backwards causality cannot exist, would you say that your decision can affect the prediction that Omega made before you made your decision?

Replies from: wedrifid
comment by wedrifid · 2012-07-03T10:57:21.883Z · LW(p) · GW(p)

If backwards causality cannot exist, would you say that your decision can affect the prediction that Omega made before you made your decision?

No. Both the prediction and my decision came about due to past states of the universe (including my brain). They do not influence each other directly. I still take one box and get $1,000,000, and that is the best possible outcome I can expect.

comment by Will_Newsome · 2012-07-02T08:42:39.873Z · LW(p) · GW(p)

Because academic decision theorists say that CDT two-boxes. A real causal decision theorist would, of course, one-box. But the causal decision theorists in academic decision theorists' heads two-box, and when people talk about causal decision theory, they're generally talking about the version of causal decision theory that is in academics' heads. This needn't make any logical sense.

Replies from: wedrifid
comment by wedrifid · 2012-07-02T08:57:58.780Z · LW(p) · GW(p)

A real causal decision theorist would, of course, one box.

Only in the "No True Scotsman" sense. What Will calls CDT is an interesting decision theory, and from what little I've seen of Will talking about it, it may also be a superior decision theory, but it doesn't correspond to the decision theory called CDT. The version of Causal Decision Theory that is in academics' heads is CDT; the one that is in Will's head needs a new name.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-07-03T12:19:22.126Z · LW(p) · GW(p)

(Fair enough. My only real problem with causal decision theory being called causal decision theory is that at best it's a strange use of the word "causal", breaking with thousands of years of reasonable philosophical tradition. That's my impression anyway—but there's like a billion papers on Newcomb's problem, and maybe one of them gives a perfectly valid explanation of the terminology.)

Replies from: wedrifid
comment by wedrifid · 2012-07-03T13:29:27.633Z · LW(p) · GW(p)

I'm not familiar with the philosophical tradition that would be incompatible with the way CDT uses 'causality'. It quite possibly exists and my lack of respect for philosophical tradition leaves me ignorant of such.

Replies from: JonathanLivengood
comment by JonathanLivengood · 2012-07-03T18:39:49.647Z · LW(p) · GW(p)

From my perspective, it's a shame that you have little regard for philosophical tradition. But as someone who is intimately familiar with the philosophical literature on causation, it seems to me that the sense of "causal" in causal decision theory, while imprecise, is perfectly compatible with most traditional approaches. I don't see any reason to think the "causal" in "causal decision theory" is incompatible with regularity theories, probabilistic theories, counterfactual theories, conserved quantity theories, agency/manipulation/intervention theories, primitivism, power theories, or mechanism theories. There might be some tension between CDT and projectivist theories, but I suspect that even there, you will not find outright incompatibility.

For a nice paper in the overlap between decision theory and the philosophy of causation and causal inference, you might take a look at the paper Conditioning and Intervening (pdf) by Meek and Glymour if you haven't seen it already. Of course, Glymour's account of causation is not very different from Pearl's, so maybe you don't think of this as philosophy.

Replies from: wedrifid
comment by wedrifid · 2012-07-03T19:19:13.968Z · LW(p) · GW(p)

But as someone who is intimately familiar with the philosophical literature on causation, it seems to me that the sense of "causal" in causal decision theory, while imprecise, is perfectly compatible with most traditional approaches.

That was my impression (without sufficient confidence that I wished to outright contradict you on the facts).

comment by mwengler · 2012-07-03T07:46:39.197Z · LW(p) · GW(p)

You won't get it in this echo chamber, and I hate to spend my own scarce karma to tell you this, but you are definitely on to something. Building FAIs so that they get right answers to problems which can never occur in the real world IS a problem. It seems clear enough that a harder problem than getting an FAI to one-box on Newcomb is getting an FAI to correctly determine when it has enough evidence to believe that a particular claimant to being Omega is legit. In the absence of a proper Omega detector (or Omega-conman rejector), the FAI will be scammed by entities, some of whom may be unfriendly.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2012-07-03T09:36:26.775Z · LW(p) · GW(p)

I'm not sure you understand my position correctly, but I definitely don't care about FAI or UFAI at this point. This is a mathematical problem with a correct solution that depends on whether you consider backwards causality or not; what implications this solution has is of no interest to me.