The Interrupted Ultimate Newcomb's Problem
post by linkhyrule5 · 2013-09-10T23:04:54.042Z · LW · GW · Legacy · 18 comments
While figuring out my error in my solution to the Ultimate Newcomb's Problem, I ran across this (distinct) reformulation that helped me distinguish between what I was doing and what the problem was actually asking.
... but that being said, I'm not sure if my answer to the reformulation is correct either.
The question, cleaned for Discussion, looks like this:
You approach the boxes and lottery, which are exactly as in the UNP. Before reaching them, you come to a sign with a flashing red light. The sign reads: "INDEPENDENT SCENARIO BEGIN."
Omega, who has predicted that you will be confused, shows up to explain: "This is considered an artificially independent experiment. Your algorithm for solving this problem will not be used in my simulations of your algorithm for my various other problems. In other words, you are allowed to two-box here but one-box Newcomb's problem, or vice versa."
This is motivated by the realization that I've been making the same mistake as in the original Newcomb's Problem, though this justification does not (I believe) apply to the original. The mistake is simply this: I assumed that I appear in medias res. When solving the UNP, it is (or seems to be) important to remember that you may be in some very rare edge case of the main problem, and that you are choosing your algorithm for the problem as a whole.
But if that's not true - if you're allowed to appear in the middle of the problem, and no counterfactual-yous are at risk - it sure seems like two-boxing is justified, even though it amounts to what khafra called "trying to ambiently control basic arithmetic".
(Speaking of which, is there a write-up of ambient decision theory anywhere? For that matter, is there any compilation of decision theories?)
EDIT: (Yes to the first, though not under that name: Controlling Constant Programs.)
Comments sorted by top scores.
comment by Vladimir_Nesov · 2013-09-11T17:53:52.850Z · LW(p) · GW(p)
When you consider expected utility in a broader context, expected utility in a smaller event (a particular scenario) only contributes proportionally to the probability of that event. The usual heuristic of locally (independently) maximizing expected utility of particular scenarios stops working if your decisions within the scenario are able to control the probability of the scenario. And this happens naturally when the decision as to whether to set up the scenario is based on predictions of your hypothetical decisions within the scenario.
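In symbols (my gloss, not part of the original comment): writing S for the scenario,

E[U] = P(S) · E[U | S] + P(¬S) · E[U | ¬S]

so a decision that raises E[U | S] while also lowering P(S) can lower total expected utility whenever the ¬S branch is the less lucrative one.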
One way of influencing whether a hypothetical scenario is implemented is by making it impossible. When faced with the hypothetical scenario, make a decision that contradicts the assumptions of the hypothetical (such as a prediction of your decision, or something based on that prediction). That would make the hypothetical self-contradictory, impossible to implement.
If you find yourself in an "independent hypothetical", the prior probability of this event is not apparent, but it matters if it can be controlled, and it can be controlled, for example, by making decisions that contradict the assumptions of the hypothetical. If a certain decision increases the expected utility of a game conditional on the game being played, but decreases the probability that the game gets played, this can easily make the outcome worse, if the alternative to the game being played is less lucrative.
In our case, if you make a decision that implies that a prime number is composite, given the assumption that the prediction is accurate, this makes the scenario (in the case where the number is prime) impossible, its measure zero, and its conditional expected utility irrelevant (it doesn't contribute to the overall expected utility). Alternatively, if we permit some probability of error in the prediction, this significantly reduces the probability of the scenario, as it now requires the prediction to be in error. (By eliminating the case where the Lottery number is prime, you increase the conditional probability of the number being composite, given the assumption that the game gets played, but you don't change the absolute probability of the number being composite, outside the assumption that the game gets played.)
↑ comment by linkhyrule5 · 2013-09-11T21:00:38.761Z · LW(p) · GW(p)
Pardon me, how does this differ from the examples in this post?
↑ comment by Vladimir_Nesov · 2013-09-11T21:19:44.308Z · LW(p) · GW(p)
What analogy are you thinking about (between which points, more specifically)? These discussions don't seem particularly close.
↑ comment by linkhyrule5 · 2013-09-11T22:12:35.015Z · LW(p) · GW(p)
It seems like Omega's actions make the primality of 1033 a constant you can ambiently control. (Though, to be fair, I don't understand that post very well, and it's probably true that if I did I wouldn't have this question.)
↑ comment by Vladimir_Nesov · 2013-09-12T23:17:05.661Z · LW(p) · GW(p)
You control the posterior probability of 1033 being composite, conditional on the event of playing the game where both numbers are 1033. Seen from outside that event (i.e. without conditioning on it), what you are controlling is the probability of Omega's number being composite, but not the probability of the Lottery number being composite. The expected value of composite 1033 comes from the event of the Lottery number being composite, but you don't control the probability of this event. Instead, you control the conditional probability of this event given another event (the game with both numbers 1033). This conditional probability is therefore misleading for the purposes of overall expected utility maximization, where outcomes are weighed by their absolute (prior) probability, not by their conditional probability given arbitrary sub-events.
↑ comment by linkhyrule5 · 2013-09-12T23:53:28.331Z · LW(p) · GW(p)
Ah, I see. (Thanks! Some rigour helps a lot.)
So, I can definitely see why this applies to the Ultimate Newcomb's Problem. As a contrast to help me understand it, I've adjusted this problem so that P(both numbers are 1033 | playing the game) ≈ 1. See my response to Manfred here.
It is, of course, possible that your algorithm results in you not playing the game at all - but if Omega does this every year, say, then the winners will be the ones who make the most when the numbers are the same, since no other option exists.
↑ comment by Vladimir_Nesov · 2013-09-13T00:00:49.975Z · LW(p) · GW(p)
(If Omega can choose whether to let you play the game, which is the only game available to you, and the game has the rule that the numbers must be equal, then you should two-box to improve your chances of being allowed to play when the Lottery number is composite, and thus capture more of the composite Lottery outcomes. This works not because you are increasing the conditional probability of the number you get being composite (though you do), but because you are increasing the prior probability of playing the game with a composite Lottery number.)
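A quick expected-value check of this variant, under payoff assumptions that are mine, not Nesov's (the Lottery pays $2 million on a composite number, Box B holds $1 million iff one-boxing was predicted, matched games are the only way to win anything, and Omega matches the numbers whenever that is consistent with its prediction). Writing p for the probability of a prime Lottery draw:

E[always one-box] = p · $1,000,000 (you play only on matched prime draws)
E[two-box when allowed] = (1 - p) · $2,001,000 (you play only on matched composite draws)

so the two-boxer comes out ahead for any p below roughly 2/3, and composites are far more common than primes in any realistic range.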
↑ comment by linkhyrule5 · 2013-09-13T00:11:42.545Z · LW(p) · GW(p)
(Responded before your edit, so doubly-curious about your answer.)
↑ comment by linkhyrule5 · 2013-09-13T00:07:28.312Z · LW(p) · GW(p)
Okay. I feel much more confident in my answer, then :P.
Double-checking: What if the lottery picks primes and composites with equal frequency?
... then, on average, you'll get into half the games and make twice the money, so you should still two-box. I think.
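Checking that arithmetic under the payoff assumptions sketched above: with p = 1/2, the one-boxer averages 0.5 · $1,000,000 = $500,000 per draw (playing only matched prime games), while the two-boxer averages 0.5 · $2,001,000 ≈ $1,000,500 (playing only matched composite games) - both strategies get into half the games, and the two-boxer makes about twice the money per game, so two-boxing does still win under those assumptions.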
So, assuming you/Manfred don't poke holes in this, I'll edit the original post. Thanks - having a clear, alternative two-box problem makes understanding the original much easier.
comment by calef · 2013-09-11T03:26:21.048Z · LW(p) · GW(p)
What's to prevent Omega from performing the simulation of you wherein a sign appears reading "INDEPENDENT SCENARIO BEGIN", and he tells you "This is considered an artificially independent experiment. Your algorithm for solving this problem will not be used in my simulations of your algorithm for my various other problems. In other words, you are allowed to two-box here but one-box Newcomb's problem, or vice versa."?
↑ comment by Watercressed · 2013-09-11T03:35:15.158Z · LW(p) · GW(p)
The usual formulation of Omega does not lie.
comment by Manfred · 2013-09-12T18:57:45.364Z · LW(p) · GW(p)
Suppose you play this interrupted Newcomb + lottery problem 1 jillion times, and the lottery outputs a suitably wide range of numbers. Which strategy wins: 1-boxing, or 1-boxing except when the numbers are the same, then 2-boxing?
Or suppose that you only get to play one game of this problem in your life, so that people only get one shot. So we take 1 jillion people and have them play the game - who wins more money during their single try, the 1-boxers or the 2-box-if-same-ers?
How can 1-boxers win more over 1 jillion games when they win less when the numbers are the same? Because seeing identical numbers is a lot less common for 1-boxers than 2-boxers.
That is, not only are you controlling arithmetic, you're also controlling the probability that the thought experiment happens at all, and yes, you do have to keep track of that.
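Here's a toy Monte Carlo of the comparison Manfred describes. The model is entirely mine, not Manfred's or the exact UNP setup: the Lottery draws uniformly from a small range and pays $2 million iff its number is composite; Omega inscribes a composite number iff it predicts two-boxing (and fills Box B with $1 million iff it predicts one-boxing); and Omega makes the two numbers match whenever that is consistent with its perfect prediction.

import random

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def play(choose, rng, lo=1000, hi=1100):
    # choose(numbers_match) -> "one" or "two"
    lottery = rng.randrange(lo, hi)
    # A match is consistent iff the agent's choice on a match implies the
    # right primality: two-boxing -> composite inscription, one-boxing -> prime.
    match = (choose(True) == "two") == (not is_prime(lottery))
    choice = choose(match)
    payout = 0
    if not is_prime(lottery):
        payout += 2_000_000  # the Lottery pays on a composite draw
    if choice == "two":
        payout += 1_000      # transparent Box A
    else:
        payout += 1_000_000  # Box B, filled on a (correct) one-box prediction
    return payout, match

def always_one(match):
    return "one"

def two_if_same(match):
    return "two" if match else "one"

rng = random.Random(0)
for name, strategy in [("always 1-box", always_one), ("2-box if same", two_if_same)]:
    results = [play(strategy, rng) for _ in range(100_000)]
    total = sum(p for p, _ in results)
    matches = sum(1 for _, m in results if m)
    print(f"{name}: total ${total:,}, matched games: {matches}")

Under these assumptions the always-1-boxers earn more in total even though the 2-box-if-same players earn more per matched game, precisely because the 1-boxers see matched numbers only on (rare) prime draws.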
↑ comment by linkhyrule5 · 2013-09-12T20:07:34.139Z · LW(p) · GW(p)
Which strategy wins: 1-boxing, or 1-boxing except when the numbers are the same, then 2-boxing?
The 2-boxers, because you've misunderstood the problem.
The thought experiment only ever occurs when the numbers coincide. Equivalently, this experiment is run such that Omega will always output the same number as the lottery, in addition to its other restrictions. That's why it's called the Interrupted Newcomb's Problem: it begins in medias res, and you don't have to worry about the low probability of the coincidence itself - you don't have to decide your algorithm to optimize for the "more likely" case.
Or at least, that's my argument. It seems fairly obvious, but it also says "two-box" on a Newcomb-ish problem, so I'd like to have my work checked :p.
↑ comment by Manfred · 2013-09-12T21:45:47.894Z · LW(p) · GW(p)
I guess I'm not totally clear on how you're setting up the problem, then - I thought it was the same as in Eliezer's post.
Consider this extreme version though: let's call it "perverse Newcomb's problem with transparent boxes."
The way it works is that the boxes are transparent, so that you can see whether the million dollars is there or not (and as usual you can see $1000 in the other box). And the reason it's perverse is that Omega will only put the million dollars there if you will not take the box with the thousand dollars in it no matter what. Which means that if the million dollars for some reason isn't there, Omega expects you to take the empty box. And let's suppose that Omega has a one in a trillion error rate, so that there's a chance that you'll see the empty box even if you were honestly prepared to ignore the thousand dollars.
Note that this problem is different from the vanilla Newcomb's problem in a very important way: it doesn't just depend on what action you eventually take, it also depends on what actions you would take in other circumstances. It's like how in the Unexpected Hanging paradox, the prisoner who knows your strategy won't be surprised based on what day you hang them, but rather based on how many other days you could have hanged them.
You agree to play the perverse Newcomb's problem with transparent boxes (PNPTB), and you get just one shot. Omega gives some disclaimer (which I would argue is pointless, but may make you feel better) like "This is considered an artificially independent experiment. Your algorithm for solving this problem will not be used in my simulations of your algorithm for my various other problems. In other words, you are allowed to two-box here but one-box Newcomb's problem, or vice versa." Though of course Omega will still predict you correctly.
So you walk into the next room and....
a) see the boxes, with the million dollars in one box and the thousand dollars in the other. Do you one-box or two-box?
b) see the boxes, with one box empty and a thousand dollars in the other box. Do you take the thousand dollars or not?
I'd guess you avoided the thousand dollars in both scenarios. But suppose that you walk into the room and see scenario b and are a bit more conflicted than normal.
Omega gave that nice disclaimer about how no counterfactual selves would be impacted by this experiment, after all, so you really only get one shot to make some money. Your options: either get $1000, or get nothing. So you take the $1000 - who can it hurt, right?
And since Omega predicted your actions correctly, Omega predicted that you would take the $1000, which is why you never saw the million.
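A quick expected-value check of those two policies (the comparison is my own framing, using Manfred's one-in-a-trillion error rate):

eps = 1e-12  # Omega's error rate, per Manfred

# Policy 1: never take the $1000 box, even when the big box looks empty.
# If Omega errs, you see an empty box, refuse the $1000, and get nothing.
ev_never = (1 - eps) * 1_000_000 + eps * 0

# Policy 2: 1-box when the million is there, grab the $1000 when it isn't.
# Omega predicts the grab, so the big box is empty except when Omega errs.
ev_grab = (1 - eps) * 1_000 + eps * 1_000_000

print(ev_never, ev_grab)  # ~$1,000,000 vs ~$1,000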
↑ comment by linkhyrule5 · 2013-09-12T22:23:59.354Z · LW(p) · GW(p)
Right, which would be silly, so I wouldn't do that.
Oh, I see what's confusing me. The "Interrupted" version of the classic Newcomb's Problem is this: replace Omega with a DumbBot that doesn't even try to predict your actions; it just gives you outcomes at random. So you can't affect your counterfactual selves, and don't even bother - just two-box.
This problem - which I should rename to the Interrupted Ultimate Newcomb's Problem - does require Omega. It would look like this: from Omega's end, Omega simulates a jillion people, as you put it, finds all the people whose choices produce primes or nonprimes (depending on the primality of 1033), and then poses this question only to those people. From your point of view, though, you know neither the primality of 1033 nor your own eventual answer, so it seems like you can ambiently control 1033 to be composite - and the versions of you that didn't make whatever choice you make are never part of the experiment, so who cares?
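A cartoon of that filtering step (the coin-flip "choices" and every name here are hypothetical illustration, not a real model of Omega, and the prime/choice correspondence is the assumption carried over from the sketches above):

import random

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def choice_implies_prime(choice):
    # assumption: 1-boxing implies a prime inscription
    return choice == "one"

# Simulated population; each person's choice stands in for a full simulation.
people = [random.choice(["one", "two"]) for _ in range(1_000_000)]

# Omega keeps only the people whose choice is consistent with 1033's actual
# primality; everyone else is never part of the experiment.
kept = [c for c in people if choice_implies_prime(c) == is_prime(1033)]
print(f"{len(kept)} of {len(people)} ever face the matched-1033 scenario")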
comment by [deleted] · 2013-09-11T23:15:43.149Z · LW(p) · GW(p)
Gary Drescher fleshes this out in Good and Real (referring to Newcomb's problem where both boxes are transparent, emphasis is mine):
Another way to justify the one-box choice is to note that for all you know, you might be the simulated you; hence you should act in part for your (causal) influence on the simulation outcome. ... Say the real you assumes that it is the real you. Nothing false can logically follow from that true (even if unjustified) assumption; in particular, nothing false follows as to which choice is in fact more lucrative for you to make. (The simulated you might, however, infer false conclusions from its false assumption that it is the real you.)
And I paraphrase the last sentence of that footnote: Therefore, the one-box choice is more lucrative for you only if you cannot safely assume that you are the real you.
(The original sentence was, "So the one-box choice is more lucrative for you only if you cannot infer otherwise by assuming that you are indeed the real you," which, to me, is most clearly read as expressing the opposite of what Drescher meant to express.)
comment by blacktrance · 2013-09-11T03:14:41.644Z · LW(p) · GW(p)
I still one-box. If Omega believes I will one-box, then I get a million dollars. I want Omega to think I will one-box, because that gives me the maximum payoff. Therefore, I one-box.
↑ comment by linkhyrule5 · 2013-09-11T04:34:29.220Z · LW(p) · GW(p)
But you can get $2 million from the lottery by choosing to two-box, for a net gain of $1 million.
I think. It certainly seems like you can change your epistemic probability of "1033 is prime" by changing how you decide in this scenario.
In the non-Interrupted scenario, this just means that you're in a very rare universe and you've generally given up somewhere between zero and $1 million - but I don't quite see how this applies to a truly one-off case.