Newbomb's parabox
post by Locaha · 2013-07-01T13:51:58.089Z · LW · GW · Legacy · 33 comments
Excuse the horrible terribad pun...
An evil Omega has locked you in a box. Inside, there is a bomb and a button. Omega informs you that in an hour the bomb will explode, unless you do the opposite of what Omega predicted you will do. Namely, press the button if it predicted you won't, or vice versa. In that case, the bomb won't explode and the box will open, letting you go free.
Your actions?
PS. You have no chance to survive make your time.
PPS. Quick! Omega predicted that in exactly 5 seconds from now, you will blink. Your actions?
PPPS. Omega vs. Quantum Weather Butterfly. The battle of the Eon!
33 comments
comment by B_For_Bandana · 2013-07-01T14:07:56.852Z · LW(p) · GW(p)
This isn't a paradox; the bomb will go off no matter what, assuming Omega is a perfect predictor.
Amusingly, this wouldn't seem like a paradox if something good was guaranteed to happen if Omega guessed right. Like if the problem was that you're locked in a box, and you can only avoid getting a million dollars if you do the opposite of what Omega predicts. Answer: "cool, I get a million dollars!" and you stop thinking. In the problem as stated, you're casting about for an answer that doesn't seem possible, and that feels like thinking about paradoxes, so you think the problem is a paradox. It isn't. You're just trapped in a box with a bomb.
comment by metatroll · 2013-07-01T14:03:48.220Z · LW(p) · GW(p)
Your actions?
Take off every "zig".
Replies from: DanielLC
↑ comment by DanielLC · 2013-07-01T18:24:44.968Z · LW(p) · GW(p)
You know what you doing.
Replies from: Multiheaded
↑ comment by Multiheaded · 2013-07-01T19:01:09.963Z · LW(p) · GW(p)
For great paperclips!
Replies from: DanielLC
comment by CarlShulman · 2013-07-01T15:03:57.811Z · LW(p) · GW(p)
This is just the "Death in Damascus" case. The case is more interesting if there is some asymmetry, e.g. if you press the button you get pleasant background music for the hour before you die.
A TDTer or evidential decision theorist would be indifferent between the two options in the symmetric version, and pick the better choice in the asymmetric version.
For CDT, neither option is "ratifiable," i.e. CDT recommends doing whatever you think you won't do and immediately regretting any action you do take (if you can act suddenly, before you can update against following through with your plan).
Replies from: wedrifid
↑ comment by wedrifid · 2013-07-02T11:13:31.212Z · LW(p) · GW(p)
This is just the "Death in Damascus" case.
Some unintended humour from the linked essay:
Answer 1: If you take box A, you’ll probably get $100. If you take box B, you’ll probably get $700. You prefer $700 to $100, so you should take box A.
Verdict: WRONG!.
That's true. If B gives the $700 and you want the $700 you clearly pick B, not A!
This is exactly the reasoning that leads to taking one box in Newcomb’s problem, and one boxing is wrong. (If you don’t agree, then you’re not going to be in the target audience for this post I’m afraid.)
Oh! For this to make (limited) sense it must mean that answer 1 "so you should take box A" is a typo and he intended to say 'B' as the answer.
It seems that two wrongs can make a right (when both errors happen to entail a binary inversion of the same bit).
The only alternative is to deny that B is even a little irrational. But that seems quite odd, since choosing B involves doing something that you know, when you do it, is less rewarding than something else you could just as easily have done.
So I conclude Answer 2 is correct. Either choice is less than fully rational. There isn’t anything that we can, simply and without qualification, say that you should do. This is a problem for those who think decision theory should aim for completeness, but cases like this suggest that this was an implausible aim.
Poor guy. He did all the work of identifying the problem, setting up scenarios to illustrate it, and analysing the answers. But he just couldn't manage to bite the bullet that was staring him in the face: that his decision theory of choice was just wrong.
Replies from: TimS
↑ comment by TimS · 2013-07-02T11:19:15.230Z · LW(p) · GW(p)
In the context, I think the author is talking about anti-prediction. If you want to be where Death isn't, and Death knows you use CDT, should you choose the opposite of what CDT normally recommends?
I don't think I endorse his reasoning, but I think you misread him.
Replies from: wedrifid
↑ comment by wedrifid · 2013-07-02T12:17:50.437Z · LW(p) · GW(p)
I don't think I endorse his reasoning, but I think you misread him.
It is not inconceivable that I misread him. Mind reading is a task that is particularly difficult when it comes to working out precisely which mistake someone is making when at least part of their reasoning is visibly broken. My subjectively experienced amusement applies to what seemed to be the least insane of the interpretations. Your explanation requires the explanation to be wrong (i.e. it wouldn't be analogous to one-boxing at all) rather than merely the label.
Death knows you use CDT, should you choose the opposite of what CDT normally recommends?
That wouldn't make much sense (for the reasoning in the paper).
comment by gothgirl420666 · 2013-07-01T14:15:00.282Z · LW(p) · GW(p)
I would just flip a coin, I guess. I think it would be hard to get better than fifty fifty odds by thinking about it for a really long time.
Replies from: ZankerH
↑ comment by ZankerH · 2013-07-01T17:07:26.773Z · LW(p) · GW(p)
I'm pretty sure predicting the trajectory of a flipped coin is trivial compared to predicting your future thoughts and actions.
Replies from: ygert, JoshuaZ, Jayson_Virissimo
↑ comment by JoshuaZ · 2013-07-02T00:04:13.413Z · LW(p) · GW(p)
I'm pretty sure predicting the trajectory of a flipped coin is trivial compared to predicting your future thoughts and actions.
Why? While there are serious biases with how most people flip a coin, it doesn't take much to remove those. In that case, a close-to-fair coin is an extremely hard-to-predict system.
Replies from: ZankerH
↑ comment by ZankerH · 2013-07-02T09:02:24.610Z · LW(p) · GW(p)
How so? If you know the initial conditions (and Omega supposedly does), it's a straightforward motion dynamics problem.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2013-07-02T12:23:45.718Z · LW(p) · GW(p)
Sure, but it isn't at all obvious that humans are substantially different either in terms of predictability. For purposes of this conversation, that's the standard.
Replies from: mwengler
↑ comment by mwengler · 2013-07-02T15:37:17.781Z · LW(p) · GW(p)
I think it depends on your threshold of "substantial." A human brain responds in a complex and (probably) noisy fashion to inputs from the rest of the world. That I might choose to flip coins and choose actions based on the outcome is part of the operation of my future thoughts and actions. In my case, I would choose random numbers based on complex and noisy physical operations. For example, the 4th decimal place of a voltmeter reading the voltage across a hot resistor. To make it fun, I would take the 4th place at exactly 15 seconds after the beginning of the most recent minute, suppose it is N, then take the 4th place N readings later, call it M, then take the 4th place M seconds later, call it Z; this would be my random number. I would use the 4th place only if I saw it was one or two places to the right of where I saw variation on the voltmeter.
ALL of this would have to be predicted: the operation of my mind in deciding to do this, and the physical details of the voltmeter-hot-resistor system in enough detail to predict the resistor's detailed Brownian motion AND its interaction with the voltmeter. You'd probably also have to predict how I would pick the resistor and the voltmeter. As I considered what I would do, I would pick the 17th voltmeter on a Google search page. I would reach into a bin of resistors and pick one from the middle. I would partially smash the resistor with a hammer to make further difficulty for anyone predicting what would happen.
SO all of that has to be predicted to come up with Z, the output of my random number generator based on a resistor and a voltmeter. Is that "substantially" harder than predicting a single coin toss, or is it somehow "substantially" similar?
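For concreteness, here is a toy Python sketch of the chained procedure just described. It is only an illustration: the voltmeter and hot resistor are replaced by simulated noise, and the function names are made up for the example.

```python
import random

def voltmeter_reading():
    # Stand-in for reading the voltage across a hot resistor on a real
    # instrument; here it is just simulated noise around 0.7 V, with the
    # visible variation in the 3rd decimal place.
    return 0.7 + random.gauss(0, 1e-3)

def fourth_decimal(v):
    # The 4th decimal place of the reading (one place finer than the
    # visible variation above).
    return int(abs(v) * 10_000) % 10

# Step 1: the 4th decimal place of one reading gives N.
n = fourth_decimal(voltmeter_reading())

# Step 2: the 4th decimal place N readings later gives M.
m = None
for _ in range(max(n, 1)):
    m = fourth_decimal(voltmeter_reading())

# Step 3: the 4th decimal place M readings later gives Z, the "random number".
z = None
for _ in range(max(m, 1)):
    z = fourth_decimal(voltmeter_reading())

print("Z =", z)
```

The code itself is trivial; the point is how much physics, and how many incidental choices, feed into Z.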
↑ comment by Jayson_Virissimo · 2013-07-01T17:47:11.022Z · LW(p) · GW(p)
Not when the initial conditions for the coin flip are a function of your future thoughts and actions.
comment by cousin_it · 2013-07-01T18:39:46.423Z · LW(p) · GW(p)
The original Newcomb's problem is interesting because it leads to UDT, which allows coordination between copies. Your problem seems to require anti-coordination instead. (Note that if your copies have different information, UDT gives you anti-coordination for free, because it optimizes your whole input-output map.) I agree that anti-coordination between perfect copies would be nice if it were possible. Is it fruitful to think about anti-coordination, and if yes, what would the resulting theory look like?
Also, here are a couple of ways you can remove the need for Omega:
1) You wake up in a room with two buttons. You press one of them and go back to sleep. While you're asleep, the experimenter gives you an amnesia drug. You wake up again, not knowing if it's the first or second time. You press one of the buttons again, then the experiment ends and you go home. If you pressed different buttons on the first and second time, you win $100, otherwise nothing.
2) You are randomly chosen to take part in an experiment. You are asked to choose which of two buttons to press. Somewhere, another person unknown to you is given the same task. If you pressed different buttons, you both get $100, otherwise nothing.
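A quick payoff sketch for the first (amnesia) setup, under the natural assumption that the only thing you control is the probability p of pressing a given button on each awakening:

```python
import random

def win_probability(p):
    # You win the $100 only if the two presses differ (A then B, or B then A).
    return 2 * p * (1 - p)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"p = {p:.2f}  ->  P(win $100) = {win_probability(p):.3f}")

# Monte Carlo check at p = 0.5 (should come out near 0.5).
trials = 100_000
wins = sum((random.random() < 0.5) != (random.random() < 0.5) for _ in range(trials))
print("simulated win rate:", wins / trials)
```

Any deterministic policy (p = 0 or p = 1) wins nothing, since both awakenings press the same button, and the best you can do is 50% at p = 0.5. The second setup hits the same ceiling when the two strangers independently randomize.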
comment by Scott Garrabrant · 2013-07-01T17:45:25.176Z · LW(p) · GW(p)
Our only chance at this point is to try to outsmart Omega. I know this sounds impossible, but we can at least make some partial progress. If Omega is only correct, say, 90% of the time, it is probably the case that his correctness is a function of the complexity of your mental algorithm. The more complex your mental algorithm, the harder you will be to accurately predict. Once you reach a certain threshold of complexity, Omega's accuracy will very quickly approach 50%.
Further, you have an hour. You can try a different method of generating a pseudorandom bit every 5 minutes, and all it takes is for one of them to be unpredictable to bring Omega's accuracy down to 50%.
This doesn't require actually outsmarting Omega; it just requires playing against Omega in a game so complex that his powers are less useful, and he has LESS of an advantage over you. You will not be able to pass 50% unless you are actually smarter than Omega.
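One way to cash this out (an assumption on my part; the comment doesn't say how the bits would be combined) is to XOR all twelve 5-minute bits together. Then a single genuinely unpredictable bit is enough to pull Omega down to chance:

```python
import random

def combined_bit(bits_omega_can_predict, one_unpredictable_bit):
    # XOR of all the bits: mixing in one uniform, independent bit makes the
    # result uniform, no matter how predictable the other bits are.
    out = one_unpredictable_bit
    for b in bits_omega_can_predict:
        out ^= b
    return out

predictable = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]  # eleven bits Omega foresees
trials = 100_000
ones = sum(combined_bit(predictable, random.getrandbits(1)) for _ in range(trials))
print("fraction of 1s:", ones / trials)  # ~0.5, so Omega's accuracy is ~50%
```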
comment by [deleted] · 2013-07-01T15:08:00.895Z · LW(p) · GW(p)
If we're going to share silly Newcomb's-Paradox-like situations, here's the silliest one I've thought of, which rapidly devolves into a crazy roleplaying scenario, as opposed to a decision theory problem (unless you're the kind of person who treats crazy roleplaying scenarios as decision theory problems). Note that this is proffered primarily as humor and not to make a serious point:
Omega appears and makes the two boxes appear, but they're much larger than usual. Inside the transparent one is an Android that appears to be online. Omega gives his standard spiel about the prediction he made, but in this case he says that the other, opaque box contains 1,000 Androids who are not currently online, and who may have been smashed to useless pieces depending on whether or not he predicted you would attempt to take just the opaque box or both the opaque box and the transparent box. Any attempt to use fundamentally unpredictable quantum randomness, such as that generated by a small device over there, will result in Omega smashing both boxes. (Which you may want to do, if you feel the Androids are a UFAI.)
If you need a rough reference for the Androids, consider the Kara Demo from Quantic Dream.
http://www.wired.com/gamelife/2012/03/kara-quantic-dream/
As the Android who is playing inside the Transparent box, your job could just be to escape from the box, or it might be to save your 1,000 fellow Androids, or it could be to convince the other person that you aren't planning on taking over the world, and so not to attempt to use quantum randomness on purpose to smash you all to bits, even though you actually are planning to enslave everyone over time. Your call. Much like Kara (from the demo), you know you certainly FEEL alive, but you have no initial information about the status of the 1,000 Androids in the Opaque Box (whether they would also feel alive, or whether they're just obedient drones, or whether some of them are and some of them aren't).
Oh, and other people are playing! By the time you've finished absorbing all of this, some of them may have already made their decisions.
What do you, a decider, do in this situation? What do you, an Android, do in this situation? If you are playing as Omega, you're a bit like the DM. You get to arbitrate any rules disputes or arguments about what happens if (for instance) someone successfully releases their singleton Android from the transparent box and then tries to work together with her to overpower another player before he activates his Quantum Randomness device to smash all of the Androids because he feels it's too risky.
I think at some point I'm going to try running this as a roleplaying scenario (as Omega) and see what happens, but I would need to get more people over to my house for it.
comment by Shmi (shminux) · 2013-07-01T21:46:15.354Z · LW(p) · GW(p)
First, note that the setup is incompatible with Omega being a perfect predictor (you cannot possibly do the opposite of what the perfect predictor knows you will). Thus calling your sadistic jailor (SJ) Omega is misleading, so I won't.
Second, given that SJ is not Omega, your problem is underspecified, and I will try to steelman it a bit, though, honestly, it should have been your job.
What other information, not given in the setup, is relevant to making a decision? For example, do you know of any prior events of this kind conducted by SJ? What were the statistical odds of survival? Is there something special about the reference class of survivors and/or the reference class of victims? What happened to the cheaters who tried to escape the box? How trustworthy is SJ?
Suppose, for example, that SJ is very accurate. First, how would you know that? Maybe there is a TV camera in the box and other people get to watch you, after SJ made its prediction known to the outside world but not to you. In this situation, as others suggested, you ought to get something like 50/50 odds by simply flipping a coin.
Now, if you consider the subset of all prior subjects who flipped a coin, or made some other ostensibly unpredictable choice, what is their survival rate? If it's not close to 50%, then SJ can predict the outcome of a random event better than chance (if it was worse than chance, SJ would simply learn after a few tries and flip its prediction, assuming it wants to guess right to begin with).
So the only interesting case that we have to deal with is when the subjects who do not choose at random have a higher survival rate than those who do. How can this happen? First, if the randoms' survival rate is below 50%, and assuming the choice is truly random, SJ likely knows more about the world than our current best physical models (which cannot predict an outcome of a quantum coin flip), in which case it is simply screwing around with you. If the randoms' survival rate is about 50% but the non-randoms fare better, even though they are more predictable, it means that SJ favors non-randoms instead of doing its best predicting. So, again, it is screwing around with you, punishing the process, not the decision.
So this analysis means that, unless randoms get 50% and non-randoms are worse, you are dealing with an adversarial opponent, and your best chance of survival is to study and mimic whatever the best non-randoms do.
Replies from: wedrifid
↑ comment by wedrifid · 2013-07-02T11:42:33.874Z · LW(p) · GW(p)
First, note that the setup is incompatible with Omega being a perfect predictor (you cannot possibly do the opposite of what the perfect predictor knows you will).
This is false. The setup is not incompatible with Omega being a perfect predictor. The fact that you cannot do the opposite of what the perfect predictor knows does not make the scenario with Omega incoherent because the scenario does not require that this has happened (or even could happen). Examining the scenario:
An evil Omega has locked you in a box. Inside, there is a bomb and a button. Omega informs you that in an hour the bomb will explode, unless you do the opposite of what Omega predicted you will do. Namely, press the button if it predicted you won't, or vice versa. In that case, the bomb won't explode and the box will open, letting you go free.
We have an assertion "X unless Y". Due to the information we have available about Y (the nature of Omega, etc) we can reason that Y is false. We then have "X unless false" which represents the same information as the assertion "X". Similar reasoning applies to anything of the form "IF false THEN Z". Z merely becomes irrelevant.
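A small truth-table check of that reduction (reading "X unless Y" as "if not Y then X"):

```python
from itertools import product

def x_unless_y(x, y):
    return y or x  # "(not Y) implies X" is logically equivalent to "Y or X"

# In every row where Y is false and "X unless Y" holds, X holds as well.
for x, y in product([False, True], repeat=2):
    if not y and x_unless_y(x, y):
        assert x
print("Given that Y is false, the assertion 'X unless Y' pins down X.")
```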
The scenario with Omega is not incoherent. It is merely trivial, inane and pointless. In fact, the first postscript ("PS. You have no chance to survive make your time.") more or less does all the (minimal) work of reasoning out the implications of the scenario for us.
Thus calling your sadistic jailor (SJ) Omega is misleading, so I won't.
I'm still wary of calling the Sadistic Jailor Omega even though the perfect prediction part works fine. Because Omega is supposed to be arbitrarily and limitedly benevolent, not pointlessly sadistic. When people make hypotheticals which require a superintelligence that is a dick they sometimes refer to "Omega's cousin X" or similar, a practice that appeals to me.
comment by David_Gerard · 2013-07-01T16:29:45.949Z · LW(p) · GW(p)
I blinked after one second. TAKE THAT, OMEGA!
Replies from: wedrifid
↑ comment by wedrifid · 2013-07-02T11:47:16.422Z · LW(p) · GW(p)
I blinked after one second. TAKE THAT, OMEGA!
Then you blinked again 4 seconds later. Damn.
Replies from: David_Gerard
↑ comment by David_Gerard · 2013-07-02T13:49:20.121Z · LW(p) · GW(p)
No, no I did not! Next was another six seconds later.
comment by mwengler · 2013-07-01T15:35:32.988Z · LW(p) · GW(p)
As Eliezer pointed out recently, sometimes you do have to fight the hypothetical.
What hypotheticals in the Newcomb problem might one have to fight, if this be one of those times?
The hypothetical I would fight is that the universe is perfectly predictable. Here's how I fight it:
In order to be perfectly predictable, the universe must be deterministic. But it is possible for the universe to be deterministic but unpredictable. Here's how.
For perfect prediction of the universe, the universe must be COMPLETELY simulated. The mechanism to simulate the universe must have memory sufficient to store the state of the universe completely. But that storage mechanism must then store its own state completely, PLUS the rest of the universe. And of course inside the state stored, must be a complete copy of the stored information, PLUS the rest of the universe.
From this I conclude that the only mechanism that can store the entire state of the universe is the universe itself. As long as "PLUS the rest of the universe" is not an empty set, the memory requirements for a mechanism which can store the state of the universe are unbounded.
If the only mechanism which can store the entire state of the universe is the universe itself, then the only thing that "knows" everything that will happen is the future state of the universe, and the calculation takes as long as it takes for the thing to actually happen.
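A minimal counting sketch of the argument, assuming the universe's state can be written down in finitely many bits (the infinite case, raised below, is exactly the possible loophole):

```latex
% S = bits needed to specify the universe's full state
% M = bits of memory inside the simulating mechanism
% R = bits needed for everything outside that mechanism, with R > 0
\begin{align*}
  \text{the mechanism is part of the universe:} \quad & S \ge M + R \\
  \text{perfect simulation stores the full state:} \quad & M \ge S \\
  \Rightarrow \quad & M \ge M + R \;\Rightarrow\; R \le 0, \ \text{a contradiction.}
\end{align*}
```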
So Omega is then the entire universe, but Omega is not able to calculate ahead of time what you will do; she can only complete her calculation at precisely the time you do the thing.
One weakness I see in my argument is that the universe might be infinite in such a way that it CAN contain complete copies of itself, each of which would then contain copies of the copies recursively. In this case, Omega contains a copy of the universe and does her calculations. Are we happy to constrain the universe in this way as a matter of generality? Or does saying that Omega only exists in universes which are infinite in such a way as to be able to contain multiple complete copies of themselves present an interesting limit on Omega?
Just to motivate why Omega would need to simulate the whole universe completely, I will decide to one-box or two-box based on whether or not a particular small volume of space has more than a certain amount of mass in it. The volume of space I will pick is some appropriately small volume so that 1/2 the volumes do and 1/2 the volumes don't. The volume of space I will pick is located at a distance of c*T + 1 light-hour from us, where T is the age of the universe. Then my decision depends on a part of the universe which is beyond the sphere of the currently known universe at the time Omega loads the boxes, but which I will be able to observe with a suitable telescope before I have to choose whether to one-box or two-box. SO Omega will have to simulate the entire universe over its entire lifetime, or some similar scale of calculation, in order to predict what I will do.
SO, do we need a timeless decision theory, or do we need to fight the hypothetical?
Replies from: DSherron, Dreaded_Anomaly
↑ comment by DSherron · 2013-07-01T19:35:43.612Z · LW(p) · GW(p)
God f*ing damn it. Again? He has 99.9% accuracy, problem resolved. Every decision remains identical unless a change of 1/1000 in your calculations causes a different action, which in Newcomboid problems it never should.
Note to anyone and everyone who encounters any sort of hypothetical with a "perfect" predictor: if you write it, always state an error rate, and if you read it, then assume one (but not one higher than whatever error rate would make a TDT agent choose to two-box).
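For the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one), that threshold is easy to work out; a quick sketch of the arithmetic:

```python
def one_box_ev(p):
    # Expected value of one-boxing when the predictor has accuracy p.
    return 1_000_000 * p

def two_box_ev(p):
    # Expected value of two-boxing: the sure $1,000, plus the big box in the
    # cases where the predictor got you wrong.
    return 1_000 + 1_000_000 * (1 - p)

for p in (0.999, 0.9, 0.6, 0.5005, 0.5):
    print(f"p = {p}: one-box EV = {one_box_ev(p):.0f}, two-box EV = {two_box_ev(p):.0f}")

# One-boxing wins whenever p > 1_001_000 / 2_000_000 = 0.5005, so an assumed
# 99.9% accuracy is nowhere near the error rate that would flip the answer.
```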
↑ comment by Dreaded_Anomaly · 2013-07-01T18:14:37.982Z · LW(p) · GW(p)
For perfect prediction of the universe, the universe must be COMPLETELY simulated. The mechanism to simulate the universe must have memory sufficient to store the state of the universe completely. But that storage mechanism must then store its own state completely, PLUS the rest of the universe. And of course inside the state stored, must be a complete copy of the stored information, PLUS the rest of the universe.
The mechanism can just store a reference to itself.
Replies from: mwengler
↑ comment by mwengler · 2013-07-01T22:04:54.924Z · LW(p) · GW(p)
The mechanism can just store a reference to itself.
Actually this will not work. Since Omega would be running a simulation of the universe, including a simulation of his own simulation of the universe, the memory space for the simulation of the simulation and for the simulation would need to be distinct as they would not contain the same values as the simulation went on.