Omega can be replaced by amnesia
post by Bongo · 2011-01-26T12:31:04.595Z · LW · GW · Legacy · 44 comments
Let's play a game. Two times, I will give you an amnesia drug and let you enter a room with two boxes inside. Because of the drug, you won't know whether this is the first time you've entered the room. The first time, both boxes will be empty. The second time, box A will contain $1000, and box B will contain $1,000,000 iff you took only box B the first time. You're in the room: do you take both boxes or only box B?
This is equivalent to Newcomb's Problem in the sense that any strategy does equally well on both, where by "strategy" I mean a mapping from info to (probability distributions over) actions.
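As a quick check of the payoffs, here is a minimal sketch of the expected winnings in the amnesia game for a strategy that takes only box B with probability p on each visit (the parameter p and the helper name are illustrative assumptions, not part of the problem statement):

```python
# Minimal sketch: expected winnings in the amnesia game for a
# (hypothetical) mixed strategy that takes only box B with
# probability p on each visit.

def amnesia_expected_winnings(p):
    # First visit: both boxes are empty, so it contributes nothing.
    # Second visit: box A holds $1000; box B holds $1,000,000 iff
    # the player took only box B on the first visit (probability p).
    total = 0.0
    for first_one_boxed, p1 in ((True, p), (False, 1 - p)):
        box_b = 1_000_000 if first_one_boxed else 0
        for second_one_boxes, p2 in ((True, p), (False, 1 - p)):
            winnings = box_b if second_one_boxes else box_b + 1_000
            total += p1 * p2 * winnings
    return total

print(amnesia_expected_winnings(1.0))  # always one-box: 1000000.0
print(amnesia_expected_winnings(0.0))  # always two-box: 1000.0
print(amnesia_expected_winnings(0.5))  # coinflip: 500500.0
```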
I suspect that any problem with Omega can be transformed into an equivalent problem with amnesia instead of Omega.
Does CDT return the winning answer in such transformed problems?
Discuss.
44 comments
Comments sorted by top scores.
comment by lucidfox · 2011-01-26T13:17:52.508Z · LW(p) · GW(p)
This is actually insightful, given that the most frequently proposed way for Omega to make predictions is to simulate the decision-maker - in which case you run into a Sleeping Beauty problem where you don't know whether you are the real or the simulated decision-maker.
I like this phrasing. It's less ambiguous.
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2011-01-26T16:18:24.137Z · LW(p) · GW(p)
I agree that this takes us into the world of Sleeping Beauty problems. But those are much harder. This makes things worse.
Replies from: Bongo
↑ comment by Bongo · 2011-01-27T12:57:40.870Z · LW(p) · GW(p)
Sleeping Beauty is not hard as a decision problem.
comment by Vladimir_Nesov · 2011-01-26T23:48:28.714Z · LW(p) · GW(p)
The whole point of Newcomb's problem is that CDT two-boxes because the prediction "isn't really you", so that we have a conflict between the intuition to one-box and CDT, and need to resolve it somehow, thus gaining new understanding. What is your thought experiment for?
Replies from: Bongo
↑ comment by Bongo · 2011-01-27T12:55:34.295Z · LW(p) · GW(p)
Problems where CDT loses can be (probably mechanically) transformed to "strategy-equivalent" problems where CDT wins. That's at least interesting.
It even suggests a decision theory. Just transform the problem and use the strategy that CDT recommends for this new problem.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-01-27T14:01:49.446Z · LW(p) · GW(p)
This is unsurprising: CDT relies on explicit dependencies given by causal definitions, while what you want is to look for logical (ambient) dependencies for which the particular way the problem was specified (e.g. physical content defined by causality) is irrelevant. After you find the dependencies as a result of such analysis, all that's left is applying expected utility, at which point any CDT-specificity is gone (see Controlling Constant Programs).
comment by cousin_it · 2011-01-27T10:02:24.515Z · LW(p) · GW(p)
How would you port Counterfactual Mugging to amnesia?
Replies from: Bongo
↑ comment by Bongo · 2011-01-27T12:53:29.861Z · LW(p) · GW(p)
Flip a coin. If tails: induce amnesia, ask the player for $100; if they pay, keep it; game over. If heads: induce amnesia, ask the player for $100; if they pay, return their money and award them an additional $10000; game over.
EDIT: Emile noted you can omit the amnesia.
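A small sketch of the payoffs under this transformed game (assuming utility is linear in dollars, which the comment doesn't state, and a player who pays with probability q):

```python
# Sketch: expected winnings (in dollars) of the transformed
# Counterfactual Mugging, for a player who pays with probability q.

def cm_transform_expected(q):
    tails = q * (-100)      # tails: if you pay, the $100 is kept
    heads = q * 10_000      # heads: paying is refunded plus $10,000
    return 0.5 * tails + 0.5 * heads

print(cm_transform_expected(1.0))  # always pay: 4950.0
print(cm_transform_expected(0.0))  # never pay: 0.0
```

These are the same two numbers a pay-always or pay-never strategy gets in the original Counterfactual Mugging with the standard $100/$10,000 payoffs, which is the sense in which the transform preserves how well strategies do.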
Replies from: Emile, cousin_it
↑ comment by cousin_it · 2011-01-27T13:46:41.559Z · LW(p) · GW(p)
Nah, that doesn't look very convincing. The whole point of CM was that once a branch of you gets asked for $100, agreeing to pay cannot benefit that branch. Also, what Emile said.
Replies from: Bongo
↑ comment by Bongo · 2011-01-27T17:25:48.692Z · LW(p) · GW(p)
Convincing schmonvincing. All I promised is that all strategies will do equally well.
Replies from: cousin_it
↑ comment by cousin_it · 2011-01-27T19:08:57.771Z · LW(p) · GW(p)
If that's really all the information you want to preserve, then I don't understand why you bother with amnesia in Newcomb's Problem. Just offer the player two boxes, the first one contains $1K, the second contains $1M, taking both boxes triggers a bomb that destroys the second box. I'm not sure what insight into decision theory we're supposed to get from such translations.
Replies from: Bongo
↑ comment by Bongo · 2011-01-27T20:24:25.388Z · LW(p) · GW(p)
offer the player two boxes, the first one contains $1K, the second contains $1M, taking both boxes triggers a bomb that destroys the second box.
Hmm. This form has the same expected winnings for all strategies, but the $0 and $1,001,000 outcomes are impossible, unlike in the transformed Newcomb and the original Newcomb (given an Omega that doesn't punish mixed strategies). Also, expected winnings doesn't equal expected utility. For some utility functions, your problem has different expected utility than the normal or amnesiac Newcomb even if you play the same strategy in each. So it's not really equivalent.
Another example: consider (the transformation of) Parfit's Hitchhiker. If you use a coinflipping strategy there, the expected utility is
0.5*U(die) + 0.5*(0.5*U($0) + 0.5*U(-$100)) = 0.5*U(die) + 0.25*U($0) + 0.25*U(-$100)
While the expected utility in the version where you simply plop the player in front of an ATM and drive them to the desert and dump them there if they don't pay $100 is:
0.5*U(die) + 0.5*U(-$100)
Which is clearly different.
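To make the difference concrete, a small sketch with purely illustrative (made-up) utility values:

```python
# Illustrative-only utilities; the original comment leaves U unspecified.
U_die, U_zero, U_minus_100 = -1000.0, 0.0, -1.0

# Transformed (amnesia) Hitchhiker, coinflipping strategy:
eu_transformed = 0.5 * U_die + 0.25 * U_zero + 0.25 * U_minus_100

# "ATM plus desert" version, coinflipping strategy:
eu_atm = 0.5 * U_die + 0.5 * U_minus_100

print(eu_transformed)  # -500.25
print(eu_atm)          # -500.5, different from the transformed version
```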
Replies from: cousin_it
↑ comment by cousin_it · 2011-01-27T20:42:01.561Z · LW(p) · GW(p)
Your transformation seems to require weird Omegas that respond to randomizing players by randomizing too. It's not clear to me why an Omega would want to behave like that (probabilistically reward cheaters). Can you handle other kinds of Omegas, e.g. the original kind specified by Eliezer?
Replies from: Bongo
↑ comment by Bongo · 2011-01-27T21:07:39.462Z · LW(p) · GW(p)
I don't think they're weird. I think Omegas that go out of their way to discriminate against mixed strategies are weird. A strategy that one-boxes with probability 0.999 never gets a million, while one that one-boxes with probability 1 always gets a million. You could call that a discontinuity.
And I thought 1 was not a probability anyway! Any real rational one-boxing agent will expect to one-box with probability ~1, not with "probability" 1. Does that mean that the agent is using a mixed strategy? On the other hand, any agent that isn't using quantum randomness will in fact either one-box or two-box, even if it flips coins and stuff. Does that mean the agent is using a pure strategy? I can't answer this off the top of my head.
I assume the following is the key thing about Eliezer's original Omega:
Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars
I didn't see Eliezer saying that Omega doesn't tolerate mixed strategies. If there were coinflippers among those 100, presumably Omega predicted the results of their coinflips and set up box B accordingly. To the extent that I can't duplicate the conditions perfectly to make sure any coin will land the same way both times, I can't do that. To the extent that I can, I can.
Replies from: cousin_it
comment by jimrandomh · 2011-01-27T13:20:11.447Z · LW(p) · GW(p)
Amnesia can replace Omega for the prediction part of Newcomb's problem, but that is not the only function Omega serves. Omega is also shorthand for some simplifying assumptions: that you accept the problem statement as definitely true, that no clever third options are available, etc.
comment by Perplexed · 2011-01-26T13:25:58.566Z · LW(p) · GW(p)
This is equivalent to Newcomb's Problem in the sense that any strategy does equally well on both, where by "strategy" I mean a mapping from info to (probability distributions over) actions.
I don't see this. For example, the mixed strategy of one-boxing half the time and two-boxing half the time generates very different results in the transformed problem than in the original Newcomb's Problem.
Though I suppose there may be some ambiguity in the question of what amnesia is supposed to do to the 'seed' in your pseudo-random number generator.
Does CDT return the winning answer in such transformed problems?
It does fine on your transformation of Newcomb. I won't venture a guess on more general problems, because I don't understand how the general transformation is imagined to work. What is the transformation of the Hitchhiker, for example?
Replies from: timtyler, Bongo
↑ comment by timtyler · 2011-01-26T13:35:31.183Z · LW(p) · GW(p)
I don't see this. For example, the mixed strategy of one-boxing half the time and two-boxing half the time generates very different results in the transformed problem than in the original Newcomb's Problem.
Conventionally, you are not allowed access to a random number generator in Newcomb's Problem - and so can't use a mixed strategy. Any such usage would tarnish Omega's reputation. Omega - being a mind-reading superintelligence - can fairly easily discourage such a tactic by punishing randomising agents economically - and letting the punishment strategy be known.
↑ comment by Bongo · 2011-01-26T14:12:40.907Z · LW(p) · GW(p)
I don't see this. For example, the mixed strategy of one-boxing half the time and two-boxing half the time generates very different results in the transformed problem than in the original Newcomb's Problem.
Nope? Let's say you flip a coin. Then your expected winnings are
- 0.5*(0.5*1000000 + 0.5*(1000000+1000)) + 0.5*(0.5*0 + 0.5*(0+1000)) = 500500 dollars
in both versions if Omega follows the rule:
- if you one-box with probability p, Omega fills box B with probability p
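A quick numerical check of this claim, a sketch assuming the probabilistic-Omega rule just stated and a coinflipping player:

```python
# Coinflipping player: one-box with probability p on each decision.
p = 0.5

# Newcomb's Problem, with an Omega that fills box B with probability p:
newcomb = (p * (p * 1_000_000 + (1 - p) * (1_000_000 + 1_000))
           + (1 - p) * (p * 0 + (1 - p) * (0 + 1_000)))

# Amnesia version: box B is filled iff the (empty) first visit was a
# one-boxing visit, which also happens with probability p:
amnesia = (p * (p * 1_000_000 + (1 - p) * 1_001_000)
           + (1 - p) * ((1 - p) * 1_000))

print(newcomb, amnesia)  # 500500.0 500500.0
```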
What is the transformation of the Hitchhiker, for example?
Put the player in front of an ATM and give them the amnesia drug. If they don't pay you $100, take them to the desert and dump them there. If they paid, put the money from the first round back into their bank account and give them the amnesia drug again. If they pay you again, keep their money. And the player knows these rules.
I don't have the general transformation down yet.
Replies from: Perplexed
↑ comment by Perplexed · 2011-01-26T23:52:20.957Z · LW(p) · GW(p)
if you one-box with probability p, Omega fills box B with probability p
Really? I thought Omega would correctly predict the results of the coin flip and whether I called heads or tails. I guess this shows that Omega is better at predicting what I do than I am at predicting what he does.
In any case, thank you for the thought experiment. I agree with Snowyowl that your version is philosophically different from the original, but if we want our philosophical concepts to pay rent, they are going to have to have different consequences than some cheap amnesia drug. Otherwise, why keep them around?
comment by Snowyowl · 2011-01-26T13:24:26.216Z · LW(p) · GW(p)
I don't think it's quite the same. The underlying mathematics are the same, but this version side-steps the philosophical and game-theoretical issues with the other (namely, acausal behaviour).
Incidentally: if you take both boxes with probability p each time you enter the room, then your expected gain is p*1000 + (1-p)*1000000. For maximum gain, take p=0; i.e. always take only box B.
EDIT: Assuming money is proportional to utility.
comment by [deleted] · 2011-01-26T13:10:45.266Z · LW(p) · GW(p)
I'm not convinced it's quite the same. If you owe the mafia $1,001,000 and they're coming to collect the money this afternoon, you're best off if you toss a coin to decide whether to choose two boxes. Omega, if I remember the formulation correctly, doesn't stand for such tricks.
Replies from: Bongo, Snowyowl
↑ comment by Bongo · 2011-01-26T13:20:16.605Z · LW(p) · GW(p)
I could change the rules and decide not to stand for such tricks (mixed strategies) either. EDIT: No, I couldn't.
And on the other hand, Omega could deal with mixed strategies perfectly well, and I don't really understand why people make it so that he explicitly doesn't tolerate mixed strategies in their problems. For example, in Newcomb's Problem, if you one-box with probability p, Omega can just fill box B with probability p - for example if p=0.5 your expected winnings in Newcomb's Problem are $500,500.
Replies from: timtyler, lucidfox, timtyler
↑ comment by lucidfox · 2011-01-26T14:37:59.050Z · LW(p) · GW(p)
In the traditional formulation of Newcomb's Problem (at least here on Less Wrong), if Omega predicts you'll use a randomizer, it will leave box B empty.
Replies from: None
↑ comment by [deleted] · 2011-01-26T15:04:14.099Z · LW(p) · GW(p)
That's weird. Assuming human decision making is caused by neural processes, which aren't perfectly reliable, there'd be no way for a human to not use a randomizer.
Replies from: ata
↑ comment by ata · 2011-01-26T22:50:16.804Z · LW(p) · GW(p)
We assume that Omega is powerful enough to simulate your brain and the environment precisely, and that quantumness is negligible.
In that case, you could still say that there's no way not to use a randomizer, but Omega would be using the same randomizer with the same seed.
Replies from: Bongo
↑ comment by timtyler · 2011-01-26T13:44:33.600Z · LW(p) · GW(p)
Omega could deal with mixed strategies perfectly well, and I don't really understand why people make it so that he explicitly doesn't tolerate mixed strategies in their problems.
Use of a mixed strategy might tarnish Omega's reputation.
↑ comment by Snowyowl · 2011-01-26T13:15:13.404Z · LW(p) · GW(p)
The first time you enter the room, the boxes are both empty, so you can't ever get more than $1,000,000. But you're otherwise correct.
Replies from: None
↑ comment by [deleted] · 2011-01-26T13:18:04.102Z · LW(p) · GW(p)
No, I can get $1,001,000. If I randomly choose to take one box the first time, then both boxes will contain money the second time, where I might randomly choose to take both.
(Unless randomising devices are all somehow forced to come up with the same result both times)
Replies from: Snowyowl, Broggly
↑ comment by Broggly · 2011-01-26T20:56:08.681Z · LW(p) · GW(p)
Hang on a minute though
1 box then 2 box = $1,001,000
1 box then 1 box = $1,000,000
2 box then 2 box = $1,000
2 box then 1 box = $0
$2,002,000 divided by 4 is $500,500. Effectively you're betting a million dollars on two coinflips, the first to get your money back (1-box on the first day) and the second to get $1000 (2-box on the second day). Omega could just use a randomizer if it thinks you will, in which case people would say "Omega always guesses right, unless you use a randomizer. But it's stupid to use one anyway."
Where p is the probability of 1-boxing, E = p^2 * $1,000,000 + p(1-p) * $1,001,000 + (1-p)^2 * $1,000 = $999,000p + $1000. So the smart thing to do is clearly always one-box, unless showing up Omega who thinks he's so big is worth $499,500 to you.
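A quick check of this formula against the four outcomes listed above (a sketch; the sample values of p are arbitrary):

```python
# Expected winnings when one-boxing with probability p on each visit,
# summed over the four outcomes enumerated above.
def expected(p):
    return (p * p * 1_000_000            # 1 box then 1 box
            + p * (1 - p) * 1_001_000    # 1 box then 2 box
            + (1 - p) * p * 0            # 2 box then 1 box
            + (1 - p) * (1 - p) * 1_000) # 2 box then 2 box

for p in (0.0, 0.25, 0.5, 1.0):
    assert abs(expected(p) - (999_000 * p + 1_000)) < 1e-6
    print(p, expected(p))  # matches 999000*p + 1000
```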
Replies from: None
↑ comment by [deleted] · 2011-01-27T11:03:21.509Z · LW(p) · GW(p)
I completely agree that to maximise your expected gain you should one-box every time. I was thinking of the specific case where you really, really need $1,001,000 and are willing to reduce your expected gain to maximise the chance of getting it.
comment by Manfred · 2011-01-27T22:47:40.291Z · LW(p) · GW(p)
"Discuss" is not a sentence. It's also redundant - if you didn't say it, would people not reply? And it moderately annoys me when people tell me to "discuss." If other people feel similarly, could we make a habit of not using it like that?
Replies from: Bongo
comment by anonym · 2011-01-26T16:29:47.640Z · LW(p) · GW(p)
I don't see that the scenario is the same. If you one-box every time in your thought experiment, you are guaranteed to get the million; if you two-box every time, you will certainly not get the million. With Omega, there is a high probability but not certainty.
Also, what you do in the first round causes what happens in the second round, but with Omega, it is debatable whether what you end up doing causes there to be a million dollars or not.
Replies from: DanielLC, SRStarin
↑ comment by SRStarin · 2011-01-27T00:22:17.868Z · LW(p) · GW(p)
You point out perhaps the only potentially meaningful difference, and it is the main salient point in dispute between one-boxers and two-boxers in the Omega problem.
First subpoint: With Omega, you are told (by Omega) that there is certainty--that he is never wrong--and you have a large but finite number of previous experiments that do not refute him. Any uncertainty is merely hoped for/dreaded. (There are versions in which there is definite uncertainty, but those are clearly not similar to the OP.)
Second subpoint: If there is truly, really, actually no uncertainty, then correlation is perfect. It is hard to determine cause and effect in such conditions with no chance to design experiments to separate them. I'd argue that cause is a low-value concept in such a situation.