No Anthropic Evidence
post by Vladimir_Nesov · 2012-09-23T10:33:06.994Z · LW · GW · Legacy · 34 comments
Closely related to: How Many LHC Failures Is Too Many?
Consider the following thought experiment. At the start, an "original" coin is tossed, but not shown. If it came up "tails", a gun is loaded; otherwise it's not. After that, you are offered a large number of decision rounds, in each of which you can either quit the game or toss a coin of your own. If your coin falls "tails", the gun gets triggered, and depending on how the original coin fell (whether the gun was loaded), you either get shot or not (if the gun doesn't fire, i.e. if the original coin was "heads", you are free to go). If your coin falls "heads", you are all right for that round. If you quit the game, you get shot at the exit with probability 75%, independently of what happened during the game (and of the original coin). The question is: should you keep playing or quit if you observe, say, 1000 "heads" in a row?
Intuitively, it seems as if 1000 "heads" is "anthropic evidence" for the original coin being "tails": the long sequence of "heads" could only be explained by the fact that "tails" would have killed you. If you know that the original coin was "tails", then to keep playing is to face the certainty of eventually tossing "tails" and getting shot, which is worse than quitting, with only a 75% chance of death. Thus, it seems preferable to quit.
On the other hand, each "heads" you observe doesn't distinguish the hypothetical where the original coin was "heads" from the one where it was "tails". The first round can be modeled by a 4-element finite probability space with outcomes {HH, HT, TH, TT}, where HH and HT correspond to the original coin being "heads", and HH and TH to the coin-for-the-round being "heads". Observing "heads" is the event {HH, TH}, which gives the same 50% posterior probability to "heads" and "tails" of the original coin. Thus, each round that ends in "heads" doesn't change what you know about the original coin, even after 1000 rounds of this kind. And since you only get shot if the original coin was "tails", the probability of dying only approaches 50% as the game continues, which is better than the 75% from quitting the game.
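For the computationally inclined, here is a minimal sketch (exact enumeration in Python, with N = 10 rounds rather than 1000; the code and the round count are illustrative additions, not part of the original argument) checking both the 50% posterior and the comparison with quitting:

```python
from itertools import product
from fractions import Fraction

N = 10  # number of rounds (1000 in the post; 10 is enough to see the pattern)
half = Fraction(1, 2)

p_tails_and_obs = Fraction(0)  # P(original = tails, alive, all own flips heads)
p_obs = Fraction(0)            # P(alive, all own flips heads)
for original in "HT":
    for flips in product("HT", repeat=N):
        p = half ** (N + 1)
        # You die as soon as you flip tails while the gun is loaded (original == "T").
        if original == "T" and "T" in flips:
            continue
        if all(f == "H" for f in flips):  # the observation in question
            p_obs += p
            if original == "T":
                p_tails_and_obs += p

print("P(original = tails | alive, all heads) =", p_tails_and_obs / p_obs)  # 1/2, for any N

# The two death probabilities compared in the paragraph above:
print("P(die | keep playing forever) ->", half)            # approaches 1/2
print("P(die | quit now) =", Fraction(3, 4))               # 3/4
```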
(See also the comments by simon2 and Benja Fallenstein on the LHC post, and this thought experiment by Benja Fallenstein.)
The result of this exercise can be generalized: a counterfactual possibility of dying doesn't in itself influence the conclusions that can be drawn from observations made in the hypotheticals where one didn't die. Only if the possibility of dying influences the probability of the observations that did take place would it be possible to detect that possibility. For example, if in the above exercise a loaded gun caused the coin to become biased in a known way, only then would it be possible to detect the state of the gun (depending on the direction of the bias, 1000 "heads" would imply either that the gun is likely loaded, or that it's likely not).
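And a sketch of this generalization, with a purely hypothetical bias of 3/5 toward "heads" when the gun is loaded (the number is made up for illustration):

```python
from fractions import Fraction

N = 1000
prior_loaded = Fraction(1, 2)
p_heads_unloaded = Fraction(1, 2)
p_heads_loaded = Fraction(3, 5)   # hypothetical: the loaded gun biases the coin toward heads

like_loaded = p_heads_loaded ** N      # P(N heads in a row | loaded); "heads" never triggers the gun
like_unloaded = p_heads_unloaded ** N  # P(N heads in a row | unloaded)

posterior_loaded = (prior_loaded * like_loaded) / (
    prior_loaded * like_loaded + (1 - prior_loaded) * like_unloaded)
print(float(posterior_loaded))  # ~1.0: with a biased coin, 1000 heads does carry evidence about the gun
```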
34 comments
Comments sorted by top scores.
comment by Benya (Benja) · 2012-09-23T22:46:05.525Z · LW(p) · GW(p)
If all the coins are quantum-mechanical, you should never quit; the same holds if all the coins are logical (digits of pi). If the first coin is logical ("what laws of physics are true?", in the LHC dilemma), the following coins are quantum, and your utility is linear in the squared amplitude of survival, then again you should never quit. However, if your utility is logarithmic in squared amplitude (i.e., dying in half of your remaining branches seems equally bad no matter how many branches you have remaining), then you should quit if your first throw comes up heads.
Replies from: Brilliand, adamisom
↑ comment by Brilliand · 2016-02-05T04:08:01.536Z · LW(p) · GW(p)
I'm not getting the same result... let's see if I have this right.
If you quit if the first coin is heads: 50%*75% death rate from quitting on heads, 50%*50% death rate from tails
If you never quit: 50% death rate from eventually getting tails (minus epsilon from branches where you never get tails)
These death rates are fixed rather than a distribution, so switching to a logarithm isn't going to change which of them is larger.
I don't think the formula you link to is appropriate for this problem... it's dominated by the log(2^-n) factor, which fails to account for 50% of your possible branches being immune to death by tails. Similarly, your term for quitting damage fails to account for some of your branches already being dead when you quit. I propose this formula as more applicable.
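A quick numeric check of the two death rates above (a sketch using only the numbers in this comment; the linked formulas aren't reproduced here):

```python
# Strategy "quit if the first flip is heads":
#   50% heads -> quit -> 75% shot at the exit; 50% tails -> gun triggers -> die iff loaded (50%)
p_die_quit_on_first_heads = 0.5 * 0.75 + 0.5 * 0.5   # = 0.625

# Strategy "never quit": you eventually flip tails, and die iff the gun was loaded.
p_die_never_quit = 0.5                                # minus epsilon for never flipping tails

print(p_die_quit_on_first_heads, p_die_never_quit)
```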
comment by Wei Dai (Wei_Dai) · 2012-09-24T23:05:29.429Z · LW(p) · GW(p)
you only get to 50% probability of dying as the game continues, which is better than the 75% from quitting the game.
20% probability of losing $100 can be better than 10% probability of losing $100, if the 20% is independent but the 10% is correlated with other events (e.g., if you lose the $100 in the 10% of states of the world where you are already poorest). (This is well known in investment theory, where being uncorrelated with market risk is valuable in an asset.) Similarly, 50% probability of dying is not necessarily better than 75% probability of dying, if the 50% is correlated with other events (in this case, dying in other quantum branches) and the 75% is independent.
To be more specific, let's analyze the decision problem using UDT. Suppose every copy of you in every branch is facing the same problem, and all of their "original" coins are perfectly correlated (which makes sense since the "original" coin is supposed to be a stand-in for "the laws of physics are such that LHC would destroy Earth if some accident didn't intervene"). You're trying to choose between the strategies (A) "keep flipping until the game ends" and (B) "keep flipping until either the game ends or I get to 1000 heads, then quit".
- Consequences of A: 50% chance nobody survives, 50% chance nobody dies.
- Consequences of B: 50% chance 1/4 * 2^-1000 of your copies survive, 50% chance 3/4 * 2^-1000 of your copies die.
A UDT agent might choose B over A because saving 1/4 * 2^-1000 of its copies is considered more valuable in the state where there are already very few copies of itself. Perhaps our intuitions for "anthropic evidence" should be translated into such preferences, in line with my previous suggestions?
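A sketch of that dependence on how surviving copies are valued (the utility functions here are hypothetical illustrations, not Wei Dai's; N is shrunk from 1000 so the linear difference is visible in floating point):

```python
import math

N = 10  # 1000 in the comment; smaller here so the numbers are visible

# (probability, fraction of copies that survive) for the two strategies
outcomes_A = [(0.5, 0.0), (0.5, 1.0)]
outcomes_B = [(0.5, 0.25 * 2**-N), (0.5, 1 - 0.75 * 2**-N)]

def expected(outcomes, utility):
    return sum(p * utility(f) for p, f in outcomes)

linear = lambda f: f
log_util = lambda f: math.log(f) if f > 0 else float("-inf")  # values the last few copies highly

print(expected(outcomes_A, linear), expected(outcomes_B, linear))      # A is (very slightly) ahead
print(expected(outcomes_A, log_util), expected(outcomes_B, log_util))  # B is ahead; A risks losing everything
```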
(My answer is very similar to Benja's. I'm guessing that our perspective is not the easiest to understand, and it helps to have multiple explanations.)
comment by Irgy · 2012-09-24T00:55:42.807Z · LW(p) · GW(p)
There are 2^1001 equally likely (in the prior) scenarios, by combinations of coin flips. Applying evidence, anthropic or otherwise, means eliminating possibilities which have been excluded by the evidence and renormalising what's left. Doing this leaves two equally likely scenarios, that the original coin was tails and the other 1000 flips were heads, or that all 1001 flips were heads. The chances of the original coin being heads are therefore, still, 50-50. Keep flipping.
If this is meant to highlight some of the horribly flawed thinking that comes with anthropic evidence, then you've done an excellent job. You've imposed an asymmetry between "dying" and "surviving the game and moving on", but no such asymmetry exists. Both possibilities would equally prevent you from being in the situation you describe.
Replies from: None
↑ comment by [deleted] · 2012-09-24T20:23:15.092Z · LW(p) · GW(p)
Good analysis.
You are assuming that we have absolute certainty in the game, but evidence as strong as 1000 heads in a row digs up a lot of previously unlikely hypotheses. Do you think this would change the answer?
Replies from: Irgy
↑ comment by Irgy · 2012-09-25T11:55:59.062Z · LW(p) · GW(p)
Oh, I'd definitely be questioning my assumptions well before the 1000th head: taking a good hard look at the coin, questioning my memory, and whatever else. As Sherlock Holmes most definitely didn't say but probably should have, "When all possibilities have been eliminated, the answer most probably is something you simply haven't thought of". I don't think I'd be privileging any flawed theories of anthropic evidence and embracing the idea of quantum suicide for fun and/or profit until I'd eliminated a fair few other options, though.
comment by Luke_A_Somers · 2012-09-24T13:30:55.889Z · LW(p) · GW(p)
Why would you ever keep playing? You don't get anything for staying in the game, only an additional chance of getting shot.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2012-09-24T14:20:16.695Z · LW(p) · GW(p)
If you quit, you'll be shot with probability 75% at the exit, so what you gain by staying in the game is not risking that.
Replies from: evand
↑ comment by evand · 2012-09-24T14:46:56.742Z · LW(p) · GW(p)
Under what circumstances can I exit the game without a known 75% chance of getting shot?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2012-09-24T14:55:17.900Z · LW(p) · GW(p)
If you survive tossing "tails" (the "you are free to go" condition in the description). (Also, when the game ends after thousands of rounds.)
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2012-09-24T15:57:09.603Z · LW(p) · GW(p)
Ah, I see. I missed the 'free to go' part since it was in the tail part of a parenthetical expression that seemed unnecessary, so when I ran across 'quit', I thought it meant 'when the game ends'.
comment by Shmi (shminux) · 2012-09-23T18:48:47.829Z · LW(p) · GW(p)
I am missing something. The total probability of being shot while playing is never more than 50%, so why would you ever quit?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2012-09-23T18:54:21.022Z · LW(p) · GW(p)
If you believe that the gun is loaded, you would want to quit, as in that case the probability of dying approaches 100% as you continue playing (see the second paragraph). The puzzle is in the interpretation of observing 1000 "heads": in a certain sense it may feel that this implies that the gun is likely loaded, but it actually doesn't, so indeed you won't want to quit, as you won't ever be able to conclude anything about the state of the gun.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-09-23T19:55:30.383Z · LW(p) · GW(p)
in a certain sense it may feel that this implies that the gun is likely loaded
I have no idea in what sense (short of the quantum immortality speculations) the gun can be considered likely loaded. 1000 heads means unfair coin, not loaded gun. If you are 100% sure (how un-Bayesian) that the coin is fair, then the past history does not matter. In which case you should keep playing, since your probability of survival is at least 50%, better than in the case of quitting.
Replies from: Manfred, Richard_Kennaway
↑ comment by Manfred · 2012-09-24T01:19:11.412Z · LW(p) · GW(p)
Basically it's the same sort of reasoning that gets you quantum suicide: "if the gun is loaded, all the surviving 'me's see all heads, while if the gun is unloaded only a small fraction of 'me's see all heads; therefore if I see all heads the gun is more likely to be loaded." One simply ignores the fact that the probability of all heads is the same in both cases.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-09-24T02:35:08.440Z · LW(p) · GW(p)
Basically it's the same sort of reasoning that gets you quantum suicide
I guess, but then quantum suicide is a belief in belief; no one ever tests it for real. I wonder if all anthropics are like that.
Replies from: JenniferRM
↑ comment by JenniferRM · 2012-09-24T02:39:10.755Z · LW(p) · GW(p)
More precisely, no one you've ever met has experimented with quantum suicide and then reported their success to you :-P
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-09-24T02:42:05.321Z · LW(p) · GW(p)
No, no one I know of announced their intention to run the experiment and then did so. I would certainly not be able to tell if they succeeded or not, but that's beside the point.
↑ comment by Richard_Kennaway · 2012-09-24T09:38:51.709Z · LW(p) · GW(p)
Since the coin flips after the first are made with a different coin, it doesn't matter whether that coin is fair. You probably can conclude that it's unfair, but as long as it's capable of coming down tails at all, the probability of surviving the game indefinitely is still 50%.
But biased coins do not exist short of the total bias of a double-headed coin. Biased coin-flippers do. So since it's you flipping your own coin, 1000 heads means you rigged the game.
comment by jacobt · 2012-09-23T18:37:52.889Z · LW(p) · GW(p)
I think you're wrong. Suppose 1,000,000 people play this game. Each of them flips the coin 1000 times. We would expect about 500,000 to survive, and all of them would have flipped heads initially. Therefore, P(I flipped heads initially | I haven't died yet after flipping 1000 coins) ~= 1.
This is actually quite similar to the Sleeping Beauty problem. You have a higher chance of surviving (analogous to waking up more times) if the original coin was heads. So, just as the fact that you woke up is evidence that you were scheduled to wake up more times in the Sleeping Beauty problem, the fact that you survive is evidence that you were "scheduled to survive" more in this problem.
On the other hand, each "heads" you observe doesn't distinguish the hypothetical where the original coin was "heads" from one where it was "tails".
This is the same incorrect logic that leads people to say that you "don't learn anything" between falling asleep and waking up in the Sleeping Beauty problem.
I believe the only coherent definition of Bayesian probability in anthropic problems is that P(H | O) = the proportion of observers who have observed O, in a very large universe (where the experiment will be repeated many times), for whom H is true. This definition naturally leads to both 2/3 probability in the Sleeping Beauty problem and "anthropic evidence" in this problem. It is also implied by the many-worlds interpretation in the case of quantum coins, since then all those observers really do exist.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-09-23T18:49:28.290Z · LW(p) · GW(p)
It's often pointless to argue about probabilities, and sometimes no assignment of probability makes sense, so I was careful to phrase the thought experiment as a decision problem. Which decision (strategy) is the right one?
Replies from: jacobt↑ comment by jacobt · 2012-09-23T19:12:31.404Z · LW(p) · GW(p)
Actually you're right, I misread the problem at first. I thought that you had observed yourself not dying 1000 times (rather than observing "heads" 1000 times), in which case you should keep playing.
Applying my style of analyzing anthropic problems to this one: Suppose we have 1,000,000 * 2^1000 players. Half flip heads initially, half flip tails. About 1,000,000 will get heads 1,000 times. Of them, 500,000 will have flipped heads initially. So, your conclusion is correct.
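A sketch of the same counting with exact expected counts (n shrunk from 1000 to 20 so the numbers stay printable; the population size is the comment's, scaled accordingly):

```python
from fractions import Fraction

n = 20
population = 1_000_000 * 2**n   # as in the comment, but with n = 20 flips

heads_first = Fraction(population, 2)
tails_first = Fraction(population, 2)

# Expected number in each group who flip n heads in a row (and therefore survive):
heads_first_all_heads = heads_first * Fraction(1, 2**n)
tails_first_all_heads = tails_first * Fraction(1, 2**n)

print(heads_first_all_heads, tails_first_all_heads)   # 500000 and 500000
print(heads_first_all_heads /
      (heads_first_all_heads + tails_first_all_heads))  # 1/2: all-heads says nothing about the first coin
```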
comment by [deleted] · 2012-09-23T15:55:02.105Z · LW(p) · GW(p)
"Intuitively, it seems as if 1000 "heads" is "anthropic evidence" for the original coin being "tails", that the long sequence of "heads" can only be explained by the fact that "tails" would have killed you."
Can someone explain why this is? My intuitive answer wasn't the above, because I'd already internalized that, under uncertainty, it's a fair coin with no bias reaching back from the future. My intuitive feeling was that I'm likely playing with a coin biased towards heads. No more, no less, and at most a little information that the original flip possibly came up heads rather than tails.
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2012-09-23T21:35:11.629Z · LW(p) · GW(p)
Well, either way you're in a vanishingly-unlikely future. I think it's that we don't appreciate the unlikelihood of our own existence to some extent: being alive, we expect that event to have been somewhat expected. In the branch where you die with every tails throw, going by quantum-immortality thinking, you expect to observe yourself throwing only heads, and you fail to internalize how much this diminishes your measure, i.e. you don't account for your failing branches. In the branch where you may throw tails without dying, the reasoning goes, you would have expected, almost certainly in fact, to see a fairly even distribution of heads and tails, so the event of seeing no tails feels unlikelier there, despite having the same likelihood.
Replies from: None, adamisom
↑ comment by [deleted] · 2012-09-24T20:29:38.574Z · LW(p) · GW(p)
What's this about quantum? Coins are pretty well deterministic in the quantum sense.
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2012-09-26T16:26:46.938Z · LW(p) · GW(p)
The notion that you should not anticipate observing outcomes that entail a failure of your ability to observe or remember observing them, while coherent in a deterministic universe, is, at least to me, solidly associated with the quantum suicide/immortality thought experiment.
rephrase: yeah, I didn't think of that.
↑ comment by adamisom · 2012-09-25T05:34:46.527Z · LW(p) · GW(p)
We are indeed in a "vanishingly-unlikely future" and (obviously) if you ask what P(me existing | no contingencies except the existence of the Universe) is, it's so small as to be ridiculous.
I've often wondered at this. In my darker moments I've thought "what if some not-me who was very like me but more accomplished and rational had existed instead of me?"
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2012-09-26T16:28:48.804Z · LW(p) · GW(p)
If you really want a dark thought, consider the Cold War and the retrospective unlikelihood of your existence in that context. Some of the coincidences that prevented the extinction of large parts of the human species look suspiciously similar to the kind of "the gun jammed" events you'd expect in a quantum suicide experiment. And then consider that you should expect this to have been the likeliest history...
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-09-26T16:31:00.785Z · LW(p) · GW(p)
extinction of large parts of the human species
I'm having trouble making sense of this phrase. Can you describe what the extinction of a large part of a species looks like?
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2012-09-26T16:34:51.402Z · LW(p) · GW(p)
Sorry, that was terribly phrased. I meant death of a large fraction of cultural and social clusters (species in a social/cultural, if not the biological sense). In other words, Western Europe, the US and Russia becoming largely uninhabited via nuclear war, rest to follow depending on how the fallout develops.
comment by kilobug · 2012-09-24T10:33:51.388Z · LW(p) · GW(p)
Let's look at the different possible assumptions.
If you assume a unique classical universe with one-way causality, then obviously the fact that you only throw heads is just luck; it doesn't say anything about the initial throw. You could promote hypotheses like "the coin is biased, because the ones running the experiment are sadists who like to see my fear every time the coin is tossed", and maybe from that you could change your estimate of the first coin being biased, depending on your understanding of human psychology.
If you assume MWI it gets more complicated, and depends on whether the coin tossing is "quantum-random" or not. A usual coin toss is not "quantum-random": the reasons behind the coin ending heads or tails are in classical physics, well above the level of "quantum noise". So in all the existing worlds (or a very, very large majority of them), the toss will give the same result, and you're back to the first case.
If you assume MWI and the coin tosses are "quantum random", then after two tosses of your own coin there are 8 outcomes (writing the initial coin first): HHH, HHT, HTH, HTT, THH, THT, TTH, TTT. You can only experience the outcomes starting with H, plus THH. Of the 8 copies of you, 3 will be dead, 5 alive. But for the two copies seeing HH on their own tosses, you can't tell apart THH and HHH; both have p=1/2. p(HH | I'm alive) = 2/5, but p(T | I see HH and I'm alive) is still 1/2.
Now, if you assume the first coin was not quantum, but the subsequent ones are, then there are no longer 8 worlds with the same Born probabilities (the same "level of existence") but only two sets of 4 worlds, each set having a 1/2 probability of existing. In one set of worlds, there will be just one copy of you seeing "HH"; in the other, there will be 4 copies of you seeing all the possible outcomes.
Then we can rephrase the problem in a way that doesn't involve the anthropic principle, making it a more classical probability problem. We take 5 similar pieces of paper. Two have HH written on them, the others HT, TH and TT. One HH paper is put aside, then each is folded. Then someone tosses a coin. If it's tails, you are given the lone HH paper. If it's heads, you're given one at random from among the 4 papers. You open your paper, and it's HH. What's the probability the coin landed tails? Well, it's 4/5, not 1/2.
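A quick check of the paper-slip version as stated (a sketch; whether the rephrasing faithfully models the original game is the contested part):

```python
from fractions import Fraction

half = Fraction(1, 2)
p_HH_given_tails = Fraction(1)      # tails: you always get the lone HH paper
p_HH_given_heads = Fraction(1, 4)   # heads: one paper at random out of four, one of which is HH

p_tails_given_HH = (half * p_HH_given_tails) / (
    half * p_HH_given_tails + half * p_HH_given_heads)
print(p_tails_given_HH)  # 4/5
```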
So, my answer is: if MWI holds, and the first coin is not quantum-random but the later ones are, you should consider the 1000 heads (and you could even stop much earlier) to be strong evidence towards "the initial was tails". If none or all of the coins are quantum-random, or you don't believe in MWI, you shouldn't.
comment by FeepingCreature · 2012-09-23T14:57:04.292Z · LW(p) · GW(p)
I am confused.
I think, depending on how unlikely a world with anybody finding such an unlikely chain of throws is, and how his/her priors look, the gambler might want to update to almost certainty on quantum realism or one of the Tegmark multiverse levels; i.e. the fundamental improbability of a finite world containing this outcome implies that reality is such that such an observation can be expected.
At least if I'm not completely confused. Wait, is that what the "unloaded gun" was supposed to emulate? But then it comes down to whether you're optimizing over total universes or surviving universes; if you don't care about the branches where you die, the game is neutral to you (assuming you're certain you're in a quantum-realism universe): you can play, you can leave, it makes no difference. The probability number would be irrelevant to your decision-making. I suppose this is one reason we should expect quantum immortality as a philosophy to weed itself out of the observable universe.
If you look back at 1000 heads in a quantum immortality mode where you ignore dead branches, you have an anticipation of 0.5 (initial throw) of observing that series in the loaded universe, but only 0.5/2^1000 in the unloaded universe, so the strategy "after turn 1000, if you got 1000 heads, assume you're in the loaded half" will succeed in 0.5/total anticipated cases and fail in (0.5/2^1000)/total anticipated cases. But then again, in a quantum immortality mode, the entire game is irrelevant and you can quit or play however you like. So this is somewhat confusing to me. Excuse me for rambling.
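A sketch of the bookkeeping in the last paragraph (renormalising anticipation over surviving branches only, as the comment describes):

```python
from fractions import Fraction

N = 1000
loaded_and_all_heads = Fraction(1, 2)                        # loaded: every surviving branch saw all heads
unloaded_and_all_heads = Fraction(1, 2) * Fraction(1, 2**N)  # unloaded: all branches survive, few saw all heads

total_anticipated = loaded_and_all_heads + unloaded_and_all_heads
print(float(loaded_and_all_heads / total_anticipated))  # ~1.0: "assume loaded" succeeds in almost all anticipated cases
```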
comment by Kindly · 2012-09-24T13:02:49.529Z · LW(p) · GW(p)
Anthropic evidence depends on the reference class you put yourself into, which in this case has a very definite meaning: "What kind of other person would be faced with the same problem you are?"
If you play the game for up to 1000 rounds, then with probability 1/2 - 1/2^1001 you get a T at some point and survive anyway (we call this a Type 1 outcome). With probability 1/2^1000 you see a run of 1000 heads (a Type 2 outcome). Otherwise, you are dead.
It's true that Pr[gun is loaded | you survived] is close to 0. In fact, if you didn't see the coin flips then this is the probability you should use: given that you survived 1000 coin flips, you are better off continuing the game, because the gun most likely isn't loaded.
But once you get shown the coin flips, then the decisions you can make are different. In a Type 1 outcome, you know the gun isn't loaded, because it's been triggered and didn't kill you (Edit: so this means you're free to leave anyway). So when we decide "what should we do if the coin came up "heads" 1000 times?" we should only be looking at the Type 2 outcomes, because that's the only situation in which we need to make that decision.
And the Type 2 outcome happens with equal probability whether or not the gun was loaded. Therefore you should expect the gun to be loaded with 50% probability, and keep playing.
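A sketch contrasting the two conditional probabilities used in this comment:

```python
from fractions import Fraction

N = 1000
half = Fraction(1, 2)
p_all_heads = Fraction(1, 2**N)

# Conditioning only on survival after N rounds:
p_survive_given_loaded = p_all_heads      # loaded: you survive only by flipping all heads
p_survive_given_unloaded = Fraction(1)    # unloaded: you always survive the flips
p_loaded_given_survive = (half * p_survive_given_loaded) / (
    half * p_survive_given_loaded + half * p_survive_given_unloaded)
print(float(p_loaded_given_survive))      # ~0, as stated

# Conditioning on the Type 2 outcome (you actually saw N heads):
p_loaded_given_type2 = (half * p_all_heads) / (half * p_all_heads + half * p_all_heads)
print(p_loaded_given_type2)               # 1/2
```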