Learning Russian Roulette
post by Bunthut · 2021-04-02T18:56:26.582Z · LW · GW · 38 comments
Surprisingly, our current theories of anthropics don't seem to cover this.
You have a revolver with six chambers. One of them has a bullet in it. You are offered $1 for spinning the barrel, pointing it at your head, and pulling the trigger. You remember doing this many times, and surviving each time. You also remember many other people doing this many times, and dying about 1/6th of the time. Should you play another round?
It seems to me that the answer is no, but existing formal theories disagree. Consider two hypotheses: A says that everyone has a 1/6 chance of dying. B says that everyone else has a 1/6 chance of dying, but I survive for sure. Now A has a lot more prior probability, but the likelihood ratio is 5:6 for every time I played. So if I played often enough, I will have updated to mostly believing B. Neither the Self-Indication Assumption nor the Self-Sampling Assumption updates this any further. SIA, because there's one of me in both worlds. SSA, because that one me is also 100% of my reference class. UDT-like approaches reason that in the A world, you want to never play, and in the B world you want to always play. Further, if I remember playing enough rounds, almost all my remaining measure will be in the B world, and so I should play, imitating the simple Bayesian answer.
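Here is a minimal sketch of that update (the prior odds against B are an arbitrary illustrative choice, not part of the setup):

```python
# A minimal sketch of the update described above; the prior odds against B
# are an arbitrary illustrative choice.
prior_odds_B_vs_A = 1e-9          # assumed prior: B is a billion times less likely than A

def posterior_odds(n_rounds_survived):
    # Each survived round has likelihood 5/6 under A and 1 under B,
    # so the odds for B grow by a factor of 6/5 per round.
    return prior_odds_B_vs_A * (6 / 5) ** n_rounds_survived

for n in (0, 50, 100, 150):
    print(n, posterior_odds(n))
# By roughly 114 survived rounds the odds pass 1:1, and from then on B dominates.
```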
I'm not sure how we got to this point. It seems like most of the early anthropics problems were about birth-related uncertainty, and that framing stuck.
Problems for any future solution
Now one obvious way to fix this is to introduce a [death] outcome, which you can predict but which doesn't count towards the normalization factor when updating. Trying to connect this [death] with the rest of your epistemology would require some solution to embedding.
Worse than that, however, this would only stop you from updating on your survival. I think the bigger problem here is that we aren't learning anything (in the long term) from the arbitrarily large control group. After all, even if we don't update on our survival, that only means our odds ratio between A and B stays fixed. It's hardly a solution to the problem if "having the right prior" is doing all the work.
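A minimal sketch of this [death]-style rule, under my own reading of the proposal (each hypothesis's predictions are renormalized over the non-death outcomes before updating); the second line of output shows the A:B odds staying at the prior:

```python
# A sketch of the proposed fix, under my own reading of it: the [death] mass
# never enters the normalization factor, so each hypothesis's prediction is
# renormalized over the non-death outcomes before updating.
def update(prior, likelihoods, observation, exclude_death=False):
    """prior and the returned posterior map hypothesis -> probability;
    likelihoods[h] maps outcome -> probability under hypothesis h."""
    posterior = {}
    for h, p in prior.items():
        dist = dict(likelihoods[h])
        if exclude_death:
            alive_mass = 1 - dist.get("death", 0.0)
            dist = {o: q / alive_mass for o, q in dist.items() if o != "death"}
        posterior[h] = p * dist.get(observation, 0.0)
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

prior = {"A": 0.999999, "B": 0.000001}
likelihoods = {"A": {"survive": 5/6, "death": 1/6},
               "B": {"survive": 1.0, "death": 0.0}}

print(update(prior, likelihoods, "survive"))                      # odds shift toward B by 6/5
print(update(prior, likelihoods, "survive", exclude_death=True))  # odds stay at the prior
```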
Learning from the control group has its own problems, however. Consider, for example, the most obvious way of doing so: we observe that most things work out similarly for them as they do for us, and so we generalize this to playing Russian roulette. But this is not a solution at all, because how can we distinguish the hypotheses "most things generalize well from others to us, including Russian roulette" and "most things generalize well from others to us, but not Russian roulette"? This is more or less the same problem as distinguishing between A and B in the first place. And this generalizes: every way to learn about us from others involves reasoning from something that isn't our frequency of survival, to our frequency of survival. Then we can imagine a world where the inference fails, and then we must be unable to update towards being in that world.
Note that the use of other humans here is not essential; I think a sufficient understanding of physics could stand in for observing them. And to make things yet more difficult, there doesn't seem to be any metaphysical notion like "what body your soul is in" or "bridging laws" or such that a solution could fill in with something more reasonable. There is just one particular gun, and whether a bullet will come out of its barrel is already determined by ordinary physics.
Is this just the problem of induction repackaged? After all, we are in an environment that isn't fully episodic (because of our potential death), so perhaps we just can't figure out everything? That may be related, but I think this is worse. With the problem of induction, you can at least assume the world is regular, and be proven wrong. Here, though, you can believe either that you are an exception to natural regularity, or not, and either way you will never be proven wrong. Though a revision of Humean possibility could help with both.
38 comments
Comments sorted by top scores.
comment by Donald Hobson (donald-hobson) · 2021-04-02T20:54:29.347Z · LW(p) · GW(p)
I think that playing this game is the right move, in the contrived hypothetical circumstances where
- You have already played a huge number of times. (say >200)
- Your priors only contain options for "totally safe for me" or "1/6 chance of death."
I don't think you are going to actually make that move in the real world much because
- You would never play the first few times
- You're going to have some prior on "this is safer for me, but not totally safe; it actually has a 1/1000 chance of killing me." This seems no less reasonable than the no-chance-of-killing-you prior.
- If for some strange reason you have already played a huge number of times, like billions, then you are already rich, and the marginal utility of money is diminishing. An agent with logarithmic utility in money, a nonzero starting balance, uniform priors over the lethality probability, and a fairly large disutility of death will never play.
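A quick numerical sketch of that last claim, with the details filled in as assumptions of my own (a uniform prior over a per-round death probability, $1 earned per round, and a flat disutility D for dying):

```python
# A minimal sketch, under assumptions I'm filling in: uniform prior over the
# per-round death probability p, $1 per round, log utility in money with
# starting balance m0, and dying costing a flat D utils.
import math

def expected_gain_from_playing(n_survived, m0=10.0, D=10.0):
    """Expected utility change from playing one more round after surviving n.

    With a uniform prior over p and n observed survivals, the posterior is
    proportional to (1-p)^n, so the expected death chance next round is 1/(n+2)."""
    q = 1.0 / (n_survived + 2)       # expected chance of dying this round
    m = m0 + n_survived              # current bankroll ($1 earned per round so far)
    gain_if_survive = math.log(m + 1) - math.log(m)
    return (1 - q) * gain_if_survive - q * D

# With a "fairly large" death disutility the expected gain is negative at every round:
print(all(expected_gain_from_playing(n) < 0 for n in range(10_000)))  # True
```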
↑ comment by NunoSempere (Radamantis) · 2021-04-02T22:05:56.386Z · LW(p) · GW(p)
You would never play the first few times
This isn't really a problem if the rewards start out high and gradually diminish.
I.e., suppose that you value your life at $L (i.e., you're willing to die if the heirs of your choice get L dollars), and you assign a probability of 10^-15 to H1 = "I am immune to losing at Russian roulette", something like 10^-4 to H2 = "I intuitively twist the gun each time to avoid the bullet", and a probability of something like 10^-3 to H3 = "they gave me an empty gun this time". Then you are offered the chance to play rounds of Russian roulette at $L/round, enough rounds to update to arbitrary levels.
Now, if you play enough times, H3 becomes the dominant hypothesis with, say, 90% probability, so you'd accept a payout of, say, $L/2. Similarly, if you know that H3 isn't the case, you'd still assign very high probability to something like H2 after enough rounds, so you'd still accept a bounty of $L/2.
Now, suppose that all the alternative hypotheses H2, H3, ... are false, and your only other alternative hypothesis is H1 (magical intervention). Now the original dilemma has been recovered. What should one do?
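A minimal numerical sketch of this update; the likelihoods are my own stand-ins (H1 through H3 are treated as guaranteeing survival, H0 is the ordinary 1/6-risk hypothesis):

```python
# Priors follow the comment above; treating H1-H3 as survival-guaranteeing is
# a simplifying assumption for the sketch.
priors = {"H0: ordinary 1/6 risk": 1 - 1e-15 - 1e-4 - 1e-3,
          "H1: immune (magic)": 1e-15,
          "H2: I twist the gun": 1e-4,
          "H3: empty gun": 1e-3}
survival_likelihood = {"H0: ordinary 1/6 risk": 5/6,
                       "H1: immune (magic)": 1.0,
                       "H2: I twist the gun": 1.0,
                       "H3: empty gun": 1.0}

def posterior_after(n_rounds):
    unnorm = {h: p * survival_likelihood[h] ** n_rounds for h, p in priors.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# After ~40 survived rounds H3 is already the most probable hypothesis; H1 stays
# negligible because H2 and H3 explain the evidence equally well with far larger priors.
for h, p in posterior_after(40).items():
    print(f"{h}: {p:.3g}")
```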
↑ comment by Bunthut · 2021-04-02T21:18:18.985Z · LW(p) · GW(p)
You're going to have some prior on "this is safer for me, but not totally safe; it actually has a 1/1000 chance of killing me." This seems no less reasonable than the no-chance-of-killing-you prior.
If you've survived often enough, this can go arbitrarily close to 0.
I think that playing this game is the right move
Why? It seems to me like I have to pick between the theories "I am an exception to natural law, but only in ways that could also be produced by the anthropic effect" and "It's just the anthropic effect". The latter seems obviously more reasonable to me, and it implies I'll die if I play.
↑ comment by Donald Hobson (donald-hobson) · 2021-04-02T23:26:03.960Z · LW(p) · GW(p)
Work out your prior on being an exception to natural law in that way. Pick a number of rounds such that the chance of you winning by luck is even smaller. You currently think that the most likely way for you to be in that situation is if you were an exception.
What if the game didn't kill you, it just made you sick? Would your reasoning still hold? There is no hard and sharp boundary between life and death.
↑ comment by Bunthut · 2021-04-03T08:30:51.099Z · LW(p) · GW(p)
Hm. I think your reason here is more or less "because our current formalisms say so". Which is fair enough, but I don't think it gives me an additional reason - I already have my intuition despite knowing it contradicts them.
What if the game didn't kill you, it just made you sick? Would your reasoning still hold?
No. The relevant gradual version here is forgetting rather than sickness. But yes, I agree there is an embedding question here.
comment by ADifferentAnonymous · 2021-04-05T18:00:37.820Z · LW(p) · GW(p)
Shorter statement of my answer:
The source of the apparent paradox here is that the perceived absurdity of 'getting lucky N times in a row' doesn't scale linearly with N, which makes it unintuitive that an aggregation of ordinary evidence can justify an extraordinary belief.
You can get the same problem with less anthropic confusion by using coin-flip predictions instead of Russian Roulette. It seems weird that predicting enough flips successfully would force you to conclude that you can psychically predict flips, but that's just a real and correct implication of having a nonzero prior on psychic abilities in the first place.
comment by Shmi (shminux) · 2021-04-02T20:27:06.408Z · LW(p) · GW(p)
If you appear to be an outlier, it's worth investigating why precisely, instead of stopping at one observation and trying to make sense of it using what is essentially an outside view. There are generally higher-probability models in the inside view, such as "I have hallucinated other people dying/playing" or "I always end up with an empty barrel".
↑ comment by Bunthut · 2021-04-02T21:13:33.191Z · LW(p) · GW(p)
Sure, but with current theories, even after you've gotten an infinite amount of evidence against every possible alternative consideration, you'll still believe that you're certain to survive. This seems wrong.
↑ comment by ADifferentAnonymous · 2021-04-02T21:28:53.379Z · LW(p) · GW(p)
Even after you've gotten an infinite amount of evidence against every possible alternative consideration, you'll still believe that you're certain to survive
Isn't the prior probability of B the sum over all specific hypotheses that imply B? So if you've gotten an arbitrarily large amount of evidence against all of those hypotheses, and you've won at Russian Roulette an arbitrarily high number of times... well, you'll just have to get more specific about those arbitrarily large quantities to say what your posterior is, right?
↑ comment by Bunthut · 2021-04-02T22:29:07.050Z · LW(p) · GW(p)
Isn't the prior probability of B the sum over all specific hypotheses that imply B?
I would say there is also a hypothesis that just says your probability of survival is different for no apparent reason, or only for similarly stupid reasons like "this electron over there in my pinky works differently from other electrons" that are untestable for the same anthropic reasons.
↑ comment by ADifferentAnonymous · 2021-04-05T17:27:40.180Z · LW(p) · GW(p)
Okay. So, we agree that your prior says that there's a 1/N chance that you are unkillable by Russian Roulette for stupid reasons, and you never get any evidence against this. And let's say this is independent of how much Russian Roulette one plays, except insofar as you have to stop if you die.
Let's take a second to sincerely hold this prior. We aren't just writing down some small number because we aren't allowed to write zero; we actually think that in the infinite multiverse, for every N agents (disregarding those unkillable for non-stupid reasons), there's one who will always survive Russian Roulette for stupid reasons. We really think these people are walking around the multiverse.
So now let K be the base-5/6 log of 1/N. If N people each attempt to play K games of Russian Roulette (i.e. keep playing until they've played K games or are dead), one will survive by luck, one will survive because they're unkillable, and the rest will die (rounding away the off-by-one error).
If N^2 people across the multiverse attempt to play 2K games of Russian Roulette, N of them will survive for stupid reasons, one of them will survive by luck, and the rest will die. Picture that set of N immortals and one lucky mortal, and remember how colossal a number N must be. Are the people in that set wrong to think they're probably immortals? I don't think they are.
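A quick numeric check of these counts; the comment leaves N unspecified, so the 10^12 here is just an illustrative value:

```python
# Verifying the expected survivor counts above, with an illustrative N.
import math

N = 10**12                        # assumed prior odds against being "unkillable for stupid reasons"
K = math.log(1 / N, 5 / 6)        # rounds such that (5/6)^K = 1/N

# N people each attempt K rounds: expected survivors
print(N * (5 / 6) ** K)           # ~1 survives by luck
print(N * (1 / N))                # ~1 survives because unkillable

# N^2 people each attempt 2K rounds: expected survivors
print(N**2 * (5 / 6) ** (2 * K))  # ~1 survives by luck
print(N**2 * (1 / N))             # ~N survive because unkillable
```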
↑ comment by Bunthut · 2021-04-05T19:05:20.728Z · LW(p) · GW(p)
I have thought about this before posting, and I'm not sure I really believe in the infinite multiverse. I'm not even sure I believe in the possibility of being an individual exception under some other sort of possibility. But I don't think just asserting that without some deeper explanation is really a solution either. We can't just assign zero probability willy-nilly.
comment by sapphire (deluks917) · 2021-04-02T21:52:25.617Z · LW(p) · GW(p)
It is not a serious problem if your epistemology gives you the wrong answer in extremely unlikely worlds (i.e. ones where you survived 1000 rounds of Russian Roulette). Don't optimize for extremely unlikely scenarios.
↑ comment by Charlie Steiner · 2021-04-08T02:20:06.075Z · LW(p) · GW(p)
We can make this point even more extreme by playing a game like the "unexpected hanging paradox," where surprising the prisoner most of the time is only even possible if you pay for it in the coin of not surprising them at all some of the time.
↑ comment by NunoSempere (Radamantis) · 2021-04-02T22:13:57.611Z · LW(p) · GW(p)
I disagree; this might have real world implications. For example, the recent OpenPhil report on Semi-informative Priors for AI timelines [EA · GW] updates on the passage of time, but if we model creating AGI as playing Russian roulette*, perhaps one shouldn't update on the passage of time.
* I.e., AGI in the 2000s might have led to an existential catastrophe due to underdeveloped safety theory.
↑ comment by sapphire (deluks917) · 2021-04-04T02:43:46.591Z · LW(p) · GW(p)
That is not a similar situation. In the AI situation, your risks obviously increase over time.
comment by faul_sname · 2021-04-05T22:45:18.687Z · LW(p) · GW(p)
I'm not convinced this is a problem with the reasoner rather than a problem with the scenario.
Let's say we start with an infinite population of people, who all have as a purpose in life to play Russian Roulette until they die. Let's further say that one in a trillion of these people has a defective gun that will not shoot, no matter how many times they play.
If you select from the people who have survived 1000 rounds, your population will be made almost entirely out of people with defective guns (1 / 1e12 with defective guns vs 1/6e80 with working guns who have just gotten lucky).
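A quick sketch of that selection effect, using the one-in-a-trillion defect rate stipulated above:

```python
# Posterior that a 1000-round survivor holds a defective gun, under the
# one-in-a-trillion defect rate stipulated above.
p_defective = 1e-12
p_lucky = (5 / 6) ** 1000        # chance of surviving 1000 rounds with a working gun
posterior_defective = p_defective / (p_defective + (1 - p_defective) * p_lucky)

print(p_lucky)                   # vanishingly small next to the 1e-12 defect rate
print(posterior_defective)       # ~1.0: survivors almost all hold defective guns
```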
Alternatively, we could say that none of the guns at all are defective. Even if we make that assumption, if we count the number of observer moments of "about to pull the trigger", we see that the median observer-moment is someone who has played 3 rounds, the 99.9th percentile observer-moment has played 26 rounds, and by the time you're up to 100 rounds, approximately 99.999999% of observer-moments are from people who have pulled the trigger fewer times than you have survived. If we play a game of Follow the Improbability [LW · GW], we find that the improbability is the fact that we're looking at a person who has won 1000 rounds of Russian Roulette in a row, so if we figure out why we're looking at that particular person I think that solves the problem.
comment by Slider · 2021-04-02T23:57:16.275Z · LW(p) · GW(p)
The belief that you are holding a pistol and the belief that you have survived all those times are hard to mesh together. To the extent that you try to take the "chances are low" stance seriously, you can't really imagine yourself to be holding a pistol. We are in hypothetical land, and fighting the hypothetical might not be proper, but at least to me there seems to be a complexity asymmetry: "I have survived" is very foggy on the details, while the "1/6 chance" stance can answer many detailed questions about all kinds of interventions. Would your survival chance increase if atmospheric oxygen dropped fivefold? The "I have survived" stance has no basis to answer yes or no. In pure hypothesis land the 1/6 stance would also have these problems, but in practice we know about chemistry, and it is easy to reason that if the powder doesn't go off then bullets are not that dangerous.
If I believe that red people need oxygen to live and I believe I am blue, I do not believe that I need oxygen to live. You don't live a generic life, you live a particular life. And this extends to impersonal things also. If you believe a coin is fair, then it can come up heads or tails. If you believe it is fair under rainy conditions, cloudy conditions, under heavy magnetic fields and under gold rushes, that can seem like a similar belief. But your sampling of the different kinds of conditions is probably very different, and knowing that we are in a strongly magnetic environment might trigger a clause that maybe metal coins are not fair in these conditions.
comment by ChristianKl · 2021-04-09T15:09:45.146Z · LW(p) · GW(p)
Consider two hypotheses: A says that everyone has a 1/6 chance of dying. B says that everyone else has a 1/6 chance of dying, but I survive for sure.
There's no reason to only consider those two hypotheses.
If I trusted my memory, which I likely wouldn't given the stakes, my available data would only suggest that there's something my trials have in common that isn't the case for the average person.
It could be that I did all my previous tries at a specific outside temperature, and that temperature prevents the bullet from being fired. It could be bound to a specific location where the gun was fired. It could be bound to a lot of other factors.
I have no reason to believe more strongly that I'm very special than that my previous rounds of playing the game share a systematic bias that's not just due to me being me, and I have no guarantee that this systematic bias will continue in the future.
↑ comment by Bunthut · 2021-04-09T19:26:07.612Z · LW(p) · GW(p)
Adding other hypotheses doesn't fix the problem. For every hypothesis you can think of, there's a version of it with "but I survive for sure" tacked on. This hypothesis can never lose evidence relative to the base version, but it can gain evidence anthropically. Eventually, these will get you. Yes, there are all sorts of considerations that are more relevant in a realistic scenario; that's not the point.
↑ comment by ChristianKl · 2021-04-09T22:43:06.643Z · LW(p) · GW(p)
You don't need to add other hypotheses to know that there might be unknown additional hypotheses.
comment by Charlie Steiner · 2021-04-08T02:16:38.225Z · LW(p) · GW(p)
I see a lot of object-level discussion (I agree with the modal comment) but not much meta.
I am probably the right person to stress that "our current theories of anthropics," here on LW, are not found in a Bostrom paper.
Our "current theory of anthropics" around these parts (chews on stalk of grass) is simply to start with a third-person model of the world and then condition on your own existence (no need for self-blinding or weird math, just condition on all your information as per normal). The differences in possible world-models and self-information subsumes, explains, and adds shades of grey to Bostrom-paradigm disagreements about "assumption" and "reference class."
This is, importantly, the sort of calculation done in UDT/TDT. See e.g. a Wei Dai post [LW · GW], or my later, weirder post [LW · GW].
↑ comment by Bunthut · 2021-04-08T18:46:20.990Z · LW(p) · GW(p)
To clarify, do you think I was wrong to say UDT would play the game? I've read the two posts you linked. I think I understand Wei's, and I think the UDT described there would play. I don't quite understand yours.
↑ comment by Charlie Steiner · 2021-04-09T04:55:48.177Z · LW(p) · GW(p)
I agree with faul sname, ADifferentAnonymous, shminux, etc. If every single person in the world had to play russian roulette (1 bullet and 5 empty chambers), and the firing pin was broken on exactly one gun in the whole world, everyone except the person with the broken gun would be dead after about 125 trigger pulls.
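A quick check of that figure; the world-population value is my own assumption:

```python
# Trigger pulls needed before the expected number of lucky survivors drops to ~1.
import math

world_population = 8e9   # assumed; the comment just says "every single person in the world"
pulls = math.log(world_population) / math.log(6 / 5)
print(pulls)             # ~125
```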
So if I remember being forced to pull the trigger 1000 times, and I'm still alive, it's vastly more likely that I'm the one human with the broken gun, or that I'm hallucinating, or something else, rather than me just getting lucky. Note that if you think you might be hallucinating, and you happen to be holding a gun, I recommend putting it down and going for a nap, not pulling the trigger in any way. But for the sake of argument we might suppose the only allowed hypotheses are "working gun" and "broken gun."
Sure, if there are miraculous survivors, then they will erroneously think that they have the broken gun, in much the same way that if you flipped a coin 1000 times and just so happened to get all heads, you might start to think you had an unfair coin. We should not expect to be able to save this person. They are just doomed.
It's like poker. I don't know if you've played poker, but you probably know that the basic idea is to make bets that you have the best hand. If you have 4 of a kind, that's an amazing hand, and you should be happy to make big bets. But it's still possible for your opponent to have a royal flush. If that's the case, you're doomed, and in fact when the opponent has a royal flush, 4 of a kind is almost the worst hand possible! It makes you think you can bet all your money when in fact you're about to lose it all. It's precisely the fact that four of a kind is a good hand almost all the time that makes it especially bad in the remaining tiny fraction of the time.
The person who plays russian roulette and wins 1000 times with a working gun is just that poor sap who has four of a kind into a royal flush.
(P.S.: My post is half explanation of how I would calculate the answer, and half bullet-biting on an unusual anthropic problem. The method has a short summary: just have a probabilistic model of the world and then condition on the existence of yourself (with all your memories, faculties, etc). This gives you the right conditional probability distribution over the world. The complications are because this model has to be a fancy directed graph that has "logical nodes" corresponding to the output of your own decision-making procedure, like in TDT. )
↑ comment by Bunthut · 2021-04-09T08:53:36.090Z · LW(p) · GW(p)
Maybe the disagreement is about what we consider the alternative hypothesis to be? I'm not imagining a broken gun - you could examine your gun and notice it isn't broken, or just shoot into the air a few times and see it firing. But even after you eliminate all of those, there's still the hypothesis "I'm special for no discernible reason" (or is there?) that can only be tested anthropically, if at all. And this seems worrying.
Maybe here's a stronger way to formulate it: consider all the copies of yourself across the multiverse. They will sometimes face situations where they could die, and they will always remember having survived all previous ones. So eventually, all the ones still alive will believe they're protected by fate or something, and then do something suicidal. Now you can bring the same argument about how there are a few actual immortals, but still... "A rational agent that survives long enough will kill itself unless it's literally impossible for it to do so" doesn't inspire confidence, does it? And it happens even in very "easy" worlds. There is no world where you have a limited chance of dying before you "learn the ropes" and are safe - it's impossible to have a chance of eventual death other than 0 or 1, without the laws of nature changing over time.
just have a probabilistic model of the world and then condition on the existence of yourself (with all your memories, faculties, etc).
I interpret that as conditioning on the existence of at least one thing with the "inner" properties of yourself.
↑ comment by Charlie Steiner · 2021-04-09T16:44:02.920Z · LW(p) · GW(p)
I think in the real world, I am actually accumulating evidence against magic faster than I am trying to commit elaborate suicide.
↑ comment by Bunthut · 2021-04-09T19:20:47.832Z · LW(p) · GW(p)
The problem, as I understand it, is that there seem to be magical hypotheses you can't update against through ordinary observation, because by construction the only time they make a difference is in your odds of survival. So you can't update against them from observation, and anthropics can only update in their favour, so eventually you end up believing one and then you die.
↑ comment by Charlie Steiner · 2021-04-09T23:47:04.128Z · LW(p) · GW(p)
The amount that I care about this problem is proportional to the chance that I'll survive to have it.
comment by Dmitriy Vasilyuk (dmitriy-vasilyuk) · 2021-04-03T01:59:58.504Z · LW(p) · GW(p)
Reference class issues.
SSA, because that one me is also 100% of my reference class.
I think it's not necessarily true that on SSA you would also have to believe B, because the reference class doesn't necessarily have to involve just you. Defenders of SSA often have to face the problem/feature that different choices of reference class yield different answers. For example, in Anthropic Bias Bostrom argues that it's not very straightforward to select the appropriate reference class: some are too wide and some (such as the trivial reference class) often too narrow.
The reference class you are proposing for this problem, just you, is even narrower than the trivial reference class (which includes everybody in your exact same epistemic situation so that you couldn't tell which one you are.) It's arguably not the correct reference class, given that even the trivial reference class is often too narrow.
Reproducing your intuitions.
It seems to me that your intuition of not wanting to keep playing can actually be reproduced by using SSA with a more general reference class, along with some auxiliary assumptions about living in a sufficiently big world. This last assumption is pretty reasonable given that the cosmos is quite likely enormous or infinite. It implies that there are many versions of Earth involving this same game where a copy of you (or just some person, if you wish to widen the reference class beyond trivial) participates in many repetitions of the Russian roulette, along with other participants who die at the rate of 1 in 6.
In that case, after every game, 1 in 6 of you die in the A scenario, and 0 in the B scenario, but in either scenario there are still plenty of "you"s left, and so SSA would say you shouldn't increase your credence in B (provided you remove your corpses from your reference class, which is perfectly fine a la Bostrom).
My take on the answer.
That said, I don't actually share your intuition for this problem. I would think, conditional on my memory being reliable etc., that I have better and better evidence for B with each game. Also, I fall on the side of SIA, in large part because of the weird reference class issues involved in the above analysis. So to my mind, this scenario doesn't actually create any tension.
↑ comment by Bunthut · 2021-04-03T08:18:24.570Z · LW(p) · GW(p)
In that case, after every game, 1 in 6 of you die in the A scenario, and 0 in the B scenario, but in either scenario there are still plenty of "you"s left, and so SSA would say you shouldn't increase your credence in B (provided you remove your corpses from your reference class, which is perfectly fine a la Bostrom).
Can you spell that out more formally? It seems to me that so long as I'm removing the corpses from my reference class, 100% of people in my reference class remember surviving every time so far just like I do, so SSA just does normal bayesian updating.
The reference class you are proposing for this problem, just you, is even narrower than the trivial reference class (which includes everybody in your exact same epistemic situation so that you couldn't tell which one you are.) It's arguably not the correct reference class, given that even the trivial reference class is often too narrow.
I did mean to use the trivial reference class for the SSA assessment, just not in a large world. And it still seems strange to me that how large the world is would change the conclusion here. So even if you get this to work, I don't think it reproduces my intuition. Besides, if the only reason we successfully learn from others is that we defined our reference class to include them - well, then the assumption we can't update against is just "what reference class we're in". I'd similarly count this as a non-solution that's just hard-wiring the right answer.
↑ comment by Dmitriy Vasilyuk (dmitriy-vasilyuk) · 2021-04-03T19:37:11.969Z · LW(p) · GW(p)
Can you spell that out more formally? It seems to me that so long as I'm removing the corpses from my reference class, 100% of people in my reference class remember surviving every time so far just like I do, so SSA just does normal bayesian updating.
Sure, as discussed for example here: https://www.lesswrong.com/tag/self-sampling-assumption, [? · GW] if there are two theories, A and B, that predict different (non-zero) numbers of observers in your reference class, then on SSA that doesn't matter. Instead, what matters is what fraction of observers in your reference class have the observations/evidence you do. In most of the discussion from the above link, those fractions are 100% on either A or B, resulting, according to SSA, in your posterior credences being the same as your priors.
This is precisely the situation we are in for the case at hand, namely when we make the assumptions that:
- The reference class consists of all survivors like you (no corpses allowed!)
- The world is big (so there are non-zero survivors on both A and B).
So the posteriors are again equal to the priors and you should not believe B (since your prior for it is low).
I did mean to use the trivial reference class for the SSA assessment, just not in a large world. And it still seems strange to me that how large the world is would change the conclusion here.
I completely agree, it seems very strange to me too, but that's what SSA tells us. For me, this is just one illustration of serious problems with SSA, and an argument for SIA.
If your intuition says to not believe B even if you know the world is small then SSA doesn't reproduce it either. But note that if you don't know how big the world is you can, using SSA, conclude that you now disbelieve the combination small world + A, while keeping the odds of the other three possibilities the same - relative to one another - as the prior odds. So basically you could now say: I still don't believe B but I now believe the world is big.
Finally, as I mentioned, I don't share your intuition; I believe B over A if these are the only options. If we are granting that my observations and memories are correct, and the only two possibilities are that I just keep getting incredibly lucky OR "magic", then with every shot I'm becoming more and more convinced of the magic.
↑ comment by Bunthut · 2021-04-03T21:42:34.653Z · LW(p) · GW(p)
In most of the discussion from the above link, those fractions are 100% on either A or B, resulting, according to SSA, in your posterior credences being the same as your priors.
For the anthropic update, yes, but isn't there still a normal update? Where you just update on the gun not firing, as an event, rather than on your existence? Your link doesn't have examples where that would be relevant either way. But if we didn't do this normal updating, then it seems like you could only learn from an observation if some people in your reference class make the opposite observation in different worlds. So if you use the trivial reference class, you will give everything the same probability as your prior, except for eliminating worlds where no one has your epistemic state and renormalizing. You will expect to violate Bayes' law even in normal situations that don't involve any birth or death. I don't think that's how it's meant to work.
↑ comment by Dmitriy Vasilyuk (dmitriy-vasilyuk) · 2021-04-04T07:50:33.387Z · LW(p) · GW(p)
You have described some bizarre issues with SSA, and I agree that they are bizarre, but that's what defenders of SSA have to live with. The crucial question is:
For the anthropic update, yes, but isn't there still a normal update?
The normal updates are factored into the SSA update. A formal reference would be the formula for P(H|E) on p.173 of Anthropic Bias, which is the crux of the whole book. I won't reproduce it here because it needs a page of terminology and notation, but instead will give an equivalent procedure, which will hopefully be more transparently connected with the normal verbal statement of SSA, such as one given in https://www.lesswrong.com/tag/self-sampling-assumption:
SSA: All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.
That link also provides a relatively simple illustration of such an update, which we can use as an example:
Notice that unlike SIA, SSA is dependent on the choice of reference class. If the agents in the above example were in the same reference class as a trillion other observers, then the probability of being in the heads world, upon the agent being told they are in the sleeping beauty problem, is ≈ 1/3, similar to SIA.
In this case, the reference class is not trivial; it includes N + 1 or N + 2 observers (observer-moments, to be more precise; and N = a trillion), of which only 1 or 2 learn that they are in the sleeping beauty problem. The effect of learning new information (that you are in the sleeping beauty problem or, in our case, that the gun didn't fire for the umpteenth time) is part of the SSA calculation as follows:
- Call the information our observer learns E (in the example above E = you are in the sleeping beauty problem)
- You go through each possibility for what the world might be according to your prior. For each such possibility i (with prior probability Pi) you calculate the chance Qi of having your observations E assuming that you were randomly selected out of all observers in your reference class (set Qi = 0 if there are no such observers).
- In our example we have two possibilities: i = A, B, with Pi = 0.5. On A, we have N + 1 observers in the reference class, with only 1 having the information E that they are in the sleeping beauty problem. Therefore, QA = 1 / (N + 1) and similarly QB = 2 / (N + 2).
- We update the priors Pi based on these probabilities, the lower the chance Qi of you having E in some possibility i, the stronger you penalize it. Specifically, you multiply Pi by Qi. At the end, you normalize all probabilities by the same factor to make sure they still add up to 1. To skip this last step, we can work with odds instead.
- In our example the original odds of 1:1 then update to QA:QB, which is approximately 1:2, as the above quote says when it gives "≈ 1/3" for A; see the sketch just below.
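Here is a minimal numerical sketch of this procedure for the sleeping-beauty example with N outside observers (N is set to an arbitrary large value):

```python
# SSA update for the sleeping-beauty example above, with N outside observers.
N = 10**12

prior = {"A (heads)": 0.5, "B (tails)": 0.5}
# Chance of having observation E ("I am in the sleeping beauty problem") if you
# are a randomly selected member of the reference class in each possibility:
Q = {"A (heads)": 1 / (N + 1),   # 1 of N+1 observers has E
     "B (tails)": 2 / (N + 2)}   # 2 of N+2 observer-moments have E

unnorm = {i: prior[i] * Q[i] for i in prior}
z = sum(unnorm.values())
posterior = {i: p / z for i, p in unnorm.items()}
print(posterior)   # approximately {'A (heads)': 1/3, 'B (tails)': 2/3}
```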
So if you use the trivial reference class, you will give everything the same probability as your prior, except for eliminating worlds where no one has your epistemic state and renormalizing. You will expect to violate Bayes' law even in normal situations that don't involve any birth or death. I don't think that's how it's meant to work.
In normal situations using the trivial class is fine with the above procedure with the following proviso: assume the world is small or, alternatively, restrict the class further by only including observers on our Earth, say, or galaxy. In either case, if you ensure that at most one person, you, belongs to the class in every possibility i then the above procedure reproduces the results of applying normal Bayes.
If the world is big and has many copies of you then you can't use the (regular) trivial reference class with SSA, you will get ridiculous results. A classic example of this is observers (versions of you) measuring the temperature of the cosmic microwave background, with most of them getting correct values but a small but non-zero number getting, due to random fluctuations, incorrect values. Knowing this, our measurement of, say, 2.7K wouldn't change our credence in 2.7K vs some other value if we used SSA with the trivial class of copies of you who measured 2.7K. That's because even if the true value was, say, 3.1K there would still be a non-zero number of you's who measured 2.7K.
To fix this issue we would need to include in your reference class whoever has the same background knowledge as you, irrespective of whether they made the same observation E you made. So all you's who measured 3.1K would then be in your reference class. Then the above procedure would have you severely penalize the possibility i that the true value is 3.1K, because Qi would then be tiny (most you's in your reference class would be ones who measured 3.1K).
But again, I don't want to defend SSA, I think it's quite a mess. Bostrom does an amazing job defending it but ultimately it's really hard to make it look respectable given all the bizarre implications imo.
↑ comment by Bunthut · 2021-04-05T17:22:02.794Z · LW(p) · GW(p)
That link also provides a relatively simple illustration of such an update, which we can use as an example:
I didn't consider that illustrative of my question, because "I'm in the sleeping beauty problem" shouldn't lead to a "normal" update anyway. That said, I haven't read Anthropic Bias, so if you say it really is supposed to be the anthropic update only, then I guess it is. The definition in terms of "all else equal" wasn't very informative for me here.
To fix this issue we would need to include in your reference class whoever has the same background knowledge as you
But background knowledge changes over time, and a change in reference class could again lead to absurdities like this. So it seems to me like the sensible version of this would be to have your reference class always be "agents born with the same prior as me", or identical in an even stronger sense, which would lead to something like UDT.
Now that I think of it SSA can reproduce SIA, using the reference class of "all possible observers", and considering existence a contingent property of those observers.
↑ comment by Dmitriy Vasilyuk (dmitriy-vasilyuk) · 2021-04-06T00:59:05.492Z · LW(p) · GW(p)
Learning that "I am in the sleeping beauty problem" (call that E) when there are N people who aren't is admittedly not the best scenario to illustrate how a normal update is factored into the SSA update, because E sounds "anthropicy". But ultimately there is not really much difference between this kind of E and the more normal sounding E* = "I measured the CMB temperature to be 2.7K". In both cases we have:
- Some initial information about the possibilities for what the world could be: (a) sleeping beauty experiment happening, N + 1 or N + 2 observers in total; (b) temperature of CMB is either 2.7K or 3.1K (I am pretending that physics ruled out other values already).
- The observation: (a) I see a sign by my bed saying "Good morning, you in the sleeping beauty room"; (b) I see a print-out from my CMB apparatus saying "Good evening, you are in the part of spacetime where the CMB photons hit the detector with energies corresponding to 3.1K ".
In either case you can view the observation as anthropic or normal. The SSA procedure doesn't care how we classify it, and I am not sure there is a standard classification. I tried to think of a possible way to draw the distinction, and the best I could come up with is:
Definition (?). A non-anthropic update is one based on an observation E that has no (or a negligible) bearing on how many observers in your reference class there are.
I wonder if that's the definition you had in mind when you were asking about a normal update, or something like it. In that case, the observations in 2a and 2b above would both be non-anthropic, provided N is big and we don't think that the temperature being 2.7K or 3.1K would affect how many observers there would be. If, on the other hand, N = 0 like in the original sleeping beauty problem, then 2a is anthropic.
Finally, the observation that you survived the Russian roulette game would, on this definition, similarly be anthropic or not depending on who you put in the reference class. If it's just you it's anthropic, if N others are included (with N big) then it's not.
The definition in terms of "all else equal" wasn't very informative for me here.
Agreed, that phrase sounds vague; I think it can simply be omitted. All SSA is really trying to say is that P(E|i), where i runs over all possibilities for what the world could be, is not just 1 or 0 (as it would be in naive Bayes), but is determined by assuming that you, the agent observing E, are selected randomly from the set of all agents in your reference class (which exist in possibility i). So for example if half such agents observe E in a given possibility i, then SSA instructs you to set the probability of observing E to 50%. And in the special case of a 0/0 indeterminacy it says to set P(E|i) = 0 (bizarre, right?). Other than that, you are just supposed to do normal Bayes.
What you said about leading to UDT sounds interesting but I wasn't able to follow the connection you were making. And about using all possible observers as your reference class for SSA, that would be anathema to SSAers :)
↑ comment by Bunthut · 2021-04-06T23:42:30.386Z · LW(p) · GW(p)
Definition (?). A non-anthropic update is one based on an observation E that has no (or a negligible) bearing on how many observers in your reference class there are.
Not what I meant. I would say anthropic information tells you where in the world you are, and normal information tells you what the world is like. An anthropic update, then, reasons about where you would be, if the world were a certain way, to update on world-level probabilities from anthropic information. So sleeping beauty with N outsiders is a purely anthropic update by my count. Big worlds generally tend to make updates more anthropic.
What you said about leading to UDT sounds interesting but I wasn't able to follow the connection you were making.
One way to interpret the SSA criterion is to have beliefs in such a way that, in as many worlds as possible (weighted by your prior), you would be as right as possible in the position of an average member of your reference class. If you "control" the beliefs of members in your reference class, then we could also say to believe in such a way as to make them as right as possible in as many worlds as possible. "Agents which are born with my prior" (and maybe "and using this epistemology", or some stronger kind of identicalness) is a class whose beliefs are arguably controlled by you in the timeless sense. So if you use it, you will be doing a UDT-like optimization. (Of course, it will be a UDT that believes in SSA.)
And about using all possible observers as your reference class for SSA, that would be anathema to SSAers :)
Maybe, but if there is a general form that can produce many kinds of anthropics based on how its free parameter is set, then calling the result of one particular value of the parameter SIA and the results of all others SSA does not seem to cleave reality at the joints.
comment by NunoSempere (Radamantis) · 2021-04-02T22:26:23.191Z · LW(p) · GW(p)
I also have the sense that this problem is interesting.