Anthropics made easy?
post by Stuart_Armstrong · 2018-06-14T00:56:50.555Z · LW · GW · 61 comments
tl;dr: many effective altruists and rationalists seem to have key misunderstandings of anthropic reasoning; but anthropic probability is actually easier than it seems.
True or false:
- The fact we survived the cold war is evidence that the cold war was less dangerous.
I'd recommend trying to answer that question in your head before reading more.
Have you got an answer?
Or at least a guess?
Or a vague feeling?
Anyway, time's up. That statement is true - obviously, surviving is evidence of safety. What are the other options? Surviving is evidence of danger? Obviously not. Evidence of nothing at all? It seems unlikely that our survival has exactly no implications about the danger.
Well, I say "obviously", but, until a few months ago [LW · GW], I hadn't realised it either. And five of the seven people I asked at or around EA Global also got it wrong. So what's happening?
Formalised probabilities beat words
The problem, in my view, is that we focus on true sentences like:
- If we're having this conversation, it means humanity survived, no matter how safe or dangerous the cold war was.
And this statement is indeed true. If we formalise it, it becomes: P(survival | conversation) = 1, and P(survival | conversation, cold war safe) = P(survival | conversation, cold war dangerous) = 1.
Thus our conversation screens off the danger of the cold war. And, intuitively, from the above formulation, the danger or safety of the cold war is irrelevant, so it feels like we can't say anything about it.
I think it's similar linguistic or informal formulations that have led people astray. But for the question at the beginning of the post, we aren't asking about the probability of survival (conditional on other factors), but the probability of the cold war being safe (conditional on survival). And that's something very different:
- P(cold war safe | survival) = P(cold war safe)*P(survival | cold war safe)/P(survival).
Now, P(survival | cold war safe) is greater than P(survival) by definition - that's what "safe" means - hence P(cold war safe | survival) is greater than P(cold war safe). Thus survival is positive evidence for the cold war being safe.
Note that this doesn't mean that the cold war was actually safe - it just means that the likelihood of it being safe is increased when we notice we survived.
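To make this concrete, here is a minimal numerical sketch of the update above; the two likelihood numbers (0.95 and 0.5) are made up purely for illustration:

```python
# Bayes update on "cold war safe" given survival, with illustrative numbers.
prior_safe = 0.5
p_survival_given_safe = 0.95       # assumed for illustration
p_survival_given_dangerous = 0.5   # assumed for illustration

p_survival = (prior_safe * p_survival_given_safe
              + (1 - prior_safe) * p_survival_given_dangerous)

posterior_safe = prior_safe * p_survival_given_safe / p_survival
print(posterior_safe)  # ~0.655 > 0.5: survival shifts us towards "safe"
```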
Doing anthropic probability, properly and easily
The rest of this post is obsolete; see the post here [LW · GW] for the most recent version of anthropic probability in Fermi situations. Decision theory considerations cannot be avoided.
I've recently concluded that anthropic probability is actually much easier than I thought (though, currently, the approach in this post is my own interpretation rather than a generally accepted fact). Don't worry about reference classes, SIA, SSA, and other complications that often seem to come up in anthropic reasoning.
Instead, start with a prior chosen somehow, then update it according to the evidence that a being such as you exists in the universe.
That's it. No extra complications, or worries about what to do with multiple copies of yourself (dealing with multiple copies comes under the purview of decision theory, rather than probability).
Let's look at how this plays out in a few test cases.
Fermi paradox and large universes
First, we can apply this to the probability that life will appear. Let's assume that there is a probability x for life appearing on any well-situated terrestrial-like planet around a long-lived star.
Then we choose a prior over this probability x of life appearing. We can then update this prior given our own existence.
Assume first that the universe is very small - maybe a few trillion times bigger than the current observable universe. What that means is that there is likely to be only a single place in the universe that looks like the solar system: humanity, as we know it, could only have existed in one place. In that case, the fact that we do exist increases the probability of life appearing in the solar system (and hence on suitable terrestrial-like planets).
The update is easy to calculate: we update our prior P by weighting P(x) with x (which is the probability of life on Earth in this model) and renormalising, so P(x | we exist) = x P(x) / ∫ y P(y) dy. This updates P quite heavily towards large x (ie towards a larger probability of life existing).
Now suppose that the universe is very large indeed - 3^^^^3 times larger than the observable universe, say. In that case, there will be a huge number of possible places that look identical to our solar system. And, for all but the tiniest of x's, one of these places will have, with probability almost 1, beings identical with us. In that case, the update is very mild: P(x | we exist) = w(x) P(x) / ∫ w(y) P(y) dy, where w(x) is the probability that a humanity like us exists somewhere in the universe, given that the probability of life existing on a given planet is x. As I said before, for all but the tiniest values of x, w(x) ≈ 1, so the update is generally mild.
If the universe is actually infinite in size, then P(x) does not get updated at all, since the tiniest x>0 still guarantees our existence in an infinite universe.
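Here is a rough sketch of those three cases in code, assuming (purely for illustration) a uniform prior over x and N independent suitable places in the universe:

```python
import numpy as np

xs = np.linspace(1e-12, 1.0, 10_000)
prior = np.ones_like(xs) / len(xs)       # uniform prior over x (illustrative)

def posterior_mean(n_places):
    w = 1 - (1 - xs) ** n_places         # P(a humanity like us exists somewhere | x)
    post = prior * w
    post /= post.sum()
    return (xs * post).sum()

for n in [1, 10**6, 10**12]:
    print(n, posterior_mean(n))

# With N = 1 (a "small" universe) the posterior mean of x rises from ~0.5 to ~0.67;
# for large N, w(x) is ~1 for almost all x and the update becomes negligible,
# matching the small/large/infinite discussion above.
```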
There is a converse - if we take a joint prior over the size of the universe and the probability x of life, then the fact that we exist pushes towards a large universe, especially for low x. Essentially, our existence is evidence against "small universe, small x", and pushes the probability away from that and into the other possibilities.
The simulation argument
The simulation argument is interesting to consider in this anthropic formulation. In a small universe, we can conclude that we are almost certainly in a simulation. The argument is that the probability of life emerging exactly as us is very low; but the probability of some life emerging, and running lots of simulations that would include us, is higher. The key assumption is that any advanced civilization is likely to create more than one simulated alien civilization.
Hence this updates towards simulation in small universes.
If the universe is large, however, we are more likely to exist without needing to be in a simulation, so the update is far smaller.
Doomsday argument
Dealing with the Doomsday argument is even simpler: there ain't such a critter.
This anthropic theory cares about the probability of us existing right now; whether there is a doom tomorrow, or in a trillion years, or never, does not change this probability. So there is no anthropic update towards short-lived humanity.
But what about those arguments like "supposing that all humans are born in a random order, chances are that any one human is born roughly in the middle"?
Normally to counter this argument, we bring up the "reference class issue", asking why we're talking about humans, when we could instead be talking about all living beings, all sentient beings, all those who share your family name, all left-handed white males in the 35-45 year range currently writing articles about anthropics...
Let's be more specific: if I put myself in the reference class of males, then I expect that males are more likely than females to die out in the future. Any female friend of mine would conclude the opposite.
This is enough to show that a lot of fishy things are going on with reference classes. In my view, the problem is more fundamental: imagine that (hypothetical) moment when you're thinking "I know I am a human, but I will now open my eyes to know which human I am." In that very moment, I am... identical to any other hypothetical human who hasn't opened their eyes either. In other words, I am now part of a class of duplicates.
Thus, as I will argue in a later post, reference classes are fundamentally objects of decision theory, not of probability theory.
Sleeping Beauty
Let's apply this to the famous Sleeping Beauty problem. A coin is flipped and, if it comes up heads, you are to be awoken once more; if it comes up tails, you are to be awoken twice more, with an amnesia potion in between the two awakenings.
By the argument I presented above, the prior is obviously 50-50 on heads; in both the heads and tails universes, we are going to be awoken, so there is no update: after awakening, the probability of heads and tails remains 50-50.
This is the standard "halfer"/Self-Sampling Assumption answer to the Sleeping Beauty problem, one that I previously [LW · GW] "demonstrated" to be wrong.
Note a few key differences with standard SSA, though. First of all, there are no references classes here, which were the strongest arguments against the original SSA. Secondly, there are multiple identical agents, so the 50-50 odds may be "true" from some abstract point of view, but as soon as you have to act, this becomes a question of decision theory, and decision theory will most often imply that you should behave "as if" the odds were 2/3-1/3 (the "thirder"/Self-Indication Assumption answer).
So the odds are 50-50, but this doesn't mean that Sleeping Beauty has to behave in the way that the halfers would naïvely expect.
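A small simulation separates the two numbers; nothing beyond the story above is assumed, and the per-experiment and per-awakening frequencies correspond to the halfer and thirder answers respectively:

```python
import random

random.seed(0)
experiments = 100_000
heads_experiments = 0
awakenings = 0
awakenings_in_tails = 0

for _ in range(experiments):
    heads = random.random() < 0.5
    if heads:
        heads_experiments += 1
        awakenings += 1            # woken once
    else:
        awakenings += 2            # woken twice, amnesia in between
        awakenings_in_tails += 2

print(heads_experiments / experiments)   # ~0.5: per-experiment (halfer) frequency
print(awakenings_in_tails / awakenings)  # ~0.67: per-awakening (thirder) betting frequency
```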
Boltzmann brains
What of the possibility that we are a Boltzmann brain (a human brain-moment created by random thermal or quantum fluctuations)? If the universe is large or infinite, then it is almost certain to contain us as a Boltzmann brain.
And this would seem to stop us from reasoning any further. Any large universe in which there were some fluctuations would get the same update, so it seems that we cannot distinguish anything about the universe at all; the fact that we (momentarily) exist doesn't even mean that the universe must be hospitable to human life!
This is another opportunity to bring in decision theory. Let's start by assuming that not only we exist, but that our memories are (sorta) accurate, that the Earth exists pretty much as we know it, and that the laws of physics that we know and, presumably, love, will apply to the whole observable universe.
Given that assumption, we can reason about our universe, locally, as if it's exactly as we believe it to be.
Ok, that's well and good, but why make that assumption? Simply because if we don't make that assumption, we can't do anything. If there's a probability that you may spontaneously vanish in the next microsecond, without being able to do anything about it - then you should ignore that probability. Why? Because if that happens, none of your actions have any consequences. Only if you assume your own post-microsecond survival do your actions have any effects, so, when making a decision, that is exactly what you should assume.
Similarly, if there's a chance that induction itself breaks down tomorrow, you should ignore that possibility, since you have no idea what to do if that happens.
Thus Boltzmann brains can mess things up from a probability theory standpoint, but we should ignore them from a decision theory standpoint.
61 comments
Comments sorted by top scores.
comment by Wei Dai (Wei_Dai) · 2018-06-14T02:47:16.704Z · LW(p) · GW(p)
Instead, start with a prior chosen somehow, then update it according to the evidence that a being such as you exists in the universe.
This seems very similar to Radford Neal’s full non-indexical conditioning:
I will here consider what happens if you ignore such indexical information, conditioning only on the fact that someone in the universe with your memories exists. I refer to this procedure as “Full Non-indexical Conditioning” (FNC).
Note however that Neal says this idea implies that the thirder answer is correct in Sleeping Beauty:
In this regard, note that the even though the experiences of Beauty upon wakening on Monday and upon wakening on Tuesday (if she is woken then) are identical in all “relevant” respects, they will not be subjectively indistinguishable. On Monday, a fly on the wall may crawl upwards; on Tuesday, it may crawl downwards. Beauty’s physiological state (heart rate, blood glucose level, etc.) will not be identical, and will affect her thoughts at least slightly. Treating these and other differences as random, the probability of Beauty having at some time the exact memories and experiences she has after being woken this time is twice as great if the coin lands Tails than if the coin lands Heads, since with Tails there are two chances for these experiences to occur rather than only one. This computation assumes that the chance on any given day of Beauty experiencing exactly what she finds herself experiencing is extremely small, as will be the case in any realistic version of the experiment.
Assuming your idea is the same as FNC, I think from a decision theory perspective it's still worse than just not updating. See this comment [LW(p) · GW(p)] of mine under a previous post about FNC.
Replies from: Wei_Dai, Stuart_Armstrong, Stuart_Armstrong
↑ comment by Wei Dai (Wei_Dai) · 2018-06-14T21:13:34.092Z · LW(p) · GW(p)
It looks like I should actually claim priority for this idea myself, since I came up with something very similar on the way to UDT. From this 1998 post:
One piece of information about the real universe you have direct access to is your own mind state. This is captured in the statement D = "The real universe contains at least one person with mind state M" where M is your current mind state. I'm going to assume this is the ONLY piece of information about the real universe you have direct access to. Everything else must be computed from the prior and this data. The justification for this is that I can't think of any other information that is not part of or derived from D.
Right away you know that any universe that does not contain at least one person with mind state M cannot be real. It's also not hard to see that for any two universes that both contain at least one person with mind state M, the ratio of their posterior probabilities is the same as the ratio of their priors. This means the universe most likely to be real given D is the one that has the highest prior among the universes that contain at least one person with mind state M.
Replies from: radford-neal
↑ comment by Radford Neal (radford-neal) · 2018-06-14T21:41:50.487Z · LW(p) · GW(p)
Well, I wouldn't be surprised if a bunch of people have come up with similar ideas, but in the post you link to, you apply it only to a rather strange scenario in which the universe is the output of a program, which is allowed to simply generate all possible bit strings, and then decide that in this context the idea has absurd consequences. So I'm not sure that counts as coming up with it as an idea to take seriously...
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2018-06-14T22:10:17.294Z · LW(p) · GW(p)
But I took it seriously enough to come up with a counter-argument against it. Doesn't that count for something? :)
To be clear I'm referring to the second post in that thread, where I wrote:
Let me try to generalize the argument that under the universal prior the 1UH gives really weird results. The idea is simply that any sufficiently large and/or long universe that doesn't repeat has a good chance of including a person with mind state M, so knowing that at least one person with mind state M exists in the real universe doesn't allow you to eliminate most of them from the set of possible universes. If we want to get a result that says the real universe is likely to be in a class of intuitively acceptable universes, we would have to build that directly into our prior. That is, make them a priori more likely to be real than all other large/long universes.
Several questions follow if this argument is sound. First, is it acceptable to consciously construct priors with a built in preference for intuitively acceptable universes? If so how should this be done? If not the 1UH is not as intuitive as we thought. We would have to either reject the 1UH or accept the conclusion that the real universe is likely to be really weird.
(In that post 1UH refers to the hypothesis that only one universe exists, and I was apparently assuming that what you call FNC is the only way to do Bayesian updating under 1UH so I was thinking this is an argument against 1UH, but looking at it now, it's really more of an argument against FNC.)
Replies from: radford-neal
↑ comment by Radford Neal (radford-neal) · 2018-06-14T23:17:10.065Z · LW(p) · GW(p)
Rather than abandon FNC for the reason you describe, I make the meta-argument that we don't know that the universe is actually large enough for FNC to have problems, and it seems strange that local issues (like Doomsday or Sleeping Beauty) should depend on this. So whatever modifications to FNC might be needed to make it work in a very large universe should in the end not actually change the answers FNC gives for such problems when a not-incredibly-large universe is assumed.
Do you see your "not updating" scheme as the appropriate new theory applicable to very large universes? If so, does it in fact give the same result as applying FNC while assuming the universe is not so large?
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2018-06-19T06:50:12.019Z · LW(p) · GW(p)
Do you see your “not updating” scheme as the appropriate new theory applicable to very large universes?
It doesn't fully solve problems associated with very large universes, but I think it likely provides a framework in which those problems will eventually be solved. See this post [LW · GW] for more details.
See also this post [LW · GW] which explains my current views on the nature of probabilities, which may be needed to understand the "not updating" approach.
If so, does it in fact give the same result as applying FNC while assuming the universe is not so large?
Sort of. As I explained in a linked comment [LW(p) · GW(p)], when you apply FNC you assign zero probability to the universes not containing someone with your memories and then renormalize the rest, but if your decisions have no consequences in the universes not containing someone with your memories, you end up making the same decisions whether you do this "updating" computation or not. So "not updating" gives the same result in this sense.
↑ comment by Stuart_Armstrong · 2018-06-20T13:12:09.961Z · LW(p) · GW(p)
Ok, I've revised the idea entirely.
See here for why FNC doesn't work as a probability theory (and neither do SIA or SSA): https://www.lesswrong.com/posts/iNi8bSYexYGn9kiRh/paradoxes-in-all-anthropic-probabilities
See here for how you can use proper scoring functions to answer the probability of seeing alien life in the galaxy; depending on whether you average the scores or total them, you get SSA-style or SIA-style answers: https://www.lesswrong.com/posts/M9sb3dJNXCngixWvy/anthropics-and-fermi
↑ comment by Stuart_Armstrong · 2018-06-15T02:55:24.850Z · LW(p) · GW(p)
I'd forgotten this argument (I think I made it myself a few times too). I'm planning a new post to see what can be done about it (for some reason, I can't edit my current post to add a caveat).
Replies from: habryka4
↑ comment by habryka (habryka4) · 2018-06-15T03:17:46.510Z · LW(p) · GW(p)
Huh, what is preventing you from editing your post?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2018-06-20T11:36:08.444Z · LW(p) · GW(p)
It seemed it was a general lesswrong problem, now fixed; update done.
comment by cousin_it · 2018-06-14T13:37:50.559Z · LW(p) · GW(p)
Thus Boltzmann brains can mess things up from a probability theory standpoint, but we should ignore them from a decision theory standpoint.
Is that true? Imagine you have this choice:
1) Spend the next hour lifting weights
2) Spend the next hour eating chocolate
Lifting weights pays off later, but eating chocolate pays off right away. If you believe there's a high chance that, conditional on surviving the next hour, you'll dissolve into Boltzmann foam immediately after that - why not eat the chocolate?
Replies from: Gram Stone, Stuart_Armstrong, travisrm89
↑ comment by Gram Stone · 2018-06-14T16:54:15.352Z · LW(p) · GW(p)
Just taking the question at face value, I would like to choose to lift weights for policy selection reasons. If I eat chocolate, the non-Boltzmann brain versions will eat it too, and I personally care a lot more about non-Boltzmann brain versions of me. Not sure how to square that mathematically with infinite versions of me existing and all, but I was already confused about that.
The theme here seems similar to Stuart's past writing claiming that a lot of anthropic problems implicitly turn on preference. Seems like the answer to your decision problem easily depends on how much you care about Boltzmann brain versions of yourself.
↑ comment by Stuart_Armstrong · 2018-06-20T13:13:43.544Z · LW(p) · GW(p)
New and better reason to ignore Boltzmann brains in (some) anthropic calculations: https://www.lesswrong.com/posts/M9sb3dJNXCngixWvy/anthropics-and-fermi
↑ comment by travisrm89 · 2018-06-14T16:14:07.582Z · LW(p) · GW(p)
If you believe you're a Boltzmann brain, you shouldn't even be asking the question of what you should do next because you believe that in the next microsecond you won't exist. If you survive any longer than that, that would be extremely strong evidence that you're not a Boltzmann brain, so conditional on you actually being able to make a choice of what to do in the next hour, it still makes sense to choose to lift weights.
Replies from: Charlie Steiner
↑ comment by Charlie Steiner · 2018-06-14T16:26:10.080Z · LW(p) · GW(p)
In a truly max-entropy universe, the probability of being a boltzmann brain that survives for one hour is greater than the probability of being on Earth. High entropy is a weird place.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2018-06-14T21:20:26.331Z · LW(p) · GW(p)
I'm not so sure about that. An hour-long Boltzmann brain requires an hour of coincidences; a one-off coincidence that produces a habitable environment (maybe not a full Earth, but something that lasts a few hours) seems much more likely.
Replies from: Charlie Steiner
↑ comment by Charlie Steiner · 2018-06-14T23:53:32.823Z · LW(p) · GW(p)
Sure. I am using "Boltzmann brain" as shorthand for a person who has the same memories as me, but was actually created out of fluctuations in a high entropy, long-lived universe and merely has my memories by coincidence. The most likely way for such a person to have experiences for an hour is probably for them to be connected to some kind of coincidental simulation device, with a coincidental hour of tolerable environment around that simulation.
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2018-06-20T00:56:14.830Z · LW(p) · GW(p)
Just wanting to second what Charlie says here. As best as I can tell the decision-theoretic move made in the Boltzmann Brains section doesn't work; Neal's FNC has the result that (a) we become extremely confident that we are boltzmann brains, and (b) we end up having an extremely high time and space discount rate at first approximation and at second approximation we end up acting like solipsists as well, i.e. live in the moment, care only about yourself, etc. This is true even if you are standing in front of a button that would save 10^40 happy human lives via colonizing the light-cone. Because a low-entropy region the size of the light cone is unbelievably less common than a low-entropy region the size of a matrix-simulation pod.
comment by Chris_Leong · 2018-06-19T08:31:51.591Z · LW(p) · GW(p)
I'm not a fan of Shut up and Multiply where it means taking a maths equation and then just applying it without stopping to think about the assumptions that it is based upon and whether it is appropriate for this context. You can certainly catch errors by writing up a formal proof, but we need to figure out whether our formalisation is appropriate first.
Indeed, Said Achmiz was able to obtain a different answer [LW(p) · GW(p)] by formalising the problem differently. So the key question ends up being what is the appropriate formalisation. As I explain in my comment [LW(p) · GW(p)], the question is whether p(survival) is
a) the probability that a pre-war person will survive until after the cold war and observe that they didn't die, assuming they would survive anything other than a nuclear holocaust (following Stuart Armstrong)
b) the probability that a post-war person will observe that they survived (following Said Achmiz)
Stuart Armstrong is correct because probability problems usually implicitly assume that the agent knows the problem, so a post-war person is already assumed to know that they survived. In other words, b) involves asking someone who already knows that they survived to update on the fact that they survived again. Of course they aren't going to update!
Concrete example
Anyway, it'll be easier to understand what is happening here if we make it more concrete. On a gameshow, if a coin comes up heads, the contestants face a dangerous challenge that only a 1/3 survive, otherwise they face a safe(r) challenge that 1/2 survive. We will assume there are two lots of 6 people and that those who are "eliminated" aren't actually killed, but just fail to make it to the next round.
This leads us to expect that one group faces the dangerous challenge and one the safe challenge. So overall we expect: Survivors (3 safe, 2 dangerous), Eliminated (3 safe, 4 dangerous). This leads to the following results:
- If we survey everyone: The survivors have a higher ratio of people from the safe challenge than those who were eliminated
- If we only survey survivors: A disproportionate number of survivors come from the safer world
So regardless of which group we consider relevant, we get the result that we claimed above. I'll consider the complaint that "dead people don't get asked the question". If we are asked a conditional probability question like, "What is the chance of scoring at least 10 with two dice if the first die is a 4?", then the systematic way to answer that is to list all the (36) possibilities and eliminate all the possibilities that don't have a 4 for the first die. Applying this to "If we survived the cold war, what is the probability..." we see that we should begin by eliminating all people who don't survive the cold war from the set of possibilities. Since we've already eliminated the people who die, it doesn't matter that we can't ask them questions. How could it? We don't even want to ask them questions! The only time we need to handle this differently is when the condition is correlated with our ability to be asking the question, but doesn't guarantee it. An example would be if a few people end up in a coma; then we might want to update separately on our ability to be asking the question.
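A quick simulation of the gameshow setup above (just the stated coin and survival rates, nothing extra assumed) reproduces those numbers:

```python
import random

random.seed(0)
n = 600_000
safe_and_survived = dangerous_and_survived = 0

for _ in range(n):
    safe = random.random() < 0.5          # which challenge the contestant faces
    p_survive = 0.5 if safe else 1 / 3
    if random.random() < p_survive:
        if safe:
            safe_and_survived += 1
        else:
            dangerous_and_survived += 1

survivors = safe_and_survived + dangerous_and_survived
print(safe_and_survived / survivors)   # ~0.6: survivors disproportionately faced the safe challenge
print(survivors / n)                   # ~5/12 of contestants survive overall
```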
Boltzmann Brains
I think you've engaged in something of a dodge here. Yes, all of our predictions would be screwed if we are a Boltzmann brain; so if we want to get physics correct we have to hope that this isn't the case. However, your version of anthropics requires us to hope much harder. In other theories, we just have to hope that the calculations indicating that Boltzmann brains are the most likely scenario are wrong. However, in your anthropics, if the probability of at least one Boltzmann brain with our sensations approaches 1, then we can't update on our state at all. This holds even if we know that the vast majority of beings with that state aren't Boltzmann brains. That makes the problem much, much worse than it is under other theories.
Clarifying this with the Tuesday Problem
I think you've made the same mistake that I've identified here [LW · GW]:
A man has two sons. What is the chance that both of them are born on the same day if at least one of them is born on a Tuesday?
Most people expect the answer to be 1/7, but the usual answer is that 13/49 possibilities have at least one born on a Tuesday and 1/49 has both born on a Tuesday, so the chance is 1/13. Notice that if we had been told, for example, that one of them was born on a Wednesday we would have updated to 1/13 as well.
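A brute-force enumeration of the 49 equally likely (day, day) pairs reproduces these numbers (Tuesday is arbitrarily labelled 2 here):

```python
from itertools import product

TUESDAY = 2
pairs = list(product(range(7), repeat=2))              # 49 equally likely birth-day pairs

at_least_one_tue = [p for p in pairs if TUESDAY in p]  # 13 pairs
both_same_day = [p for p in at_least_one_tue if p[0] == p[1]]  # only (Tue, Tue)

print(len(at_least_one_tue), len(both_same_day))       # 13 1
print(len(both_same_day) / len(at_least_one_tue))      # 1/13, roughly 0.077
```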
The point is that there is a difference between the following:
a) meeting a random son and noting that he was born on Tuesday
b) discovering that one of the two sons (you don't know which) was born on a Tuesday
Similarly there is a difference between:
a) Discovering a random consciousness is experiencing a stream of events
b) Discovering that at least one consciousness is experiencing that stream of events
The only reason why this gives the correct answer for the first problem is that (in the simplification) we assume all consciousnesses before the war either survive or all of them die. This makes a) and b) coincide, so that it doesn't matter which one is used.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2018-06-19T15:36:31.846Z · LW(p) · GW(p)
Thanks for the concrete example, and I agree with the Boltzmann brain issue. I've actually concluded that no anthropic probability theory works in the presence of duplicates: https://www.lesswrong.com/posts/iNi8bSYexYGn9kiRh/paradoxes-in-all-anthropic-probabilities
It's all a question of decision theory, not probability.
https://www.lesswrong.com/posts/RcvyJjPQwimAeapNg/torture-vs-dust-vs-the-presumptuous-philosopher-anthropic
https://arxiv.org/abs/1110.6437
https://www.youtube.com/watch?v=aiGOGkBiWEo
As for the Tuesday problem, that seems to go away if you consider the process that told you at least one of them was born on a Tuesday (similar to the Monty Hall problem, depending on how the presenter chooses the door to open). If you model it as "it randomly selected one son and reported the day he was born on", then that selects Tuesday with twice the probability in the case where the two sons were born on a Tuesday, and this gives you the expected 1/7.
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2018-06-20T00:24:21.239Z · LW(p) · GW(p)
"As for the Tuesday problem, that seems to go away if you consider the process that told you at least one of them was born on a Tuesday" - I don't think we disagree about how the Tuesday problem works. The argument I'm making is that your method of calculating probabilities is calculating b) when we actually care about a).
To bring it back to the Tuesday problem, let's suppose you'll meet the first son on Monday and the second on Tuesday, but in between your memory will be wiped. You wake up on a day (not knowing what day it is) and you notice that they are a boy. This observation corresponds to a) meeting a random son and noting that he was born on Tuesday, not b) discovering that one of the two sons (you don't know which) was born on a Tuesday. Similarly, our observation corresponds to a) not b) for Sleeping Beauty. Admittedly, a) requires indexicals and so isn't defined in standard probability theory. This doesn't mean that we should attempt to cram it in, but instead extend the theory.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2018-06-20T11:11:20.386Z · LW(p) · GW(p)
but instead extend the theory.
I'm not sure that can be done: https://www.lesswrong.com/posts/iNi8bSYexYGn9kiRh/paradoxes-in-all-anthropic-probabilities
comment by Said Achmiz (SaidAchmiz) · 2018-06-16T19:59:45.595Z · LW(p) · GW(p)
… But for the question at the beginning of the post, we aren’t asking about the probability of survival (conditional on other factors), but the probability of the cold war being safe (conditional on survival). And that’s something very different:
- P(cold war safe | survival) = P(cold war safe)*P(survival | cold war safe)/P(survival).
Now, P(survival | cold war safe) is greater than P(survival) by definition—that's what "safe" means—hence P(cold war safe | survival) is greater than P(cold war safe). Thus survival is positive evidence for the cold war being safe.
Note that this doesn’t mean that the cold war was actually safe—it just means that the likelihood of it being safe is increased when we notice we survived.
No. Here is the correct formalization:
- S = we observe that we have survived
- Ws = Cold War safe
- Wd = Cold War dangerous
We want P(Ws|S)—probability that the Cold War is safe, given that we observe that we have survived:
P(Ws|S) = P(Ws) × P(S|Ws) / P(S)
Note that P(S) = 1, because the other possibility—P(we observe that we haven’t survived)—is impossible. (Is it actually 1 minus epsilon? Let’s say it is; that doesn’t materially change the reasoning.)
This means that this—
… P(survival | cold war safe) [i.e., P(S|Ws). —SA] is greater than P(survival) [P(S)] by definition …
… is false. P(S|Ws) is 1, and P(S) is 1. (Cogito ergo sum and so on.)
So observing that we have survived is not positive evidence for the Cold War being safe.
Replies from: Stuart_Armstrong, Chris_Leong
↑ comment by Stuart_Armstrong · 2018-06-19T13:04:44.773Z · LW(p) · GW(p)
The negation of S is "we don't observe that we have survived", which is perfectly possible.
Otherwise, your argument proves too much, and undoes all of probability theory. Suppose for the moment that a nuclear war wouldn't have actually killed us, but just mutated us into mutants. Then let S' be "us non-mutants observe that there was no nuclear war". By your argument above, P(S')=1, because us non-mutants cannot observe a nuclear war - only the mutant us can do so.
But the problem is now entirely non-anthropic. It seems to me that you have to either a) give up on probability altogether, or b) accept that the negation of S' includes "us mutants observe a nuclear war". Therefore the negation of a "X observes Y" can include options where X doesn't exist.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2018-06-19T17:30:59.192Z · LW(p) · GW(p)
The negation of S is “we don’t observe we have survived”, which is perfectly possible.
What do you mean by this? If you are referring to the fact that we can ask “have we survived the Cold War” and answer “not yet” (because the Cold War isn’t over yet), then I don’t see how this salvages your account. The question you asked to begin with is one which it is only possible to ask once the Cold War is over, so “not yet” is inapplicable, and “no” remains impossible.
If you mean something else, then please clarify.
As for the rest of your comment… it seems to me that if you accept that “us, after we’ve suffered some mutations” is somehow no longer the same observers as “us, now”, then you could also say that “us, a second from now” is also no longer the same observers as “us, now”, at which point you’re making some very strong (and very strange) claim about personal identity, continuity of consciousness, etc. Any such view does far more to undermine the very notion of subjective probability than does my account, which only points out that dead people can’t observe things.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2018-06-19T18:04:25.286Z · LW(p) · GW(p)
I'm pointing out that the negation of S="X observes A at time T" does not imply that X exists. S'="X observes ~A at time T" is a subset of ~S, but not the whole thing (X not existing at all at time T also negates S, for example). Therefore, merely because S' is impossible does not mean that S is certain.
The point about introducing differences in observers is that this is the kind of thing that your theory has to track, checking when an observer is sufficiently divergent that they can be considered different/the same. Since I take a more "god's eye view" of these problems (extinctions can happen, even without observers to observe them), it doesn't matter to me whether various observers are "the same" or not.
↑ comment by Chris_Leong · 2018-06-19T06:46:28.429Z · LW(p) · GW(p)
The key question here is what exactly is P(S). Let's simplify and assume that there is a guy called Bob who was born before the Cold War and will survive to the end unless there is a nuclear holocaust. The question is: is P(S)
a) the probability that a pre-war Bob will survive until after the war and observe that he didn't die
b) the probability that a post-war Bob will observe that they survived
Stuart Armstrong says a), you say b). Both of you are mathematically correct, so the question is which calculation is based upon the correct assumptions. If we use a), then we conclude that a pre-war Bob should update on observing that they didn't die. Before we can analyse b), we have to point out that there is an implicit assumption in probability problems that the agent knows the problem. So Bob is implicitly assumed to know that he is a post-war Bob before he observes anything, then he is told to update on the observation that he survived. Obviously, updating on information that you already know doesn't cause you to update at all.
This leads to two possibilities. Either we have a method of solving the problem directly for post-war Bobs, in which case we solve this without ever updating on this information. Like perhaps we imagine running this experiment multiple times and we count up the number of Bobs who survive in the safe world and divide it by the number of Bobs in total (I'm not stating whether or not this approach is correct, just providing an example of a solution that avoids updating). Or we imagine a pre-war Bob who doesn't have this information, then updates on receiving it. The one thing we don't do is assume a Bob who already knows this information and then get him to update on it.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2018-06-19T17:39:18.507Z · LW(p) · GW(p)
Before we can analyse b), we have to point out that there is an implicit assumption in probability problems that the agent knows the problem.
…
… we imagine a pre-war Bob who doesn’t have this information, then updates on receiving it.
Consider this formulation:
Bob has himself cryonically frozen in 1950. He is then thawed in 2018, but not told what year it is (yet). We ask Bob what he thinks is the probability of surviving the Cold War; he gives some number. We then let Bob read all about the Soviet Union’s collapse on Wikipedia; thus he now gains the information that he has survived the Cold War. Should he update on this?
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2018-06-20T00:48:29.689Z · LW(p) · GW(p)
Let's suppose that there are two Bobs. One on a planet that ends up in a nuclear holocaust and one on a planet that doesn't (effectively they flipped a coin). Before he is told any information, including the year, his subjective probability that the cold war didn't end in a nuclear holocaust is now effectively the Sleeping Beauty problem! So you've reduced an easier problem to a harder one, as we can't calculate the impact of updating until we figure out the prior probability (we will actually need prior probabilities given different odds of surviving).
The way I was formulating this is as follows. It is 1950. Bob knows that the Cold War is going to happen and that there is a good chance of it ending in destruction. After the Cold War ends, we tell Bob (if he survives) or his ghost (if he does not) whether he survived or he is a ghost. If we tell Bob that he is not a ghost, then he'll update on this information; note that this isn't actually contingent on us talking to his ghost at all. So the fact that we can't talk to his ghost doesn't matter.
Replies from: SaidAchmiz, SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2018-06-20T07:02:39.300Z · LW(p) · GW(p)
By the way, I don’t think I saw an explicit answer to my question about Bob who is cryonically frozen in 1950. Should he update his probability estimate of Cold War safety upon learning of recent history, or not?
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2018-06-20T08:57:00.076Z · LW(p) · GW(p)
Why not have him update? If it's new info his probability will change; if it's old info it will remain the same. "It's never new info, so the probability doesn't change" only happens if you've implicitly assumed he already knows it.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-06-20T04:24:47.920Z · LW(p) · GW(p)
Can you write this formulation without invoking ghosts, spirits, mediums, or any other way for dead people to be able to think / observe / update (which they cannot, in fact, do—this being the very core of the whole “being dead” thing)? If you cannot, then this fact makes me very suspicious, even if I can’t pinpoint exactly where the error (if any) is.
Having said that, let me take a crack at pinpointing the problem. It seems to me that one way of formulating probabilities of future events (which I recall Eliezer using many a time in the Sequences) is “how much do you anticipate observation X?”.
But the crux of the matter is that Bob does not anticipate ever observing that he has not survived the Cold War! And no matter what happens, Bob will never find that his failure-to-anticipate this outcome has turned out to be wrong. (Edit: In other words, Bob will never be surprised by his observations.)
Another way of talking about probabilities is to talk about bets. So let’s say that Bob offers to make a deal with you: “If I observe that I’ve survived the Cold War, you pay me a dollar. But if I ever observe that I haven’t survived the Cold War, then I will pay you a million trillion kajillion dollars.”[1] Will you take this bet, if you estimate the probability of Bob surviving the Cold War to be, perhaps, 99.9%? Follow-up question #1: what does a willingness to make this bet imply about Bob’s probability estimate of his survival? Follow-up question #2: (a) what odds should Bob give you, on this bet; (b) what odds would leave Bob with a negative expected profit from the bet?
[1] After the disastrous outcome of the Cold War has devalued the dollar, this figure is worth only about $1,000 in today’s money. Still, that’s more than $1!
Replies from: Chris_Leong, clone of saturn
↑ comment by Chris_Leong · 2018-06-20T05:57:55.159Z · LW(p) · GW(p)
"Can you write this formulation without invoking ghosts, spirits, mediums, or any other way for dead people to be able to think / observe / update?" - As I've already argued, this doesn't matter because we don't have to even be able to talk to them! But I already provided a version of this problem where it's a gameshow and the contestants are eliminated instead of killed.
Anyway, the possibilities are actually:
a) Bob observes that he survives the cold war
b) Bob observes that he didn't survive the cold war
c) Bob doesn't observe anything
You're correct that b) is impossible, but c) isn't, at least from the perspective of a pre-war Bob. Only a) is possible from the perspective of a post-war Bob, but only if he already knows that he is a post-war Bob. If he doesn't know he is a post-war Bob, then it is new information and we should expect him to update on it.
"Another way of talking about probabilities is to talk about bets" - You can handle these bets in the decision theory rather than probability layer. See the heading A red herring: betting arguments in this post [LW · GW].
Update: The following may help. Bob is a man. Someone who never lies or is mistaken tells Bob that he is a man. Did Bob learn anything? No, if he already knew his gender; yes, if he didn't. Similarly, for the cold war example, Bob always knows that he is alive, but it doesn't automatically follow that he knows he survived the cold war or that such a war happened.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2018-06-20T07:13:33.639Z · LW(p) · GW(p)
“Another way of talking about probabilities is to talk about bets”—You can handle these bets in the decision theory rather than probability layer. See the heading A red herring: betting arguments in this post.
I find this view unsatisfying, in the sense that if we accept “well, maybe it’s just some problem with our decision theory—nothing to do with probability…” as a response in a case like this, then it seems to me that we have to abandon the whole notion that probability estimates imply anything about willingness to bet in some way (or at all).
Now, I happen to hold this view myself (for somewhat other reasons), but I’ve seen nothing but strong pushback against it on Less Wrong and in other rationalist spaces. Am I to understand this as a reversal? That is, suppose I claim that the probability of some event X is P(X); I’m then asked whether I’d be willing to make some bet (my willingness for which, it is alleged, is implied by my claimed probability estimate); and I say: “No, no. I didn’t say anything at all about what my decision theory is like, so you can’t assume a single solitary thing about what bets I am or am not willing to make; and, in any case, probability theory is prior to decision theory, so my probability estimate stands on its own, without needing any sort of validation from my betting behavior!”—is this fine? Is it now the consensus view, that such a response is entirely valid and unimpeachable?
Replies from: Stuart_Armstrong, Chris_Leong
↑ comment by Stuart_Armstrong · 2018-06-20T11:26:55.381Z · LW(p) · GW(p)
I personally think decision theory is more important than probability theory. And anthropics does introduce some subtleties into the betting setup - you can't bet or receive rewards if you're dead.
But there are ways around it. For instance, if the cold war is still on, we can ask how large X has to be before you would prefer X units of consumption after the war (if you survive) to 1 unit of consumption now.
Obviously the you that survived the cold war and knows they survived, cannot be given a decent bet on the survival. But we can give you a bet on, for instance "new evidence has just come to light showing that the cuban missile crisis was far more dangerous/far safer than we thought. Before we tell you the evidence, care to bet in which direction the evidence will point?"
Then since we can actually express these conditional probabilities in bets, the usual Dutch Book arguments show that they must update in the standard way.
↑ comment by Chris_Leong · 2018-06-20T08:51:28.257Z · LW(p) · GW(p)
Well, creating a decision theory that takes into account the possibility of dying is trivial. If the fraction of wins where you survive is a and the fraction of losses where you survive is b, then with an initial probability of winning p (and of losing q = 1 - p), we get:
Adjusted probability = ap/(ap + bq)
This is 1 when b = 0.
This works for any event, not just wins or losses. We can easily derive the betting scheme from the adjusted probability. Is having to calculate the betting scheme from an adjusted probability really a great loss?
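A small sketch of this adjustment, with made-up values for a, b and p:

```python
def adjusted(p, a, b):
    # p: initial probability of winning; a: P(survive | win); b: P(survive | lose)
    q = 1 - p
    return a * p / (a * p + b * q)

print(adjusted(0.5, 0.95, 0.5))   # ~0.655
print(adjusted(0.5, 0.95, 0.0))   # 1.0 when b = 0, as noted above
```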
↑ comment by clone of saturn · 2018-06-20T09:31:11.073Z · LW(p) · GW(p)
If we're going to bet, we have to bet on the right thing. The question isn't about whether I survived the Cold War (which we already know), it's about whether the Cold War was dangerous. So, what would I actually bet on? I don't know how to quantify the dangerousness of the Cold War in real life, but here's a simpler scenario: if Omega flipped a coin before the Cold War and, if it came up heads, he precisely adjusted the circumstances to make it so the Cold War had a 25% chance of killing me, and if tails, a 75% chance, then asked me to bet on heads or tails now in 2018, I would be willing to bet on heads at up to 3:1 odds.
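A quick check of that 3:1 figure, assuming a fair coin and the stated kill probabilities:

```python
p_heads = 0.5
p_survive_given_heads = 0.75   # heads: 25% chance the Cold War kills me
p_survive_given_tails = 0.25   # tails: 75% chance the Cold War kills me

posterior_heads = (p_heads * p_survive_given_heads /
                   (p_heads * p_survive_given_heads
                    + (1 - p_heads) * p_survive_given_tails))
print(posterior_heads)          # 0.75, i.e. break-even at 3:1 odds on heads
```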
comment by Chris_Leong · 2018-06-19T07:04:43.413Z · LW(p) · GW(p)
Sorry for the stupid question, but what is P(x) in the Fermi Paradox section? It's a prior given x (the probability of life appearing on a well-situated terrestrial-like planet around a long-lived star). But the prior of what?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2018-06-19T15:37:28.853Z · LW(p) · GW(p)
It's the prior of x, not a prior given x; sorry for the confusion.
comment by ryan_b · 2018-06-14T17:01:55.156Z · LW(p) · GW(p)
It feels like we are leaning extremely hard on the reasonable priors condition. I am deeply confused by the relationship between anthropics and priors; it feels like as soon as we start considering anthropics it should race away into the past, because it's not like we are suddenly alive now and weren't before when we got our priors. This makes me suspect that updating on the fact that I am alive is some kind of updating on old evidence.
comment by Rafael Harth (sil-ver) · 2018-06-14T07:43:39.934Z · LW(p) · GW(p)
I don't think you're correct.
P(cold war safe | survival) = P(cold war safe)*P(survival | cold war safe)/P(survival). [...]
That's it. No extra complications, or worries about what to do with multiple copies of yourself (dealing with multiple copies comes under the purview of decision theory, rather than probability).
Copies seem to me to matter. In a quantum universe where you share consciousness with all of your copies, you have P(survival | cold war dangerous) = 1, and therefore no update on observing survival.
You might not have P(survival | cold war dangerous) = 1 for any single copy's survival, but you have it for the survival of at least one of your copies, and that's the probability which matters, because that's the observation you make.
Replies from: Stuart_Armstrong, gworley
↑ comment by Stuart_Armstrong · 2018-06-20T13:34:33.428Z · LW(p) · GW(p)
I'm still uncertain about what happens in the many world scenarios, see https://www.lesswrong.com/posts/NiA59mFjFGx9h5eB6/duplication-versus-probability
↑ comment by Gordon Seidoh Worley (gworley) · 2018-06-14T19:34:16.568Z · LW(p) · GW(p)
I think this is outside the scope of what is being argued here. This seems to be about subjective probability within the observable universe. What you are considering seems to require knowledge about the world we don't have access to. That doesn't make you wrong or Stuart wrong, but I do think you're talking about different things.
(FWIW I brought up a similar concern to Stuart in person.)
Replies from: sil-ver
↑ comment by Rafael Harth (sil-ver) · 2018-06-14T20:24:24.759Z · LW(p) · GW(p)
But it is relevant for whether the leading proposition is true or false.
The fact we survived the cold war is evidence that the cold war was less dangerous.
So if the intention is to be agnostic about quantum copies, then it is wrong to assert that the proposition is true, as the post does.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2018-06-14T23:10:01.762Z · LW(p) · GW(p)
I'm sympathetic to your line of reasoning (I'm not even sure what a counterfactual would really mean without using something like MWI), but I would suppose you could imagine a subjective, self-world-bounded interpretation where "less dangerous" means "less dangerous in expectation from back when I would have assessed the probability prior to knowing the outcome of the event".
Replies from: sil-ver
↑ comment by Rafael Harth (sil-ver) · 2018-06-15T08:08:11.227Z · LW(p) · GW(p)
I don't get that interpretation. I don't know how you can bound it to one world when other worlds matter.
Let's simplify and consider the cold war as a one-time event which either kills you or doesn't. I'm claiming the observation that you survived tells you literally nothing about the probability that it kills you, except that it's not 1. It could be that 99% of the mass of copies that originated from you, from the point where you assessed the probability prior to knowing the outcome, are now dead. This seems to be the expectation you've described, and your observations do nothing to update on that in either direction. Our survival does in fact have exactly no implications about the danger.
If you have a probability distribution over whether quantum worlds (and if needed, shared consciousness) exist, then in that sense the probability has changed, since the probability conditioned on a single world has changed. But that seems like a cheap way out and not what anyone intended to say. Or if you consider that a super dangerous cold war would also increase the probability of outcomes that look different from reality right now, but in which you're not dead, then that could be – and I think is – a legitimate reason to update. But that's also not what's been argued here.
Replies from: radford-neal
↑ comment by Radford Neal (radford-neal) · 2018-06-15T13:15:01.361Z · LW(p) · GW(p)
A problem with this line of reasoning is that it would apply to many other matters too. It's thought that various planet-sized objects are wandering in interstellar space, but I think no one has a clear idea how many there are. One of them could zip into the solar system, hit earth, and destroy all life on earth. Do you think that the fact that this hasn't happened for a few billion years is NO EVIDENCE AT ALL that the probability of it happening in any given year is low? The same question could be asked about many other possible catastrophic events, for some of which there might be some action we could take to mitigate the problem (for instance, a mutation making some species of insect become highly aggressive, and highly prolific, killing off all mammals, for which stockpiling DDT might be prudent). Do you think we should devote large amounts of resources to preventing such eventualities, even though ordinary reasoning would seem to indicate that they are very unlikely?
Replies from: sil-ver
↑ comment by Rafael Harth (sil-ver) · 2018-06-15T17:55:48.886Z · LW(p) · GW(p)
I happen to believe that there is a reason why my argument does not have the uncomfortable implications you're sketching out here. But before I present it, I want to point out that this has no bearing on whether the argument is true or false. If it had as an implication that we have a 30% chance of going extinct on any given day, that wouldn't make it any less plausible.
Do you think that the fact that this hasn't happened for a few billion years is NO EVIDENCE AT ALL that the probability of it happening in any given year is low?
Well, it is some evidence because the quantum assumptions may not hold.
But if we assume the necessary things then yes; I think the fact that we haven't been killed by an asteroid in a few billion years is no evidence at all that the probability of it happening is low. However! The fact that we also had very few large but non-fatal asteroids across a few billion years is very strong evidence that it is unlikely. And the same argument works for most stuff.
We only have to worry about the things for which we have a special reason to believe that they won't come in less-than-lethal forms. Three candidates for such things are, I think, nuclear weapons, aliens, and superintelligence. And I am indeed totally biting the bullet on the implications there. I made a post about the first two here [LW · GW]. And as for superintelligence, I think there will be some versions of us still around after a singularity, and it will indeed be incorrect for our future selves to conclude that, since we survived it, it wasn't that dangerous after all.
Replies from: radford-neal
↑ comment by Radford Neal (radford-neal) · 2018-06-15T23:51:25.860Z · LW(p) · GW(p)
I agree that the possibility of serious but less than catastrophic effects renders the issue here moot for many problems (which I think includes nuclear war.) I tried to make the interstellar planet example one where the issue is real - the number of such planets seems to me to be unrelated to how many asteroids are in the solar system, and might collide with less-catastrophic effects (or at least we could suppose so), whereas even a glancing collision with a planet-sized object would wipe out humanity. However, I may have failed with the mutated insect example, since one can easily imagine less catastrophic mutations.
I'm unclear on what your position is regarding such catastrophes, though. Something that quickly kills me seems like the most plausible situation where an argument regarding selection effects might be valid. But you seem to have in mind things that kill me more slowly as well, taking long enough for me to have lots of thoughts after realizing that I'm doomed. And you also seem to have in mind things that would have wiped out humanity before I was born, which seems like a different sort of thing altogether to me.
Replies from: sil-ver
↑ comment by Rafael Harth (sil-ver) · 2018-06-16T08:18:17.758Z · LW(p) · GW(p)
I tried to make the interstellar planet example one where the issue is real - the number of such planets seems to me to be unrelated to how many asteroids are in the solar system
Mh. I see. Well, my position on that isn't complicated, it's whatever the argument implies. If it is indeed true that we have no evidence on the probability of this even now, then I think it is possible that it happens quite frequently. (I'm ignorant on this, so I just have to take your word.) In regard to things that kill you "slowly," I think time just matters proportionately. If an event sets your expected lifespan to one year, then it would have to happen with the frequency of once per year for you to have even odds of finding yourself in that world, which would then be moderate evidence. (I might have made a mistake there, but it seems to me like that's how it works.) I think we can conclude that nukes probably don't go off once per month, but not that they go off less than once per lifetime.
comment by Donald Hobson (donald-hobson) · 2018-06-14T18:06:38.866Z · LW(p) · GW(p)
Imagine a dumbbell-shaped space-time: two hyper-spheres, each the size of the observable universe, connected by a thin wormhole. While that wormhole remains open, both pieces are part of the same universe; when the wormhole collapses, the universe is suddenly smaller, and the probability we should assign to a planet producing intelligent life grows. If we knew the wormhole would collapse next week, we would break conservation of expected evidence.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2018-06-14T21:24:22.410Z · LW(p) · GW(p)
That's a universe in two disconnected pieces, not two universes.
(and I'm still uncertain about what to do with many worlds and quantum mechanics, see https://www.lesswrong.com/posts/NiA59mFjFGx9h5eB6/duplication-versus-probability )
comment by avturchin · 2018-06-15T17:31:16.019Z · LW(p) · GW(p)
I think that the fact that "I exist" presents zero evidence for anything, as - most likely - all possible observers exist. However, if I observe some random variable, like the distance from the Sun to the center of the galaxy, it should be distributed randomly, and thus I will most likely find it in the middle of its interval. And, not surprisingly, the Sun is approximately in the middle between the center of the Galaxy and its edge (ignoring here, for simplicity, differences in star density and the impossibility of living in the galactic center).
In the same way, my birthday is unsurprisingly located somewhere in the middle of the year.
The only random variable which does not obey this rule is my position in a supposedly very long-lived future civilisation: I find my date of birth surprisingly early. And exactly this surprise is the nature of the Doomsday argument: the contradiction between the expectation that our civilisation will live very long and my early position in it.
Speaking of Cold War survival, the fact of survival provides zero evidence about the probability of extinction by nuclear war. But, as Bostrom showed in his article about surviving space catastrophes, if Cold War survival is unlikely, a random observer is more likely to find herself earlier in time, maybe in the 1950s.
Replies from: Stuart_Armstrong, TheWakalix↑ comment by Stuart_Armstrong · 2018-06-19T13:10:12.218Z · LW(p) · GW(p)
the fact of survival provides zero evidence about the probability of extinction by nuclear war
There are a thousand 1940s universes. In 500 of them, the probability of surviving the cold war is 1/500 (dangerous); in the other 500, the probability of surviving is 499/500 (safe).
In 2018, there are (in expectation) 500 universes where humans survive: 1 from the dangerous class and 499 from the safe class.
Updating on safeness seems eminently reasonable.
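Spelled out as a minimal Bayes calculation over this toy setup (equal prior weight on the two classes, nothing beyond the numbers above):

```python
# Toy 1940s multiverse: equal prior on "dangerous" and "safe" universes.
p_dangerous, p_safe = 0.5, 0.5
p_survive_given_dangerous = 1 / 500
p_survive_given_safe = 499 / 500

p_survive = (p_dangerous * p_survive_given_dangerous
             + p_safe * p_survive_given_safe)          # = 0.5
p_safe_given_survive = p_safe * p_survive_given_safe / p_survive

print(p_safe_given_survive)  # 0.998 = 499/500
```

Which is just the posterior you get by counting surviving universes: 499 safe out of 500.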
Replies from: avturchin↑ comment by TheWakalix · 2018-06-26T15:42:19.287Z · LW(p) · GW(p)
That isn't how probability works. Birthdays are roughly evenly distributed throughout the year, so a July birthday is just as likely as a January birthday. Just because we decide to mark a certain day as the beginning of the year doesn't mean that people are most likely to be born half a year from that day.
If you're saying that the "expected birthday", in the sense of "minimizing the average squared difference between your guess and the true value," is in the middle of the year, then that is true - but only as long as December and January are considered to be a year apart, which in reality they are not, since each year is followed by another. (An analogy - time is like a helix projected onto a circle, and the circle is the Year, with a point on the circle considered to be the endpoint.) Dates might be a uniquely bad example for this reason: the beginning and end of the window are generally arbitrary, while (for example) the beginning and end of a civilization are far less arbitrary. (It would be very strange to define the beginning of a civilization as halfway through its lifespan, and then say that the beginning happened immediately after the end. It would not be very strange to have the new year begin on July 1st.)
But that's not exactly it. For one, that definition is probably neither what you meant nor an actual definition of "more likely". Additionally, even if we're talking about the year 1967 specifically (thus invalidating my argument from circularity), people aren't more likely to be born in July than January (barring variation in how often sex occurs). I think that you're misunderstanding the anthropic doomsday argument, which says that with 90% prior probability we are not among the first 10% of all humans who will ever live, so our prior should be that if X people have lived so far, the future will contain at most another 9X people. It doesn't say that we're probably very close to the midpoint - in fact, it assumes that the chance of being any particular person is uniform!
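A minimal sketch of that bound (the birth count here is a stand-in figure, not a real estimate):

```python
# Doomsday-style bound: with 90% prior probability our birth rank is not in
# the first 10% of all humans, so the total N satisfies N <= 10 * X.
X = 100e9                  # stand-in for the number of people born so far
confidence = 0.9           # probability of not being in the first 10%
upper_bound_total = X / (1 - confidence)    # N <= 10X at the 90% level
upper_bound_future = upper_bound_total - X  # at most another 9X people

print(f"{upper_bound_total:.3g} total, {upper_bound_future:.3g} yet to be born")
```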
Your birthday and galactic location examples both have a similar problem - arbitrary framing. I could just as easily argue that we'd be most likely to find ourselves at the galactic center. You can transform a random distribution into another one with a different midpoint. For example, position-in-galaxy to distance-from-galactic-center.
It's possible that you got this idea from looking at bell curves, in which the middle is more common than the edges, even if you compare equally-sized intervals. But that's a property of the distribution, not something inherent in middles - in fact, with a bimodal distribution, the edges are more common than the middle.
Replies from: avturchin↑ comment by avturchin · 2018-06-26T18:43:55.253Z · LW(p) · GW(p)
No, I am not using a bell curve distribution. By "the middle of the year" I mean everything that is not the 1st of January or the 31st of December. Of course I understand that people have the same chance of being born in July as in December.
The example was needed to demonstrate the real (but weak) predictive power of mediocrity reasoning. For example, I could claim that it is very unlikely that you were born on either of these dates (31 December or 1 January), and most likely you were born somewhere between them; in the same way, I could claim that it is unlikely that it is exactly midnight on your clock.
And this does not depend on the choice of starting point or framing. If the day changed at 17:57, it would still be unlikely that your clock reads exactly 17:57 right now.
Replies from: TheWakalix↑ comment by TheWakalix · 2018-06-28T16:14:32.408Z · LW(p) · GW(p)
I interpreted "the middle" as a point and its near surroundings, which explains some of the disagreement. (All of your specific examples in your original post were near the midpoint, which didn't clarify which interpretation you intended.)
I think the more fundamental rule isn't about middles, and (as demonstrated here) a rule phrased in terms of middles is easily misinterpreted without many qualifiers and specifics. "Larger intervals are more likely, to the extent that the distribution is flat" is more fundamental, but there are so many ways to define large intervals that it doesn't seem very useful here. It all depends on what you call a very unusual point - if it's the middle that's most unusual to me, then my version of the doomsday argument says "we will probably either die out soon or last for a very long time, but not last exactly twice as long as we already have." (In this case, my large interval would be time-that-civilization-exists minus (midpoint plus neighborhood of midpoint), and my small interval would be midpoint plus neighborhood of midpoint.)
comment by Tetraspace (tetraspace-grouping) · 2018-06-14T14:13:56.440Z · LW(p) · GW(p)
The argument at the start just seems to move the anthropics problem one step back - how do we know whether we "survived"* the cold war?
*Not sure how to succinctly state this better; I mean if Omega told me that the True Probability of surviving the Cold War was 1%, I would update on the safety of the Cold War in a different direction than if it told me 99%, even though both entail me, personally, surviving the Cold War.
Replies from: Stuart_Armstrong, Charlie Steiner↑ comment by Stuart_Armstrong · 2018-06-19T13:06:48.509Z · LW(p) · GW(p)
how do we know whether we "survived"* the cold war?
We estimate how likely it is that we are delusional about the universe, versus having actually survived the cold war. Omega would have to put the probability of survival pretty low before I started to consider delusion as the most likely option.
↑ comment by Charlie Steiner · 2018-06-14T16:51:32.695Z · LW(p) · GW(p)
What seems to be a "true probability" is usually something subtly different - a parameter in a toy model of the world. Omega knows you survived just as well as you do - its P(survived) is 1, true as can be. When you talk about "true probability", you are talking about some property of a mental model of the cold war - the same category of thing as "dangerousness," which makes comparing the two more like a direct analogy than a probabilistic update.
See also: probability is in the mind [LW · GW].