Solving the Doomsday argument

post by Stuart_Armstrong · 2019-01-17T12:32:23.104Z · score: 12 (6 votes) · LW · GW · 9 comments

The Doomsday argument gives an anthropic argument for why we might expect doom to come reasonably soon. It's known that the Doomsday argument works under SSA, but not under SIA.

Ok, but since different anthropic probability theories are correct answers to different questions [LW · GW], what are the question versions of the Doomsday argument, and is the original claim correct?

No Doomsday on birth rank

Simplify the model by assuming there are two possibilities with equal probability: a large universe (no Doomsday any time soon) with many, many future humans, and a small one (a Doomsday reasonably soon, within the next 200 billion people, say). To think in terms of frequencies, which come more naturally to humans, we can imagine running the universe many, many times, each time with this chance of Doomsday.

Roughly 108.5 billion humans have ever lived. So, asking:

  • What proportion of people with my birth rank live in a small universe?

The answer to that question converges to 1/2, the SIA probability: half of the people with that birth rank live in small universes, half in large universes.
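This convergence can be checked with a small Monte Carlo sketch. The universe sizes and the birth rank below are purely illustrative stand-ins (in billions of people), and the function name is mine, not from the post; the point is only that every universe, small or large, contains exactly one person with any given early birth rank, so the proportion tracks the 50:50 prior.

```python
import random

def sia_simulation(trials=100_000, small=200, large=20_000, rank=100):
    """Run many universes (sizes in billions of people) and count, among
    all people with the given birth rank, what fraction live in a small
    universe."""
    small_count = 0
    total = 0
    for _ in range(trials):
        # Pick a small or large universe with equal probability.
        size = small if random.random() < 0.5 else large
        if rank <= size:  # a person with this birth rank exists here
            total += 1
            if size == small:
                small_count += 1
    return small_count / total
```

Since the birth rank is below both universe sizes, every run contributes one person with that rank, and the fraction converges to 1/2.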

Doomsday for time travellers

To get an SSA version of the problem, we can ask [LW · GW]:

  • If a person is selected at random from among all humans, past, present, and future, what proportion of those selected people with my birth rank live in a small universe?

This will give an answer close to 1, as it converges on the SSA probability: a randomly selected person is far more likely to have a low birth rank in a small universe than in a large one.

But note that this is generally not the question that the Doomsday argument is posing. If there is a time traveller who is choosing people at random from amongst all of space and time, then their happening to choose you is a bad sign for the future (and yet another reason you should go with them). Note that this is consistent with conservation of expected evidence [LW · GW]: if the time traveller is out there but doesn't choose you, then this is a (very mild) update towards no Doomsday.
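The time-traveller selection can be sketched the same way. Again the sizes and rank are illustrative assumptions of mine (in billions): a "time traveller" draws one person uniformly from each universe, and we condition on the drawn person having the given birth rank.

```python
import random

def ssa_simulation(trials=1_000_000, small=200, large=20_000, rank=100):
    """A 'time traveller' picks one person uniformly at random from each
    universe; keep only the cases where the chosen person has the given
    birth rank, and ask how often the universe was small."""
    small_hits = 0
    hits = 0
    for _ in range(trials):
        size = small if random.random() < 0.5 else large
        chosen_rank = random.randint(1, size)
        if chosen_rank == rank:  # the traveller happened to choose "you"
            hits += 1
            if size == small:
                small_hits += 1
    return small_hits / hits
```

With the large universe 100 times the size of the small one, being chosen is 100 times more likely in the small universe, so the fraction comes out near 100/101, close to 1: being chosen really is a bad sign for the future.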

But for the classical non-time-travel situation, the Doomsday argument fails.


Comments sorted by top scores.

comment by shminux · 2019-01-17T16:30:41.210Z · score: 9 (5 votes) · LW · GW

The Doomsday argument is utter BS because one cannot reliably evaluate probabilities without first fixing a probability distribution. Without knowing more than just the number of humans who have existed so far, the argument devolves into arguing over which probability distribution to pick out of an uncountable number of possibilities. An honest attempt to address this question would start with modeling human population fluctuations, including various extinction events. Such a model has multiple free parameters: the rate of growth, the distribution of odds of various extinction-level events, the distribution of odds of surviving each type of event, event clustering, and so on. The minimum number of humans does not constrain these models in any interesting way, i.e. it does not privilege a certain class of models over others, or a certain set of free parameters over others, to the degree where we could evaluate a model-independent upper bound for the total number of humans with any confidence.

If you want to productively talk about Doomsday, you have to get your hands dirty and deal with specific x-risks and their effects, not armchair-theorize based on a single number and a few so-called selection/indication principles that have nothing to do with the actual human population dynamics.

comment by Stuart_Armstrong · 2019-01-17T16:36:46.295Z · score: 2 (1 votes) · LW · GW

The DA, in its SSA form (where it is rigorous), comes as a posterior adjustment to all probabilities computed in the way above. It's not an argument that doom is likely, just that doom is more likely than the objective odds would imply, in a precise way that depends on future (and past) population size.

However, my post shows that the SSA form does not apply to the question that people generally ask, so the DA is wrong.

comment by Lookingforyourlogic · 2019-02-04T18:07:12.578Z · score: 1 (1 votes) · LW · GW

Interesting post. Could the same argument not be used against the Simulation argument?

Simplify the model into assuming there are two possibilities with equal probability: a universe in which I, the observer, am one of many, many observers in an ancestor simulation run by some future civilization, and a universe in which I am a biological human naturally created by evolution on Earth. Again, we can imagine running the universe many, many times. But no matter how many people are in the universe under consideration, I can only have the experience of being one at a time. So, asking:

  • What proportion of people whose experiences I have live in a simulated world?

The answer to that question converges to 1/2 as well. But if every observer reasoned like this when asked whether they were in a simulation, most would get the wrong answer (assuming there are more simulated than real observers)! How can we deal with this apparent inconsistency? Of course, as you say, different answers to different questions. But which one should we consider valid, when both seem intuitively to make sense?

comment by Dr. Jamchie · 2019-01-17T19:30:43.705Z · score: 1 (1 votes) · LW · GW

Let's say you do not know your birth rank at first. Then someone asks you to guess whether the universe contains around 200 billion people or some very large number. Without any additional data, you should estimate 50% for either one. Then you learn that your birth rank is around 100 billion. Should you not then update towards the smaller universe having a greater than 50% chance?

comment by Stuart_Armstrong · 2019-01-17T21:04:50.009Z · score: 3 (2 votes) · LW · GW

Again, we have to be clear about the question. But if it's "what proportion of versions of me are likely to be in a large universe", then the answer is close to 1 (which is the SIA odds). Then you update on your birth rank, notice, to your great surprise, that it is low enough to exist in both large and small universes, so you update towards small and end up at 50:50.
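The two-step update described here can be written out explicitly. This is only an illustrative sketch with sizes I have chosen (in billions of people), not numbers from the thread:

```python
# SIA reasoning as a two-step Bayesian update.
small, large, rank = 200, 20_000, 100  # illustrative sizes, in billions

# Step 1: SIA weights each equal-prior universe by its number of
# observers, so before learning your birth rank, the large universe
# dominates.
p_small = small / (small + large)
p_large = large / (small + large)

# Step 2: update on the birth rank.  A given rank picks out exactly one
# observer per universe, so the likelihood of having that rank is
# 1/size in each universe.
post_small = p_small * (1 / small)
post_large = p_large * (1 / large)
norm = post_small + post_large
post_small, post_large = post_small / norm, post_large / norm
```

The observer-weighting from step 1 and the 1/size likelihood from step 2 cancel exactly, so the posterior lands at 50:50 regardless of the sizes chosen.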

comment by Dr. Jamchie · 2019-01-20T11:08:59.089Z · score: 1 (1 votes) · LW · GW

So what you are saying is: before one knows one's birth rank, one should assume an infinite universe? This does actually correspond to evidence about the size of the universe, but not about the size of the human population.

comment by Stuart_Armstrong · 2019-01-21T13:11:18.244Z · score: 2 (1 votes) · LW · GW

Again, it depends on what question you're asking. "If a copy of me existed, would it be more likely to exist in a small universe or in an infinite one?" has a pretty clear answer :-)

comment by avturchin · 2019-01-17T13:38:17.061Z · score: 1 (1 votes) · LW · GW

It is probably wrong to interpret the DA as "doom is imminent". The DA just says that we are likely in the middle of the total population of all humans (or other relevant observers) ever born.

For some emotional reasons we are not satisfied to be in the middle and interpret it as "doom", but there are 100 billion more people in the future according to the DA. It only starts to look like doom once we account for expected population growth, since in that case the next 100 billion people will appear within a few hundred years.

Moreover, the DA says that doom very soon is very unlikely, which I call the "reverse DA".

comment by Stuart_Armstrong · 2019-01-17T13:45:52.686Z · score: 3 (2 votes) · LW · GW

There are two versions of the DA: the first is "we should roughly be in the middle", and the second is "our birth rank is less likely if there are many more humans in the future".

I was thinking more of the second case, but I've changed the post slightly to make it more compatible with the first.