SIA doomsday

post by Stuart_Armstrong · 2012-09-06T16:01:06.576Z · LW · GW · Legacy · 26 comments

Edit: the argument is presented more clearly in a subsequent post.

Many thanks to Paul Almond for developing the initial form of this argument.

It is well known in these circles that the self-sampling assumption (SSA) leads to the doomsday argument. The self-indication assumption (SIA) was developed to counter the doomsday argument. This is an old debate; but, interestingly, SIA has its own doomsday argument - one of a rather different form.

To see this, let's model the population of a planet somewhat like Earth. From century to century, the planet's population can increase, decrease or stay the same with equal probability. If it increases, it will increase by one billion two thirds of the time, and by two billion one third of the time - and the same for decreases (if a decrease would take it below zero, it stops at zero). Hence, each century, the probabilities of the possible population changes are:

Pop level change:   +2 billion   +1 billion   +0 billion   -1 billion   -2 billion
Probability:            1/9          2/9          3/9          2/9          1/9
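
For concreteness, here is a minimal sketch of this transition rule in Python (the names and representation are mine, purely for illustration):

```python
from fractions import Fraction

# Per-century population change, in billions, with the probabilities from the table.
STEP_PROBS = {+2: Fraction(1, 9), +1: Fraction(2, 9), 0: Fraction(3, 9),
              -1: Fraction(2, 9), -2: Fraction(1, 9)}

def next_population_distribution(pop):
    """Distribution over the next century's population (in billions), clamped at zero."""
    dist = {}
    for change, p in STEP_PROBS.items():
        nxt = max(pop + change, 0)
        dist[nxt] = dist.get(nxt, Fraction(0)) + p
    return dist

print(next_population_distribution(3))
# probabilities 1/9, 2/9, 1/3, 2/9 and 1/9 for 5, 4, 3, 2 and 1 billion
```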

During the century of the Three Lice, there were 3 billion people on the planet. Two centuries later, during the century of the Anchovy, there will still be 3 billion people on the planet. If you were alive on this planet during the intermediate century (the century of the Fruitbat), and knew those two facts, what would your estimate be for the current population?

From the outside, this is easy. The most likely answer is that there are still 3 billion in the intermediate century, which happens with probability 9/19 (= (3/9)*(3/9), renormalised). But there could also be 4 or 2 billion, with probabilities 4/19 each, or 5 or 1 billion, with probabilities 1/19 each. The expected population is 3 billion, as one would expect.
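
A sketch of that outside-view calculation, using exact fractions (again, the function name is my own):

```python
from fractions import Fraction

# Transition probabilities from the table above (population change in billions).
STEP_PROBS = {+2: Fraction(1, 9), +1: Fraction(2, 9), 0: Fraction(3, 9),
              -1: Fraction(2, 9), -2: Fraction(1, 9)}

def middle_posterior(past, future):
    """Outside-view posterior over the middle century's population, given the
    populations one century before and one century after. (The clamp at zero
    is ignored; it never binds in the 3-?-3 case.)"""
    weights = {}
    for change, p in STEP_PROBS.items():
        mid = past + change
        p_next = STEP_PROBS.get(future - mid, Fraction(0))
        weights[mid] = weights.get(mid, Fraction(0)) + p * p_next
    total = sum(weights.values())
    return {pop: w / total for pop, w in weights.items() if w}

posterior = middle_posterior(3, 3)
print(posterior)
# probabilities 1/19, 4/19, 9/19, 4/19 and 1/19 for 5, 4, 3, 2 and 1 billion
print(sum(pop * p for pop, p in posterior.items()))  # 3: expected population of exactly 3 billion
```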

Now let's hit this with SIA. This weighs the populations by their sizes, changing the probabilities to 5/57, 16/57, 27/57, 8/57 and 1/57, for populations of five, four, three, two and one billion respectively. Larger populations are hence more likely; the expected population is about 3.28 billion.
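
And a sketch of the SIA reweighting, starting from the outside-view posterior above:

```python
from fractions import Fraction

# Outside-view posterior over the Fruitbat-century population, from the 3-?-3 case.
outside = {5: Fraction(1, 19), 4: Fraction(4, 19), 3: Fraction(9, 19),
           2: Fraction(4, 19), 1: Fraction(1, 19)}

# SIA reweights each possible present population by its size, then renormalises.
weights = {pop: p * pop for pop, p in outside.items()}
total = sum(weights.values())
sia = {pop: w / total for pop, w in weights.items()}

print(sia)
# probabilities 5/57, 16/57, 27/57 (= 9/19), 8/57 and 1/57 for 5, 4, 3, 2 and 1 billion
print(sum(pop * p for pop, p in sia.items()))  # 187/57, about 3.28 billion
```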

(For those of you curious about what SSA says, that depends on the reference class. For the reference class of people alive during the century of the Fruitbat, it gives the same answer as the outside answer. As the reference class increases, it moves closer to SIA.)

 

SIA doomsday

So SIA tells us that we should expect a spike during the current century - and hence a likely decline into the next century. The exact numbers are not important: if we know the population before our current time and the population after, then SIA implies that the current population should be above the trendline. Hence (it seems) SIA predicts a decline from our current population (or at least a decline from the current trendline) - a doomsday argument.

Those who enjoy anthropic reasoning can take a moment to see what is misleading about that statement. Go on, do it.

Go on.

Still there? Then you've certainly already solved the problem, and are just reading to check that I got it too, and to compare stylistic notes. So for you (and for those lazy ones who've peeked ahead), here is the answer:

It's one of those strange things that happen when you combine probabilities and expectations, like the fact that E(X/Y)>1 does not imply that E(X)>E(Y). Here, the issue is that SIA only boosts above-trend present populations when the future population is held fixed; when the future is left open, a large present population also makes a large future population more likely, and the two effects cancel exactly.

Confused? Let's go back to our previous problem, still fix the past population at 3 billion, and let the future population vary. As we've seen, if the future population was 3 billion, SIA would boost the probability of an above-trendline present population; so, for instance, 3-5-3 is more likely among the 3-?-3 than it would be without SIA.

But now consider what would happen if the future population were 7 billion - if we were in 3-?-7. In this case, there is only one possibility in our model, namely 3-5-7. So large present populations are relatively more likely when the future population is large - and since SIA makes large present populations more likely, it also makes large future populations more likely. This removes the effect of the SIA doomsday.
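
Here is a minimal numerical check of that cancellation, under the assumption that SIA weights each whole history by its middle-century population: away from the zero boundary, the probability of a decline in the next century is 1/3 with or without the reweighting.

```python
from fractions import Fraction

STEP_PROBS = {+2: Fraction(1, 9), +1: Fraction(2, 9), 0: Fraction(3, 9),
              -1: Fraction(2, 9), -2: Fraction(1, 9)}

def decline_probability(past, weight_by_present):
    """P(future < present) given the past population, with or without SIA
    weighting of the present. The past is taken large enough that the clamp
    at zero never binds."""
    num = den = Fraction(0)
    for c1, p1 in STEP_PROBS.items():
        present = past + c1
        weight = present if weight_by_present else 1
        for c2, p2 in STEP_PROBS.items():
            prob = p1 * p2 * weight
            den += prob
            if c2 < 0:  # the population declines from present to future
                num += prob
    return num / den

print(decline_probability(10, weight_by_present=False))  # 1/3
print(decline_probability(10, weight_by_present=True))   # still 1/3
```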

To summarise:

  1. For known future population, SIA increases the probability of the present population being above trend (hence increasing the chance of a trend downswing in future).
  2. The current population is more likely to be high if the future population is high (hence increasing the chance of a trend upswing in future).
  3. These two effects exactly compensate. SIA has no impact on the future, once the present is known.

 

26 comments

comment by Mitchell_Porter · 2012-09-06T16:32:25.928Z · LW(p) · GW(p)

Stuart, sorry for being lazy or something, but "SIA" and "SSA" aside, I just don't see how I could be one of the first few billion ever to live, in a cosmic civilization whose ultimate population is in the quadrillions, and not be in a historical position that is a priori unlikely, with odds of about 1 million to 1 against. Do you have a comment on this naive argument?

Replies from: Manfred, Stuart_Armstrong, Desrtopa, ArisKatsaris
comment by Manfred · 2012-09-06T19:16:45.423Z · LW(p) · GW(p)

It's a tough job, but someone has to do it :P

Replies from: Matt_Caulfield, Matt_Caulfield
comment by Matt_Caulfield · 2012-09-07T04:38:08.152Z · LW(p) · GW(p)

"Were you born on Earth before interstellar spaceflight? Enlist in the Confessor corps today! Service guarantees citizenship!"

comment by Matt_Caulfield · 2012-09-07T02:43:01.803Z · LW(p) · GW(p)

It is a tough job, but I would rather be born now than any other era: I want to be a Confessor when I grow up.

comment by Stuart_Armstrong · 2012-09-06T17:49:51.907Z · LW(p) · GW(p)

Imagine these cosmic civilizations of different sizes exist in parallel universes. Then you are more likely to exist in the universe with many humans than with few. This exactly compensates for the effect you describe.

More pictorially: there are two parallel universes (think of them as different planets). In one, Mitchell Porter lives and nobody else. In the other, Mitchell Porter lives along with a billion other people.

But from your perspective, these other people are irrelevant: what you care about is that there are two Mitchell Porters, one in each universe/planet. So you should feel that it's equally likely that you are in either universe.

Replies from: endoself
comment by endoself · 2012-09-07T05:47:09.856Z · LW(p) · GW(p)

The usual way that I dispel the illusion of a 'reference class' based on something like sentience (as opposed to something sensible like the class of beings making the same observations as you) is by asking what inferences a nonsentient AI should make, but of course that line of argument won't convince Mitchell.

comment by Desrtopa · 2012-09-06T18:31:16.161Z · LW(p) · GW(p)

The situation may be a priori unlikely, but I don't think it's at all presumptuous to argue that the amount of evidence available to us is enough to shift the likelihood upwards dramatically.

The Doomsday Argument applies to any population of any species, ever. I think it's pretty reasonable to say that humans are in a pretty exceptional position out of the set of all known species.

comment by ArisKatsaris · 2012-09-06T17:43:09.946Z · LW(p) · GW(p)

Here's an idea, based on Quantum Mechanics, by someone who hardly knows any Quantum Mechanics at all -- therefore it's probably complete nonsense. (If it's not nonsense, feel free to name said solution after me. :-)

Doesn't the amplitude of a configuration need to be squared in order to figure out the probability of its observation? And also don't configurations tend to split more than merge? So doesn't that mean that if an amplitude-10 configuration splits to two amplitude-5 configurations, the relative probability of the first configuration is 10^2=100 compared to 5^2 + 5^2=50?

So, to put it differently, don't earlier instances have a greater probability of observation than later instances?

Replies from: JGWeissman
comment by JGWeissman · 2012-09-06T17:52:54.075Z · LW(p) · GW(p)

It is the squared magnitude of quantum amplitude that is conserved, not the quantum amplitude itself (which is represented as a complex number). Otherwise, the Born rule would not produce coherent probabilities.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-10T11:36:04.705Z · LW(p) · GW(p)

Concretely: A configuration with amplitude 10 (and measure 100) will split its flow into two configurations of amplitude 7.07 (and hence measure 50 each).

(Of course these are actually blobs whose measure is a multidimensional integral over the amplitude's squared modulus, and we'd be looking at the equivalent of 5 + 5i and 5 - 5i so that they linearly sum to the original 10 while having length 7.07 each, but whatever...)
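
A quick numerical check of this arithmetic, sketched in plain Python:

```python
# Sanity check: the branches 5+5i and 5-5i sum to the original amplitude 10,
# and their squared magnitudes sum to the original measure of 100.
a = 5 + 5j
b = 5 - 5j
print(abs(a), abs(b))          # about 7.07 each
print(a + b)                   # (10+0j): the two branches sum to the original amplitude
print(abs(a)**2 + abs(b)**2)   # about 100: total squared magnitude (measure) is conserved
```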

comment by HeatDeath · 2012-09-06T20:35:40.851Z · LW(p) · GW(p)

My take on the Doomsday Argument is that it boils down to the mundane and rather non-apocalyptic statement: "All other things being equal, I should expect to be born during a period in history when birthrates [of beings in my reference class] are high."

I can't quite see why this should imply anything at all about the shape of the population curve in the immediate future, let alone the long-term future.

comment by shminux · 2012-09-06T17:14:34.472Z · LW(p) · GW(p)

I was lost in the original Doomsday argument's logic:

supposing the humans alive today are in a random place in the whole human history timeline, chances are we are about halfway through it.

This assumes that the timeline is finite, otherwise "half-way" makes no sense. The following argument

we could be 95% certain that we would be within the last 95% of all the humans ever to be born.

relies on this assumption. The remaining logic is "if we assume a finite timeline, then we can estimate how long it is with some probability, given the population growth curve so far". The calculations themselves are irrelevant to the conclusion, which is "the total number of humans that will ever live is finite".

So, the whole argument boils down to "if something is finite, a reasonable function of it is also finite", which is hardly interesting.

Am I missing something?

Replies from: Pentashagon, Stuart_Armstrong
comment by Pentashagon · 2012-09-06T19:33:35.115Z · LW(p) · GW(p)

So long as we have imprecise measurements and only a finite amount of matter and energy, time is not an issue: we eventually run out of unique (or distinguishable by any of our measurements) humans that can exist, and we can just ask the nearly equivalent question: "what is the probability that I am in the last 95% of unique humans to ever exist?" At some point every possible human has been created, and out of that huge finite number each one of us is an index. The question is whether the heat death stops the creation of humans in this universe before 20 times the current number of humans has been created (for the 95% argument).

However, the statement of the problem completely ignores prior probabilities. For instance, it's possible that we now have more than 5% of the total human population alive right now (that would be true if only 100 billion have ever lived). That would mean that the oldest people alive right now have a 0 probability of being in the last 95%. Additionally, we have a very good idea of whether we are in the last 99.9999999% of humans who have ever lived; the last 7 babies born on Earth have a slightly positive probability, everyone else has a 0 probability. What is the probability that 7 more babies will be born, replacing the current 7 who have the possibility of being the last? 1 for all practical purposes. 100 babies? Same. In fact it probably already happened while I was writing this comment. A billion? Probably still very, very close to 1. The calculation can be extended into the future based on our knowledge of our environment and society and the probability of existential risk. Unless we have a good reason for predicting the destruction of all humans we shouldn't predict that we are in the last 95% of unique humans. Humans are built to reproduce, they have an environment that will probably last thousands if not hundreds of thousands of years, and they are learning to build their own environments anywhere they want. These facts are far more important than which particular brain is randomly selected to think of the doomsday argument.

EDIT: The part about the last 95% of humans is wrong if there are more than 5% still alive; it would have to be some of the first 5% still alive for the argument to be wrong for them today.

Replies from: shminux
comment by shminux · 2012-09-06T19:41:53.092Z · LW(p) · GW(p)

only a finite amount of matter and energy

This is an assumption based on our current level of science and technology and failure of imagination. There are many possible ways around it: baby universes, false vacuum decay, even possibly conversion of dark energy (which appears to be inexhaustible) to normal matter, to name a few.

However, the statement of the problem completely ignores prior probabilities

I was unable to follow your logic there... Are you saying that the Doomsday argument is wrong or that it is irrelevant or what?

Replies from: Pentashagon
comment by Pentashagon · 2012-09-06T20:14:04.168Z · LW(p) · GW(p)

This is an assumption based on our current level of science and technology and failure of imagination. There are many possible ways around it: baby universes, false vacuum decay, even possibly conversion of dark energy (which appears to be inexhaustible) to normal matter, to name a few.

That may reveal a weakness in the doomsday argument itself. For instance, did any of the first hundred billion humans even think of the doomsday argument? If the doomsday argument is flawed, will many future humans think of it more than briefly, out of historical interest? The nature of the human considering the doomsday argument significantly affects the sample space. Future humans in a free-energy universe would immediately see the falsehood of the doomsday argument and so from our perspective wouldn't even be eligible for the sample space. It may be that the doomsday argument only has a chance of being seriously considered by a 20th/21st century human, which simply turns the question into "what is the probability that I am somewhere between the 100 billionth and 130 billionth human to ever live?" or whatever the appropriate bounds might be.

I was unable to follow your logic there... Are you saying that the Doomsday argument is wrong or that it is irrelevant or what?

The doomsday argument is correct if no other information besides the total number of humans who have lived before me and finite resources are assumed. With additional information our confidence should be increased beyond what the doomsday argument can provide, making the doomsday argument irrelevant in practice.

Replies from: shminux
comment by shminux · 2012-09-06T20:20:20.321Z · LW(p) · GW(p)

Thanks, this makes sense.

comment by Stuart_Armstrong · 2012-09-06T17:36:31.455Z · LW(p) · GW(p)

SSA Doomsday: You are the hundred billionth human who ever lived (or something like that). If there are two hundred billion humans total, this is not very unusual. But if there are to be a quadrillion humans, you would be in a very unlikely situation, in the first 0.0...% or something.

Analogous argument: you're told that you're the smartest person in your extended family. This makes it more likely your extended family is small, rather than large. Replace "smartest" with any property that distinguishes you from the rest of humanity, and the argument follows.
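
A minimal sketch of that likelihood comparison, with round, made-up numbers purely for illustration:

```python
from fractions import Fraction

# Hypothetical round numbers: your birth rank, and two candidate totals
# for how many humans will ever live.
rank = 10**11                           # roughly the hundred billionth human
totals = {"200 billion": 2 * 10**11,    # doom-soon hypothesis
          "quadrillion": 10**15}        # doom-late hypothesis
assert all(rank <= n for n in totals.values())

# Under SSA, P(my birth rank = r | N humans ever to live) = 1/N for r <= N.
likelihood = {name: Fraction(1, n) for name, n in totals.items()}

# With equal priors, the posterior odds are just the likelihood ratio.
print(likelihood["200 billion"] / likelihood["quadrillion"])  # 5000: the small total is favoured
```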

Replies from: ArisKatsaris, shminux
comment by ArisKatsaris · 2012-09-06T18:30:04.847Z · LW(p) · GW(p)

Isn't it more analogous to a child being told "you're 5 years old!" and, for that reason alone, believing they're only likely to survive to around age 10?

Replies from: evand, gwern
comment by evand · 2012-09-06T21:52:01.448Z · LW(p) · GW(p)

That seems like a reasonable conclusion for the child to draw, in the absence of other evidence (like, say, the ages of the people around them).

Actually, I think the right conclusion is that they're likely to live to about 5*e years, not 10, but the idea is similar.

comment by gwern · 2012-09-07T00:14:27.569Z · LW(p) · GW(p)

Imagine you're told you're 50 years old... is only surviving to age 100 that bad a guess for a statistical argument which takes into account close to no information whatsoever?

(Reminds me of the reaction to the sunrise example for Laplace's law.)

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-09-07T10:00:56.819Z · LW(p) · GW(p)

Let's imagine that I'm an immortal being who has lived since prehistorical times -- except that I lose all memories every 50 years or so, and my body and mind revert to the form of an infant, so that for all intents and purposes I'm a new person.

From prehistorical times, I can therefore think the Doomsday argument to myself -- if I had the knowledge to do so. So when I'm Urgh the caveman, among the first 10,000 people I think that mankind is only likely to survive to around double that size, and when I'm Marconius the Roman, I think it likely that mankind is only likely to survive to around double that present size, and when I'm Aris Katsaris the modern Westerner, I think it likely that mankind is only likely to survive to around double this present size...

And each time, I effectively die and forget all my thoughts about Doomsday, and get born anew and reconstruct the Doomsday argument for myself. And can do so for as long as humanity lasts, and it never actually provides me any new information about how long humanity will last.

Until the point where I'm made physically or mentally immortal, I guess, and no longer die, at which point I no longer ask the Doomsday argument again, because I first asked it millennia back and remember it.

I don't know. This chain of thought above makes me intuitively think the Doomsday argument is bollocks, but I don't know if it will have the same effect on anyone else.

Replies from: Luke_A_Somers, gwern
comment by Luke_A_Somers · 2012-09-07T15:04:59.839Z · LW(p) · GW(p)

In an argument made confusing by manipulating the sample, I don't think it's very helpful to make an even weirder sample.

comment by gwern · 2012-09-07T15:08:06.670Z · LW(p) · GW(p)

Why does one example matter? The Doomsday argument is over billions of people (something like >100b so far), so one immortal - who doesn't even exist - shows nothing. He's wrong a few hundred times, so what - your immortal adds nothing at all to just pointing out that Romans or cavemen would've been wrong.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-09-07T15:47:11.933Z · LW(p) · GW(p)

The Doomsday argument is over billions of people.

Shouldn't it be either over all lifeforms or only over people who've heard and are able to appreciate the Doomsday argument?

so one immortal - who doesn't even exist - shows nothing

The example of the immortal is just a trick, helpful for thinking about individual lives as not especially meaningful to probabilities in an external sense. Your brain loses cognition and memory, its atoms eventually become part of many other people -- in a sense, we're all this "immortal" -- is it meaningful in a mathematical sense to label one particular "life" and say "I was born early" or "I was born late"?

I don't know. I admit myself just confused over all this.

comment by shminux · 2012-09-06T17:55:57.572Z · LW(p) · GW(p)

But if there are to be a quadrillion humans, you would be in a very unlikely situation, in the first 0.0...% or something.

Again the finiteness assumption, which presumes the Doomsday (barring infinite lifespan, which is just another infinity) to begin with. This is a trivial tautology. I'm growing more and more disenchanted with this area of research :(

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-09-07T11:14:53.331Z · LW(p) · GW(p)

Which is one of the reasons I recommend sidestepping the whole argument, and using "Anthropic decision theory": http://lesswrong.com/lw/891/anthropic_decision_theory_i_sleeping_beauty_and/