# A possible solution to the Fermi Paradox

post by sil ver (sil-ver) · 2018-05-05T14:56:03.143Z · score: 10 (3 votes) · LW · GW · 5 comments

[tl;dr: I argue that, if Many Worlds is true, the Survival Bias might explain the lack of observed life in the universe, under the assumption that there are no humans in most worlds where life on a reachable planet has reached technological maturity.]

[Content Warning: possibly unsettling]

Suppose you are offered a deal: you'll be put to sleep, cloned 99 times, and each of the 100 versions of you will be sent to an identical-looking room where they'll be woken up. Suppose that you all share the same consciousness: the lights are on for the same entity in all 100 copies (but they have no way of communicating with each other). A minute after waking up, one randomly chosen clone will receive five million dollars, while the other 99 will die a painless death, so quick that they will neither see it coming nor feel any pain.

It's not relevant for the plausibility of this argument whether you would take this deal (though it might have other implications). For now, suppose you take it. You're put to sleep and next thing you know, you wake up in a room. After you wait for a minute, the experimenter enters the room, hands you your five million dollars, and politely thanks you for your participation.

Should you be surprised? I'd say no. Nothing surprising has happened; in fact, there was only one way this could have gone all along. On the other hand, if the 99 unlucky copies were not killed but put in prison, I would argue that surprise is warranted. The survival bias is real, but it only applies when an experiment is run on a set of people and only a subset of them can observe the outcome afterwards.
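The asymmetry between the two variants can be made concrete with a tiny calculation. This is only an illustration of the argument above: the 100-clones/1-winner numbers come from the thought experiment, and the function name is my own.

```python
def fraction_of_observers_with_money(kill_losers):
    """Among clones who still exist after the experiment, what
    fraction received the money? (100 clones, one random winner.)"""
    clones = 100
    winners = 1
    # In the "kill" variant only the winner remains to reflect on
    # the outcome; in the "prison" variant all 100 copies do.
    observers = winners if kill_losers else clones
    return winners / observers

print(fraction_of_observers_with_money(True))   # 1.0  -> no surprise
print(fraction_of_observers_with_money(False))  # 0.01 -> surprise warranted
```

In the kill variant, every observer who can ask "should I be surprised?" holds the money, so the answer is no; in the prison variant, 99% of observers don't, so a survivor holding it has genuinely learned something.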

Now consider how many times we narrowly avoided nuclear war. The default theory for why this happened is that we got lucky. But if nuclear war always results in your death [1], and if Many-Worlds is true, then our still being alive isn't surprising at all; rather, it's the only observation possible.

Okay, so let's examine the Fermi Problem. Let p be the probability that primitive life on some planet results in a species inventing space travel, and let n be the number of other planets in reach with primitive life on them. A classical explanation for our observations requires either that species who reach Earth generally choose to leave us undisturbed, or that (1-p)^n be sufficiently large (that's the probability that no alien species in reach makes it to space travel). One way this could be the case is if the first step towards intelligent life is extremely hard, so that p is actually fairly small.
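As a quick sanity check on the (1-p)^n term, here is a minimal sketch; the values of p and n below are hypothetical placeholders, not estimates from the post.

```python
def prob_no_neighbor_reaches_space(p, n):
    """Probability that none of n planets with primitive life
    produces a space-traveling species, assuming each succeeds
    independently with probability p."""
    return (1 - p) ** n

# Hypothetical values: a tiny p keeps the product near 1 even for
# a huge n, while a merely modest p drives it toward 0.
print(prob_no_neighbor_reaches_space(1e-10, 10**6))  # close to 1
print(prob_no_neighbor_reaches_space(1e-3, 10**6))   # close to 0
```

The steep sensitivity to p is the point: the classical explanation stands or falls on whether p really is tiny.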

The Many-Worlds view of the survival of our own species helps the classical explanation by making it more plausible that p is very small. A mixture of both might also be true, with p small enough and n moderate enough that (1-p)^n stays close to 1.

What I'm arguing in this post is that we should consider a different explanation that works through the survival bias. Suppose that, when a space-traveling species reaches another planet, they don't generally leave it to itself; rather, they almost always end life there. Then the only possible observation we could make is the current one, regardless of the values of p and n. Put plainly: there are lots of technologically mature species out there, they do travel to other planets, and in a large majority of worlds they've reached Earth and humanity doesn't exist. But because of quantum physics, there are still worlds where a slim chance has come true, and this is one such world.
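The conditioning step can be sketched as a toy Monte Carlo. Here q, the per-world probability that a space-faring species has already reached Earth, is a made-up parameter; the point is only that the conditional observation is the same whatever q is (as long as q is below 1).

```python
import random

def survival_bias_demo(q, trials=100_000, seed=0):
    """Returns (fraction of worlds with surviving observers,
    fraction of those worlds that observe no aliens)."""
    rng = random.Random(seed)
    # Humanity survives only in worlds where no arrival happened.
    survived = sum(1 for _ in range(trials) if rng.random() >= q)
    # By construction, every surviving observer sees an empty sky,
    # so the conditional observation is 1.0 for any q < 1.
    return survived / trials, 1.0

frac_surviving, frac_empty_sky = survival_bias_demo(0.99)
print(frac_surviving)   # around 0.01: humanity rarely survives
print(frac_empty_sky)   # 1.0: yet survivors always see no aliens
```

Even at q = 0.99, where humanity is extinguished in 99% of worlds, the observers in the remaining 1% see exactly what we see.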

But is that assumption plausible? Many might disagree, but I would say yes. A paperclipper scenario on a reachable planet would certainly lead to the extinction of life on Earth, but even a species with an aligned AI would probably find more effective ways to use this planet than to allow life and suffering to continue here, especially considering that, in the vast majority of cases, life on Earth would be incredibly primitive at the time of their arrival. The question seems to depend primarily on how one imagines the morality of a technologically mature civilization.

[Footnote #1]: I don't know how wrong this assumption is. If someone feels qualified to estimate the probability of personal survival in the case that any one of the incidents listed on Wikipedia had gone wrong, please feel free to do so.

comment by waveman · 2018-05-05T22:00:10.233Z · score: 8 (2 votes) · LW · GW

To add to this, if we assume

a) many universes (1),

b) that it is unlikely that any given random universe supports intelligent life (2),

then a universe that does support intelligent life would most likely just barely do so. That is a kind of prediction.

And sure enough, our universe does seem to barely support intelligent life. We only appeared after some 14 billion years. Vast portions of the universe are devoid of life. It appears that after a while all life will die out and the universe will be sterile forever. Humans were down to a few thousand breeding pairs and could easily have gone extinct.

(1) There are different kinds of many worlds. Quantum many-worlds, but also it is possible there are many worlds from multiple inflationary processes, each with their own physics constants, and some people think that every mathematically possible universe exists in some sense.

(2) Most toy universes seem to be degenerate in some way e.g. everything collapses into a black hole, no atoms form, etc.

comment by alexei (alexei.andreev) · 2018-05-07T07:41:51.466Z · score: 5 (1 votes) · LW · GW

Yeah, makes sense. Also note that if Many Worlds is true and quantum immortality exists, you will never (from your own point of view) die.

comment by sil ver (sil-ver) · 2018-05-07T09:26:10.126Z · score: 4 (1 votes) · LW · GW

This is actually not obvious to me, probably because I don't understand thermodynamics. My sense was that (given some assumptions which may or may not turn out false) the universe eventually ends in a heat death, which should happen after a finite amount of time in all universes.

comment by lifelonglearner · 2018-05-05T15:55:06.828Z · score: 5 (1 votes) · LW · GW

Quick clarification: Does the reasoning in the 99:1 scenario differ from the sort of reasoning used in anthropic [LW · GW] reasoning? (I'm unsure of how that differs a lot from survivorship bias.)

comment by sil ver (sil-ver) · 2018-05-05T16:18:25.006Z · score: 4 (1 votes) · LW · GW

(Edited) I think it differs from the reasoning used in that post by Eliezer, but not from the original anthropic principle. But I'm not particularly qualified to answer this.