A possible solution to the Fermi Paradox
post by sil ver (sil-ver) · 2018-05-05T14:56:03.143Z · score: 10 (3 votes) · LW · GW · 5 comments
[tl;dr: I argue that, if Many Worlds is true, the survival bias might explain the lack of observed alien life in the universe, under the assumption that there are no humans in most worlds where life on a reachable planet has reached technological maturity.]
[Content Warning: possibly unsettling]
Suppose you are offered a deal: you'll be put to sleep, cloned 99 times, and each of the 100 versions of you will be sent to an identical-looking room where they'll be woken up. Suppose that you all share the same consciousness: the lights are on for the same entity in all 100 copies (but they have no way of communicating with each other). A minute after waking up, one randomly chosen clone will receive five million dollars, while the other 99 will die a painless death, so quick that they will neither see it coming nor feel any pain.
It's not relevant for the plausibility of this argument whether you would take this deal (though it might have other implications). For now, suppose you take it. You're put to sleep and next thing you know, you wake up in a room. After you wait for a minute, the experimenter enters the room, hands you your five million dollars, and politely thanks you for your participation.
Should you be surprised? I'd say no. Nothing surprising has happened; in fact, there was only one way this could have gone all along. On the other hand, if the 99 unlucky copies were not killed but put in prison, I would argue that surprise is warranted. The survival bias is real, but it only applies when an experiment is run on a set of people and only a subset of them can observe the outcome afterwards.
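The asymmetry between the two versions can be made concrete with a short simulation (a minimal sketch, not from the post; the function name and the copy count of 100 just mirror the scenario above). We run the experiment many times and collect the outcomes of every observer who is still around to report one:

```python
import random

def run_experiment(kill_losers, n_copies=100):
    """One run of the thought experiment: pick a random winner among
    n_copies clones. Return the outcome seen by each surviving observer."""
    winner = random.randrange(n_copies)
    outcomes = []
    for i in range(n_copies):
        if i == winner:
            outcomes.append("money")
        elif not kill_losers:
            outcomes.append("prison")  # losers survive, so they can observe
        # if kill_losers is True, losers leave no observer behind
    return outcomes

kill_observers = [o for _ in range(10_000) for o in run_experiment(True)]
prison_observers = [o for _ in range(10_000) for o in run_experiment(False)]

# In the kill version, every surviving observer got the money:
print(all(o == "money" for o in kill_observers))  # True
# In the prison version, only 1 in 100 observers did:
print(sum(o == "money" for o in prison_observers) / len(prison_observers))  # 0.01
```

Conditioning on "observers who can report" makes receiving the money certain in the first version and a 1% event in the second, which is exactly why surprise is warranted in one case but not the other.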
Now consider how many times we narrowly avoided nuclear war. The standard explanation for why this happened is that we got lucky. But if nuclear war always results in your death, and if Many-Worlds is true, then our still being alive isn't surprising at all; rather, it's the only observation possible.
Okay, so let's examine the Fermi Problem. Let p be the odds that primitive life on some planet results in a species inventing space travel, and let n be the number of other planets in reach with primitive life on them. A classical explanation for our observations requires either that species who reach earth generally choose to leave us undisturbed, or that (1−p)^n be sufficiently large (that's the probability that no alien species in reach makes it to space travel). One way this could be the case is if the first step towards intelligent life is extremely hard and therefore p is actually extremely small.
The Many-Worlds view of the survival of our own species helps the classical explanation out by making it more plausible that p is very small. A mixture of both might also be true, with p small and n modest enough that (1−p)^n stays close to 1.
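To get a feel for how the classical explanation behaves, here is a tiny sketch of the expression above (the numeric values of p and n are purely illustrative assumptions of mine, not figures from the post):

```python
def prob_silent_sky(p, n):
    """Probability that none of n reachable planets with primitive life
    produces a space-traveling species, each independently with odds p."""
    return (1 - p) ** n

# Illustrative values only: with a tiny p, silence is expected even
# across a huge number of planets; with a merely small p, it isn't.
print(prob_silent_sky(1e-10, 10**6))  # close to 1
print(prob_silent_sky(1e-3, 10**6))   # effectively 0
```

The point of the classical explanation is that observing silence is only unsurprising in the first regime, which is why it leans so heavily on p being extremely small.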
What I'm arguing in this post is that we should consider a different explanation, one that works through the survival bias. Suppose that, when a space-traveling species reaches another planet, it doesn't generally leave that planet to itself; rather, it almost always ends life there. Then the only possible observation we could have is the current one, regardless of the values of p and n. Put plainly: there are lots of technologically mature species out there, they do travel to other planets, and in a large majority of worlds, they've reached earth and humanity doesn't exist. But because of quantum physics, there are still worlds where a slim chance has come true, and this is one such world.
But is that assumption plausible? Many might disagree, but I would say yes. A paperclipper scenario on a reachable planet would certainly lead to the extinction of life on earth, but even a species with an aligned AI would probably find more effective ways to use this planet than to allow life and suffering to continue there, especially considering that, in a vast majority of cases, life on earth would be incredibly primitive at the time of their arrival. The question seems to depend primarily on how one imagines the morality of a technologically mature civilization would look.
[Footnote #1]: I don't know how wrong this assumption is. If someone feels qualified to estimate the probability of personal survival in the case that any one of the incidents listed on Wikipedia had gone wrong, please feel free to do so.