Implications of the Doomsday Argument for x-risk reduction
post by maximkazhenkov
This is a question post.
Lesswrong contains a large intersection of people who are interested in x-risk reduction and people who are aware of the Doomsday Argument. Yet these two things seem to be incompatible with each other, so I'm going to ask about the elephant in the room:
What are your stances on the Doomsday Argument? Does it encourage or discourage you from working on x-risks? Is it a significant concern for you at all?
Do most people working on x-risks believe the Doomsday Argument to be flawed?
If not, it seems to me that avoiding astronomical waste is also astronomically unlikely, reducing x-risk reduction to a moderately important issue for humanity at best. From an individual perspective (or an altruistic perspective with future discounting), perhaps we should focus on having a good time before the inevitable doom? What am I missing?
answer by Donald Hobson
Suppose we ignore the simulation argument and take the evidence of history and astronomy at face value. The Doomsday Argument provides a good prior. However, the evidence that we are on early Earth is really strong, and the prior gets updated away. If we take the simulation hypothesis into account, then there could be one version of us in reality and many in simulations. The relative balance of preventing x-risk vs. having a good time shifts, but still strongly favors caring about x-risk. Also, the Doomsday Argument assigns probability 0 to the possibility that infinitely many people will exist while only finitely many have existed so far, so I don't think I believe it.
answer by Charlie Steiner
People are bad at interpreting the Doomsday Argument, because people are bad at dealing with evidence as Bayesian evidence, rather than a direct statement of the correct belief.
The Doomsday Argument is evidence that we should update on. But it is not a direct statement of the correct belief.
On a parallel earth, humanity is on the decline. Some disaster has struck, and the once-billions of proud humanity have been reduced to a few scattered thousands. Now the last exiles of civilization hide in sealed habitats that they no longer have the supply chains to repair, and they know that soon enough the end will come for them too. But on the other hand, the philosophers among them remark, at least there's the Doomsday Argument, which says that on average we should expect to be in the middle of humanity. So if the DA is right, the current crisis is merely a bottleneck in the middle of humanity's time, and everything will probably work itself out any day now. The last philosopher dies after breathing in contaminated air, with the last words "No! The position I occupy is... very unlikely!"
Your eyes and ears also provide you evidence about the expected span of humanity.
↑ comment by maximkazhenkov ·
2020-04-03T13:44:27.610Z
But isn't the point of the Doomsday Argument that we'll need very very VERY strong evidence to the contrary to have any confidence that we're not doomed? Perhaps we should focus on drastically controlling future population growth to better our chances of prolonged survival?
↑ comment by Charlie Steiner ·
2020-04-03T17:40:44.086Z
To believe that you're a one in a million case (e.g. in the first or last millionth of all humans), you need 20 bits of information (because 2^20 is about 1000000).
So on the one hand, 20 bits can be hard to get if the topic is hard to get reliable information about. But we regularly get more than 20 bits of information about all sorts of questions (reading this comment has probably given you more than 20 bits of information). So how hard this should "feel" depends heavily on how well we can translate our observational data into information about the future of humanity.
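The "20 bits" arithmetic above can be checked directly: each bit of evidence halves the space of possibilities, so believing you are a one-in-a-million case requires log2(1,000,000) bits. A minimal sketch:

```python
import math

# Bits of evidence needed to single out a one-in-a-million case:
# each bit halves the probability, so you need log2(1_000_000) bits.
bits_needed = math.log2(1_000_000)
print(f"{bits_needed:.1f} bits")  # about 19.9, i.e. roughly 20 bits
```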
Extra note: In the case that there are an infinite number of humans, this uniform prior actually breaks down (or else naively you'd think you have a 0.0% chance of being anyone at all), so there can be a finite contribution from the possibility that there are infinite people.
answer by Daniel Kokotajlo
I'd like to see someone explore the apparent contradiction in more detail. Even if I were convinced that we will almost certainly fail, I might still prioritize x-risk reduction, since the stakes are so high.
Anyhow, my guess is that most people think the doomsday argument probably doesn't work. I am not sure myself. If it does work though, its conclusion is not that we will all go extinct soon, but rather that ancestor simulations are one of the main uses of cosmic resources.
↑ comment by Lanrian ·
2020-04-03T08:59:48.831Z
If ancestor simulations are one of the main uses of cosmic resources, we probably will go extinct soon (somewhat depending on how you define extinction), because we're probably in an ancestor simulation that will be turned off. If the simulators were to keep us alive for billions of years, it would be pretty unlikely that we didn't find ourselves living in those billions of years, by the same logic as the doomsday argument.
↑ comment by maximkazhenkov ·
2020-04-03T14:38:56.060Z
Even if I were convinced that we will almost certainly fail, I might still prioritize x-risk reduction, since the stakes are so high.
In this case, it isn't so much that "stakes are high and chances are low so they might cancel out", rather there is an exact inverse proportionality between the stakes and the chances because the Doomsday Argument operates directly through the number of observers.
If it does work though, its conclusion is not that we will all go extinct soon, but rather that ancestor simulations are one of the main uses of cosmic resources.
I feel like being in a simulation is just as terrible a predicament as doom soon; given all the horrible things that happen in our world the simulators are clearly UnFriendly, they could easily turn off the simulation or thwart our efforts at creating an AI. Basically we're already living in a post-Singularity dystopia so it's too late to work on it.
I have a much harder time accepting the Simulation Hypothesis though because there are so many alternative philosophical considerations that could be pursued. Maybe we are (I am) Boltzmann brains. Maybe we live in an inflationary universe that expands 10^37 fold every second. Maybe minds do not need instantiation, or anything like a rock could be an instantiation. Etc.
Going one meta level up, I can't help but feel like a hypocrite to lament the lack of attention given to intelligence explosion and x-risks by the general public yet fail to seriously consider all these other big weird philosophical ideas. Are we (the rationalist community) doing the same as people outside it, just with a slightly shifted Overton Window? When is it OK to sweep ideas under the rug and throw our hands up in the air?
answer by avturchin
There is uncertainty about whether the DA is valid. Around 40 per cent of the scientists who have analysed it think that some version of the DA is true; if we treat this as a prediction market, it is a 40 per cent bet. So there is a 60 per cent chance that the DA is not valid, and thus we should continue to work on x-risk prevention.
Also, it is possible to cheat the DA if we precommit to forgetting our position number in the future (maybe via creating enough simulations of the early past).
answer by shminux
What are your stances on the Doomsday Argument?
The Doomsday Argument strikes me as complete and utter misguided bullshit, notwithstanding the fact that smart and careful physicists have worked on it, including J. Richard Gott and Brandon Carter, whose work in actual physics I have used extensively in my research. There are plenty of good reasons for x-risk work, no need to invoke lousy ones. The main issue with the argument is the misuse of probability.
First, the argument assumes a specific prior distribution (usually uniform) without any justification. One does need a probability distribution to meaningfully talk about probabilities, but there is no reason to pick one specific distribution over another until you have a useful reference class.
Second, the potential infinite expectation value makes any conclusions from the argument moot.
Basically, the Doomsday Argument has zero predictive power. Consider a set of civilizations, each with a fixed number of humans at any given time and each existing for a finite time T, randomly distributed with a distribution function f(T), which does not necessarily have a finite expectation value, standard deviation or any other moments. Now, given a random person from a random civilization at time t, the Doomsday Argument tells them that their civilization will exist for about as long as it has so far. It gives you no clue at all about the shape of f(t) beyond it being non-zero (though maybe measure zero) at t.
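The claim that the argument's output is independent of f can be illustrated with a Monte Carlo sketch (the setup and the helper name are my own, hypothetical construction, not from the post): draw a civilization lifetime T from some distribution, place an observer at a uniformly random time t in [0, T], and ask how often the remaining time T - t exceeds the elapsed time t. Any choice of lifetime distribution gives the same answer, about one half, which is exactly the sense in which the argument tells you nothing about f.

```python
import random

def fraction_with_future_exceeding_past(sample_T, n=100_000, seed=0):
    """Fraction of random observers whose civilization's remaining
    lifetime exceeds its elapsed lifetime, for lifetimes drawn
    from an arbitrary distribution sample_T."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        T = sample_T(rng)              # civilization's total lifetime
        t = rng.uniform(0, T)          # observer's position within it
        if T - t > t:                  # remaining time exceeds elapsed time
            hits += 1
    return hits / n

# Two very different lifetime distributions give the same ~0.5,
# illustrating that the argument carries no information about f.
print(fraction_with_future_exceeding_past(lambda r: r.expovariate(1.0)))
print(fraction_with_future_exceeding_past(lambda r: r.uniform(1, 1000)))
```

Since t is uniform on [0, T], the event T - t > t is just t < T/2, which has probability exactly 1/2 regardless of how T was drawn.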
Now, shall we lay this nonsense to rest and focus on something productive?