The doomsday argument is normal
post by avturchin · 2022-04-03T15:17:41.066Z · LW · GW · 44 comments
TL;DR: Crazy predictions based on anthropic reasoning seem crazy only because they contradict our exaggerated expectations of an overly good future.
Let’s look at the two statements:
- I am in the middle of the set of all humans ordered by birth rank, and therefore a large human civilization will exist for only a few more millennia at most.
- I am at the very beginning of human history, and humanity will exist for billions of years and colonize the whole galaxy.
The first statement is the Doomsday Argument (DA) in a nutshell, and it is generally regarded as wrong. It could be wrong for two reasons: either its conclusion is false, or its logic is false.
Often someone argues that the DA's logic must be false because its conclusion is false, and that therefore the optimistic statement 2 is true. That is: we will survive for billions of years; therefore, we will not die out in the next few millennia; and thus the logic of the DA is wrong.
However, the optimistic statement 2 seems true only to a person who is deep into transhumanism, nanotech, etc., but ignores x-risks. For many people, the doomy statement 1 is more probable, especially for those who are deep into climate change, nuclear war risks, and so on.
For a techno-optimist, the conclusion of the DA is wrong not because it is inherently wrong, but because it contradicts our best hopes for a great future; so hating the DA is wishful thinking.
Everything adds up to normality. As the DA uses mediocrity reasoning, it is "normal" by definition, in a tautological sense: I am typical; therefore, I am in the middle; therefore, the end is about as far away as the beginning. The DA does not say that the end is very near.
But the end becomes surprisingly near if we use birth rank for the calculation while using real clock time for the timing of the end. As I said in "Each reference class has its own end", we should define "the end" in the same terms as we define the reference class. Therefore, being in the middle by birth rank is neither surprising nor especially bad. It only says that there will be tens of billions of births in the future. It doesn't even say that there will be a catastrophe at the end.
The DA prediction about being in the middle of the birth ranks becomes bad and surprising only when we compare it with the expected exponential growth of the population (or a very high plateau). In that case, all these billions of births will happen within the next few millennia. This suggests an abrupt end to the exponential growth, which is interpreted as a global catastrophe. But there are obviously other possible population scenarios: the population could slowly decline without extinction, or everyone could become immortal while the birth rate declines (as is now happening in rich countries).
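(A minimal numerical sketch of this point, with round figures of my own: roughly 100 billion births so far, and two assumed future birth rates. It is an illustration, not part of the original argument.)

```python
# Illustrative sketch only: how "about as many births ahead as behind"
# translates into clock time under different assumed birth rates.
# All numbers here are rough assumptions, not claims from the post.

past_births = 100e9            # ~100 billion humans born so far (common rough estimate)
future_births = past_births    # mediocrity assumption: I am near the middle by birth rank

for label, births_per_year in [("high plateau, ~130M births/yr", 130e6),
                               ("declining rate, ~20M births/yr", 20e6)]:
    years_remaining = future_births / births_per_year
    print(f"{label}: ~{years_remaining:,.0f} years of births left")

# high plateau, ~130M births/yr: ~769 years of births left
# declining rate, ~20M births/yr: ~5,000 years of births left
```

Under a high plateau the remaining births are squeezed into centuries; with a declining birth rate the same birth-rank estimate stretches over millennia, with no catastrophe required.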
Anyway, the DA becomes surprising only when it is combined with our optimistic expectation that the human population will continue to be very high.
44 comments
Comments sorted by top scores.
comment by jessicata (jessica.liu.taylor) · 2022-04-03T16:21:16.630Z · LW(p) · GW(p)
I'm reminded of a recent discussion about Boltzmann brains; one of the easiest ways to disbelieve in them is to believe the universe is probably finite and not-extremely-large due to anthropic arguments of the form that also imply the Doomsday paradox.
Perhaps a bigger problem than the Doomsday paradox for SSA is probability-pumping, e.g. Nick Bostrom's Adam and Eve thought experiment.
My default anthropic theory to go with is something like SIA but based on computational density of observers rather than number of observers in the universe (similar to Tomasik's PSA).
(related: How the Simulation Argument Dampens Future Fanaticism)
↑ comment by MackGopherSena · 2022-04-07T00:34:51.488Z · LW(p) · GW(p)
[edited]
↑ comment by avturchin · 2022-04-07T06:36:44.896Z · LW(p) · GW(p)
How?
↑ comment by dadadarren · 2022-04-07T14:48:53.908Z · LW(p) · GW(p)
I think he is describing the paradox of supernatural predicting power suggested by the Doomsday Argument and SSA in general. It boosts the probability of scenarios with a smaller reference class. For example, in the sleeping beauty problem, SSA suggests the probability of heads is 2/3 after learning that today is Monday, even though the toss has yet to happen.
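(A minimal sketch of the SSA calculation being referenced, assuming the reference class is Beauty's awakenings within her own world; my illustration, not part of the original comment.)

```python
# Sketch of the SSA update mentioned above (illustrative assumptions).
# Heads -> Beauty wakes only on Monday; Tails -> she wakes Monday and Tuesday.
# Under SSA, "I" am treated as a random awakening within my own world.

p_heads = 0.5
p_monday_given_heads = 1.0   # heads world: the only awakening is on Monday
p_monday_given_tails = 0.5   # tails world: Monday is one of two awakenings

posterior_heads = (p_heads * p_monday_given_heads) / (
    p_heads * p_monday_given_heads + (1 - p_heads) * p_monday_given_tails)

print(posterior_heads)  # 0.666..., i.e. 2/3 for a coin that has not even been tossed yet
```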
Following similar logic, the astronaut can boost his survival chance by limiting the number of people saved. He can form this intention: select and reheat the passengers one by one; as soon as he finds that he has been reheated, halt the entire process and let all the remaining astronauts die. This links his survival to a smaller reference class, which can boost its probability. How much it helps depends on the "correct" reference class. If the correct reference class contains only the astronauts, the boost would be very significant. If the correct reference class includes all "observers" in the universe, the increase would be marginal, almost zero. But nonetheless, it would be greater than 50%.
This conclusion is very counterintuitive. E.g., if I have been reheated, should I keep to the intention of killing the remaining astronauts? How can it still affect my chance of survival? It seems like retro-causation.
I consider this a counterargument against SSA and the Doomsday Argument. But I like this thought experiment. It shows that in order to actually conduct a sampling process among a group of agents that includes the first person, one has to forfeit the first-person perspective, e.g. take the viewpoint of an impartial outsider, in this case the security cameras.
↑ comment by Flaglandbase · 2022-04-17T13:17:44.012Z · LW(p) · GW(p)
Maybe this sounds more like he is preventing possible futures in which he doesn't exist: like, if I rig a world-destroying bomb to go off when I die, then a larger percentage of possible futures will contain an older me.
comment by dadadarren · 2022-04-03T18:55:10.853Z · LW(p) · GW(p)
The problem with the Doomsday Argument is not that it's too pessimistic about the future. One can be as optimistic or pessimistic about humanity's future as one likes. But according to the DA, that prior belief must inevitably become much bleaker once one considers one's own birth rank.
One's own birth rank is information about which physical person the first person is: "out of all the human beings, which one is me". It is perspective-specific, yet it is used to draw conclusions about something perspective-independent: the total number of human beings that will ever exist. If the DA's logic is correct, then I can have supernatural predicting power as long as the prediction is related to the size of the reference class. E.g., in the sleeping beauty problem, I can predict the probability of heads for a fair coin yet to be tossed to be 2/3. Or, as jessicata [LW · GW] pointed out earlier, consider Bostrom's Adam and Eve problem.
I don't support SIA either, because it uses the existence of the first person as evidence. That boosts the size of the reference class, which counters the DA, but it leads to a bias toward theories with a bigger reference class. I constantly see the claim that SIA is independent of how you define the reference class. That is not true. Are brains-in-vats in the reference class? What about programs simulating humans, or Boltzmann brains? Whether they should be considered valid "observers", i.e. in the same reference class as me, will greatly change the judgments on the related theories once SIA is applied.
I think the problem lies in treating the first-person perspective as an observation selection effect, e.g. considering "I" a random sample from all observers. There is no basis for such an assumption. From a first-person perspective, one naturally knows who "I" is. It is primitively clear; there is no need to explain it with a selection process. On the other hand, if we wish not to take any specific perspective, i.e. to reason "objectively", then there shouldn't be an "I" in the logic at all.
Check this thought-experiment [LW · GW] about my view.
↑ comment by MackGopherSena · 2022-04-07T00:20:32.574Z · LW(p) · GW(p)
[edited]
↑ comment by dadadarren · 2022-04-07T13:42:35.696Z · LW(p) · GW(p)
Not sure what you mean. The Doomsday Argument is about how to think about the information that the first-person "I" is a particular physical person. It suggests treating this the same way as if a random sampling process had selected said physical person. SIA agrees with using a sampling process, but disagrees about the range it is sampled from.
comment by Quintin Pope (quintin-pope) · 2022-04-03T18:00:13.558Z · LW(p) · GW(p)
The actual DA reference class is not "all of humanity" but "all of humanity who think about DA". Another solution to DA is that there are lots of future humans, but they don't think about DA.
↑ comment by dadadarren · 2022-04-03T18:59:43.160Z · LW(p) · GW(p)
That is the position of some DA supporters. Not all. I would even hesitate to call it mainstream.
Anyway, let's say that is the dominant take on the DA. Is avturchin [LW · GW] committing genocide against future generations by discussing it on an open forum, making more people aware of the Doomsday Argument?
↑ comment by Quintin Pope (quintin-pope) · 2022-04-03T20:16:04.114Z · LW(p) · GW(p)
I think that in most futures where we succeed at realizing our cosmic potential, we become competent enough that we stop thinking about doomsday arguments (or at least leave such thoughts to superhuman AIs). But yes, I do think we should discuss the DA less often.
↑ comment by Radford Neal · 2022-04-07T17:52:27.511Z · LW(p) · GW(p)
Since few people have thought of the Doomsday argument independently, and there is lots of discussion of it, why should one look at individual people? Shouldn't the reference class be "civilizations" or "intellectual communities"? And then it's not at all clear that doom is coming anytime soon.
Really, though, the whole idea of picking a "reference class" is just arbitrary, indicating that the whole line of reasoning has no sound basis.
↑ comment by avturchin · 2022-04-08T13:54:55.204Z · LW(p) · GW(p)
As I said in "Each reference class has its own end [LW · GW]", the reference class problem is not a problem, because each class has its own type of end.
In your example, our civilization started to think about the Doomsday Argument around 1973, almost 50 years ago, and around 50 years from now we will stop thinking about it. That is not necessarily a global catastrophe; maybe we will just lose interest. But combined with other forms of the DA (birth rank) and other non-DA ideas, like x-risks, a catastrophe looks like a plausible explanation.
↑ comment by dadadarren · 2022-04-08T19:45:29.562Z · LW(p) · GW(p)
This actually demonstrates the problem further. If we use "civilization" as the reference class, then as you said humans would stop thinking about the DA in about 50 years, since it started about 50 years ago. But what if we use "people thinking about DA" as the reference class? Because the internet gives it more exposure, there are a lot more people thinking about the DA now than in the 80s and 90s. If I am in the middle of all these people, then we would likely stop thinking about the DA a lot sooner.
Similarly, human civilization has existed for about 5000 years, so it should exist for another 5000. But for much of history, the global population was way less than a billion. We are likely around the 100-billionth human ever born. So if we use individual people as the reference class, then with the population boom the end should arrive much sooner.
The forecast of the future changes drastically when different reference classes are used. So for DA to be valid there must be an exclusively "correct" reference class. But they all seem arbitrary.
↑ comment by avturchin · 2022-04-09T07:52:44.611Z · LW(p) · GW(p)
I agree with your prediction: a complex civilization capable of thinking about the DA will collapse soon, in a few decades, but some form of medieval civilization could exist for a few more millennia. This is a completely normal and typical outcome, if we ignore hopes of space exploration.
This staged-collapse prediction is what follows from the idea that "each reference class has its own end": for the reference class of DA thinkers, the end is nigh; for written civilization, it is a few thousand years away.
↑ comment by dadadarren · 2022-04-09T17:41:43.182Z · LW(p) · GW(p)
I would regard the world in 1900 as a "complex civilization capable of thinking about DA"; it's just that nobody bothered to think about it or publish their thoughts. So shouldn't we expect our society to retain that capability for another 120 years? At the same time, we also expect everyone to stop thinking about the DA in 50 years, because the DA has only been discussed for 50 years so far?
Whatever reference class is chosen, the resulting prediction of the future would effectively be a mirror image of the past.
↑ comment by avturchin · 2022-04-13T09:22:27.838Z · LW(p) · GW(p)
BTW, what is your opinion about the mediocrity principle, that is, the idea of the typicality of you, me, and Earth?
↑ comment by dadadarren · 2022-04-13T15:49:35.848Z · LW(p) · GW(p)
I think the very idea of "I am a typical observer" is misguided, because "observer" is a target drawn around where the arrow landed. The arrow is the first-person "I" in this analogy.
Everyone knows who the first-person "I" refers to, since the only subjective experience felt is due to that particular physical body. We then put physical systems similar to this body into a category and give it a name. But which shared feature is chosen to perform this grouping is arbitrary. From my personal perspective, such groups could be middle-aged men, things that can do simple arithmetic, synapsids, carbon-based lifeforms, macroscopic physical systems, etc. It would be rather absurd to think the first-person "I" is typical for all these groups.
Furthermore, what does "typical" among a group really mean? If we look at the feature that defines a category, then of course I am similar to everything else in it, since the grouping is based on my having that feature in the first place. This gives a false sense of mediocrity. But why would I be typical in terms of other features? E.g., for macroscopic physical systems, the defining feature is scale; why should I expect myself to exist at a typical time for this group? There is no reason for it. Various anthropic camps try to support this by regarding "I" as a random sample of some sort. But that is just adding ad-hoc assumptions.
It is not a coincidence that most anthropic theories have trouble defining what "observer" really means, which in turn messes up the reference class. (This is not exclusive to SSA; SIA and FNC are plagued by it too.) That is because it has no hard definition. It is just a circle drawn around the first-person "I" with a radius of anyone's choosing.
Many think "observer" can be conclusively defined as someone or something that is conscious. But what is consciousness in the first place? The only consciousness that anyone has access to is that of the first person: "I know I am conscious, and can never be sure you are not just auto-piloting philosophical zombies." I only guess that other people/animals/programs might also be conscious because of their similarity to myself.
All in all, I feel that people who hold "I am a typical observer" as an indisputable truth haven't taken a hard look at what the words "I", "observer", and "typical" really mean.
↑ comment by avturchin · 2022-04-15T19:05:43.581Z · LW(p) · GW(p)
I think that what you said here and elsewhere could boil down to one of two different views:
- Going from the 1 position to the 3 position in a probabilistic sense is ontologically impossible, period. No meaningful probability updates.
- We need to take a hard look at what "I", "observer", and "typical" are, and only after we clearly define them can we say something meaningful about probabilities.
I tend to agree with the second view here, and I have explored different aspects of it in some of my posts.
↑ comment by dadadarren · 2022-04-16T14:25:55.216Z · LW(p) · GW(p)
I'm not sure what the 1 position and the 3 position mean here. I would summarize my argument as: the first-person perspective is based on subjective experience. It is a primitive notion that cannot be logically analyzed, just as in Euclidean geometry we can't analyze any of its axioms. Take them as given; that's it.
All the rest, like no self-locating probability, perspective disagreement, rejection of the doomsday argument and the presumptuous philosopher, double-halving in sleeping beauty, and rejection of the fine-tuned-universe argument, are just conclusions based on that.
↑ comment by avturchin · 2022-04-17T09:11:44.979Z · LW(p) · GW(p)
1 position = first-person perspective, 3 position = third-person perspective
↑ comment by dadadarren · 2022-04-18T17:15:48.451Z · LW(p) · GW(p)
Well, in that case, yes. The 3rd-person perspective is just shorthand for a god's-eye view. We should not switch perspectives halfway through any given analysis.
↑ comment by avturchin · 2022-04-10T07:30:20.519Z · LW(p) · GW(p)
To get more credible estimates with 90 per cent confidence, it is better to take just the order of magnitude. In that case, the apparently strange overconfidence of DA predictions disappears, as does its mirror structure.
So we can say that both the ability to think about the DA and the actual thinking about it will exist for several more decades.
(Note also that Laplace seems to have been the first to come close to the DA, back in 1801.)
↑ comment by MackGopherSena · 2022-04-19T21:52:04.371Z · LW(p) · GW(p)
[edited]
↑ comment by avturchin · 2022-04-20T08:05:15.819Z · LW(p) · GW(p)
It is like the Laplace sunrise problem: every day the sun rises is a small bit of evidence that it is more likely to rise again. In the same way, if the world didn't end today, that is a small piece of evidence that lets us extend our expected doomsday date.
↑ comment by MackGopherSena · 2022-04-20T14:38:16.411Z · LW(p) · GW(p)
[edited]
↑ comment by Radford Neal · 2022-04-08T16:14:44.796Z · LW(p) · GW(p)
I've read your linked post, and it doesn't convince me. The reasoning doesn't seem rooted in any defensible principles, but is rather just using plausible-sounding heuristics which there is no reason to think will produce consistent results.
The example of the person placed on the unknown-sized grid has a perfectly satisfactory solution using standard Bayesian inference: You have a prior for the number of cells in the row. After observing that you're in cell n, the likelihood function for there being R rows is zero for R less than n, and 1/R for R greater than or equal to n. You multiply the likelihood by the prior and normalize to get a posterior distribution for R. Observing that you're in cell 1 does increase the probability of small values for R, but not necessarily in the exact way you might think from a heuristic about needing to be "typical".
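(A minimal numerical sketch of that update, assuming a uniform prior over the grid size up to some cutoff; purely illustrative.)

```python
# Illustrative sketch of the Bayesian update described above.
# Assumptions (mine): R, the unknown grid size, is between 1 and R_max
# with a uniform prior; I observe that I am in cell n.

import numpy as np

R_max = 20
R = np.arange(1, R_max + 1)
prior = np.full(R_max, 1.0 / R_max)

n = 1                                        # observed cell index
likelihood = np.where(R >= n, 1.0 / R, 0.0)  # P(I am in cell n | R) = 1/R for R >= n

posterior = prior * likelihood
posterior /= posterior.sum()

# Small R gains probability, but only by the exact Bayesian amount,
# not by whatever a "typicality" heuristic would dictate.
print(posterior[:5])
```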
To illustrate the inconsistencies of that heuristic, consider that for as long as humans don't go extinct, we'll probably be using controlled fire, the wheel, and lenses. But fire was controlled hundreds of thousands of years ago, the wheel was invented thousands of years ago, and lenses were invented hundreds of years ago. Depending on which invention you focus on, you get completely different predictions of when humans will go extinct, based on wanting us to be "typical" in the time span of the invention. I think none of these predictions have any validity.
↑ comment by avturchin · 2022-04-09T07:43:46.181Z · LW(p) · GW(p)
“The end of the reference class” is not extinction; the class could end differently. For any question we ask, we simultaneously define the reference class and what we mean by its ending.
In your example of fire, wheels and lenses: imagine that humanity experiences a very long period of civilizational decline. Lenses will disappear first, wheels second, and fire will be the last to go, in millions of years. It is a boring but plausible apocalypse.
↑ comment by Radford Neal · 2022-04-09T17:06:57.260Z · LW(p) · GW(p)
Possible, sure. But the implication of inference from these reference classes is that this future with a long period of civilizational decline is the only likely one - that some catastrophic end in the near future is pretty much ruled out. Much as I'd like to believe that, I don't think one can actually infer that from the history of fire, wheels, and lenses.
↑ comment by avturchin · 2022-04-03T20:52:34.310Z · LW(p) · GW(p)
I agree with you. The correct reference class is only those who think about the DA, and this implies the end is very soon, in a few decades.
But again, this is not surprising news of the kind that could trigger our intuitions. Several x-risks have a high probability of happening in this timeframe. Complex societies with high populations are unstable, and the DA is just another way of saying that.
↑ comment by Alex Zamog (alex-zamog) · 2022-04-04T21:18:21.776Z · LW(p) · GW(p)
imho the correct reference class is: non-genetically-modified humans. After this -- "everyone becomes immortal but the birth rate declines" -- happens to the class, it won't matter who thought about DA earlier.
comment by Shmi (shminux) · 2022-04-03T21:55:24.765Z · LW(p) · GW(p)
You are at a bus stop, and have been waiting for a bus for 5 min. The "doomsday logic" says that you are expected to wait another 5 min. 5 min later without a bus you are expected to wait another 10 min. If you look at the reference class of all bus stop waits, some of them have a bus coming in the next minute, some in 10, some in an hour, some next day, some never (because the route changed). You can't even estimate the expected value of the bus wait time until you narrow the reference class to a subset where "expected value" is even meaningful, let alone finite. To do that, you need extra data other than the time passed. Without it you get literally ZERO information about when the bus is coming. You are stuck in Knightian uncertainty. So it's best not to fret about the Doomsday argument as is, and focus on collecting extra data, like what x-risks are there, what the resolution to the Fermi paradox might be, etc.
↑ comment by avturchin · 2022-04-04T07:15:07.877Z · LW(p) · GW(p)
What you describe here is the Laplace sunrise problem: if the sun has risen 5000 times, what are the chances that it will rise tomorrow? Laplace solved the problem and got almost the same equation as Gott's Doomsday Argument: a 1 in 5002 chance of non-rise tomorrow, which gives around a 50 per cent chance of non-rise within the next 5000 days. But every day his estimate could be updated on the data that the sun has risen again.
But he didn't use anthropic reasoning; instead, he summed over all possible hypotheses about sunrise probabilities consistent with the observations. In any case, he needed some assumption about how the hypotheses are distributed.
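(A minimal sketch of that arithmetic under Laplace's assumption of a uniform prior over the unknown sunrise probability; my illustration.)

```python
# Rule-of-succession arithmetic for the sunrise problem (illustrative).
# Assumes a uniform prior over the unknown daily probability of sunrise.

n = 5000  # observed sunrises

p_no_rise_tomorrow = 1 / (n + 2)   # Laplace: 1 in 5002

def p_all_rise(m, n=n):
    # P(the sun rises on each of the next m days | n past sunrises, uniform prior)
    return (n + 1) / (n + m + 1)

print(p_no_rise_tomorrow)     # ~0.0002
print(1 - p_all_rise(5000))   # ~0.5: roughly even odds of a non-rise within the next 5000 days
```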
↑ comment by Shmi (shminux) · 2022-04-04T08:32:35.285Z · LW(p) · GW(p)
He didn't "solve" it, not in any meaningful sense of the term "solve". He probably implicitly assumed a certain distribution and did the calculation for the next day only. To solve it would mean to gather all possible data about the reasons the sun might not rise, and define what "sun not rising" even means.
↑ comment by avturchin · 2022-04-04T09:03:04.723Z · LW(p) · GW(p)
While the sunrise problem's setup is somewhat crazy, the bus-waiting problem is ubiquitous. For example, I am waiting for some process on my computer to terminate, or for a file to start downloading. The rule of thumb is that if it doesn't terminate within a few minutes, it will not terminate soon, and it is better to kill the process.
Leslie, in "The End of the World", suggested a version of the DA which is independent of assumptions about the probability distributions of events. He suggested that if we assume a deterministic universe without world-branching, then any process has a duration T that is unknown to us but fixed. For example, the time from the previous bus arrival to the next bus arrival is Tb. It is not a random variable; it has a fixed value for today and for this bus, and Omega may know it. Say it is 15 minutes. It doesn't depend on how the arrivals of other buses are distributed, whether they are regular, normally distributed, etc. It applies only to this bus.
Now you arrive at the bus station. You know only two things: the time since the last arrival, and the fact that you came at a random moment relative to bus arrivals. In that case, you can estimate the time until the next bus's arrival according to Doomsday-Argument logic: it will be around the same as the time since the previous arrival.
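(A quick Monte Carlo sketch of this claim, taking the 15-minute interval and the uniformly random arrival time as the assumptions; illustrative only.)

```python
# Illustrative simulation: with a fixed (but unknown to you) interval T between
# buses and a uniformly random arrival time, the remaining wait is within
# a factor of 3 of the elapsed wait about half the time (Gott's 50% rule).

import random

T = 15.0            # minutes between buses; fixed, known only to Omega
trials = 100_000
hits = 0
for _ in range(trials):
    elapsed = random.uniform(0, T)   # time since the last bus when you arrive
    remaining = T - elapsed          # time you still have to wait
    if elapsed / 3 <= remaining <= 3 * elapsed:
        hits += 1

print(hits / trials)  # ~0.5
```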
comment by Victor Novikov (ZT5) · 2022-04-07T07:07:26.735Z · LW(p) · GW(p)
My intuition says: you cannot predict, or gain any evidence about the future, based on anthropic arguments like this.
comment by AprilSR · 2022-04-03T16:08:10.848Z · LW(p) · GW(p)
The general way I’ve thought of the DA is that it’s probably correct reasoning, but it’s not the only relevant evidence. Even if DA gives us a billion-to-one prior against being in the first billionth of humanity, we could easily find strong enough evidence to overcome that prior. (cf https://www.lesswrong.com/posts/JD7fwtRQ27yc8NoqS/strong-evidence-is-common [LW · GW])
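(For scale, a rough sketch of how much evidence overcoming such a prior requires, assuming simple Bayesian odds updating; my arithmetic, not part of the original comment.)

```python
# Rough arithmetic (illustrative): how many bits of evidence are needed
# to move a billion-to-one prior up to even odds.

import math

prior_odds = 1e-9                      # billion-to-one against
bits_needed = math.log2(1 / prior_odds)
print(round(bits_needed, 1))           # ~29.9 bits, i.e. a likelihood ratio of ~10^9
```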
↑ comment by avturchin · 2022-04-03T21:04:15.620Z · LW(p) · GW(p)
What could that be? An alien supercivilization?
↑ comment by AprilSR · 2022-04-03T21:06:02.666Z · LW(p) · GW(p)
It mostly comes down to outlook on x-risk. If we align an AI, then we’re probably good for the future.