[Resolved] Is the SIA doomsday argument wrong?
post by Brian_Tomasik · 2014-12-13T06:01:31.965Z · LW · GW · Legacy · 27 comments
[EDIT: I think the SIA doomsday argument works after all, and my objection to it was based on framing the problem in a misguided way. Feel free to ignore this post or skip to the resolution at the end.]
ORIGINAL POST:
Katja Grace has developed a kind of doomsday argument from SIA combined with the Great Filter. It has been discussed by Robin Hanson, Carl Shulman, and Nick Bostrom. The basic idea is that if the filter comes late, there are more civilizations with organisms like us than if the filter comes early, and more organisms in positions like ours means a higher expected number of (non-fake) experiences that match ours. (I'll ignore simulation-argument possibilities in this post.)
I used to agree with this reasoning. But now I'm not sure, and here's why. Your subjective experience, broadly construed, includes knowledge of a lot of Earth's history and current state, including when life evolved, which creatures evolved, the Earth's mass and distance from the sun, the chemical composition of the soil and atmosphere, and so on. The information that you know about your planet is sufficient to uniquely locate you within the observable universe. Sure, there might be exact copies of you in vastly distant Hubble volumes, and there might be many approximate copies of Earth in somewhat nearer Hubble volumes. But within any reasonable radius, probably what you know about Earth requires that your subjective experiences (if veridical) could only take place on Earth, not on any other planet in our Hubble volume.
If so, then whether there are lots of human-level extraterrestrials (ETs) or none doesn't matter anthropically, because none of those ETs within any reasonable radius could contain your exact experiences. No matter how hard or easy the emergence of human-like life is in general, it can happen on Earth, and your subjective experiences can only exist on Earth (or some planet almost identical to Earth).
A better way to think about SIA is that it favors hypotheses containing more copies of our Hubble volume within the larger universe. Within a given Hubble volume, there can be at most one location where organisms veridically perceive what we perceive.
Katja's blog post on the SIA doomsday draws orange boxes with humans waving their hands. She has us update on knowing we're in the human-level stage, i.e., that we're one of those orange boxes. But we know much more: We know that we're a particular one of those boxes, which is easily distinguished from the others based on what we observe about the world. So any hypothesis that contains us at all will have the same number of boxes containing us (namely, just one box). Hence, no anthropic update.
Am I missing something? :)
RESOLUTION:
The problem with my argument was that I compared the hypothesis "filter is early and you exist on Earth" against "filter is late and you exist on Earth". If the hypotheses already say that you exist on Earth, then there's no more anthropic work to be done. But the heart of the anthropic question is whether an early or late filter predicts that you exist on Earth at all.
Here's an oversimplified example. Suppose that the hypothesis of "early filter" tells us that there are four planets, exactly one of which contains life. "Late filter" says there are four planets, all of which contain life. Suppose for convenience that if life exists on Earth at all, you will exist on Earth. Then P(you exist | early filter) = 1/4 while P(you exist | late filter) = 1. This is where the doomsday update comes from.
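For concreteness, here is that toy update written out as a few lines of Python (a sketch only; the 50/50 prior over the two filter hypotheses is an assumption made for illustration):

```python
# Toy version of the example above: "early filter" -> 4 planets, exactly 1 with life;
# "late filter" -> 4 planets, all with life. Assume you exist iff life arises on Earth.
prior = {"early": 0.5, "late": 0.5}          # assumed 50/50 prior, for illustration
p_you_exist = {"early": 1 / 4, "late": 1.0}  # P(you exist | filter hypothesis)

# Bayes: P(filter | you exist) is proportional to P(you exist | filter) * P(filter)
unnormalized = {f: p_you_exist[f] * prior[f] for f in prior}
total = sum(unnormalized.values())
posterior = {f: round(w / total, 3) for f, w in unnormalized.items()}

print(posterior)  # {'early': 0.2, 'late': 0.8} -- the update toward a late filter
```

The shift from 50/50 to 20/80 in favor of the late filter is exactly the doomsday-flavored update.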
27 comments
Comments sorted by top scores.
comment by Wes_W · 2014-12-13T06:48:52.668Z · LW(p) · GW(p)
This seems like an argument about what reference class is appropriate to use for anthropic reasoning?
I am very confused about anthropics in general, but I'm not sure this even affects the argument. We know that our box is labeled "Earth", but we still don't know if it's an early-filter box, a middle-filter box, or a late-filter box. And since we know that almost all boxes are late-filter boxes...
↑ comment by Brian_Tomasik · 2014-12-13T07:02:20.601Z · LW(p) · GW(p)
Thanks, Wes_W. :)
When using SIA (which is actually an abbreviation of SSA+SIA), there are no reference classes. SIA favors hypotheses in proportion to how many copies of your subjective experiences they contain. Shulman and Bostrom explain why on p. 9 of this paper, in the paragraph beginning with "In the SSA+SIA combination".
We know the filter on Earth (if any) can't be early or middle because we're here, though we don't know what the filter looks like on planets in general. If the filter is late, there are many more boxes at our general stage. But SIA doesn't care how many are at our general stage; it only cares how many are indistinguishable from us (including having the label "Earth" on the box). So no update.
↑ comment by Wes_W · 2014-12-13T07:13:03.082Z · LW(p) · GW(p)
"We know the filter on Earth can't be early or middle because we're here, though we don't know what the filter looks like in general."
I'm confused by this sentence. It sounds like it contains the update you're arguing against. Are you presenting this as part of your own argument, or part of the argument you're opposing? Because if we don't have an early or middle filter, that leaves us with a late filter, and thus impending doomsday.
↑ comment by Brian_Tomasik · 2014-12-13T07:21:26.724Z · LW(p) · GW(p)
Sorry, that sentence was confusing. :/ It wasn't really meant to say anything at all. The "filter" that we're focusing on is a statistical property of planets in general, and it's this property of planets in general that we're trying to evaluate. What happened on Earth has no bearing on that question.
That sentence was also confusing because it made it sound like a filter would happen on Earth, which is not necessarily the case. I edited to say "We know the filter on Earth (if any)", adding the "if any" part.
comment by CarlShulman · 2014-12-15T04:59:07.191Z · LW(p) · GW(p)
"It has been endorsed by Robin Hanson, Carl Shulman, and Nick Bostrom."
The article you cite for Shulman and Bostrom does not endorse the SIA-doomsday argument. It describes it, but:
- Doesn't take a stance on the SIA; it does an analysis of alternatives including SIA
- Argues that the interaction with the Simulation Argument changes the conclusion of the Fermi Paradox SIA Doomsday argument given the assumption of SIA.
↑ comment by Brian_Tomasik · 2014-12-15T05:03:27.217Z · LW(p) · GW(p)
Thanks for the correction! I changed "endorsed" to "discussed" in the OP. What I meant to convey was that these authors endorsed the logic of the argument given the premises (ignoring sim scenarios), rather than that they agreed with the argument all things considered.
↑ comment by CarlShulman · 2014-12-15T05:16:13.873Z · LW(p) · GW(p)
Thanks Brian.
comment by ChristianKl · 2014-12-13T15:54:41.516Z · LW(p) · GW(p)
If there are a lot of human-level extraterrestrials and they don't advance beyond that point but get wiped out, that's not something you should ignore even if those human-level extraterrestrials are a bit different from yourself.
If you do have an argument that you are different from them in a way that protects you from the same fate, then their existence doesn't matter. On the other hand, I don't think that your unique knowledge about Earth makes you different in that way.
comment by DanielFilan · 2014-12-13T22:00:35.244Z · LW(p) · GW(p)
Upvoted for editing the OP to include the resolution.
comment by Strilanc · 2014-12-13T16:10:51.294Z · LW(p) · GW(p)
Wait, I had the impression that this community had come to the consensus that SIA vs. SSA is a problem along the lines of "If a tree falls in the woods and no one's around, does it make a sound?" The question finds an ambiguity in what we mean by "probability" and forces us to grapple with it.
In fact, there's a well-upvoted post with exactly that content.
The Bayesian definition of "probability" is essentially just a number you use in decision-making algorithms constrained to satisfy certain optimality criteria. The optimal number to use in a decision obviously depends on the problem, but the unintuitive and surprising thing is that it can depend on details like how forgetful you are, whether you've been copied, and how payoffs are aggregated.
The post I linked gave some examples:
- If Sleeping Beauty is credited a cumulative dollar every time she correctly guesses the coin flip, she should act as if she assigns a probability of 1/3 to heads, since guessing tails pays off at two awakenings.
- If Sleeping Beauty is given a dollar only if she guesses correctly at every awakening, and otherwise nothing, then she should act as if she assigns a probability of 1/2 to heads.
Other payoff structures give other probabilities. If you never recombine Sleeping Beauty, then the problem starts to become about whether or not she values her alternate self getting money and what she believes her alternate self will do.
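A quick way to see why the payoff structure matters is to compute the expected value of the two pure policies under each scheme. Here's a minimal simulation sketch, assuming the standard setup (fair coin; heads means one awakening, tails means two) and a policy fixed in advance:

```python
import random

def expected_payoff(guess_heads, per_awakening, trials=200_000):
    """Average payoff of a fixed policy (always guess heads, or always tails).

    per_awakening=True  -> a dollar for every correct guess (cumulative).
    per_awakening=False -> a dollar only if every guess in the run is correct.
    """
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5        # fair coin
        awakenings = 1 if heads else 2       # tails -> woken twice
        correct = [guess_heads == heads for _ in range(awakenings)]
        total += sum(correct) if per_awakening else float(all(correct))
    return total / trials

for per_awakening, label in [(True, "cumulative"), (False, "all-or-nothing")]:
    ev_h = expected_payoff(True, per_awakening)
    ev_t = expected_payoff(False, per_awakening)
    print(f"{label:15s}  always-heads ~ {ev_h:.2f}   always-tails ~ {ev_t:.2f}")

# Analytically: cumulative gives ~0.5 vs ~1.0 (bet as if P(heads) = 1/3);
# all-or-nothing gives ~0.5 vs ~0.5 (bet as if P(heads) = 1/2).
```

The break-even betting odds differ between the two schemes even though Beauty's evidence is identical at every awakening, which is exactly the ambiguity the comment is pointing at.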
↑ comment by Kindly · 2014-12-13T20:25:24.658Z · LW(p) · GW(p)
I agree that thinking about payoffs is obviously correct, and ideally anyone talking about SIA and SSA should also keep this in the back of their heads. That doesn't make anthropic assumptions useless, for the following two reasons:
- They give the correct answer for some natural payoff structures.
- They are friendlier to our intuitive ideas of how probability should work.
I don't actually think that they're worth the effort, but that's just a question of presentation. In any case, the particular choice of anthropic language is less important than engaging with the thesis, though the particular avenue of engagement may be along the lines of "SIA is inappropriate for the kind of payoffs involved in the Doomsday Argument, because..."
↑ comment by Brian_Tomasik · 2014-12-13T19:16:29.456Z · LW(p) · GW(p)
I don't think the question pits SSA against SIA; rather, it concerns what SIA itself implies. But I think my argument was wrong, and I've edited the top-level post to explain why.
comment by James_Miller · 2014-12-13T07:34:22.125Z · LW(p) · GW(p)
Imagine an extremely powerful computer in another universe that runs a vast number of simulations of our universe, enough so that someone with exactly your current brain state appears billions of times. I think you could use Katja's argument to show that most of the time the civilization you belong to is doomed.
↑ comment by LizzardWizzard · 2014-12-13T08:03:53.134Z · LW(p) · GW(p)
I guess you are confusing our universe with parallel worlds. It is very doubtful that there is a planet with the same geography and history of evolution that completely replicated ours (even granting that evolution as it happened on Earth is the only way for life to emerge), such that there are ET humans who named one of their countries the USA. So it is obvious that no one up there could share exactly the same experiences as us.
↑ comment by James_Miller · 2014-12-13T08:16:32.769Z · LW(p) · GW(p)
If the many-worlds interpretation of quantum physics is true then there are lots of universes really, really similar to ours.
↑ comment by Brian_Tomasik · 2014-12-13T08:31:45.787Z · LW(p) · GW(p)
Yes, but the Fermi paradox and Great Filter operate within a given branch of the MWI multiverse.
↑ comment by Gunnar_Zarncke · 2014-12-13T10:28:50.236Z · LW(p) · GW(p)
That is a simulation-style argument, which is excluded at the top of the post.
↑ comment by Brian_Tomasik · 2014-12-13T07:46:45.752Z · LW(p) · GW(p)
Sorry, I'm not seeing it. Could you spell out how?
I agree that allowing simulation arguments changes the ball game. For instance, sim args favor universes with lots of simulated copies of you. This requires that at least one alien civilization develops AI within a given local region of the universe, which in turn requires that the filters can't be too strong. But this is different from Katja's argument.
↑ comment by James_Miller · 2014-12-13T08:11:20.641Z · LW(p) · GW(p)
To simplify things, imagine that before you think about the Fermi paradox you calculate that we are in one of four types of universes:
In (A) civilizations frequently reach our level of development and then survive for a long time. In (B) civilizations frequently reach our level of development but then quickly get destroyed. In (C) civilizations rarely reach our level of development but if they do they survive for a long time. In (D) civilizations rarely reach our level of development but get quickly destroyed if they do. You assign some probability to our universe being in (B) or (D) (i.e. we are doomed).
Then you think about the Fermi paradox and realize that it drastically reduces the odds that we are in (A), and this causes you to update toward thinking it more likely that we are in (B) or (D). Then you realize that since more brains like yours exist in (B) than in (C), we are in big trouble.
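One way to put rough numbers on this reasoning (all figures below are invented purely for illustration, not taken from the comment): weight each hypothesis by how many observers at our stage it contains, as SIA prescribes, then condition on seeing a quiet sky.

```python
# Toy SIA-style calculation for the four universe types sketched above.
# "reach" = relative number of civilizations reaching our stage (invented numbers);
# "p_quiet" = chance we'd see no old civilizations in the sky under that hypothesis.
hypotheses = {
    "A": dict(prior=0.25, reach=100, doomed=False, p_quiet=0.01),
    "B": dict(prior=0.25, reach=100, doomed=True,  p_quiet=1.00),
    "C": dict(prior=0.25, reach=1,   doomed=False, p_quiet=0.90),
    "D": dict(prior=0.25, reach=1,   doomed=True,  p_quiet=1.00),
}

# SIA weighting: prior * (number of observers like us) * P(quiet sky | hypothesis)
weights = {h: v["prior"] * v["reach"] * v["p_quiet"] for h, v in hypotheses.items()}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

p_doomed = sum(posterior[h] for h, v in hypotheses.items() if v["doomed"])
print({h: round(p, 3) for h, p in posterior.items()})
print("P(doomed) ~", round(p_doomed, 2))   # high, driven almost entirely by (B)
```

Whether that "number of observers like us" weighting is the right one is exactly what the original post disputes (and later concedes).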
↑ comment by Brian_Tomasik · 2014-12-13T08:30:56.153Z · LW(p) · GW(p)
Thanks! I think this is basically a restatement of Katja's argument. The problem seems to be that comparing the number of brains like ours isn't the right question. The question is how many minds are exactly ours, and this number has to be the same (ignoring simulations) between (B) and (C): namely, there is one civilization exactly like ours in either case.
↑ comment by James_Miller · 2014-12-13T16:50:38.923Z · LW(p) · GW(p)
So if eternal inflation were correct and there were a vast number of universes, many corresponding to (A), (B), (C), and (D) with many minds exactly like yours in each then you would accept Katja's argument?
↑ comment by Brian_Tomasik · 2014-12-13T19:08:26.866Z · LW(p) · GW(p)
Not sure of the relevance of eternal inflation. However, I think I've realized where my argument went astray and have updated the post accordingly. Let me know if we still disagree.
comment by James_Miller · 2014-12-13T16:55:46.672Z · LW(p) · GW(p)
We should be able to find a simple bet that would have a positive expected value if Katja's type of anthropic reasoning is correct, but a negative expected value if it is wrong.
comment by DanielLC · 2014-12-13T07:11:23.363Z · LW(p) · GW(p)
It's highly unlikely a priori that any being would have your exact experiences, but the higher the proportion of human-level life-forms, the more likely it is for there to be one. As such, knowledge that there is one is evidence of a higher proportion of human-level life-forms.
↑ comment by Brian_Tomasik · 2014-12-13T07:26:58.592Z · LW(p) · GW(p)
That's what I originally thought, but the problem is that the probabilities of each life-form having your experiences are not independent. Once we know that one (non-simulated) life-form has your experiences in our region of the universe, this precludes other life-forms having those exact experiences, because the other life forms exist somewhere else, on different-looking planets, and so can't observe exactly what you do.
Given our set of experiences, we filter down the set of possible hypotheses to those that are consistent with our experiences. Of the (non-simulation) hypotheses that remain, they all contain only one copy of us in our local region of the universe.
↑ comment by DanielLC · 2014-12-13T18:11:06.162Z · LW(p) · GW(p)
They don't necessarily exist on different-looking planets. It's highly unlikely that two planets will look exactly the same, but that's just because it's so unlikely for a planet to look exactly like that to begin with. It's not that one planet looking like that prevents another planet from doing so.
A given hypothesis with many human-level lifeforms is less likely to be filtered out than one with few. For example, imagine that it's just as likely a priori for there to be one set as two. There's a 25% chance of life on Earth only, a 25% chance of life on Alpha Centauri only, and a 50% chance of life on both. Then we filter out all the hypotheses without life on Earth, and we're stuck with a 33% chance of life on Earth only and a 67% chance of life on both.
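The arithmetic in that example checks out; here is the conditioning step spelled out (a sketch using only the distribution stated above):

```python
# A priori: 25% life on Earth only, 25% on Alpha Centauri only, 50% on both.
priors = {
    ("Earth",): 0.25,
    ("Alpha Centauri",): 0.25,
    ("Earth", "Alpha Centauri"): 0.50,
}

# Condition on the observation that there is life on Earth.
consistent = {worlds: p for worlds, p in priors.items() if "Earth" in worlds}
total = sum(consistent.values())
posterior = {worlds: round(p / total, 3) for worlds, p in consistent.items()}

print(posterior)  # Earth-only ~ 0.333, both ~ 0.667
```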
↑ comment by Brian_Tomasik · 2014-12-13T19:07:03.545Z · LW(p) · GW(p)
Thanks! What you explain in your second paragraph was what I was missing. The distinction isn't between hypotheses where there's one copy of me versus several (those don't work) but rather between hypotheses where there's one copy of me versus none, and an early filter falsely predicts lots of "none"s.