Simulation Argument errors

post by Will_Newsome · 2011-06-10T12:15:08.254Z

I was reading the simulation argument again for kicks, and a few errors tripped me up for a bit. I figured I'd point them out here in case anyone's interested or has noticed the same problems. If I've made a mistake in my analysis, please let me know so I can promptly put a bolded warning saying so at the top of this article. (I do not endorse the method of thinking that involves naive anthropics or probabilities of simulation, but I am nonetheless willing to play with that framework sometimes for communication's sake.)

We can reasonably argue that all three of the propositions at the end of section 4 of the paper, titled "The core of the simulation argument", are false. Most human-level technological civilizations could survive to reach a posthuman stage (fp = .9), and most of those could want (fI = .9) and be able to run lots of ancestor-simulations; yet there could conceivably be no observers with human-type experiences living in simulations (fsim = 0). Why? Because not all human-level technological civilizations are human technological civilizations; it could easily be argued that most aren't. Human civilization could be part of the fraction of human-level technological civilizations that do not survive to reach a posthuman stage, or that survive but do not want to run lots of ancestor-simulations. Thus there would be no human ancestor-simulations even if there are many, many alien ancestor-simulations whose observers humans do not share an observer moment reference class with.
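For reference, here is the core formula from section 4 of the paper, in my transcription (N̄ is the average number of ancestor-simulations run by a posthuman civilization, NI the average number run by civilizations interested in doing so, and H the average number of individuals that have lived in a civilization before it reaches posthumanity):

$$ f_{sim} = \frac{f_p \, \bar{N} \, H}{f_p \, \bar{N} \, H + H} = \frac{f_p \, f_I \, N_I}{f_p \, f_I \, N_I + 1}, \qquad \bar{N} = f_I \, N_I $$

With fp = fI = .9 and NI large, the right-hand side approaches 1; on the "human-level" reading, though, nothing in that arithmetic stops the human portion of fsim from being 0.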

Nitpicking? Not quite. This forces us to change "fraction of all human-level technological civilizations that survive to reach a posthuman stage" to "probability of human civilization reaching a posthuman stage", but then some of the discussion in the paper's Interpretation section (section 6) sounds pretty weird, because it's comparing human civilization to other human-level civilizations. The equivocation on "posthuman" makes various other statements and passages in the original article false-esque or ambiguous, and these would need to be changed. fsim should be changed to fancestor_sim as well; we might be in non-ancestor simulations. The fraction of us in ancestor-simulations is just one possible lower bound on the fraction of us in simulations generally. Luckily, besides that error, I think the paper mostly avoids insinuating that we are unlikely to be in a simulation if we are not in an ancestor-simulation.
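In symbols, with the fancestor_sim term proposed above (my notation, not the paper's):

$$ f_{\mathrm{sim}} \geq f_{\mathrm{ancestor\_sim}} $$

since simulated ancestors are only one kind of simulated observer with human-type experiences.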

The abstract of the paper differs from section 4 in using "human" instead of "human-level": "This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation." Using the "descended from humans" definition of "posthuman", we see that this argument works; however, it is not supported by section 4, which currently fails to restrict itself to human civilizations. Using the "very technologically advanced" definition of "posthuman", the argument fails for the reasons given above. Either way the wording should be made clearer, especially considering that section 6 talks about posthuman civilizations that aren't posthuman. The abstract also doesn't match the conclusion, despite their similar structure.

The conclusion reads more like section 4, and thus fails in the same way. Not only that, the conclusion says something really weird: "In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3)." I hope this isn't implying that the credences should sum to 1, which would be absurd. After making the corrections suggested above, it is easy to see that 90% confidence in each of (1), (2), and (3) simultaneously is justifiable. (A vaguely plausible scenario to go with that: human civilization gets uFAIed; alien civilizations that don't get uFAIed don't waste time simulating their ancestors but instead simulate millions of possible sibling civilizations that got uFAIed, for reasons of acausal trade plus diminishing-marginal-utility functions or summat; and thus we're in one of those sibling simulations while the aliens try to compute our values, our game-theoretic trustworthiness, et cetera.)
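To see why summing to 1 would be absurd (my arithmetic, not the paper's): the three propositions are not mutually exclusive, so, for instance,

$$ \Pr(1) = \Pr(2) = \Pr(3) = 0.9 \implies \Pr(1) + \Pr(2) + \Pr(3) = 2.7 $$

is perfectly coherent; only credences over mutually exclusive, exhaustive alternatives have to sum to 1.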

All that said, despite the current problems with the structure of its less important supporting arguments, the final sentence remains true: "Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation."

9 comments


comment by JenniferRM · 2011-06-11T17:19:02.016Z

My current favorite simulation-argument scenario is that we're one run in a vast Monte Carlo wargaming simulation aimed at figuring out what sort of competition an alien species from another galaxy is likely to find in the Milky Way when it comes into contact with whoever is "really" in the source version of our galaxy :-)

comment by AlexM · 2011-06-10T19:47:56.415Z

Our current civilization runs lots of computer simulations, and nearly all of them are for entertainment purposes. For every "ancestor simulation" trying to be accurate there are millions of WoW and other games with no attempt at realism whatsoever.

Therefore, if we are in a simulation, we are orcs and trolls waiting to be slaughtered by players for a few measly XP :-P

comment by DanielLC · 2011-06-11T20:09:32.576Z

humans do not share an observer moment reference class with.

Wait, are you saying that species is picked first, then member? For example, if there are 100 billion humans, and 100 trillion Andromedarians, you'd be equally likely to be either species, and thus 1000 times more likely to be a given human?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-06-11T23:59:25.946Z

Not sure I understand. But nah, I'm saying that due to different reference classes it would be meaningless for a human to calculate the probability that they'd end up as themselves instead of as an alien ancestor-sim. It's confused, but that's Bostrom's framework. So ten quadrillion alien sims don't increase the probability that you're a sim; the indifference principle no longer holds.

Replies from: DanielLC
comment by DanielLC · 2011-06-12T00:58:23.181Z

Let me put it this way. Ignore for a second what species you are. What's the probability that you're human, given that there are 100 billion humans and 100 trillion Andromedarians? They can't simply be "different reference classes". You can be any of them. You only know which one you are because you checked, which should have updated your priors about which is more common, and thus about which started the simulations.
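Concretely, with those numbers (my arithmetic, assuming you sample uniformly from all observers in one shared reference class):

$$ \Pr(\text{human}) = \frac{10^{11}}{10^{11} + 10^{14}} \approx \frac{1}{1001} $$

whereas on the species-first model, each species gets probability 1/2 up front, which is what makes any given human about 1000 times more likely than any given Andromedarian.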

Replies from: Will_Newsome
comment by Will_Newsome · 2011-06-12T11:58:52.546Z

They can't simply be "different reference classes".

I think I found out why you're assuming that the aliens and humans from my post share an observer moment reference class (or at least a reference class at the species level). I just noticed that Bostrom's fsim was actually defined as the fraction of human-type experiences in sims, not human-level-civilization experiences in sims, and thus I was wrong to interpret fsim otherwise. (He didn't define it the way he did all the other terms, with the term immediately followed by its definition, which threw me off.)

So now my objection is: Bostrom would have to argue that all human-level civilizations have human-type experiences in order to justify calculating the fraction of simulated human-level-civilization experiences and then drawing conclusions about the fraction of simulated human-type experiences. As it stands, the two sides of his fsim equation don't actually match up.
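Schematically, the mismatch as I see it (my notation, not Bostrom's):

$$ f_{sim} \stackrel{\mathrm{def}}{=} \frac{\#\{\text{simulated human-type experiences}\}}{\#\{\text{human-type experiences}\}} \stackrel{?}{=} \frac{f_p \, f_I \, N_I}{f_p \, f_I \, N_I + 1} $$

where the right-hand side is built from counts of human-level civilizations; the equality only goes through if every human-level civilization has human-type experiences.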

It seems to me that you were using Bostrom's human-type assumption for fsim where I was using a human-level assumption. Given my assumption, all the aliens from my post could be nonsentient swarm intelligences. In this scenario it seems clear to me that the number of swarms and the probability of finding ourselves as us are near-completely unrelated. The swarm could simulate a bazillion ancestor swarms for all we care. To justify using the right-hand side of the equation to talk about fsim, then, Bostrom would have to change the terms to indicate human-type experiences only. This would also mean changing the wording of the conclusion, et cetera.

comment by timtyler · 2011-06-10T18:10:46.523Z

All that said, despite the current problems with the structure of its less important supporting arguments, the final sentence remains true: "Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation."

Not one that looks like this, anyway.

comment by cousin_it · 2011-06-10T12:49:20.858Z

I can't help thinking that such far-fetched arguments can easily fail because they make hidden assumptions about what's possible in reality and what isn't, and there's no easy way to notice these assumptions except by making genuine conceptual progress. For example, if civilization-level quantum suicide works, it makes your final statement false (and also defuses the Fermi paradox).

Replies from: Will_Newsome
comment by Will_Newsome · 2011-06-10T13:02:56.035Z

How do you think it makes Bostrom's conclusion false? A post-QM-suicide singleton makes lots of ancestor sims, but they have little 'measure'? I wouldn't really count that. (Aside from the fact that quantum suicide seems highly unlikely to be optimal, it's the same as the "only care about the copy of you that wins the lottery" confusion.)

I agree that the scenarios Bostrom talks about aren't interesting. I just figured I'd critique them since a lot of people do.