A Problem for the Simulation Hypothesis

post by philosophytorres · 2017-03-16T16:02:19.068Z · score: 1 (2 votes) · LW · GW · Legacy · 11 comments

I've posted about this once before, but here's a more developed version of the idea. Does this pose a serious problem for the simulation hypothesis, or does it merely complicate the idea?

1. Which room am I in?

Imagine two rooms, A and B. At a timeslice t2, there are exactly 1000 people in room B and only 1 person in room A. Neither room contains any clues as to which room it is; e.g., no one can see anyone else in room B. If you were placed in one of these rooms with only the information above, which would you guess you were in? The correct answer appears to be room B. After all, if everyone were to bet that they are in room B, almost everyone would win, whereas if everyone were to bet that they are in room A, almost everyone would lose.

Now imagine that you are told that during a time segment t1 to t2, a total of 100 trillion people had sojourned in room A and only 1 billion in room B. How does this extra information influence your response? The question posed above is not which room you are likely to have been in, all things considered, but which room you are currently in at t2. Insofar as betting odds guide rational belief, it still follows that if everyone at t2 were to bet that they are in room A, almost everyone would lose. This differs from what appears to be the correct conclusion if one reasons across time, from t1 to t2. Thus, we can imagine that at some future moment t3 everyone who ever sojourned in either room A or B is herded into another room C and then asked whether their journeys from t1 to t3 took them through room A or B. In this case, most people would win the bet if they were to point at room A rather than room B.

Let’s complicate this situation. Since more people in total pass through room A than room B, imagine that people are swapped in and out of room A faster than room B. Once in either room, a blindfold is removed and the occupant is asked which room they are in. After they answer, the blindfold is put back on. Thus, there are more total instances of removing blindfolds in room A than room B between t1 and t2. Should this fact change your mind about where you are at exactly t2? Surely one could argue that the directly relevant information is that pertaining to each individual timeslice, rather than the historical details of occupants being swapped in and out of rooms. After all, the bet is being made at a particular timeslice about a particular timeslice, and the fact is that most people who bet at t2 that they are in room B at t2 will win some cash, whereas those who bet that they are in room A will lose.
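The betting arithmetic in this section can be made explicit. The following is a minimal sketch; the numbers are taken directly from the two scenarios above, and nothing beyond them is assumed:

```python
# Betting at timeslice t2 (room populations from Section 1).
in_room_A_at_t2 = 1
in_room_B_at_t2 = 1000

# If everyone present at t2 bets "I am in room B", almost everyone wins.
p_win_bet_B = in_room_B_at_t2 / (in_room_A_at_t2 + in_room_B_at_t2)
assert p_win_bet_B > 0.999

# Across t1..t2, total sojourners: at the later gathering in room C,
# betting that one's journey went through room A is the winning bet.
ever_in_A = 100 * 10**12  # 100 trillion
ever_in_B = 10**9         # 1 billion
p_win_bet_A_overall = ever_in_A / (ever_in_A + ever_in_B)
assert p_win_bet_A_overall > 0.999
```

Both bets are "correct" by their own lights; they simply answer different questions (where am I at t2, versus where did my journey take me), which is the tension the section draws out.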

2. The simulation argument

Nick Bostrom (2003) argues that at least one of the following disjuncts is true: (1) civilizations like ours tend to self-destruct before reaching technological maturity, (2) civilizations like ours tend to reach technological maturity but refrain from running a large number of ancestral simulations, or (3) we are almost certainly in a simulation. The third disjunct corresponds to the “simulation hypothesis.” It is based on the following premises: first, assume the truth of functionalism, i.e., that physical systems that exhibit the right functional organization will give rise to conscious mental states like ours. Second, consider the computational power that could be available to future humans. Bostrom provides a convincing analysis that future humans will have at least the capacity to run a large number of ancestral simulations—or, more generally, simulations in which minds sufficiently “like ours” exist.

The final step of the argument proceeds as follows: if (1) and (2) are false, then we do not self-destruct before reaching a state of technological maturity and do not refrain from running a large number of ancestral simulations. It follows that we run a large number of ancestral simulations. If so, we have no independent knowledge of whether we exist in vivo or in machina. A “bland” version of the principle of indifference thus tells us to distribute our probabilities equally among all the possibilities. Since the number of sims would far exceed the number of non-sims in this scenario, we should infer that we are almost certainly simulated. As Bostrom writes, “it may also be worth to ponder that if everybody were to place a bet on whether they are in a simulation or not, then if people use the bland principle of indifference, and consequently place their money on being in a simulation if they know that that’s where almost all people are, then almost everyone will win their bets. If they bet on not being in a simulation, then almost everyone will lose. It seems better that the bland indifference principle be heeded” (Bostrom 2003).

Now, let us superimpose the scenario of Section 1 onto the simulation argument. Imagine that our posthuman descendants colonize the galaxy and their population grows to 100 billion individuals in total. Imagine further that at t2 they are running 100 trillion simulations, each of which contains 100 billion individuals. The total number of sims thus equals 10^25, so if one of our posthuman descendants were asked whether she is a sim or a non-sim, she should answer that she is almost certainly a sim. Alternatively, imagine that at t2 our posthuman descendants decide to run only a single simulation containing a mere 1 billion sims, ceteris paribus. In this situation, if one of our posthuman descendants were asked whether she is a sim, she should quite clearly answer that she is most likely a non-sim.
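A quick check of the figures in the two scenarios above (nothing new is claimed here; this only verifies the arithmetic):

```python
# Arithmetic for the two posthuman scenarios above.
non_sims = 100 * 10**9                  # 100 billion posthuman individuals
sims = (100 * 10**12) * (100 * 10**9)   # 100 trillion sims x 100 billion each
assert sims == 10**25

# First scenario: sims overwhelmingly outnumber non-sims.
p_sim = sims / (sims + non_sims)
assert p_sim > 0.999999

# Second scenario: a single simulation containing 1 billion sims.
sims_single = 10**9
p_sim_single = sims_single / (sims_single + non_sims)
assert p_sim_single < 0.01  # she is most likely a non-sim
```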

3. Complications

With this in mind, consider a final possible scenario: our posthuman descendants decide to run simulations with relatively small populations in a serial fashion, that is, one at a time. These simulations could be sped up a million-fold to enable complete recapitulations of our evolutionary history (as per Bostrom). The result is that at any given timeslice the total number of non-sims will far exceed the total number of sims, yet across time the total number of sims will accumulate and eventually far exceed the total number of non-sims. Thus, if one takes a bird's-eye view of our posthuman civilization from its inception to its decline (say, at the entropy death of the cosmos) and asks whether a given individual is more likely to have existed in vivo or in machina, it appears that the answer should be "in machina: I was a sim."

But this might not be the right way to reason about the situation. Consider that history is nothing more than a series of timeslices, one after the other. Since the ratio of non-sims to sims favors the former at every possible timeslice, one might argue that one should always answer the question, “Are you right now more likely to exist in vivo or in machina?” with “I probably exist in vivo.” Again, the difficulty that skeptics of this answer must overcome is the ostensible fact that if everyone were to bet on being simulated at any given timeslice—even billions of years after the first serial simulation is run—then nearly everyone would lose, whereas if everyone were to bet that they are a non-sim, then almost everyone would win.
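A toy model makes the two reasoning styles concrete. All numbers here are illustrative assumptions, not part of the argument itself; the only structural assumption is that short-lived sims are swapped out each timeslice while the same non-sims persist:

```python
# Toy model of the serial-simulation scenario (illustrative numbers).
non_sims_per_slice = 100  # non-sims alive at every timeslice
sims_per_slice = 1        # only one small simulation runs at a time
timeslices = 1000

# Timeslice reasoning: at any single moment, most observers alive
# at that moment are non-sims.
p_sim_now = sims_per_slice / (sims_per_slice + non_sims_per_slice)
assert p_sim_now < 0.5

# Atemporal reasoning: counting everyone who ever existed, sims
# accumulate (a fresh sim each slice) while the non-sims are the
# same individuals throughout.
total_sim_lives = sims_per_slice * timeslices
total_non_sim_lives = non_sims_per_slice
p_sim_ever = total_sim_lives / (total_sim_lives + total_non_sim_lives)
assert p_sim_ever > 0.5
```

The two probabilities point in opposite directions, which is exactly the disagreement between the per-timeslice bet and the bird's-eye-view bet.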

The tension here emerges from the difference between timeslice reasoning and the sort of "atemporal" reasoning that Bostrom employs. If the former is epistemically robust, then Bostrom's tripartite argument fails, because none of its disjuncts need be true. This is because the scenario above entails that (a) we survive to reach technological maturity and (b) we run a large number of ancestral simulations, yet (c) we do not have reason to believe that we are in a simulation at any particular moment. The latter proposition depends, of course, on how we run the simulations (serially versus in parallel) and, relatedly, on how we decide to reason about our metaphysical status at each moment in time.

In conclusion, I am unsure whether this constitutes a refutation of Bostrom or merely complicates the picture. At the very least, I believe it does the latter, and that more work on the topic is needed.


Bostrom, Nick. 2003. "Are You Living in a Computer Simulation?" Philosophical Quarterly 53(211): 243-255.



Comments sorted by top scores.

comment by gjm · 2017-03-16T21:58:46.834Z · score: 2 (2 votes) · LW(p) · GW(p)

Given that the inhabitants of these hypothetical simulations don't know the (external) time, your argument seems to require a principle along the following lines: "If at each precise time t, X is more likely true than Y, then overall X is more likely true than Y" -- even when X and Y are funky conditional indexical anthropic things like "if I am alive then I am in a simulation". But no such principle is true.

(In other words, I agree with ike.)

comment by ike · 2017-03-16T19:52:03.810Z · score: 2 (2 votes) · LW(p) · GW(p)

If you don't know the current time, you obviously can't reason as if you did. If we were in a simulation, we wouldn't know the time in the outside world.

Reasoning of the sort "X people exist in state A at time t, and Y people exist in state B at time t, therefore I have a X:Y odds ratio of being in state A compared to state B" only work if you know you're in time t.

If you carefully explicate what information each person being asked to make a decision has, I'm pretty sure your argument would fall apart. You definitely aren't being explicit enough now about whether the people in your toy scenario know what timeslice they're in.

comment by ike · 2017-03-16T19:53:11.264Z · score: 0 (0 votes) · LW(p) · GW(p)

The "directly relevant information" is the information you know, and not any information you don't know.

If you want to construct a bet, do it among all possibly existing people that, as far as they know, could be each other. So any information that one person knows at the time of the bet, everyone else also knows.

If you don't know the time, then the bet is among all similarly situated people who also don't know the time, which may be people in the future.

comment by johnsonmx · 2017-04-05T17:21:06.275Z · score: 0 (0 votes) · LW(p) · GW(p)

I think the elephant in the room is the purpose of the simulation.

Bostrom takes it as a given that future intelligences will be interested in running ancestor simulations. Why is that? If some future posthuman civilization truly masters physics, consciousness, and technology, I don't see them using it to play SimUniverse. That's what we would do with limitless power; it's taking our unextrapolated, 2017 volition and asking what we'd do if we were gods. But that's like asking a 5-year-old what he wants to do when he grows up, then taking the answer seriously.

Ancestor simulations sound cool to us (heck, they sound amazingly interesting to me), but I strongly suspect posthumans would find better uses for their resources.

Instead, I think we should try to reason about the purpose of a simulation from first principles.

Here's an excerpt from Principia Qualia, Appendix F:

Why simulate anything?

At any rate, let's assume the simulation argument is viable, i.e., that it's possible we're in a simulation and, given the anthropic math, plausible that we're in one now.

Although it's possible that we are being simulated for no reason at all, let's assume entities smart enough to simulate universes would have a good reason to do so. So what possible good reason could there be to simulate a universe? Two options come to mind: (a) using the evolution of the physical world to compute something, or (b) something to do with qualia.

In theory, (a) could be tested by assuming that efficient computations will exhibit high degrees of Kolmogorov complexity (incompressibility) from certain viewpoints, and low Kolmogorov complexity from others. We could then formulate an anthropic-aware measure for this applicable from ‘within’ a computational system, and apply it to our observable universe. This is outside the scope of this work.

However, we can offer a suggestion about (b): if our universe is being simulated for some reason associated with qualia, it seems plausible that it has to do with producing a large amount of some kind of particularly interesting or morally relevant qualia.

comment by denimalpaca · 2017-03-22T22:08:28.530Z · score: 0 (0 votes) · LW(p) · GW(p)

"(1) civilizations like ours tend to self-destruct before reaching technological maturity, (2) civilizations like ours tend to reach technological maturity but refrain from running a large number of ancestral simulations, or (3) we are almost certainly in a simulation."

Case 2 seems far, far more likely than case 3, and without a much more specific definition of "technological maturity", I can't make any statement on 1. Why does case 2 seem more likely than 3?

Energy. If we are to run an ancestral simulation that even remotely aims to correctly simulate a phenomenon as complex as weather, the scale of the simulation would probably need to be quite large. We would definitely need to simulate the entire earth, moon, and sun, as the physical relationships among these three are deeply intertwined. Now, let's focus on the sun for a second, because it should provide us with all the evidence we need that such a simulation would be implausible.

The sun has a lot of energy, and simulating it would itself require a lot of energy. To simulate the sun exactly as we know it would take MORE energy than the sun, because the entire energy of the sun must be simulated and we must account for the energy lost due to heat or other factors as an engineering concern. So just to properly simulate the sun, we'd need to generate more energy than the sun has, which seems very implausible given that we can't build a reactor larger than the sun on earth. If we extend this argument to the entire universe, it seems impossible that humans would ever have the energy needed to simulate all the energy in the universe; we could only simulate a part of the universe, or a smaller universe. This again follows from the premise that perfectly simulating something requires more energy than the thing simulated contains.

comment by gjm · 2017-03-23T00:06:35.883Z · score: 0 (0 votes) · LW(p) · GW(p)

To simulate the sun exactly as we know it would take MORE energy than the sun, because the entire energy of the sun must be simulated and we must account for the energy lost due to heat or other factors as an engineering concern.

I don't understand this argument. If it's appealing to a general principle that "simulating something with energy E requires energy at least E" then I don't see any reason why that should be true. Why should it take twice as much energy to simulate a blue photon as a red photon, for instance?

(I am sympathetic to the overall pattern of your argument; I also do not expect civilizations like ours to run a lot of ancestral simulations and have never understood why they should be expected to, and I suspect that one reason why not is that the resources to do it well would be very large and even if it were possible there ought to be more useful things to do with those resources.)

comment by denimalpaca · 2017-03-23T22:17:23.287Z · score: 0 (0 votes) · LW(p) · GW(p)

Let me be a little more clear. Let's assume that we're in a simulation, and that the parent universe hosting ours is the top level (for whatever reason, this is just to avoid turtles all the way down). We know that we can harness the energy of the sun, because not only do plants utilize that energy to metabolize, but we also can harness that energy and use it as electricity; energy can transfer.

Some machine that we're being simulated on must take into account these kinds of interactions and make them happen in some way. The machine must represent the sun in some way, perhaps as 0s and 1s. This encoding takes energy, and if we were to simply encode all the energy of the sun, the potential energy of the sun must exist somewhere in that machine. Even if the sun's information is compressed, it would still have to be decompressed when used (or else we have a "lossy" sun, not good if you don't want your simulations to figure out they're in a simulation) - and compressing/decompressing takes energy.

We know that even in a perfect simulation, the sun must have the same amount of energy as outside the simulation, otherwise it is not a perfect simulation. So if a blue photon has twice as much energy as a red photon, then that fact is what causes twice as much energy to be encoded in a simulated blue photon. This energy encoding is necessary if/when the blue photon interacts with something.

Said another way: If, in our simulation, we encode the energy of physical things with the smallest number of bits possible to describe that thing, and blue photons have twice as much energy as red photons, then it should take X bits to describe the energy of the red photon and 2*X bits to describe the blue photon.

As for the extra energy: as a practical (engineering) matter alone, it would take more energy to simulate a thing even after the thing's encoding is done. In our universe there are no perfect energy transfers; some energy is inevitably lost as heat, so extra energy is needed to overcome this loss. Secondly, if the simulation had any metadata, that would take extra information and hence extra energy.

comment by gjm · 2017-03-24T00:12:02.804Z · score: 0 (0 votes) · LW(p) · GW(p)

I still don't understand. (Less tactfully, I think what you're saying is simply wrong; but I may be missing something.)

Suppose we have one simulated photon with 1000 units of energy and another with 2000 units of energy. Here is the binary representation of the number 1000: 1111101000. And here is the binary representation of the number 2000: 11111010000. The second number is longer -- by one bit -- and therefore may take a little more energy to do things with; but it's only 10% bigger than the first number.

Now, if we imagine that eventually each of those photons gets turned into lots of little blobs carrying one unit of energy each, or in some other way has a bunch of interactions whose number is proportional to its energy, then indeed you end up with an amount of simulation effort proportional to the energy. But it's not clear to me that that must be so. And if most interactions inside the simulation involve the exchange of a quantity of energy that's larger than the amount of energy required to simulate one interaction -- which seems kinda unlikely, which is one reason why I am sympathetic to your argument overall, but again I see no obvious way to rule it out -- then even if simulation effort is proportional to energy the relevant constant of proportionality could be smaller than 1.
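The representation-cost point can be checked directly; here is a quick sketch using the example values from the comment above:

```python
# Encoding cost of the example energies: 2000 is twice 1000, but its
# binary representation is only one bit longer.
assert bin(1000)[2:] == "1111101000"
assert bin(2000)[2:] == "11111010000"
assert (1000).bit_length() == 10
assert (2000).bit_length() == 11
# The bits needed to represent an energy E grow as log2(E), not linearly in E.
```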

comment by denimalpaca · 2017-03-24T15:58:30.983Z · score: 0 (0 votes) · LW(p) · GW(p)

I tried to see if anyone else had previously made my argument (but better); instead I found these arguments:


I think the feasibility argument described here better encapsulates what I'm trying to get at, and I'll defer to this argument until I can better (more mathematically) state mine.

"Yet the number of interactions required to make such a "perfect" simulation are vast, and in some cases require an infinite number of functions operating on each other to describe. Perhaps the only way to solve this would be to assume "simulation" is an analogy for how the universe (operating under the laws of quantum mechanics) acts like a quantum computer - and therefore it can "calculate" itself. But then, that doesn't really say the same thing as "we exist in someone else's simulation"." (from the link).

This conclusion about the universe "simulating itself" is really what I'm trying to get at: it would take the same amount of energy to simulate the universe as there is energy in the universe, so a "self-simulating universe" is the most likely conclusion, which is of course just a base universe.

comment by g_pepper · 2017-03-22T23:10:16.822Z · score: 0 (0 votes) · LW(p) · GW(p)

Case 2 seems far, far more likely than case 3, and without a much more specific definition of "technological maturity", I can't make any statement on 1. Why does case 2 seem more likely than 3?

"Technological maturity" as used in the first disjunct means "capable of running high-fidelity ancestor simulations". So, it sounds like you are arguing for the 1st disjunct (or something very close to it) rather than the second, since you are arguing that, due to energy constraints, a civilization like ours would be incapable of reaching technological maturity.

comment by denimalpaca · 2017-03-23T21:58:55.750Z · score: 0 (0 votes) · LW(p) · GW(p)

Yes, then I'm arguing that case 1 cannot happen. Although I find it a little tedious and tautological (and even more so, reductive) to define technological maturity solely as the technology that makes this disjunction make sense....