Evidence For Simulation

post by TruePath · 2012-01-27T23:07:42.694Z · LW · GW · Legacy · 25 comments

Contents

  Computational Fingerprints
  Software Fingerprints
  Design Fingerprints
  Other Fingerprints
25 comments

The recent article on overcomingbias suggesting the Fermi paradox might be evidence our universe is indeed a simulation prompted me to wonder how one would go about gathering evidence for or against the hypothesis that we are living in a simulation.  The Fermi paradox isn't very good evidence, but there are much more promising places to look.  Of course, there is no surefire way to learn that one isn't in a simulation; nothing prevents a simulation from perfectly simulating a non-simulation universe.  But there are certainly features of the universe that seem more likely if the universe is simulated, and their presence or absence thus gives us evidence about whether we are in a simulation.


In particular, the strategy suggested here is to consider the kind of fingerprints we might leave if we were writing a massive simulation.  Of course, the simulating creatures/processes may not labor under the same kinds of restrictions we do in writing simulations (their laws of physics might support fundamentally different computational devices, and any intelligence behind such a simulation might be totally alien).  However, it's certainly reasonable to think we might be simulated by creatures like us, so it's worth checking for the kinds of fingerprints we might leave in a simulation.


Computational Fingerprints

Simulations we write face several limitations on the computational power they can bring to bear on the problem, and these limitations give rise to mitigation strategies we might observe in our own universe.  These limitations include the following:

  1. Lack of access to non-computable oracles (except perhaps physical randomness).

    While theoretically nothing prevents the laws of physics from providing non-computable oracles, e.g., some experiment one could perform that discerns whether a given Turing machine halts (the halting problem, 0′), all indications suggest our universe does not provide such oracles.  Thus our simulations are limited to modeling computable behavior.  We would have no way to simulate a universe that had non-computable fundamental laws of physics (except perhaps randomness).

    It's tempting to conclude that the fact that our universe apparently follows computable laws of physics, modulo randomness, provides evidence that we are a simulation, but this isn't entirely clear.  After all, had our laws of physics provided access to non-computable oracles, we would presumably not expect simulations to be so limited either.  Still, this is probably weak evidence for simulation, as such non-computable behavior might well exist in the simulating universe yet be practically infeasible to consult from computer hardware.  Thus our probability of seeing non-computable behavior should be higher conditional on not being a simulation than conditional on being a simulation.
  2. Limited ability to access true random sources.

    The most compelling evidence of simulation we could discover would be the signature of a pseudo-random number generator in the outcomes of 'random' QM events (a toy version of such a test is sketched after this list).  Of course, as above, the simulating computers might have easy access to truly random number generators, but it's also reasonable to think they lack practical access to true random numbers at a sufficient rate.
  3. Limited computational resources. 

    We always want our simulations to run faster and require fewer resources, but we are limited by the power of our hardware.  In response we often resort to less accurate approximations when possible, or otherwise engineer our simulation to require fewer computational resources.  This might appear in a simulated universe in several ways.
    • Computationally easy basic laws of physics. For instance, the underlying linearity of QM (absent collapse) is evidence we are living in a simulation, as such computations have low computational complexity.  Another interesting piece of evidence would be discovering an efficient global algorithm that generates/uses collapse to speed computation.
    • Limited detail/minimal feature size.  An efficient simulation would be as coarse-grained as possible while still yielding the desired behavior.  Since we don't know what the desired behavior of a universe simulation might be, it's hard to evaluate this criterion, but the indications that space is fundamentally quantized (rather than allowing structure at arbitrarily small scales) seem to be evidence for simulation.
    • Substitution of approximate calculations for expensive calculations in certain circumstances.  Weak evidence could be gained here merely by observing that the large-scale behavior of the universe admits efficient, accurate approximations, but the key piece of data supporting a simulated universe would be observations revealing that the universe sometimes behaved as if it were following a less accurate approximation rather than behaving as fundamental physics prescribes.  For instance, discovering that distant galaxies behave like a classical approximation rather than a quantum system would be extremely strong evidence.
    • Ability to screen off or delay calculations in regions that aren't of interest.  A simulation would be more efficient if it allowed regions of less interest to go unsimulated, or at least to have that simulation delayed, without impacting the regions of greater interest.  While the finite speed of light arguably provides a way to delay simulation of regions of lesser interest, QM's preservation of information and space-like quantum correlations may outweigh the finite speed of light on this point, tipping it towards non-simulation.
  4. Limitations on precision.

    Arguably this is just a variant of 3, but it has some different considerations.  As with 3, we would expect a simulation to bottom out and not provide arbitrarily fine-grained structure, but in simulations precision issues also bring with them questions of stability.  If the laws of physics turn out to be relatively unaffected by tiny computational errors, that would push in the direction of simulation, but if they are chaotic and quickly spiral out of control in response to these errors, it would push against simulation.  Since linear systems are virtually always stable, the linearity of QM is yet again evidence for simulation (see the stability sketch after this list).
  5. Limitations on sequential processing power.

    We find that finite speed limits on communication and other barriers prevent building arbitrarily fast single-core processors.  Thus we would expect a simulated universe to be more likely to admit highly parallel algorithms.  While the finite speed of light provides some level of parallelizability (there is no need to share all information with all processing units immediately), space-like QM correlations push against parallelizability.  However, given the linearity of QM, the most efficient parallel algorithms might well be semi-global algorithms like those used for various kinds of matrix manipulation.  It would be most interesting if collapse could be shown to be a requirement/byproduct of such efficient algorithms.
  6. Imperfect hardware.

    Finally, there is the hope that one might discover something like the Pentium division bug in the behavior of the universe.  Similarly, one might hope to discover unexplained correlations in deviations from normal behavior, e.g., correlations that occur at evenly spaced locations relative to some frame of reference, arising from transient errors in particular pieces of hardware.
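
A minimal sketch of the kind of test point 2 has in mind, in Python, with a deliberately weak generator (a linear congruential generator) standing in for whatever the simulators might hypothetically use.  A full-period LCG never repeats a value within its period, so a large sample driven by one shows far fewer "birthday" collisions than true randomness predicts:

    import random

    def lcg_stream(seed, n, a=1103515245, c=12345, m=2**31):
        """Yield n raw outputs of a full-period linear congruential generator."""
        x = seed
        for _ in range(n):
            x = (a * x + c) % m
            yield x

    def count_collisions(values):
        """How many values in the sample are repeats."""
        return len(values) - len(set(values))

    n, m = 200_000, 2**31
    lcg_sample = list(lcg_stream(seed=42, n=n))
    random_sample = [random.randrange(m) for _ in range(n)]  # stand-in for 'truly random' outcomes

    # Uniform random draws should collide about n^2 / (2m), roughly 9 times here;
    # the full-period LCG collides exactly 0 times: a detectable fingerprint.
    print("LCG collisions:   ", count_collisions(lcg_sample))
    print("random collisions:", count_collisions(random_sample))

A real search would use far more powerful statistical batteries, and a cryptographically strong generator would pass them all; the point is only that some families of pseudo-random generators do leave detectable signatures.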
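
And for point 4, a toy contrast between a stable (contractive) linear update and a chaotic nonlinear one, showing what a one-part-in-a-billion 'hardware error' does in each case.  The maps are illustrative stand-ins, not physics:

    def linear_step(x):
        return 0.9 * x + 0.1          # contractive linear map: errors shrink each step

    def chaotic_step(x):
        return 3.9 * x * (1.0 - x)    # logistic map in its chaotic regime: errors grow

    def divergence(step, x0, eps=1e-9, iters=60):
        """Run two copies differing by a tiny perturbation; return how far apart they end up."""
        a, b = x0, x0 + eps
        for _ in range(iters):
            a, b = step(a), step(b)
        return abs(a - b)

    print("linear: ", divergence(linear_step, 0.5))   # ~1e-12: the error has decayed away
    print("chaotic:", divergence(chaotic_step, 0.5))  # order 0.1 to 1: the error dominates the state

In the first case a simulator can get away with sloppy, cheap arithmetic; in the second, tiny shortcuts visibly corrupt the simulated history.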

Software Fingerprints

Another type of fingerprint that might be left is the kind resulting from the conceptual/organizational difficulties that occur in the software design process.  For instance, we might find fingerprints by looking for:

  1. Outright errors, particularly hard-to-spot/identify errors like race conditions.  Such errors might carry spillover information about other parts of the software design that would let us distinguish them from non-simulation physical effects.  For instance, if an error occurs in a pattern reminiscent of a loop a simulation might execute, but doesn't correspond to any plausible physical law, that would be good evidence it was truly an error.
  2. Conceptual simplicity in design.  We might expect (as we apparently see) an easily drawn line between initial conditions and the rules of the simulation, rather than physical laws that can't be so easily divided up, e.g., laws that take the form of global constraint satisfaction.  Also, relatively short laws, rather than a long regress into greater and greater complexity at higher and higher energies, would be expected in a simulation (but would be very, very weak evidence).
  3. Evidence of concrete representations.  Even though, mathematically, relativity favors no reference frame over another, it is often conceptually and computationally desirable to compute in a particular reference frame (just as it's often best to do linear algebra on a computer relative to an explicit basis).  One might see evidence for such an effect in differences in the precision of results, or in rounding artifacts (like those seen in resized images).

Design Fingerprints

This category is so difficult that I'm not really going to say much about it, but I'm including it for completeness.  If our universe is a simulation created by some intentional creature, we might expect to see certain features receive more attention than others.  Maybe we would see some really odd jiggering of initial conditions just to make sure certain events of interest occurred, but without a good idea of what is of interest it is hard to see how this could be checked.  Another potential way for design fingerprints to show up is in the ease of data collection from the simulation.  One might expect a simulation to make it particularly easy to sift the interesting information out from the rest of the data, but again we don't have any idea what 'interesting' might be.


Other Fingerprints

I'm hoping readers will suggest some interesting new ideas as to what one might look for if one were serious about gathering evidence about whether we are in a simulation.

25 comments

Comments sorted by top scores.

comment by AlexMennen · 2012-01-28T07:46:49.201Z · LW(p) · GW(p)

Also relatively short laws rather than a long regress into greater and greater complexity at higher and higher energies would be expected in a simulation (but would be very very weak evidence).

If we use Occam's razor, I think you got that backwards. Conditional on us not being in a simulation, we should be in close to the simplest possible universe that could sustain complex life. But it would be difficult for the simulators to figure out the simplest design that would get them what they want, and even if they could, they might choose to sacrifice some simplicity for ease of execution (e.g. wavefunction collapse, as Normal_Anomaly suggested).

Also, faster-than-light neutrinos could be a bug in a simulation.

Replies from: TruePath
comment by TruePath · 2018-08-15T14:54:09.749Z · LW(p) · GW(p)

Which way you think this goes probably depends on just how strongly you think Occam's razor should be applied. We are all compelled to let the probability of a theory's truth go to zero as its Kolmogorov complexity goes to infinity, but there is no prima facie reason to think it drops off particularly fast or slow. If you think, as I do, that the favoring of simpler scientific laws is only relatively weak, while intelligent creatures would quite strongly favor simplicity as a cognitive technique for managing complexity, you get my conclusion. But I'll admit the other direction isn't implausible.

comment by Normal_Anomaly · 2012-01-28T02:11:47.031Z · LW(p) · GW(p)

Would discovering that a wavefunction collapse postulate exists be evidence for simulation? A simulation that actually computed all Everett branches would demand exponentially more resources, so a simulator would be more likely to prune branches either randomly (true or pseudo-) or according to some criterion.
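
To put rough numbers on the exponential claim (a back-of-envelope sketch; the qubit count is chosen purely for illustration):

    # Cost of tracking a full quantum state versus a single pruned branch.
    n_qubits = 50
    bytes_per_amplitude = 16                                  # one complex128 number
    full_state = (2 ** n_qubits) * bytes_per_amplitude
    print(f"full state vector: {full_state / 1e15:.0f} PB")   # ~18 petabytes for just 50 qubits
    print(f"one pruned branch: {n_qubits} bits of outcomes")  # essentially free by comparison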

Replies from: TruePath
comment by TruePath · 2014-06-22T01:00:37.844Z · LW(p) · GW(p)

No, since experientially we already know that we don't perceive the world as if all Everett branches are computed.

In other words, what is up for discovery is not 'not all Everett branches are fully realized'; that's something we, from our apparent standpoint as belonging to a single such branch, could never actually know. All we could discover is that the collapse of the wavefunction is observable inside our world.

In other words, nothing stops the aliens from simply not computing plenty of Everett branches while leaving no trace in our observables to tell us that only one branch is actually real.

comment by Dmytry · 2012-01-28T14:18:28.248Z · LW(p) · GW(p)

I think the best argument against simulation is that, within the super-universe, simulated minds implemented more directly (e.g. simulated at the level of neurons) and provided with rough VR (akin to some kind of videogame) can enormously outnumber minds in simulated universes where, as here, minds and everything else appear to be implemented on top of laws of physics like ours, in an extremely computationally inefficient way (a slowdown of at least a factor of 10^30, I'd say).

If we are in a simulation created by intelligent beings, we should expect (with overwhelming odds) to find ourselves in a universe whose laws were to some extent designed to minimize the computational power required, even if such universes use up only a very small fraction of the computational power spent simulating universes there.

If we are in a universe that simply exists, we should expect to find the simplest possible laws of physics that allow for our existence, with no regard whatsoever for minimizing the computational power necessary. Our universe looks very much like the latter case and entirely unlike the former.

Replies from: TruePath
comment by TruePath · 2018-08-15T15:09:51.151Z · LW(p) · GW(p)

No, because we want the probability of being a simulation conditional on having complex surroundings, not the probability of having complex surroundings conditional on being a simulation. The fact that a very great number of simulated beings are created in simple universes doesn't mean that none are ever simulated in complex ones, nor does it tell us anything about whether being such a simulation is more likely than being in a physical universe.
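
In symbols (just Bayes' rule, to fix the direction of conditioning):

    P(simulated | complex surroundings) = P(complex surroundings | simulated) * P(simulated) / P(complex surroundings)

A small P(complex | simulated) can still leave P(simulated | complex) large if simulations are common enough overall.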

comment by roystgnr · 2012-01-28T16:36:28.449Z · LW(p) · GW(p)

Since linear systems are virtually always stable

I disagree. See the backwards heat equation for a simple example:

du/dt = -laplacian(u)

The higher the frequencies you allow in the solution, the faster tiny bits of noise blow up on you.
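
That blow-up is easy to see numerically (a sketch: under the backwards heat equation, each Fourier mode of wavenumber k grows like exp(k^2 t)):

    import math

    # A fixed-size speck of noise, injected into modes of increasing frequency,
    # grows like exp(k^2 * t) when the heat equation is run backwards.
    t, noise = 0.1, 1e-12
    for k in (1, 5, 10, 20):
        print(f"k={k:2d}: noise grows to {noise * math.exp(k * k * t):.3e}")
    # k= 1: 1.105e-12  (harmless)
    # k=20: 2.354e+05  (the noise now dominates any true solution)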

Replies from: TruePath
comment by TruePath · 2018-08-15T15:01:42.159Z · LW(p) · GW(p)

Ok, this is a good point. I should have added a requirement that the true solution is C-infinity on the part of the manifold that isn't in our temporal past. The backwards heat equation is ill-posed for this reason; it can't be propagated arbitrarily far forward (i.e., back).

comment by Shmi (shminux) · 2012-01-28T01:48:12.685Z · LW(p) · GW(p)

The most compelling evidence of simulation we could discover would be the signature of a pseudo-random number generator in the outcomes of 'random' QM events.

The Bell inequality says that this is not likely to happen, since pseudo-random number generators are deterministic and so can be treated as hidden variables. There is a chance that they are non-local hidden variables, but that would imply some weird things regarding the time flows in the simulation and in the simulator (for example, there cannot be a bijection between the two).

Replies from: Dmytry
comment by Dmytry · 2012-01-28T14:29:03.034Z · LW(p) · GW(p)

Bell's inequality is neatly explained in MWI, though, with no intrinsic randomness whatsoever (just the thermodynamic randomness inside the observer).

Replies from: shminux
comment by Shmi (shminux) · 2012-01-28T18:01:45.462Z · LW(p) · GW(p)

I must have missed that post in the quantum physics sequence, feel free to link it.

Replies from: Dmytry
comment by Dmytry · 2012-01-28T18:39:53.049Z · LW(p) · GW(p)

Here. Note: all observations in MWI end up with the observer decohering.

It's rather easy to misunderstand Bell's theorem and overstate its applicability. It's a bit like how relativity doesn't prevent the dot of a laser projector from moving faster than light across a sufficiently distant screen.

Bell's theorem rules out local hidden variables that move around together with the particles. It doesn't rule out locality in general, and it doesn't rule out a lack of randomness. (MWI doesn't have any objective randomness.)

Something tangential: in CS, the "non-deterministic" in "NP-complete" and "non-deterministic Turing machine" refers to a machine that would fork, or would magically pick the transition that leads to a solution, rather than a machine that chooses some truly random transition from the list.
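
For the concrete version of what Bell's theorem does and doesn't rule out, here is a minimal Monte Carlo sketch (the hidden-variable model is a standard textbook toy, not anyone's serious proposal): a local model in which each particle pair carries a shared random angle tops out at |S| = 2 in the CHSH test, while quantum mechanics predicts 2*sqrt(2), about 2.83, for the same measurement settings.

    import math, random

    def outcome(setting, lam):
        """Deterministic local rule: the result is fixed by the shared hidden angle."""
        return 1 if math.cos(lam - setting) >= 0 else -1

    def correlation(a, b, trials=200_000):
        total = 0
        for _ in range(trials):
            lam = random.uniform(0, 2 * math.pi)         # hidden variable carried by the pair
            total += outcome(a, lam) * -outcome(b, lam)  # perfectly anti-correlated partners
        return total / trials

    # Standard CHSH measurement settings.
    a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    S = correlation(a, b) - correlation(a, b2) + correlation(a2, b) + correlation(a2, b2)
    print(f"local hidden variables: |S| = {abs(S):.2f} (CHSH bound: 2, up to sampling noise)")
    print(f"quantum prediction:     |S| = {2 * math.sqrt(2):.2f} (singlet state)")

No local assignment of pre-existing outcomes can push |S| past 2; observed violations are what force the choice between non-locality, no hidden variables, or an MWI-style account.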

comment by [deleted] · 2012-01-28T10:35:57.180Z · LW(p) · GW(p)

One thing that confuses me about these discussions, and I'm very willing to be shown where my reasoning is wrong, is that there seems to be an implicit assumption that the simulators must follow any of the rules they've imposed upon us. If I simulate a universe powered by the energy generated by avocados, would the avocado beings try to spot an avocado limit, or an order to the avocados?

A simulator could have a completely different understanding as to how the universe works.

I would guess the argument against this is: why else would we be simulated, if not to be a reflection of the universe above? I'm not sure I buy this, or necessarily assign a high probability to it.

Replies from: TruePath
comment by TruePath · 2014-06-22T00:57:10.701Z · LW(p) · GW(p)

I tried to avoid assuming this in the above discussion. You are correct that I do assume that the physics of the simulating world has two properties.

1) Effective physical computation (for the purposes of simulation) is the result of repeated, essentially finite decisions. In other words, the simulating world does not have access to an oracle that vastly aids in the computation of the simulated world. In other words, they aren't simulating us by merely measuring when atoms decay in their world, with that just happening to tell them data about a coherent, lawlike physical reality.

I don't think this is so much an assumption as a definition of what it means to be simulated. If the description of our universe is embedded in the natural laws of the simulating universe, we aren't so much a simulation as just a tiny part of the simulating universe.

2) I do assume that serial computation is more difficult to perform than parallel computation, i.e., information can't be effectively transmitted infinitely fast in the simulating universe. 'Effectively' is an important caveat there, since even a world with an infinite speed of light would ultimately have to rely on signals from sufficiently far off to avoid detection problems.

This is something that is surely plausible. Maybe it isn't. THAT IS WHY I DON'T CLAIM THESE CONSIDERATIONS CAN EVER GIVE US A STRONG REASON TO BELIEVE WE AREN'T A SIMULATION. I do think they could give us strong reasons to believe we are.

comment by Cthulhoo · 2012-01-28T18:46:55.414Z · LW(p) · GW(p)

A (maybe naive) question: how could we know that the world in which the simulation runs bears many similarities to our own? I can imagine, e.g., us running a simulation of Conway's Game of Life. It could be that what we think should be hard to simulate in our world is instead rather easy in a world running on different physics. Is there any source on this subject?

Replies from: Thomas
comment by Thomas · 2012-01-29T09:05:34.428Z · LW(p) · GW(p)

Of course. It was Fredkin who suggested, decades ago, that we live in a natural simulation. The world he called "Other" was of a different physics, and we would be in a simulation running in that Other universe.

Much more likely than that we have some humanoid programmers above, I guess.

Replies from: Cthulhoo
comment by Cthulhoo · 2012-01-29T11:07:41.566Z · LW(p) · GW(p)

Thank you for the answer. I often have the impression that in this kind of discussion the Other universe is taken to be very similar to ours for apparently no particularly good reason, and I was wondering if I was missing something.

comment by Saladin · 2012-02-06T21:22:28.723Z · LW(p) · GW(p)

Wouldn't it be rational to assume that whatever/whoever designed the simulation would do so for the same reasons that we know all intelligent life complies with: survival/reproduction and maximizing its pleasure / minimizing pain?

A priori assumptions aren't the best ones, but it seems to me that would be a valid starting point, and it leads to two conclusions:

a) the designer is drastically handicapped in its resources and our very limited simulation is the only one running (hence the question of why it is exactly as it is: why this design at all, if we're talking in several "episodes"?)

b) the designer can run all the simulations it wants simultaneously, and ours isn't special in any particular way besides being a functional tool (one of many) providing the above max. p/p to the designer

If we assume a), then the limitations/errors of the simulation would be more severe in every way, making it easier to detect what the author lists. Also, our one simulation would have to be an optimal compromise to achieve the very limited, but still maximal, p/p for the designer. We could talk about variety of sorts, but only variety with a clear and optimal purpose would count. What is so special, then, about our known configuration of physical constants? It would seem that a strong anthropic principle would apply: only a universe with intelligent (even simulated) life and physical constants similar to our own would be required for an evolutionary way for this life to evolve, or to think it has evolved. I would guess that the world outside our simulation is subject to similar physics and evolution as is known, in a simplified way, in our simulation, by this same anthropic principle.

If b) is the case and we're only one simulation of many, that would imply there are no severe restrictions on the resources and computational power of the designer. Our simulation would therefore be a lot more detailed, with less room (if any) to find errors or any other kind of proof that we're living in a simulation. Setting aside parallel processing of the same simulation with different but relevant permutations, what could we tell about other simulations running in parallel with ours? That they are very different from our simulation. Since resources aren't a problem, variety for max. p/p is the key. The designer could arbitrarily create simulations that are not long-term sustainable but allow for scripts and vistas impossible to experience in a simulation similar to our own. It could use the resources to explore all relevant (or potentially relevant) possible world simulations and allocate resources to constantly find new ones. All computationally accessible and relevant worlds would be running in parallel (because there is no need for an "experience cap"). The only limit would be that of an act utilitarian: to run those scenarios that, in the long run, bring the most pleasure.

The level of detail of the simulation is the key. If it's very limited, so is the world outside it, and our simulation is the best compromise (best possible world) to run, a fact that could be analysed quite intensely.

If it's very detailed (but we still managed to prove we're in a simulation), then we're only a very small drop of paint in a very big picture. But I would guess in this case that our detailed simulation would allow for additional sub-simulations that we could create ourselves. The same could be true for a), but with much greater limitations (requiring limited memory/experiences and/or pleasure loops: very limited ways of maximizing our own pleasure).

Replies from: TruePath, APMason
comment by TruePath · 2018-08-15T15:18:09.519Z · LW(p) · GW(p)

Why assume whatever beings simulated us evolved?

Now I'm sure you're going to say: well, a universe where intelligent beings just pop into existence fully formed is surely less simple than one where they evolve. However, when you give it some more thought, that's not true, and it's doubtful whether Occam's razor even applies to initial conditions.

I mean, suppose for a moment the universe is perfectly deterministic (Newtonian, or a no-collapse interpretation). In that case, the Kolmogorov complexity of a world starting with a big bang that gives rise to intelligent creatures can't be much less, and probably is much more, than that of one with intelligent creatures simply popping into existence fully formed. After all, I can always just augment the description of the big bang initial conditions with 'and then run the laws of physics for x years' when measuring the complexity.
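
A rough way to make that augmentation step precise, in standard Kolmogorov-complexity notation (constants suppressed; s_0 is the big-bang initial condition and L the deterministic laws):

    K(\text{state at time } x) \le K(s_0) + K(L) + K(x) + O(1), \qquad K(x) \le \log_2 x + O(\log \log x)

That is, "initial conditions + laws + a timer" is already a compact description of the later, creature-containing state, so a fully formed starting point cannot be much more complex than the evolved state it copies.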

Replies from: cousin_it
comment by cousin_it · 2018-08-15T17:20:49.937Z · LW(p) · GW(p)

Nice argument! But note that in such a world, all evidence of the past (like fossils) will look like the creatures indeed evolved. So for purposes of most future decisions, the creatures can safely assume that they evolved. To break that, you need to spend more K-complexity.

comment by APMason · 2012-02-06T21:33:03.405Z · LW(p) · GW(p)

Wouldn't it be rational to assume that whatever/whoever designed the simulation would do so for the same reasons that we know all intelligent life complies with: survival/reproduction and maximizing its pleasure / minimizing pain?

I see two problems with this:

  1. Alien minds are alien, and
  2. that really doesn't seem to exhaust the motives of intelligent life. It would seem to recommend wireheading to us.
Replies from: Saladin
comment by Saladin · 2012-02-07T17:00:15.207Z · LW(p) · GW(p)
  1. If alien means "not comprehensible" (not even through our best imagination), then it's folly to talk about such a thing. If we cannot even imagine something to be realistically possible, then for all practical purposes (until objectively shown otherwise) it isn't. Or, using modal logic: possibly possible = not realistically possible; physically/logically possible = realistically possible. The latter always has bigger weight and, by Occam, higher probability (a higher chance to be correct / closer to the truth).

  2. If we imagine the designer is not acting irrationally or randomly, then all potential motives come down to survival/reproduction and max. p/p. The notion of max. p/p is directly related to the degree of intelligence and self-awareness of the organism, but survival/reproduction is hardwired into all the evolutionary types of life we know.

Replies from: APMason
comment by APMason · 2012-02-07T17:13:19.444Z · LW(p) · GW(p)

By "alien" I really did just mean "different". There are comprehensible possible minds that are nothing like ours.

If we imagine the designer is not acting irrationally or randomly, then all potential motives come down to survival/reproduction and max. p/p.

I don't think this is true. Imagine Omega comes to you and says, "Look, I can cure death - nobody will ever die ever again, and the only price you have to pay for this is a) you can never have children, and b) your memory will be wiped, and you will be continuously misled, so that you still think people are dying. To you, the world won't look any different. Will you take this deal?" I don't think it would be acting randomly or irrationally to take that deal - big, big gain for relatively little cost, even though your (personal) survival and reproduction and (personal) max. p/p. aren't affected by it. Humans have complicated values - there are lots of things that motivate us. There's no reason to assume that the simulation-makers would be simpler.

comment by [deleted] · 2012-01-29T06:13:21.783Z · LW(p) · GW(p)

As with 3, we would expect a simulation to bottom out and not provide arbitrarily fine-grained structure, but in simulations precision issues also bring with them questions of stability. If the laws of physics turn out to be relatively unaffected by tiny computational errors, that would push in the direction of simulation, but if they are chaotic and quickly spiral out of control in response to these errors, it would push against simulation.

We can expect the laws of physics to be relatively stable, simulation or no, due to anthropic reasoning. If we lived in a universe where the laws of physics were not stable (on a timescale short enough for us to notice), it would be very difficult for intelligent life to form.

Replies from: TruePath
comment by TruePath · 2012-01-29T23:12:12.467Z · LW(p) · GW(p)

Here, stability refers to numerical stability, i.e., whether minor errors in the computation accumulate over time and cause the results to go wildly astray, or whether small random errors cancel out, or at least don't blow up.