Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours

post by mako yass (MakoYass) · 2019-08-18T04:22:53.879Z · LW · GW · 5 comments

Contents

  Definitions
  What this argument is for
  The argument
5 comments

Definitions

"Universe" can no longer be said to mean "everything", such a definition wouldn't be able to explain the existence of the word "multiverse". I define universe as a region of existence that, from the inside, is difficult to see beyond.

I define "Multiverse" as: Everything, with a connoted reminder; "everything" can be presumed to be much larger and weirder than "everything that you have seen or heard of".


What this argument is for

This argument is meant to disprove the version of the simulation argument in which our simulators hail from universes much more complex than our own. Complex physics would afford much more powerful computers (I leave proving this point as an exercise for the reader). If we had to guess what our simulators might look like, our imagination might go first to universes where simulating an entire pocket universe like ours is easy, universes that stand to us as we stand to Flatland or to Conway's Game of Life. We might imagine universes with more spatial dimensions, or with forces that we lack.

I will argue that this would be vanishingly unlikely.

This argument does not refute the more common bounded simulation argument, in which the simulation is hosted within a simple universe (this includes ancestor simulations). It does carve that argument down a bit, though, and that seems to be something that, if true, would be useful to know.


The argument

The first fork of the argument is that a more intricate machine is much less likely to generate an interesting output.

Life needs an interesting output: a very even combination of possibility, stability, and randomness. The more variables you add to the equation, the smaller the hospitable region within the configuration space becomes. The hospitable configuration-region within our own physics already appears to be tiny (Wikipedia: anthropic coincidences), and I'm sure it is much tinier than is evidenced there. The more variables a machine has to align before it can support life, the more vanishingly small the cradle will be within that machine's configuration space.
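A minimal Monte Carlo sketch of that intuition, with everything in it assumed purely for illustration (uniform parameters, a single narrow tolerance band standing in for "hospitable"): the fraction of parameter settings in which every constant lands in its band shrinks exponentially with the number of constants.

```python
import random

def hospitable_fraction(dims: int, tolerance: float = 0.1, samples: int = 100_000) -> float:
    """Estimate the fraction of random parameter settings in which every one of
    `dims` constants lands inside a band of width `tolerance` centred at 0.5."""
    hits = 0
    for _ in range(samples):
        point = [random.random() for _ in range(dims)]
        if all(abs(x - 0.5) < tolerance / 2 for x in point):
            hits += 1
    return hits / samples

for d in (1, 2, 3, 4):
    print(d, hospitable_fraction(d))
# Expected output is roughly 0.1, 0.01, 0.001, 0.0001: the exact fraction is
# tolerance ** dims, i.e. the hospitable region shrinks exponentially as
# variables are added.
```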


The second fork of the argument is that complex physics is simply the defining feature of a theory that fails Kolmogorov's razor (our favoured formalisation of Occam's razor).

If we are to define some prior distribution over what exists, out beyond what we can see, Kolmogorov complexity seems like a sensible metric to use. A universe generated by a small machine is much more likely a priori (perhaps we should assume it occurs with much greater frequency) than a universe that can only be generated by a large machine.

If you have faith in Solomonoff induction, you must assign lower measure to complex universes even before you consider those universes' propensity to spawn life.
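A toy illustration of the length-weighted prior, under the usual simplification that a universe describable by a k-bit program gets un-normalised weight 2^-k; the specific bit counts below are invented for illustration, not claims about actual physics.

```python
def length_prior(k_bits: int) -> float:
    """Un-normalised prior weight of a universe whose shortest description is k bits long."""
    return 2.0 ** -k_bits

simple = length_prior(100)     # hypothetical description length of physics like ours
complex_ = length_prior(300)   # hypothetical richer physics: 200 extra bits of structure

print(complex_ / simple)       # 2**-200, about 6e-61: a couple of hundred extra bits of
                               # description already carry an astronomical prior penalty
```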


I claim that one large metaphysical number will be outweighed by another large metaphysical number. I propose that the maximum number of simple simulated universes that could be hosted within a supercomplex universe is unlikely to outnumber the natural instances of simple universes that lie about in the multiverse's bulk.
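To make the shape of that comparison concrete, here is a back-of-envelope sketch in log space; every number in it is an assumption chosen only to show how the two "large metaphysical numbers" trade off, not something the argument asserts.

```python
# Work in log2 space so the tiny prior weights don't underflow ordinary floats.
log2_prior_simple = -1_000    # assumed description length (bits) of a simple universe
log2_prior_complex = -1_500   # assumed description length of a supercomplex host universe
log2_sims_per_host = 300      # assumed (generously huge) count of simple-universe
                              # simulations each supercomplex host can run

# log-measure of finding yourself in a simulation hosted by a supercomplex universe:
log2_simulated = log2_prior_complex + log2_sims_per_host   # -1200
# log-measure of finding yourself in a naturally occurring simple universe:
log2_natural = log2_prior_simple                           # -1000

print(log2_simulated < log2_natural)  # True on these numbers: the 500-bit prior
                                      # penalty swamps even 2**300 simulations per host
```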

5 comments

Comments sorted by top scores.

comment by shminux · 2019-08-18T05:03:16.930Z · LW(p) · GW(p)

I reflexively downvoted this, so I feel obliged to explain why. Mostly because it reads to me like content-free word salad repeating buzzwords like Solomonoff induction, Kolmogorov complexity, and Occam's razor. And it claims to disprove something it doesn't even clearly define. Not trying to impose my opinion on others here, just figured I'd write it out, since being silently downvoted sucks, at least for me.

Replies from: Viliam
comment by Viliam · 2019-08-18T12:40:38.433Z · LW(p) · GW(p)

My understanding is that the article makes these claims:

1. Universes with "more complex rules" than ours are actually less likely to contain life, because there are more ways things could go wrong.
2. Universes with "more complex rules" are a priori less likely.
Therefore: If our universe is a simulation in another universe, the parent universe likely doesn't have "more complex rules" than ours, because the probability penalty for having "more complex rules" outweighs the fact that such a universe could easily find enough computing power to simulate many universes like ours.

I am not defending the assumptions, nor the conclusion, only trying to provide a summary with fewer buzzwords. (Actually, I agree with assumption 2, but I am not convinced about the rest.)

comment by TAG · 2019-08-18T23:58:01.572Z · LW(p) · GW(p)

If we are to define some prior distribution over what exists, out beyond what we can see, Kolmogorov complexity seems like a sensible metric to use. A universe generated by a small machine is much more likely a priori (perhaps we should assume it occurs with much greater frequency) than a universe that can only be generated by a large machine.

But an unsimulated universe is likeliest of all, by the same reasoning.

Actually, you don't need to use K-complexity specifically... most versions of Occam's razor weigh against a simulated universe.

I propose that the maximum number of simple simulated universes that could be hosted within a supercomplex universe is unlikely to outnumber the natural instances of simple universes that lie about in the multiverse's bulk.

That's an uncountable infinity in many versions of MW theory, so it's hard to exceed. But if you are going to treat MW theory as the main alternative to simulationism, you need to argue for it to some extent.

comment by mako yass (MakoYass) · 2019-08-18T04:29:47.529Z · LW(p) · GW(p)

I should note, I don't know how to argue persuasively for faith in Solomonoff induction (especially as a model of the shape of the multiverse). It's sort of at the root of our epistemology. We believe it because we have to ground truth on something, and it seems to work better than anything else.

I can only hope someone will be able to take this argument and formalise it more thoroughly, in the same way that Hofstadter's superrationality has been lifted up into FDT and stuff (does MIRI's family of decision theories have a name? Is it "LDTs"? I've been wanting to call them "reflective decision theories" (because they reflect each other, and they reflect upon themselves), but that seemed to be already in use. (Though, maybe we shouldn't let that stop us!))

Replies from: Gurkenglas
comment by Gurkenglas · 2019-08-20T22:19:04.751Z · LW(p) · GW(p)

I'd say that the only way to persuade someone using epistemology A of epistemology B is to show that A endorses B. Humans have a natural epistemology that can be idealized as a Bayesian prior of hypotheses being more or less plausible, interacting with worldly observations. "The world runs on math." starts out with some plausibility, and then quickly drowns out its alternatives given the right evidence. Getting to Solomonoff Induction is then just a matter of ruling out the alternatives, like a variant of Occam's razor which counts postulated entities. (That one is ruled out because it forbids postulating galaxies made of billions of stars.)

In the end, our posterior is still human-specializing-to-math-specializing-to-Solomonoff. If we find some way to interact with uncomputable entities, we will modify Solomonoff to not need to run on Turing machines. If we find that Archangel Uriel ported the universe to a more stable substrate than divine essence in 500 BC, we will continue to function with only slight existential distress.