# Shock Level 5: Big Worlds and Modal Realism

post by Roko · 2010-05-25T23:19:44.391Z · score: 17 (37 votes) · LW · GW · Legacy · 158 comments

In recent times, science and philosophy have uncovered evidence that there is something very seriously weird about the universe and our place in it. We used to think that there was one planet earth, inside a universe that is very large (at least 10^26 meters in diameter) but that the reachable universe (*future light-cone* in the terminology of special relativity, or *causal future* in the terminology of GR) was finite. Anything outside the reachable universe is irrelevant, since we can't affect it.

However, cosmologists went on to study the process that probably created the universe, known as inflation. Inflation solves a number of mysteries in cosmology, including the flatness problem. The process of inflation seems to create an infinite number of mini-universes, or "inflationary bubbles" - this is known as chaotic inflation theory. The physical parameters and initial conditions of these bubbles are determined randomly, so every possible set of particle masses, force strengths, *etc.* is realized. To quote from this piece by Alan Guth:

*The role of eternal inflation in scientific thinking, however, was greatly boosted by the realization that string theory has no preferred vacuum, but instead has perhaps 10^1000 metastable vacuum-like states. Eternal inflation then has potentially a direct impact on fundamental physics, since it can provide a mechanism to populate the landscape of string vacua. While all of these vacua are described by the same fundamental string theory, the apparent laws of physics at low energies could differ dramatically from one vacuum to another.*

To top this off, the dominant theory about the spacetime manifold we live on is that it is infinitely large in all directions. If you look at this picture of a reconstruction of the large-scale structure of the universe, the idea that we are living in something like an infinite volume with a finite speed-limit and a uniform random distribution of matter and energy that clumps over time becomes plausible.

A final step along this line of increasingly large Big Worlds is modal realism, the idea that all possible worlds exist. Max Tegmark has formalized this as the *Mathematical Universe Hypothesis*: *All structures that exist mathematically also exist physically*.

If any of these theories turn out to be true, then we are living in a *Big World*, a cosmology where every finite collection of atoms, including you, is instantiated infinitely many times, perhaps by the same physical processes that created us here on earth. It is also the case that other life-forms might emerge and use their technological capabilities to create simulations of us. Once an alien civilization reaches the point of being able to create simulations, it can create lots of simulations - really *unreasonably large* numbers of simulated beings can be created in a universe roughly the size of ours^{1,2}, Bostrom's estimate would be something like 10^50. And in other mathematically possible universes with the ability to do an infinite amount of computation in a finite time, you could be simulated an infinite number of times in just one universe.
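The "unreasonably large numbers" claim is just a back-of-envelope multiplication. A minimal sketch, where every input is an illustrative round assumption rather than a figure taken from Bostrom or Sandberg:

```python
# Back-of-envelope for how many simulated beings a universe-scale computer
# could run. Every number below is an assumed round figure for illustration,
# not a value from Bostrom's or Sandberg's papers.
ops_per_second_per_kg = 1e20   # assumed sustained throughput of mature hardware
matter_kg = 1e30               # assumed mass turned into computing substrate (~ a star)
seconds = 1e17                 # assumed runtime (~ a few billion years)
ops_per_mind = 1e17            # assumed ops to run one human-like mind for a lifetime

simulated_minds = ops_per_second_per_kg * matter_kg * seconds / ops_per_mind
print(f"{simulated_minds:.0e}")  # on the order of 1e+50
```

With these deliberately round assumptions the product lands near the 10^50 figure quoted above; the point is only that the estimate is a product of a few astronomically large factors.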

One (incorrect) way of interpreting this is to think of a bunch of "worlds" spread out over the multiverse, most of them uninhabited, some containing weird green aliens, and one containing you, and saying: "Aha! I only care about this one, the others are causally disconnected from it!".

No, this view of reality claims that your current observer-moment is repeated infinitely many times, and looking forward in time, all possible continuations of (you,now) occur, and furthermore there is *no fact of the matter* about which one you will experience, because the quantum MW aspect of the multiverse has already demolished our intuitions about *anticipated subjective experience*^{4}. Think that chocolate bar will taste nice when you bite into it? Well, actually according to Big Worlds, infinitely many of your continuations will bite the chocolate bar and find it turns into a hamster.

I once saw wormholes explained using the sheet of paper metaphor: draw two dots on a sheet of paper, reasonably far apart, imagining the paper distance between them to be an unfathomably large spatial distance, say 10^(10^100) meters. Now fold the sheet so that the two dots touch each other: they are right on top of each other! Of course, wormholes seem fairly unlikely based upon standard physics. The metaphor here is of what is called a *quotient* in mathematics, in particular of a quotient in topology.

But if you combine a functionalist view of mind with big worlds cosmology, then reality becomes the quotient of the set of all possible computations, where all sub-computations that instantiate you are identified. Imagine that you have an infinite piece of paper representing the multiverse, and you draw a dot on it wherever there is a computational process that is the same as the one going on in your brain right now. Now fold the paper up so that all the dots are touching each other, and glue them at that point into one dot. That is your world.
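The fold-and-glue picture is a quotient by an equivalence relation: points are identified when they instantiate the same computation. A minimal sketch, with purely hypothetical labels:

```python
# A minimal sketch of a quotient: "glue together" all points of the
# multiverse that instantiate the same computation. The labels below
# are hypothetical, chosen only for illustration.
points = ["sim_on_earth", "sim_by_aliens", "boltzmann_brain",
          "green_alien", "empty_patch"]

# Equivalence relation: which underlying computation each point instantiates.
instantiates = {
    "sim_on_earth": "you",
    "sim_by_aliens": "you",
    "boltzmann_brain": "you",
    "green_alien": "alien_mind",
    "empty_patch": "no_mind",
}

# The quotient set: one point per equivalence class. All three "you"
# instantiations collapse into a single dot.
quotient = set(instantiates[p] for p in points)
print(sorted(quotient))  # ['alien_mind', 'no_mind', 'you']
```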

Almost all of the histories and futures that feed into your "now" are simulations, by Bostrom's simulation argument (which is no longer shackled by the requirement that the simulations must be performed by our particular descendants - all possible descendants and aliens get to simulate us).

*Future Shock Level 5*^{3} is "the Copernican revolution with respect to your place in the multiverse": the point where you mentally realize that perfectly dry astrophysics implies that there is no unique "you" at the centre of your sphere of concern, analogous to the Copernican revolution that unseated earth from the centre of the solar system. It is considered to be more shocking than any of the previous future shock levels because it destroys the most basic human epistemological assumption: that there is such a thing as *my future*, or such a thing as *the consequence of my actions*.

Shock Level 5 is a good candidate for Dan Dennett's universal acid: an idea so corrosive that if we let it into our minds, everything we care about will be dissolved. You can't change anything in the multiverse - every decision or consequence that you don't make will be made infinitely many times elsewhere by near-identical copies of you. Every victory will be produced, as will every possible defeat.

In "What are probabilities anyway?" Wei Dai suggests a potential solution to your SL5 worries:

All possible worlds are real, and probabilities represent how much I care about each world. (To make sense of this, recall that these probabilities are ultimately multiplied with utilities to form expected utilities in standard decision theories.)

For example, you could get your prior probabilities from the mathematization of Occam's razor, the complexity prior. Then the reason you don't worry that your chocolate bar will turn into a hamster is that the complexity of that hypothesis is higher than the complexity of other hypotheses, such as the chocolate bar just tasting like normal chocolate. But you're not saying that this scenario is unlikely to happen: it is certain to happen, but you just don't care about it.
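A toy version of this complexity prior can be written down directly. The hypotheses and their bit-lengths below are illustrative assumptions; a real Solomonoff-style prior would use program lengths and is uncomputable:

```python
# Toy complexity ("Occam") prior: weight each hypothesis by 2^-K, where K
# is its description length in bits. The descriptions and lengths below
# are illustrative assumptions, not real Kolmogorov complexities.
hypotheses = {
    "chocolate tastes like chocolate": 10,  # short description: low complexity
    "chocolate turns into a hamster": 40,   # needs intervening simulators: high complexity
}

weights = {h: 2.0 ** -k for h, k in hypotheses.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

# The mundane continuation gets nearly all of the care-measure.
print(prior["chocolate turns into a hamster"])  # ~9.3e-10
```

On this reading the hamster world still "happens"; it just receives a vanishingly small share of how much you care.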

Wei's UDT allows you to overcome the decision-theoretic paralysis that would otherwise follow in a Big World: you think of yourself as defining an agent program that controls all of the instantiations of you, so that your decisions do matter. But remember, in order to get decisions out of UDT in a Big World, you need that all-important measure: a "how-much-I-care" density on the multiverse that integrates to 1.

Personally, I think that Shock Level 5 could be seen as emotionally dangerous for a human to take seriously, so beware.

However, there may be strong instrumental reasons to take SL5 seriously if it is true (and there are strong reasons to believe that it is).

1: Anders Sandberg talks about the limits of physical systems to process information.

2: Bostrom on astronomical waste is relevant here as he is calculating the likely number of people that we could simulate in our universe, which ought to be roughly the same as the number of people that some other civilization could simulate in a similar universe.

3: Not one of the originally proposed 4 future shock levels.

4: To really nail the subjective anticipation issue requires another post.

## 158 comments

Comments sorted by top scores.

Does this theory really alter the probability that your next chocolate bar will turn into a hamster? After all, if there were only one of you, maybe there's a one in a trillion chance that one is in a simulation whose alien overlords will turn a chocolate bar into a hamster. If there are a trillion of you, and one of those trillion is in such a simulation, and your subjective experience has an equal chance of continuing down any branch, then the probability of the bar turning into the hamster is still one in a trillion. Although I've never seen a proof, intuitively you'd expect those two probabilities to be the same, or at least not be able to predict how they differ.
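The invariance being argued for is just a ratio surviving duplication; a minimal sketch using exact arithmetic:

```python
from fractions import Fraction

# One-copy case: a one-in-a-trillion chance of being in a simulation
# whose overlords turn chocolate into a hamster.
p_single = Fraction(1, 10**12)

# Big World case: a trillion copies, of which the same fraction are in
# such simulations.
n_copies = 10**12
n_hamster = p_single * n_copies        # = 1 copy

# With an equal chance of "being" any copy, the probability is unchanged.
p_big_world = Fraction(n_hamster, n_copies)
assert p_big_world == p_single
```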

It all adds up to normality...except that this takes a lot of the oomph out of the project to reduce existential risk. Saving all humanity from destruction makes a much better motivator for me than reducing the percentage of branches of humanity that end in destruction by an insignificaaEEEEGH MY KEYBOARD JUST TURNED INTO A BADGER!!11asdaghf

EEEEGH MY KEYBOARD JUST TURNED INTO A BADGER!!11asdaghf

At least it's a QWERTY badger, from the looks of it...

your subjective experience has an equal chance of continuing down any branch

And just what does that mean?

I spent a lot of time in the late 90s trying to work out a coherent system of thinking about probabilities that involved things like "your subjective experience has an equal chance of continuing down any branch" but could not make it work out.

Eventually I gave up and went down the road of UDASSA and then UDT, but "your subjective experience has an equal chance of continuing down any branch" seems to be the natural first thing that someone would think of when they think about probabilities in the context of multiple copies/branches. I wish there were a simple and convincing argument why thinking about probabilities this way doesn't work, so people don't spend too much time on this step before they move on.

The implied difference between making N copies straight away, and making two copies and then making N-1 copies of one of them, might be a simple convincing argument that something *really odd* is going on.

Yeah, that one is nasty nasty nasty.

I wish there were a simple and convincing argument why thinking about probabilities this way doesn't work

It doesn't? If I flip a fair coin, I can think of the outcomes as "my subjective experience goes down the branch where heads comes up" and "my subjective experience goes down the branch where tails comes up", and the principle works.

Maybe nothing - maybe the fundamental unit of conscious experience is the observer-moment and continuity of experience is an illusion - but the consensus on this site seems to be that it's worth talking about in situations like e.g. quantum suicide or simulation.

Maybe the inferential step would work better than the observer moment?

One inferential step is too little. Really you need an interval sufficiently long for the person to think coherently and do decision theory, but short enough that they don't get copied at all.

reducing the percentage of branches of humanity that end in destruction by an insignificant

Well, it definitely sounds worse than simply saving the world, but the expected number of saved lives should be the same either way.

Yes, but utility isn't linear across saved lives and maybe it even shouldn't be. I would be willing to give *many* more resources to save the lives of the last fifty pandas in the world, saving pandas from extinction, than I would be to save fifty pandas if total panda population was 100,000 threatening to go down to 99,950.

Now it's true that human utility is more linear than panda utility because I care much more about humans for their own sake versus for the sake of my preference for there being humans, but I still think saving the last eight billion humans is more important than saving eight billion out of infinity.

You're an equivalence class. You don't save the last eight billion humans, you save eight billion humans in each of the infinitely many worlds in which your decision algorithm is instantiated.

Why is that significant? No matter how many worlds I'm saving eight billion humans in, there are still humans left over who are saved no matter what I do or don't do. So the "reward" of my actions still gets downgraded from "preventing human extinction" to "saving a bunch of people, but humanity will be safe no matter what".

In fact...hmm...any given human will be instantiated in infinitely many worlds, so you don't actually save any lives. You just increase those lives' measure, which is sort of hard to get excited about.

Should it? It appears to me that efforts toward saving the world, if successful, only raise the odds that the branch you personally experience will include a saved world.

Or from a different perspective your decision algorithm partially determines the optimization target for the updateless game-theoretical compromise that emerges around that algorithm.

That's certainly a useful view of the ambiguity inherent in decision theory in MWI. Or it would be, if I had a local group to help me get a deep understanding of UDT--the Tampa chapter of the Bayesian Conspiracy has lain in abeyance since your visit.

Does this theory really alter the probability that your next chocolate bar will turn into a hamster?

After all, if there were only one of you, maybe there's a one in a trillion chance that one is in a simulation whose alien overlords will turn a chocolate bar into a hamster.

But what if there are infinity of you, and the set of you that are not in simulations has measure 0? Then the probability of bizarre things happening is much higher, and depends entirely upon the probability distribution over motivations of simulators.

It sounds a bit chicken-and-egg to me. My subjective probability estimate of simulators' motivations comes in great part from the frequency and nature of observed bizarre events. Based on what I know about my universe, the vast majority of my simulators don't interfere with my physical laws.

Now update on the fact that you're one of perhaps 1000 people who think seriously about the singularity out of 6,000,000,000...

I hear things like this a lot, but I'm not sure if I've heard a clear reason to think that the people that the simulators (of a long-running, naturalistic simulation) are interested in should be more likely to be conscious, or otherwise gain any sort of epistemological or metaphysical significance.

One hypothesis is that we are being mass simulated for acausal game theoretic reasons, and that only the "interesting" people are simulated in enough detail to be conscious.

"interesting" is very much the wrong word though. More like informative regarding the optimization target that one cooperates by pursuing.

Isn't the measure of the set of me not in simulations (in a big world) equal to the probability that I'm not in a simulation (if there's only one of me)?

Only if you reason anthropically in calculating the "one of me" probability.

The point is that if there are some places in the multiverse with truly vast or even infinite amounts of computing power, then that will dominate the calculation in the case of thinking of yourself as the union of all your instances. So if that is to agree with the "one of me" case, then you'd better reason anthropically in that case, otherwise they'll disagree.

You seem to be using 'infinite' as a synonym for 'very large', which is sloppy at best and actively misleading here. MW does not of itself imply infinite copies of you, but merely *very many* copies. To get actual infinities requires additional assumptions which you have not supported or even mentioned. Large numbers can be compared in ways that infinities cannot; if there are really *infinitely* many copies of you then your decision makes no difference, but if there are merely *very many*, then there is a sensible way in which a good decision increases the total goodness/badness ratio of the multiverse. Confusion of these two concepts is *very bad*. Please stop, or back up your assertion of infinitudes.

Think about the wavefunction of a single electron. It specifies an amplitude for the electron to be found at any point in space. And there are continuum many such points. Now, the sense in which MWI states that there are many different 'copies' of you all simultaneously existing corresponds to the sense in which 'copies' of the electron are found (with various different amplitudes) at every point. A 'copy' of the electron = one of its possible locations.

So since there are continuum-many copies of an electron, I think it's fairly safe to assume there are at least continuum-many copies of a 'human'.

Of course, there may be a sense in which, although every possible configuration of the elementary particles in your body is assigned some amplitude, almost all of the amplitude gets 'concentrated' into a small number of 'rivers' of relatively much higher probability. For instance, the 'river of probability' for Schrödinger's cat will split into a 'live cat' branch and a 'dead cat' branch. Each branch is smeared over infinitely many configurations, and there are infinitely many configurations not belonging to either branch, which also get *some* non-zero amplitude, but nonetheless the two 'main branches' cover almost all of the probability mass, and thus they stand out as real patterns - as real as 'planets' or 'stones'. It is in *this* sense, I think, that you mean to say there may only be "finitely many" copies of a person.

But *this* kind of finiteness doesn't suffice to defuse the reasoning which led you to say "if there are really infinitely many copies of you then your decision makes no difference". (The resolution, of course, is to abandon that reasoning.)

Think about the wavefunction of a single electron. It specifies an amplitude for the electron to be found at any point in space. And there are continuum many such points.

It does not specify a probability for each point, but a density, which is only turned into a probability by integrating. The probability of being at a particular point is zero. More strongly, the system has countably many qubits.

Sure.

So we agree: far from there being 'only finitely many copies', if there's any sense at all to be made of the 'number of copies' then it is infinite and any (or 'nearly any') possible configuration of the matter making up your body gets *some* non-zero density or probability depending on whether by configuration we mean a 'point' in phase space (if that's the right term...) or a little 'cube'.

Your original post is correct. I think "continuum-many" is misleading, but I can't object much given the context. I don't remember what I was thinking.

**ETA**: I would think of an eigenbasis and say that there are countably many electrons, none of which is localized.

I am not certain what you mean by 'continuum-many'; it sounds as though it could be either 'infinite' or 'the large number you get from a lot of combinatorics'. However, I must point out that quantum theory has the interesting property of being *quantized*. (Sounds almost like a lolcat slogan. "Kwantum fyziks... iz kwantised.") A particle in a bound state does not have infinitely many degrees of freedom, and since our local spacetime is apparently closed (if only just) *every* particle is in a bound state, it's just not obvious.

I am not certain what you mean by 'continuum-many'

It refers to cardinality. You know Cantor showed that, while natural numbers and rationals can be put into one-to-one correspondence, there is no way to put the reals into one-to-one correspondence with the naturals, because there are 'too many' real numbers? Well, "continuum-many" means "the same cardinality as the real numbers".

Still, Douglas Knight makes a fair point - it is somewhat misleading to talk about continuum-many copies if each one has zero probability. In truth, I guess the concept of a 'number of copies' is too simple to capture what's going on.

As for particles being in bound states and having finitely many degrees of freedom: I'd be surprised if it altered the 'bigger picture' whereby all possible rearrangements of the matter in your body (or in the solar system as a whole, say) get some (possibly minuscule) amplitude assigned to them. (Of course, ideally it would be someone who actually knows some physics saying this rather than me.)

"continuum-many" means "the same cardinality as the real numbers".

Ok, fair enough. In that case I must merely disagree that there exist this many possible arrangements of matter; it seems to me that the arrangements are actually countably infinite.

As for particles being in bound states and having finitely many degrees of freedom: I'd be surprised if it altered the 'bigger picture' whereby all possible rearrangements of the matter in your body (or in the solar system as a whole, say) get some (possibly minuscule) amplitude assigned to them.

That's true, but the question is whether that number has the cardinality of the reals or the integers. I think it's the integers, due to the quantisation phenomenon in bound states; *everything* is in a bound state at some level. After my last post it occurred to me that the quantised states might be so close together that they'd be effectively indistinguishable; however, there would still be a finite number of distinguishable states. Two states are not meaningfully different if a quantum number changes by less than the corresponding uncertainty, so in effect the wave-function is quantised even in a continuously-varying number. Once you quantise it's all just combinatorics and integers.

I don't think you're right... isn't it broken down into Planck lengths or something?

::Shrug:: There's *something* important about the Planck distance, but I don't know enough physics to be able to say much more. Like Hawking radiation, it's something that only crops up when you start trying to do 'quantum gravity'.

It's tempting to imagine that the universe is something like the "Game Of Life" but with Planck sized cells, but what little I know about string theory makes this idea seem extremely naive. (And anyway, space could be both discrete and infinite.)

IANAPhysicist, but I'm fairly sure that space and time are entirely continuous in standard QM or QFT, though they are discrete in loop quantum gravity and possibly other theories of QG.

Standard QFT doesn't have discrete space and QCD may make sense with continuum of space-time, but models with a Landau pole, like QED and the standard model, *don't make sense* at small length scales. The length at which the Landau pole appears in QED is smaller than the Planck length, so no one cares about it, since they expect bad things to happen already at the Planck scale.

MW does not of itself imply infinite copies of you, but merely very many copies. To get actual infinities requires additional assumptions which you have not supported or even mentioned.

The version of "many worlds" that Roko is talking about is not that of QM. Rather, it is the one where "all structures that exist mathematically also exist physically". According to the conventional understanding of "mathematical existence", there exist infinitely many mathematical structures that contain a copy of me.

Nu. I have got to say that I don't see any good evidence for that piece of philosophising.

if there are really infinitely many copies of you then your decision makes no difference, but if there are merely very many, then there is a sensible way in which a good decision increases the total goodness/badness ratio of the multiverse.

I don't see that it matters either way, provided there is a probability distribution over the set (finite or infinite) that gives your subjective experience. Can you explain your reasoning more?

I am going with the original post's apparent belief that in the presence of infinities, probabilities are meaningless. This presumably derives from the observation that there are equally many even numbers and numbers divisible by eight, even though a probability distribution derived from taking the limit as the range goes to infinity would conclude that the probability of the one is four times higher than the other.
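The limiting-frequency ("natural density") notion in this comment can be checked numerically; a minimal sketch:

```python
# Natural density sketch: among 1..N, the evens have density 1/2 and the
# multiples of 8 have density 1/8, even though as infinite sets both have
# the same cardinality. The limit of the ratio is 4, as the comment says.
N = 10**6
evens = sum(1 for n in range(1, N + 1) if n % 2 == 0)
eights = sum(1 for n in range(1, N + 1) if n % 8 == 0)

print(evens / N, eights / N)  # 0.5 0.125
assert evens / eights == 4    # the 4-to-1 ratio from the limiting process
```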

The OP was careful, it seems, to avoid that issue. (Infinite set agnosticism?)

In any case, our perceived history matches the Born rules too well for it to be reasonable that "probabilities are meaningless", so either the universe is OK with measures on infinite sets or it's somehow finite after all. (I incline strongly toward the former hypothesis, for reasons of mathematical elegance— thoroughly finitary versions of Hilbert spaces are hack-ish.)

our perceived history matches the Born rules too well for it to be reasonable that "probabilities are meaningless", so either the universe is OK with measures on infinite sets or it's somehow finite after all

I like this; it is an excellently compact way of putting it.

The OP was careful, it seems, to avoid that issue. (Infinite set agnosticism?)

I don't see this. I am referring particularly to this paragraph:

No, this view of reality claims that your current observer-moment is repeated infinitely many times, and looking forward in time, all possible continuations of (you,now) occur, and furthermore there is no fact of the matter about which one you will experience, because the quantum MW aspect of the multiverse has already demolished our intuitions about anticipated subjective experience.

I agree with you that the Born rules imply meaningful probabilities; but it seems to me that the OP does not believe this, at least in the part I've quoted.

Probabilities over infinite sets are not at all meaningless. If a set is countable, they have to privilege some objects (in the sense that not everything can have the same probability). If the set is uncountable (say the real numbers between 0 and 1) then there's no problem with having a very well-behaved probability distribution. (I'm skipping over some details. The fact that not every set is measurable means that one needs to be very careful when one talks about meaningfulness of probability).
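The contrast in this comment can be illustrated directly: no uniform distribution exists on a countably infinite set, but the uniform density on [0, 1] behaves exactly as expected under sampling. A minimal sketch:

```python
# Countable case: a uniform weight w over infinitely many integers gives
# total mass 0 (if w = 0) or infinity (if w > 0), never 1 -- so any
# distribution on a countable set must privilege some elements.
#
# Uncountable case: the uniform density on [0, 1] integrates to 1, every
# single point has probability 0, and intervals get sensible probabilities.
import random

random.seed(0)  # fixed seed so the sketch is reproducible
samples = [random.random() for _ in range(100_000)]
in_first_quarter = sum(1 for x in samples if x < 0.25) / len(samples)
print(in_first_quarter)  # close to 0.25, the length of the interval
```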

Yes, I understand this, but as noted in my comment above, it appears that the OP is using a different assumption.

I don't think I said that MWI implies infinitely many copies. Did I?

Rather, chaotic inflation theory or an infinite spacetime implies infinite copies.

Shock-level-bragging is so 2003... ;) Still, in my opinion, this post contains some extremely interesting unconventional intuition, which seems to be way underrated.

In the long run, this whole mathematical multiverse idea has the potential to become much less insubstantial than it may look on superficial inspection.

There are quite a few problems with it though. For example, the reliance on minimum description length does not feel like the right approach to the probability problem at that level. It may turn out to be eventually, but generally, the perceived probabilities don't come from a conscious decision to care about some abstract (and uncomputable!) complexity measure like MDL.

No! We experience certain probabilities because something is built into the very nature of our physics (or rather meta- or multi-physics). So, even if MDL turns out to be at the core, it must be a derived consequence, rather than just being pulled out of the hat as in the OP.

The most essential clue in the puzzle could be understanding how this glue works that connects processes with phenomenologically equivalent or similar information-processing structures. I can see Barbour's derivation of general relativity as an interesting analogy. But it's obviously much harder to argue about general (approximate) isomorphisms of causality networks than to measure the similarity of mass distributions of three-dimensional spaces.

Nevertheless I think the intuition multiverse ideas provide could inspire speculations in extremely exciting directions: For example: Is it conceivable that the symmetries we experience in the physical laws are plainly consequences of the natural symmetries of this observer-gluing process itself?

Just wait until you hear about shock level 7.

Q: How do you convince a singularitarian to eat shit?

A: Declare eating shit shock level 5

Due to anthropic reasoning it is impossible to understand unless you have heard about shock level 8, and you will never find yourself in a universe where you hear about shock level 8.

We used to think that there was one planet earth, inside a universe that is very large (at least 10^26 meters in diameter) but that the reachable universe (future light-cone in the terminology of special relativity, or causal future in the terminology of GR) was finite. Anything outside the reachable universe is irrelevant, since we can't affect it. However, cosmologists went on to study the process that probably created the universe, known as inflation.

The scientific idea of a spatially infinite universe, and the recognition that this would have weird implications, is independent of and long predates inflation. Spatial infinity is Tegmark's Level I, while eternal inflation is Level II. Eternal inflation gives rise to some more variation in physical laws than a 'normal' infinite universe, but not anything qualitatively new (if an infinite universe contains everything computable, it contains simulations of every possible set of computable physical laws). As for timing, well, see this comment by Mitchell Porter.

The role of eternal inflation in scientific Eternal inflation and its implications 10 thinking,

You've got some extraneous text there.

The scientific idea of a spatially infinite universe, and the recognition that this would have weird implications, is independent of and long predates inflation

I didn't know it seriously predated inflation. Thanks.

In case anyone downmodded this for using the term "Shock Level 5", I agree that some of the broad or specific implications of the Tegmark Level 4 Multiverse can be called Shock Level 5.

I have just tended to think of this as being how SL4 looks when you have digested it and are no longer shocked. Really though, I just think it's a slightly good social rule to avoid creating self-aggrandizing terminology.

I'm going to assume it all adds up to normality and live as though I only exist in a single world. :P

The statement *It all adds up to normality.* as far as I can trace back, was just a simple practical recognition that the MW interpretation of quantum mechanics was consistent with our everyday experience.

A simple, objective, true statement without any imperatives or any overreaching philosophical implications.

Unfortunately, over time, it became a popular mantra to be repeated every time someone expresses some inconvenient-sounding ontological statement that does not fit someone else's warm cozy Star Trek world view.

"It could be, but **don't think** about that. *It always adds up to normality* anyways..."

Isn't it?

But chocolate bars *don't* turn into hamsters. The universe *is* predictable. Why are we discussing this stuff when we already know it isn't true?

*Some* universes are predictable. Others are predictable until tomorrow, and after that, chocolate bars turn into hamsters.

I'm talking about our universe. Don't try to confuse me.

"our universe"

SL5 error: we don't have a unique universe...

What makes you think so? Pure shock value?

I'm willing to (provisionally) believe in MWI, but not Tegmark's ensemble. You haven't provided any actual evidence why the latter is true, and chocolate bars indicate that it's almost certainly false. Here's the cousin_it scale of science-worthiness:

1. This is true.

2. This works.

3. This sounds true.

4. This sounds neat.

From the looks of things, you have yet to rise above level 4.

Chaotic inflation theory is the evidence.

No. That's just one small part of the evidence, far from sufficient and I would say far from necessary. By itself, these ideas would cause me to say "so much the worse for chaotic inflation theory" which is, as far as I know, not terribly well confirmed (or more to the point, not terribly clear in its proper interpretation).

If I understand it correctly, chaotic inflation theory implies a multitude of universes with differing but stable physical laws, not a multitude of universes that evolved just like ours but will soon begin turning chocolate bars into hamsters.

If arbitrarily *large* universes exist, then there would be people with arbitrarily large computers running every possible program. From that you would get worlds in which chocolate bars turn into hamsters.

Question: Tegmark, in one of his multiverse papers, suggests that ordering measure by complexity seems to be an explanation for finding ourselves in a simple universe as well as a possible answer to the question 'how much relative existence do these structures get?' My intuition says rather strongly that this is almost assuredly correct. Do you know of any other sane ways of assigning measure to 'structures' or 'computations' other than complexity?

Could you elaborate? It seems to me that because there exists a much greater number of complex computations than there are simple computations, we should expect to find ourselves in a complex one. But this, obviously, does not seem to be the case.

But this, obviously, does not seem to be the case.

Meanwhile, a newly-minted hamster scurries down the candy aisle in a vacant supermarket.

If we run each universe-program with probability 2 to the power of minus L, where L is the length of the program in bits, and additionally assume that a valid program can't be a prefix of another valid program, then the total probability sums to 1 or less (by Kraft's inequality). In this setup shorter programs carry most of the probability weight despite being vastly outnumbered by longer ones. I think the same holds for most other probability distributions over programs that you can imagine.
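
The arithmetic here is easy to check with a toy prefix-free code. This is a sketch with made-up "programs" (just bit strings, not actual universe-programs), only meant to show the weights behaving as claimed:

```python
# Sketch of the weighting scheme above, with a purely illustrative
# prefix-free set of "programs": program k is the bit string "1"*k + "0",
# of length k+1 bits. No program is a prefix of another, so by Kraft's
# inequality the weights 2**-L sum to at most 1.

def weight(program: str) -> float:
    """Probability weight 2^-L for a program of length L bits."""
    return 2.0 ** -len(program)

programs = ["1" * k + "0" for k in range(30)]  # the 30 shortest programs

total = sum(weight(p) for p in programs)
short_mass = sum(weight(p) for p in programs[:3])

print(round(total, 9))   # 1.0 -- the full weight, up to a 2**-30 remainder
print(short_mass)        # 0.875: the three shortest programs carry 7/8 of it
```

The longer programs vastly outnumber the short ones (there are exponentially many strings of each length in a general code), yet almost all the probability mass sits on the few shortest, which is the point.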

Right: it is enough if there is a sequence U1, U2, U3, ... of increasingly computationally large universes, which seems to be roughly what chaotic inflation + the string theory landscape gives you, though I am a little confused about the ST landscape having a finite number of elements; this may spoil it.

Doesn't follow at all. A large variety of physical laws and universe sizes doesn't imply arbitrarily large computers. It's quite possible that sentient life that can build computers exists only in universes with parameters very much like ours, and our particular universe seems to have hard physical limits on the size of computers before they collapse into black holes or whatever.

Who said anything about sentient life? Arbitrarily numerous computers should simply emerge, within this universe though not this Hubble volume, and should run every computation.

our particular universe seems to have hard physical limits on the size of computers before they collapse into black holes or whatever.

There's no upper limit on the size of a computer in our universe. Black holes are only a problem if you assume a very dense computer.

Moreover, it isn't that hard to construct hypothetical rules for a universe that could easily have arbitrarily large Turing machines. For example, simply using the rules of Conway's Game of Life.
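
To make the Game of Life point concrete, here is a minimal sketch of its update rule (the blinker pattern below is a standard example; the coordinates are arbitrary): a dead cell with exactly three live neighbours becomes live, a live cell with two or three survives, and everything else dies.

```python
from collections import Counter

# Conway's Game of Life on an unbounded grid, represented as a set of
# live cells. Birth on exactly 3 live neighbours; survival on 2 or 3;
# death otherwise.
def step(live):
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))           # [(1, 0), (1, 1), (1, 2)]
print(step(step(blinker)) == blinker)  # True
```

The entire physics fits in one function, yet this rule set is known to support universal computation, which is what the argument needs.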

If you make the computer sparse, other limits come into play: all matter decays in finite time, and the speed of light is finite.

Assuming the existence of a Game of Life universe is begging the question.

Assuming the existence of a Game of Life universe is begging the question.

The discussion above was in the context of arbitrarily large universes existing. The point is that one can construct very simple universes which allow arbitrarily large simulation. You only need one such universe for the argument to go through.

Does chaotic inflation theory imply the existence of a Game of Life universe? I don't see how. If it doesn't, what's the evidence for the proposition that a Game of Life universe exists in the first place? Where are you getting this stuff from?

It doesn't necessarily do so, but it does imply the existence of others that are similar enough to not make much of a difference. For example, there would probably be a universe with much weaker gravity making black holes impossible. I don't know enough about chaotic inflation to comment in detail but my impression is that one can get much more exotic universes than even that.

This is a good question that I'd like to hear a hardcore physicist answer.

The relevant point is that chaotic inflation allows the string theory landscape to be populated -- but are there vacuum states of string theory that allow infinite computation?

My suspicion is yes because of effects like the omega point. It may be impossible in our universe, but surely there are some where all the parameters work out.

Just so you know: Tipler's Omega Point scenario is the time reverse of a big bang expansion from a BKL singularity. The collapsing universe, filled with a plasma too hot and dense for any bound object to survive, is supposed to undergo an infinite series of "Kasner oscillations" which alternately squeeze the plasma from different cosmic directions, providing the energy for computation.

The scenario is very problematic. The plasma description will not be valid at arbitrarily high temperatures. Eventually the particles will be colliding so hard that they become micro black holes; some other dynamical regime will take over. Tipler has worked hard to contrive ways around this, but it's just really unlikely that an *infinite* sequence of Kasner epochs can be made to happen; especially, I would think, if you work within string theory, which behaves differently from field theory at high energies.

There is no consensus in string theory regarding cosmological initial and final conditions. String theory sometimes "resolves" singularities, i.e. provides a non-singular description of an apparently singular geometry (e.g. the "fuzzball" description of the black hole interior). However, there is no consensus on whether a big-crunch singularity will generically resolve and lead to a big bounce (as in "ekpyrotic" and "pre-big-bang" models), or whether it is simply the end, even in string theory.

At the other end, there is no particular consensus about the combination of chaotic inflation and the string theory landscape being the right way to think about cosmology. (I should probably emphasize that most "string cosmology" is actually about events in a single expanding universe - e.g. studying how the inevitable extra heavy particles affect measurable aspects of cosmic evolution like dark matter and atomic abundances - and not this mind-of-God stuff.) Its chief champion is Leonard Susskind, who is very eminent but does not speak for all his equally eminent colleagues. But let us assume this framework for the purpose of discussion.

Inflation is a hypothetical period of exponentially rapid expansion in the very early universe. In a field theory model of inflation, you start with a "scalar" field in a high energy density state, it dynamically relaxes into a lower energy state, and then inflation ends, being replaced by cosmic expansion at ordinary rates. For inflation to occur, the scalar field only has to have a few properties, and so there are endless specific field theories which will exhibit inflation. In string theory (see bottom of page 6 here), there are also many ways to achieve inflation.

In "eternal inflation", most of the universe always remains in the energy-dense inflating state. The relaxation into slower expansion only occurs in small, disconnected spatial regions, outside of which exponential inflation continues forever. In "chaotic inflation", the relaxation process sees the inflationary fields settling into different stable states in different regions. Maybe I should explain what a "stable state" is. In particle physics theories, particles usually get their mass by interacting with a Higgs field or fields with a "nonzero vacuum expectation value". The Higgs fields interact with each other and settle into some lowest-energy equilibrium determined by the form of the interaction (which can be quite complicated). There can be more than one such equilibrium.

In string theory many apparently stable configurations of the extra dimensions have been constructed. So in stringy eternal inflation, you suppose that different string geometries are being realized in different isolated regions of an otherwise uniformly and eternally inflating universe. Usually this is brought up in the context of anthropic reasoning; the hope is to predict the features of our local physics anthropically, since we can't be living in a region hostile to life.

Now we can think about this question of whether an infinite computation might get to occur somewhere in such a universe. The two standard cosmological scenarios for eternal life are the Tipler and Dyson scenarios. I've already mentioned that Tipler's scenario is dubious. Dyson's scenario is for an eternally expanding universe; something about stable islands of matter communicating with each other ever more rarely and weakly, with these interactions spaced out in such a way that they manage an infinite sequence of such interactions on a finite energy budget. If you believe in a cyclic cosmology, you might add to these scenarios one in which life persists through the bounce from collapse to expansion, but no-one has proposed a model of that.

I have not deeply surveyed the literature on eternal inflation, but I don't remember ever seeing anyone talk about one of those non-inflating regions entirely ceasing to expand and undergoing collapse. My grasp of the concepts is weak enough that I can't even say if there's some principled reason for this, though inflation is such a generic phenomenon, I would think that there must be models where a local big crunch can occur.

In the string theory context, people *really* started talking about a landscape in theory-space of many possible geometries, after the observational discovery of dark energy in 1998. That was at first difficult to incorporate into string theory, and the way it was achieved (in "KKLT vacua") involved the discovery of a new, very large class of stable string geometries. A universe with dark energy is one that expands forever, even at an accelerating rate (just not as fast as inflation). So if we suppose, as Susskind seems to do, that the landscape is dominated by *these* vacua, then it's the Dyson scenario, appropriate for an open universe, which is the relevant model of infinite computation.

Now here I am really overreaching what I know, but in discussions of these vacua - which have a de Sitter geometry - I often see it stated that in the end every particle ends up isolated from every other, alone in its own Hubble volume. So the Dyson scenario may require a flat universe, and may be impossible in de Sitter space. I think there's actually a paper saying as much. I'm not clear on this, but I don't think this geometry *requires* that literally every particle ends up in its own patch of expanding space. Just as a galaxy doesn't experience cosmic expansion, that only happens out in deep intergalactic space where the geometry is FRW, I don't see why a gravitationally bound system much larger than a single particle couldn't become one of these islands in de Sitter space (in which case, maybe you could hope for infinite computation, but not infinitely many states - you would end up repeating). It may only be the "big rip" scenario, in which the dark energy grows, that tears all bound systems apart. But I'm really not sure!

Really, I think these discussions about what's going on beyond our cosmological horizon are a lot like the discussions of the Fermi paradox. They are exercises in reasoning almost totally unconstrained by empirical data. String theory is supposed to be this unique mathematical structure and so you might hope that it simply tells you how string cosmology is supposed to be. But it's a work in progress, and in fact the cosmological question may be the same as the other big unresolved question, how to think about all those different geometries. Usually you just pick a geometry and study how the strings behave in it. You allow for some back-reaction, so the geometry adjusts to what the strings are doing, and in some cases you can even describe how one geometry becomes another (Brian Greene worked on this). But a conceptually unified approach regarding the whole "moduli space" of possible geometries is lacking. Other theorists like Cumrun Vafa and Tom Banks have approaches very different to Susskind's. (Vafa appears to be looking for a single preferred geometry, by using the Hartle-Hawking wavefunction, while Banks thinks moduli space is divided up and the vacua form disjoint groups that aren't dynamically connected.)

Final message: as currently described, the string landscape plus inflation is *not* generally thought of as allowing infinite computation or eternal life. But that whole cosmological conception may be faulty.

This comment is most informative, thanks.

Final message: as currently described, the string landscape plus inflation is not generally thought of as allowing infinite computation or eternal life. But that whole cosmological conception may be faulty.

However, from what you've said, it seems that it is not ruled out even if ST is correct, as we don't know what ST at high energy scales actually does, right?

There are ideas about what happens in those extreme conditions. But frankly I think your chances are better at very low energies. The difficult part about aspiring to live *forever* is that somehow you need the probability of an accident to drop off sharply and permanently, or else the asymptotic odds of survival are zero. Late-time de Sitter space should be a lot more peaceful than a collapsing cosmological fireball.

The difficult part about aspiring to live forever is that somehow you need the probability of an accident to drop off sharply and permanently

Not necessarily -- you can use error-correcting algorithms and multiply redundant hardware to run your computer in spite of an error rate, as long as it is not too high.

Dyson has a scenario for infinitely much computation with finitely much energy with cosmological constant zero. Probably you can't really do infinitely much computation, but end up in a loop because of limited memory. If inflation changes the cosmological constant, then getting it arbitrarily close to zero would be as good as Dyson's scenario for the purpose of this discussion. You also want regions with arbitrarily high memory, which is probably mainly a matter of energy. My vague impression is that the cosmological constant gives a bound on the computation independent of the amount of memory.
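
The core of Dyson's bookkeeping is just a convergent series, which a toy calculation can illustrate. Everything numerical here is an arbitrary illustration, not physics: batch n of computation runs at temperature T_n = T0 * r**n, and the energy dissipated per batch is taken proportional to T_n, so the total energy spent converges even though the batches go on forever.

```python
# Toy version of Dyson's eternal-computation bookkeeping. The constants
# are made up: T0 is a starting temperature, r < 1 a cooling ratio.
T0, r = 1.0, 0.5
ops_per_batch = 10**6     # operations performed in each batch (arbitrary)

def energy_of_batch(n: int) -> float:
    """Energy dissipated by batch n; shrinks geometrically as things cool."""
    return T0 * r**n

partial = sum(energy_of_batch(n) for n in range(100))
limit = T0 / (1 - r)      # geometric series: total energy over all batches

print(partial, limit)       # the partial sum is already at the finite limit
print(100 * ops_per_batch)  # while the operation count grows without bound
```

The memory question is the separate, harder part: a finite energy budget buys unboundedly many operations, but not obviously unboundedly many distinguishable states.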

Dyson has a scenario for infinitely much computation with finitely much energy with cosmological constant zero. Probably you can't really do infinitely much computation, but end up in a loop because of limited memory.

Dyson suggests that spatially encoding memory in an expanding computer would allow memory capacity to grow logarithmically.

If inflation changes the cosmological constant, then getting it arbitrarily close to zero would be as good as Dyson's scenario for the purpose of this discussion. You also want regions with arbitrarily high memory, which is probably mainly a matter of energy. My vague impression is that the cosmological constant gives a bound on the computation independent of the amount of memory.

What I recall hearing is that a nonzero cosmological constant makes things fall off the edge of the universe, i.e. the edge is an event horizon, so it Hawking-radiates, so the temperature of the sky (=> energy dissipation per operation) asymptotically approaches something nonzero. There might be a more pure argument.

Dyson suggests that spatially encoding memory in an expanding computer would allow memory capacity to grow logarithmically.

Yeah, I guess I should have looked that up. I do not find Dyson's paragraph on memory convincing. Frankly, I take it as evidence of the opposite.

SL5 error: we don't have a unique universe...

The concept of "our universe" does make sense, even if it's not the only thing we care about.

Re: "we don't have a unique universe"

"Universe" means everything there is. You can't have multiple universes. That's what the "uni" in "universe" means - and that's why it's the M.*W*.I. - and not the M.*U*.I.

Do you often go around criticizing people for talking about 'atom smashers' or 'ATM machines'?

That is not the promotion of misuse of scientific terminology - so I am less concerned about that.

Rather, what we know (anthropically) is that the *typical* observer-moment comes from an ordered history within a big, simple universe. If the universe works as we think it does (just assuming MWI, not Level IV), then there do exist Boltzmann brains in the same state as my current brain, and some of them have successor states where they do see the chocolate-hamster singularity.

But the measure of those observer-moments is dwarfed by the measure of the observer-moments in orderly contexts, or else my memories wouldn't match my experiences and my current experiences would be highly unlikely to be this low in entropy.

But chocolate bars don't turn into hamsters. The universe is predictable. Why are we discussing this stuff when we already know it isn't true?

Chocolate bars have a very low probability of turning into hamsters. A chocolate bar is one configuration of elementary particles, and a hamster is another, and there are lots of particles that may or may not be in the space of that chocolate bar at any given point in time.

Our universe is predictable, in that very low probability events happen with a very low frequency, but this does not entail that very low probability events never happen.

What you discuss is a question of decision theory, whether there is something besides the apparent environment to care about, and that hardly depends on the way physics is. One doesn't need little "exists" tags on hypotheticals in order to care about them. They probably help, but are not a defining factor, certainly not for the decision theory, before you take into account the finer details of the content of morality.

And in other mathematically possible universes with the ability to do an infinite amount of computation in a finite time, you could be simulated an infinite number of times in just one universe.

*Are* there "mathematically possible universes with the ability to do an infinite amount of computation in a finite time"? Wouldn't that render that entire universe noncomputable, and is there any version of the mathematical universe hypothesis in which noncomputable universes are admitted?

Think that chocolate bar will taste nice when you bite into it? Well, actually according to Big Worlds, infinitely many of your continuations will bite the chocolate bar and find it turns into a hamster.

...and that's the big (apparent) problem with MUH that I'm still trying to figure out. If that line of reasoning is true, then why *don't* we observe arbitrary irregularities like that? I mentioned a few possible solutions around the end of my post about it, but I'm not too confident in any of them; I don't quite know how to think about this yet. (I'm still not convinced that Occam's Razor helps. Suppose we're considering ten universes: one where the chocolate bar you're about to bite into remains a chocolate bar, and nine where it turns into various things. If these were somehow the only possible universes, then you'd estimate a .9 probability of the chocolate turning into something weird, and 9 of you would be right. I don't see why using Occam's Razor to assign greater probability to simpler universes would actually *work* here.)

Almost all of the histories and futures that feed into your "now" are simulations, by Bostrom's simulation argument (which is no longer shackled by the requirement that the simulations must be performed by our particular descendants - all possible descendants and aliens get to simulate us).

If MUH is true, then the simulation argument doesn't matter anyway; a given universe is real and continues to be real whether or not anybody is simulating it.

However, there may be strong instrumental reasons to take SL5 seriously if it is true (and there are strong reasons to believe that it is).

What are some of those instrumental reasons?

What are some of those instrumental reasons?

Applied quantum suicide. If we are in a Big World then all we really care about is probabilities, and we can modify those probabilities by selectively removing ourselves from particular universes.

Applied quantum suicide.

The philosopher David K Lewis has already supplied (pdf warning) a reductio of 'quantum immortality' - the basic problem is that you're more likely to end up crippled than healthy. (And if you're crippled then you're in no position to do lots of snazzy instrumental stuff.) See page 21.

Though Lewis himself doesn't actually carry his reasoning forwards this far, we can finish off: Since the distinction between a crippled, almost-extinct mind and no mind at all is a blurry continuum, with no non-arbitrary way of measuring places along it, the event "you survive your attempted suicide" is not even well defined, and neither is "the probability that you survive", nor "the probability that you are crippled, given that you survive".

The whole notion of 'quantum immortality' is monumentally confused. Putting a gun to your head, firing and seeing whether you find yourself in a quantum miracle-world with virtually zero probability is exactly as reasonable a test of the many worlds interpretation as seeing whether a third arm spontaneously erupts from your chest.

Putting a gun to your head, firing and seeing whether you find yourself in a quantum miracle-world with virtually zero probability is exactly as reasonable a test of the many worlds interpretation as seeing whether a third arm spontaneously erupts from your chest.

Agreed. I'd never go about it that way. If I wanted to test the many worlds interpretation I'd do the following:

Strap a few pounds of high explosives around my head and connect the detonator to a computer. Have the computer select a random number between 1 and 1000 via some unbiased quantum process. Program the computer so that if any number greater than 1 is generated the detonator is activated, otherwise have it do nothing. Run the program. Run it again. And again, until I'm satisfied that the many worlds interpretation is correct.

The important things, in my opinion, are:

- The method of death should be faster than most thought processes. Blast velocities are typically greater than the traveling speed of action potentials. This ensures you don't accidentally observe something that commits you to a world with an almost sure probability of death (in the realm where only 'magical' quantum effects could save you).
- The probability that the method of death fails to kill you should be thousands of orders of magnitude smaller than the probability that the method is never activated at all. In the case of my above example, the probability of finding yourself in a universe where you survived a high explosive blast at point blank range is essentially zero compared to the probability of getting 1 out of a 1000.

Note, with my above setup, it is very easy to transition from testing the many worlds hypothesis, to actually using it to your advantage. Want to factor a large number? Randomly sample the solution space on your computer, detonating only if the random sample isn't a solution. (Make sure to implement an initial fail-safe probability in case no solution exists!)
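
For what it's worth, the sample-and-check step itself is trivial to write down; the only thing the scenario outsources to quantum suicide is the selection among branches. A hedged sketch of the classical part (the function names are my own, and the detonator is mercifully replaced by a boolean):

```python
import random

def looks_like_solution(n: int, candidate: int) -> bool:
    """The fail-safe check: is candidate a nontrivial factor of n?"""
    return 1 < candidate < n and n % candidate == 0

def sample_branch(n: int, rng: random.Random) -> tuple[int, bool]:
    """One 'branch': guess a candidate; survive (True) only if it checks out."""
    candidate = rng.randrange(2, n)
    return candidate, looks_like_solution(n, candidate)

# Classically we must loop over guesses; the thought experiment claims the
# only branch you find yourself in is the one where a single guess succeeded.
rng = random.Random(0)
n = 91  # = 7 * 13
while True:
    candidate, survived = sample_branch(n, rng)
    if survived:
        break
print(candidate, n // candidate)  # a nontrivial factorisation of 91
```

Note that the check must be cheap and certain, which is exactly why NP-style problems (easy to verify, hard to search) are the natural targets for this kind of scheme.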

Note, with my above setup, it is very easy to transition from testing the many worlds hypothesis, to actually using it to your advantage. Want to factor a large number? Randomly sample the solution space on your computer, detonating only if the random sample isn't a solution. (Make sure to implement an initial fail-safe probability in case no solution exists!)

There's an old joke about this related to the problem of sorting lists. A proposed sorting method is to take what you want, and randomly rearrange it. If that isn't sorted, destroy the universe.

Your approach seems directly inferior from a utilitarian perspective because it will lead to many universes where not only did you fail to factor it but the rest of us will miss your company (and be stuck cleaning up a large mess).

Your approach seems directly inferior from a utilitarian perspective because it will lead to many universes where not only did you fail to factor it but the rest of us will miss your company (and be stuck cleaning up a large mess).

The solution, of course, is to replace high explosives with the LHC.

Note, with my above setup, it is very easy to transition from testing the many worlds hypothesis, to actually using it to your advantage. Want to factor a large number? Randomly sample the solution space on your computer, detonating only if the random sample isn't a solution.

This is the most awesome idea I've heard all day. But you could do a lot better than factoring large numbers - you could set it up to detonate only if the random number is not the winning lottery pick!

I'll be in my basement rigging the explosives.........

But you could do a lot better than factoring large numbers

Oh yeah. You can solve any problem in PSPACE. You can basically directly sample the entire space of all programs (with bounded memory).

Screw the lottery. You could make *trillions* on the stock market. Afterwards sample the entire space of all love letters, send them off to famous movie stars, then detonate only if you don't get an eager response back. You might need a delegate to read the letter, as you reading it personally would shunt you into particular universes.

You could make trillions on the stock market. Afterwards sample the entire space of all love letters, send them off to famous movie stars

But would you really need the love letters if you had the trillions? I'd think a bank statement would suffice.

ETA: Ok, I'm confused. What's going on with the downvoting? I'm honestly not concerned at all about the karma, just mystified.

I downvoted because of the cynicism expressed in the idea that money can buy love. It read like a bitter complaint that girls (or guys) just want money.

Upvoted kodos and downvoted you because I don't see that cynicism in the grandparent.

Downvoted you for downvoting me for explaining why I downvoted kodos.

ETA: The cynicism was in saying that money could replace love letters. Also, the original post was about quantum suicide and using it to find the most effective love letter, and the comment about money sort of missed the point, and read like a cheap shot against love.

Downvoted you for downvoting me for explaining why I downvoted you ;)

Are you saying that copypasted love letters are an adequate substitute for actual love? That sounds pretty cynical to me. But I still don't see any inappropriate cynicism coming from kodos.

Upvoted you for being meta.

No, it's not that the letters are an actual substitute for love, it's more the cynical attitude, "yeah, anyone will love you if you have enough money."

Wow.... just.... wow

First of all... it was a *joke*

Second of all, I don't see the idea of money being able to buy love as being any more or less cynical than randomly generated spam love letters being able to buy love...

Third of all... Blueberry, weren't you one of the people on the wrong side of the PUA debate? And you don't see any irony in now acting all holier than thou about cynical attitudes toward mating?

Fourth of all... it was a **joke**!!!! I mean, seriously people.

Regardless, upvoting both of you back up to 0, cause I don't think people should be penalized for explaining their downvotes when asked to do so.

ETA: Wow, this is getting ridiculous. I think it's now safe to say that Human Mating Habits Are The Mind Killer, even more so than politics.

ETA2: LOL@downvoting people in retaliation for explaining why they're *upvoting* you ;)

I thought this whole thread was a joke! And I'm sorry for any offense I caused.

Just for clarification: I'm not sure what you mean about the "wrong side" of the debate, but I support PUA and see it as a positive and productive method for helping men and women develop social skills, understand each other, and have better relationships. I see PUA as the opposite of cynical.

More to the point, it's not well substantiated that the individuals in question would be drawn to riches - there are many people who are, but not nearly 100% of the population. I met a woman who once had a member of The Eagles chatting her up and turned him down.

That's exactly as reasonable a test of the many worlds interpretation as 'flipping a coin' (or a quantum version thereof) lots of times and seeing whether you get all heads.

Oh, and I don't think you've factored in Lewis' point yet. What he's saying, in essence, is that you can 'never really die'. Even when the explosion goes off, the destruction of your head will have to proceed one micro-event after another, and if any one of those micro-events should be one that would finally 'extinguish' your consciousness then your awareness will (by the logic of quantum immortality) 'jump ship' to the somewhat-less-likely world where it doesn't happen.

So you'll end up 'finding yourself' in one of the fantastically unlikely worlds where the explosive only maims you.

So you'll end up 'finding yourself' in one of the fantastically unlikely worlds where the explosive only maims you.

This is precisely what my example avoids. There are substantially more worlds where you got a 1 and there was no explosion, than worlds where there was an explosion but you somehow managed to survive.

Hmm. OK, you have a point there.

Still, the mere fact that *if* your reasoning is valid *then* it must also be true that (as explained above) "you can never really die" constitutes a reductio.

Alternatively, if you want to say that your consciousness really *can* cease as long as it happens gradually, then how can there possibly be a principled boundary line between 'sudden enough that you'll survive' and 'not sudden enough'?

You spoke earlier of making sure that the method of death was faster than most thought processes, so as to avoid 'committing yourself' to a world where you die. But where's the boundary between 'committing yourself' and not doing so? Can you "only partially" commit yourself? How would that work?

Doesn't make sense.

Doesn't make sense.

Nope, it doesn't. Unfortunately, we don't need the many worlds hypothesis to run into this trouble. The trouble already exists in this single universe, assuming consciousness is computable. Just replace quantum world splitting with mind copying. Check out the Anthropic Trilemma.

But where's the boundary between 'committing yourself' and not doing so? Can you "only partially" commit yourself?

If I make an exact copy of you, wait X minutes, and then instantly kill one of you, how big must X be before this is murder? Beats me. I suspect there is no hard line.

> If I make an exact copy of you, wait X minutes, and then instantly kill one of you, how big must X be before this is murder? Beats me. I suspect there is no hard line.

I would be willing to undergo such a procedure for 10 dollars if X is a minute or less (and you don't kill me in front of me, no other adverse effects, etc.). If X is 10 minutes, probably about 100 dollars.

Interesting post!

Personally I think the third option is 'obviously correct'. There isn't really such a thing as a 'thread of persisting subjective identity'. And this undermines the idea that in the quantum suicide scenario you should 'expect to become' the miraculous survivor.

All we can say is that the multiverse contains 'miraculous observers' with tiny 'probability weights' attached to them - and we can even concede that some of them get round to thinking "hang on - surely this means Many Worlds is true?" But whether their less unlikely counterparts live or die doesn't affect this in any way.

> Applied quantum suicide. If we are in a Big World then all we really care about is probabilities

I've never taken quantum suicide seriously, even given MWI. You speak of probabilities, but what about measures? How do I gain by ensuring that the vast majority of possible futures do not contain anything resembling me, even if the majority of those that do give me a lottery jackpot? All I'm doing if I blow myself up for failing to win the lottery is erasing the overwhelming majority of my future selves, who if asked would very likely object.

Certainly, if you choose to care about total measure, by all means do so. Personally, I care about subjective experience, and couldn't give a blast what my total measure throughout the multiverse is (except insofar as it affects the subjective experience of other people).

We can modify probabilities conditional on the existence of future versions of ourselves, but those aren't necessarily the only probabilities we care about.

My antidote to this particular variety of universal acid in general and quantum suicide in particular: http://lesswrong.com/lw/208/the_iless_eye/

A competent and comprehensive critique of the ideas from your post would require much more thought and background reading than I've invested into it so far, but nevertheless, this key part strikes me as problematic:

> [I]f you combine a functionalist view of mind with big worlds cosmology, then reality becomes the quotient of the set of all possible computations, where all sub-computations that instantiate you are identified.

To talk about a quotient set or quotient space, you need a well-defined equivalence relation. But what would it be in this instance? The set of all possible computations that "instantiate you" in any meaningful sense is necessarily an extremely fuzzy concept, for reasons I'm sure I don't need to elaborate on here. So what exactly gets to be included into "your 'now'"?
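For reference, the standard quotient construction (textbook material, not from the post) makes clear exactly where the fuzziness bites: a quotient set is only well defined if the underlying relation is a genuine equivalence relation, and transitivity is the condition a fuzzy "instantiates you" relation is most likely to violate.

```latex
% Standard quotient construction. Given a set $S$ and an equivalence
% relation $\sim$ on $S$ (reflexive, symmetric, transitive), the
% equivalence class of $x$ and the quotient set are:
\[
  [x] = \{\, y \in S : y \sim x \,\}, \qquad
  S/\!\sim \;=\; \{\, [x] : x \in S \,\}.
\]
% For "computations that instantiate you", the natural candidate
% relation -- $c_1 \sim c_2$ iff both instantiate the same
% observer-moment -- is not obviously transitive once "instantiates"
% admits borderline cases, so the quotient may fail to be well defined.
```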

One way out of this, I suppose, would be to note that once you unwrap all the definitions, every mathematical object in ZFC is a set (of sets of sets of... -- perhaps infinite, and with empty sets as "bottom" elements), and then define "your 'now'" as the class of sets that contain subsets (or sub-sub-...-sets) that are *exactly* isomorphic to some "yardstick" set that represents "your 'now'." (I.e. those instances of "you" that are different in any detail at all are in a completely different class, and have no more relation to "you" than any other ones.) Similar could be done, of course, not just in ZFC but in any other theory that is sufficient to formalize standard mathematics.

Is this anywhere close to what you have in mind, or am I just rambling in complete misapprehension?

No, it is a real problem that you mention: namely that fuzziness could be hard to deal with.

Personally, I think that Shock Level 5 could be seen as emotionally dangerous for a human to take seriously, so beware.

Oh yes. I have already used the Many Worlds Interpretation as a rationalization for not signing up for cryonics. Arguing quantum immortality, in fact. But an uneasy sense of completely going crazy pointed out the fact that quantum suicide wouldn't be a good idea anyway, for my relatives would be very sad in the universes where I don't exist.

Phew. I'm not (too) crazy. Yet.

I don't find it to be emotionally dangerous. Rather, it resolves multiple emotional dangers from earlier surprises.

I like the use of the quotient set here. In fact, I would go on to use it more comprehensively: not only does our observer-moment define an equivalence class, but any particular context implementing it does, too. It could be a simulation, or a simulation in a simulation in a (...), a small corner of a more general mathematical system, anything. The point is that for any and every defined part, it too will always be part of a quotient; there will always be an indistinguishability of what's happening below.

As a result of this: does it mean anything to be 'a simulation'?

My own current thinking is that the Born rule - the everydayness of everyday life - is a reflection of how consciousness must function. I am just not entirely sure how yet...

Shock Level 5 is a good candidate for Dan Dennett's universal acid: an idea so corrosive that if we let it into our minds, everything we care about will be dissolved. You can't change anything in the multiverse - every decision or consequence that you don't make will be made infinitely many times elsewhere by near-identical copies of you. Every victory will be produced, as will every possible defeat.

I'm surprised this didn't link to Bostrom's Infinite Ethics [pdf].

"Almost all of the histories and futures that feed into your "now" are simulations, by Bostrom's simulation argument"

That isn't the conclusion to the simulation argument that Bostrom usually gives.

Damn, I guess God does exist.

Modal realism by itself seems no more shocking than superintelligence and the Singularity (i.e., SL4):

- Modal realism and the Singularity have followed comparable timelines of development. Modal realism was first proposed by David Lewis in 1968, the Singularity by Stanisław Ulam in 1958 and I. J. Good in 1965. The everything-list was created in 1998, SL4 in 1999 or 2000.
- Jürgen Schmidhuber and Max Tegmark both supported versions of modal realism before (publicly) declaring themselves to be Singularitarians, so apparently the former is not more shocking to academia than the latter.

The two ideas put together might have consequences more shocking than either of them alone, but since those are still highly speculative it's probably too early to declare a shock level 5.

Agreed. I didn't find modal realism terribly shocking, and I learned of it / semi-independently figured out my own version of it before I had heard of much >SL2 stuff.

Then again, I seem to have gone straight from SL1 to SL4, with no discomfort. Maybe I'm just hard to shock? Even so, putting myself in the shoes of someone who needs to gradually move up through the shock levels, I don't think modal realism would be *higher* than the Singularity; at most I'd put it alongside it at SL4.

I didn't get which version of 'you exist multiple times' you use.

- multiple people that look rather similar to you
- multiple people that share your experiences for a certain period of time (comparable to branching)
- multiple people that would be difficult for an outsider to distinguish, but still have major notable differences (differing inside experiences)
- multiple people that share your exact sensory experience up until now completely - which basically implies a decent copy of the whole earth, and some of the surrounding space to make it really really similar
- atomic/quantum identical copy - which needs both a copy of the earth, and then some, but also a high level of similar quantum events.

Each of these seems successively less likely to exist in a finite universe. In an infinite one they might all just be hanging around. But the last seems really, really unlikely to ever happen.

I'm going to repost here (with minor editing) a comment that I left in the open thread:

I'm unclear about what the statement "All mathematical structures exist" could mean, so I have a hard time evaluating its probability. I mean, what does it mean to say that a mathematical structure exists, over and above the assertion that the mathematical structure was, in some sense, available for its existence to be considered in the first place?

When I try to think about how I would fully flesh out the hypothesis that "All mathematical structures exist" to evaluate its complexity, all I can imagine is that you would have the source code for a program that recursively generates all mathematical structures, together with the source code of a second program that applies the tag "exists" to all the outputs of the first program.

Two immediate problems:

(1) To say that we can recursively generate all mathematical structures is to say that the collection of all mathematical structures is denumerable. Maintaining this position runs into complications, to say the least.

(2) More to the point that I was making above, nothing significant really follows from applying the tag "exists" to things. You would have functionally the same overall program if you applied the tag "is blue" to all the outputs of the first program instead. You aren't really saying anything just by applying arbitrary tags to things. But what else are you going to do?
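A toy sketch of the two-program picture above may make problem (2) vivid. The code is purely illustrative (the enumeration of binary strings stands in for an enumeration of mathematical structures, which by point (1) may not even exist): swapping the tag "exists" for "is blue" leaves the enumeration untouched.

```python
# Illustrative sketch of the "generate everything, then tag it" program
# pair from the comment above. Binary strings stand in for encodings of
# mathematical structures; the tag does no structural work at all.
from itertools import count, product

def all_binary_strings():
    """Yield every finite binary string, shortest first."""
    yield ""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def tagged(tag, limit):
    """Attach `tag` to the first `limit` enumerated strings."""
    gen = all_binary_strings()
    return [(s, tag) for s, _ in zip(gen, range(limit))]

# Problem (2) in one line: the tag changes nothing about what is enumerated.
exists = tagged("exists", 4)
blue = tagged("is blue", 4)
print(exists)  # [('', 'exists'), ('0', 'exists'), ('1', 'exists'), ('00', 'exists')]
print([s for s, _ in exists] == [s for s, _ in blue])  # True
```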

At this point, I find it less awkward to talk about the existence of mathematical structures than to talk about what it would mean to "physically exist" if that *doesn't* mean "instantiated in some mathematical object". We take it too much for granted.

Have you read Tegmark's papers? http://space.mit.edu/home/tegmark/crazy.html You should read The Multiverse Hierarchy if you haven't.

> Have you read Tegmark's papers?

No, I am going by the characterization of the hypothesis given on LW.

> You should read The Multiverse Hierarchy if you haven't.

Thanks for the link :). But I confess that I was hoping that someone could at least hint at Tegmark's answers to my questions. That would help me to decide whether reading the papers is worth it.

Sorry, I'm not sure how to satisfactorily answer your questions. Don't be intimidated by the paper though; Tegmark may be the best popular science writer out there and The Multiverse Hierarchy is written for a wide audience. It is not dumbed down; it just doesn't have any math. Even the "full-strength" paper is not very mathy in the scheme of physics papers.

> I'm unclear about what the statement "All mathematical structures exist" could mean,

The idea is to abolish the distinction between 'mathematical existence' and 'physical existence' or, if you like, between 'possibility' and 'actuality'. Of course all mathematical structures exist as mathematical structures. But it's not obvious (to say the least!) that all mathematical structures exist in the same sense that the physical universe exists.

**[deleted]**· 2010-05-26T03:48:01.893Z · score: 2 (2 votes) · LW · GW

If the physical universe *were* a purely mathematical structure - just part of the set of all ideas, implied by some rules of mathematics, but not existing in any way that 2+2=4 does not exist - then how would we, as part of the answer to a math problem, know the difference between that and 'really existing'?

> just part of the set of all ideas

For a start, we'd want to abandon the idea that mathematical structures are merely "ideas". A mathematician can have an idea of a structure, but the same abstract structure can often be conceived of in many different ways, and some structures are too complicated to be conceived of at all (e.g. a non-principal ultrafilter).

> implied by some rules of mathematics

A *structure* (like the set of natural numbers together with its arithmetical operations) is not the same thing as a *proposition* (like "2+2=4" or "addition of natural numbers is commutative"). Structures satisfy propositions, and it may or may not be possible to systematically investigate the propositions satisfied by a structure by setting out 'axioms' and 'rules of inference' (both of which I suppose you'd call "rules of mathematics").

> but not existing in any way that 2+2=4 does not exist

Better to say "not existing in any way that the numbers themselves don't exist".

> how would we, as part of the answer to a math problem, know the difference between that and 'really existing'?

The real question here is "how is it possible for a mathematical structure to contain an intelligent observer?" Once you have an intelligent observer they can in principle teach themselves logic and mathematics, which will entail finding out about mathematical structures other than the one they're inhabiting.

> But it's not obvious (to say the least!) that all mathematical structures exist in the same sense that the physical universe exists.

In the New Scientist version of Tegmark's mathematical universes paper he writes "every mathematical structure ... has physical existence." But what does "physical" add? When we learn the word "physical" as children we are referring to objects we see, feel, hear, etc., and to the laws of nature that describe them. But clearly a radically different mathematical structure, i.e. different from our laws of nature, is not on the same page, so to speak.

Consider ghosts. Suppose that ghosts exist pretty much as Hollywood depicts them, and also suppose that ghost behaviors and abilities follow (highly complex) mathematical laws, albeit radically different laws from QM and relativity. (Have I just supposed two contradictory things? I'm pretty sure I haven't.) Would ghosts then merit the label "physical"? I think they'd still be paradigms of the nonphysical, and the radical difference of the correct descriptions of ghosts versus particles would be the dead giveaway.

If we remove the (apparently unmerited) label "physical" and just assert that mathematical structures exist, there won't be much disagreement.

I'd call the ghosts physical but non-material.

> The idea is to abolish the distinction between 'mathematical existence' and 'physical existence' or, if you like, between 'possibility' and 'actuality'.

I understand that that is the intuitive idea. But how is the hypothesis to be formulated in such a way that we could evaluate its probability, even in principle?

I wrote this some years ago: Sink the Tegmark!. As you can see, I share your skepticism as to whether there's enough sense to be made of Tegmark's theory that we can derive empirical predictions from it.

Even so, you should definitely read Tegmark's original papers - he does address this question somewhat.

> the Copernican revolution with respect to your place in the multiverse

This is interesting... I saw Luciano Floridi give a talk recently where he talked about the "information revolution" and its relationship to past revolutions (borrowing from Freud's history). To summarize:

- Copernicus displaced us from the center of the universe
- Darwin displaced us from the center of the biosphere
- Freud revealed that we're not fully rational and transparent to ourselves
- The information revolution (which he identifies with Turing) revealed that we're not unique in terms of our ability to process information and be part of information-processing systems

I'd swap Turing for Wiener, but that's really an unimportant turf war.

ETA: relevant paper - www.philosophyofinformation.net/publications/pdf/tisip.pdf - pdf warning

Whoops, formatting issue.

Thanks. I'm not sure if it's the web app or Google Chrome.

I've seen a lot of HTML errors from the Less Wrong page on every browser I've tried.

I've noticed that too. What's odder is that they seem to come and go. I have no evidence, but I swear that sometimes I'll see a link to google in a comment on one day, and the link rendered as /">google in the same comment on another day, etc. Strange.

I have noticed this too. It looks like bytes get randomly corrupted or deleted during transfer. This would indicate a hardware problem. Next time I see it, I'll 'View Source' and observe exactly what's wrong with the HTML I received.

The subjective experience so far doesn't imply multiple parallel worlds influencing the same person similarly (at least my own experience does not). I don't really see how that is supposed to change. I also don't particularly care about people in unreachable worlds, that are very similar to me.

These observations don't contradict the (general message of the) theory.

You attribute a lot of consequences (eg, determinism?!) to this multiverse that are already consequences of much more conservative theories. The only further consequence I see you mention is the problem of finite measure.

I don't think that the multiverse of MWI can be sensibly identified with Tegmark's multiverse; if we accept the latter, the former is just one of the universes that makes it up. The multiverse of MWI is one, complete, standalone mathematical structure; it is perhaps a multiverse from our usual point of view, but from the Tegmark multiverse point of view, it should be considered as just one universe, of which we only care about a small part.

Yes. Tegmark talks about four levels of multiverses: MWI is level 3, and the mathematical universe is level 4.

Oh, hah, that was pretty silly of me. I'd forgotten that distinction.

On that note, ISTM that the "levels" don't really form a total order - rather than 0-1-2-3-4, it seems more like 0-1-2-4 and 0-3-4 with 3 incomparable to 1 and 2.

Yeah, Tegmark says that level 3 doesn't give you anything bigger than 1 and 2 together do.