Any evidence or reason to expect a multiverse / Everett branches?

post by lukehmiles (lcmgcd) · 2024-04-09T05:26:30.990Z · LW · GW · 122 comments


My understanding is that pilot wave theory (ie Bohmian mechanics) explains all the quantum physics with no weirdness like "superposition collapse" or "every particle interaction creates n parallel universes which never physically interfere with each other". It is not fully "local" but who cares?

Is there any reason at all to expect some kind of multiverse? Why is the multiverse idea still heavily referenced (eg in acausal trade posts)?

 

Edit April 11: I challenge the properly physics-brained people here (I am myself just a Q poster) to prove my guess wrong: Can you get the Born rule with clean hands this way?

They also implicitly claim that in order for the Born rule to work [under pilot wave], the particles have to start the sim following the psi^2 distribution. I think this is just false, and eg a wide normal distribution will converge to psi^2 over time as the system evolves. (For a non-adversarially-chosen system.) I don't know how to check this. Has someone checked this? Am I looking at this right?
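For anyone who wants to poke at this numerically, here is a minimal sketch (my construction, not from any of the papers; assumed setup: a 1D box with hbar = m = 1, an equal-weight superposition of the first four eigenstates with random phases, Euler integration of the guidance equation). It starts an ensemble from a wide normal distribution, deliberately not psi^2, and measures how close the histogram gets to psi^2. Whether it converges is exactly the contested question, and the integration gets stiff near nodes of psi, so treat it as a probe rather than a proof:

import numpy as np

rng = np.random.default_rng(0)
L = 1.0
modes = np.array([1, 2, 3, 4])
coeffs = np.exp(2j * np.pi * rng.random(4)) / 2.0   # equal weights, random phases
energies = (modes * np.pi / L) ** 2 / 2.0           # E_n = (n pi / L)^2 / 2

def psi_and_grad(x, t):
    # Wavefunction and its spatial derivative at positions x, time t.
    val = np.zeros_like(x, dtype=complex)
    grad = np.zeros_like(x, dtype=complex)
    for c, n, E in zip(coeffs, modes, energies):
        k = n * np.pi / L
        a = c * np.sqrt(2.0 / L) * np.exp(-1j * E * t)
        val += a * np.sin(k * x)
        grad += a * k * np.cos(k * x)
    return val, grad

# Start from a wide normal distribution, deliberately NOT |psi|^2.
x = np.clip(rng.normal(0.5 * L, 0.2 * L, size=5000), 1e-3, L - 1e-3)

dt, steps = 1e-4, 20000
for i in range(steps):
    val, grad = psi_and_grad(x, i * dt)
    v = np.imag(grad / val)          # guidance equation: dx/dt = Im(psi'/psi)
    x = np.clip(x + v * dt, 1e-3, L - 1e-3)

hist, edges = np.histogram(x, bins=40, range=(0.0, L), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
target = np.abs(psi_and_grad(centers, steps * dt)[0]) ** 2
print("coarse-grained gap to |psi|^2:", np.sum(np.abs(hist - target)) / 40)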

 

Edit April 9: Well, pilot wave vs many worlds is a holy-war topic. People have pointed out excellent non-holy-war material:

122 comments

Comments sorted by top scores.

comment by tangerine · 2024-04-09T21:42:38.030Z · LW(p) · GW(p)

It’s the simplest explanation (in terms of Kolmogorov complexity).

It’s also the interpretation which by far has the most elegant explanation for the apparent randomness of reality. Most interpretations provide no mechanism for the selection of a specific outcome, which is absurd. Under the MWI, randomness emerges from determinism through indexical uncertainty, i.e., not knowing which branch you’re in. Some people, such as Sabine Hossenfelder, get confused by this and ask, “then why am I this version of me?”, which implicitly assumes dualism, as if there were a free-floating consciousness which could in principle inhabit any branch; this is patently untrue because you are by definition this “version” of you. If you were someone else (including someone in a different branch where one of your atoms is moved by one Planck distance) then you wouldn’t be you; you would be literally someone else.

Note that the Copenhagen interpretation is also a many-worlds explanation, but with the added assumption that all but one randomly chosen world disappears when an “observation” is made, i.e., when entanglement with your branch takes place.

Replies from: lombertini, lcmgcd, TAG
comment by titotal (lombertini) · 2024-04-10T11:33:41.866Z · LW(p) · GW(p)

It’s the simplest explanation (in terms of Kolmogorov complexity).

 

Do you have proof of this? I see this stated a lot, but I don't see how you could know this when certain aspects of MWI theory (like how you actually get the Born probabilities) are unresolved. 

Replies from: interstice, adastra22, tangerine, rhollerith_dot_com
comment by interstice · 2024-04-10T17:01:19.328Z · LW(p) · GW(p)

certain aspects of MWI theory (like how you actually get the Born probabilities) are unresolved

You can add the Born probabilities in with minimal additional Kolmogorov complexity: simply stipulate that worlds with a given amplitude have probabilities given by the Born rule. (This does admittedly weaken the "randomness emerges from indexical uncertainty" aspect...)

Replies from: TAG
comment by TAG · 2024-04-11T17:28:07.550Z · LW(p) · GW(p)
comment by adastra22 · 2024-04-14T19:26:53.740Z · LW(p) · GW(p)

The proof is the last paragraph of his post.

comment by tangerine · 2024-04-11T16:08:13.292Z · LW(p) · GW(p)

Every non-deterministic interpretation has a virtually infinite Kolmogorov complexity because it has to hardcode the outcome of each random event.

Hidden-variables interpretations are uncomputable because they are incomplete.

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-11T23:23:54.070Z · LW(p) · GW(p)

Hidden-variables interpretations are uncomputable because they are incomplete.

Are they complete if you include the hidden variables? Maybe I'm misunderstanding you.

Replies from: tangerine
comment by tangerine · 2024-04-12T17:01:01.824Z · LW(p) · GW(p)

Yes. My bad, I shouldn’t have implied all hidden-variables interpretations.

comment by RHollerith (rhollerith_dot_com) · 2024-04-10T15:53:56.474Z · LW(p) · GW(p)

Being uncertain of the implications of a hypothesis has no bearing on the Kolmogorov complexity of that hypothesis.

Replies from: lombertini
comment by titotal (lombertini) · 2024-04-10T16:35:05.481Z · LW(p) · GW(p)

I'm not talking about the implications of the hypothesis, I'm pointing out the hypothesis itself is incomplete. To simplify, if you observe an electron which has a 25% chance of spin up and 75% chance of spin down, naive MWI predicts that one version of you sees spin up and one version of you sees spin down. It does not explain where the 25% or 75% numbers come from. Until we have a solution to that problem (and people are trying), you don't have a full theory that gives predictions, so how can you estimate its Kolmogorov complexity?

I am a physicist who works in a quantum related field, if that helps you take my objections seriously. 

Replies from: rhollerith_dot_com, gilch
comment by RHollerith (rhollerith_dot_com) · 2024-04-11T17:23:36.650Z · LW(p) · GW(p)

Is it impossible that someday someone will derive the Born rule from Schrödinger's equation (plus perhaps some of the "background assumptions" relied on by the MWI)?

Replies from: TAG, gilch, lcmgcd
comment by TAG · 2024-04-14T17:00:17.324Z · LW(p) · GW(p)

People keep coming up with derivations, and other people keep coming up with criticisms of them, which is why people keep coming up with new ones.

comment by gilch · 2024-04-11T23:56:34.096Z · LW(p) · GW(p)

Didn't Carroll already do that? Is something still missing?

Replies from: lombertini, rhollerith_dot_com, Signer
comment by titotal (lombertini) · 2024-04-22T09:04:08.877Z · LW(p) · GW(p)

No, I don't believe he did, but I'll save the critique of that paper for my upcoming "why MWI is flawed" post.  

comment by RHollerith (rhollerith_dot_com) · 2024-04-12T13:52:01.804Z · LW(p) · GW(p)

I wouldn't be surprised to learn that Sean Carroll already did that!

comment by Signer · 2024-04-16T16:41:13.269Z · LW(p) · GW(p)

Carroll's additional assumptions are not relied on by the MWI.

comment by lukehmiles (lcmgcd) · 2024-04-11T23:28:14.661Z · LW(p) · GW(p)

Could it be you? Maybe you have a thought on what I said in this other comment?

They also implicitly claim that in order for the Born rule to work [under pilot wave], the particles have to start the sim following the psi^2 distribution. I think this is just false, and eg a wide normal distribution will converge to psi^2 over time as the system evolves. (For a non-adversarially-chosen system.) I don't know how to check this. Has someone checked this? Am I looking at this right?

comment by gilch · 2024-04-10T20:52:38.078Z · LW(p) · GW(p)

OK, what exactly is wrong with Sean Carroll's derivation?

Replies from: Signer
comment by Signer · 2024-04-16T17:50:52.516Z · LW(p) · GW(p)

The wrong part is mostly in https://arxiv.org/pdf/1405.7577.pdf, but: indexical probabilities of being a copy are value-laden. It seems like the derivation first assumes that branching happens globally and then assumes that you are forbidden to count different instantiations of yourself that were created by this global process.

comment by lukehmiles (lcmgcd) · 2024-04-10T05:35:31.551Z · LW(p) · GW(p)

It’s the simplest explanation (in terms of Kolmogorov complexity).

Hmm I think I can implement pilot wave in fewer lines of C than I can many-worlds. Maybe this is a matter of taste... or I am missing something?

It’s also the interpretation which by far has the most elegant explanation for the apparent randomness of reality.

I thought pilot wave's explanation was (very roughly) "of course you cannot say which way the particle will go because you cannot accurately measure it without moving it" plus roughly "that particle is bouncing around a whole lot on its wave, so its exact position when it hits the wall will look random". I find this quite elegant, but that's also a matter of taste perhaps. If this oversimplification is overtly wrong then please tell me.

Replies from: adele-lopez-1, TAG
comment by Adele Lopez (adele-lopez-1) · 2024-04-11T17:21:39.437Z · LW(p) · GW(p)

Hmm I think I can implement pilot wave in fewer lines of C than I can many-worlds. Maybe this is a matter of taste... or I am missing something?

Now simply delete the ~~pilot wave part~~ piloted part.

Replies from: gilch, lcmgcd
comment by gilch · 2024-04-11T23:54:41.636Z · LW(p) · GW(p)

You mean, "Now simply delete the superfluous corpuscles." We need to keep the waves.

comment by lukehmiles (lcmgcd) · 2024-04-11T23:33:11.100Z · LW(p) · GW(p)

I admit I have not implemented so much as a quantum fizzbuzz in my life

comment by TAG · 2024-04-12T08:33:34.968Z · LW(p) · GW(p)

Bohmian mechanics adds hidden variables. Why would it be simpler?

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-12T09:37:07.718Z · LW(p) · GW(p)

I think I was wrong and you & Adele Lopez are right and pilot wave would be more lines. I am concerned about god's RAM though... Maybe if they've got good hardware for low-rank matrices then it's fine.
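For what it's worth, the RAM worry may cut the other way. In a brute-force state-vector simulation (a toy accounting I'm assuming here, not anyone's actual physics engine), the wavefunction dominates memory under either interpretation, and the hidden particles are a rounding error on top:

def wavefunction_bytes(n_qubits, bytes_per_amplitude=16):
    # Both interpretations carry the full state vector: 2^n complex amplitudes.
    return (2 ** n_qubits) * bytes_per_amplitude

def bohm_extra_bytes(n_particles, bytes_per_coord=8):
    # Pilot wave adds one 3D coordinate per hidden particle on top of that.
    return 3 * n_particles * bytes_per_coord

for n in (10, 30, 50):
    print(n, "qubits:", wavefunction_bytes(n) / 1e9, "GB of wavefunction,",
          bohm_extra_bytes(n), "bytes of hidden variables")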

comment by TAG · 2024-04-10T17:18:19.689Z · LW(p) · GW(p)

"it" isn't a single theory.

The argument that Everettian MW is favoured by Solomonoff induction, is flawed.

If the program running the SWE outputs information about all worlds on a single output tape, they are going to have to be concatenated or interleaved somehow. Which means that to make use of the information, you have to identify the subset of bits relating to your world. That's extra complexity which isn't accounted for, because it's being done by hand, as it were.

Replies from: DaemonicSigil
comment by DaemonicSigil · 2024-04-10T18:12:07.454Z · LW(p) · GW(p)

Disagree.

If you're talking about the code complexity of "interleaving": If the Turing machine simulates quantum mechanics at all, it already has to "interleave" the representations of states for tiny things like an electron being in a superposition of spin states or whatever. This must be done in order to agree with experimental results. And then at that point not having to put in extra rules to "collapse the wavefunction" makes things simpler.

If you're talking about the complexity of locating yourself in the computation: Inferring which world you're in is equally complex to inferring which way all the Copenhagen coin tosses came up. It's the same number of bits. (In practice, we don't have to identify our location down to a single world, just as we don't care about the outcome of all the Copenhagen coin tosses.)
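A toy version of that equivalence (my illustration, with made-up numbers): after n binary measurements, specifying an MWI branch and specifying a Copenhagen outcome record are the same n-bit string.

import itertools, math

n = 3
p_up = 0.25   # per-measurement probability of "up" (Born weights assumed)

# MWI view: all 2^n branches exist, each indexed by its outcome string.
branches = {"".join(t): math.prod(p_up if b == "1" else 1 - p_up for b in t)
            for t in itertools.product("01", repeat=n)}

# Copenhagen view: one history, an n-bit record of outcomes.
my_history = "101"
print("bits to locate my branch:", len(my_history))   # same n bits either way
print("Born weight of that branch:", branches[my_history])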

Replies from: TAG
comment by TAG · 2024-04-10T18:50:35.798Z · LW(p) · GW(p)

I'm not talking about the code complexity of interleaving the SI's output.

I am talking about interpreting the serial output of the SI... de-interleaving, as it were. If you account for that, then the total complexity is exactly the same as Copenhagen, and that's the point. I'm not a dogmatic Copenhagenist, so that's not a gotcha.

Basically, the amount of calculation you have to do to get an empirically adequate theory is the same under any interpretation, because interpretations don't change the maths, they just... interpret it... differently. The SI argument for MWI only seems to work because it encourages the reader to neglect the complexity implicit in interpreting the output tape.

Replies from: DaemonicSigil
comment by DaemonicSigil · 2024-04-11T01:54:07.185Z · LW(p) · GW(p)

Right, so we both agree that the randomness used to determine the result of a measurement in Copenhagen, and the information required to locate yourself in MWI is the same number of bits. But the argument for MWI was never that it had an advantage on this front, but rather that Copenhagen used up some extra bits in the machine that generates the output tape in order to implement the wavefunction collapse procedure. (Not to decide the outcome of the collapse, those random bits are already spoken for. Just the source code of the procedure that collapses the wavefunction and such.) Such code has to answer questions like: Under what circumstances does the wavefunction collapse? What determines the basis the measurement is made in? There needs to be code for actually projecting the wavefunction and then re-normalizing it. This extra complexity is what people mean when they say that collapse theories are less parsimonious/have extra assumptions.
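A caricature of the source-code comparison being made here (my sketch, not a real physics simulator): both "machines" share the unitary step, and the collapse version needs extra rules bolted on.

import numpy as np

def step_mwi(psi, U):
    # Many-worlds simulator: the unitary step is the whole law.
    return U @ psi

def step_collapse(psi, U, measured, basis, rng):
    # Collapse simulator: same unitary step, plus extra rules.
    psi = U @ psi
    if measured:                                   # rule 1: when collapse happens
        probs = np.abs(basis.conj().T @ psi) ** 2  # rule 2: in which basis
        k = rng.choice(len(probs), p=probs / probs.sum())
        psi = basis[:, k] * (basis[:, k].conj() @ psi)
        return psi / np.linalg.norm(psi)           # rule 3: project and renormalise
    return psi

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(step_collapse(np.array([1.0 + 0j, 0.0]), H, True, np.eye(2), rng))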

Replies from: TAG
comment by TAG · 2024-04-11T02:59:21.247Z · LW(p) · GW(p)

but rather that Copenhagen used up some extra bits in the machine that generates the output tape in order to implement the wavefunction collapse procedure.

Again: that's that much less calculation that the reader of the tape has to do.

Replies from: DaemonicSigil
comment by DaemonicSigil · 2024-04-11T20:23:58.772Z · LW(p) · GW(p)

Amount of calculation isn't so much the concern here as the amount of bits used to implement that calculation. And there's no law that forces the amount of bits encoding the computation to be equal. Copenhagen can just waste bits on computations that MWI doesn't have to do.

In particular, I mentioned earlier that Copenhagen has to have rules for when measurements occur and what basis they occur in. How does MWI incur a similar cost? What does MWI have to compute that Copenhagen doesn't that uses up the same number of bits of source code?

Like, yes, an expected-value-maximizing agent that has a utility function similar to ours might have to do some computations that involve identifying worlds, but the complexity of the utility function doesn't count against the complexity of any particular theory. And an expected value maximizer is naturally going to try and identify its zone of influence, which is going to look like a particular subset of worlds in MWI. But this happens automatically exactly because the thing is an EV-maximizer, and not because the laws of physics incurred extra complexity in order to single out worlds.

Replies from: TAG
comment by TAG · 2024-04-12T05:50:49.325Z · LW(p) · GW(p)

Amount of calculation isn’t so much the concern here as the amount of bits used to implement that calculation. And there’s no law that forces the amount of bits encoding the computation to be equal. Copenhagen can just waste bits on computations that MWI doesn’t have to do

And vice versa. You can do unnecessary calculation under any interpretation, so that's an uninteresting observation.

The important point is that the minimum amount of calculation you have to do to get an empirically adequate theory is the same under any interpretation, because interpretations don't change the maths, they just... interpret it... differently. In particular, a many-worlder has to discard unobserved results in the same way as a Copenhagenist -- it's just that they interpret doing so as the unobserved results existing in another branch, rather than being snipped off by collapse. The maths is the same, the interpretation is different. You can also do the maths without interpreting it, as in Shut Up And Calculate.

Copenhagen has to have rules for when measurements occur and what basis they occur in

This gets back to a long-standing confusion between Copenhagen and objective collapse theories (here, I mean, not in the actual physics community). Copenhagen, properly speaking, only claims that collapse occurs on or before measurement. It also claims that nothing is known about the ontology of the system before collapse -- it's not the case that anything "is" a wave function. An interpretation of QM doesn't have to have an ontology, and many don't. Which, of course, is another factor that renders the whole Kolmogorov complexity approach inoperable.

Objective collapse theories like GRW do have to specify when and how collapse occurs... but MW theories have to specify when and how decoherence occurs. Decoherence isn't simple.

Replies from: Signer, DaemonicSigil
comment by Signer · 2024-04-16T18:23:48.416Z · LW(p) · GW(p)

In particular, a many-worlder has to discard unobserved results in the same way as a Copenhagenist—it’s just that they interpret doing so as the unobserved results existing in another branch, rather than being snipped off by collapse.

A many-worlder doesn't have to discard unobserved results - you may care about other branches.

Replies from: TAG
comment by TAG · 2024-04-17T14:28:19.067Z · LW(p) · GW(p)

I am talking about the minimal set of operations you have to perform to get experimental results. A many-worlder may care about other branches philosophically, but if they don't renormalise, their results will be wrong, and if they don't discard, they will do unnecessary calculation.

comment by DaemonicSigil · 2024-04-12T19:57:07.011Z · LW(p) · GW(p)

MW theories have to specify when and how decoherence occurs. Decoherence isn't simple.

They don't actually. One could equally well say: "Fundamental theories of physics have to specify when and how increases in entropy occur. Thermal randomness isn't simple." This is wrong because once you've described the fundamental laws and they happen to be reversible, and also aren't too simple, increasing entropy from a low entropy initial state is a natural consequence of those laws. Similarly, decoherence is a natural consequence of the laws of quantum mechanics (with a not-too-simple Hamiltonian) applied to a low entropy initial state.
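A minimal toy model of that claim (mine, with assumed coupling strengths): let a superposed system qubit interact with k environment qubits via purely unitary conditional rotations, and the off-diagonal element of its reduced density matrix dies off exponentially in k. No collapse rule is invoked anywhere.

import numpy as np

rng = np.random.default_rng(1)

def off_diagonal(k):
    # System qubit starts in (|0> + |1>)/sqrt(2); each of k environment qubits
    # gets rotated by a random angle only if the system is |1>. The environment
    # states conditional on |0> vs |1> then overlap as prod(cos(theta_j)), and
    # the system's reduced density matrix has |rho_01| = overlap / 2.
    thetas = rng.uniform(0.2, 1.2, size=k)   # assumed coupling strengths
    return 0.5 * abs(np.prod(np.cos(thetas)))

for k in (0, 1, 5, 20, 100):
    print(k, "environment qubits -> |rho_01| =", off_diagonal(k))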

Replies from: TAG
comment by TAG · 2024-04-13T16:11:50.801Z · LW(p) · GW(p)

MW has to show that decoherence is a natural consequence, which is the same thing. It can't be taken on faith, any more than entropy should be. Proofs of entropy were supplied a long time ago; proofs of decoherence of a suitable kind are a work in progress.

Replies from: DaemonicSigil
comment by DaemonicSigil · 2024-04-24T22:16:27.170Z · LW(p) · GW(p)

So once that research is finished, assuming it is successful, you'd agree that many worlds would end up using fewer bits in that case? That seems like a reasonable position to me, then! (I find the partial-trace kinds of arguments that people make pretty convincing already, but it's reasonable not to.)

Replies from: TAG
comment by TAG · 2024-04-26T13:52:53.020Z · LW(p) · GW(p)

The other problem is that MWI is up against various subjective and non-realist interpretations, so it's not the case that you can build an ontological model of every interpretation.

comment by Charlie Steiner · 2024-04-09T10:34:50.673Z · LW(p) · GW(p)

My understanding is that pilot wave theory (ie Bohmian mechanics) explains all the quantum physics

This is only true if you don't count relativistic field theory. Bohmian mechanics has mathematical troubles extending to special relativity or particle creation/annihilation operators.

Is there any reason at all to expect some kind of multiverse?

Depending on how big you expect the unobservable universe to be, there can also be a spacelike multiverse.

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-10T05:38:05.782Z · LW(p) · GW(p)

Could you illustrate the relativistic field theory issue (or link to illustration)? My blind assumption is that people did not try very hard to make it work.

(Spacelike multiverse seems like legit possibility to me.)

Replies from: Charlie Steiner, Domenic
comment by Charlie Steiner · 2024-04-10T13:46:10.029Z · LW(p) · GW(p)

I found someone's thesis from 2020 (Hoi Wai Lai) that sums it up not too badly (from the perspective of someone who wants to make Bohmian mechanics work and was willing to write a thesis about it).

For special relativity (section 6), the problem is that the motion of each hidden particle depends instantaneously on the entire multi-particle wavefunction. According to Lai, there's nothing better than to bite the bullet and define a "real present" across the universe, and have the hyperparticles sometimes go faster than light. What hypersurface counts as the real present is unobservable to us, but the motion of the hidden particles cares about it.

For varying particle number (section 7.4), the problem is that in quantum mechanics you can have a superposition of states with different numbers of particles. If there's some hidden variable tracking which part of the superposition is "real," this hidden variable has to behave totally different than a particle! Lai says this leads to "Bell-type" theories, where there's a single hidden variable, a hidden trajectory in configuration space. Honestly this actually seems more satisfactory than how it deals with special relativity - you just had to sacrifice the notion of independent hidden variables behaving like particles, you didn't have to allow for superluminal communication in a way that highlights how pointless the hidden variables are.

Warning: I have exerted basically no effort to check if this random grad student was accurate.

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-11T23:41:29.878Z · LW(p) · GW(p)

the problem is that in quantum mechanics you can have a superposition of states with different numbers of particles ... If there's some hidden variable tracking which part of the superposition is "real," this hidden variable has to behave totally different than a particle!

Oh this seems like a pretty big ding to pilot wave if I understand correctly and it's correct

comment by Domenic · 2024-04-11T08:24:23.603Z · LW(p) · GW(p)

I can help confirm that your blind assumption is false. Source: my undergrad research was with a couple of the people who have tried hardest, which led to me learning a lot about the problem. (Ward Struyve and Samuel Colin.) The problem goes back to Bell and has been the subject of a dedicated subfield of quantum foundations scholars ever since.

This many years distant, I can't give a fair summary of the actual state of things. But a possibly unfair summary based on vague recollections is: it seems like the kind of situation where specialists have something that kind of works, but people outside the field don't find it fully satisfying. (Even people in closely adjacent fields, i.e. other quantum foundations people.) For example, one route I recall abandons using position as the hidden variable, which makes one question what the point was in the first place, since we no longer recover a simple manifest image where there is a "real" notion of particles with positions. And I don't know whether the math fully worked out all the way up to the complexities of the standard model weakly coupled to gravity. (As opposed to, e.g., only working with spin-1/2 particles, or something.)

Now I want to go re-read some of Ward's papers...

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-11T23:43:17.644Z · LW(p) · GW(p)

(If you do go reread then I would love to read some low-effort notes on it similar to your recollection above.)

comment by lc · 2024-04-10T12:13:01.800Z · LW(p) · GW(p)

I would like to ask a followup question: since we don't have a unified theory of physics yet, why isn't strongly adopting any one of these nonpredictive interpretations premature? It seems like trying to "interpret" gravity without knowing about general relativity.

Replies from: Mitchell_Porter, lcmgcd
comment by Mitchell_Porter · 2024-04-13T07:16:38.568Z · LW(p) · GW(p)

Standard model coupled to gravitons is already kind of a unified theory. There are phenomena at the edges (neutrino mass, dark matter, dark energy) which don't have a consensus explanation, as well as unresolved theoretical issues (Higgs finetuning, quantum gravity at high energies), but a well-defined "theory of almost everything" does already exist for accessible energies. 

Replies from: lc
comment by lc · 2024-04-13T15:45:21.600Z · LW(p) · GW(p)

How is this different from the situation in the late 19th century when only a few things left seemed to need a "consensus explanation"?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2024-04-22T04:46:23.572Z · LW(p) · GW(p)

In Engines of Creation ("Will physics again be upended?"), @Eric Drexler [LW · GW] pointed out that prior to quantum mechanics, physics had no calculable explanations for the properties of atomic matter. "Physics was obviously and grossly incomplete... It was a gap not in the sixth place of decimals but in the first."

That gap was filled, and it's an open question whether the truth about the remaining phenomena can be known by experiment on Earth. I believe in trying to know, and it's very possible that some breakthrough in e.g. the foundations of string theory or the hard problem of consciousness will have decisive implications for the interpretation of quantum mechanics.

If there's an empirical breakthrough that could do it, my best guess is some quantum-gravitational explanation for the details of dark matter phenomenology. But until that happens, I think it's legitimate to think deeply about "standard model plus gravitons" and ask what it implies for ontology. 

comment by lukehmiles (lcmgcd) · 2024-04-11T23:56:57.292Z · LW(p) · GW(p)

What do you think of my unified theory?

if small stuff:
    do quantum
else if big stuff:
    do relativity
else:
    raise error "The two theories are never supposed to
        make contradictory predictions in any physically possible
        configuration of matter right? So the above code is fine?"

Furthermore, one cannot participate in the culture war in earnest if one does not strongly adopt one of these nonpredictive interpretations.

Furthermore, the interpretations do make some predictions at least about what kind of computer god is running. We may never find the right experiment...

comment by Garrett Baker (D0TheMath) · 2024-04-09T05:48:55.185Z · LW(p) · GW(p)

I've usually heard the justification for favoring Everett over pilot wave theory is on simplicity terms. We can explain everything we need in terms of just wave functions interacting with other wave functions, why also add in particles to the mix too? You get more complicated equations (so I'm told), with a greater number of types of objects, and a less elegant theory, for what? More intuitive metaphysics? Bah!

Though the real test is experimental, as you know. I don't think there are any experiments which separate out the two hypotheses, so it really is still up in the air which is actually a better description of our universe.

Replies from: quetzal_rainbow, Domenic, metachirality
comment by quetzal_rainbow · 2024-04-09T07:44:25.930Z · LW(p) · GW(p)

The general problem with "more intuitive metaphysics" is that your intuition is not my intuition. My intuition finds zero problem with many worlds interpretation.

And I think you underestimate the complexity issues. The many-worlds interpretation requires only as much information as the wave function contains, but pilot wave requires as much information as it takes to describe the velocities and positions of all particles compatible with the wave function, which for a universe with 10^80 particles requires c*10^80 (c >= 1) additional bits, which drives the Solomonoff probability of the pilot wave interpretation down to almost nothing.

Replies from: D0TheMath, TAG, Signer
comment by Garrett Baker (D0TheMath) · 2024-04-09T08:00:22.805Z · LW(p) · GW(p)

The question is not how big the universe under various theories is, but how complicated the equations describing that theory are.

Otherwise, we’d reject the so-called “galactic” theory of star formation, in favor of the 2d projection theory, which states that the night sky only appears to have far distant galaxies, but is instead the result of a relatively complicated (wrt to newtonian mechanics) cellular automata projected onto our 2d sky. You see, the galactic theory requires 6 parameters to describe each object, and posits an enormously large number of objects, while the 2d projection theory requires but 4 parameters, and assumes an exponentially smaller number of particles, making it a more efficient compression of our observations.

see also

Replies from: quetzal_rainbow
comment by quetzal_rainbow · 2024-04-09T09:11:46.964Z · LW(p) · GW(p)

You somehow managed to misunderstand me in the completely opposite direction. I'm not talking about the size of the universe, I'm talking about the complexity of the description of the universe. The description of the universe consists of initial conditions and laws of evolution. The problem with hidden-variables hypotheses is that they postulate initial conditions of enormous complexity (literally, they postulate that at the start of the universe a list of all coordinates and velocities of all particles exists) and then postulate laws of evolution that don't allow one to observe any differences between these enormously complex initial conditions and maximum-entropy initial conditions. Both add complexity, but the hidden variables contain most of it.

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2024-04-09T15:39:29.440Z · LW(p) · GW(p)

My apologies

comment by TAG · 2024-04-09T12:17:28.339Z · LW(p) · GW(p)

The main reason for not favouring the Everett interpretation is that it doesn't predict classical observations, unless you make further assumptions about basis, the "preferred basis problem". There is therefore room for an even simpler interpretation.

There is an approach to MWI based on coherent superpositions, and a version based on decoherence. These are (for all practical purposes) incompatible opposites, but are treated as interchangeable in Yudkowsky's writings.

The original, Everettian, or coherence-based approach is minimal, but fails to predict classical observations. (At all. It fails to predict the appearance of a broadly classical universe.) The later, decoherence-based approach is more empirically adequate, but seems to require additional structure, placing its simplicity in doubt.

Coherent superpositions probably exist, but their components aren't worlds in any intuitive sense. Decoherent branches would be worlds in the intuitive sense, and while there is evidence of decoherence, there is no evidence of decoherent branching. There could be a theoretical justification for decoherent branching, but that is what much of the ongoing research is about -- it isn't a done deal, and therefore not a "slam dunk". And, inasmuch as there is no agreed mechanism for decoherent branching, there is no definite fact about the simplicity of decoherent MWI.

Replies from: tailcalled, artifex0
comment by tailcalled · 2024-04-09T13:28:56.822Z · LW(p) · GW(p)

I'm confused about what distinction you are talking about, possibly because I haven't read Everett's original proposal.

Replies from: TAG
comment by TAG · 2024-04-09T16:59:11.115Z · LW(p) · GW(p)

Everett's thesis doesn't give an answer to how an observer makes sharp-valued classical observations, and doesn't flag the issue either, although much of the subsequent literature does.

Eg. https://iep.utm.edu/everett/ for an overview (also why it's more than one theory, and a work-in-progress).

Replies from: Signer
comment by Signer · 2024-04-16T18:30:53.340Z · LW(p) · GW(p)

What's the evidence for these "sharp-valued classical observations" being real things?

Replies from: TAG
comment by TAG · 2024-04-17T14:24:40.516Z · LW(p) · GW(p)

Err...physicists can make them in the laboratory. Or were you asking whether they are fundamental constituents of reality?

Replies from: Signer
comment by Signer · 2024-04-17T15:23:41.534Z · LW(p) · GW(p)

I'm asking how physicists in the laboratory know that their observations are sharp-valued and classical?

Replies from: TAG
comment by TAG · 2024-04-17T15:40:19.570Z · LW(p) · GW(p)

Same way you know anything. "Sharp valued" and "classical" have meanings, which cash out in expected experience.

comment by artifex0 · 2024-04-09T23:31:59.606Z · LW(p) · GW(p)
comment by Signer · 2024-04-09T17:54:25.789Z · LW(p) · GW(p)

My intuition finds zero problem with many worlds interpretation.

Why do you care about the Born measure?

comment by Domenic · 2024-04-11T08:27:54.366Z · LW(p) · GW(p)

If you can find non-equilibrium quantum states, they are distinguishable. https://en.m.wikipedia.org/wiki/Quantum_non-equilibrium

(Seems pretty unlikely we'd ever be able to definitively say a state was non-equilibrium instead of some other weirdness, though.)

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-11T23:48:02.984Z · LW(p) · GW(p)

I don't understand what you're replying to. Is this about a possible experiment? What experiment could you do?

Replies from: Domenic
comment by Domenic · 2024-04-12T02:20:23.024Z · LW(p) · GW(p)

Finding non-equilibrium quantum states would be evidence of pilot wave theory since they're only possible in a pilot wave theory.

comment by metachirality · 2024-04-09T13:56:04.367Z · LW(p) · GW(p)

Another problem is, why should we expect to be in the particles rather than just in the wave function directly? Both MWI and Bohmian mechanics have the wave function, after all. It might be the case that there are particles bouncing around but the branch of the wave function we live in has no relation to the positions of the particles.

comment by tailcalled · 2024-04-09T08:55:20.994Z · LW(p) · GW(p)

The multiverse interpretation takes the wavefunction literally and says that since the math describes a multiverse, there is a multiverse.

YMMV about how literally you take the math. I've come to have a technical objection to it such that I'd be inclined to say that the multiverse theory is wrong, but also it is very technical and I think a substantial fraction of multiverse theorists would say "yeah that's what I meant" or "I suppose that's plausible too".

But "take the math literally" sure seems like good reason/evidence.

And when it comes to pilot wave theory, its math also postulates a wavefunction, so if you take the math literally for pilot wave theory, you get the Everettian multiverse; you just additionally declare one of the branches Real in a vague sense.

Replies from: sharmake-farah, TAG
comment by Noosphere89 (sharmake-farah) · 2024-04-10T01:52:49.127Z · LW(p) · GW(p)

What's the technical objection you have to it?

Replies from: tailcalled
comment by tailcalled · 2024-04-10T07:02:36.812Z · LW(p) · GW(p)

Gonna post a top-level post about it once it's made it through editing, but basically the wavefunction is a way to embed a quantum system in a deterministic system, very closely analogous to how a probability function allows you to embed a stochastic system into a deterministic system. So just like how taking the math literally for QM means believing that you live in a multiverse, taking the math literally for probability also means believing that you live in a multiverse. But it seems philosophically coherent for me to believe that we live in a truly stochastic universe rather than just a deterministic probability multiverse, so it also feels like it should be philosophically coherent that we live in a truly quantum universe.

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-10T07:16:44.181Z · LW(p) · GW(p)

What do you mean by "a truly quantum universe"?

Replies from: tailcalled
comment by tailcalled · 2024-04-10T08:05:51.344Z · LW(p) · GW(p)

Before I answer that question: do you know what I mean by a truly stochastic universe? If so, how would you explain the concept of true ontologically fundamental stochasticity to a mind that does not know what it means?

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-10T08:11:29.443Z · LW(p) · GW(p)

I think by "truly stochastic" you mean that multiple future outcomes are possible, rather than one inevitable outcome. You don't merely mean "it's absolutely physically impossible to take the necessary measurements to predict things" or "a coin flip is pretty much random for all intents & purposes". That's my guess.

Replies from: tailcalled
comment by tailcalled · 2024-04-10T08:17:52.685Z · LW(p) · GW(p)

Kind of, because "multiple future outcomes are possible, rather than one inevitable outcome" could sort of be said to apply to both true stochasticity and true quantum mechanics. With true stochasticity, it has to evolve by a diffusion-like process with no destructive interference, whereas for true quantum mechanics, it has to evolve by a unitary-like process with no information loss.

So to a mind that can comprehend probability distributions, but intuitively thinks they always describe hidden variables or frequencies or whatever, how does one express true stochasticity, the notion where a probability distribution of future outcomes are possible (even if one knew all the information that currently exists), but only one of them happens?
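One concrete way to show such a mind the difference (my example, not tailcalled's): apply a "half coin" twice. The diffusion-like stochastic version stays mixed, while the unitary version interferes destructively and returns to the starting state.

import numpy as np

coin = np.array([[0.5, 0.5],
                 [0.5, 0.5]])                 # stochastic "fair coin" step
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)          # its unitary counterpart (Hadamard)

p = np.array([1.0, 0.0])                      # definitely in state 0
psi = np.array([1.0, 0.0])

print("stochastic, two steps:", coin @ (coin @ p))            # [0.5, 0.5]
print("quantum,    two steps:", np.abs(H @ (H @ psi)) ** 2)   # [1.0, 0.0]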

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2024-04-10T08:36:16.525Z · LW(p) · GW(p)

I've been arguing before that true randomness cannot be formalized, and therefore Kolmogorov Complexity(stochastic universe) = ∞. But ofc then the out-of-model uncertainty dominates the calculation, mb one needs a measure with a randomness primitive. (If someone thinks they can explain randomness in terms of other concepts, I also wanna see it.)

comment by TAG · 2024-04-09T18:52:41.549Z · LW(p) · GW(p)

The math doesn't describe a multiverse, in the sense that if you solve the Schrödinger equation for the universe, you don't get some structure with clearly separated decoherent branches every time. You need additional assumptions, which have their own complexity cost.

In fact, MWI is usually argued from models of a few particles. These can show coherent superposition, but to spin an MW theory worthy of the name out of that, you need superpositions that can be maintained at large scale, and also decohere into non-interacting branches, preferably by a mechanism that can be found entirely in the standard formalism.

Replies from: tailcalled
comment by tailcalled · 2024-04-09T19:17:19.477Z · LW(p) · GW(p)

I'm confused about what you're saying. In particular while I know what "decoherence" means, it sounds like you are talking about some special formal thing when you say "decoherent branches".

Let's consider the case of Schrodinger's cat. Surely the math itself says that when you open the box, you end up in a superposition of |see the cat alive> + |see the cat dead>.

Or from a comp sci PoV, I imagine having some initial bit sequence, |0101010001100010>, and then applying a Hadamard gate to end up with a superposition (sqrt(1/2)|0> + sqrt(1/2)|1>) ⊗ |101010001100010>. Next I imagine a bunch of CNOTs that mix together this bit in superposition with the other bits, making the superpositions very distant from each other and therefore unlikely to interact.

What are you saying goes wrong in these pictures?
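(For concreteness, here is a runnable version of that circuit picture: my sketch, with a 4-qubit register standing in for the longer bit sequence. One Hadamard plus a chain of CNOTs leaves exactly two branches that differ on every bit.)

import numpy as np

n = 4
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n):
    # Permutation matrix flipping `target` when `control` is 1 (qubit 0 = MSB).
    U = np.zeros((2 ** n, 2 ** n))
    for b in range(2 ** n):
        bits = [(b >> (n - 1 - i)) & 1 for i in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(v << (n - 1 - i) for i, v in enumerate(bits)), b] = 1.0
    return U

state = np.zeros(2 ** n); state[0] = 1.0        # |0000>
state = kron_all([H] + [I2] * (n - 1)) @ state  # Hadamard on qubit 0
for t in range(1, n):
    state = cnot(0, t, n) @ state               # spread the superposed bit

for idx, amp in enumerate(state):
    if abs(amp) > 1e-9:
        print(format(idx, "04b"), amp)          # two branches: 0000 and 1111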

Replies from: TAG
comment by TAG · 2024-04-09T19:29:02.647Z · LW(p) · GW(p)

Surely the math itself says that when you open the box, you end up in a superposition of |see the cat alive> + |see the cat dead>.

In a classical basis. But you could rewrite the superposition in other bases that we dont observe. That's one problem.

As Penrose writes (Road to Reality 29.8): "Why do we not permit these superposed perception states? Until we know exactly what it is about a quantum state that allows it to be considered as a ‘perception’, and consequently see that such superpositions are ‘not allowed’, we have really got nowhere in explaining why the real world of our experiences cannot involve superpositions of live and dead cats." Penrose gives 2|psi> = {|perceiving live cat> + |perceiving dead cat>}{|live cat> + |dead cat>} + {|perceiving live cat> - |perceiving dead cat>}{|live cat> - |dead cat>} as an example of a surreal, non-classical superposition.

And/or, we have no reason to believe, given only the formalism itself, that the two versions of the observer will be unaware of each other, and able to report unambiguously on their individual observations. That's the point of the de/coherence distinction. In the Everett theory, everything that starts in a coherent superposition, stays in one.

"According to Everett’s pure wave mechanics, when our observer makes a measurement of the electron he does not cause a collapse, but instead becomes correlated with the electron. What this means is that where once we had a system that consisted of just an electron, there is now a system that consists of the electron and the observer. The mathematical equation that describes the state of the new system has one summand in which the electron is z-spin up and the observer measured “z-spin up” and another in which the electron is z-spin down and the observer measured “z-spin down.” In both summands our observer got a determinate measurement record, so in both, if we ask him whether he got a determinate record, he will say “yes.” If, as in this case, all summands share a property (in this case the property of our observer saying “yes” when asked if he got a determinate measurement record), then that property is determinate.

This is strange because he did not in fact get a determinate measurement record; he instead recorded a superposition of two outcomes. After our observer measures an x-spin up electron’s z-spin, he will not have determinately gotten either “z-spin up” or “z-spin down” as his record. Rather he will have determinately gotten “z-spin up or z-spin down,” since his state will have become correlated with the state of the electron due to his interaction with it through measurement. Everett believed he had explained determinate experience through the use of relative states (Everett 1957b: 146; Everett 1973: 63, 68–70, 98–9). That he did not succeed is largely agreed upon in the community of Everettians."

(https://iep.utm.edu/everett/#H5)

Many attempts have been made to fix the problem, notably decoherence based approaches.

The expected mechanism of decoherence is interaction with a larger environment... which is assumed to already be in a set of decoherent branches on a classical basis. But why? At this point , it becomes a cosmological problem. You can't write a WF of the universe without some cosmological assumptions about the initial state, and so on, so whether it looks many-worldish or not depends on the assumptions.

Replies from: tangerine, gilch, tailcalled
comment by tangerine · 2024-04-09T21:28:52.410Z · LW(p) · GW(p)

It’s just a matter of definition. We say that “you” and “I” are the things that are entangled with a specific observed state. Different versions of you are entangled with different observations. Nothing is stopping you from defining a new kind of person which is a superposition of different entanglements. The reason it doesn’t “look” that way from your perspective is because of entanglement and the law of the excluded middle. What would you expect to see if you were a superposition?

Replies from: TAG
comment by TAG · 2024-04-10T12:20:02.040Z · LW(p) · GW(p)

What would you expect to see if you were a superposition?

If I were in a coherent superposition, I would expect to see non classical stuff. Entanglement alone is not enough to explain my sharp-valued, quasi classical observations.

It isn't just a matter of definition, because I don't perceive non-classical stuff, so I lack motivation to define "I" in a way that mispredicts that I do. You don't get to arbitrarily relabel things if you are in the truth-seeking business.

The objection isn't to using "I" or "the observer" to label a superposed bundle of sub-persons, each of which individually is unaware of the others and has normal, classical-style experience, because that doesn't mispredict my experience. The problem is that "superposed bundle of persons, each of which is unaware of the others and has normal, classical-style experience" is what you get from a decoherent superposition, and I am specifically talking about coherent superposition. ("In the Everett theory, everything that starts in a coherent superposition, stays in one.") Decoherence was introduced precisely to solve the problem with Everett's RSI.

Replies from: tailcalled
comment by tailcalled · 2024-04-10T13:48:42.056Z · LW(p) · GW(p)

Let's say you have some unitary transformation. If you were to apply this to a coherent superposition, it seems like it would pretty much always make you end up with a decoherent superposition. So it doesn't seem like there's anything left to explain.

Replies from: TAG
comment by TAG · 2024-04-10T14:34:05.346Z · LW(p) · GW(p)

I'm not trying to say all forms of MW are hopeless. I am saying

  • there is more than one form
  • there are trade offs between simplicity and correctness -- there's no simple and adequate MWI.

Decoherence isn't simple [LW · GW] -- you can't find it by naively looking at the SWE, and it took three or four decades for physicists to notice.

It also doesn't unequivocally support MW -- when we observe decoherence, we observe it one universe at a time, and maybe in the one and only universe.

"Decoherence does half the job of solving the measurement problem. In short, it tells you that you will not in practice be able to observe that Schroodinger's cat is in a superposition, because the phase between the two parts of the superposition would not be sufficiently stable. But the concept of decoherence does not, on its own, yield an answer to the question "how come the experimental outcome turns out to be one of A or B, not both A and B carried forward together into the future?"

The half-job that decoherence succeeds in doing is to elucidate the physical process whereby a preferred basis or pointer basis is established. As you say in the question, any given quantum state can be expressed as a superposition in some basis, but this ignores the dynamical situation that physical systems are in. In practice, when interactions with large systems are involved, states in one basis will stay still, states in another basis will evolve VERY rapidly, especially in the phase factors that appear as off-diagonal elements of density matrices. The pointer basis is the one where, if the system is in a state in that basis, then it does not have this very fast evolution.

But as I say, this observation does not in and of itself solve the measurement problem in full; it merely adds some relevant information. It is the next stage where the measurement problem really lies, and where people disagree. Some people think the pointer basis is telling us about different parts of a 'multiverse' which all should be regarded as 'real'. Other people think the pointer basis is telling us when and where it is legitimate to assert 'one thing and not both things happen'.

That's it. That's my answer to your question.

But I can't resist the lure, the sweet call of the siren, "so tell us: what is really going on in quantum measurement?" So (briefly!) here goes.

I think one cannot get a good insight into the interpretation of QM until one has got as far as the fully relativistic treatment and therefore field theory. Until you get that far you find yourself trying to interpret the 'state' of a system; but you need to get into another mindset, in which you take an interest in events, and how one event influences another. Field theory naturally invites one to a kind of 'input-output' way of thinking, where the mathematical apparatus is not trying to say everything at once, but is a way of allowing one to ask and find answers to well-posed questions. There is a distinction between maths and physical stuff. The physical things evolve from one state to another; the mathematical apparatus tells us the probabilities of the outcomes we put to it once we have specified what is the system and what is its environment. Every system has an environment and quantum physics is a language which only makes sense in the context of an environment.

In the latter approach (which I think is on the right track) the concept of 'wavefunction of the whole universe' is as empty of meaning as the concept of 'the velocity of the whole universe'. The effort to describe the parts of such a 'universal wavefunction' is a bit like describing the components of the velocity of the whole universe. In saying this I have gone beyond your question, but I hope in a useful way."

https://physics.stackexchange.com/questions/256874/simple-question-about-decoherence

ETA:

“Despite how tidy the decoherence story seems, there are some people for whom it remains unsatisfying. One reason is that the decoherence story had to bring in a lot of assumptions seemingly extraneous to quantum mechanics itself: about the behavior of typical physical systems, the classicality of the brain, and even the nature of subjective experience. A second reason is that the decoherence story never did answer our question about the probability you see the dot change color – instead the story simply tried to convince us the question was meaningless.” Quantum Computing since Democritus, 2nd Ed., p. 169.

comment by gilch · 2024-04-11T02:58:38.692Z · LW(p) · GW(p)

We should not expect any bases not containing conscious observers to be observed, but that's not the same as saying they're not equally valid bases. See Everett and Structure, esp. section 7.

Replies from: TAG
comment by TAG · 2024-04-11T03:10:34.304Z · LW(p) · GW(p)

We don't have to regard basis as objective, ITFP.

comment by tailcalled · 2024-04-10T07:11:27.164Z · LW(p) · GW(p)

But |cat alive> + |cat dead> is a natural basis because that's the basis in which the interaction occurs. No mystery there; you can't perceive something without interacting with it, and an interaction is likely to have some sort of privileged basis.

Replies from: TAG
comment by TAG · 2024-04-10T13:55:06.160Z · LW(p) · GW(p)

Regarding basis as an observer's own choice of "co-ordinate grid", and regarding an observer (or instrument) as having a natural basis, is a simple and powerful theory of basis. Since an observer's natural basis is the one that minimises superpositions, the fact that observers make quasi-classical observations drops out naturally, without any cosmological assumptions. But since there is no longer a need for a global and objective basis (a basis that is a feature of the universe), there is no longer a possibility of many worlds as an objective feature of the universe: since an objective basis is needed to objectively define a division into worlds, such a division is no longer possible, and splitting is an observer-dependent phenomenon.

Replies from: tailcalled
comment by tailcalled · 2024-04-10T18:22:54.396Z · LW(p) · GW(p)

We'd still expect strongly interacting systems e.g. the earth (and really, the solar system?) to have an objective splitting. But it seems correct to say that I basically don't know how far that extends.

Replies from: TAG
comment by TAG · 2024-04-11T03:04:21.466Z · LW(p) · GW(p)

Why? If you could prove that large environments must cause decoherence into n>1 branches you would have solved the measurement problem as it is currently understood.

Replies from: tailcalled
comment by tailcalled · 2024-04-11T05:54:15.840Z · LW(p) · GW(p)

This is just chaos theory, isn't it? If one person sees that Schrodinger's cat is dead, then they're going to change their future behavior, which changes the behavior of everyone they interact with, and this then butterflies up to entangle the entire earth in the same superposition.

Replies from: TAG
comment by TAG · 2024-04-11T12:20:32.174Z · LW(p) · GW(p)

You're saying that if you have decoherent splitting of an observer, that leads to more decoherent splitting. But where does the initial decoherent splitting come from?

Replies from: tailcalled
comment by tailcalled · 2024-04-11T16:22:38.381Z · LW(p) · GW(p)

The observer is highly sensitive to differences along a specific basis, and therefore changes a lot in response to that basis. Due to chaos, this then leads to everything else on earth getting entangled with the observer in that same basis, implying earth-wide decoherence.

Replies from: TAG
comment by TAG · 2024-04-12T17:55:32.934Z · LW(p) · GW(p)

What does highly sensitive mean? In classical physics, an observer can produce an energy output much greater than the energy input of the observation, but no splitting is implied. In bare Everettian theory, an observer becomes entangled with the coherent superposition they are observing, and goes into a coherent superposition themselves... so no decoherent splitting is implied. You still haven't said where the initial decoherent splitting occurs.

Replies from: tailcalled
comment by tailcalled · 2024-04-12T18:13:37.051Z · LW(p) · GW(p)

Hi? Edit: the parent comment originally just had a single word saying "Test"

comment by TAG · 2024-04-14T18:21:11.102Z · LW(p) · GW(p)

every particle interaction creates n parallel universes which never physically interfere with each other”

Although a fairly standard way of explaining MWI, this is an example of conflating coherence and decoherence. To get branches that never interact with each other again, you need decoherence, but decoherence is a complex dynamical process... it takes some time... so it is not going to occur once per elementary interaction. It's reasonable to suppose that elementary interactions produce coherent superpositions, on the other hand, but these are not mutually isolated "worlds". And we have fairly strong evidence for them... quantum computing relies on complex coherent superpositions... so any idea that all superpositions just automatically and instantly decohere must be rejected.

comment by gilch · 2024-04-09T19:22:09.386Z · LW(p) · GW(p)

Getting rid of Many Worlds doesn't get rid of the Multiverse. Multiverses pop up in many different ways in cosmology. Max Tegmark elaborated four levels, the simplest of which (Level I) ends up looking like a multiverse if the ordinary universe is sufficiently large.

In the field of astronomy, there's the concept of a cosmological horizon. There are several kinds of these depending on exactly how they're defined. That's why they use the term "observable universe". Whatever process kicked off the Big Bang obviously created a lot more of it than we can see, and the horizon is expanding over time as light has more time to get to us. What we can see is not all there is.

Our current understanding of physics implies the Bekenstein bound: for any finite region of space, there is a finite amount of information it can contain. Interestingly, this measure increases with surface area, not volume. (If you pack too much mass/energy in a finite volume, you get an event horizon, and if you try to pack in more, it gets bigger.) Therefore, the current cosmological horizon also contains a finite amount of information, and there are a finite number of possible initial conditions for the part of the Universe we can observe, which must eventually repeat if the Cosmos has a larger number of regions than that number, by the pigeonhole principle. We also expect this to be randomized, so any starting condition will be repeated (and many times) if the Cosmos is sufficiently large. Tegmark estimated that there must be a copy of our volume, including a copy of you, about 10^10^115 meters away, and this also implies the existence of every physically realizable variation of you, which ends up looking like branching timelines.
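Back-of-envelope numbers for that argument (mine; order of magnitude only, and the choice of radius/horizon shifts the exponent a bit):

import math

R = 4.4e26            # rough radius of the observable universe, meters
l_p = 1.6e-35         # Planck length, meters
area_planck = 4 * math.pi * (R / l_p) ** 2
bits = area_planck / (4 * math.log(2))     # Bekenstein-Hawking bound, in bits

log10_configs = bits * math.log10(2)       # number of distinct horizon contents
print("max bits per horizon volume: ~10^%.0f" % math.log10(bits))
print("distinct configurations:     ~10^(10^%.0f)" % math.log10(log10_configs))
# Pigeonhole: travel roughly that many horizon-widths in any direction and the
# contents must start repeating (assuming randomized initial conditions).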

Replies from: lcmgcd, lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-10T05:45:16.190Z · LW(p) · GW(p)

I am also interested in "Level II: Universes with different physical constants". Do you happen to know if there's actually good ways to approach that question — ie any methodology much better than wild speculation? I would love to learn about such a method

Replies from: gilch
comment by gilch · 2024-04-10T18:56:15.942Z · LW(p) · GW(p)

Well, slightly better than wild speculation. We observe broken symmetries in particle physics. This suggests that the corresponding unbroken symmetries existed in the past and could have been broken differently, which would correspond to different (apparent) laws of physics, meaning particles we call "fundamental" might have different properties in different regions of the Cosmos, although this is thought to be far outside our observable universe.

The currently accepted version of the Big Bang theory describes a universe undergoing phase shifts, particularly around the inflationary epoch. This wouldn't necessarily have happened everywhere at once. In the Eternal Inflation model, in a brief moment near the beginning of the observable universe, space used to be expanding far faster than it is now, but (due to chance) a nucleus of what we'd call "normal" spacetime with a lower energy level occurred and spread as the surrounding higher-energy state collapsed, ending the epoch.

However, the expansion of the inflating state is so rapid that this collapse wave could never catch up to all of it, meaning the higher-energy state still exists and the wave of collapse to normal spacetime is ongoing far away. Due to chance, we can expect many other lower-energy nucleation events to have occurred (and to continue to occur) inside the inflating region, forming bubbles of different (apparent) physics, some probably corresponding to our own, but most probably not, due to the symmetries breaking in different directions.

Each of these bubbles is effectively an isolated universe, and the collection of all of them constitutes the Tegmark Level II Multiverse.

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-12T00:01:08.789Z · LW(p) · GW(p)

Your explanation is much appreciated! It's probably time that I properly go and understand the broken symmetries stuff.

comment by lukehmiles (lcmgcd) · 2024-04-10T05:33:16.147Z · LW(p) · GW(p)

What's the latest word on the size of the whole universe? Last I heard, it was like at least 10,000x the diameter of the observable universe, but maybe infinitely bigger.

Replies from: gilch
comment by gilch · 2024-04-10T19:38:15.339Z · LW(p) · GW(p)

I don't know where you heard that, but the short answer is that no one knows. There are models of space that curve back in on themselves, and thus have finite extent, even without any kind of hard boundary. But astronomical observations thus far indicate that spacetime is pretty flat, or we'd have seen distortions of scale in the distance. To the available observational precision (last I heard), even if the universe does curve in on itself, the curvature must be so slight that the total Universe is at least thousands of times larger (in volume) than the observable part. That's still nowhere near big enough for Tegmark Level I, but it's a lower bound, and the universe may well be infinite. (There are more complicated models with topological weirdness that might allow a finite extent with no boundary and no curvature in observable dimensions, which might be smaller.)
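For concreteness, here's the kind of arithmetic behind that "thousands of times larger" figure (a sketch with my own rough numbers, assuming a positively curved 3-sphere and a curvature bound of about |Omega_k| < 0.002):

```python
import math

# If space is a 3-sphere, its radius of curvature is R = (c/H0) / sqrt(|Omega_k|).
# Both constants below are approximate; the Omega_k bound is roughly the
# Planck-satellite-era figure.
hubble_radius = 14.4e9   # c/H0 in light-years (approx)
obs_radius = 46.5e9      # comoving radius of the observable universe, light-years
omega_k_max = 0.002      # rough observational bound on spatial curvature

r_curv = hubble_radius / math.sqrt(omega_k_max)   # ~3.2e11 light-years
# 3-sphere volume is 2*pi^2*R^3; observable volume is ~(4/3)*pi*r^3.
ratio = (2 * math.pi ** 2 * r_curv ** 3) / ((4 / 3) * math.pi * obs_radius ** 3)
print(f"total/observable volume ratio > ~{ratio:.0f}")   # order 10^3
```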

I don't know if it makes any meaningful difference if the Universe is infinite vs "sufficiently large". As soon as it's big enough to have all physically realizable initial conditions and histories, why does it matter if they happen once or a googol or infinity times? Maybe there are some counterintuitive anthropic arguments involving Boltzmann Brains. Those seem to pop up in cosmology sometimes.

comment by Algon · 2024-04-09T11:06:38.267Z · LW(p) · GW(p)

IIRC pilot wave theory doesn't work for QFTs, which is a big failure.
EDIT: I stand corrected. See: 
QFT as pilot-wave theory of particle creation and destruction

Bohmian Mechanics and Quantum Field Theory

Relativistically invariant extension of the de Broglie-Bohm theory of quantum mechanics

Making nonlocal reality compatible with relativity.

Time in relativistic and non relativistic quantum mechanics. 
So apparently there are de Broglie-Bohm variants of QFTs. I'm unsure if these are full QFTs, i.e., whether they can reproduce the Standard Model, and I'm unsure how exactly these theories work. But the theories would be nonlocal w/ hidden variables, as with ordinary Bohmian mechanics, which is IMO a bad sign. Still, if these variants can reproduce the Standard Model (and I don't know if they can), then Bohmian mechanics is much more plausible than I thought. Even the possibility boosts it substantially IMO. @the gears to ascension [LW · GW

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-10T05:46:49.736Z · LW(p) · GW(p)

Please fill this newb in a bit more — is QFT Great and Correct? Why? When did that happen?

Edit: I am more confused now... Apparently QFT is what I learned in my quantum class, but we didn't touch relativity. Is this term overloaded? ... In any case, the supposed compatibility with relativity seems good, if I understand correctly. (I was trying to read some of those same webpages maybe a year ago, but tbh I never understood why squaring the quantum-vs-relativity math was considered so important if they don't make contradictory predictions in physically possible experiments. That's an entirely separate discussion ofc.)

Replies from: Algon, rhollerith_dot_com
comment by Algon · 2024-04-10T19:33:25.622Z · LW(p) · GW(p)

QFT is relativistic quantum mechanics with fields, i.e., a continuum limit of a lattice of harmonic oscillators, which you may have encountered in solid-state theory. It is the framework for the Standard Model, our most rigorously tested theory by far. An interpretation of quantum mechanics that can't generalize to QFT is pretty much dead in the water. It would be like having an interpretation of physics that works for classical mechanics but can't generalize to special or general relativity.

(Edited to change "more rigorously" -> "most rigorously".)
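To make the "continuum limit of a lattice of oscillators" picture concrete, here's a toy sketch (my own illustration, not from the linked papers): the normal modes of a chain of coupled harmonic oscillators follow the lattice dispersion relation, which for small k approaches the relativistic omega(k) = sqrt(m^2 + k^2) of a free scalar field.

```python
import numpy as np

N, a, m = 200, 1.0, 0.5   # number of sites, lattice spacing, mass parameter

# Potential-energy matrix for H = sum_i p_i^2/2 + (1/2) phi^T K phi, with
# nearest-neighbour couplings (phi_{i+1} - phi_i)^2 / (2 a^2), mass term
# m^2 phi_i^2 / 2, and periodic boundary conditions.
K = np.zeros((N, N))
for i in range(N):
    K[i, i] = m ** 2 + 2 / a ** 2
    K[i, (i + 1) % N] = K[i, (i - 1) % N] = -1 / a ** 2

omega = np.sqrt(np.linalg.eigvalsh(K))   # normal-mode frequencies

# The same frequencies, from the lattice dispersion relation
# omega(k)^2 = m^2 + (4/a^2) * sin^2(k a / 2):
k = 2 * np.pi * np.fft.fftfreq(N, d=a)
omega_disp = np.sqrt(m ** 2 + (4 / a ** 2) * np.sin(k * a / 2) ** 2)
print(np.allclose(np.sort(omega), np.sort(omega_disp)))   # True
# For |k*a| << 1, omega ~ sqrt(m^2 + k^2): each decoupled mode behaves like a
# relativistic particle of mass m, which is the starting point of free QFT.
```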

comment by evhub · 2024-04-12T04:13:17.948Z · LW(p) · GW(p)

See my discussion of pilot wave theory here [LW · GW].

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-12T09:06:55.375Z · LW(p) · GW(p)

Furthermore, since pilot wave theory has no collapse postulate, it does not even get rid of the existence of multiple worlds.

Seems cruxy to me. I thought pilot wave did uhh somehow explicitly set those terms to zero, but I never actually saw that written down. But you can actually use the corpuscles to say which parts of the wave get zeroed out, right? If two particles are actually entangled, then their corpuscle equations or whatever will tell you that you can't delete that part of the wave? (Apologies if these questions are malformed — I don't actually know the math anymore; I thought I wouldn't need it to understand the interpretations, but now I'm thinking maybe I do. Really, I need code, not math...)

Replies from: evhub
comment by evhub · 2024-04-12T20:44:27.053Z · LW(p) · GW(p)

Pilot wave theory keeps all of standard wave mechanics unchanged, but then adds rules on top for how the wave "pushes around" classical particles. But it never zeroes out parts of the wave—the entirety of wave mechanics, including the full superposition and all the multiple worlds it implies, is still necessary to compute the exact structure of that wave, and thereby to understand how it will push around the classical particles. Pilot wave theory then just declares that everything other than the classical particles "doesn't exist", which doesn't really make sense, because the multiple worlds still have to exist in some sense: you have to compute them to understand how the wave will push around the classical particles.
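Here's a minimal 1D sketch of that point (my own toy code, units with hbar = m = 1, free evolution, crude integrator). The particle's velocity at each step is read off from the full wavefunction, which has to be evolved everywhere, including the packet the particle isn't riding:

```python
import numpy as np

N, L, dt = 1024, 40.0, 0.001
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Two Gaussian packets moving toward each other: a "superposition of worlds".
psi = np.exp(-(x - 5) ** 2 - 2j * x) + np.exp(-(x + 5) ** 2 + 2j * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

x_p = 5.2   # the Bohmian particle starts inside the right-hand packet

for _ in range(2000):
    # Exact free-particle step in Fourier space.
    psi = np.fft.ifft(np.exp(-1j * k ** 2 / 2 * dt) * np.fft.fft(psi))
    # Guidance equation: v = Im(psi' / psi) at the particle's position.
    dpsi = np.gradient(psi, dx)
    i = int((x_p + L / 2) / dx) % N
    x_p += np.imag(dpsi[i] / psi[i]) * dt

print(f"particle ends near x = {x_p:.2f}")
```

The left-hand packet carries no particle, yet deleting it would change the interference pattern, and hence the trajectory, once the packets overlap.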

Replies from: lcmgcd, Korz
comment by lukehmiles (lcmgcd) · 2024-04-14T19:32:37.948Z · LW(p) · GW(p)

This is what I get for glossing over the math. RIP

comment by Mart_Korz (Korz) · 2024-04-12T22:04:25.029Z · LW(p) · GW(p)

I am not too familiar with how advocates of Pilot wave theory usually state this, but I want to disagree slightly. I fully agree with the description of what happens mathematically in Pilot wave theory, but I think that there is a way in which the worlds that one finds oneself outside of do not exist.

If we assume that it is in fact just the particle positions which are "reality", the only way in which the wave function (including all many-worlds contributions) affects "reality" is by influencing its future dynamics. Sure, this means that the many worlds computationally exist even in pilot wave theory. But I find the idea that "the way the world state evolves is influenced by huge numbers of world states that 'could have been'" meaningfully different from "there literally are other worlds containing versions of myself which are just as real as I am". The first is a lot closer to everyday intuitions.

Well, this works only to the degree to which we can (arbitrarily?) decide that the particle positions define "reality" (the thing in the theory we look at in order to locate ourselves in it) separately from their computational role in the model. One can easily have different opinions on how plausible this step is.

Replies from: gilch
comment by gilch · 2024-04-12T22:21:58.174Z · LW(p) · GW(p)

The Church-Turing thesis gives us the "substrate independence principle". We could be living in a simulation. In principle, AI could be conscious. In principle, minds could be uploaded. Even granting that there's such a thing as the superfluous corpuscles, the Universe still has to be computing the wave function.

Then the people made out of the "pilot" wave alone, rather than of waves plus corpuscles, would still be just as conscious as AIs or sims or ems could (in principle) be, and they would far outnumber the corpuscle folk. How do you know you're not one of them? Is this an attachment to dualism, or do you have some other reason? Why do the corpuscles even need to exist?

Replies from: TAG, Korz
comment by TAG · 2024-04-13T16:27:18.880Z · LW(p) · GW(p)

The Church-Turing thesis gives us the “substrate independence principle”. In principle, AI could be conscious.

The C-T thesis gives you the substrate independence of computation. To get to the substrate independence of consciousness, you need the further premise that the performance of certain computations is sufficient for consciousness, including qualia. This is, of course, not known.

Replies from: sil-ver, gilch
comment by Rafael Harth (sil-ver) · 2024-04-13T18:46:44.309Z · LW(p) · GW(p)

I don't think this is correct, either (although it's closer). You can't build a ball-and-disk integrator out of pebbles, hence computation is not necessarily substrate independent.

What the Turing Thesis says is that a Turing machine, and also any system capable of emulating a Turing machine, is computationally general (i.e., can solve any problem that can be solved at all). You can build a Turing machine out of lots of substrates (including pebbles), hence lots of substrates are computationally general. So it's possible to integrate a function using pebbles, but it's not possible to do it using the same computation as the ball-and-disk integrator uses -- the pebbles system will perform a very different computation to obtain the same result.

So even if you do hold that certain computations/algorithms are sufficient for consciousness, it still doesn't follow that a simulated brain has identical consciousness to an original brain. You need an additional argument that says that the algorithms run by both systems are sufficiently similar.

This is a good opportunity to give Eliezer credit because he addressed something similar in the sequences [LW · GW] and got the argument right:

Albert: "Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules."

Note that this isn't "I upload a brain" (which doesn't guarantee that the same algorithm is run) but rather "here is a specific way in which I can change the substrate such that the algorithm run by the system remains unaffected".

Replies from: TAG
comment by TAG · 2024-04-14T16:20:36.215Z · LW(p) · GW(p)

I don’t think this is correct, either (although it’s closer). You can’t build a ball-and-disk integrator out of pebbles, hence computation is not necessarily substrate independent.

Meaning that a strong version of computational substrate independence, where any substrate will do, is false? Maybe, but I was arguing against the hypothetical that "the substrate independence of computation implies the substrate independence of consciousness", not *for* the antecedent, the substrate independence of computation.

What the Turing Thesis says is that a Turing machine, and also any system capable of emulating a Turing machine, is computationally general (i.e., can solve any problem that can be solved at all). You can build a Turing machine out of lots of substrates (including pebbles), hence lots of substrates are computationally general. So it’s possible to integrate a function using pebbles, but it’s not possible to do it using the same computation as the ball-and-disk integrator uses—the pebbles system will perform a very different computation to obtain the same result.

I don't see the relevance.

So even if you do hold that certain computations/algorithms are sufficient for consciousness, it still doesn’t follow that a simulated brain has identical consciousness to an original brain. You need an additional argument that says that the algorithms run by both systems are sufficiently similar.

OK. A crappy computational emulation might not be conscious, because it's crappy. It still doesn't follow that a good emulation is necessarily conscious. You're just pointing out another possible defeater.

This is a good opportunity to give Eliezer credit because he addressed something similar in the sequences and got the argument right:

Which argument? Are you saying that a good enough emulation is necessarily conscious?

Albert: “Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules.” Note that this isn’t “I upload a brain” (which doesn’t guarantee that the same algorithm is run)

If it's detailed enough, it's guaranteed to. That's what "enough" means.

but rather “here is a specific way in which I can change the substrate such that the algorithm run by the system remains unaffected”.

Ok... that might prove the substrate independence of computation, which I wasn't arguing against. Past that, I don't see your point.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2024-04-14T23:01:20.832Z · LW(p) · GW(p)

Ok I guess that was very poorly written. I'll figure out how to phrase it better and then make a top level post.

comment by gilch · 2024-04-13T16:59:26.014Z · LW(p) · GW(p)

Yes, agreed (and I endorse the clarification), hence my question about dualism. (If consciousness is not a result of computation, then what is it?)

Replies from: TAG
comment by TAG · 2024-04-13T18:11:50.578Z · LW(p) · GW(p)

The result (at least partially) of a particular physical substrate. Physicalism and computationalism are both not-dualism, but they are not the same as each other.

comment by Mart_Korz (Korz) · 2024-04-13T12:29:26.968Z · LW(p) · GW(p)

Hmm... In my mind, the pilot wave theory position does introduce a substrate dependence for the particle-position vs. wavefunction distinction, but it need not distinguish any further than that. This still leaves simulation, AI consciousness, and mind uploads completely open. It seems to me that the pilot wave vs. many worlds question is independent of/orthogonal to these questions.

I fully agree that saying "only corpuscle folk are real" (nice term by the way!) is a move that needs explaining. One advantage of pilot wave theory is that one need not wonder where the Born probabilities come from: they are directly implied if one wishes to make predictions about the future. One not-so-satisfying property is that the particle positions are fully guided by the wavefunction without any influence going the other way. I do agree that this makes it a lot easier to regard the positions as a superfluous addition that Occam's razor should cut away.

For me, an important aspect of these discussions is that we know our understanding is incomplete under every one of these perspectives. Gravity has not been satisfyingly incorporated into any of them. Further, the Church-Turing thesis is an open question.

comment by adastra22 · 2024-04-14T19:25:25.109Z · LW(p) · GW(p)

It is not fully "local" but who cares?

Non-locality is a big deal. That the underlying physics of the universe has a causal speed limit that applies to everything (gravity and QM) yet somehow doesn't apply to pilot waves is harder to explain away than a multiverse. The multiverse makes you uncomfortable, but it is a simpler physical theory than pilot waves.

comment by gilch · 2024-04-09T18:44:20.214Z · LW(p) · GW(p)

Pilot Wave is just Many Worlds in disguise:

Are Many Worlds & Pilot Wave THE SAME Theory? (youtube.com)

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-10T05:31:03.722Z · LW(p) · GW(p)

Please share punchline.

Edit: watched video.

Startholywar{

Uncharitable punchline is "if you take pilot wave but keep track of every possible position that any particle could have been (and ignore where they actually were in the actual experiment) then you get many worlds." Seems like a dumb thing to do to me.

They also highlight that quantum entanglement in pilot wave gives you faster-than-light "communication" of random bits. You have to eat that under any theory; that's just real life.

They also highlight that parts of the wave don't carry particles. Yes of course...

}endholywar

They also implicitly claim that in order for the Born rule to work, the particles have to start the sim following the psi^2 distribution. I think this is just false, and eg a wide normal distribution will converge to psi^2 over time as the system evolves. (For a non-adversarially-chosen system.) I don't know how to check this. Has someone checked this? Am I looking at this right?
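One way to check it numerically would be a toy setup like this (entirely my own construction: a 1D particle in a box, a superposition of the first few modes with random phases, and an ensemble started far from psi^2; Valentini and Westman published 2D simulations along these lines showing coarse-grained relaxation toward psi^2 for generic states, though as far as I know there's no general proof):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                                   # number of superposed box modes
modes = np.arange(1, M + 1)
phases = rng.uniform(0, 2 * np.pi, M)   # arbitrary fixed phases
E = modes ** 2 / 2                      # box eigenenergies, hbar = m = 1, L = pi

def psi(xs, t):
    return np.sum(np.sin(np.outer(xs, modes)) * np.exp(-1j * (E * t + phases)), axis=1)

def dpsi(xs, t):
    return np.sum(modes * np.cos(np.outer(xs, modes)) * np.exp(-1j * (E * t + phases)), axis=1)

# Non-equilibrium start: particles bunched in the middle of the box.
xs = rng.uniform(1.2, 1.9, 2000)
dt, steps = 0.001, 10000
for step in range(steps):
    t = step * dt
    v = np.imag(dpsi(xs, t) / psi(xs, t))   # guidance equation v = Im(psi'/psi)
    v = np.clip(v, -100, 100)               # crude guard near wavefunction nodes
    xs = np.clip(xs + v * dt, 1e-6, np.pi - 1e-6)

# Coarse-grained comparison with |psi|^2 at the final time.
hist, edges = np.histogram(xs, bins=60, range=(0, np.pi), density=True)
centers = (edges[:-1] + edges[1:]) / 2
rho = np.abs(psi(centers, steps * dt)) ** 2
rho /= np.sum(rho) * (edges[1] - edges[0])
print("coarse L1 distance to psi^2:", np.sum(np.abs(hist - rho)) * (edges[1] - edges[0]))
```

Rerunning with the initial ensemble drawn from |psi|^2 itself gives a baseline for how small the distance should get; whether my crude integrator relaxes cleanly here is exactly the kind of thing that needs checking.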

Replies from: gilch, tailcalled
comment by gilch · 2024-04-12T00:18:50.398Z · LW(p) · GW(p)

Uncharitable punchline is "if you take pilot wave but keep track of every possible position that any particle could have been (and ignore where they actually were in the actual experiment) then you get many worlds." Seems like a dumb thing to do to me.

Except I don't know how you explain quantum computers without tracking that. If you stop tracking, isn't that just Copenhagen? The "branches" have to exist and interfere with each other and then be "unobserved" to merge back together.

What does the Elitzur–Vaidman bomb tester look like in Pilot Wave? It makes sense in Many Worlds: you just have to blow up some of them.
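For what it's worth, the bomb-tester numbers themselves are interpretation-neutral; here's a minimal sketch of the Mach-Zehnder arithmetic (my own code, with one conventional choice of beamsplitter phases; the basis is [upper arm, lower arm]):

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beamsplitter

# Dud bomb: the photon traverses both beamsplitters undisturbed, and
# interference sends it entirely to the "bright" port C.
out = BS @ BS @ np.array([1, 0])
print("dud:  P(dark port D)   =", abs(out[0]) ** 2)   # 0
print("dud:  P(bright port C) =", abs(out[1]) ** 2)   # 1

# Live bomb: acts as a which-path measurement on the lower arm after BS 1.
mid = BS @ np.array([1, 0])
p_boom = abs(mid[1]) ** 2                 # photon took the lower arm: explosion
survivor = np.array([mid[0], 0])          # unnormalized upper-arm branch
out = BS @ survivor
print("live: P(explosion)     =", p_boom)             # 1/2
print("live: P(dark port D)   =", abs(out[0]) ** 2)   # 1/4: bomb found, no boom
print("live: P(bright port C) =", abs(out[1]) ** 2)   # 1/4: inconclusive
```

The dark port only ever fires when a live bomb sits in the lower arm, which is the whole trick; the question for pilot wave is what story the corpuscle tells while the empty branch does the work.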

Replies from: lcmgcd, lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-12T00:27:03.301Z · LW(p) · GW(p)

I must abstain from further culture-warring, but this thought experiment is blowing my mind. I hadn't heard of it.

comment by lukehmiles (lcmgcd) · 2024-04-12T08:55:07.636Z · LW(p) · GW(p)

What does the Elitzur–Vaidman bomb tester look like in Pilot Wave?

Could I just say that the wave interferes with the bomb (if it's live) and bumps the particle after the wave hits the mirror?

(I don't actually know enough physics to, like, do the math on that.)

comment by tailcalled · 2024-04-10T18:26:15.899Z · LW(p) · GW(p)

Uncharitable punchline is "if you take pilot wave but keep track of every possible position that any particle could have been (and ignore where they actually were in the actual experiment) then you get many worlds." Seems like a dumb thing to do to me.

How would you formalize pilot wave theory without keeping "track of every possible position that any particle could have been" (which I assume means not throwing away the wavefunction)?

Replies from: lcmgcd
comment by lukehmiles (lcmgcd) · 2024-04-12T00:05:36.487Z · LW(p) · GW(p)

I have thoughts on that but my formalizations have never been very formal. I think this debate could go unresolved until one of us writes the code.