What to read on the "informal multi-world model"?

post by mishka · 2023-07-09T04:48:56.561Z · LW · GW · No comments

This is a question post.


People on LessWrong often talk in terms of an "informal many-world model". For example, they talk about the worlds where the alignment problem is relatively easy vs the worlds where the alignment problem is really hard.

I wonder what good references exist for this "informal multiverse model of reality", with multiverses of worlds having different properties. What is the history of this line of thought?

(I also wonder about possible mathematical models of this: how much are people talking about them, and how much are people keeping some of those models in mind? I've seen some sheaf-based models that seemed to be along these lines.)

Answers

answer by johnswentworth · 2023-07-12T19:36:05.913Z · LW(p) · GW(p)

I'm not going to give a very full explanation here, but since nobody else has explained at all [EDIT: actually apparently Matt explained correctly below, just not as a top-level answer] I'll at least give some short notes.

  • Most people on LW absorbed the usage you describe via osmosis, and probably don't understand the precise underlying interpretation.
  • In particular, in the context of usage like e.g. "the worlds where the alignment problem is relatively easy vs the worlds where the alignment problem is really hard", the "multiple-worlds" aspect has approximately nothing to do with the many-worlds interpretation of quantum mechanics.
  • Instead, the "worlds" in question are best interpreted as outcomes of a probabilistic world model.
  • For example, suppose we take a Solomonoff-style view: we imagine the world as having been generated by some program, but we don't know which program a priori. We have a prior which assigns a probability to each program, and we update on what the "true" generator-program might be as we see new data. 
  • Then, in that Solomonoff-style view, we can view each program as a "possible world we could be in", in a Bayesian sense. That's roughly the way that LWers typically use the phrase (a toy sketch follows after this list).
  • So the relevant "multiverse" is the set of "worlds" compatible with our information in a Bayesian sense, roughly speaking. (I say "roughly speaking" because really humans also use e.g. logical uncertainty and other things technically not captured by Bayesianism, but in a way that preserves the core concepts of the explanation given above.)
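
To make the Solomonoff-style picture concrete, here is a minimal toy sketch. It is an illustration only: the "programs", the prior weights, and the observed data are all invented, and real Solomonoff induction (enumerating all programs, with prior 2^-length) is uncomputable.

```python
# Hypothetical generator-"programs": each deterministically produces a bit string.
programs = {
    "all_zeros": lambda n: [0] * n,
    "all_ones":  lambda n: [1] * n,
    "alternate": lambda n: [i % 2 for i in range(n)],
}

# Simplicity prior, standing in for 2**-len(program) in true Solomonoff induction.
prior = {"all_zeros": 0.5, "all_ones": 0.25, "alternate": 0.25}

observed = [0, 1, 0, 1]  # the data seen so far

# Bayesian update: keep only the "worlds" (programs) consistent with the data,
# then renormalize their prior weights.
consistent = {name: p for name, p in prior.items()
              if programs[name](len(observed)) == observed}
z = sum(consistent.values())
posterior = {name: p / z for name, p in consistent.items()}

print(posterior)  # {'alternate': 1.0}: the only world compatible with the data
```

The point is just the shape of the update: "worlds" are candidate generators of our observations, and seeing data concentrates belief on the worlds compatible with it.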
comment by mishka · 2023-07-13T01:12:55.370Z · LW(p) · GW(p)

Thanks!

answer by RHollerith (rhollerith_dot_com) · 2023-07-09T15:32:39.005Z · LW(p) · GW(p)

Good question!

The answer is that the people on this site should stop doing the thing you are curious about.

It is entirely possible -- and I am tempted to say probable -- under the many-worlds model of physics for every single one of us to be killed by AI research in every single one of the (billions of) branches that will evolve (or descend) from the present moment. Most writers on this site do not realize that, and the site persists in that error because of the local habit of using "worlds in which humanity emerges relatively unscathed from the crisis caused by AI research" to mean the possibility that, or the set of outcomes in which, humanity emerges relatively unscathed.

Specifically: although it is certainly possible that we will emerge relatively unscathed from the present dangerous situation caused by AI research, that does not mean that if things go badly for us, there will be any descendant-branches of our branch in which even a single human survives.

Yes, there is a decent chance AFAICT that people very similar to us will survive for many millennia in branches that branched off from our branch centuries ago, and yes, those people are people, too, but personally that decent chance does not significantly reduce my sadness about the possibility that we will all be killed by AI research in our branch.

You ask, "what to read?" I got most of my knowledge of the many-worlds model from Sean Carroll's excellent 2019 book Something Deeply Hidden, the Kindle version of which is only 6 dollars on Amazon.

comment by mattmacdermott · 2023-07-11T17:09:25.053Z · LW(p) · GW(p)

Despite the similar terminology, people on this site usually aren't talking about the many worlds interpretation of quantum mechanics when they say things like "in 50% of worlds the coin comes up heads".

The overwhelmingly dominant use of probabilities on this website is the subjective Bayesian one i.e. using probabilities to report degrees of belief. You can think of your beliefs about how the coin will turn out as a distribution over possible worlds, and the result of the coin flip as giving you information about which world you inhabit. This turns out to be a nice intuitive way to think about things, especially when it comes to doing an informal version of Bayesian updating in your head.
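
Here is a minimal sketch of that picture; the two candidate worlds and the prior credences are invented for illustration:

```python
# "In 50% of worlds the coin comes up heads" read subjectively: a credence
# distribution over possible worlds, updated as observations arrive.
worlds = {"fair": 0.5, "two_headed": 1.0}    # P(heads | world)
credence = {"fair": 0.9, "two_headed": 0.1}  # prior degrees of belief

def observe_heads(credence):
    # Each observed head is evidence about which world we inhabit (Bayes).
    unnorm = {w: c * worlds[w] for w, c in credence.items()}
    z = sum(unnorm.values())
    return {w: u / z for w, u in unnorm.items()}

for _ in range(5):  # watch five heads in a row
    credence = observe_heads(credence)

print(credence)  # belief has shifted sharply toward the two-headed world
```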

This has nothing really to do with quantum mechanics. The worlds don't need to have any correspondence to the worlds of the many-worlds interpretation, and I would still think and talk like this regardless of what I believed about QM.

It probably comes from modal logic, where it's standard terminology to talk about worlds in which some proposition is true. From a quick google this goes back to at least C. I. Lewis (1943), predating the many-worlds interpretation of quantum mechanics, and probably further. Here's the wikipedia page on possible worlds. Probably there's a good resource which explains the terminology in a subjective probability context, but I can't find one right now.
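
For readers unfamiliar with the modal-logic usage, here is a miniature sketch of Kripke-style possible-worlds semantics; the worlds, the accessibility relation, and the propositions are all invented examples:

```python
# Kripke-style possible-worlds semantics in miniature.
worlds = {
    "w1": {"raining": True,  "cold": True},
    "w2": {"raining": False, "cold": True},
    "w3": {"raining": False, "cold": False},
}
accessible = {"w1": ["w1", "w2"], "w2": ["w2", "w3"], "w3": ["w3"]}

def necessarily(prop, at):
    # "Box": the proposition holds in every world accessible from `at`.
    return all(worlds[w][prop] for w in accessible[at])

def possibly(prop, at):
    # "Diamond": the proposition holds in some world accessible from `at`.
    return any(worlds[w][prop] for w in accessible[at])

print(possibly("raining", "w1"))     # True: w1 itself is rainy
print(necessarily("raining", "w1"))  # False: accessible w2 is dry
print(necessarily("cold", "w1"))     # True: cold in both w1 and w2
```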

Replies from: mishka
comment by mishka · 2023-07-11T18:45:01.024Z · LW(p) · GW(p)

Thanks!

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2023-07-12T17:21:11.917Z · LW(p) · GW(p)

I try to answer the question that was asked, not the question I most want to answer, and I have failed at that aim here, so I'm glad someone else gave you a satisfactory answer.

(I failed because it has been so long since I thought of probabilities as anything but subjective and because I feel strongly about the misuse of the concept of Everett branches on this site.)

My apologies to you for my off-topic comments here that would've been better put in their own post.

Replies from: mishka
comment by mishka · 2023-07-13T01:16:45.179Z · LW(p) · GW(p)

I think Everett-related views are also on topic.

I am trying to see the spectrum of approaches to this in the LessWrong community. Obviously, people disagree, but I am going for completeness here: the more the various viewpoints are represented in the discussion, the better my understanding of how people on LessWrong think about all this.

comment by interstice · 2023-07-10T16:51:01.153Z · LW(p) · GW(p)

It is entirely possible—and I am tempted to say probable—under the many-worlds model of physics for every single one of us to be killed by AI research in every single one of the (billions of) branches that will evolve (or descend) from the present moment

I really doubt this is the case. "Every single one of the branches" is a huge amount of selection power - "billions" is massively underselling it. "Every single one" gives you an essentially unlimited number of coincidences to roll in our favor. So if there is any way in which we can solve alignment at the last minute, or any way to pull off global coordination, or any way in which the chaotic process of AI motivation-formation leads to us not dying, or any way civilization can be derailed before we develop AI, it's highly likely there is at least one future branch in which those things come to pass.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2023-07-10T20:15:04.046Z · LW(p) · GW(p)

I would prefer for this thread to stay on the topic of quantum physics and not become a debate or discussion on how doomed we are. (It was a rhetorical error for me to bring AI doom into the discussion: I should've chosen a different example.)

When you wrote, "at least one future branch", did you mean one Everett branch or did you mean one possible way the future might turn out?

(If you meant the latter, I agree with you and said as much when I wrote, "it is certainly possible that we will emerge relatively unscathed from the present dangerous situation".)

Replies from: interstice
comment by interstice · 2023-07-11T02:23:43.204Z · LW(p) · GW(p)

I meant one Everett branch. My point is that there are so many Everett branches that it's pretty likely that most any semi-realistic scenario will be realized in at least one of them. (That said, it's the probability of the branches that matters, not their existence/nonexistence, and probability can indeed get very tiny. And I think I agree with you that lesswrongers overall talk about possible worlds in a perhaps overly cavalier fashion.)

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2023-07-11T14:46:25.451Z · LW(p) · GW(p)

And I think I agree with you that lesswrongers overall talk about possible worlds in a perhaps overly cavalier fashion

I am not complaining and do not mean to complain about the talk of possible worlds. My complaint is restricted to when people bring quantum physics into it, e.g., by prefacing with "If many-worlds physics is true" or by calling the possible worlds Everett branches.

My point is that there are so many Everett branches that it’s pretty likely that most any semi-realistic scenario will be realized in at least one of them.

That is an excellent example of the thing I am complaining about!

Let me try this: I have no control or ability to predict which future branch I will end up in. (There is no experiment I can do to even detect other branches. Their existence is instead the logical consequence of the results of other experiments.) And as you know, the branch we are in is splitting constantly.

You believe correctly that there are a huge number of branches with you and me in them and that each branch is different. You conclude incorrectly that the future is somehow wide open: that almost everything will happen in some branch. In other words, you overestimate the relevance of these differences.

Well, let me ask you, how is it that you or I can reliably achieve any goal?

Suppose I use my computer to run a calculation, which takes 10 minutes, and the correct answer is 63. Since after 10 minutes there are a huge number of branches with you, me, and the computer in them, and since we can't control which branch we end up in, why do we always find that the computer returned 63? Why is it not sometimes 62 or 64?

There are some nuances to this point, but they don't change the conclusion. If the computer returns the wrong answer, e.g., 62, it is probably because there is a bug in the program. Note however that in that case the computer will probably keep returning 62 every time I run it, which (because which future branch I end up in is random) means that it returns the same wrong answer, 62, in every one of the billions of branches that descend from the moment when I started the calculation.

The computing expert will notice exceptions to this general rule. One exception is that a cosmic ray might flip a bit in the computer's RAM. Another possible exception I can imagine is that the computer might act as a "quantum measurement device" and that a quantum measurement (and in many-worlds physics, non-conscious inanimate objects perform quantum measurements all the time) might occasionally bubble up to a level at which it influences the outcome of the computation: if the program is threaded, for instance, I can imagine the measurement's influencing the behavior of a synchronization primitive (CPU instruction) in a way that changes the result of a calculation.

So there is some small amount of unreliability in a computer. Much more reliable than a computer, however, is the human enterprise of computing. Competent practitioners of the human enterprise of computing have ways of noticing when a computer is behaving unreliably (e.g., by logging errors detected by ECC RAM) and will replace such a computer. Computers used in space, where cosmic rays are more numerous, are routinely radiation-hardened. Quantum measurements are sometimes allowed to influence the results of calculations (because such "non-determinism" enables faster computation), but among competent practitioners, this happens only with calculations for which there is more than one correct or acceptable answer.

In summary, competent computerists are able to reliably perform calculations, year after year, and to ensure that the differences between the Everett branches that emerge over these years are irrelevant: if the differences were relevant, then because competent computerists and the rest of us had no control over which branch we ended up in, we would have noticed an unreliability in the results of calculations run by competent computerists, which we did not. There are surprises, but whenever we devote sufficient engineering resources toward investigating the cause of a surprise, we never have to resort to explanations relying on which Everett branch we and the computer ended up in among all the branches descending from the moment we started the computation.

How do you know that the same thing (the differences' being irrelevant) is not true in our original example? Maybe whether or not humanity survives AI research has already been decided -- it's just that we do not yet know with certainty the outcome of the decision. Probability is a measure of our ignorance. We are not rational enough -- not formidable enough in our cognitive powers -- to predict with certainty the fate of humanity. What is this horrible certainty you have that the source of our ignorance is uncertainty over which future Everett branch we will end up in? Why can't it be uncertainty over the outcome of a deterministic process that has already been set in motion and can no longer be stopped no matter what we do? (I'm not saying I like the idea that the fate of humanity has already been decided; I'm saying as a technical matter that we cannot just ignore the possibility.)

Going back briefly to the calculation that ran for 10 minutes and then produced the incorrect answer of 62: the incorrect answer had already been decided at the start of the calculation. Writing computer programs is hard; the programmer made a mistake. The same incorrect answer occurs in every Everett branch that descends from the moment the calculation began. We would see the same wrong answer if there were no such thing as Everett branches or many-worlds. It all adds up to normality.

A person does not need to understand quantum physics to contribute to AI alignment or to contribute to Lesswrong. It is OK not to understand quantum physics. All I'm asking is that if you don't understand it, don't refer to Everett branches unless you're asking a question about quantum physics. (I have no public objection to referring to possible worlds or possible futures.)

Replies from: interstice, TAG
comment by interstice · 2023-07-11T18:24:45.533Z · LW(p) · GW(p)

A person does not need to understand quantum physics to contribute to AI alignment or to contribute to Lesswrong

I understand quantum physics quite well and am aware of all the issues you've raised in your comment. I think you massively underestimate how stringent a condition "an event happens in no Everett branches" is when applied to the evolution of a large complex system like the Earth. (I'll also remark that if your main source of knowledge regarding quantum mechanics is a pop-science book you probably shouldn't be lecturing people about their lack of understanding :P)

if the differences were relevant, then because competent computerists and the rest of us had no control over which branch we ended up in, we would have noticed an unreliability in the results of calculations

What you're missing here and in the rest of your comment is that events that happen in Everett branches with microscopically small total probability can appear to never happen. To use your computer example, let's say that cosmic rays flip a given register with probability 10^-10. And let's say that a calculation will fail if there are errors in 20 independent registers. And let's say that the computer uses error-correcting codes such that you actually need to flip 10 physical registers to flip one virtual register. Then there is a probability of (10^-10)^200 = 10^-2000 that the calculation will fail. If our civilization does on the order of 10^21 operations per second (roughly the order of magnitude of our current computing power), we should expect to see such an error once every ~10^1971 years, vastly greater than the expected lifespan of the universe. Nevertheless, branches in which the calculation fails do exist.
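
A quick back-of-the-envelope check of that arithmetic, taking the illustrative figures above at face value:

```python
# Sanity check of the error-rate arithmetic above. All inputs are the
# comment's illustrative figures, not measurements.
from math import log10

p_flip  = 1e-10  # chance a cosmic ray flips one physical register
per_reg = 10     # physical flips needed to flip one virtual register
n_regs  = 20     # virtual-register errors needed for the calculation to fail

# Work in log10 space: 1e-10 ** 200 underflows an ordinary float to 0.0.
log10_p_fail = per_reg * n_regs * log10(p_flip)  # -2000.0

ops_per_sec   = 1e21   # roughly current global computing power
secs_per_year = 3.15e7

log10_years = -log10_p_fail - log10(ops_per_sec) - log10(secs_per_year)
print(f"P(fail) = 10^{log10_p_fail:.0f}")              # 10^-2000
print(f"expected wait ~ 10^{int(log10_years)} years")  # ~10^1971 years
```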

And that's the case of an error-corrected computer performing a simple operation, an isolated system engineered for maximum predictability. In a complex system like the Earth with many chaotic components (which unavoidably create branches due to amplification of tiny fluctuations to macroscopic scales), there are far more opportunities for quantum noise to have huge effects on the trajectory, especially regarding a highly complex event like the development of AGI. That's where my "horrible certainty" comes from - it is massively overdetermined that there are at least some future Everett branches where humanity is not killed by AGI. Now whether those branches are more like fluctuations in the weather or like a freakish coincidence of cosmic ray accidents is more of an open question.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2023-07-12T19:30:30.996Z · LW(p) · GW(p)

My probability that humanity will survive AI research is at least .01. Is yours at least that high?

If so, then why bring tiny probabilities into it? Oh, I know: you bring in the tiny probabilities in response to my third sentence in this thread, namely, "It is entirely possible—and I am tempted to say probable—under the many-worlds model of physics for every single one of us to be killed by AI research in every single one of the (billions of) branches that will evolve (or descend) from the present moment."

I still believe that statement (e.g., I was tempted to say "probable") but as a rhetorical move it was probably a mistake because it tended to cause readers to dig in their heels and resist what I really want out of this conversation: for people on this site to stop conflating the notion of a future Everett branch with the notion of a possible world or a possible future. In order to achieve that end, all I need to do, all I should've been trying to do, is to plant in the mind of the reader the possibility that there might be a difference between the two.

I am assuming that we both agree that there is a significant probability--i.e., a lot more than a few Everett-branches worth--that humanity will survive the AGI crisis. Please speak up immediately if that is not true.

Where does this probability, this hope, come from? I predict that you will say that the hope comes from diversity in future Everett branches. We both believe that the Everett branches will be diverse. I was briefly tempted to say that no 2 branches will be exactly the same, but I don't know quantum physics well enough. But if we choose 2 branches at random, there will almost certainly be differences between them. I am not willing to concede with certainty that there is significant hope in that. The differences might not be relevant: just because an electron has spin up in one branch and spin down in another branch does not mean all the people aren't dead in both branches; and just because we let the branch-creation process run long enough to accumulate lots of differences in the spins of particles and the positions and momenta of particles does not necessarily change the analysis.

The global situation is so complex that it is impossible for us humans with our puny brains to predict its outcome with certainty: to my eyes, the global situation (caused by AI research) certainly looks dire, but I could easily be overlooking something important. That is where most of my hope for humanity's future comes from: from the fact that my ability to predict is limited. And that is a different grounds for hope than your "it is massively overdetermined that there are at least some future Everett branches where humanity is not killed by AGI". For one thing, my hope makes no reference to Everett branches and would still be there if reality did not split into Everett branches. But more importantly, the 2 grounds for hope are different because there might be a process (steered by an intelligence, human or artificial) that has already been set in motion that can reliably kill us or reliably save us from AI researchers in essentially all future Everett branches--i.e., so large a fraction of the future branches that it is not worth even putting any hope into the other, exceptional branches because the probability that we are both fundamentally mistaken about quantum physics or some other important aspect of reality is much higher than the hope that we will end up in one of those exceptional branches.

And that, dear reader, is why you should not conflate in your mind the notion of a possible world (or possible future) with the notion of a "future Everett branch" (which is simply an Everett branch that descends from the present moment in time rather than an Everett branch that split off from our branch in the past with the result that we cannot influence it and it cannot influence us).

I want to stress, for people that might be reading along, that the discussion is not about why we are in danger from AI research or how great the danger is. AI danger is used here as an example (of a complex physical process) in a discussion of the many-worlds interpretation of quantum physics--which as far as I know has no bearing on the questions of why we are in danger or of how great the danger is.

ADDED. The difference between the two grounds for hope is not just pedantic: it might end up mattering.

Replies from: interstice
comment by interstice · 2023-07-12T20:48:20.175Z · LW(p) · GW(p)

I predict that you will say that the hope comes from diversity in future Everett branches

Nope. I believe (a) model uncertainty dominates quantum uncertainty regarding the outcome of AGI, but also (b) it is overwhelmingly likely that there are some future Everett branches where humanity survives. (b) certainly does not imply that these highly-likely-to-exist Everett branches comprise the majority of the probability mass I place on AGI going well.

what I really want out of this conversation: for people on this site to stop conflating the notion of a future Everett branch with the notion of a possible world or a possible future

I agree that these things shouldn't be conflated. I just think "it is entirely possible that AGI will kill every single one of us in every single future Everett branch" is not a good example to illustrate this, since it is almost certainly false.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2023-07-13T00:22:17.992Z · LW(p) · GW(p)

If something is almost certainly false, then it remains entirely possible that it is true--because a tiny probability is still a possibility :)

But, yeah, it was not a good example to illustrate any point I care enough about to defend on this forum.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-07-13T02:05:43.613Z · LW(p) · GW(p)

But there's a larger issue, an issue that I think matters here: you didn't realize just how different the claim "this has high probability of occurring in most worlds" is from the claim "a certain thing will happen in every world". The first claim is much easier to show than the second: for the second you have to consider every case, or have a clever trick, since any counterexample breaks the claim.

Most != All is an important distinction here.

comment by TAG · 2023-07-11T15:44:23.255Z · LW(p) · GW(p)

All I’m asking is that if you don’t understand it, don’t refer to Everett branches unless you’re asking a question about quantum physics.

Also, don't refer to Everett branches if you mean Zurek branches.

comment by TAG · 2023-07-11T15:30:35.709Z · LW(p) · GW(p)

Specifically: although it is certainly possible that we will emerge relatively unscathed from the present dangerous situation caused by AI research, that does not mean that if things go badly for us, there will be any descendant-branches of our branch in which even a single human survives.

There won't be any iff there is a 100.0000% probability of annihilation. That is higher than EY's estimate. Note that if there is a 99% chance of annihilation, there is a guaranteed 1% of worlds with survivors.

Replies from: Avnix, rhollerith_dot_com
comment by Sweetgum (Avnix) · 2023-07-11T15:44:31.013Z · LW(p) · GW(p)

Bayesian probability (which is the kind Yudkowsky is using when he gives the probability of AI doom) is subjective, referring to one's degree of belief in a proposition, and cannot be 0% or 100%. If you're using probability to refer to the objective proportion of future Everett branches something occurs in, you are using it in a very different way than most, and probabilities in that system cannot be compared to Yudkowsky's probabilities.
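
One way to see why degrees of belief of exactly 0% or 100% are excluded: under Bayes' rule such a credence can never move, no matter how strong the evidence. A minimal sketch, with made-up likelihoods:

```python
# A degree of belief of exactly 0 or 1 is pathological: Bayes' rule
# can never move it, whatever the evidence says.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    num = prior * p_e_given_h
    denom = num + (1 - prior) * p_e_given_not_h
    return num / denom

# Evidence 99x more likely if the hypothesis is true than if it is false:
for prior in (0.0, 0.5, 1.0):
    print(prior, "->", bayes_update(prior, 0.99, 0.01))
# 0.0 -> 0.0   stuck at zero forever
# 0.5 -> 0.99
# 1.0 -> 1.0   stuck at one forever
```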

Replies from: TAG
comment by TAG · 2023-07-11T16:05:51.251Z · LW(p) · GW(p)

If you're talking about Everett branches, you are talking about objective probability. What I am talking about doesn't come into it, because I don't use "Everett branch" to mean "probable outcome".

Replies from: Avnix
comment by Sweetgum (Avnix) · 2023-07-11T16:25:27.960Z · LW(p) · GW(p)

What are you talking about then? It seems like you're talking about probabilities as the objective proportion of worlds something happens in, in some sort of multiverse theory, even if it's not the Everett multiverse. And when you said "There won't be any iff there is a 100.0000% probability of annihilation" you were replying to a comment talking about whether there will be any Everett branches where humans survive, so it was reasonable for me to think you were talking about Everett branches.

Replies from: TAG
comment by TAG · 2023-07-12T20:44:29.161Z · LW(p) · GW(p)

If I'm not talking about objective probabilities, I'm talking about subjective probabilities. Or both.

comment by RHollerith (rhollerith_dot_com) · 2023-07-11T15:55:01.205Z · LW(p) · GW(p)

Note that if there is a 99% chance of annihilation, there is a guaranteed 1% of worlds with survivors.

This is an excellent example of the kind of thing I am complaining about -- provided that by "worlds" the author means Everett branches. (Consequently, I am upvoting it and disagreeing with it.)

Briefly, the error is incorrectly assuming that all our uncertainty is uncertainty over which future Everett branch we will find ourselves in, ignoring our uncertainty over the outcome of deterministic processes that have already been set in motion.

Actually, I can say a little more: there is some chance humanity will be annihilated in every future Everett branch, some chance humanity will survive AI research in every future branch, and some chance the outcome depends on the branch.
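
That three-way split can be written as a law-of-total-probability decomposition. A toy sketch, with every number invented for illustration:

```python
# Toy version of the decomposition above. "branch_doom" = fraction of future
# Everett-branch measure with doom *if* that hypothesis about the underlying
# process is true; "credence" = subjective probability of the hypothesis.
hypotheses = {
    #                         (credence, branch_doom)
    "doomed_in_every_branch": (0.3, 1.0),
    "safe_in_every_branch":   (0.2, 0.0),
    "depends_on_the_branch":  (0.5, 0.6),
}

p_doom = sum(credence * branch_doom
             for credence, branch_doom in hypotheses.values())
print(p_doom)  # 0.6 -- driven by which hypothesis is true, not branch luck
```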
