Resurrection of the dead via multiverse-wide acausal cooperation
post by avturchin · 2018-09-03T11:21:32.315Z · LW · GW · 30 comments
TL;DR: The measure decline involved in random mind creation may be prevented if we take into account the very large number of random minds created in other universes.
Summary: P. Almond suggested the idea of resurrecting the dead via a quantum random generator which creates a random mind, but this approach has several problems: non-human beings appearing in our world, unnecessary suffering of imperfect copies, and measure decline.
Here I suggest three patches, which prevent most of the undesired effects:
1. A human mind matrix, to prevent purely random minds from appearing.
2. Digital immortality data, to create a person who satisfies all known external expectations, with randomness used only to fill in unknown information.
3. Multiverse-wide cooperation for the “cross-resurrection” of the dead between multiple worlds via quantum random minds, so the total measure of all resurrected people will not decline.
1. Introduction
Almond in “Many-Worlds Assisted Mind Uploading: A Thought Experiment” suggested the following idea about the resurrection of the dead by the use of a quantum random generator, which would create a random mind within a computer (Almond, 2006):
[A technician who lost someone’s brain scan file] writes a computer program which takes input from a physical system. The physical system, known as a quantum event generator, generates "1"s and "0"s randomly as a result of quantum events. The program will use the physical system to tell it what sequence of "1"s and "0"s will be used to try to recreate the lost scan file. The program starts with an empty scan file which will be filled with "1"s and "0"s.
If the many-worlds interpretation of quantum mechanics is correct, all possible minds will appear in separate timelines starting from the moment of random mind creation, which would mean the resurrection of everyone from his own point of view. However, this approach a) will not help an outside observer who wants to resurrect a relative, for instance, as the observer would see only a random mind, and b) leaves the quantum "measure" of existence of each mind infinitely small.
2. Problems of Almond’s approach
To illustrate the problems with quantum mind uploading, I will explore a simplified thought experiment where only names will be restored using quantum mind uploading. First, here is what Almond suggested:
Thought experiment “Not-patched quantum mind uploading”:
Bob had a friend, John Smith. John has died and Bob wants to resurrect him. Bob remembers only the first letter of John's surname: S.
Bob and John are interested only in the preservation of the name, and no other identity considerations are important. Bob wants to observe that his friend is alive and that his friend is named "John S…" (I would call this immortality from the point of view of an external observer). John wants his own immortality, and will be satisfied only if "John Smith" is created.
Bob creates a random quantum mind A, using a quantum generator to choose each new letter of the name.
It turns out that A is "jYY2№@11". Fewer than a 10^-30 share of all such copies in the multiverse are named John Smith. Both Bob and John are unhappy.
This thought experiment leaves both John and Bob unsatisfied; the three reasons for this are explored below.
2.1. Problem 1: Measure decline
Problem 1 is a problem for John.
Measure can be defined as the share of observers of a given type among all possible observers. If the typical size of a simulated mind is, say, 10^15 bits, the chance that a randomly generated mind will be exactly the needed person is 2^-(10^15). In other words, a quantum mind generator results in a measure decline by a factor of 2^(10^15), an extremely large number. Even in our thought experiment the measure decline is a factor of 10^30.
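As a rough illustration of this arithmetic (a sketch; the 48-bit toy name size is an assumption for illustration, not a figure from the post):

```python
import math
from fractions import Fraction

def hit_probability(n_bits: int) -> Fraction:
    """Probability that a uniform random bitstring of length n_bits
    equals one fixed target string."""
    return Fraction(1, 2 ** n_bits)

# Toy analogue of the thought experiment: a name carrying 48 bits of
# information is hit by chance in only one branch out of 2^48.
print(float(hit_probability(48)))   # ~3.55e-15

# For a full mind of 10^15 bits, the decline factor 2^(10^15) cannot
# even be stored as a float; only its logarithm is tractable.
log10_decline = 10 ** 15 * math.log10(2)
print(f"{log10_decline:.3e}")       # the factor has ~3.010e+14 decimal digits
```

The point of the logarithm step is that the decline factor is so large that any practical bookkeeping about measure has to be done in log space.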
Many authors claim that a large measure decline should be treated as death, or as an infinitely small chance of survival. Such discussions appeared in the context of so-called quantum immortality, that is, the counterfactual possibility of surviving death by continuing to exist in quantum multiverse timelines where one does not die.
Even if measure decline is not bad per se, it leads to a world where very-small-probability outcomes dominate an observer's possible futures, and such parasitic outcomes may be full of suffering. For example, the improbable survival landscape of quantum immortality may be dominated by people who are very old and dying but unable to die (this could be patched by signing up for cryonics).
If we use expected utility calculations, and measure decline proportionally reduces the utility of any outcome associated with it, we can simply ignore copies with infinitely small measure.
2.2. Problem 2: Non-human and not welcomed minds
Problem 2 is mostly for Bob.
Another problem is that most random minds will be non-human and will not be adapted to our world, so they will suffer or cause suffering to the people living here. In our thought experiment, "jYY2№@11" is an example of a non-human random mind.
Such random minds are also extremely bad for any outside observer like Bob, as he is very unlikely to meet anyone resembling his friend John Smith.
2.3. Problem 3: Damaged minds
Problem 3 is a problem for both Bob and John.
Most randomly created minds will not be minds at all, but garbage code or, at "best", damaged minds. For example, if Bob wants to resurrect John Smith, there will be many more copies in which his name (as well as his other properties) is a corruption of the name Smith, for example Smthi, Smiht, Misth, Smitt, etc. For a name n bits long, there are exactly n names which differ from it by one bit.
Thus, for any real person there will be a much larger set of damaged copies, which implies suffering as the most probable outcome of quantum random resurrection, and s-risks for all people.
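The counting claim above can be checked directly (a minimal sketch; encoding a "name" as a bitstring is an illustrative assumption):

```python
def one_bit_neighbors(bits: str) -> list[str]:
    """All bitstrings at Hamming distance exactly 1 from `bits`."""
    return [
        bits[:i] + ("1" if b == "0" else "0") + bits[i + 1:]
        for i, b in enumerate(bits)
    ]

name = "10110"            # a 5-bit toy "name"
damaged = one_bit_neighbors(name)
print(len(damaged))       # 5: an n-bit name has exactly n one-bit corruptions
print(name in damaged)    # False: every neighbor differs from the original
```

Allowing up to two flipped bits already gives n + n(n-1)/2 corrupt variants against a single exact copy, which is why damaged copies dominate the distribution.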
3. Patches
Fortunately, quantum random mind uploading can be patched so that it provides much more satisfaction for John and Bob.
Patch 1. The use of the human mind’s universal model as a starting point
The goal of this patch is to avoid the minds of "aliens" and non-working gibberish code, and thus prevent the suffering of most created minds. For example, with a human mind model, a possible name will be generated not as random symbols but from a preset of typical human names.
Such a human mind model may look like an untrained neural network which has the general architecture of a human mind, with some additional constraints, so that any random set of parameters creates a more-or-less normal human mind. We assume that some future assistant AI will be able to find an appropriate model.
In that case, Bob uses a random mind generator for the parameters of the universal human mind model. He gets "Maria Stuart". This increases the share of the worlds where the real John Smith is resurrected to 10^-10. Both John and Bob are a little more satisfied: Bob gets a human friend, and John increases his measure.
Obviously, some minds may not want to be resurrected, but this could be an important parameter in the model, and models where "resurrection preference = false" will be ignored.
Patch 2. The use of the digital immortality data to create only minds which comply with our expectations
The problem of Bob's satisfaction can be overcome by using Bob's expectations as priors, if there are no other current or future sources of data about John.
In that case, Bob uses his memories of John S. to create a model of him. He remembers that John was either John Smith or John Simpson. He uses a quantum random coin to choose between Smith and Simpson, and gets "John Simpson".
In another branch of the quantum multiverse, where the coin falls tails, John Smith appears, but his measure declines to 0.5. Both John and Bob are partly satisfied: Bob got someone who looks like his friend, but he knows that it is not exactly his friend, and that his friend now has a smaller measure of existence.
Digital immortality, or indirect mind uploading, is the collection of information about a person while he is alive, in the hope that a future advanced AI will be able to resurrect him by creating an advanced model of the personality based on all available information. Such a model will, by definition, satisfy Bob and all other relatives, as all available information has already been taken into account, including all the relatives' expectations. However, large chunks of information will never be known and thus have to be replaced with random data. Even if quantum randomness is used to fill the gaps, John will have an infinitely small share of all possible worlds, and in most other worlds he will be replaced by someone else.
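A minimal sketch of Patch 2's reconstruction step (the field names and candidate lists here are illustrative assumptions, and `secrets.choice` merely stands in for a true quantum random generator):

```python
import secrets

def reconstruct(known: dict, options: dict) -> dict:
    """Build a person-model: keep all known data verbatim and fill each
    unknown field with a (quantum) random pick from its plausible options."""
    model = dict(known)
    for field, candidates in options.items():
        model[field] = secrets.choice(candidates)
    return model

# Bob's memories constrain the model; only the surname remains uncertain.
bob_memory = {"first_name": "John", "surname_initial": "S"}
model = reconstruct(bob_memory, {"surname": ["Smith", "Simpson"]})
print(model["surname"] in {"Smith", "Simpson"})   # True in every branch
```

The design point is that randomness enters only through the `options` dictionary: everything Bob actually remembers is copied verbatim, so every branch produces a model consistent with his expectations.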
Patch 3. The use of multiverse-wide cooperation for the cross-resurrection
The next step is for Bob to consider that not only his universe exists: all possible other universes exist in the Multiverse.
Bob concludes that, because all possible observers exist in the Multiverse, his John Simpson, created via a quantum random generator, is a resurrection of some John Simpson from another universe, while the John Smith who lived in our universe will be resurrected in some other universe, where another copy of Bob performs the same experiment.
In other words, Bob and Bob’s copies in other universes cooperate to resurrect the exact John Smith.
As the second universe is exactly the same as ours except for John's name, it contains another exact copy of Bob, and this copy of Bob also wants to resurrect his friend John S., so he uses another quantum random mind generator.
So the total measure of John Smith has not declined, provided Bob takes into account that other copies of Bob in other universes will run the same experiment. By deciding to start the random mind generator (and not to turn off the resulting mind), Bob joins a large group of other minds who think similarly but are located in causally disconnected parts of the Multiverse. Everyone expects that some other random generator will recreate an exact copy of their loved one.
In a real case with a large amount of missing data, say gigabytes, this requires the simultaneous running of an extremely large number of quantum random mind generators, on the order of 10^(10^9), which is possible only via multiverse-wide cooperation. The measure will not decline in this case either, as for every dead person there will be one random person, and given the large numbers, any person will be randomly recreated in approximately one world. (One might go deeper and take the standard deviation into account, but because we use quantum generators under the many-worlds interpretation, each universe creates exactly its share of Johns, and there will be no fluctuations that would leave some Johns non-existent while duplicating others.)
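The bookkeeping behind this cooperation argument can be sketched in a toy model (an assumption-laden illustration: minds are 3-bit strings, and under many-worlds a single generator run deterministically contributes measure 1/K to each of the K possible outcomes):

```python
from collections import Counter
from itertools import product

# Toy model: a "mind" is a 3-bit string, so there are K = 8 possibilities.
outcomes = ["".join(bits) for bits in product("01", repeat=3)]
K = len(outcomes)

# Suppose K causally disconnected universes each lost one distinct mind,
# and each runs a single quantum random mind generator.
measure = Counter()
for _lost in outcomes:        # one generator run per universe
    for o in outcomes:        # MWI: every branch appears with measure 1/K
        measure[o] += 1 / K

# Each lost mind is recreated with total measure exactly 1 across the
# cooperating universes, with no fluctuations, as the argument claims.
print(all(abs(m - 1.0) < 1e-12 for m in measure.values()))   # True
```

Because every generator run splits into all K branches rather than sampling one of them, the per-mind totals come out exact instead of merely exact in expectation, which is the reason the standard-deviation worry does not arise.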
Any of Bob's copies can join such multiverse-wide cooperation by creating just one quantum random mind (and treating the resulting mind well).
4. Remaining problems
Multiverse. What if the multiverse doesn't actually exist? In that case, Bob and John get partly satisfying results: Bob gets John's copy, but the copy is not perfect from John's point of view. If the quantum multiverse is not real but some other form of multiverse exists, such as one based on inflationary cosmology, the resurrection method will still work.
Defection. Bob may not create any random mind generator at all, yet still expect that someone else will recreate his friend. In general, the rate of defection may be known and compensated for by an increased number of random minds created by those with more resources.
There are several other possible generic problems of multiverse-wide cooperation, including infinite ethics, the possibility of acausal blackmail, the need for a method to measure similarity between agents, and problems with agents that have different values, as described in the comments to the EA Forum post.
Conclusion
I hope that this post may increase one's hope for future personal resurrection by superintelligent AI.
30 comments
comment by mako yass (MakoYass) · 2018-09-04T06:18:24.520Z · LW(p) · GW(p)
Hello again, compat...
The use of the human mind’s universal model as a starting point
The aspects over which real people vary are a function of real lived histories, not random numbers in a neural template that you can just randomize independently. In order to dredge up a uniformly sampled mind who really could have lived, so that the total measure of humans who really lived would be preserved under conditions of multiversal cooperation, you would need to simulate randomly chosen actual human histories. That would be expensive. Most lived histories were pretty miserable, too, so there's an additional eudaimonia cost being imposed there.
If your generator abandons the brittle logic of real histories, you're going to end up with a majority (there are many more ways to miss reality than hit it) of specimens who, on inspection, could not have lived. Who have inconsistent memories, personalities that don't accord with their experiences, skills they never practiced.
I suppose, if you could command, say, several billion stars for a billion years, then stop once we've satisfied our obligations of the acausal treaty, grant the resurrected visitors from other timelines citizenship and the associated immortality, then get on with living out our eschaton, I suppose humans' grief might really be so deep as to motivate a project like that.
To know that our lost ones are alive out there in some other timeline, with the same measure they left off with, maybe that's really worth the cost for some people.
Hopefully not too many people. Some of us would rather just let the dead lie and make new, better people instead for a fraction of the cost.
Replies from: avturchin↑ comment by avturchin · 2018-09-04T11:30:23.330Z · LW(p) · GW(p)
This is a reasonable objection, which may require a Patch 4 for the whole method in order to escape the "billion stars for a billion years" cost (which is still a small cost for a universe-wide superintelligent AI, which will control billions of billions of stars for tens of billions of years).
Fortunately, Patch 4 is simple: we model just one mind history which complies with the known initial conditions, and use random variables for the unknown initial historical facts. In that case we get the correct distribution of random minds, but spend computational resources simulating just one person. Some additional patches may be needed to avoid intense suffering inside the simulation, such as using only one playing character and turning off its subjective experiences if the pain rises above an unbearable threshold.
To resurrect all the dead, we don't need to run many past histories; we need just one simulation of the entire human past, in which case all characters will be "playing characters". Running one such simulation may be computationally intensive, but not a billion stars for a billion years.
The next step of such a game would be to resurrect "all possible people", which again could be done at small cost via multiverse-wide cooperation. In that case, the creation of new people, the resurrection of past people, and giving life to all possible minds will be approximately the same action of running different simulations with different initial parameters.
Moreover, we may be morally obliged to resurrect all possible minds, to save them from very improbable timelines where an evil AI creates s-risks. I will address this type of multiverse-wide cooperation in the next post.
Replies from: Kaj_Sotala, MakoYass↑ comment by Kaj_Sotala · 2018-09-07T10:53:25.019Z · LW(p) · GW(p)
Fortunately, Patch 4 is simple: we model just one mind history which complies with the known initial conditions, and use random variables for the unknown initial historical facts.
Shameless plug: you may enjoy my short fiction piece on a similar idea.
↑ comment by mako yass (MakoYass) · 2018-09-06T03:32:43.949Z · LW(p) · GW(p)
Just regarding
which is still a small cost for a universe-wide superintelligent AI, which will control billions of billions of stars for tens of billions of years
All of the stars will be dead in 100 trillion years (although it's likely a good org will aestivate and continue most of its activities beyond that, which supposedly will get them a much higher operating efficiency than anything that's imaginable now). There are only 50 Bn stars in the local cluster, and afaik it's not physically possible to spread beyond the local cluster. All that stuff's just a bunch of fading images that we'll never touch. (I tried to substantiate this and the only simple account I could find was a youtube video. Such is our internet https://www.youtube.com/watch?v=ZL4yYHdDSWs best I could do)
(And it doesn't seem sound, to me, to guess that we'll ever find a way around the laws of relativity just because we really want to.)
It still seems profoundly hard to tell how much of the distribution of a history generator is going to be fictional, and it wouldn't surprise me if the methods you have in mind generate mostly cosmically unlikely life-histories. You essentially have to get the measure of your results to match the measure of the people who really lived and died. We have access to a huge measure multiplier, but it's finite, and the error rate might be just as huge.
How many lives-worth of energy are you trading away for every resurrection?
Replies from: avturchin↑ comment by avturchin · 2018-09-06T10:21:37.170Z · LW(p) · GW(p)
Personally, I think it would not be computationally intense for an AI capable of creating past simulations (and it will likely create them anyway for some instrumental reasons), so it would more likely take less than 1000 years and a small fraction of one star's energy. This is based on some ideas about the limits of computation and the power of the human brain; I think Bostrom had calculations in his article about simulations.
However, I think that we are morally obliged to resurrect all the dead, as most people of the past dreamed of some form of life after death. They lived and died for us and for our capability to create advanced technology. We will pay the price back.
comment by Andaro · 2018-09-08T17:52:58.040Z · LW(p) · GW(p)
If the required kind of multiverse exists, this leads to all kinds of contradictions.
For example, in some universes, Personal Identity X may have given consent to digital resurrection, while in others the same identity may have explicitly forbidden it. In some universes, their relatives and relationships may have positive preferences regarding X's resurrection; in others, negative ones.
Given your assumed model of personal identity and the multiverse, you will always find that shared identities have contradicting preferences. They may also have made contradicting decisions in their respective pasts, which makes multiverse-spanning acausal reciprocity highly questionable. For every conceivable identity, there are instances that have made decisions in favor of your values, but also instances that did the exact opposite.
These problems go away if you define personal identity differently, e.g. by requiring biographical or causal continuity rather than just internal state identity. But then your approach no longer works.
I personally am not motivated to be created in other Everett branches, nor do I extend my reciprocity to acausal variants.
Replies from: avturchin↑ comment by avturchin · 2018-09-08T18:21:24.434Z · LW(p) · GW(p)
I think most of your objections are addressed by Patch 2 in the post. As we use all biographical data about the person to create his model (before filling the gaps with random noise), we will know whether he wanted to be resurrected, and we will not resurrect those copies which did not want to be resurrected.
There are also elements of biographical and causal continuity: we use all known biographical data to create the best possible model, and this information is received via causal lines from the original person, which creates some form of causal connection between the original and the resurrected copy.
comment by Shmi (shminux) · 2018-09-03T22:06:06.957Z · LW(p) · GW(p)
Not related to the main body of your post, just to its false premise.
If the many-worlds interpretation of quantum mechanics is correct...
Interpretations by definition make no difference. Eliezer screwed with so many eager rationalist minds by pushing his pet idea, completely unnecessary and even harmful to zen and the art of cultivating useful thinking patterns and raising the sanity waterline. Interpretations are mind projection fallacies.
Replies from: TAG, avturchin, Gurkenglas, matthew-barnett, avturchin↑ comment by Gurkenglas · 2018-09-04T17:42:39.862Z · LW(p) · GW(p)
We can test whether consciousness causes collapse once we can simulate a person on a quantum computer, so it's not an interpretation by definition. See chapter 8 of Quantum Theory as a Universal Physical Theory.
Replies from: TAG↑ comment by TAG · 2018-09-05T11:27:39.937Z · LW(p) · GW(p)
We can test whether consciousness causes collapse once we can simulate a person on a quantum computer, so it’s not an interpretation by definition.
Assuming that the simulated person actually has consciousness and isn't a zombie. That is a very big assumption, and if you ever did perform the experiment, the CCC enthusiasts would fight it on those grounds.
Replies from: Gurkenglas↑ comment by Gurkenglas · 2018-09-06T01:59:36.201Z · LW(p) · GW(p)
They're making a philosophical mistake [LW · GW] and just because many would make it doesn't mean MWI isn't falsifiable.
Replies from: TAG↑ comment by TAG · 2018-09-06T09:03:12.825Z · LW(p) · GW(p)
They are making what some consider to be a philosophical mistake and others don't. The falsifiability of MWI isn't a scientific fact if it depends on a contentious philosophical claim. By the way, computational zombies, functional duplicates of humans which lack consciousness, can't be argued against using the same arguments that exclude p-zombies.
Replies from: Gurkenglas↑ comment by Gurkenglas · 2018-09-07T19:07:11.190Z · LW(p) · GW(p)
No belief-theoretic mistake is considered one by those who make it. We should be thinking about what's true, not what people think, to find out whether the premise is false. If your functional duplicate says it's conscious, that's going to be for the same reasons you would, and you couldn't deduce your consciousness from talking about it any more than you could deduce the duplicate's consciousness from its talking about it. As the link explains.
↑ comment by Matthew Barnett (matthew-barnett) · 2018-09-04T03:58:30.758Z · LW(p) · GW(p)
Could you clarify?
Surely the interpretations have different implications about the nature of reality, right?
Replies from: shminux↑ comment by Shmi (shminux) · 2018-09-05T01:55:50.875Z · LW(p) · GW(p)
They are called "interpretations" and not "theories" for a reason: they are designed to make no new testable predictions. I don't know what untestable musings can say about the nature of reality, as opposed to the nature of the person doing the musing.
Replies from: jessica.liu.taylor, Vladimir_Nesov↑ comment by jessicata (jessica.liu.taylor) · 2018-09-05T03:40:43.060Z · LW(p) · GW(p)
If I interpret "reality" as "the underlying causality I am part of" rather than "what my future sense data is going to be" then untestable statements about reality are totally possible, and can be very action-relevant, for example in making decisions that only have effects after I die. It is possible to form very-likely-true beliefs about many of these statements using considerations such as parsimony and symmetry.
See also No Logical Positivist am I [LW · GW].
Replies from: shminux↑ comment by Shmi (shminux) · 2018-09-05T06:22:18.835Z · LW(p) · GW(p)
I don't understand what "the underlying causality I am part of" can possibly mean, since causality is a human way to model observations. This statement seems to use the mind projection fallacy to invert the relationship between map and territory.
untestable statements about reality are totally possible, and can be very action-relevant, for example in making decisions that only have effects after I die
Obviously. There is a good model of what happens after you die. It has been tested many times on other people. This has nothing to do with untestability of interpretations, which all predict the same thing, because they use the same mathematical formalism.
It is possible to form very-likely-true beliefs about many of these statements using considerations such as parsimony and symmetry.
Not really parsimony or symmetry as main considerations. What you use is a model of the world that has been proven reliable in the past. Parsimony and symmetry are just some ideas that were useful in constructing this model. E.g. "when a person dies, the world continues to exist" and "I am a person" are both testable models. Sure, there are models like "I'm a special snowflake", but they generally don't survive the contact with observations.
Replies from: jessica.liu.taylor, dxu↑ comment by jessicata (jessica.liu.taylor) · 2018-09-05T07:25:29.160Z · LW(p) · GW(p)
Re causality: In context I mean something like: there is a world that I am part of, and it evolves over something similar to time, with "future" things depending on "past" things. (The issue with time is that causality seems more fundamental than time; there can be multiple consistent assignments of times to the same causal structure, e.g. in the theory of relativity). Unless you are a solipsist or otherwise think the territory is unreferencable I am not sure how we could disagree on this? I suppose you could believe that things at different "times" exist and that things at "times immediately in the future" always satisfy some law with respect to the "previous" things, but that they don't depend on "past" things (scare quotes to indicate that time isn't fundamental); this leads to the same conclusions with respect to the things under discussion.
Re stuff that happens after you die: "This is what happens after other people die, therefore it will happen after I die" is an appeal to symmetry. The statement "the world continues to exist after I die" can't be tested; similar statements ("the world continues to exist after Bob dies") can be tested but they are not equivalent to the statement in question.
Similarly, "Small parts of the world evolve according to quantum mechanics, therefore large ones do too" is an appeal to symmetry; this statement implies that there is no such thing as "wavefunction collapse" (except perhaps a universal wavefunction collapse) and in particular consciousness can't cause collapse.
Re parsimony and symmetry: what do you think of the grue/bleen problem? Parsimony and symmetry both offer easy answers to this problem, but how do you make sensible predictions without appealing to either of these or to very similar things?
Replies from: shminux↑ comment by Shmi (shminux) · 2018-09-07T22:20:45.813Z · LW(p) · GW(p)
Most of these ideas are of the type of "what happens to a spaceship when it goes beyond the cosmological horizon?" and the answer is pretty standard: we build models which work well in certain situations and we apply them to all situations where they are reasonably expected to work, even if we sometimes don't get to see the results first-hand. You can call it parsimony or symmetry, but the order is reversed: you first build a working model, then apply it wherever it makes sense and adjust or replace as needed where it is outside its domain of applicability based on new observations. In the cases where the observations are not available, you take a chance, but generally not a huge one. For example, there might be a topological domain wall just outside the cosmological horizon, but there are no indications of this being the case given what we know about the universe.
Replies from: jessica.liu.taylor↑ comment by jessicata (jessica.liu.taylor) · 2018-09-07T22:40:14.169Z · LW(p) · GW(p)
The question of where a model is expected to generalize and where it isn't is the entire problem. You are taking expectations about generalization as basic; I argue that these expectations are based on considerations of parsimony and symmetry. The order isn't reversed here; parsimony/symmetry give rise to intuitions about whether a model will generalize.
The argument that no collapse happens at intermediate scales between very small and the entire universe is a symmetry-based argument, just as the argument that things beyond the cosmological horizon still exist is a symmetry-based argument.
Replies from: shminux↑ comment by Shmi (shminux) · 2018-09-08T03:16:40.353Z · LW(p) · GW(p)
The argument that no collapse happens at intermediate scales between very small and the entire universe is a symmetry-based argument, just as the argument that things beyond the cosmological horizon still exist is a symmetry-based argument.
Yes, I agree. But to discover and effectively apply symmetry one generally has to have a workable model first. For example, the invariance of the speed of light followed from the Maxwell equations, was confirmed experimentally, and was incorporated in the math of the Lorentz transformations, yet without a good theory those appeared ugly, not symmetric. It took a new theory to reveal the hidden symmetry, and to eventually write the Maxwell equations in half a line, □A = J and div A = 0, down from the original 20. Same with the cosmological horizon: it does not appear symmetric, and one needs to understand some amount of general relativity to see the symmetry, or believe those who say that there is one. "No collapse at the intermediate scales" is a good hypothesis, but quite possibly wrong, because gravity is likely to cause decoherence in some way, as Penrose pointed out.
Replies from: jessica.liu.taylor↑ comment by jessicata (jessica.liu.taylor) · 2018-09-08T03:52:23.178Z · LW(p) · GW(p)
I agree with most of the things you are saying here. I am not sure I agree about the cosmological horizon; it seems like you could derive this from special relativity, but in any case this is a minor point. I don't know enough physics to confirm the thing you said about gravity and collapse.
In any case it seems you are currently saying "no collapse at intermediate scales is a good hypothesis and maybe wrong for this specific reason" whereas initially you were saying "interpretations [of quantum mechanics] by definition make no difference" and "I don't know what untestable musings can say about the nature of reality", and these statements seem to be in tension (as the question of whether collapse happens at intermediate scales depends on what are currently called "interpretations of quantum mechanics", and is currently untestable); do you still agree with your original statements?
Replies from: shminux↑ comment by Shmi (shminux) · 2018-09-08T04:59:37.731Z · LW(p) · GW(p)
Yes, you could derive the horizon stuff from special relativity, but to construct an asymptotically de Sitter spacetime you need general relativity. Anyway, that wasn't the original issue. "no collapse at intermediate scales is a good hypothesis and maybe wrong for this specific reason" is one possibility, the likelihood of which is currently hard to evaluate, as it extrapolates quantum mechanics far beyond the domain where it had been tested (Zeilinger's bucky ball double slit experiments). The nature of the apparent collapse is a huge open problem, with decoherence and Zurek's quantum Darwinism giving some hints at why certain states survive and others don't, and pretending that MWI somehow dissolves the issue, the way Eliezer tells the tale, is a bit of a delusion. Anyway, MWI does not make any predictions, since it simply tells you that the feeling of being in a single world is an illusion, without going into the details of how to resolve the Wigner's friend and similar paradoxes. See Scott Aaronson's lecture 12 on the topic for more discussion.
↑ comment by dxu · 2018-09-07T04:38:50.757Z · LW(p) · GW(p)
I don't understand what "the underlying causality I am part of" can possibly mean, since causality is a human way to model observations. This statement seems to use the mind projection fallacy to invert the relationship between map and territory.
If you want to discount the use of causal models as merely a "human way to model observations" (one that presumably bears no underlying connection to whatever is generating those observations), then you will need to explain why they work so well. The set of all possible sequences of observations is combinatorially large, and the supermajority of those sequences admit no concise description--they contain no regularity or structure that would allow us to substantially compress their length without losing information. The fact that our observations do seem to be structured, therefore, is a very improbable coincidence indeed. The belief in an external reality is simply a rejection of the notion that this extremely improbable circumstance is a coincidence.
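The incompressibility point can be made concrete with a toy sketch (my illustration of the argument, using `zlib` as a stand-in for "concise description"): a structured sequence compresses dramatically, while a typical random sequence of the same length does not compress at all.

```python
import os
import zlib

# A highly regular "observation sequence": 10,000 bytes of a repeating pattern.
structured = b"0101010101" * 1000

# A typical member of the set of all possible sequences: 10,000 random bytes.
# Almost all such sequences admit no concise description.
random_seq = os.urandom(10_000)

# Compression ratio = compressed size / original size (lower = more structure).
ratio_structured = len(zlib.compress(structured)) / len(structured)
ratio_random = len(zlib.compress(random_seq)) / len(random_seq)

print(f"structured: {ratio_structured:.3f}")  # far below 1.0
print(f"random:     {ratio_random:.3f}")      # at or slightly above 1.0
```

The structured sequence shrinks to a tiny fraction of its length; the random one cannot be shortened, which is the sense in which structured observations are "improbable" among all possible sequences.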
↑ comment by Vladimir_Nesov · 2018-09-05T11:59:02.481Z · LW(p) · GW(p)
I think the correct claim around this topic is that an interpretation may reflect a moral judgement, and consequently decisions about what to do in a world seen under that interpretation, which does say something about the person doing the musing and could be very useful to them. Conversely, knowledge about reality is useful to a person only to the extent it helps them with decision making. So insisting on divesting theories of interpretation is good methodology with both upsides and downsides, not a fundamental principle, which is, I'm guessing, what some people hear when the distinction between theories and interpretations is pointed out.
↑ comment by avturchin · 2018-09-05T14:44:32.868Z · LW(p) · GW(p)
The idea of acausal multiverse-wide cooperation makes sense if even one of the many arguments for the universe being (almost) infinitely large has significant probability. Tegmark listed 4 types of multiverse; I find that there are actually around 10 such arguments.
Anyway, I think that the Everettian multiverse is more than just an "interpretation", and you are right that we shouldn't base our policy on "interpretations". In the same way, the "anthropic principle" is not just a "principle": we happen to call it that, but it is more than a random principle of unknown epistemic status. It is an idea about the conditional probability of past events and could be presented without being called a "principle".
In other words, the "Everettian interpretation" is not an interpretation but a physical theory. Whether it can be tested is another question; some suggestions exist, though they are debatable, such as the "quantum suicide test" or "quantum bomb testing".
↑ comment by Shmi (shminux) · 2018-09-07T03:52:34.154Z · LW(p) · GW(p)
I am one of those who considers Tegmark's hierarchy a steaming pile of BS that has nothing to do with physics or reality. So I automatically discount any reasoning based on it. The many-worlds direction is a natural way to try to extrapolate quantum mechanics, but so far it has not produced anything consistent, and it is in direct conflict with general relativity, since all those multiple worlds share the same spacetime, yet produce no obvious gravitational effects despite being macroscopic, even if undetectable by other means because of decoherence. So for now it is just a convenient tool for musing about possible worlds while pretending that they are real. That is how Eliezer uses it, anyway, along with the flock of his followers who learned about QM from his sequence on the topic.
The anthropic principle is a different beast, and I agree that it has some usefulness, though not nearly as much as its proponents claim, mainly because you cannot usefully talk about probabilities without specifying a probability distribution. But that's a different topic.
↑ comment by avturchin · 2018-09-07T09:30:51.399Z · LW(p) · GW(p)
I understand your position: EY ignores many other interesting interpretations of QM, like retrocausality, and if you go deeper into the field, his position may seem oversimplified.
However, this is not equivalent to the claim that the universe is finite in space and time. If some form of infinity (or very great size) is possible, such as a cyclic universe, it creates the possibility that a very large number of civilizations exist in causally disconnected regions. This idea may deserve separate analysis, without simply invoking Tegmark.