When would an agent do something different as a result of believing the many worlds theory?
post by mako yass (MakoYass) · 2019-12-15T01:02:40.952Z · LW · GW · 11 comments
This is a question post.
Contents
Answers: Korz (7) · DanielFilan (5) · TAG (4) · Teerth Aloke (3) · Nicholas Garcia (2) · Charlie Steiner (2) · James_Miller (2) · Donald Hobson (0)
11 comments
One of the things impeding the many worlds vs wavefunction-collapse dialogue is that nobody seems to be able to point to a situation in which the difference clearly matters, where we would make a different decision depending on which theory we believe. If there aren't any, pragmatism would instruct us to write the question off as meaningless.
Has anyone tried to pose a compelling thought experiment in which the difference matters?
Answers
answer by Korz

Since any collapse (if it does happen) occurs so 'late' that current experiments are unable to differentiate between many worlds and collapse, it seems quite possible that both theories will continue to give identical predictions for all realisable situations, with the only difference being 'one branch becomes realised' versus 'all branches become realised'.
General:
- Assuming this practical indistinguishability between the theories, I think that any utility function based on one of the theories can be directly translated into the other by just reinterpreting the theory-inherent probabilities. This assumes that all branches in the many-worlds reasoning are weighted by their 'probability' (e.g. the Quantum Russian Roulette thought experiment hinges on counting 'I survive'-branches differently [LW · GW]¹; see the sketch at the end of this answer).
More human-related:
- One relevant aspect is how natural utility maximisation feels when using one of the two theories as a world model. Thinking in many-worlds terms makes expected utility maximisation a lot more vivid compared to the different future outcomes being 'mere probabilities' -- on the other hand, this vividness makes rationalisation of pre-existing intuitions easier.
- Another point is that most people strongly value existence/non-existence in addition to the quality and 'probability' of existence (e.g. people might play Quantum Russian Roulette but not normal Russian Roulette, as many worlds makes sure that they will survive [in some branches]). This makes many worlds feel more comforting when facing high probabilities of grim futures.
- A third aspect is the consequences for the concept of identity. Adopting many worlds as a world model also means that naive models of self and identity are up for a major revision. As argued above, valuing all future branch selves equally (i.e. weighted by the 'probabilities') should make many worlds and collapse equivalent (up to the 'certain survival [in some branches]' aspect). A different choice in accounting for many worlds might not be translatable into the collapse world model.
Disclaimer:
I am still very much confused by decision theories that involve coordination without a causal link between agents, such as Multiverse-wide Cooperation. For such theories, other considerations might also be important.
----
¹: To be more exact, I would argue that the case for Quantum Russian Roulette becomes identical to the case for normal Russian Roulette if many-worlds branches are weighted by their 'probabilities' and the 'certain survival [in some branches]' bonus that many worlds gives is also taken into account.
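A minimal numeric sketch of this translation, with invented utilities (the specific numbers are illustrative only): if every branch is weighted by its Born measure, the many-worlds total is exactly the collapse expected utility, and the Quantum Russian Roulette argument only gets going once survival branches are re-weighted.

```python
# Toy translation between the two readings (invented numbers).
branch_measure = {"survive": 0.5, "die": 0.5}   # a quantum coin decides
utility        = {"survive": 1.0, "die": -100.0}

# Collapse reading: one outcome becomes real, with these probabilities.
eu_collapse = sum(p * utility[o] for o, p in branch_measure.items())

# Many-worlds reading with measure-weighted branches: the same number.
eu_mwi = sum(p * utility[o] for o, p in branch_measure.items())
assert eu_collapse == eu_mwi   # -49.5 either way: identical decisions

# Quantum Russian Roulette reasoning, by contrast, conditions on survival,
# effectively re-weighting the 'I survive' branch to measure 1:
eu_qrr = 1.0 * utility["survive"]   # +1.0, which licenses a different decision
```

The two totals only come apart when branches are counted by something other than their measure.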
↑ comment by mako yass (MakoYass) · 2021-06-08T02:02:56.250Z · LW(p) · GW(p)
Another point is that most people strongly value existence/non-existence additionally to the quality and 'probability' of existence
Mm, agreed. We're fans of quantities, rather than qualities, so I may have been underrecognizing this.
Humans clearly have special concerns about not existing at all, that extend beyond the linear concern for merely existing less. A quantum multiverse (or maybe even just a physically large multiverse, with chance recurrences) would soundly and naturally decrease a human's aversion to death, to some extent.
answer by DanielFilan

I think it's more natural to ask "how might an agent behave differently as a result of believing an objective collapse theory?" One answer that comes to mind is that they will be less likely to invest in quantum computers, which need to rely on entanglement among a large number of quantum systems that might not be maintained under objective collapse theories (depending on the exact collapse theory). Similarly, other physical theories of quantum mechanics will result in different predictions about what will happen in various somewhat-arcane situations.
More flippantly, an agent might answer the question 'What do you think the right theory of quantum mechanics is?' differently.
[Edited to put the serious answer where people will see it in the preview]
answer by TAG

Things like determinism and many worlds may not affect fine-grained decision making, but they can profoundly impact what decision making, choice, volition, agency, and moral responsibility are. It is widely accepted that determinism affects freedom of choice, excluding some notions of free will. It is less often noticed that many worlds affects moral responsibility, because it removes refraining: if there is the slightest possibility that you would kill someone, then there is a world where you killed someone. You can't refrain from doing anything that is possible for you to do.
↑ comment by Luca (MaimedUbermensch) · 2019-12-17T18:37:43.887Z · LW(p) · GW(p)
Does that mean that utilitarianism is incompatible with Many Worlds? If everything that is possible for you to do is something that you actually do, then utility across the whole multiverse is constant, even assuming any notion of free will.
Replies from: Viliam, MakoYass
↑ comment by Viliam · 2019-12-15T22:58:35.105Z · LW(p) · GW(p)
Everything is possible, but not everything has the same measure (is equally likely). Killing someone in 10% of "worlds" is worse than killing them in 1% of "worlds".
In the end, believing in many worlds will give you the same results as believing in collapse. It's just that, epistemologically, the believer in collapse needs to deal with the problem of luck. Does "having a 10% probability of killing someone, and actually killing them" make you a worse person than "having a 10% probability of killing someone, but not killing them"?
(From many-worlds perspective, it's the same. You simply shouldn't do things that have 10% risk of killing someone, unless it is to avoid even worse things.)
(And yes, there is the technical problem of how exactly you determine that the probability was exactly 10%, considering that you don't see the parallel "worlds".)
Replies from: TAG
↑ comment by TAG · 2019-12-16T12:54:55.297Z · LW(p) · GW(p)
Everything is possible, but not everything has the same measure (is equally likely). Killing someone in 10% of “worlds” is worse than killing them in 1% of “worlds”.
Apart from the other problem: MWI is deterministic, so you can't alter the percentages by any kind of free will, despite what people keep asserting.
Does “having a 10% probability of killing someone, and actually killing them” make you a worse person that “having a 10% probability of killing someone, but not killing them”?
Actually killing them is certainly worse. We place moral weight on actions as well as character.
Replies from: Lanrian
↑ comment by Lukas Finnveden (Lanrian) · 2019-12-16T18:01:13.647Z · LW(p) · GW(p)
MWI is deterministic, so you can't alter the percentages by any kind of free will, despite what people keep asserting.
Neither most collapse-theories nor MWI allow for super-physical free will, so that doesn't seem relevant to this question. Since the question concerns what one should do, it seems reasonable to assume that some notion of choice is possible.
(FWIW, I'd guess compatibilism is the most popular take on free will on LW.)
Replies from: TAG
↑ comment by TAG · 2019-12-16T18:10:36.789Z · LW(p) · GW(p)
Yes, but compatibilism doesn't suggest that you choose between different actions or between different decision theories.
Replies from: Lanrian
↑ comment by Lukas Finnveden (Lanrian) · 2019-12-16T20:01:57.051Z · LW(p) · GW(p)
Wait, what? If compatibilism doesn't suggest that I'm choosing between actions, what am I choosing between?
Replies from: TAG
↑ comment by mako yass (MakoYass) · 2019-12-16T00:01:01.044Z · LW(p) · GW(p)
No, if 99% of timelines have utility 1, while in 1% of timelines something very improbable happens and you instead cause utility to go to 0, the global utility is still pretty much 1. Some part of the human utility function seems to care about absolute existence or nonexistence, and that component is going to be sort of steamrolled by multiverse theory, but we will mostly just keep on going in pursuit of greater relative measure.
Replies from: TAG
↑ comment by TAG · 2019-12-16T12:58:22.236Z · LW(p) · GW(p)
That amounts to saying that if the conjunction of MWI and utilitarianism is correct, we would or should behave as though it isn't. That is a major departure from typical rationalism (e.g. the Litany of Tarski).
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2019-12-16T23:09:00.825Z · LW(p) · GW(p)
The question isn't really whether it's correct; the question is closer to "is it equivalent to the thing we already believed".
answer by Teerth Aloke

There is the Quantum Russian Roulette thought experiment. It was posted on LessWrong.
↑ comment by mako yass (MakoYass) · 2019-12-16T01:23:21.947Z · LW(p) · GW(p)
Yeah. I reject it. If you're any good at remapping your utility function after perspective shifts ("rescuing the utility function"), then, after digesting many worlds, you will resolve that being dead in all probable timelines is pretty much what death really is, and you have known for a long time that you do not want death, so you don't have much use for quantum suicide gambits.
answer by Nicholas Garcia

Many of the other comments deal with thought experiments rather than looking at the reality of how "many worlds" is USED. From my point of view as a non-physicist, it seems to be used primarily as pseudo-scientific "woo" - a revival of mystery and awe under the cloak of scientific authority. A kind of paradoxical mysticism for non-religious people, or fans of "science-ism".
An agent might act differently from MISUNDERSTANDING many worlds theory, or from paying more attention to it. Psychological "priming" is real and powerful.
The answer by TAG is a case in point. For someone committed to a belief in determinism or fatalism, having a many-worlds theory in mind may buttress that belief.
answer by Charlie Steiner

If put into an interferometer, someone who thinks the wavefunction has collapsed would believe, while in the middle, that they have a 50/50 chance of coming out of each arm, while an Everettian will make choices as if they might deterministically come out of one arm (depending on the construction of the interferometer).
The difficulty of putting humans into interferometers is more or less why this doesn't matter much. Though of course "pragmatism" shouldn't stop us from applying Occam's razor.
answer by James_Miller

Assume you put enormous weight on avoiding being tortured, and you recognize that signing up for cryonics carries some (very tiny) chance of being revived in an evil world that will torture you; absent many worlds, this causes you not to sign up. There is an argument that under many worlds there will be versions of you that get tortured regardless, so your goal should be to reduce the percentage of those versions. Signing up for cryonics in this world means you are vastly more likely to be revived and not tortured than revived and tortured, so signing up will likely lower the percentage of yous across the multiverse who are tortured. It reduces the relative weight of the versions of you trapped in worlds where the Nazis won and are torturing you.
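To put toy numbers on this argument (every probability below is invented for illustration): what matters is the fraction of your revived future measure that ends up tortured, and signing up shifts that fraction down by making benign revival overwhelmingly more likely than malicious revival.

```python
# Fraction of revived versions of you that are tortured, with and without
# cryonics. All probabilities are invented for illustration.
p_good_if_signed = 0.10     # revived into a decent world after signing up
p_evil_if_signed = 0.0001   # revived into a torture world after signing up
p_good_if_not    = 0.0      # non-signers are essentially never revived well...
p_evil_if_not    = 0.00005  # ...but some evil worlds revive you anyway

def tortured_fraction(good: float, evil: float) -> float:
    return evil / (good + evil)

print(tortured_fraction(p_good_if_signed, p_evil_if_signed))  # ~0.001
print(tortured_fraction(p_good_if_not, p_evil_if_not))        # 1.0
```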
answer by Donald Hobson

If you use some form of noncausal decision theory, it can make a difference.
Suppose Omega flips a quantum coin: if it's tails, they ask you for £1; if it's heads, they give you £100 if and only if they predict that you would have given them £1 had the coin landed tails.
There are some decision algorithms that would pay the £1 if and only if they believed in quantum many worlds; a CDT agent, however, would never pay, and a UDT agent would always pay.
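To put toy numbers on the gamble (evaluated from the before-the-flip standpoint, with Omega's prediction assumed to match the policy exactly):

```python
# Expected value of the two fixed policies over the quantum coin (toy model).
pay    = 0.5 * (-1) + 0.5 * 100   # tails: hand over £1; heads: receive £100
refuse = 0.5 * 0    + 0.5 * 0     # Omega predicts refusal and never pays out

print(pay, refuse)   # 49.5 0.0
# UDT endorses paying under either interpretation; CDT refuses under either.
# The comment thread below sketches an agent whose answer tracks MWI itself.
```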
It is of course possible to construct agents that want to do X if and only if quantum many worlds is true. It is also possible to construct agents that do the same thing whether it's true or false (e.g. AlphaGo).
The answer to this question depends on which wave-function collapse theory you use. There are a bunch of quantum superposition experiments in which we can detect that no collapse is happening: if photons collapsed their superposition in the double-slit experiment, we wouldn't get an interference pattern. Collapse theories postulate circumstances, not yet probed by experiment, under which collapse happens. If you believe that quantum collapse only happens when 10^40 kg of mass is in a single coherent superposition, this belief has almost no effect on your predictions.
If you believe that you can't get 100 atoms into superposition, then you are wrong; current experiments have tested that. If you believe that collapse happens at the 1-gram level, then future experiments could test this. In short, there are collapse theories in which collapse is so rare that you will never spot it, theories in which collapse is so common that we would already have spotted it (so we know those theories are wrong), and theories in between. The in-between theories will make different predictions about future experiments; in particular, they will not expect large quantum computers to work.
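As a toy illustration of how the in-between theories become testable (a sketch assuming a simplified sharp mass-threshold collapse model; real collapse proposals such as GRW or CSL are parameterised by collapse rates instead, and the masses below are rough placeholders):

```python
# Toy mass-threshold collapse model: such a theory predicts interference
# only while the superposed mass stays below its threshold; many worlds
# predicts interference in every case.

def predicts_interference(mass_kg: float, threshold_kg: float) -> bool:
    return mass_kg < threshold_kg

photon   = 0.0      # massless: every theory here predicts interference
molecule = 4e-23    # ~25 kDa molecules, where interference has been observed
gram     = 1e-3

for threshold in (1e-25, 1e-3, 1e40):
    print(threshold, [predicts_interference(m, threshold)
                      for m in (photon, molecule, gram)])
# 1e-25 kg: already ruled out (wrongly predicts no molecular interference).
# 1e-3 kg:  an in-between theory, testable by future experiments.
# 1e40 kg:  indistinguishable from many worlds in any realisable experiment.
```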
Another difference is that current QFT doesn't contain gravity. In the search for a true theory of everything, many worlds and collapse might suggest different successors. This seems important for human understanding, though it wouldn't make a difference to an agent that could consider all possible theories.
↑ comment by mako yass (MakoYass) · 2019-12-16T05:43:04.427Z · LW(p) · GW(p)
There are some decision algorithms that would pay the £1 if and only if they believed in quantum many worlds
Go on then, which decision algorithms? Note, though: they do have to be plausible models of agency. I don't think it's going to be all that informative if a pointedly irrational model acts contingent on foundational theory when CDT and FDT don't.
Replies from: Gurkenglas, donald-hobson, Charlie Steiner
↑ comment by Gurkenglas · 2019-12-16T08:40:07.092Z · LW(p) · GW(p)
An agent might care about (and acausally cooperate with) all versions of himself that "exist". MWI posits more versions of himself. Imagine that he wants there to exist an artist like he could be, and a scientist like he could be - but the first 50% of universes that contain each are more important than the second 50%. Then in MWI, he could throw a quantum coin to decide what to dedicate himself to, while in CI this would sacrifice one of his dreams.
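A worked version of this trade-off with made-up numbers (the 10:1 value ratio between the first and second halves of measure is arbitrary):

```python
# Toy version of the concave-in-measure preference described above.
def dream_value(measure: float) -> float:
    """First 50% of measure containing a life is worth 10; the rest is worth 1."""
    return 10 * min(measure, 0.5) / 0.5 + 1 * max(measure - 0.5, 0.0) / 0.5

# Many worlds + quantum coin: both careers are realised, at measure 0.5 each.
mwi_coin = dream_value(0.5) + dream_value(0.5)   # 10 + 10 = 20

# Committing fully to one career (either interpretation): measure 1.0 on it.
commit = dream_value(1.0) + dream_value(0.0)     # 11 + 0 = 11

# Collapse + coin: only one career gets realised; in expectation no better
# than committing.
collapse_coin = 0.5 * (dream_value(1.0) + dream_value(0.0)) \
              + 0.5 * (dream_value(0.0) + dream_value(1.0))   # 11

print(mwi_coin, commit, collapse_coin)   # 20.0 11.0 11.0
```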
↑ comment by Donald Hobson (donald-hobson) · 2019-12-16T10:12:44.678Z · LW(p) · GW(p)
The agent first updates on the evidence that it has, and then takes logical counterfactuals over each possible action. This behaviour means that it only cooperates in Newcomblike situations with agents it believes actually exist. It will one-box in Newcomb's problem, and cooperate when playing against an identical duplicate of itself. However, it won't pay in logical counterfactual blackmail, or in any counterfactual blackmail accomplished with true randomness.
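A minimal sketch of how that behaviour makes the decision track the interpretation (one way to cash out the comment above, not a specification of any named decision theory): after seeing tails, the agent asks which branches still exist, then evaluates paying over those branches only.

```python
# Sketch: an agent that updates on its observation, then weighs only the
# branches it believes still exist. Under many worlds the heads branch is
# still real after seeing tails; under collapse it never happened.

def pay_after_seeing_tails(many_worlds: bool) -> bool:
    if many_worlds:
        # The heads-branch copy exists with equal measure and receives +100
        # iff Omega predicts that this very policy pays.
        value_of_paying = 0.5 * (-1) + 0.5 * 100   # 49.5
    else:
        # Only this tails world is real, so paying is a pure loss.
        value_of_paying = -1
    return value_of_paying > 0

print(pay_after_seeing_tails(True), pay_after_seeing_tails(False))  # True False
```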
↑ comment by Charlie Steiner · 2019-12-16T07:55:07.848Z · LW(p) · GW(p)
(I think this is a good chance for you to think of an answer yourself.)
11 comments
Comments sorted by top scores.
comment by Matthew Barnett (matthew-barnett) · 2019-12-15T01:18:08.729Z · LW(p) · GW(p)
Have you read Multiverse-wide Cooperation via Correlated Decision Making by chance?
Replies from: MakoYass, shminux
↑ comment by mako yass (MakoYass) · 2019-12-16T00:17:33.657Z · LW(p) · GW(p)
I'm not sure, it sounds very familiar, but I think it would have sounded very familiar to me before reading it or knowing of its existence. It sounds like the sorts of things I would already know.
People who think this way tend to converge on the same ideas. It's hard to tell whether thinking superrationally causes the convergence, or whether thinking in convergent ways causes a person to have more interest in superrationality, ~~or whether causality is involved at all~~
Replies from: matthew-barnett
↑ comment by Matthew Barnett (matthew-barnett) · 2019-12-16T00:47:39.754Z · LW(p) · GW(p)
It's hard to tell whether thinking superrationally causes the convergence, or whether thinking in convergent ways causes a person to have more interest in superrationality, or whether causality is involved at all
I recommend reading the paper on Functional Decision Theory, to get an intuition on what an answer to this might look like. I think the question you're interested in is whether we should think of our action as actually having an effect on observers in another universe (or world, in MWI). This might seem absurd if you have the intuition that you can only affect things that are causally dependent on your actions. But if you drop the assumption of causal dependence, you can say that their decision is subjunctively dependent on yours.
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2019-12-16T01:19:12.541Z · LW(p) · GW(p)
Sorry. That last bit about whether causality is involved at all was a little joke. It was bad. That wasn't really what I was pondering.
↑ comment by Shmi (shminux) · 2019-12-15T02:35:27.045Z · LW(p) · GW(p)
A short summary of the paper: "Don't be a dick."
comment by ioannes (ioannes_shade) · 2019-12-16T01:25:50.758Z · LW(p) · GW(p)
See also: If physics is many-worlds, does ethics matter? [LW · GW]
comment by mako yass (MakoYass) · 2019-12-16T06:08:39.657Z · LW(p) · GW(p)
I'm noticing a deeper impediment. Before we can imagine how a morality that is relatable to humans might care about the difference between MW and WC, we need to know how to extend the human morality we bear into the bizarre new territory of quantum physics. We don't even have a theory of how human morality extends into modernity; we definitely don't have an idealisation of how human morality should take to the future; and I'm asking for an idealisation of how it would take to something as unprecedented as... timelines popping in and out of existence, universes separated by uncrossable gulfs (how many times have you or your ancestors ever straddled an uncrossable gulf!)
It's going to be very hard to describe a believable agent that has come to care about this new, hidden, bizarre distinction when we don't know how we come to care about anything.
comment by Pattern · 2019-12-17T19:31:08.723Z · LW(p) · GW(p)
When would an agent do something different as a result of believing the many worlds theory?
That depends on their utility function.
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2019-12-17T22:17:26.697Z · LW(p) · GW(p)
Sure. The question, there, is whether we should expect there to be any powerful agents with utility functions that care about that.
Replies from: Pattern
↑ comment by Pattern · 2019-12-18T06:32:15.566Z · LW(p) · GW(p)
Would you buy a ticket for a quantum lottery, for immortality?
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2019-12-19T23:10:02.721Z · LW(p) · GW(p)
No. Measure decrease is bad enough to more than outweigh the utility of the winning timelines. I can imagine some very specific variants that are essentially a technology for assigning specialist workloads to different timelines, but I don't have enough physics to detail it, myself.