Can we, in principle, know the measure of counterfactual quantum branches?

post by sisyphus (benj) · 2022-12-18T22:07:56.736Z · LW · GW · 4 comments

This is a question post.


In the Many-Worlds Interpretation, the amplitude of the wave function is seen as describing the "measure of existence". We can tell the existence measure of potential future Everett branches, but can we, even in principle, know the measure of existence for counterfactual branches? E.g. the measure of existence of an Everett branch where WW2 never happened?
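For concreteness, the "measure of existence" of a branch is standardly taken to be the squared magnitude of that branch's amplitude (the Born rule). A minimal numpy sketch, with made-up amplitudes purely for illustration:

```python
import numpy as np

# Hypothetical state written in a "branch" basis; the two complex
# amplitudes are invented for illustration. The Born-rule measure
# of each branch is the squared magnitude of its amplitude.
amplitudes = np.array([0.6 + 0.0j, 0.0 + 0.8j])

measures = np.abs(amplitudes) ** 2
print(measures)        # [0.36 0.64]
print(measures.sum())  # ~1.0 -- measures across all branches sum to 1
```

The question in this post is then: we can compute these numbers forward from a known present state, but can we compute them for branches that split off in the past?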

Answers

answer by TAG · 2022-12-19T17:13:28.383Z · LW(p) · GW(p)

Why do you need to know? You can't do the standard physics thing of calculating probabilities and then confirming them experimentally, because you can't detect other decoherent branches.

On the other hand, the philosophical implications are huge.

There are broadly two areas where MWI has ethical implications. One is the fact that MW means low-probability events have to happen every time -- as opposed to single-universe physics, where they usually don't. The other is whether they are discounted in moral significance for being low in quantum mechanical measure or probability.

It can be argued that probability calculations come out the same under different interpretations of QM, but ethics is different. The difference stems from the fact that what other people experience is relevant to them, whereas for a probability calculation, I only need to be able to statistically predict my own observations. Using QM to predict my own observations, I can ignore the question of whether something has a ten percent chance of happening in the one and only world, or a certainty of happening in one tenth of possible worlds.

You can have objective information about observations, and if your probability calculus is wrong, you will get wrong results and know that you are getting wrong results. That is the negative feedback that allows physics to be less wrong.

You can have subjective information about your own mental states, and if your personal calculus is wrong, you will get wrong results and know that you are getting wrong results. That is the negative feedback that allows personal decision theory to be less wrong.

Altruistic ethics is different. You don't have either kind of direct evidence, because you are concerned with other people's subjective sensations, not objective evidence, or your own subjectivity. Questions about ethics are downstream of questions about qualia, and qualia are subjective, and because they are subjective, there is no reason to expect them to behave like third-person observations. We have to infer that someone else is suffering, and how much, using background assumptions. For instance, I assume that if you hit your thumb with a hammer, it hurts you like it hurts me when I hit my thumb with a hammer.

One can have a set of ethical axioms saying that I should avoid causing death and suffering to others, but to apply them under many-worlds assumptions, I need to be able to calculate how much death and suffering my choices cause in relation to the measure. Which means I need to know whether the measure or probability of a world makes a difference to the intensity of subjective experience, including the option of "not at all", and I need to know whether the deaths of ten people in a one-tenth-measure world count as ten deaths or one death.

Suppose they are discounted.

If people in low-measure worlds experience their suffering fully, then a 1% chance of creating a hell-world would be equivalent in suffering to a 100% chance. But if people in low-measure worlds are like philosophical zombies, with little or no phenomenal consciousness, so that their sensations are faint or nonexistent, the moral hazard is much lower.

A similar, but slightly less obvious argument applies to causing death. Causing the "death" of a complete zombie is presumably as morally culpable as causing the death of a character in a video game...which, by common consent, is no problem at all. So causing the death of a 50% zombie would be only half as bad as killing a real person...maybe.

There is an alternative way of cashing out quantum mechanical measure, due to David Deutsch. He supposes that you can have exact duplicates of worlds, which form sets of identical worlds, and which have no measure of their own. Each set contains worlds that are identical to one another, while different sets are distinct from each other. Counting the worlds in an equivalence set gives you the equivalent of measure.

Under this interpretation, you should ethically discount low-measure worlds (world sets) because there are fewer people in them.

The approach where suffering and moral worth are discounted according to measure has the convenient property that it works like probability theory, so that you don't have to take an interpretation of QM into account. But, if you are arguing to the conclusion that the interpretation of QM is ethically neutral, you can't inject the same claim as an assumption without incurring circularity.

What's needed is a way of deciding the issue of measure versus reality versus consciousness versus ethical value that is not question-begging. We are not going to get a rigorous theory based on physics alone, because physics does not explicitly deal with reality, consciousness or ethical value. But there are some hints. One important consideration is that under many-worlds theory, all worlds must have measures summing to 1, so any individual world, including ours, has measure less than 1. But our world and our consciousness seem fully real to us; if there is nothing special about our world, then the inhabitants of other worlds presumably feel the same. And that is just the premise that leads to the conclusion that many-worlds theory does impact ethics.

Of course, there is the metaphysical issue that MW is a deterministic theory, so there is a question about whether we can choose to behave differently at all. But MW is not the only deterministic theory; Newtonian physics had the same problem.

answer by JBlack · 2022-12-19T11:47:11.319Z · LW(p) · GW(p)

There are no actual branches in these interpretations, that's just a popular simplification that is taken way too seriously. Every part of the wavefunction contributes to the ongoing evolution of the wavefunction, continuously and unitarily.

One apparently nitpicky but actually very serious objection is that for large messy heavily decoherent criteria like "WW2 happened", we can't even hope to define in quantum terms what sort of measurement would correspond to an outcome "it did happen" or "it did not happen". It's on a massively separated level of abstraction from any quantum properties.

However from various macroscopic principles, we can be pretty sure that any such measurement would overwhelmingly have greater measure on "it didn't happen". The probability would have too many 9's to count after the decimal point, and the exact number would depend upon the exact details of what measurement is conceived of for determining whether "WW2 happened".

In the MWI interpretation, we[1] occupy an ill-defined microscopic blob within a microscopic dot on a microscopic dot in the continuous wavefunction. Despite that, the future of "WW2 happened" states is almost entirely determined by the present of "WW2 happened" states due to the near-perfect symmetry of decoherence. There is effectively zero net contribution from the vastly greater parts of the quantum wavefunction, leaving only a tiny coherent contribution from something approximating the classical past.

So the same property that allows us to have a classical-like past at all, almost certainly prevents us from measuring anything about the parts of the past that are decoherent from us - not counterfactual, because in MWI interpretations these things did happen - including things like "Earth never formed" or "humans never existed on Earth" or "human history was the same up until 1930 but WW2 never happened". To the extent that MWI is accurate, those things are still happening right here and right now. Not in some "parallel universe", but right here in ours, just undetectable. They're just so perfectly balanced in complex phase that - like the staggeringly immense positive and negative charges in ordinary matter - the net effect on us is neutrality.

  1. ^

    The meaning of "we" is also highly ill-defined from a quantum point of view, but should include all states that are somewhat compatible with given subjective experiences (such as memories of evidence that WW2 happened) and not just a single point in state space.

comment by Slider · 2022-12-19T15:49:08.169Z · LW(p) · GW(p)

To the extent that MWI applies then nothing is counterfactual so might as well use that as a synonym for "decoherent" to bridge the differences between ontologies.

The past is not exactly classical and to the extent it is "merely" classical-like that data extraction hope is possible.

The "WW2 never happened" portion would have its own classical-like past so calling it "right now" doesn't seem obviously proper. Sure the crosstalk parties make more sense to be existing on the same level rather than betwen a real and not real party. But like I would not count neutrinos passing through me as part of my body I would not count that "other side of the wavefunction" to be part of my immediate experience.

I do wonder: if somebody wanted to maximise the amount of crosstalk possible, what would be the limiting factors?

comment by TAG · 2022-12-19T17:21:19.537Z · LW(p) · GW(p)

There are no actual branches in these interpretations, that’s just a popular simplification that is taken way too seriously. Every part of the wavefunction contributes to the ongoing evolution of the wavefunction, continuously and unitarily.

But to very varying extents, so that decoherence can occur for all practical purposes.

answer by Viliam · 2022-12-19T08:15:47.899Z · LW(p) · GW(p)

To make the question more precise, you would need to define a starting point in the past. You could set it right after the end of WW1. Or you could set it before WW1, which means that the counterfactual branches without WW2 would also include those where neither WW1 nor WW2 happened.

If you set the starting point too far, like in the age of dinosaurs, the probability of "humans did not have WW2" is almost 100%, because almost certainly humans have never evolved.

But suppose you choose the starting point to be e.g. the year 1920.

There is a tiny nitpick that technically the past is also uncertain (although much less than the future, because... something something... entropy and arrow of time), so you get a probabilistic version of 1920. But for all practical purposes, the majority of its measure is probably what you would expect macroscopically, plus a lot of noise at the microscopic level.

Anyway, let's define "1920" as the best probabilistic distribution of 1920 we could reconstruct now.

Reconstructing the past is already technically difficult, because there are too many particles (the entire Earth? probably much more, because some observed astronomical events could have influenced something). So at this moment, this is a completely unrealistic "in principle" thought experiment.

But suppose that we have a gigantic magical machine that could dismantle the Solar system and measure each tiny particle in order to best figure out the answer to our question.

Then there is the problem of defining "WW2" in terms of positions and momenta of tiny particles. Calculating all possible combinations of particles, and evaluating whether they contribute to the "no WW2" set, would, I think, require more energy than is available in our universe.

But suppose that a magical hypercomputer from an alternative reality joins the project...

I guess, unless there are a few more technical problems that I forgot, it probably could be done.

But given the amount of magic we have assumed, it would probably be much simpler to assume that the world is a simulation, and ask the Masters of the Matrix nicely to restart the simulation 1000 times from the starting point of 1920, then return to us and show us the probability distribution.

comment by sisyphus (benj) · 2022-12-19T08:41:08.103Z · LW(p) · GW(p)

I get that doing something like this is basically impossible using any practical technology, but I just wanted to know if there was anything about it that was impossible in principle (e.g. not even an ASI could do it).

The main problem that I wanted to ask and get clarification on is whether or not we could know the measure of existence of branches that we cannot observe. The example I like to use is that it is possible to know where an electron "is" once we measure it, and then the wave function of the electron evolves according to the Schrodinger equation. The measure of existence of a future timeline where the electron is measured at a coordinate X is equal to the squared amplitude of the wave function at X after being evolved forward using the Schrodinger equation. But I am guessing that it is impossible to go backwards, in the sense of deducing the state of the wave function before the initial measurement was made using the measurement result (what was the amplitude of the wave function at Y before we measured the electron at Y)? Does that make sense?
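The asymmetry being asked about can be illustrated with a toy qubit: unitary evolution is reversible, but projective measurement is many-to-one, so the outcome alone cannot determine the pre-measurement amplitudes. A minimal sketch (the specific states and the Hadamard unitary are chosen purely for illustration):

```python
import numpy as np

# Hadamard-like unitary: reversible evolution (it is its own inverse).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

psi_a = np.array([1.0, 0.0])                    # one possible prior state
psi_b = np.array([1, 1]) / np.sqrt(2)           # a different prior state

# Unitary evolution can be undone: applying H twice recovers psi_a.
evolved = H @ psi_a
recovered = H @ evolved   # equals psi_a up to rounding

# Measurement outcome "0": project onto the first basis state, renormalise.
def collapse_to_zero(psi):
    projected = np.array([psi[0], 0.0])
    return projected / np.linalg.norm(projected)

# Two DISTINCT prior states collapse to the SAME post-measurement state,
# so the measurement result alone cannot tell you which one you had.
print(collapse_to_zero(psi_a))  # [1. 0.]
print(collapse_to_zero(psi_b))  # [1. 0.]
```

This is only a sketch of why "evolving backwards from the measurement" fails as an inference, not a claim about what a more powerful agent could reconstruct from environmental records.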

answer by Shmi (shminux) · 2022-12-19T07:38:15.351Z · LW(p) · GW(p)

That's not how it works. That's not how any of it works.

MWI does not have a prescription for macroscopic probability calculations; it is still an open problem that Sean Carroll and other MWI proponents are working on.

It pays not to take the branching idea literally; it is just an analogy for popular consumption, like Hawking's particle/antiparticle analogy for thermal radiation from event horizons. The transition from "everything is a unitary evolution of a Hilbert-space state with no other processes involved" to "the macroscopic structure of the universe looks like branching" is very much iffy and not science-based.

All we can say confidently is that the only prescription that can work for probability calculations of a microscopic quantum system interacting with a macroscopic classical system and is compatible with unitary evolution is the Born rule. This statement says absolutely nothing about world branching on macroscopic scales. This applies equally to the "future" and "counterfactual" branches.

comment by sisyphus (benj) · 2022-12-19T08:33:40.594Z · LW(p) · GW(p)

I thought David Deutsch had already worked out a proof of the Born rule using decision theory? I guess it does not explain objective probability, but as far as I know the question of what probability even means is very vague.

I know that the branching is just a metaphor for the human brain to understand MWI better, but the main question I wanted to ask is whether or not you can know the amplitude of different timelines that "diverged" a long time ago. E.g. it is possible to know where an electron "is" once we measure it, and then the wave function of the electron evolves according to the Schrodinger equation. The measure of existence of a future timeline where the electron is measured at X is equal to the squared amplitude of the wave function at X after being evolved forward using the Schrodinger equation. But I am guessing that it is impossible to go backwards, in the sense of deducing the state of the wave function before a measurement is made using the measurement result (what was the amplitude of the wave function at X before we measured the electron at X)? Does that make sense?

comment by Shmi (shminux) · 2022-12-19T20:06:19.031Z · LW(p) · GW(p)

You are right, (apparent) collapse is not reversible, and there is no known way to figure out the pre-collapsed quantum state, and so there is no state to apply the Born rule to. This statement makes sense when discussing the evolution of quantum systems, not classical systems though.

answer by Slider · 2022-12-19T02:43:06.421Z · LW(p) · GW(p)

Counterfactual computation is a thing. I don't know the requirements that well, but it suggests that the information could be tortured out under some circumstances.

It is one way to think about getting around Bell limits. It is not a local hidden variable if your tiebreaker data lies in a parallel timeline.

In this scenario [LW · GW] one can think of being in a "world" C where you have one photon. Then if C is all that exists in the multiverse, the mirror D splits it. But if the detector E never receives anything, you can deduce that the photon in the "parallel timeline" B is an entity that you need to keep track of. In this scenario we know to keep track of B because we knew that A existed, justifying keeping track of C and B. Decoherence makes this a bit hard to think about: any memory-former that interacts with only C can't see B, and any memory-former that interacts with both can't be sure whether C or B is the case.

answer by Dacyn · 2022-12-19T19:51:44.971Z · LW(p) · GW(p)

I think quantum mechanics and the MWI are a red herring here. The real question is whether you can compute the probability of counterfactual events like WWII not happening -- and as Viliam says, the answer to that question is that it depends on choosing a starting point in the past to diverge from. Of course, once you choose such a point, actually estimating the probability of WWII not happening is an exercise in historical reasoning, not quantum physics.

4 comments


comment by DialecticEel · 2022-12-18T22:38:11.792Z · LW(p) · GW(p)

An interesting point here is that when talking about future branches, I think you mean that they are probabilities conditioned on the present. However, as a pure measure of existence, I don't see why it would need to be conditioned on the present at all. The other question is then: what would count as WW2? A planetary conflict that occurred after another planetary conflict? A conflict called World War 2?

Perhaps you are talking about branches conditioned on a specific point in the past, i.e. the end of WW1 as it happened in our past. In which case, I don't see why you couldn't estimate those probabilities, though it's a super complex and chaotic system you would be applying estimates to, and therefore best taken with a pinch of Bayesian salt imo.

comment by sisyphus (benj) · 2022-12-18T22:56:56.078Z · LW(p) · GW(p)

Yeah, my main question is: can we, even in principle, estimate the pure measure of existence of branches which diverged from our current branch? We can know the probabilities conditioned on the present, but I don't see how we can work backwards to estimate the probabilities of a past event not occurring. Just as a wavefunction can be evolved forward after a measurement, but cannot be evolved backwards from the measurement itself to deduce the probability of obtaining that measurement. Or can we?

I mainly picked "world where WW2 did not happen" to illustrate what I mean by counterfactual branches, in the sense that it has already diverged from us and is not in our future.

comment by Slider · 2022-12-19T19:15:38.698Z · LW(p) · GW(p)

In arrow-of-time discussions, quantum theory is on the level that does not prefer one direction. For example, for an electron, the future question would be "where is the electron going (at a future time, in which position is there an electron)?" and the past question would be "where did the electron come from (at a past time, in which position was there an electron)?". That the electron is here, and that an electron happened, is going to stay fixed.

"uncollapsing" is probably mathematically sensible. Take the past superposition then forget where we found the electron and project forward paths from each of the past positions up to present. Those are the electron positions which are past-compatible with our found electron. Analysis of choice erasure experiments probably runs the same maths. If you do not know the source there is probably no other consistent position than the actual one (because deterministic theory). If you have a reason to know the electron came from a particular source point then the destination is going to fan out.

It seems to me that if you sum up the spreads from knowing the source was in each position, that is a different spread than from not knowing at all: the spread of one point. So a true superposition behaves differently from mere uncertainty about a non-superposition source. In this deduction I am losing a complex phase by treating "where the electron could have been from this source" as a real field, out of which I sum up a new real field. Would keeping the phases end up agreeing that only the detected position was possible? Without being able to run the complex math in my head, an indirect argument that it does: a deterministic outcome evolving to a stochastic outcome, T-symmetry reversed, means there is a process that turns a stochastic state into a deterministic state. Which means it can't really be that stochastic at all if it can be unscrambled. So any interpretation that insists there is a single classical underlying reality, and the rest is just all epistemics, is going to run into bookkeeping trouble explaining these "merger" processes. So the complex-valuedness is connected to the fact that it is not dice-playing at all.
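The claim above, that summing real "spreads" while dropping the complex phase gives a different answer than keeping the phases, can be checked numerically in a two-path toy model (the amplitudes and phases below are invented for illustration):

```python
import numpy as np

# Two paths to the same detection point, equal weight, opposite phase.
amp1 = (1 / np.sqrt(2)) * np.exp(1j * 0.0)     # path 1, phase 0
amp2 = (1 / np.sqrt(2)) * np.exp(1j * np.pi)   # path 2, phase pi

# Quantum: add complex amplitudes FIRST, then square -- phases matter.
quantum_prob = abs(amp1 + amp2) ** 2

# "Real field" bookkeeping: square first, then add -- phases are lost.
classical_prob = abs(amp1) ** 2 + abs(amp2) ** 2

print(round(quantum_prob, 6))    # 0.0 -- destructive interference
print(round(classical_prob, 6))  # 1.0 -- no cancellation
```

So a superposition of sources really is distinguishable (in its downstream statistics) from classical ignorance about a single source, which is the bookkeeping trouble the comment points at.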

comment by DialecticEel · 2022-12-19T15:43:17.329Z · LW(p) · GW(p)

Hmm, I mean when we are talking about these kinds of counterfactuals, we obviously aren't working with the wavefunction directly, but that's an interesting point. Do you have a link to any writings on that specifically?

We can perform counterfactual reasoning about the result of a double slit experiment, including predicting the wavefunction, but perhaps that isn't quite what you mean.