Many Worlds, One Best Guess
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-11T08:32:18.000Z · LW · GW · Legacy · 80 comments
If you look at many microscopic physical phenomena—a photon, an electron, a hydrogen atom, a laser—and a million other known experimental setups—it is possible to come up with simple laws that seem to govern all small things (so long as you don’t ask about gravity). These laws govern the evolution of a highly abstract and mathematical object that I’ve been calling the “amplitude distribution,” but which is more widely referred to as the “wavefunction.”
Now there are gruesome questions about the proper generalization that covers all these tiny cases. Call an object “grue” if it appears green before January 1, 2020 and appears blue thereafter. If all emeralds examined so far have appeared green, is the proper generalization, “Emeralds are green” or “Emeralds are grue”?
The answer is that the proper generalization is “Emeralds are green.” I’m not going to go into the arguments at the moment. It is not the subject of this essay, and the obvious answer in this case happens to be correct [LW · GW]. The true Way is not stupid [LW · GW]: however clever you may be with your logic, it should finally arrive at the right answer rather than a wrong one.
In a similar sense, the simplest generalizations that would cover observed microscopic phenomena alone take the form of “All electrons have spin 1/2” and not “All electrons have spin 1/2 before January 1, 2020” or “All electrons have spin 1/2 unless they are part of an entangled system that weighs more than 1 gram.”
When we turn our attention to macroscopic phenomena, our sight is obscured. We cannot experiment on the wavefunction of a human in the way that we can experiment on the wavefunction of a hydrogen atom. In no case can you actually read off the wavefunction with a little quantum scanner. But in the case of, say, a human, the size of the entire organism defeats our ability to perform precise calculations or precise experiments—we cannot confirm that the quantum equations are being obeyed in precise detail.
We know that phenomena commonly thought of as “quantum” do not just disappear when many microscopic objects are aggregated. Lasers put out a flood of coherent photons, rather than, say, doing something completely different. Atoms have the chemical characteristics that quantum theory says they should, enabling them to aggregate into the stable molecules making up a human.
So in one sense, we have a great deal of evidence that quantum laws are aggregating to the macroscopic level without too much difference. Bulk chemistry still works.
But we cannot directly verify that the particles making up a human have an aggregate wavefunction that behaves exactly the way the simplest quantum laws say. Oh, we know that molecules and atoms don’t disintegrate, we know that macroscopic mirrors still reflect from the middle. We can get many high-level predictions from the assumption that the microscopic and the macroscopic are governed by the same laws, and every prediction tested has come true.
But if someone were to claim that the macroscopic quantum picture differs from the microscopic one in some as-yet-untestable detail—something that only shows up at the unmeasurable 20th decimal place of microscopic interactions, but aggregates into something bigger for macroscopic interactions—well, we can’t prove they’re wrong. It is Occam’s Razor [LW · GW] that says, “There are zillions of new fundamental laws you could postulate in the 20th decimal place; why are you even thinking about this one [? · GW]?”
If we calculate using the simplest laws which govern all known cases, we find that humans end up in states of quantum superposition, just like photons in a superposition of reflecting from and passing through a half-silvered mirror [? · GW]. In the Schrödinger’s Cat setup, an unstable atom goes into a superposition of disintegrating, and not-disintegrating. A sensor, tuned to the atom, goes into a superposition of triggering and not-triggering. (Actually, the superposition is now a joint [? · GW] state of [atom-disintegrated × sensor-triggered] + [atom-stable × sensor-not-triggered].) A charge of explosives, hooked up to the sensor, goes into a superposition of exploding and not exploding; a cat in the box goes into a superposition of being dead and alive; and a human, looking inside the box, goes into a superposition of throwing up and being calm. The same law at all levels.
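For concreteness, here is a minimal toy calculation of that joint state, with illustrative equal amplitudes for the decayed and stable atom (the real amplitudes would depend on the half-life and the waiting time). The point is only that the interaction produces the entangled sum above, not a definite outcome.

```python
import numpy as np

# Toy model: each subsystem has two basis states.
# Atom:   index 0 = disintegrated, index 1 = stable.
# Sensor: index 0 = triggered,     index 1 = not triggered.
atom = np.array([1.0, 1.0]) / np.sqrt(2)   # illustrative equal superposition of decayed/stable
sensor = np.array([0.0, 1.0])              # sensor starts definitely not triggered

initial = np.kron(atom, sensor)            # joint state before the interaction

# The sensor "measures" the atom: flip the sensor if and only if the atom disintegrated.
# On the 4-dimensional joint space this is a permutation (CNOT-style) unitary.
interaction = np.array([
    [0, 1, 0, 0],   # |disintegrated, not-triggered> -> |disintegrated, triggered>
    [1, 0, 0, 0],   # |disintegrated, triggered>     -> |disintegrated, not-triggered>
    [0, 0, 1, 0],   # |stable, *> unchanged
    [0, 0, 0, 1],
])

final = interaction @ initial
# Amplitude lands entirely on [atom-disintegrated x sensor-triggered] and
# [atom-stable x sensor-not-triggered]; the cross terms are zero.
print(final)   # [0.707..., 0, 0, 0.707...]
```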
Human beings who interact with superposed systems will themselves evolve into superpositions. But the brain that sees the exploded cat, and the brain that sees the living cat, will have many neurons firing differently, and hence many many particles in different positions. They are very distant in the configuration space, and will communicate to an exponentially infinitesimal degree. Not the 30th decimal place, but the 10^30th decimal place. No particular mind, no particular cognitive causal process, sees a blurry superposition of cats.
The fact that “you” only seem to see the cat alive, or the cat dead, is exactly what the simplest quantum laws predict. So we have no reason to believe, from our experience so far, that the quantum laws are in any way different at the macroscopic level than the microscopic level.
And physicists have verified superposition at steadily larger levels. Apparently an effort is currently underway to test superposition in a 50-micron object, larger than most neurons.
The existence of other versions of ourselves, and indeed other Earths, is not supposed additionally. We are simply supposing that the same laws govern at all levels, having no reason to suppose differently, and all experimental tests having succeeded so far. The existence of other decoherent Earths is a logical consequence of the simplest generalization that fits all known facts. If you think that Occam’s Razor says that the other worlds are “unnecessary entities” being multiplied, then you should check the probability-theoretic math; that is just not how Occam’s Razor works [? · GW].
Yet there is one particular puzzle that seems odd in trying to extend microscopic laws universally, including to superposed humans:
If we try to get probabilities by counting the number of distinct observers, then there is no obvious reason why the integrated squared modulus of the wavefunction should correlate with statistical experimental results. There is no known reason for the Born probabilities, and it even seems that, a priori, we would expect a 50/50 probability of any binary quantum experiment going both ways, if we just counted observers.
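A minimal numerical sketch of the mismatch (the 1/4 and 3/4 weights are illustrative, not taken from any particular experiment): counting the two decoherent branches as equally weighted observers predicts 50/50, while weighting by the squared modulus of the amplitude reproduces the statistics that experiments actually report.

```python
import numpy as np

# Illustrative binary quantum experiment with unequal amplitudes.
amp = np.array([0.5, np.sqrt(0.75)])     # squared moduli are 0.25 and 0.75

print("naive branch counting:", [0.5, 0.5])              # one branch per outcome, counted equally
born = np.abs(amp) ** 2
print("Born rule:            ", born.round(3).tolist())  # [0.25, 0.75]

# Simulated long-run frequencies track the squared modulus, not the branch count.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=100_000, p=born)
print("simulated frequency of outcome 1:", outcomes.mean())   # ~0.75
```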
Robin Hanson suggests that if exponentially tinier-than-average decoherent blobs of amplitude (“worlds”) are interfered with by exponentially tiny leakages from larger blobs, we will get the Born probabilities back out. I consider this an interesting possibility, because it is so normal.
(I myself have had recent thoughts along a different track: If I try to count observers the obvious way, I get strange-seeming results in general, not just in the case of quantum physics. If, for example, I split my brain into a trillion similar parts, conditional on winning the lottery while anesthetized; allow my selves to wake up and perhaps differ to small degrees from each other; and then merge them all into one self again; then counting observers the obvious way says I should be able to make myself win the lottery (if I can split my brain and merge it, as an uploaded mind might be able to do).
In this connection, I find it very interesting that the Born rule does not have a split-remerge problem. Given unitary quantum physics, Born’s rule is the unique rule that prevents “observers” from having psychic powers—which doesn’t explain Born’s rule, but is certainly an interesting fact. Given Born’s rule, even splitting and remerging worlds would still lead to consistent probabilities. Maybe physics uses better anthropics than I do!
Perhaps I should take my cues from physics, instead of trying to reason it out a priori, and see where that leads me? But I have not been led anywhere yet, so this is hardly an “answer.”)
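To make the parenthetical arithmetic explicit, here is the naive observer-counting calculation, using the trillion-way split described above and an assumed, purely illustrative lottery probability of 10^-8:

```python
p_win = 1e-8        # illustrative lottery odds (assumed for this sketch)
n_split = 1e12      # trillion-way split, conditional on winning, as described above

# Naive observer-counting: every post-split copy counts as one observer.
observer_count_prob = (n_split * p_win) / (n_split * p_win + (1 - p_win))
print(f"observer-counting P(find myself a winner) ~= {observer_count_prob:.4f}")  # ~0.9999

# Born-style weighting: the squared-modulus weight of the winning branch is
# untouched by splitting and remerging, so it stays at p_win.
print(f"Born-weighted probability stays at {p_win}")
```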
Wallace, Deutsch, and others try to derive Born’s Rule from decision theory. I am rather suspicious of this, because it seems like there is a component of “What happens to me?” that I cannot alter by modifying my utility function. Even if I didn’t care at all about worlds where I didn’t win a quantum lottery, it still seems to me that there is a sense in which I would “mostly” wake up in a world where I didn’t win the lottery. It is this that I think needs explaining.
The point is that many hypotheses about the Born probabilities have been proposed. Not as many as there should be, because the mystery was falsely marked “solved” [LW · GW] for a long time. But still, there have been many proposals.
There is legitimate hope of a solution to the Born puzzle without new fundamental laws. Your world does not split into exactly two new subprocesses on the exact occasion when you see “absorbed” or “transmitted” on the LCD screen of a photon sensor. We are constantly being superposed and decohered, all the time, sometimes along continuous dimensions—though brains are digital and involve whole neurons firing, and fire/not-fire would be an extremely decoherent state even of a single neuron… There would seem to be room for something unexpected to account for the Born statistics—a better understanding of the anthropic weight of observers, or a better understanding of the brain’s superpositions—without new fundamentals.
We cannot rule out, though, the possibility that a new fundamental law is involved in the Born statistics.
If there’s one lesson we can take from the history of physics, it’s that every time new experimental “regimes” are probed (e.g. large velocities, small sizes, large mass densities, large energies), phenomena are observed which lead to new theories (Special Relativity, quantum mechanics, General Relativity, and the Standard Model, respectively).
“Every time” is too strong. A nitpick, yes, but also an important point: you can’t just assume that any particular law will fail in a new regime. But it’s possible that a new fundamental law is involved in the Born statistics, and that this law manifests only in the 20th decimal place at microscopic levels (hence being undetectable so far) while aggregating to have substantial effects at macroscopic levels.
Could there be some law, as yet undiscovered, that causes there to be only one world?
This is a shocking notion; it implies that all our twins in the other worlds— all the different versions of ourselves that are constantly split off, not just by human researchers doing quantum measurements, but by ordinary entropic processes—are actually gone, leaving us alone! This version of Earth would be the only version that exists in local space! If the inflationary scenario in cosmology turns out to be wrong, and the topology of the universe is both finite and relatively small—so that Earth does not have the distant duplicates that would be implied by an exponentially vast universe—then this Earth could be the only Earth that exists anywhere, a rather unnerving thought!
But it is dangerous to focus too much on specific hypotheses that you have no specific reason to think about. This is the same root error of the Intelligent Design folk, who pick any random puzzle in modern genetics, and say, “See, God must have done it!” Why “God,” rather than a zillion other possible explanations?—which you would have thought of long before you postulated divine intervention, if not for the fact that you secretly started out already knowing [LW · GW] the answer you wanted to find [LW · GW].
You shouldn’t even ask, “Might there only be one world?” but instead just go ahead and do physics, and raise that particular issue only if new evidence demands it.
Could there be some as-yet-unknown fundamental law, that gives the universe a privileged center, which happens to coincide with Earth—thus proving that Copernicus was wrong all along, and the Bible right?
Asking that particular question—rather than a zillion other questions in which the center of the universe is Proxima Centauri, or the universe turns out to have a favorite pizza topping and it is pepperoni—betrays your hidden agenda. And though an unenlightened one might not realize it, giving the universe a privileged center that follows Earth around through space would be rather difficult to do with any mathematically simple fundamental law.
So too with asking whether there might be only one world. It betrays a sentimental attachment to human intuitions already proven wrong. The wheel of science turns, but it doesn’t turn backward.
We have specific reasons to be highly suspicious of the notion of only one world. The notion of “one world” exists on a higher level of organization, like the location of Earth in space; on the quantum level there are no firm boundaries (though brains that differ by entire neurons firing are certainly decoherent). How would a fundamental physical law identify one high-level world?
Much worse, any physical scenario in which there was a single surviving world, so that any measurement had only a single outcome, would violate Special Relativity.
If the same laws are true at all levels—i.e., if many-worlds is correct—then when you measure one of a pair of entangled polarized photons, you end up in a world in which the photon is polarized, say, up-down, and alternate versions of you end up in worlds where the photon is polarized left-right. From your perspective before doing the measurement, the probabilities are 50/50. Light-years away, someone measures the other photon at a 20° angle to your own basis. From their perspective, too, the probability of getting either immediate result is 50/50—they maintain an invariant state of generalized entanglement with your faraway location, no matter what you do. But when the two of you meet, years later, your probability of meeting a friend who got the same result is 11.6%, rather than 50%.
If there is only one global world, then there is only a single outcome of any quantum measurement. Either you measure the photon polarized up-down, or left-right, but not both. Light-years away, someone else’s probability of measuring the photon polarized similarly in a 20° rotated basis actually changes from 50/50 to 11.6%.
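A short numerical check of that figure, assuming an anti-correlated (singlet-like) polarization pair, which is one natural reading of the setup; on that assumption the probability of the two parties getting the same result at a relative analyzer angle θ is sin²θ:

```python
import numpy as np

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
state = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)   # anti-correlated polarization pair

def basis(theta):
    """Polarization measurement basis rotated by theta (radians)."""
    return (np.array([np.cos(theta), np.sin(theta)]),
            np.array([-np.sin(theta), np.cos(theta)]))

a0, a1 = basis(0.0)                 # your analyzer
b0, b1 = basis(np.radians(20.0))    # the distant analyzer, rotated 20 degrees

def prob(a, b):
    """Born probability of the joint outcome |a> for you and |b> for them."""
    return abs(np.kron(a, b) @ state) ** 2

p_same = prob(a0, b0) + prob(a1, b1)
print(f"P(same result) = {p_same:.3f}")   # ~0.117, i.e. roughly the 11.6% quoted above
```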
You cannot possibly interpret this as a case of merely revealing properties that were already there; this is ruled out by Bell’s Theorem. There does not seem to be any possible consistent view of the universe in which both quantum measurements have a single outcome, and yet both measurements are predetermined, neither influencing the other. Something has to actually change, faster than light.
And this would appear to be a fully general objection, not just to collapse theories [? · GW], but to any possible theory that gives us one global world! There is no consistent view in which measurements have single outcomes, but are locally determined (even locally randomly determined). Some mysterious influence has to cross a spacelike gap.
This is not a trivial matter. You cannot save yourself by waving your hands and saying, “the influence travels backward in time to the entangled photons’ creation, then forward in time to the other photon, so it never actually crosses a spacelike gap.” (This view has been seriously put forth, which gives you some idea of the magnitude of the paradox implied by one global world!) One measurement has to change the other, so which measurement happens first? Is there a global space of simultaneity? You can’t have both measurements happen “first” because under Bell’s Theorem, there’s no way local information could account for observed results, etc.
Incidentally, this experiment has already been performed, and if there is a mysterious influence it would have to travel six million times as fast as light in the reference frame of the Swiss Alps. Also, the mysterious influence has been experimentally shown not to care if the two photons are measured in reference frames which would cause each measurement to occur “before the other.”
Special Relativity seems counterintuitive to us humans—like an arbitrary speed limit, which you could get around by going backward in time, and then forward again. A law you could escape prosecution for violating, if you managed to hide your crime from the authorities.
But what Special Relativity really says is that human intuitions about space and time are simply wrong. There is no global “now,” there is no “before” or “after” across spacelike gaps. The ability to visualize a single global world, even in principle, comes from not getting Special Relativity on a gut level. Otherwise it would be obvious that physics proceeds locally with invariant states of distant entanglement, and the requisite information is simply not locally present to support a globally single world.
It might be that this seemingly impeccable logic is flawed—that my application of Bell’s Theorem and relativity to rule out any single global world contains some hidden assumption of which I am unaware—
—but consider the burden that a single-world theory must now shoulder! There is absolutely no reason in the first place to suspect a global single world; this is just not what current physics says! The global single world is an ancient human intuition that was disproved, like the idea of a universal absolute time. The superposition principle is visible even in half-silvered mirrors; experiments are verifying the disproof at steadily larger levels of superposition—but above all there is no longer any reason to privilege the hypothesis of a global single world. The ladder has been yanked out from underneath that human intuition.
There is no experimental evidence that the macroscopic world is single (we already know the microscopic world is superposed). And the prospect necessarily either violates Special Relativity, or takes an even more miraculous-seeming leap and violates seemingly impeccable logic. The latter, of course, being much more plausible in practice. But it isn’t really that plausible in an absolute sense. Without experimental evidence, it is generally a bad sign to have to postulate arbitrary logical miracles.
As for quantum non-realism [LW · GW], it appears to me to be nothing more than a Get Out of Jail Free card. “It’s okay to violate Special Relativity because none of this is real anyway!” The equations cannot reasonably be hypothesized to deliver such excellent predictions for literally no reason. Bell’s Theorem rules out the obvious possibility that quantum theory represents imperfect knowledge of something locally deterministic.
Furthermore, macroscopic decoherence gives us a perfectly realistic understanding of what is going on, in which the equations deliver such good predictions because they mirror reality. And so the idea that the quantum equations are just “meaningless,” and therefore it is okay to violate Special Relativity, so we can have one global world after all, is not necessary. To me, quantum non-realism appears to be a huge bluff built around semantic stopsigns [LW · GW] like “Meaningless!”
It is not quite safe to say that the existence of multiple Earths is as well-established as any other truth of science [LW · GW]. The existence of quantum other worlds is not so well-established as the existence of trees, which most of us can personally observe.
Maybe there is something in that 20th decimal place, which aggregates to something bigger in macroscopic events. Maybe there’s a loophole in the seemingly iron logic which says that any single global world must violate Special Relativity, because the information to support a single global world is not locally available. And maybe the Flying Spaghetti Monster is just messing with us, and the world we know is a lie.
So all we can say about the existence of multiple Earths, is that it is as rationally probable as e.g. the statement that spinning black holes do not violate conservation of angular momentum. We have extremely fundamental reasons, having to do with the rotational symmetry of space, to suspect that conservation of angular momentum is built into the underlying nature of physics. And we have no specific reason to suspect this particular violation of our old generalizations in a higher-energy regime.
But we haven’t actually checked conservation of angular momentum for rotating black holes—so far as I know. (And as I am talking here about rational guesses in states of partial knowledge, the point is exactly the same if the observation has been made and I do not know it yet.) And black holes are a more massive regime. So the obedience of black holes is not quite as assured as that my toilet conserves angular momentum while flushing, which, come to think, I haven’t checked either…
Yet if you make the mistake of thinking too hard about this one particular possibility [LW · GW], instead of zillions of other possibilities—and especially if you don’t understand the fundamental reason why angular momentum is conserved— then it may start seeming more and more plausible that “spinning black holes violate conservation of angular momentum,” as you think of more and more vaguely plausible-sounding reasons it could be true.
But the rational probability is pretty damned small.
Likewise the rational probability that there is only one Earth.
I mention this to explain my habit of talking as if many-worlds is an obvious fact. Many-worlds is an obvious fact, if you have all your marbles lined up correctly (understand very basic quantum physics, know the formal probability theory of Occam’s Razor, understand Special Relativity, etc.) It is in fact considerably more obvious to me than the proposition that spinning black holes should obey conservation of angular momentum.
The only reason why many-worlds is not universally acknowledged as a direct prediction of physics which requires magic to violate, is that a contingent accident of our Earth’s scientific history [LW · GW] gave an entrenched academic position to a phlogiston-like theory that had an unobservable faster-than-light magical “collapse” devouring all other worlds. And many academic physicists do not have a mathematical grasp of Occam’s Razor, which is the usual method for ridding physics of invisible angels. So when they encounter many-worlds and it conflicts with their (undermined [LW · GW]) intuition that only one world exists, they say, “Oh, that’s multiplying entities”—which is just flatly wrong as probability theory—and go on about their daily lives.
I am not in academia. I am not constrained to bow and scrape to some senior physicist who hasn’t grasped the obvious, but who will be reviewing my journal articles. I need have no fear that I will be rejected for tenure on account of scaring my students with “science-fiction tales of other Earths.” If I can’t speak plainly, who can?
So let me state then, very clearly, on behalf of any and all physicists out there who dare not say it themselves: Many-worlds wins outright given our current state of evidence. There is no more reason to postulate a single Earth, than there is to postulate that two colliding top quarks would decay in a way that violates Conservation of Energy. It takes more than an unknown fundamental law; it takes magic.
The debate should already be over. It should have been over fifty years ago. The state of evidence is too lopsided to justify further argument. There is no balance in this issue. There is no rational controversy to teach. The laws of probability theory are laws, not suggestions [LW · GW]; there is no flexibility in the best guess given this evidence. Our children will look back at the fact that we were still arguing about this in the early twenty-first century, and correctly deduce that we were nuts.
We have embarrassed our Earth long enough by failing to see the obvious. So for the honor of my Earth, I write as if the existence of many-worlds were an established fact, because it is. The only question now is how long it will take for the people of this world to update.
80 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by mitchell_porter2 · 2008-05-11T11:40:22.000Z · LW(p) · GW(p)
False for three reasons.
First: The Born probabilities. That is where all the predictive power of quantum theory is located. If you don't have those, you just have a qualitative world-picture, one of many possibilities.
Second: There is no continuity of identity in time of a world, as I suppose we shall see in the Julian Barbour instalment; nothing to relate the worlds extracted from the wavefunction in one moment to those extracted in the next, nothing to say 'this world is the continuation of that one'. The denial of continuity in time is a radical step and should be recognized as such.
Third: If you favor the position basis, then as things stand, you have to talk about instantaneous spacelike states of the whole universe, i.e. there is a conceptually (though not dynamically) special reference frame. You are free to say 'maybe we can do it differently, in a way that's more relativistic', but for now that's just a hope.
For all these reasons, many worlds is not obviously the leading contender.
comment by mitchell_porter2 · 2008-05-11T11:53:07.000Z · LW(p) · GW(p)
I suppose the basic intuition here is, "Superposition is real for small things, we have no evidence that it breaks down for large things, and superposition means multiple instances of the thing superposed; therefore, many worlds, not just many electrons."
But is it clear that superposition means multiple instances of the thing superposed? Consider the temporal zigzag interpretations. There it is supposed that there is only one history between first and final observed event, and that the amplitudes are just the appropriate form of probabilities, not signs of multiple coexisting actualities. The temporal zigzag theorists cannot yet rigorously show that this is so; but the many worlds people cannot show that they get the right probabilities either. Therefore, even at the level of the individual quantum process, there is no evidence to favor the interpretation of superposition as denoting multiple actuality rather than multiple possibility.
comment by mitchell_porter2 · 2008-05-11T12:28:34.000Z · LW(p) · GW(p)
Eliezer asked (of zigzag theories): "One measurement has to change the other, so which measurement happens first?"
It doesn't have to be that way. Events can be determined through a combination of local causality and global consistency; see the work on attempts to create time travel paradoxes using wormholes. For example, you may set things up so that a sphere, sent into one end of a wormhole, should emerge from the other in such a way as to collide with itself on the way in, thereby preventing its entry. It sounds like a grandfather paradox: what's the answer? The answer is that only nonparadoxical histories are even possible; such as those in which the sphere emerges and perturbs its prior course, but not by so much as to prevent its entry into the wormhole.
The harmony of distant outcomes in an EPR experiment may similarly be due to a global consistency.
Ideally, in order to apply the description-length version of Occam's razor to competing and wildly dissimilar theories, such as we have in these attempts to explain quantum mechanics, one would first take the rival theories, embed them in a common superfamily of possible theories, deploy some prior across that superfamily, and then condition on experimental results. However, neither many worlds nor temporal zigzag is even capable of reproducing experimental results, so long as they cannot derive the Born probabilities. There are two types of realist theories which are experimentally adequate: stochastic objective collapse theories (e.g. Ghirardi-Rimini-Weber), and deterministic nonlocal hidden-variable theories (e.g. Bohm). In theory, if we're trying to figure out our best current guess, we have to choose between those two! In practice, it seems obvious that theoretical pluralism is still called for, and that much more work needs to be done by the advocates of interpretations which remain qualitative but could become quantitative.
comment by Recovering_irrationalist · 2008-05-11T15:18:22.000Z · LW(p) · GW(p)
Eliezer, continued compliments on your series. As a wise man once said, it's remarkable how clear explanations can become when an expert's trying to persuade you of something, instead of just explaining it. But are you sure you're giving appropriate attention to rationally stronger alternatives to MWI, rather than academically popular but daft ones?
comment by Günther_Greindl · 2008-05-11T15:34:45.000Z · LW(p) · GW(p)
Mitchell,
there is another argument speaking for many-worlds (indeed, even for all possible worlds - which raises new interesting questions of what is possible of course - certainly not everything that is imaginable): that to specify one universe with many random events requires lots of information, while if everything exists the information content is zero - which fits nicely with ex nihilo nihil fit :-)
Structure and concreteness only emerges from the inside view, which gives the picture of a single world. Max Tegmark has paraphrased this idea nicely with the quip "many words or many worlds" (words standing for high information content).
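A rough algorithmic-information sketch of that point (the billion-outcome figure is arbitrary): pinning down one particular history of random outcomes costs roughly one bit per outcome, while "every branch exists" is a constant-length rule whose description does not grow with the number of outcomes.

```python
n_outcomes = 10**9                       # arbitrary illustrative number of binary quantum events

# To single out ONE specific history, you must spell out every outcome.
bits_for_one_history = n_outcomes        # ~10^9 bits of incompressible detail

# To assert that ALL branches exist, a short fixed rule suffices, regardless of n.
rule = "every length-n binary outcome sequence occurs in some branch"
bits_for_all_histories = 8 * len(rule)   # a few hundred bits

print(bits_for_one_history, bits_for_all_histories)
```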
Max's paper is quite illuminating: Tegmark, Max. 2007. The Mathematical Universe http://arxiv.org/abs/0704.0646
So we could say that there are good metaphysical reasons for preferring MWI to GRW or Bohm.
↑ comment by naasking · 2013-05-13T19:00:05.877Z · LW(p) · GW(p)
there is another argument speaking for many-worlds (indeed, even for all possible worlds - which raises new interesting questions of what is possible of course - certainly not everything that is imaginable): that to specify one universe with many random events requires lots of information, while if everything exists the information content is zero - which fits nicely with ex nihilo nihil fit
Now THAT's an interesting argument for MWI. It's not a final nail in the coffin for de Broglie-Bohm, but the naturalness of this property is certainly compelling.
↑ comment by Rob Bensinger (RobbBB) · 2013-09-23T03:42:55.530Z · LW(p) · GW(p)
Although Tegmark incidentally endorses MWI, Tegmark's MUH does not entail MWI. Yes, if there's a model of MWI, then some world follows MWI; but our world can be a part of a MUH ensemble without being in an MWI-bound region of the ensemble. We may be in a Bohmian portion of the ensemble.
Tegmark does seem to think MWI provides some evidence for MUH (which would mean that MUH predicts MWI over BM), but I think the evidence is negligible at best. The reasons to think MWI is true barely overlap at all with the reasons to think MUH is. In fact, the failure of Ockham to resolve BM v. MW could well provide evidence against MUH; if MWI (say) turned out to be substantially more complex (in a way that gives it fewer models) and yet true, that would give strong anthropic evidence against MUH. MUH is more plausible if we live in the kind of world that should predominate in the habitable zone of an ensemble.
↑ comment by Rob Bensinger (RobbBB) · 2013-09-23T03:31:49.828Z · LW(p) · GW(p)
But MWI is not the doctrine 'everything exists'. This is a change of topic. Yes, if we live in a Tegmark universe and MWI is the simplest theory, then it's likely we live in one of the MWI-following parts of the universe. But if we don't live in a Tegmark universe and MWI is the simplest theory, then it's still likely we live in one of the MWI-following possible worlds. It seems to me that all the work is being done by Ockham, not by Tegmark.
↑ comment by EHeller · 2013-09-23T03:41:50.077Z · LW(p) · GW(p)
Max Tegmark has paraphrased this idea nicely with the quip "many words or many worlds"
Sure, but why is the information content of the current state of the universe something that we would want to minimize? In both many-worlds and alternatives, the complexity of the ALGORITHM is roughly the same.
comment by billswift · 2008-05-11T15:50:37.000Z · LW(p) · GW(p)
"If the same laws are true at all levels - i.e., if many-worlds is correct - then when you measure one of a pair of entangled polarized photons, you end up in a world in which the photon is polarized, say, up-down, and alternate versions of you end up in worlds where the photon is polarized left-right. From your perspective before doing the measurement, the probabilities are 50/50. Light-years away, someone measures the other photon at a 20° angle to your own basis. From their perspective, too, the probability of getting either immediate result is 50/50 - they maintain an invariant state of generalized entanglement with your faraway location, no matter what you do. But when the two of you meet, years later, your probability of meeting a friend who got the same result is 11.6%, rather than 50%.
"If there is only one global world, then there is only a single outcome of any quantum measurement. Either you measure the photon polarized up-down, or left-right, but not both. Light-years away, someone else's probability of measuring the photon polarized similarly in a 20° rotated basis, actually changes from 50/50 to 11.6%."
I don't see how you claim many-worlds gets you around the special relativity problem, the measurements can only be compared within one world - how would postulating other non-interacting (after the split) worlds help?
Also I have been having trouble following your posts. Your writing here has the same problem many weirdos (IDers, perpetual-motion-machine makers, etc) have. Any facts and arguments are getting lost in your wordiness. You might want to try to post brief explanations of what specifically your claims are in each post (maybe as occasional summing-up posts).
comment by athmwiji · 2008-05-11T16:17:59.000Z · LW(p) · GW(p)
Brains, as far as we currently understand them, are not digital. For a neuron, fire / not fire is digital, but there is a lot of information involved in determining whether or not a neuron fires. A leaky integrator is a reasonable rough approximation to a neuron and is continuous.
comment by Richard_Hollerith2 · 2008-05-11T16:33:42.000Z · LW(p) · GW(p)
Your writing here has the same problem many weirdos . . . have. Any facts and arguments are getting lost in your wordiness.
Unfair. Eliezer has been trying to keep the series accessible to nonspecialists, and of course that means that the specialists are going to wade through more words than they would have preferred to wade through. Boo hoo.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-11T16:35:47.000Z · LW(p) · GW(p)
Brains, as far as we currently understand them, are not digital. For a neuron, fire / not fire is digital, but there is a lot of information involved in determining whether or not a neuron fires. A leaky integrator is a reasonable rough approximation to a neuron and is continuous.
The point is that by the time two brains differ by a whole neuron firing, they are decoherent - far too many particles in different positions. That's why you can't feel the subtle influence of someone trying to think a little differently from you - by the time a single neuron fires differently, the influence has diminished down to an exponentially tiny infinitesimal. Even a single neurotransmitter in a different place prevents two configurations from being identical.
@Billswift: The point is that nothing happens differently as a result of distant events - no local evolution, no probabilistic chance, no experience, no "non-signaling influence", nothing changes - until the two parties meet, slower than light. You can (I think) split it up and view it in terms of strictly local events with invariant states of distant entanglement.
@Recovering irrationalist: I haven't encountered any stronger arguments for the untestable SR-violating single-world theory. Sure, no one knows what science doesn't know. But given that I believe single-worlds is false, I should not expect to encounter unknown strong arguments for it. Do you have a particular stronger argument in mind?
@Jason: Bohm's particles are epiphenomena. The pilot-wave must be real to guide the particles; the particles themselves have no effect. If the pilot-wave is real, the amplitude distribution we know is real, and it will have conscious observers in it if it performs computations, etc. And there is simply no reason to suppose it.
@Mitchell: Of Born I have already extensively spoken (your 1), and postulating a single world doesn't help you at all; it is strictly simpler to say "The Born probabilities exist" than to say "The Born probabilities exist and control a magical FTL collapse" or "The Born probabilities exist and pilot epiphenomenal points [also FTL]." On your 2, it so happens that I don't deny causal continuity, and plan to speak of this later. And regarding (3) quantum physics describes a covariant, local process so it seems like a good guess that there exists a covariant, local representation; but regardless the essence of Special Relativity is in the covariance and locality, whether we can find a representation that reveals it, or not.
comment by Dynamically_Linked · 2008-05-11T17:09:28.000Z · LW(p) · GW(p)
Robin Hanson suggests that if exponentially tinier-than-average decoherent blobs of amplitude ("worlds") are interfered with by exponentially tiny leakages from larger blobs, we will get the Born probabilities back out.
Shouldn't it be possible for a tinier-than-average decoherent blob of amplitude to deliberately become less vulnerable to interference from leakages from larger blobs, by evolving itself to an isolated location in configuration space (i.e., a point in configuration space with no larger blobs nearby)? For example, it seems that we should be able to test the mangled worlds idea by doing the following experiment:
- Set up a biased quantum coin, so that there is a 1/4 Born probability of getting an outcome of 0, and 3/4 of getting 1.
- After observing each outcome of the quantum coin toss, broadcast the outcome to a large number of secure storage facilities. Don't start the next toss until all of these facilities have confirmed that they've received and stored the previous outcome.
- Repeat 100 times.
Now consider a "world" that has observed an almost equal number of 0s and 1s at the end, in violation of Born's rule. I don't see how it can get mangled. (What larger blob will be able to interfere with it?) So if mangled worlds is right, then we should expect a violation of Born's rule in this experiment. Since I doubt that will be the case, I don't think mangled worlds can be right.
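For scale, a quick sketch of the Born weights involved in this proposed experiment (n = 100 tosses with P(0) = 1/4, as specified above): the total squared-modulus weight of the near-equal-count branches is exponentially smaller than that of the typical branches, which is the sense in which such a world is tinier-than-average.

```python
from math import comb

p, n = 0.25, 100     # Born probability of outcome 0, and number of tosses, as in the proposal

def weight(k):
    """Total Born weight of all branches that recorded exactly k zeros out of n tosses."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(f"typical branches (k = 25):  {weight(25):.2e}")   # ~9e-02
print(f"near-equal counts (k = 50): {weight(50):.2e}")   # ~5e-08, exponentially smaller
```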
comment by Goplat · 2008-05-11T17:16:54.000Z · LW(p) · GW(p)
So the Bohm interpretation takes the same amplitude distribution as many-worlds and builds something on top of that. So what? That amplitude distribution is just a mathematical object, but it having a physical existence certainly doesn't change the truth or falsehood of any mathematical statements, so I could just as easily say that the amplitude distribution itself is an "epiphenomenon" (and therefore can't exist).
comment by RobinHanson · 2008-05-11T17:41:53.000Z · LW(p) · GW(p)
Dynamically, "secure storage facilities" are not at all secure against world mangling. Perhaps quantum error correction could do better.
comment by Dynamically_Linked · 2008-05-11T18:14:27.000Z · LW(p) · GW(p)
Robin, can you offer some intuitive explanation as to why defense against world mangling would be difficult? From what I understand, a larger blob of amplitude (world) can mangle a smaller blob of amplitude only if they are close together in configuration space. Is that incorrect? If those "secure storage facilities" simply write the quantum coin toss outcomes in big letters on some blackboards, which worlds will be close enough to be able to mangle the worlds that violate Born's rule?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-11T20:50:35.000Z · LW(p) · GW(p)
Dynamically, I think the problem is that for everything you try that would render your world "distant" in the configuration space, it naturally tends to make your world smaller and more vulnerable, too. The worlds mangling yours aren't close, it's just that, collectively, they're so much larger than yours, that even very tiny stray amplitude flows from them can mangle you.
@Goplat: In Bohm's theory, the amplitude distribution has to be real because it affects the course of the particles. But the amplitude distribution itself is not affected by the particles. So any people encoded in the amplitude distribution - which can certainly compute things - would have no way of knowing the particles existed.
↑ comment by drnickbone · 2012-06-27T20:29:10.042Z · LW(p) · GW(p)
Rather a late comment... but this response to Goplat reminds me of one of David Lewis's arguments for modal realism. Namely, he argues that "merely possible" people have exactly the same evidence that they are "real" as we do (it all looks real to them), and hence we ourselves have no evidence that we are "real" rather than merely possible.
An objection to this is "No! Merely possible people DON'T have evidence that they are real, because they don't exist. They don't have any evidence at all. They WOULD have the same evidence that we do if they DID exist, but then of course they WOULD be real."
A similar objection is that the wave function amplitudes can't do any real computation (as opposed to possible computation) unless they have real particles to compute with. So any people who find themselves existing can infer (correctly) that they are made out of real particles and not mere amplitudes.
It always amuses me that the particle motions in Bohm's theory are described as "hidden variables". Rather to the contrary, they are the ONLY things in the theory which are NOT hidden (whereas the wave function pushing the particles around is...)
comment by Jason3 · 2008-05-11T21:16:31.000Z · LW(p) · GW(p)
"it will have conscious observers in it if it performs computations" I'm at a loss for what this means.
"In Bohm's theory, the amplitude distribution has to be real because it affects the course of the particles. But the amplitude distribution itself is not affected by the particles. So any people encoded in the amplitude distribution - which can certainly compute things - would have no way of knowing the particles existed." How is not being able to know where the particular particles are in a particular amplitude distribution an argument against it?
comment by Robin_Z · 2008-05-11T21:31:12.000Z · LW(p) · GW(p)
Oh, that's subtle.
Check me if I'm wrong: according to the MWI, the evolving waveform itself can include instantiations of human beings, just as an evolving Conway's Life grid can include gliders. Thus, if we're proposing that humans exist (a reasonable hypothesis), they exist in the waveform, and if the Bohmian particles do not influence the evolution of the waveform, they exist in the waveform the same way whether or not Bohm's particles are there. And, in fact, if they do not influence the amplitude distribution, they're epiphenomenal in the same sense that people like Chalmers claim consciousness is.
If the particles do influence the evolution of the amplitude distribution, everything changes (of course). But that remains to be shown.
comment by Dynamically_Linked · 2008-05-11T22:27:09.000Z · LW(p) · GW(p)
Eliezer, I think your (and Robin's) intuition is off here. Configuration space is so vast, it should be pretty easy for a small blob of amplitude to find a hiding place that is safe from random stray flows from larger blobs of amplitude.
Consider a small blob in my proposed experiment where the number of 0s and 1s are roughly equal. Writing the outcomes on blackboards does not reduce the integrated squared modulus of this blob, but does move it further into "virgin territory", away from any other existing blobs. In order for it to be mangled by stray flows from larger blobs, those stray flows would somehow have to reach the same neighborhood as the small blob. But how? Remember that in this neighborhood of configuration space, the blackboards have a roughly equal number of 0s and 1s. What is the mechanism that can allow a stray piece of a larger blob to reach this neighborhood and mangle the smaller blob? It can't be random quantum fluctuations, because the Born probability of the same sequence of 0s and 1s spontaneously appearing on multiple blackboards is much less than the integrated squared modulus of the small blob. To put it another way, by the time a stray flow from a larger blob reaches the small blob, its amplitude would be spread much too thin to mangle the small blob.
comment by Wiseman · 2008-05-11T22:29:02.000Z · LW(p) · GW(p)
Question: how does MWI not violate SR/no-faster-than-light-travel itself?
That is, if a decoherence happens with a particle/amplitude, requiring at that point a split universe in order to process everything so both possibilities actually happen, how do all particles across the entire universe know that at that point they must duplicate/superposition/whatever, in order to maintain the integrity of two worlds where both possibilities happen?
comment by Recovering_irrationalist · 2008-05-12T00:04:51.000Z · LW(p) · GW(p)
Eliezer: But given that I believe single-worlds is false, I should not expect to encounter unknown strong arguments for it.
Indeed. And in light of your QM explanation, which to me sounds perfectly logical, it seems obvious and normal that many worlds is overwhelmingly likely. It just seems almost too good to be true that I now get what plenty of genius quantum physicists still can't.
The mental models/neural categories we form strongly influence our beliefs. The ones that now dominate my thinking about QM are learned from one who believes overwhelmingly in MWI. The commenters who already had non-MWI-supporting mental representations that made sense to them seem less convinced by your arguments.
Sure I can explain all that away, and I still think you're right, I'm just suspicious of myself for believing the first believable explanation I met.
comment by Patrick_(orthonormal) · 2008-05-12T02:32:38.000Z · LW(p) · GW(p)
Well, now I think I understand why you chose to do the QM series on OB. As it stands, the series is a long explication of one of the most subtle anthropocentric biases out there— the bias in favor of a single world with a single past and future, based on our subjective perception of a single continuous conscious experience. It takes a great deal of effort before most of us are even willing to recognize that assumption as potentially problematic.
Oh, and one doesn't even have to assume the MWI is true to note this; the single-world bias is irrationally strong in us even if it turns out to correspond to reality.
comment by mitchell_porter2 · 2008-05-12T05:25:41.000Z · LW(p) · GW(p)
Günther, I am aware of that argument, but it has very little to do with favoring many worlds in the sense of Everett. See Tegmark's distinction between Level III and Level IV. The worlds of an Everett multiverse are supposed to be connected facets of a single entity, not disjoint Level IV entities.
This allows me to highlight another aspect of many worlds, which is the thorough confusion regarding causality. What are the basic cause-and-effect relationships, according to many worlds? What are the entities that enter into them? Do worlds have causal power, or are they purely epiphenomenal? Remember, that-which-exists at any moment does not just consist of a set of worlds, but a set of worlds each with a complex number attached. And that-which-exists in the next moment is - the same set of worlds, but now with different complex numbers attached. The more I think about it, the less sense it makes, but people have been seduced by the simple-sounding rhetoric of worlds splitting and recombining.
To say it all again: what are we being offered by this account? On the one hand, a qualitative picture: the reality we see is just one sheet in a sheaf of worlds, which split and merge as they become dissimilar and then become similar again. On the other hand, a quantitative promise: the picture isn't quite complete, but we hope to get the exact probabilities back somehow.
Now what is the reality of quantum mechanics, applied to the whole universe? (If we adopt the configuration-centric approach.) There is a space of classical-looking "configurations" - each an arrangement of particles in space, or a frozen sea of waves in fundamental fields. Then, there is a complex number, a "probability amplitude", associated with each configuration. Finally, we have an equation, the Schrödinger equation, which describes how the complex numbers change with time. That's it.
If we just look at the configuration space, and ignore the complex numbers, there is no splitting and merging, nothing changes. We have a set of instantaneous world-states, just sitting there.
If we try to bring the complex numbers into the picture, there are two obvious options. One is to identify a world with a particular static configuration. Then nothing ever actually moves in any world, all that changes is the mysterious complex number globally associated with it. That's one way to break down a universal wavefunction into "many worlds", but it seems meaningless.
The other way is to break down the wavefunction at any moment in that fashion, but to deny any relationship between the worlds of one moment and the world of the next, as I described it up in my second paragraph. So once again, reality ends up consisting of a set of static spatial configurations, each with a complex number magically attached, but there is no continuity of existence.
There is actually a third option, however - an alternative way to assert continuity of existence, between the worlds of one moment and the world of the next. Basically, you go against the gradient in configuration space of the angles of the complex numbers, in order to decide which world-moments later continue the world-moments of now. That defines a family of nonintersecting trajectories, each of which resembles a perturbed version of a classical history. In fact, we've just reinvented a form of Bohmian mechanics.
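For reference, the standard de Broglie-Bohm guidance law that this "follow the phase gradient" construction amounts to (notation mine, not Mitchell's): writing the wavefunction in polar form, the trajectories are integral curves of the phase gradient,

$$\psi(q,t) = R(q,t)\,e^{iS(q,t)/\hbar}, \qquad \frac{dQ_k}{dt} = \frac{\nabla_k S(Q,t)}{m_k}.$$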
But enough. I hope someone grasps that we have simply not been given a picture which can sustain the rhetoric of worlds splitting. Either the worlds sit there unchanging, or they only exist for a moment, or they are self-sufficient Bohmian worlds which neither split nor join. If I try to understand mangled worlds in this way, it seems to say, "ignore those configurations where the amplitude is very small". But they're either there or not; and if they are literally not, we're no longer using the Schrödinger equation.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-12T06:38:32.000Z · LW(p) · GW(p)
Mitchell, you already know about Barbour, so why are you asking this?
comment by constant3 · 2008-05-12T06:50:05.000Z · LW(p) · GW(p)
Remember, that-which-exists at any moment does not just consist of a set of worlds, but a set of worlds each with a complex number attached. And that-which-exists in the next moment is - the same set of worlds, but now with different complex numbers attached.
You seem to be talking about the wavefunction, which is a complex function defined over the configuration space (a set of configurations each with a complex number attached). But in that case you seem to be confusing a world with a configuration. A configuration defines only position. (Assuming we're talking about positional configuration space.)
It seems I can save myself some trouble explaining by quoting Eliezer:
A point mass of amplitude, concentrated into a single exact position in configuration space, does not correspond to a precisely known state of the universe. It is physical nonsense. It's like asking, in Conway's Game of Life: "What is the future state of this one cell, regardless of the cells around it?" The immediate future of the cell depends on its immediate neighbors; its distant future may depend on distant neighbors.
If Conway's Game of Life managed to support a multiverse, then a single universe in this multiverse would not correspond to a cell. It would correspond to some section of the whole pattern quite a bit larger than a single cell - a section which was for the most part causally separated from the rest of the pattern. And this section might move around over Conway's gameboard (or whatever it's called), just as a glider can move across Conway's gameboard.
comment by mitchell_porter2 · 2008-05-12T07:00:47.000Z · LW(p) · GW(p)
So far, we're still implicitly in a framework where there's time evolution, so I have described ways of implementing the many worlds vision in that framework. I am a little hesitant to preempt your next step (after all, I don't know what idiosyncratic spin you may put on things), but nonetheless: Suppose we adopt the "timeless" perspective. The wavefunction of the universe is a standing wave in configuration space; it does not undergo time evolution. My first option means nothing, because now we just have a static association of amplitudes with configurations. The second option is Barbour's - disconnected "time capsules" - only now there isn't even a question of linking up the time capsules of one moment with the time capsules of the next, because there's only one timeless moment throughout configuration space. I don't know if the third option is still viable or not; you can still compute the phase gradients of the standing wave according to the Bohmian law of motion, but I don't know about the properties of the resulting trajectories.
There may be a problem for mangled worlds peculiar to Barbour's model; there are no dynamics, therefore no mangling in any dynamical sense. You will have to come up with a nondynamical notion of decoherence too.
comment by mitchell_porter2 · 2008-05-12T07:10:09.000Z · LW(p) · GW(p)
(Previous comment was in response to Eliezer's 02:38 AM.)
constant, part of my objective is to highlight the vagueness of the concept of "world" as used by many-worlds advocates, and the problems peculiar to the various ways of making it exact, having previously argued that leaving it vague is not an option. I have certainly seen many-worlds people talk as if worlds were "wave packets" or other extended substructures within the total wavefunction. But I await a precise statement of what that means.
comment by constant3 · 2008-05-12T07:42:40.000Z · LW(p) · GW(p)
mitchell,
I think Eliezer recognizes the vagueness of "world" but sees it as a problem for single-worlders. This is what he seems to be saying here:
We have specific reasons to be highly suspicious of the notion of only one world. The notion of "one world" exists on a higher level of organization, like the location of Earth in space; on the quantum level there are no firm boundaries (though brains that differ by entire neurons firing are certainly decoherent). How would a fundamental physical law identify one high-level world?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-12T07:56:13.000Z · LW(p) · GW(p)
What flows is not time, but causality. As you guessed, I shall expand on that later. I think Barbour's time capsules reflect his lack of cog-sci-phil background - a static disk drive should never contain any observers; something has to be processed. You cannot identify observer-moments with individual configurations, which seems to be what Barbour is trying to do.
From the perspective outside time, nothing changes, but things are nonetheless determined by their causal ancestors. This is what makes the notion of "local causality" or Markov neighborhoods meaningful. This flow of determination is what supports computation, which is what supports the existence of observers. This means that no observer is ever embedded in a single configuration; only a determination of future configurations' amplitude by past configurations' amplitude, can support computation and consciousness.
Which I consider as common sense. Timelessness, also, adds up to normality; there's still a future, there's still a past, and there's still a causal relation between throwing a rock and breaking a window. None of that goes away when you take a standpoint outside time.
comment by mitchell_porter2 · 2008-05-12T08:22:02.000Z · LW(p) · GW(p)
constant - well, then, it is shaping up as follows: We need some concept of world. We can try to be exact about it, and run into various problems, as I have suggested above. Or we can be determinedly vague about it - e.g. saying that a world is a roughly decoherent blob of amplitude - and run into other problems. And then on top of this we can't even recuperate the quantitative side of quantum mechanics.
There is a form of many-worlds that gives you the correct probabilities back. It's called consistent histories or decoherent histories. But it has two defining features. First of all, the histories in question are "coarse-grained". For example, if your basic theory was a field theory, in one of these consistent histories, you don't specify a value for every field at every space-time point, just a scattering of them. Second, each consistent history has a global probability associated with it - not a probability amplitude, just an ordinary probability. Within this framework, if you want to calculate a transition probability - the odds of B given A - first you consider only those histories in which A occurs, and then you compute Pr(B|A) by using those a priori global probabilities.
Those global probabilities don't come from nowhere. The basic mathematical entity in consistent histories is an object called the decoherence functional (which can be derived from a familiar-to-physicists postulate like an action or a Hamiltonian), which takes as its input two of these coarse-grained histories. The decoherence functional defines a consistency condition for the coarse-grained histories; a set of them is "consistent" if they are all pairwise decoherent according to the decoherence functional. You then get that a priori global probability for the individual history by using it for both inputs (in effect, calculating its self-decoherence, though I don't see what that could mean). The whole thing is reminiscent of a diagonalized density matrix, and if I understood it better I'm sure I could make more of that similarity.
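For concreteness, here is a minimal numerical sketch of that machinery, with a toy, already-diagonal decoherence functional whose entries are invented for illustration rather than derived from any action or Hamiltonian: consistency means the off-diagonal elements vanish, the diagonal supplies the a priori global probabilities, and Pr(B|A) is obtained by restricting to the A-histories and renormalizing, as described above.

```python
import numpy as np

# Toy decoherence functional over four coarse-grained histories, labelled by
# whether events A and B occur. The numbers are invented for illustration,
# not derived from any action or Hamiltonian.
histories = [("A", "B"), ("A", "not-B"), ("not-A", "B"), ("not-A", "not-B")]
D = np.diag([0.32, 0.08, 0.15, 0.45]).astype(complex)  # already diagonal => consistent

def is_consistent(D, tol=1e-9):
    """A set of histories is consistent if the off-diagonal elements of the
    decoherence functional (approximately) vanish."""
    return np.all(np.abs(D - np.diag(np.diag(D))) < tol)

def probability(D, i):
    """The a priori global probability of history i is the diagonal element."""
    return D[i, i].real

def conditional(D, cond, event):
    """Pr(event | cond): keep only histories containing `cond`, renormalize."""
    p_cond = sum(probability(D, i) for i, h in enumerate(histories) if cond in h)
    p_both = sum(probability(D, i) for i, h in enumerate(histories)
                 if cond in h and event in h)
    return p_both / p_cond

assert is_consistent(D)
print("Pr(B | A) =", conditional(D, "A", "B"))  # 0.32 / (0.32 + 0.08) = 0.8
```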
Anyway, technical details aside, the important point is that there is a form of many-worlds thinking in which we do get the Born probabilities back, by conditioning on a universal prior computed from the decoherence functional. If we try this out as a picture of reality, we now have to make sense of the probabilities associated with the histories. Two possibilities suggest themselves to me (I will neglect subjectivist interpretations of those probabilities): (a) there's only one world, and a world-probability is the primordial probability that that was to be the world which became actual; (b) all the worlds exist, in multiple copies, and the probabilities describe their relative multiplicities. They're both a little odd, but I think either is preferable to the whole "dare to be vague" line of argument.
comment by mitchell_porter2 · 2008-05-12T09:32:06.000Z · LW(p) · GW(p)
Eliezer: would you agree with the following, as a paraphrase of the physical ontology you propose?
Quantum theory is just field theory in the infinite-dimensional space formerly known as configuration space. What we thought were "locations in space" are actually directions in configuration space. If I see a thing at a place, it actually means there's a peak in the ψ-field in a certain region of configuration space, a region which somehow corresponds to my seeing of the thing just as much as it corresponds to the thing itself being in that state. And if the peak splits into two, there are now two of me.
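To make that paraphrase concrete, here is a toy sketch with all numbers invented for illustration: configuration space for two one-dimensional particles is just a two-dimensional grid, the ψ-field assigns an amplitude to every configuration, and "a thing at a place" corresponds to a localized blob of amplitude; two blobs would be the "two of me" case.

```python
import numpy as np

# Toy "psi-field" over the configuration space of two 1-D particles.
# Each configuration is a pair (x1, x2), so configuration space is a 2-D grid.
# All numbers are illustrative.
x = np.linspace(-5, 5, 101)
X1, X2 = np.meshgrid(x, x, indexing="ij")

def blob(c1, c2, width=0.5):
    """A localized lump of amplitude around the configuration (c1, c2)."""
    return np.exp(-((X1 - c1) ** 2 + (X2 - c2) ** 2) / (2 * width ** 2))

# One peak: amplitude concentrated near "particle 1 at -2, particle 2 at +1".
psi_one_peak = blob(-2.0, 1.0).astype(complex)

# Two peaks: a superposition of two such configurations -- on the many-worlds
# reading, two decohered versions of the observer seeing different things.
psi_two_peaks = (blob(-2.0, 1.0) + blob(3.0, -1.5)).astype(complex)

# Find where |psi|^2 is largest (with two equal blobs, argmax returns the first).
prob = np.abs(psi_two_peaks) ** 2
i, j = np.unravel_index(np.argmax(prob), prob.shape)
print("dominant peak near configuration (x1, x2) =", (x[i], x[j]))  # ~(-2.0, 1.0)
```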
I think I get it finally. Not that I believe it now. But expressed that way, I can put it into communication with the other interpretations, as part of the one spectrum of theoretical possibilities. I still strongly doubt that, after you employ a Kolmogorovian razor, theories with branching worlds will be favored over theories without. And I still advance the vagueness objection; but there are extra directions in which this idea might be taken. For example, though the boundaries of a wave are vague, the existence of a peak is not. So a quest for ontologically sharp entities, as the ostensible correlates of 'world' and 'mind', could focus on topological structures in the ψ-field, like inflection points, rather than geometric ones like blobs. Indeed, the whole description in terms of a smoothly varying ψ-field might be dual to a discrete combinatorial one; there are many such correspondences in algebraic geometry.
comment by Silas · 2008-05-12T14:44:23.000Z · LW(p) · GW(p)
So, decoherence, which implies Many Worlds, is the superior scientific theory because it makes the same predictions with strictly fewer postulates, and academic physicists only believe otherwise because of deeply ingrained biases.
There, that didn't take 4,000 words, now, did it?
j/k, j/k, you're good, you're good ;-)
(Don't ban me)
comment by Dustin2 · 2008-05-12T18:23:37.000Z · LW(p) · GW(p)
So, decoherence, which implies Many Worlds, is the superior scientific theory because it makes the same predictions with strictly fewer postulates
No. Decoherence as an interpretation is not a scientific theory, it is an ontology. Decoherence as an interpretation does not imply Many Worlds unless the wavefunction is considered to be metaphysically real. That ascription of reality to the wavefunction is not a scientific postulate, it is a metaphysical one. Many worlds does not predict anything -- quantum theory makes the predictions, Many Worlds is an ontology, a reification of that theory.
In any case, my last question was ignored, and I don't expect that further questions about considering things in a less realistic light will be taken seriously, because of the glib dismissal and flippant mischaracterization Eli has given the very serious objections from instrumentalists. But I'm going to throw out another paper on the relational interpretation in the hopes that someone here will take seriously the idea that all of this confusion over which interpretation is the right one comes from an unreasonable commitment to bad metaphysics.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-13T01:15:36.000Z · LW(p) · GW(p)
Dustin said: "Decoherence as an interpretation does not imply Many Worlds unless the wavefunction is considered to be metaphysically real."
Dustin's referenced paper said:
The relational approach claims that a number of confusing puzzles raised by Quantum Mechanics (QM) result from the unjustified use of the notion of objective, absolute, ‘state’ of a physical system, or from the notion of absolute, real, ‘event’. The way out from the confusion suggested by RQM consists in acknowledging that different observers can give different accounts of the actuality of the same physical property [6]. This fact implies that the occurrence of an event is not something absolutely real or not, but it is only real in relation to a specific observer. Notice that, in this context, an observer can be any physical system. Thus, the central idea of RQM is to apply Bohr and Heisenberg’s key intuition that “no phenomenon is a phenomenon until it is an observed phenomenon” to each observer independently. This description of physical reality, though fundamentally fragmented, is assumed in RQM to be the best possible one, i.e. to be complete.
The final step in the proof is left as an exercise to the reader.
comment by mitchell_porter2 · 2008-05-13T01:28:10.000Z · LW(p) · GW(p)
A further implication of "quantum theory as field theory of configuration space": It means that "spatial configurations" are merely coordinates, labels; and labels are merely conventions. All that really exists in this interpretation are currents in a homogeneous infinite-dimensional space. When such a current passes through a point notionally associated with the existence of a particular brain state, there's no picture of a brain attached anywhere. This means that the currents and their intrinsic relations bear all the explanatory burden formerly borne by spatial configurations in classical physics.
Dustin, what question are you talking about? Question to whom? The only comments I see from you are addressed to Caledonian, in the previous post in this series.
I am afraid that I find the relational interpretation to be gibberish. "The character of each quantum event is only relative to the system involved in the interaction." Can we apply this to Schrödinger's cat? "The cat is only dead relative to its being seen to be dead", perhaps? The cat is dead, alive, neither, or both. It is not "relative".
Replies from: whowhowho↑ comment by whowhowho · 2013-01-25T17:25:18.973Z · LW(p) · GW(p)
No, you can't get inconsistent interpretations:
"This relativisation of actuality is viable thanks to a remarkable property of the formalism of quantum mechanics. John von Neumann was the first to notice that the formalism of the theory treats the measured system (S ) and the measuring system (O) differently, but the theory is surprisingly flexible on the choice of where to put the boundary between the two. Different choices give different accounts of the state of the world (for instance, the collapse of the wave function happens at different times); but this does not affect the predictions on the final observations. Von Neumann only described a rather special situation, but this flexibility reflects a general structural property of quantum theory, which guarantees the consistency among all the distinct "accounts of the world" of the different observing systems. The manner in which this consistency is realized, however, is subtle."--SEP
comment by Thanatos_Savehn · 2008-05-13T07:23:39.000Z · LW(p) · GW(p)
It's good to know that somewhere I won the World Series of Poker last year; and the idiot who went all in over my 3x raise with 7-2 offsuit and sucked out to beat my AA is poor and broke somewhere today, and that's good to know too. Not that I'm bitter or anything, of course. Not in those other worlds anyway.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-13T07:46:11.000Z · LW(p) · GW(p)
Live in your own world.
comment by Günther_Greindl · 2008-05-14T14:43:15.000Z · LW(p) · GW(p)
Mitchell,
your concerns about the vagueness of the world concept are addressed here:
Everett and Structure (David Wallace) http://arxiv.org/abs/quant-ph/0107144v2
Also, the ontology proposed here fits very nicely with the currently most promising strand of Scientific Realism (also referred to in the Wallace paper), in its ontic variant.
http://plato.stanford.edu/entries/structural-realism/
Cheers, Günther
comment by mitchell_porter2 · 2008-05-17T06:57:57.000Z · LW(p) · GW(p)
Günther, I have previously argued that vagueness is not an option for "mind" and "world", even if it is for "baldness" or "heap of sand" or "table". The existence of some sort of a world, with you in it, and the existence of a mind aware of this, are epistemic fundamentals. Try to go vague on those and you are in effect saying there's some question as to whether anything at all exists, or that that is just a matter of definition. Your mind in your world is the medium of your awareness of everything. You are somewhat free to speculate as to the nature of mind and world, but you are not free to say that there's no fact of the matter at all.
This whole situation exists because of the particular natural-scientific models we have. But rather than treat the nonvagueness of mind and world as an extra datum to be used in theoretical construction, instead we get apologetics for the current models, explaining how we can do without exactness in this regard. It's all rationalization, if you ask me.
comment by Dihymo · 2008-06-01T20:16:41.000Z · LW(p) · GW(p)
"Live in your own world." Sure, except when I need the MWI Spaghetti Monster to get the opposite of my result.
Collapse/MWI are the new wave/particle duality. The metaphysical cube fell over and rotated 90 degrees. Collapse/MWI only looks different because the cube looks unchanged.
A superposition doesn't imply that the simpler component waveforms exist. It can also mean you drove the speakers to eleven, reached the limit the fabric of spacetime could handle, and are receiving distortion.
comment by Dave4 · 2008-06-27T10:18:27.000Z · LW(p) · GW(p)
Many worlds is far from obviously true. The only logical standpoint is a single universe; there's no evidence against it, or even suggesting ANYTHING else.
Bohm is probably the correct one, and has been since 1926, before even Copenhagen was made up.
If you're such a MWI believer, realize it's a self-refuting faith. In MWI, all the atoms making up your brain would be in many universes, made to believe it was right while it was wrong.
comment by Z._M._Davis · 2008-06-27T12:23:47.000Z · LW(p) · GW(p)
"realize it's self refuting faith. [...] all the atoms making up your brain would be [...] made to believe it was right while it was wrong."
That's not an argument against the MWI; that's an argument against physics.
comment by Dave4 · 2008-06-28T07:02:03.000Z · LW(p) · GW(p)
Only if Many worlds is assumed true, yeah, because then EVERY possibility is true. Like right now in this universe you read this post. In another you have intercourse with your neighbour's dog. In another your hair just fell off. EVERY physical possibility being true = not science = cop out = end of science.
Anyway, MWI is inconsistent with all forms of realism, so it's an incoherent hypothesis.
comment by Dave4 · 2008-06-28T07:04:36.000Z · LW(p) · GW(p)
Please save your breath; don't even try to say "NONO, Many Worlds is the REALIST approach to QM." That's Bohm; he came 3 years before Everett, he saved realism in QM. Actually no, de Broglie did in the early 1920s.
Read Travis Norsen's article in Foundations of Physics: "Against Realism". It'll show you just HOW deluded MW proponents claim they are.
You can find it on arXiv, I think.
comment by Z._M._Davis · 2008-06-28T07:14:34.000Z · LW(p) · GW(p)
Note the ellipses, Dave.
Replies from: mlionson↑ comment by mlionson · 2010-02-17T02:19:23.803Z · LW(p) · GW(p)
There is no serious quantum physicist who would deny that it is possible to prepare a superposition of states in which a needle penetrates the skin to obtain a blood sugar measurement or does not. This situation could be created, perhaps by briefly freezing a small component of blood and skin on a live person. When this situation predictably resolves into a situation in which the measuring apparatus reads out the result of a blood sugar measurement, though the needle is seen to never penetrate the skin, where was the measurement made?
Where was the bloody needle? Where was the measuring apparatus on which the measurement was made? Where was the arm from which the blood was taken!
Those who do not understand the existence of the multiverse need to provide answers to these simple questions. If the arm is not real in a different universe in which the needle actually went in, how was blood drawn from it and a result reported?
If someone seriously doubts that this scenario can and will be created in the future, which law of physics says that we cannot create this superposition? Which law of physics do you plan to change, to prevent this result, though it has not failed any experiment?
Remarkably, even most of those who deny the existence of the multiverse do not deny that such a blood sugar result could be obtained. This means that virtually all physicists, including those who support Bohm, transactional perspective, Copenhagen, etc., agree that we will be able to obtain a blood sugar result from a needle that never penetrated the arm.
To them I ask again. Where is the arm from which the blood was drawn? Is your hypothesis really that it was drawn in the world of possibility? If so then the map that you call the world of possibility has every component of a real world, including the blood! When the map is as detailed in every respect as the territory, it is the territory. Right?
Replies from: wnoise↑ comment by wnoise · 2010-02-17T05:48:38.684Z · LW(p) · GW(p)
There is no serious quantum physicist who would deny that it is possible to prepare a superposition of states in which a needle penetrates the skin to obtain a blood sugar measurement or does not.
True, but many will say it is impossible for all practical purposes.
When this situation predictably resolves into a situation in which the measuring apparatus reads out the result of a blood sugar measurement, though the needle is seen to never penetrate the skin, where was the measurement made?
The situation resolves into either:
- The measuring apparatus pierces the skin, has a bloody needle, and reports the result.
- The measuring apparatus does not pierce the skin, does not have a bloody needle, and does not report the result.
Histories only interfere when they come to the same end result. That doesn't happen in this case.
Replies from: mlionson↑ comment by mlionson · 2010-02-17T06:30:43.979Z · LW(p) · GW(p)
"True, but many will say it is impossible for all practical purposes."
So the truth of the science is determined by the costs of doing the experiment? By the way, experimentalists are getting far better at creating larger and larger superpositions in making quantum computers, and quantum unitary evolution of the state vector has never been shown to be violated. There is never a time when what could have happened cannot affect what does happen.
"The situation resolves into either: 1. The measuring apparatus pierces the skin, has a bloody needle, and reports the result. 2. The measuring apparatus does not pierce the skin, does not have a bloody needle, and does not report the result"
That is just not true according to known laws of physics. The blood sugar measuring apparatus can also be in a superposition of blood being analyzed and blood not being analyzed, along with the superposition of the needle. So the result can in fact be recorded and the experiment can be set up so that the skin is (almost) never penetrated.
Copenhagen people call this type of result a "counterfactual". The fact that something could have happened (the needle going in) changes what does happen (the blood sugar result is measured). Except, the whole counterfactual argument becomes nonsensical when one is talking about blood sugar readings from needles that never penetrate the skin.
This is precisely the type of situation that David Deutsch writes about when he says the following:
"To predict that future quantum computers, made to a given specification, will work in the ways I have described, one need only solve a few uncontroversial equations. But to explain exactly how they will work, some form of multiple-universe language is unavoidable. Thus quantum computers provide irresistible evidence that the Multiverse is real. One especially convincing argument is provided by quantum algorithms ... which calculate more intermediate results in the course of a single computation than there are atoms in the visible universe. When a quantum computer delivers the output of such a computation, we shall know that those intermediate results must have been computed somewhere, because they were needed to produce the right answer. So I issue this challenge to those who still cling to a single-universe worldview: if the universe we see around us is all there is, where are quantum computations performed? I have yet to receive a plausible reply."
Blood sugar results from needles and measuring devices that were in superposition and results of calculations from qubits in superposition are precisely the outcomes we can expect in the future from utilizing the known laws of physics to our advantage.
Where are the calculations performed? Where is the bloody arm?
Those who do not accept the reality of the multiverse really do have to answer these simple questions, yet invariably they cannot.
Replies from: wnoise↑ comment by wnoise · 2010-02-17T06:50:45.472Z · LW(p) · GW(p)
There is never a time when what could have happened cannot affect what does happen.
You are badly confused. When you describe things as being in superposition, then only what happened (the entire superposition) affects what does happen (in the entire superposition). If you take some sort of "coherent histories" view, then, again, all coherent histories can equally well have been said to happen.
The blood sugar measuring apparatus can also be in a superposition of blood being analyzed and blood not being analyzed, along with the superposition of the needle.
Correct.
So the result can in fact be recorded and the experiment can be set up so that the skin is (almost) never penetrated.
No. We get a superposition of the result being recorded, and the result not being recorded.
Those who do not accept the reality of the multiverse
I do accept the reality of the multiverse. But I know how to use quantum mechanics to make predictions, and I get different ones than you do.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-02-17T06:57:10.142Z · LW(p) · GW(p)
So the result can in fact be recorded and the experiment can be set up so that the skin is (almost) never penetrated.
No. We get a superposition of the result being recorded, and the result not being recorded.
mlionson may be talking about Elitzur-Vaidman bomb-testing:
On average, this will identify all of the dud bombs, explode two thirds of the usable bombs, and identify one third of the usable bombs without detonating them... Kwiat et al. devised a method, using a sequence of polarising devices, that efficiently increases the yield rate to a level arbitrarily close to one.
Replies from: wnoise
↑ comment by wnoise · 2010-02-17T07:09:54.141Z · LW(p) · GW(p)
That is the most charitable interpretation. I confess that I did not at all think of that.
Of course, given no further details, and hence assuming standard measurement devices and procedures, this sort of thing really is impossible with needles and arms.
Replies from: mlionson↑ comment by mlionson · 2010-02-17T07:30:04.842Z · LW(p) · GW(p)
The Elitzur-Vaidman bomb testing device is an example of a similar phenomenon. What law of physics precludes the construction of a device that measures blood sugar but with the needle (virtually never) penetrating the skin?
Replies from: mlionson↑ comment by mlionson · 2010-02-17T08:50:41.365Z · LW(p) · GW(p)
And if no law of physics precludes something from being done, then only our lack of knowledge prevents it from being done.
So if there are no laws of physics that preclude developing bomb testing and sugar measuring devices, our arguments against this have nothing to do with the laws of physics, but instead have to do with other parameters, like lack of knowledge or cost. So if the laws of physics do not preclude things from happening, we might as well assume that they can happen, in order to learn from the physics of these possible situations.
So for the purposes of understanding what our physics says can happen, it becomes reasonable to posit that devices have been constructed that can test the activity of Elitzur-Vaidman bombs without (usual) detonation or measure blood sugars without needles (usually) penetrating the skin. It is reasonable to posit this because the known laws of physics do not forbid this.
So those who do not believe in the multiverse but still believe in their own rationality do need to answer the question, "Where is the arm from which the blood was drawn?"
Or, individuals denying the possibility of such a measuring device being constructed need to posit a new law of physics that prevents Elitzur-Vaidman bomb testing devices from being constructed and blood sugar measuring devices (that do not penetrate the skin) from being constructed.
If they posit this new law, what is it?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-02-18T02:27:33.615Z · LW(p) · GW(p)
In the Elitzur-Vaidman bomb test, information about whether the bomb has exploded does not feed into the experiment at any point. When you shoot photons through the interferometer, you are not directly testing whether the bomb would explode or has exploded elsewhere in the multiverse; you are testing whether the sensitive photon detector in the bomb trigger works.
As wnoise said, to directly gather information from a possible history, the history has to end in a physical configuration identical to the one it is being compared with. The two histories represent two paths through the multiverse, if you wish, with a separate flow of quantum amplitude along each path in configuration space, and then the flows combine and add when the histories recombine by converging on the same configuration.
In the case of an exploded bomb, this means that for a history in which the bomb explodes to interfere with a history in which the bomb does not explode, the bomb has to reassemble somehow! And in a way which does not leave any other physical traces of the bomb having exploded.
In the case of your automated blood glucose meter coupled to a quantum switch, for the history where the reading occurs to interfere with the history where the reading does not occur, the reading and all its physical effects must similarly be completely undone. Which is going to be a problem since the needle pricked flesh and a pain signal was probably conveyed to the subject's brain, creating a memory trace. You said something about "briefly freezing a small component of blood and skin on a live person", so maybe you appreciate this need for total reversibility.
In the case of counterfactual measurements which have actually been performed, very simple quantum systems were involved, simple enough that the reversibility, or the maintenance of quantum coherence, was in fact possible.
However, I totally grant you that the much more difficult macro-superpositions appear to be possible in principle, and that this does pose a challenge for single-world interpretations of quantum theory. They need to either have a single-world explanation for where the counterfactual information comes from, or an explanation as to why the macro-superpositions are not possible even in principle.
Such explanations do in fact exist. I'll show how it works again using the Elitzur-Vaidman bomb test.
The bomb test uses destructive interference as its test pattern. Destructive interference is seen in the dark zones in the double slit experiment. Those are the regions where (in a sum-over-histories perspective) there are two ways to get there (through one slit, through the other slit), but the amplitudes for the two ways cancel, so the net probability is zero. The E-V bomb-testing apparatus contains a beam splitter, a "beam recombiner", and two detectors. It is set up so that when the beam proceeds unimpeded through the apparatus, there is total destructive interference between the two pathways leading to one of the detectors, so the particles are only ever observed to arrive at the other detector. But if you place an object capable of interacting with the particle in one of the paths, that will modify the portion of the wavefunction traveling along that path (part of the wavefunction will be absorbed by the object), the destructive interference at the end will only be partial, and so particles will sometimes be observed to arrive at that detector.
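For readers who want the numbers, here is a minimal sketch of that setup; the beam-splitter convention and the choice of which arm holds the bomb are illustrative assumptions. With both paths open, one detector stays completely dark; with a live bomb blocking one arm, the photon is absorbed half the time, and a quarter of the time it reaches the formerly dark detector, certifying a working bomb without detonating it.

```python
import numpy as np

# Minimal Mach-Zehnder sketch of the Elitzur-Vaidman bomb test. Modes 0 and 1
# are the two interferometer arms; the beam-splitter matrix and the arm that
# holds the bomb are illustrative choices.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)      # 50/50 beam splitter
photon_in = np.array([1.0 + 0j, 0.0])      # photon enters in arm 0

def run(blocked):
    mid = BS @ photon_in                   # amplitudes in the two arms
    exploded = 0.0
    if blocked:                            # live bomb in arm 1 absorbs that amplitude
        exploded = abs(mid[1]) ** 2        # probability the bomb goes off
        mid = np.array([mid[0], 0.0])
    out = BS @ mid                         # recombining beam splitter
    return abs(out[0]) ** 2, abs(out[1]) ** 2, exploded

print("dud bomb  (dark, bright, exploded):", run(blocked=False))  # ~(0.0, 1.0, 0.0)
print("live bomb (dark, bright, exploded):", run(blocked=True))   # ~(0.25, 0.25, 0.5)
# A click at the "dark" detector certifies a working bomb without detonating it.
```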
The many-worlds explanation is that when the object is there, it creates a new subset of worlds where the particle is absorbed en route, this disturbs the balance between worlds, and so now there are some worlds where the particle makes it to the formerly forbidden detector.
Now consider John Cramer's transactional interpretation. This interpretation is all about self-consistent standing waves connecting past and future, via a transaction, a handshake across time, between "advanced" and "retarded" electromagnetic potentials (in the case of light). It's like the Novikov self-consistency principle for wormhole histories; events arrange themselves so as to avoid paradox because logically they have to. That's how I understand Cramer's idea.
So, in the transactional framework, how do we explain the E-V bomb test? The apparatus, the experimental setup, defines the boundary conditions for the standing waves. When we have the interferometer with both pathways unimpeded (or with a "dud bomb", which means that the photon detector in its trigger isn't working, which means the photon passes right through it), the only self-consistent outcome is the one where the photon makes it to the detector experiencing constructive interference. But when there is an object in one pathway capable of absorbing a photon, we have three self-consistent outcomes: photon goes to one detector, photon goes to other detector, photon is absorbed by the object (which then explodes if it's an E-V bomb, but that outcome is not part of the transaction, it's an external causal consequence).
In general, the transactional interpretation explains counterfactual measurement or counterfactual computation through the constraint of self-consistency. The presence of causal chains moving in opposite temporal directions in a single history produces correlations and constraints which are nonlocal in space and time. By modulating the boundary conditions we are exploring logical possibilities, and that is how we probe counterfactual realities.
A completely different sort of explanation would be offered by an objective collapse theory like Penrose's. Here, the prediction simply is that such macro-superpositions do not exist. By the way, in Penrose's case, he is not just arbitrarily stipulating that macro-superpositions do not happen. He was led to this position by a quantum-gravity argument that superpositions of significantly different geometries are dynamically undefined. In general relativity, the rate of passage of time is internal to the geometry, but to evolve a superposition of geometries would require some calibration of one geometry's time against the other. Penrose argued that there was no natural way to do this and suggested that this is when wavefunction collapse occurs. I doubt that the argument holds up in string theory, but anyway, for argument's sake let's consider how a theory like this analyzes the E-V bomb-testing experiment. The critical observation is that it's only the photon detector in the bomb trigger which matters for the experiment, not the whole bomb; and even then, it's not the whole photon detector, but just that particular combination of atoms and electrons which interacts with the photon. So the superposition required for the experiment to work is not macro at all, it's micro but it's coupled to macro devices.
This is a really good case study for quantum interpretation; I had to engage in quite a bit of thought and research to analyze it even this much. But the single-world schools of thought are not bereft of explanations even here.
Replies from: mlionson↑ comment by mlionson · 2010-02-18T06:48:08.523Z · LW(p) · GW(p)
“By modulating the boundary conditions we are exploring logical possibilities, and that is how we probe counterfactual realities (in the transactional interpretation)”
But note then that these “logical possibilities” must render a complete map of the blood and all its atomic and subatomic components and oxygen concentration, because without these components and a heart beating properly to oxygenate the blood, the measurement of the blood sugar would be wrong. But without an atmosphere and a universe that allows an atmosphere to have appropriate oxygen content and lungs to breathe in the oxygen, the blood sugar measurement would also be wrong.
But it is not wrong.
So this “logical possibility’ (blood sugar measurement with actual result) must simulate not only the blood, but the heart, the person, the planet on which he resides and the universe which houses the planet, in order for the combined quantum state to appropriately render a universe to calculate the correct results of a blood sugar measurement (or any other wanted measurement) that is made on this merely “possible” universe. Does anyone seriously doubt that multiple different measurements could be made on this so-called merely “possible” universe to make sure that it performs like ours? (Blood sugar measurement, dimensions of room in which experiment was performed, color of wall, etc.)
It is almost humorous to have to ask, “What is the difference between a map that renders every single aspect of a territory, including its subatomic structure, and the territory?”
It is strangely sad (and a tribute to positivism) that we must think that just because we cannot see the needle penetrating the skin, this implies that the blood is merely possible blood, not actual blood. Does our examination of fossils of dinosaurs really imply the mere existence of only possible dinosaurs, just because we can’t see the dinosaurs right now?
So, in order to eliminate the multiverse theory, opponents must believe that blood sugar measurements -- on blood -- in people -- on planets -- in a universe -- are somehow not real just because you can’t see the needle penetrate the skin. How is this philosophically different from measuring our own blood? Why do we not call our own blood mere possible blood, given that when we measure it we also only see the results of the measurement through the lens of our own implicit neuropsychological theories? All data is interpreted through theory, whether it is data about another universe or our own.
Or one must formulate a new law of physics, as Penrose does. Note that one formulates this new law, not because the old laws are not working, but merely because the multiverse conclusion does not seem right to him. I appreciate his honesty in implicitly agreeing that the multiverse conclusion follows unless a new law of physics is invented.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-02-18T09:45:40.126Z · LW(p) · GW(p)
So this “logical possibility’ (blood sugar measurement with actual result) must simulate not only the blood, but the heart, the person, the planet on which he resides and the universe which houses the planet, in order for the combined quantum state to appropriately render a universe to calculate the correct results of a blood sugar measurement (or any other wanted measurement) that is made on this merely “possible” universe.
Slow down there. In order to "simulate" the behavior of an entity X using counterfactual measurement, you need X (both actually and counterfactually) to be isolated from the rest of the universe (interactions must be weak enough to not decohere the superposition). To say that we must be able to simulate the rest of the universe because we could instead be measuring Y, Z, etc is confusing the matter.
The basic claim of the counterfactualists is: We can find out about a possible state of X - call it X' - by inducing a temporary superposition in X - schematically, |X> goes to |X>+|X'>, and then back to |X> - while it is coupled to some other quantum system. We find out something about X' by examining the final state of that other system, but X' itself never actually existed, just X.
So the core claim is that by having quantum control over an entity, you can find out about how it would behave, without actually making it behave that way. This applies to any entity or combination of entities, though it will be much easier for some than others.
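As a crude two-qubit rendering of that schematic (a toy only, which ignores the post-selection and efficiency subtleties of real interaction-free measurements): take qubit 0 as X, with |1> standing in for X', and a probe qubit that interacts only with the X' branch. After folding X back toward its original state, the probe is sometimes found flipped on runs where X is found back in X.

```python
import numpy as np

# Crude rendering of |X> -> |X>+|X'> -> |X> with a probe coupled along the way.
# Qubit 0 plays "X" (|0> = X, |1> = X'); qubit 1 is the probe. This ignores the
# post-selection subtleties of genuine interaction-free measurement.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # makes / unmakes the superposition
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # probe flips only in the X' branch
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.kron([1, 0], [1, 0]).astype(complex)  # |X> |probe=0>
state = np.kron(H, I2) @ state                   # |X> -> (|X> + |X'>)/sqrt(2)
state = CNOT @ state                             # probe interacts with X' branch only
state = np.kron(H, I2) @ state                   # try to fold X' back into X

# Probability that X is found back in its original state AND the probe flipped
# (basis order: |00>, |01>, |10>, |11>):
print("Pr(X back in X, probe flipped) =", abs(state[1]) ** 2)  # ~0.25
```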
Now first I want to point out that being a single-world theorist does not immediately make you a counterfactualist about these measurements. All a single-world theorist has to do is to explain quantum mechanics without talking about a multiverse. Suppose someone were to say of the above process that what actually existed was X, then X', and then X again, and that X' while it existed interacted a little with the auxiliary quantum system. Suddenly the counterfactualist magic is gone, and we know about X' simply because situation X' really did exist for a while, and it left a trace of its existence in something else.
So here is the real issue: The discourse of quantum mechanics is full of "superpositions". Not just E-V bomb-testing and a superposition which goes from one component to two and back to one - but superpositions in great multitudes. Quantum computers in exponentially large superpositions; atoms and molecules in persistent multi-component superpositions; complicated macro-superpositions which appear to be theoretically possible. A single-world interpretation of quantum theory has to deal with superpositions of all sorts, in a way such that multiplicity of possibility does not equate to multiplicity of actuality. That might be achieved by saying that only one component of the superposition is the reality, or even by saying that none of them is, and that what is real (the hidden variables) is something else entirely.
The Copenhagen interpretation does this, but it does it in a non-explanatory way. The wavefunction is just a predictive device; the particle is always somewhere in particular; and it always happens to be where the predictive device says it will be. You can say that, but really you need to say how that manages to be true. You need a model of microscopic dynamics which explains why the wavefunction works. So we can agree that the Copenhagen interpretation is inadequate for a final theory.
Bohm's theory has a single world, but then it uses the wavefunction to guide the events there, so the multiverse seems to be there implicitly. You can, however, rewrite Bohm's equation of motion so it makes no reference to a pilot wave. Instead, you have a nonlocal potential. I'm not aware of significant work in this direction so I don't know if it is capable of truly banishing the multiverse from its explanation of QM.
Anyway, let me return to this issue of interpreting a generic superposition in a single-world way and make some further comments. You should not assume that just because, formally and mathematically, we can write about something like |cat dead>+|cat alive>, there simply must be some physical reality where both cat-dead information and cat-alive information can be extracted with equal ease. This is very far from having been demonstrated.
First of all, on a purely classical level, I can easily cook up a probability distribution like "50% probability the cat is dead and 50% probability the cat is alive", but I cannot deduce from that, that the multiverse must be real. That the formalism can talk about macro-superpositions doesn't yet make them real.
So in order to make things difficult for the single-world theorist, we need superpositions where the various branches seem to have an equal claim on reality, e.g. where we can probe at will for information about macroscopically distinct situations existing within the superposition. Schrodinger's cat doesn't really work because you only ever see the cat dead or alive. If you could somehow open the lid and see the cat alive, and yet also get a photo from a video camera in the box which showed the cat to be dead - now that would be evidence of a multiverse!
Counterfactual measurement and counterfactual computation certainly sound like this. Thus, in counterfactual computation, you couple your readout system to a quantum computer, and then you do the |X> to |X>+|X'> thing and back again, where X' is "quantum computer makes a computation". So the computer is back in the X state, and the computation never was, but it left its counterfactual trace in the readout system. It's as if you opened the lid on the box and the cat was alive, yet the video camera gave you a photo of the cat dead.
However, the superpositions and entanglements involved in these experiments are so micro, and so far from anything macroscopic and familiar, that to talk about them in these ways is very much a rhetorical choice. A common-sense interpretation of counterfactual measurement would be that you are simply measuring an existing property which would produce the counterfactual behavior under the right circumstances. Thus, I might look at a house of cards and claim that it would fall down in the presence of a strong wind. But I didn't arrive at this correct conclusion by mysteriously probing a parallel world where a strong wind actually blows; I arrived at the conclusion by observing the very fine balance of forces existing among the cards in this world. I learnt something true by observing something actual, not by paranormally observing something possible.
The argument that counterfactual measurement implies many-worlds is applying this common-sense principle, but it's also assuming that the actuality which provides the information cannot be anything less than a duplicate world. Because counterfactual measurement has so far only been performed on microscopic quantum systems, we do not get to apply macroscopic intuitions as we could in the situation of a counterfactual photograph of Schrodinger's cat.
To really make progress here, what we need is a thought-experiment in which a macroscopic superposition is made to yield information about more than one branch, as the counterfactualist rhetoric claims. Unfortunately, your needle-in-the-arm experiment is not there yet, because we haven't gone into the exact details of how it's supposed to work. You can't just say, 'If we did a quantum experiment where we could produce data about glucose levels in someone's bloodstream, without the needle having gone into their arm, why, that would prove that the multiverse is real!' Even just as a hypothetical, that's not enough. You need to explain how the decoherence shielding works and what the quantum readout system is - the one which remembers the interaction with the branch where the needle did go in. We need a physically complete description of the thought-experiment in order to reason about the interpretive possibilities.
Sampling the bloodstream is too complicated because of all the complexities of human metabolism. But something like reversibly sampling a salt crystal and measuring ion density - that sounds a more plausible candidate for analysis, something where the experimental setup can be described and where the analysis is tractable. I'll have to see if there's a proposed experiment or known thought-experiment which is suitable...
Replies from: mlionson↑ comment by mlionson · 2010-02-19T04:37:34.560Z · LW(p) · GW(p)
“To really make progress here, what we need is a thought-experiment in which a macroscopic superposition is made to yield information about more than one branch, as the counterfactualist rhetoric claims. Unfortunately, your needle-in-the-arm experiment is not there yet, because we haven't gone into the exact details of how it's supposed to work. You can't just say, 'If we did a quantum experiment where we could produce data about glucose levels in someone's bloodstream, without the needle having gone into their arm, why, that would prove that the multiverse is real!' Even just as a hypothetical, that's not enough. You need to explain how the decoherence shielding works and what the quantum readout system is”
I think you are mistaken here, Mitchell. But let me first thank you for engaging. Most people, when confronted with different outcomes than they expected from the fully logical implications of their own thinking, run screaming from the room.
Perhaps someone could write on these very pages a detailed quantum mechanical and excellent description of a hypothetical experiment in which a “counterfactual” blood sugar measurement is made. But if so, would that then make you believe in the reality of the multiverse? It shouldn’t, from a logical point of view. Because my (or anyone else’s) ability to do that is completely irrelevant to the argument about the reality of the multiverse...
We are interested in the implications of our understanding of the current laws of physics. When we now talk about which “interpretation” of quantum mechanics is the correct one, and that is what I thought we were talking about, we are talking about interpreting the current laws of physics. (Right?) What do the currently understood laws of physics allow us to do, using whichever interpretation one wants, since each interpretation is supposed to give the same predictions? If all the interpretations say that we can make measurements on counterfactual realities, then do all of the interpretations still make logical sense?
I think I have not yet heard an answer to the question, “Is there a current law of physics that prohibits a blood sugar measuring device from measuring counterfactual blood sugars?”
Since I doubt (but could be mistaken) that you are able to point to a current law of physics that says that such a device can’t be created, I will assume that you can’t. That’s OK. I can’t either.
To my knowledge there is no law of physics that says there is an in principle limit on the amount of complexity in a superposition. If there is, show me which one.
Since there is no limit in the current laws of physics about this (and I assume we are agreeing on this point), those who believe in any interpretation of quantum mechanics (that makes these same predictions) should also agree on this point.
So adherents to any of the legitimate quantum mechanical interpretations (e.g. Copenhagen, Transactional, Bohm, Everettian) should also agree that our current laws of physics do not limit the amount of complexity in a superposition.
And if a law of physics does not prevent something, then it can be done given enough knowledge. This is the most important point. Do you (Mitchell) dispute this or can anyone point out why I am mistaken about it? I would really like to know.
So if enough knowledge allows us to create any amount of complex superposition, then the laws of physics are telling us that any measurement that we can currently perform using standard techniques (for example measurements of blood sugars, lengths of tables, colors of walls, etc.) can also be performed using counterfactual measurement.
But if we can make the same measurements in one reality as another, given enough knowledge, why do we have the right to say that one reality is real and the other is not?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-02-20T00:32:03.934Z · LW(p) · GW(p)
Somehow I never examined these experiments and arguments. But what I've learned so far is to reject counterfactualism.
If you have an Everett camera in your Schrodinger cat-box which sometimes takes a picture of a dead cat, even when the cat later walks out of the box alive, then as a single-world theorist I should say the cat was dead when the photo was taken, and later came back to life. That may be a thermodynamic miracle, but that's why I need to know exactly how your Everett camera is supposed to work. It may turn out that it works so rarely that this is the reasonable explanation. Or it may be that you are controlling the microscopic conditions in the box so tightly – in order to preserve quantum coherence – that you are just directly putting the cat's atoms back into the living arrangement yourself.
Such an experiment allegedly involves a superposition of histories, one of the form
|alive> -> |alive> -> |alive>
and the other
|alive> -> |dead> -> |alive>
And then the camera is supposed to have registered the existence of the |dead> component of the superposition during the intermediate state.
But how did that second history even happen? Either it happened by itself, in which case there was the thermodynamic miracle (dead cat spontaneously became live cat). Or, it was caused to happen, in which case you somehow made it happen! Either way, my counter-challenge would be: what's the evidence that the cat was also alive at the time it was photographed in a dead state?
Replies from: mlionson↑ comment by mlionson · 2010-02-20T06:31:35.710Z · LW(p) · GW(p)
I think I see where we are disagreeing.
Consider a quantum computer. If the laws of physics say that only our lack of knowledge limits the amount of complexity in a superposition, and the logic of quantum computation suggests that greater complexity of superposition leads to exponentially increased computational capacity for certain types of computation, then it will be quite possible to have a quantum computer sit on a desktop and make more calculations per second than there are atoms in the universe. My quote above from David Deutsch makes that point. Only the limitations of our current knowledge prevent that.
When we have larger quantum computers, children will be programming universes with all the richness and diversity of our own, and no one will be arguing about the reality of the multiverse. If the capacity for superposition is virtually limitless, the exponential possibilities are virtually limitless. But so will be the capacity to measure “counterfactual” states that are more and more evolved, like dead cats with lower body temperatures. Why will the body temperature be lower? Why will the cat in that universe not (usually) be coming back to life?
As you state, because of the laws of thermodynamics. With greater knowledge on our part, the exponential increase in computational capacity of the quantum computer will parallel the exponential increase in our ability to measure states that are decohering from our own and are further evolved, using what you call the “Everett camera”. I say “decohering from” rather than “decoherent from” because there is never a time when these states are completely thermodynamically separated. And the state vector has unitary evolution. We would not expect it to go backwards any more than you would expect to see your own cat at home go from a dead to an alive state.
I am afraid that whether we use an Everett camera or one supplied to us by evolution (our neuropsychological apparatus) we are always interpreting reality through the lens of our theories. Often these theories are useful from an evolutionary perspective but nonetheless misleading. For example, we are likely to perceive that the world is flat, absent logic and experiment. It is equally easy to miss the existence of the multiverse because of the ruse of positivism. “I didn’t see the needle penetrate the skin in your quantum experiment. It didn’t or (even worse!) can't happen.” But of course when we do this experiment with standard needles, we never truly see the needle go in, either.
I have enjoyed this discussion.
Replies from: pengvado↑ comment by pengvado · 2010-02-20T14:52:00.464Z · LW(p) · GW(p)
If [...] the logic of quantum computation suggests that greater complexity of superposition leads to exponentially increased computational capacity for certain types of computation, then it will be quite possible to have a quantum computer sit on a desktop and make more calculations per second than there are atoms in the universe.
Certainly the default extrapolation is that quantum computers can efficiently perform some types of computation that would on a classical computer take more cycles than the number of atoms in the universe. But that's not quite what you asserted.
Suppose I have a classical random access machine, that runs a given algorithm in time O(N), where the best equivalent algorithm for a classical 1D Turing machine takes O(N^2). Would you say that I really performed N^2 arithmetic ops, and theorize about where the extra calculation happened? Or would you say that the Turing machine isn't a good model of the computational complexity class of classical physics?
I do subscribe to Everett, so I don't object to your conclusion. But I don't think exponential parallelism is a good description of quantum computation, even in the cases where you do get an exponential speedup.
Edit: I said that badly. I think I meant that the parallelism is not inferred from the class of problems you can solve, except insofar as the latter is evidence about the implementation method.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-20T15:04:14.752Z · LW(p) · GW(p)
I do think exponential parallelism is a good description of QC, because any adequate causal model of a quantum computation will invoke an exponential number of nodes in the explanation of the computation's output. Even if we can't always take full advantage of the exponential number of calculations being performed, because of the readout problem, it is nonetheless only possible to explain quantum readouts in general by postulating that an exponential number of parallel calculations went on behind the scenes.
Here, of course, "causal model" is to be taken in the technical Pearl sense of the term, a directed acyclic graph of nodes each of whose values can be computed from its parent nodes plus a background factor of uncertainty that is uncorrelated to any other source of uncertainty, etc. I specify this to cut off any attempt to say something like "well, but those other worlds don't exist until you measure them". Any formal causal model that explains the quantum computation's output will need an exponential number of nodes, since those nodes have real, causal effects on the final probability distribution over outputs.
comment by ata · 2011-02-15T20:43:36.329Z · LW(p) · GW(p)
Interesting quote from Stephen Hawking, apparently he's on board with MWI as the obvious best guess (and with Bayesian reasoning):
HAWKING: I regard [the many worlds interpretation] as self-evidently correct.
T.F.: Yet some don't find it evident to themselves.
HAWKING: Yeah, well, there are some people who spend an awful lot of time talking about the interpretation of quantum mechanics. My attitude — I would paraphrase Göring — is that when I hear of Schrödinger's cat, I reach for my gun.
T.F.: That would spoil the experiment. The cat would have been shot, all right, but not by a quantum effect.
HAWKING (laughing): Yes, it does, because I myself am a quantum effect. But, look: All that one does, really, is to calculate conditional probabilities — in other words, the probability of A happening, given B. I think that that's all the many worlds interpretation is. Some people overlay it with a lot of mysticism about the wave function splitting into different parts. But all that you're calculating is conditional probabilities.
...though I am a bit confused by how he describes it in the last line — doesn't that sound more like non-realism (or at least "shut up and calculate") than MWI?
Replies from: JGWeissman, Luke_A_Somers↑ comment by JGWeissman · 2011-02-15T20:48:49.923Z · LW(p) · GW(p)
Do you have a link to the source? I would be interested in seeing more context.
Replies from: ata↑ comment by ata · 2011-02-15T20:51:06.208Z · LW(p) · GW(p)
It's from here, but no further context was given, unfortunately.
Replies from: None↑ comment by [deleted] · 2011-06-19T06:55:07.360Z · LW(p) · GW(p)
...though I am a bit confused by how he describes it in the last line — doesn't that sound more like non-realism (or at least "shut up and calculate") than MWI?
Isn't the point of the "best" explanation (in the Bayesian sense) that it is the one most at peace with the "shut up and calculate" mentality? My reaction, which please feel free to disregard, is that nothing could be more "real" than saying something like, "Okay, here's the theory, it's self-evident given our observations. Great. Now shut up and multiply. Onto the next question."
↑ comment by Luke_A_Somers · 2012-05-02T14:34:43.170Z · LW(p) · GW(p)
It's saying that there is no mysticism inherent in MWI - you can be just as practical about it as you would otherwise.
comment by themusicgod1 · 2017-01-07T20:26:57.913Z · LW(p) · GW(p)
Our children will look back at the fact that we were STILL ARGUING about this in the early 21st-century, and correctly deduce that we were nuts.
We're still arguing whether or not the world is flat, whether the zodiac should be used to predict near-term fate and whether we should be building stockpiles of nuclear weapons. There's billions left to connect to the internet, and most extant human languages to this day have no written form. Basic literacy and mathematics are still things much of the world struggles with. This is going to go on for a while: the future will not be surprised that the finer details after the 20th decimal place were being debated when we can't even agree on whether intelligent design is the best approach to cell biology or not.
comment by Joel Dignam (joel-dignam) · 2020-02-02T01:19:06.607Z · LW(p) · GW(p)
Just coming in to say that emeralds have now been confirmed as green, not "grue".
comment by Jem Bishop (jem-bishop) · 2023-03-31T12:23:37.044Z · LW(p) · GW(p)
The many worlds interpretation is a vague woolly attempt to paper over the fundamentally paradigm shifting nature of quantum mechanics with a non predictive, but psychologically superficially comforting notion.
The real "interpretation" is the Copenhagen one, but the word "interpretation" is problematic itself as stated by the founders of QM. There is no need for interpretation as the Born rule is completely sufficient to connect the mathematical part of the theory to physical predictions. https://www.youtube.com/watch?v=85TUPcL7aWQ is one of the few videos of QM on YouTube which really gets this right. The Copenhagen interpretation is really a restatement of this fact.
The big difference between QM and classical theory is that classical theory takes for granted an objective reality and makes predictions in that framework. QM doesn't, and instead works from the raw principle of science: that science is about giving an individual a framework to predict (the probabilities of) future observations. For example, in classical mechanics you may ask for the probability of a moving particle being at a point x in the future. In QM you talk about the probability of you observing a particle at point x. Superficially they seem the same, because we have brains so accustomed to classical thinking that we unconsciously equate them, but they are actually very different. And if you try to shoehorn quantum concepts into some objective-reality framework, which is basically what any "interpretation" is doing, then you will always end up with nonsensical results. E.g. if you think of the wave function as being a physical field rather than (roughly) an observer's subjective probability distribution, you will quickly get into violations of special relativity. However, your own probability distribution updates instantly on new data, so it doesn't have the same issue.
The many-worlds interpretation at least doesn't lead to breakages of physical laws, but it is so vague and philosophical that it can't really. It doesn't help in deriving the Born rule, and it just adds non-physically-testable fluff acting as psychological comfort. If I find the fact that spacetime is curved in GR unpalatable, I could come up with an "interpretation" which reframes it in terms of Newtonian forces, but it would be extra fluff that wouldn't add any predictive power or insight, and I view the many-worlds interpretation the same way. What is actually useful is explaining how the classical paradigm emerges as the limit of the quantum one, and that is well understood and follows from quantum mechanics with the Born rule, no interpretation needed.
↑ comment by TAG · 2023-03-31T14:17:34.347Z · LW(p) · GW(p)
There is no need for interpretation as the Born rule is completely sufficient to connect the mathematical part of the theory to physical predictions
The whole point of interpretation is to figure out what the maths means, once you have a mathematically adequate theory. Non-realism is an interpretation -- one possible interpretation. It is not forced on you for reasons of compatibility with SR, because QM without collapse -- MWI, broadly speaking -- is also compatible with SR.