# Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine"

post by shminux · 2013-06-17T05:11:29.160Z · score: 18 (22 votes) · LW · GW · Legacy · 83 comments

This highly speculative paper has been discussed here before, but I found the discussion's quality rather disappointing. People generally took bits and pieces out of context and then mostly offered arguments already addressed in the paper. Truly the internet is the most powerful misinterpretation engine ever built. It's nice to see that Scott, who is no stranger to online adversity, is taking it in stride.

So I went through the paper and took notes, which I posted on my blog, but I am also attaching them below, in a hope that someone else here finds them useful. I initially intended to write up a comment in the other thread, but the size of it grew too large for a comment, so I am making this post. Feel free to downvote if you think it does not belong in Discussion (or for any other reason, of course).

TL;DR: The main idea of the paper is, as far as I can tell, that it is possible to construct a physical model, potentially related to the "free" part of the free will debate, in which some events cannot be predicted at all, not even probabilistically, the way predictions are made in Quantum Mechanics. Scott also proposes one possible mechanism for this "Knightian unpredictability": the not-yet-decohered parts of the initial state of the universe, such as the Cosmic Microwave Background radiation. He does not claim a position on whether the model is correct, only that it is potentially testable, and thus shifts a small piece of the age-old philosophical debate on free will into the realm of physics.

For those here who say that the free-will question has been dissolved, let me note that the picture presented in the paper is one explicitly rejected by Eliezer, probably a bit hastily. Specifically in this diagram:

Eliezer says that the sequential picture on the left is the only correct one, whereas Scott offers a perfectly reasonable model which is better described by the picture on the right. To reiterate: there is a part of the past (Scott calls these "microfacts") which evolves reversibly and unitarily until some time in the future. Given that this part has not been measured yet, there is no way, not even probabilistically, to estimate its influence on some future event, when some of those microfacts interact with the rest of the world and decohere, thus affecting "macrofacts", potentially including human choices. This last speculative idea could be tested if it were shown that small quantum fluctuations can be chaotically amplified to macroscopic levels. If this model is correct, it may have significant consequences for whether a human mind can be successfully cloned, whether an AI can be called sentient, or even how an AI can be made sentient.

My personal impression is that Scott's arguments are much better thought through than the speculations by Penrose in his books, but you may find otherwise. I also appreciate this paper for doing what mainstream philosophers are qualified and ought to do, but consistently fail to do: look at one of the Big Questions, chip away some small solvable piece of it, and offer this piece to qualified scientists.

Anyway, below are my notes and quotes. If you think you have found an obvious objection to some of the quotes, this is likely because I did not provide enough context, so please read the relevant section of the paper before pointing it out. It may also be useful to recite the Litany of a Bright Dilettante.

p.6. On QM's potentially limiting "an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems" : "In this essay I’ll argue strongly [...] that we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question is yes, and other such worlds where the answer is no. And we don’t yet know which kind we live in."

p. 7. "The [...] idea—that of being “willed by you”—is the one I consider outside the scope of science, for the simple reason that no matter what the empirical facts were, a skeptic could always deny that a given decision was “really” yours, and hold the true decider to have been God, the universe, an impersonating demon, etc. I see no way to formulate, in terms of observable concepts, what it would even mean for such a skeptic to be right or wrong."

"the situation seems different if we set aside the “will” part of free will, and consider only the “free” part."

"I’ll use the term freedom, or Knightian freedom, to mean a certain strong kind of physical unpredictability: a lack of determination, even probabilistic determination, by knowable external factors. [..] we lack a reliable way even to quantify using probability distributions."

p.8. "I tend to see Knightian unpredictability as a necessary condition for free will. In other words, if a system were completely predictable (even probabilistically) by an outside entity—not merely in principle but in practice—then I find it hard to understand why we’d still want to ascribe “free will” to the system. Why not admit that we now fully understand what makes this system tick?"

p.12. "from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is." -- Professional philosophers would do well to keep this in mind. Of course, once you break off such an answerable part, it tends to leave the realm of philosophy and become a natural science of one kind or another. Maybe something useful professional philosophers could do is look for "answerable parts", break them off and pass them along to the experts in the subject matter. And maybe look for the answers in the natural sciences and see how they help sculpt the "unanswerable riddles".

p.14. Weak compatibilism: "My perspective embraces the mechanical nature of the universe’s time-evolution laws, and in that sense is proudly “compatibilist.” On the other hand, I care whether our choices can actually be mechanically predicted—not by hypothetical Laplace demons but by physical machines. I’m troubled if they are, and I take seriously the possibility that they aren’t (e.g., because of chaotic amplification of unknowable details of the initial conditions)."

p.19. Importance of copyability: "the problem with this response [that you are nothing but your code] is simply that it gives up on science as something agents can use to predict their future experiences. The agents wanted science to tell them, “given such-and-such physical conditions, here’s what you should expect to see, and why.” Instead they’re getting the worthless tautology, “if your internal code causes you to expect to see X, then you expect to see X, while if your internal code causes you to expect to see Y, then you expect to see Y.” But the same could be said about anything, with no scientific understanding needed! To paraphrase Democritus, it seems like the ultimate victory of the mechanistic worldview is also its defeat." -- If a mind cannot be copied perfectly, then there is no such thing as your "code", i.e. an algorithm which can be run repeatedly.

p.20. Constrained determinism: "A form of “determinism” that applies not merely to our universe, but to any logically possible universe, is not a determinism that has “fangs,” or that could credibly threaten any notion of free will worth talking about."

p.21. Bell's theorem, quoting Conway and Kochen: "if there’s no faster-than-light communication, and Alice and Bob have the “free will” to choose how to measure their respective particles, then the particles must have their own “free will” to choose how to respond to the measurements." -- the particles' "free will" is still constrained by the laws of Quantum Mechanics, however.

p.23. Multiple (micro-)past compatibilism: "multiple-pasts compatibilism agrees that the past microfacts about the world determine its future, and it also agrees that the past macrofacts are outside our ability to alter. [...] our choices today might play a role in selecting one past from a giant ensemble of macroscopically-identical but microscopically-different pasts."

p.26. Singulatarianism: "all the Singulatarians are doing is taking conventional thinking about physics and the brain to its logical conclusion. If the brain is a “meat computer,” then given the right technology, why shouldn’t we be able to copy its program from one physical substrate to another? [...] given the stakes, it seems worth exploring the possibility that there are scientific reasons why human minds can’t be casually treated as copyable computer programs: not just practical difficulties, or the sorts of question-begging appeals to human specialness that are child’s-play for Singulatarians to demolish. If one likes, the origin of this essay was my own refusal to accept the lazy cop-out position, which answers the question of whether the Singulatarians’ ideas are true by repeating that their ideas are crazy and weird. If uploading our minds to digital computers is indeed a fantasy, then I demand to know what it is about the physical universe that makes it a fantasy."

p.27. Predictability of human mind: "I believe neuroscience might someday advance to the point where it completely rewrites the terms of the free-will debate, by showing that the human brain is “physically predictable by outside observers” in the same sense as a digital computer."

p.28. Em-ethics: "I’m against any irreversible destruction of knowledge, thoughts, perspectives, adaptations, or ideas, except possibly by their owner." -- E.g. it's not immoral to stop a simulation which can be resumed or restored from a backup. (The cryonics implications are obvious.) "Deleting the last copy of an em in existence should be prosecuted as murder, not because doing so snuffs out some inner light of consciousness (who is anyone else to know?), but rather because it deprives the rest of society of a unique, irreplaceable store of knowledge and experiences, precisely as murdering a human would." -- Again, this is a pretty transhumanist view, see the anti-deathist position of Eliezer Yudkowsky as expressed in HPMoR.

p.29. Probabilistic uncertainty vs Knightian uncertainty: "if we see a conflict between free will and the deterministic predictability of human choices, then we should see the same conflict between free will and probabilistic predictability, assuming the probabilistic predictions are as accurate as those of quantum mechanics. [...] If we know a system’s quantum state ρ, then quantum mechanics lets us calculate the probability of any outcome of any measurement that might later be made on the system. But if we don’t know the state, then ρ itself can be thought of as subject to Knightian uncertainty."

On the source of this unquantifiable "Knightian uncertainty": "in current physics, there appears to be only one source of Knightian uncertainty that could possibly be both fundamental and relevant to human choices. That source is uncertainty about the microscopic, quantum-mechanical details of the universe’s initial conditions (or the initial conditions of our local region of the universe)"

p.30. "In economics, the “second type” of uncertainty—the type that can’t be objectively quantified using probabilities—is called Knightian uncertainty, after Frank Knight, who wrote about it extensively in the 1920s [49]. Knightian uncertainty has been invoked to explain phenomena from risk-aversion in behavioral economics to the 2008 financial crisis (and was popularized by Taleb [87] under the name “black swans”)."

p.31. "I think that the free-will-is-incoherent camp would be right, if all uncertainty were probabilistic." Bayesian fundamentalism: "Bayesian probability theory provides the only sensible way to represent uncertainty. On that view, “Knightian uncertainty” is just a fancy name for someone’s failure to carry a probability analysis far enough." Against the Dutch-booking argument for Bayesian fundamentalism: "A central assumption on which the Dutch book arguments rely—basically, that a rational agent shouldn’t mind taking at least one side of any bet—has struck many commentators as dubious."

p.32. Objective prior: "one can’t use Bayesianism to justify a belief in the existence of objective probabilities underlying all events, unless one is also prepared to defend the existence of an “objective prior.”"

Universal prior: "a distribution that assigns a probability proportional to 2^(−n) to every possible universe describable by an n-bit computer program." Why it may not be a useful "true" prior: "a predictor using the universal prior can be thought of as a superintelligent entity that figures out the right probabilities almost as fast as is information-theoretically possible. But that’s conceptually very different from an entity that already knows the probabilities."
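As a toy illustration (my own sketch, not anything from the paper), one can play with the 2^(−n) weighting directly. By the Kraft inequality, the weights of any prefix-free set of programs sum to at most 1, which is what makes the universal prior well-defined up to normalization:

```python
# Toy illustration of the universal prior's 2^(-n) weighting (my own
# sketch, not code from the paper). Program lengths here are arbitrary.

def universal_weight(n_bits: int) -> float:
    """Un-normalized universal-prior weight of one n-bit program."""
    return 2.0 ** (-n_bits)

# A universe describable by a 10-bit program is favored a priori over one
# that needs a 30-bit program by a factor of 2^20:
ratio = universal_weight(10) / universal_weight(30)

# Kraft-inequality sanity check: a prefix-free set with at most one
# program per length n = 1..49 has total weight strictly below 1.
total = sum(universal_weight(n) for n in range(1, 50))

print(ratio, total)
```

The point of the sanity check is that no matter how many programs you enumerate, a prefix-free coding keeps the total prior mass bounded, so "shorter program = exponentially more probable" is a coherent prior rather than a divergent one.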

p.34. Quantum no-cloning: "it’s possible to create a physical object that (a) interacts with the outside world in an interesting and nontrivial way, yet (b) effectively hides from the outside world the information needed to predict how the object will behave in future interactions."

p.35. Quantum teleportation answers the problem of "what to do with the original after you fax a perfect copy of you to be reconstituted on Mars": "in quantum teleportation, the destruction of the original copy is not an extra decision that one needs to make; rather, it happens as an inevitable byproduct of the protocol itself"

p.36. Freebit picture: "due to Knightian uncertainty about the universe’s initial quantum state, at least some of the qubits found in nature are regarded as freebits" making "predicting certain future events—possibly including some human decisions—physically impossible, even probabilistically". Freebits are qubits because otherwise they could be measured without violating no-cloning. Observer-independence requirement: "it must not be possible (even in principle) to trace [the freebit's] causal history back to any physical process that generated [the freebit] according to a known probabilistic ensemble."

p.37. On existence of freebits: "In the actual universe, are there any quantum states that can’t be grounded in PMDs?" A PMD, or "past macroscopic determinant", is a classical observable that would have let one non-invasively and probabilistically predict the prospective freebit to arbitrary accuracy. This is the main question of the paper: can freebits from the initial conditions of the universe survive to the present day and even affect human decisions?

p.38. CMB (cosmic microwave background radiation) is one potential example of freebits: the detected CMB radiation has not interacted with matter since the last scattering, roughly 380,000 years after the Big Bang. Objections: a) the last scattering is not the initial conditions by any means; b) one can easily shield oneself from the CMB.

p.39. Freebit effects on decision-making: "what sorts of changes to [the quantum state of the entire universe] would or wouldn’t suffice to ... change a particular decision made by a particular human being? ... For example, would it suffice to change the energy of a single photon impinging on the subject’s brain?" due to potential amplification of "microscopic fluctuations to macroscopic scale". Sort of a quantum butterfly effect.

p.40. Freebit amplification issues: amplification time and locality. Locality: the freebit only affects the person's actions, which mediates all other influences on the rest of the world. I.e. no direct freebit effect on anything else. On why these questions are interesting: "I can easily imagine that in (say) fifty years, neuroscience, molecular biology, and physics will be able to say more about these questions than they can today. And crucially, the questions strike me as scientifically interesting regardless of one’s philosophical predilections."

p.41. Role of freebits: "freebits are simply part of the explanation for how a brain can reach decisions that are not probabilistically predictable by outside observers, and that are therefore “free” in the sense that interests us." A freebit could be just a noise source, one that "foils probabilistic forecasts made by outside observers, yet need not play any role in explaining the system’s organization or complexity."

p.42. "Freedom from the inside out": "isn’t it anti-scientific insanity to imagine that our choices today could correlate nontrivially with the universe’s microstate at the Big Bang?" "Causality is based on entropy increase, so it can only make sense to draw causal arrows “backwards in time,” in those rare situations where entropy is not increasing with time. [...] where physical systems are allowed to evolve reversibly, free from contact with their external environments." E.g. the normal causal arrows break down for, say, CMB photons. -- Not sure how Scott jumps from reversible evolution to backward causality.

p.44. Harmonization problem: backward causality leads to all kinds of problems and paradoxes. Not an issue for the freebit model, as backward causality can point only to "microfacts", which do not affect any "macrofacts". "the causality graph will be a directed acyclic graph (a dag), with all arrows pointing forward in time, except for some “dangling” arrows pointing backward in time that never lead anywhere else." The latter is justified by "no-cloning". In other words, "for all the events we actually observe, we must seek their causes only to their past, never to their future." -- This backward-causality moniker seems rather unfortunate and misleading, given that it seems to replace the usual idea of discovering some (micro)fact about the past with "a microfact is directly caused by a macrofact F to its future". "A simpler option is just to declare the entire concept of causality irrelevant to the microworld."

p.45. Micro/Macro distinction: A potential solution: "a “macrofact” is simply any fact of which the news is already propagating outward at the speed of light". I.e. an interaction turns microfact into a macrofact. This matches Zurek's einselection ideas.

p.47. Objections to freebits. 5.1: Humans are very predictable. "Perhaps, as Kane speculates, we truly exercise freedom only for a relatively small number of “self-forming actions” (SFAs)—that is, actions that help to define who we are—and the rest of the time are essentially “running on autopilot.”" Also note "the conspicuous failure of investors, pundits, intelligence analysts, and so on actually to predict, with any reliability, what individuals or even entire populations will do"

p.48. 5.2: The weather objection: How are brains different from weather? "brains seem “balanced on a knife-edge” between order and chaos: were they as orderly as a pendulum, they couldn’t support interesting behavior; were they as chaotic as the weather, they couldn’t support rationality. [...] a single freebit could plausibly influence the probability of some macroscopic outcome, even if we model all of the system’s constituents quantum-mechanically."

p.49. 5.3: The gerbil objection: if a brain or an AI is isolated from freebits except through a gerbil in a box connected to it, then "the gerbil, though presumably oblivious to its role, is like a magic amulet that gives the AI a “capacity for freedom” it wouldn’t have had otherwise," in essence becoming the soul of the machine. "Of all the arguments directed specifically against the freebit picture, this one strikes me as the most serious." Potential reply: the brain is not like the AI in that "In the AI/gerbil system, the “intelligence” and “Knightian noise” components were cleanly separable from one another. [...] With the brain, by contrast, it’s not nearly so obvious that the “Knightian indeterminism source” can be physically swapped out for a different one, without destroying or radically altering the brain’s cognitive functions as well." Now this comes down to the issue of identity.

"Suppose the nanorobots do eventually complete their scan of all the “macroscopic, cognitively-relevant” information in your brain, and suppose they then transfer the information to a digital computer, which proceeds to run a macroscopic-scale simulation of your brain. Would that simulation be you? If your “original” brain were destroyed in this process, or simply anesthetized, would you expect to wake up as the digital version? (Arguably, this is not even a philosophical question, just a straightforward empirical question asking you to predict a future observation!) [...] My conclusion is that either you can be uploaded, copied, simulated, backed up, and so forth, leading to all the puzzles of personal identity discussed in Section 2.5, or else you can’t bear the same sort of “uninteresting” relationship to the “non-functional” degrees of freedom in your brain that the AI bore to the gerbil box."

p.51. The Initial-State Objection: "the notion of “freebits” from the early universe nontrivially influencing present-day events is not merely strange, but inconsistent with known physics" because "it follows from known physics that the initial state at the Big Bang was essentially random, and can’t have encoded any “interesting” information". The reply is rather involved and discusses several new speculative ideas in physics. It boils down to "when discussing extreme situations like the Big Bang, it’s not okay to ignore quantum-gravitational degrees of freedom simply because we don’t yet know how to model them. And including those degrees of freedom seems to lead straight back to the unsurprising conclusion that no one knows what sorts of correlations might have been present in the universe’s initial microstate."

p.52. The Wigner’s-Friend Objection: A macroscopic object "in a superposition of two mental states" requires freebits to make a separate "free decision" in each one, requiring 2^(number of states) freebits for independent decision making in each state.

Moreover "if the freebit picture is correct, and the Wigner’s-friend experiment can be carried out, then I think we’re forced to conclude that—at least for the duration of the experiment—the subject no longer has the “capacity for Knightian freedom,” and is now a “mechanistic,” externally-characterized physical system similar to a large quantum computer."

p.55. "what makes humans any different [from a computer]? According to the most literal reading of quantum mechanics’ unitary evolution rule—which some call the Many-Worlds Interpretation—don’t we all exist in superpositions of enormous numbers of branches, and isn’t our inability to measure the interference between those branches merely a “practical” problem, caused by rapid decoherence? Here I reiterate the speculation put forward in Section 4.2: that the decoherence of a state should be considered “fundamental” and “irreversible,” precisely when [it] becomes entangled with degrees of freedom that are receding toward our de Sitter horizon at the speed of light, and that can no longer be collected together even in principle. That sort of decoherence could be avoided, at least in principle, by a fault-tolerant quantum computer, as in the Wigner’s-friend thought experiment above. But it plausibly can’t be avoided by any entity that we would currently recognize as “human."

p.56. Difference from Penrose: " I make no attempt to “explain consciousness.” Indeed, that very goal seems misguided to me, at least if “consciousness” is meant in the phenomenal sense rather than the neuroscientists’ more restricted senses."

p.57. "instead of talking about the consistency of Peano arithmetic, I believe Penrose might as well have fallen back on the standard arguments about how a robot could never “really” enjoy fresh strawberries, but at most claim to enjoy them."

"the real issue is not whether the AI follows a program, but rather, whether it follows a program that’s knowable by other physical agents."

"I’m profoundly skeptical that any of the existing objective reduction [by minds] models are close to the truth. The reasons for my skepticism are, first, that the models seem too ugly and ad hoc (GRW’s more so than Penrose’s); and second, that the AdS/CFT correspondence now provides evidence that quantum mechanics can emerge unscathed even from the combination with gravity."

"I regard it as a serious drawback of Penrose’s proposals that they demand uncomputability in the dynamical laws"

p.61. Boltzmann brains: "By the time thermal equilibrium is reached, the universe will (by definition) have “forgotten” all details of its initial state, and any freebits will have long ago been “used up.” In other words, there’s no way to make a Boltzmann brain think one thought rather than another by toggling freebits. So, on this account, Boltzmann brains wouldn’t be “free,” even during their brief moments of existence."

p.62. What Happens When We Run Out of Freebits? "the number of freebits accessible to any one observer must be finite—simply because the number of bits of any kind is then upper-bounded by the observable universe’s finite holographic entropy. [...] this should not be too alarming. After all, even without the notion of freebits, the Second Law of Thermodynamics (combined with the holographic principle and the positive cosmological constant) already told us that the observable universe can witness at most ~10^122 “interesting events,” of any kind, before it settles into thermal equilibrium."
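The ~10^122 figure can be reproduced with a back-of-envelope holographic estimate, S ~ A/(4·l_p²), applied to the de Sitter horizon. This is my own sketch; the horizon radius and the order-unity factors are rough assumptions, not values from the paper:

```python
import math

# Rough holographic entropy of the de Sitter horizon: S ~ A / (4 * l_p^2).
# Both numbers below are coarse assumptions for illustration only.
l_planck = 1.616e-35   # Planck length, meters
R_horizon = 1.6e26     # de Sitter horizon radius ~ c / H_0, meters (rough)

area = 4 * math.pi * R_horizon ** 2   # horizon area, m^2
S = area / (4 * l_planck ** 2)        # entropy in natural units

print(f"S ~ 10^{math.log10(S):.0f}")  # lands on the order of 10^122
```

Even with the sloppiness in the horizon radius and the dropped order-unity factors, the exponent is robust, which is why the "~10^122 interesting events" bound is quoted so casually.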

p.63. Indexicality: "indexical puzzle: a puzzle involving the “first-person facts” of who, what, where, and when you are, which seems to persist even after all the “third-person facts” about the physical world have been specified." This is similar to Knightian uncertainty: "For the indexical puzzles make it apparent that, even if we assume the laws of physics are completely mechanistic, there remain large aspects of our experience that those laws fail to determine, even probabilistically. Nothing in the laws picks out one particular chunk of suitably organized matter from the immensity of time and space, and says, “here, this chunk is you; its experiences are your experiences.”"

Free will connection: Take two heretofore identical Earths, A and B, in an infinite universe, which are about to diverge based on your decision; it is impossible for a superintelligence to predict this decision, even probabilistically, because it is based on a freebit:

"Maybe “youA” is the “real” you, and taking the new job is a defining property of who you are, much as Shakespeare “wouldn’t be Shakespeare” had he not written his plays. So maybe youB isn’t even part of your reference class: it’s just a faraway doppelgänger you’ll never meet, who looks and acts like you (at least up to a certain point in your life) but isn’t you. So maybe p = 1. Then again, maybe youB is the “real” you and p = 0. Ultimately, not even a superintelligence could calculate p without knowing something about what it means to be “you,” a topic about which the laws of physics are understandably silent." "For me, the appeal of this view is that it “cancels two philosophical mysteries against each other”: free will and indexical uncertainty".

p.65. Falsifiability: "If human beings could be predicted as accurately as comets, then the freebit picture would be falsified." But this prediction has "an unsatisfying, “god-of-the-gaps” character". Another prediction: chaotic amplification of quantum uncertainty, locally and on "reasonable" timescales. Another: "consider an omniscient demon, who wants to influence your decision-making process by changing the quantum state of a single photon impinging on your brain. [...] imagine that the photons’ quantum states cannot be altered, maintaining a spacetime history consistent with the laws of physics, without also altering classical degrees of freedom in the photons’ causal past. In that case, the freebit picture would once again fail."

p.68. Conclusions: "Could there exist a machine, consistent with the laws of physics, that “non-invasively cloned” all the information in a particular human brain that was relevant to behavior— so that the human could emerge from the machine unharmed, but would thereafter be fully probabilistically predictable given his or her future sense-inputs, in much the same sense that a radioactive atom is probabilistically predictable?"

"does the brain possess what one could call a clean digital abstraction layer: that is, a set of macroscopic degrees of freedom that (1) encode everything relevant to memory and cognition, (2) can be accurately modeled as performing a classical digital computation, and (3) “notice” the microscopic, quantum-mechanical degrees of freedom at most as pure random number sources, generating noise according to prescribed probability distributions? Or is such a clean separation between the macroscopic and microscopic levels unavailable—so that any attempt to clone a brain would either miss much of the cognitively-relevant information, or else violate the No-Cloning Theorem? In my opinion, neither answer to the question should make us wholly comfortable: if it does, then we haven’t sufficiently thought through the implications!"

In a world where a cloning device is possible the indexical questions "would no longer be metaphysical conundrums, but in some sense, just straightforward empirical questions about what you should expect to observe!"

p.69. Reason and mysticism. "but what do I really think?" "in laying out my understanding of the various alternatives—yes, brain states might be perfectly clonable, but if we want to avoid the philosophical weirdness that such cloning would entail, [...] I don’t have any sort of special intuition [...]. The arguments exhaust my intuition."

## 83 comments

Comments sorted by top scores.

ScottAaronson: Thanks so much for compiling these notes and quotes! But I should say that I thought the other LW thread was totally fine. Sure, lots of people strongly disagreed with me, but I'd be disappointed if LW readers *didn't*! And when one or two people who hadn't read the paper got things wrong, they were downvoted and answered by others who had. Kudos to LW for maintaining such a high-quality discussion about a paper that, as DanielVarga put it, "moves in a direction that's very far from any kind of LW consensus."

If someone is interested in freedom but does not think unpredictability is fundamental to freedom, they are unlikely to be very interested in engaging with a lengthy paper arguing for unpredictability. And the view that unpredictability is not fundamental to freedom is pretty widespread, especially among compatibilists. An unpredictable outcome seems a lot like a random outcome, and something being random seems quite different from it being up to me, from it being under my control. Now, of course, some people think anything predictable can't be free, but if so, the conclusion would seem to be that there is no such thing as freedom, since saying the predictable is unfree doesn't do anything to undermine the reasons for thinking the unpredictable is unfree.

Just as a quick point of information, these arguments are all addressed in Sections 2.2 and 3.1. In particular, while I share the common intuition that "random" is just as incompatible with "free" as "predictable" is, the crucial observation is that "unpredictable" does not in any way imply "random" (in the sense of governed by some knowable probability distribution). But there's a broader point. Suppose we accepted, for argument's sake, that unpredictability is not "fundamental to freedom" (whatever we take "freedom" to mean). Wouldn't the question of whether human choices are predictable or not remain interesting enough in its own right?

I think that the "absolute prediction" question is answered. I mean, I'm acquiring bits of information you can't physically know all the time just by entangling with air molecules that haven't reached you yet. But there's a separate question of "how important is that?" which is a combination of at least two different questions: first "how big an impact does flipping a qubit have on human cognitive actions?" and second "how much do I care that someone can't predict me exactly, if they can predict my macroscopic actions out to a time horizon of minutes / days / years?"

I think you're more concerned about absolute prediction relative to "pretty good" prediction than I am, which is a shame because that's the totally subjective part of the question :)

As a point of information, I too am only interested in predicting macroscopic actions (indeed, only probabilistically), not in what you call "absolute prediction." The worry, of course, is that chaotic amplification of small effects would preclude even "pretty good" prediction.

By "random" I certainly don't mean to imply that the probability distribution must be knowable. I don't see how an unknowable probability distirbution makes things any more up to me, any more under my control.

The popular Not Under My Control objection tacitly assumes that you would have to predetermine the output of your randomness module. However, "you", as a complex system, can still select or filter its output downstream.

As y'all know, I agree with Hume (by way of Jaynes) that the error of projecting internal states of the mind onto the external world is an incredibly common and fundamental hazard of philosophy.

Probability is in the mind to start with; if I think that 103,993 has a 20% chance of being prime (I haven't tried it, but: Prime Number Theorem, plus its not being divisible by 2, 3, or 5, wild ballpark estimate), then this uncertainty is a fact about my state of mind, not a fact about the number 103,993. Even if there are many-worlds whose frequencies correspond to some uncertainties, that itself is just a fact; probability is in the map, not in the territory.
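As a side note, this map-level estimate can be checked mechanically. The sketch below (my own illustration, not part of the original comment) implements the back-of-the-envelope rule being invoked: the Prime Number Theorem gives a prime density of roughly 1/ln(n), which gets rescaled by 1/((1/2)(2/3)(4/5)) once we condition on n not being divisible by 2, 3, or 5:

```python
import math

def is_prime(n):
    """Deterministic trial division; fine for numbers this small."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

n = 103993

# Prime Number Theorem: density of primes near n is about 1/ln(n).
base = 1 / math.log(n)

# Conditioning on n not being divisible by 2, 3, or 5 rescales the
# density by 1 / ((1/2) * (2/3) * (4/5)).
estimate = base / ((1 / 2) * (2 / 3) * (4 / 5))

print(f"PNT-based estimate: {estimate:.0%}")   # 32%
print(f"103993 prime: {is_prime(n)}")          # settles the territory-level fact
```

For what it's worth, the conditioned estimate comes out nearer 32% than 20%, and trial division settles the underlying fact either way, which is exactly the point: the 20% was a fact about the estimator, not about the number.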

Then we have Knightian uncertainty, which is how I feel when I try to estimate AI timelines, i.e., when I query my brain on different occasions it returns different probability estimates, and I know there are going to be some effects which aren't on my causal map. This is a kind of doubly-subjective double-uncertainty. Of course you still have to turn it into betting odds on pain of violating von Neumann-Morgenstern; see also the Ellsberg paradox, in which decision-making becomes inconsistent when ambiguity is given special treatment.

Taking this doubly-map-level property of Knightian uncertainty (a sort of *confusion about* probabilities) and trying to reify it in the territory as a kind of *stuff* (encoded in hidden interstices of QM) which somehow plays an irreplaceable functional role in cognition is...

...probably not going to be the best-received philosophical speculation ever posted to LW. I mean, as a species we should know by now that this kind of idea just basically never turns out to be correct. If X is confusing and Y is confusing this does not make X a good explanation for Y when X makes no new experimental predictions about Y even in retrospect, thou shalt not answer confusing questions by postulating new mysterious opaque substances, etc.

Hi Eliezer,

(1) One of the conclusions I came to from my own study of QM was that we can't always draw as sharp a line as we'd like between "map" and "territory." Yes, there are some things, like Stegosauruses, that seem clearly part of the "territory"; and others, like the *idea* of Stegosauruses, that seem clearly part of the "map." But what about (say) a quantum mixed state? Well, the probability distribution aspect of a mixed state seems pretty "map-like," while the quantum superposition aspect seems pretty "territory-like" ... but oops! we can decompose the same mixed state into a probability distribution over superpositions in infinitely-many nonequivalent ways, and get exactly the same experimental predictions regardless of what choice we make.

(Since you approvingly mentioned Jaynes, I should quote the famous passage where he makes the same point: "But our present QM formalism is not purely epistemological; it is a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature --- all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble.")

Indeed, this strikes me as an example where -- to put it in terms LW readers will understand -- the exact demarcation line between "map" and "territory" is empirically sterile; it doesn't help at all in constraining our anticipated future experiences. (Which, again, is not to deny that certain aspects of our experience are "definitely map-like," while others are "definitely territory-like.")
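The non-uniqueness of mixed-state decompositions in point (1) can be made concrete with a small self-contained sketch (my own illustration, using plain complex arithmetic rather than any quantum library): the equal mixture of |0> and |1>, and the equal mixture of |+> and |->, yield the identical density matrix, hence identical predictions for every possible measurement:

```python
def outer(psi):
    """|psi><psi| as a 2x2 matrix of complex numbers."""
    return [[a * b.conjugate() for b in psi] for a in psi]

def density(ensemble):
    """Density matrix sum_i p_i |psi_i><psi_i| of a list of (p, state) pairs."""
    rho = [[0j, 0j], [0j, 0j]]
    for p, psi in ensemble:
        op = outer(psi)
        for r in range(2):
            for c in range(2):
                rho[r][c] += p * op[r][c]
    return rho

s = 2 ** -0.5
ket0, ket1 = [1 + 0j, 0j], [0j, 1 + 0j]
plus, minus = [s, s], [s, -s]

# Two different "map-like" stories about the same "territory-like" object:
rho_z = density([(0.5, ket0), (0.5, ket1)])   # equal mixture of |0>, |1>
rho_x = density([(0.5, plus), (0.5, minus)])  # equal mixture of |+>, |->

same = all(abs(rho_z[r][c] - rho_x[r][c]) < 1e-12
           for r in range(2) for c in range(2))
print(same)  # True: same mixed state, same predictions for every experiment
```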

(2) It's not entirely true that the ideas I'm playing with "make no new experimental predictions" -- see Section 9.

(3) I don't agree that Knightian uncertainty must always be turned into betting odds, on pain of violating this or that result in decision theory. As I said in this essay, if you look at the standard derivations of probability theory, they typically make a crucial non-obvious assumption, like "given any bet, a rational agent will always be willing to take either one side or the other." If that assumption is dropped, then the path is open to probability intervals, Dempster-Shafer, and other versions of Knightian uncertainty.
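As a toy illustration of what dropping that assumption buys (my own sketch, not an implementation of Dempster-Shafer): an agent whose credence in an event is a probability *interval* rather than a point can coherently decline both sides of any bet whose price falls inside the interval:

```python
def interval_bet_policy(credence_lo, credence_hi, price):
    """A toy Knightian agent with a probability interval [lo, hi] for event E.

    'price' is the cost of a ticket paying 1 if E occurs. A Bayesian with a
    point probability p buys if p > price and sells if p < price; an
    interval-valued agent has a third option: abstain.
    """
    if credence_lo > price:
        return "buy"      # E beats the price on every admissible distribution
    if credence_hi < price:
        return "sell"     # E loses to the price on every admissible distribution
    return "abstain"      # the interval straddles the price: no bet

# With the interval [0.3, 0.7], the agent declines every bet priced inside
# the interval, violating "always takes one side or the other".
print(interval_bet_policy(0.3, 0.7, 0.1))   # buy
print(interval_bet_policy(0.3, 0.7, 0.5))   # abstain
print(interval_bet_policy(0.3, 0.7, 0.9))   # sell
```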

Well, the probability distribution aspect of a mixed state seems pretty "map-like," while the quantum superposition aspect seems pretty "territory-like" ... but oops! we can decompose the same mixed state into a probability distribution over superpositions in infinitely-many nonequivalent ways, and get exactly the same experimental predictions regardless of what choice we make.

I think the underlying problem here is that we're using the word "probability" to denote at least two different things, where those things are causally related in ways that keep them *almost* consistent with each other but not quite. Any system which obeys the axioms of Cox's theorem can potentially be called probability. The numbers representing subjective judgements of an idealized reasoner satisfy those axioms; call these reasoner subjective probabilities, P_r(event,reasoner). The numbers representing a quantum mixed state do too; call these quantum probabilities, P_q(event,observer).

For an idealized reasoner who knows everything about quantum physics, has unlimited computational power, and has some numbers from the quantum system to start with, these two sets of numbers can be kept consistent: reasoner=observer-->P_r(x,reasoner)=P_q(x,observer). In other words, there is a bit of map and a bit of territory, and these contain the exact same numbers, within the intersection of their domains. The numeric equivalence makes it tempting to merge these into one entity, but that entity can't be localized to the map or the territory, because it contains one part from each. And the equivalence breaks down when you step outside of P_q's domain; if a portion of a quantum system is causally isolated from an observer, then P_q becomes undefined, while P_r still has a value and still obeys Cox's axioms.

If the domain of P_r failed to cover all possible events, that would be a huge deal, philosophically. But for P_q to be undefined in some places, that isn't nearly as interesting.

1) I'm not so clear that the map/territory distinction in QM is entirely relevant to the map/territory distinction with regard to uncertainty. The fact that we do not know a fact does not describe in any way the fact itself. Even if we could not, by physical law, possibly know the fact, this still does not equate to the fact having inherent unknowability. There is no functional difference between uncertainty caused by the proposed QM effects and uncertainty caused by any other factor. So long as we are uncertain for any reason, we are uncertain.

We can map that uncertainty with a probability distribution, which will exist in the map; in the territory everything is determined (at least according to the Many-Worlds interpretation Eliezer subscribes to, which is the best explanation I've seen to date, although I've not studied the subject extensively), even if our experiences are probabilistic. Even if it turns out that reality really does just throw dice sometimes, that won't change our ability to map probability over our uncertainty. The proposed source of randomness is not any more or less "really random" than other QM effects, and we can still map a probability distribution over it.

The point of drawing the map/territory distinction is to avoid the error of projecting special qualities onto parts of the territory that should only exist on the map. "Here there be randomness" is fine on the map to mark your uncertainty, but don't read that as ascribing a special "random" property to the territory itself. "Randomness" is not a fundamental feature of reality, as a concept, even if sometimes reality is literally picking numbers at random; you would be mistaken if you tried to draw on that fundamental "randomness" in any way that was not *exactly* equivalent to any other uncertainty, because on the map it looks exactly the same.

2) I'm not really qualified to evaluate such claims.

3) Refusing to bet is, itself, just making a different bet. Refusing a 50/50 bet with a 1000:1 payoff is just as stupid as picking the wrong side. Any such refusal is just selecting the alternative option of "100%: no loss no gain," and decision theories are certainly able to handle options of that nature. Plus, often in reality there is no way to avoid a bet entirely; usually "do nothing" is just one side of the bet. You can't ever avoid the results of decision theory; you are guaranteed to get worse payoffs on average by refusing to take the recommended actions, even in the real world. This is minimized by computational limitations, such that you usually wouldn't be implementing a decision theory anyway or would only be approximating one, but you will still lose out on average.

"Even if we could not, by physical law, possibly know the fact, this still does not equate to the fact having inherent unknowability."

I think the sentence above nicely pinpoints where I part ways from you and Eliezer. To put it bluntly, if a fact is impossible for any physical agent to learn, according to the laws of physics, then that's "inherently unknowable" enough for me! :-) Or to say it even more strongly: *I don't actually care much* whether someone chooses to regard the unknowability of such a fact as "part of the map" or "part of the territory" -- any more than, if a bear were chasing me, I'd worry about whether aggression was an intrinsic attribute of the bear, or an attribute of my human understanding of the bear. In the latter case, I mostly just want to know *what the bear will do*. Likewise in the former case, I mostly just want to know whether the fact *is* knowable -- and if it isn't, then why! I find it strange that, in the free-will discussion, so many commentators seem to pass over the empirical question (in what senses can human decisions *actually* be predicted?) without even evincing curiosity about it, in their rush to argue over the definitions of words. (In AI, the analogue would be the people who argued for centuries about whether a machine could be conscious, without --- until Turing --- ever cleanly separating out the "simpler" question, of whether a machine could be built that couldn't be empirically distinguished from entities we regard as conscious.) A central reason why I wrote the essay was to try to provide a corrective to this (by my lights) anti-empirical tendency.

"you would be mistaken if you tried to draw on that fundamental "randomness" in any way that was not exactly equivalent to any other uncertainty, because on the map it looks exactly the same."

Actually, the randomness that arises from quantum measurement *is* empirically distinguishable from other types of randomness. For while we can measure a state |psi> in a basis not containing |psi>, and thereby get a random outcome, we also could've measured |psi> in a basis containing |psi> -- in which case, we would've confirmed that a measurement in the first basis *must* give a random outcome, whose probability distribution is exactly calculable by the Born rule, and which can't be explainable in terms of subjective ignorance of any pre-existing degrees of freedom unless we give up on locality.
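A minimal numeric sketch of this point (my own illustration, in plain Python): the same state |+> measured in the {|+>, |->} basis gives a deterministic outcome, while measured in the {|0>, |1>} basis the Born rule forces an exactly 50/50 distribution:

```python
def born_probs(psi, basis):
    """Born-rule probabilities |<b|psi>|^2 for each vector b of an orthonormal basis."""
    return [abs(sum(b_i.conjugate() * psi_i for b_i, psi_i in zip(b, psi))) ** 2
            for b in basis]

s = 2 ** -0.5
ket0, ket1 = [1 + 0j, 0j], [0j, 1 + 0j]
plus, minus = [s, s], [s, -s]

psi = plus

# Measuring |+> in a basis containing it: deterministic confirmation.
print(born_probs(psi, [plus, minus]))   # ~[1.0, 0.0]

# Measuring the very same state in the {|0>, |1>} basis: the outcome *must*
# be random, with a distribution calculable exactly from the Born rule.
print(born_probs(psi, [ket0, ket1]))    # ~[0.5, 0.5]
```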

But the more basic point is that, if freebits existed, then they *wouldn't* be "random," as I use the term "random": instead they'd be subject to Knightian uncertainty. So they couldn't be collapsed with the randomness arising from (e.g.) the Born rule or statistical coarse-graining, for that reason even if not also for others.

"Refusing to bet is, itself, just making a different bet."

Well, I'd regard that statement as the defining axiom of a certain limiting case of economic thinking. In practice, however, most economic agents exhibit some degree of *risk-aversion*, which could be defined as "that which means you're no longer in the limiting case where everything is a bet, and the only question is which bet maximizes your expected utility."

Formatting note: You can quote a paragraph by beginning it with '>'.

With regard to "inherent randomness" I think we essentially agree. I tend to use the map/territory construct to talk about it, and you don't, but in the end the only thing that matters is what predictions we can make (predictions made correspond to what's in the "map"). The main point there is to avoid the mind-projection fallacy of purporting that concepts which help you think about a thing must necessarily relate to how the thing really is. You appear to not actually be committing any such fallacy, even though it almost sounded like you were due primarily to different uses of terminology. "Can I predict this fact?" is a perfectly legitimate question, so long as you don't accidentally mix up the answer to that question with something about the fact having some mysterious quality. (I know that this sounds like a pointless distinction, because you aren't actually making the error in question. It's not much of a leap to start spouting nonsense, but it's hard to explain why it isn't much of a leap when it's so far from what either of us is actually saying.)

I am pretty definitely not curious about the empirical question of how accurately humans can really be predicted, except in some sense that the less predictable we are the less I get the feeling of having free will. I already know with high confidence that my internal narrative is consistent though, so I'm not too concerned about it. The main reasoning behind this is the same as why I feel like the question of free will has already been entirely resolved. From the inside, I feel like I make decisions, and then I carry out those decisions. So long as my internal narrative leading up to a decision matches up with my decision, I feel like I have free will. I don't really see the need for any further explanation of free will, or how it really truly exists outside of simply me feeling like I have it. I feel like I have it, and I know why I feel like I have it, and that's all I need to know.

I recognize that quantum randomness is of a somewhat different nature than other uncertainty, since we cannot simply gain more facts to make it go away. However, when we make predictions, it doesn't really matter whether our uncertainty comes from QM or classical subjective uncertainty. We have to follow the same laws of probability either way. It matters with regard to how we generate the probabilities, but not with regard to how we use them. I think the core disagreement, though, is that I don't see Knightian uncertainty as being in its own special class of uncertainty. We can, and indeed have to, account for it in our models or they will be wrong. We can generate a probability distribution for the effect of freebits upon our models, even if that distribution is extremely imprecise. Those models, assuming they are generated correctly, will then give correct probabilistic predictions of human behavior. There's also the issue of whether such effects would actually have anything close to a significant role in our computations; I'm extremely skeptical that such effects would be large enough to so much as flip a single neuron, though I am open to being proven wrong (I'm certainly not a physicist).

Risk-aversion is just a modifier on how the agent computes expected utility. You can't avoid the game just by claiming you aren't playing; maximizing expected utility, by definition, is the outcome you want, and decision theory is all about how to maximize utility. If you're offered a 50/50 bet at 1000:1 odds (in utils) and you refuse it, you're not being risk-averse, you're being stupid. Real agents are often stupid, but it doesn't follow that being stupid is rational. Rational agents maximize utility, using decision theory. All agents always take some side of a bet, once you frame it right (and if any isomorphic phrasing makes the assumptions used to derive probability theory true, then they should hold for the original scenario). There is no way to avoid the necessity of finding betting odds if you want to be right.

Eliezer, with due respect, your comment consisted of re-iterating a bunch of basic arguments that Scott has seen many times before, without even attempting to engage with any of Scott's actual arguments. This seems a bit uncharitable...

The entanglement(s) of hot-noisy-evolved biological cognition with abstract ideals of cognition that Eliezer Yudkowsky vividly describes in *Harry Potter and the Methods of Rationality*, and the quantum entanglement(s) of dynamical flow with the physical processes of cognition that Scott Aaronson vividly describes in *The Ghost in the Quantum Turing Machine*, both find further mathematical/social/philosophical echoes in Joshua Landsberg's *Tensors: Geometry and Applications* (2012), specifically in Landsberg's thought-provoking introductory **Section 0.3: Clash of Cultures** (available as a **PDF on-line**).

*E.g.*, the discussions above relating to "map versus object" distinctions can be summarized by:

**Aaronson's Law of Ontic Mixing:** "We can't always draw as sharp a line as we'd like between *map* and *territory*."

as contrasted with the opposing assertion

**Landsberg's No-Mixing Principle:** "Don't use coordinates unless someone holds a pickle to your head."

As Landsberg remarks

"These conversations [are] very stressful to all involved ... there are language and even philosophical barriers to be overcome."

The Yudkowsky/Aaronson philosophical divide is vividly mirrored in the various divides that Landsberg describes between geometers and algebraists, and mathematicians and engineers.

**Question** Has it happened before, that philosophical conundrums have arisen in the course of STEM investigation, then been largely or even entirely resolved by further STEM progress?

**Answer** Yes of course (beginning for example with Isaac Newton's obvious-yet-wrong notion that "absolute, true and mathematical time, of itself, and from its own nature flows equably without regard to anything external").

**Conclusion** It may be that, in coming decades, the philosophical debate(s) between Yudkowsky and Aaronson will be largely or even entirely resolved by mathematical discourse following the roadmap laid down by Landsberg's outstanding text.

An elaboration of the above argument now appears on *Shtetl Optimized*, essentially as a meditation on the question: what strictly *mathematical* proposition would comprise *rationally* convincing evidence that the key linear-quantum postulates of *The Ghost in the Quantum Turing Machine* amount to "an unredeemed claim [that has] become a roadblock rather than an inspiration" (to borrow an apt phrase from Jaffe and Quinn's arXiv:math/9307227)?

Readers of *Not Even Wrong* seeking further (strictly mathematical) mathematical illumination in regard to these issues may wish to consult Arnold Neumaier and Dennis Westra's textbook-in-progress *Classical and Quantum Mechanics via Lie Algebras* (arXiv:0810.1019, 2011), whose Introduction states:

"The book should serve as an appetizer, inviting the reader to go more deeply into these fascinating, interdisciplinary fields of science. ... [We] focus attention on the simplicity and beauty of theoretical physics, which is often hidden in a jungle of techniques for estimating or calculating quantities of interest."

That the Neumaier/Westra textbook is an unfinished work-in-progress constitutes proof *prima facie* that the final *tractatus* upon these much-discussed *logico-physico-philosophicus* issues has yet to be written! :)

Once more with feeling:

If there is a kind of probability in the mind, that doesn't mean there isn't another kind in reality.

You can't decide the nature of reality with armchair arguments.

Take two heretofore identical Earths A and B in an infinite universe that are about to diverge based on your decision, where it's not possible for a superintelligence to predict this decision, not even probabilistically, because it is based on a freebit:

Suppose you just have the one freebit - it's your standard issue qubit, and you keep it in a box at absolute zero in case you need to make a decision. If this superintelligence can predict you perfectly except for this one qubit, why wouldn't it just assign a uniform probability distribution to the qubit's values and then simulate you for different qubit values to obtain a probability distribution?

I'm probably just confused about something.

The relevant passage of the essay (p. 65) goes into more detail than the paraphrase you quoted, but the short answer is: how does the superintelligence know it should assume the *uniform* distribution, and not some other distribution? For example, suppose someone tips it off about a *third* Earth, C, which is "close enough" to Earths A and B even if not microscopically identical, and in which you made the same decision as in B. Therefore, this person says, the probabilities should be adjusted to (1/3,2/3) rather than (1/2,1/2). It's not obvious whether the person is right---is Earth C really close enough to A and B?---but the superintelligence decides to give the claim some nonzero credence. Then boom, its prior is no longer uniform. It might still be *close*, but if there are thousands of freebits, then the distance from uniformity will quickly get amplified to almost 1.
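The amplification claim in the last sentence is easy to illustrate numerically (my own sketch; the 1% per-bit contamination and the independence assumption are mine). If each freebit's prior is nudged from 0.50 to 0.51, any single bit still looks nearly uniform, but the total variation distance between the two priors on the aggregate of n bits approaches 1. Since the count of ones is a sufficient statistic for independent identically-biased bits, this equals the distance between the corresponding binomial distributions:

```python
import math

def binom_logpmf(n, k, p):
    """Log of the Binomial(n, p) probability of seeing k ones."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def tv_distance(n, p, q):
    """Total variation distance between n independent bits of bias p vs. bias q.

    The count of ones is a sufficient statistic, so this equals the TV
    distance between the two binomial count distributions.
    """
    return 0.5 * sum(abs(math.exp(binom_logpmf(n, k, p)) -
                         math.exp(binom_logpmf(n, k, q)))
                     for k in range(n + 1))

# A 1% contamination per bit is nearly invisible bit by bit, yet over many
# bits the contaminated prior and the uniform one disagree almost completely.
tvs = {n: tv_distance(n, 0.50, 0.51) for n in (100, 10_000, 100_000)}
for n, tv in tvs.items():
    print(n, round(tv, 3))
```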

Your prescription corresponds to E. T. Jaynes's "MaxEnt principle," which basically says to assume a uniform (or more generally, maximum-entropy) prior over any degrees of freedom that you don't understand. But the conceptual issues with MaxEnt are well-known: the uniform prior over *what*, exactly? How do you even parameterize "the degrees of freedom that you don't understand," in order to assume a uniform prior over them? (You don't want to end up like the guy interviewed on Jon Stewart, who earnestly explained that, since the Large Hadron Collider might destroy the earth and might not destroy it, the only rational inference was that it had a 50% chance of doing so. :-) )

To clarify: I don't deny the enormous value of MaxEnt and other Bayesian-prior-choosing heuristics in countless statistical applications. Indeed, if you forced me at gunpoint to bet on something about which I had Knightian uncertainty, then I too would want to use Bayesian methods, making judicious use of those heuristics! But applied statisticians are forced to use tricks like MaxEnt, precisely *because* they lack probabilistic models for the systems they're studying that are anywhere near as complete as (let's say) the quantum model of the hydrogen atom. If you believe there are *any* systems in nature for which, given the limits of our observations, our probabilistic models can never achieve hydrogen-atom-like completeness (even in principle)---so that we'll always be forced to use tricks like MaxEnt---then you believe in freebits. There's nothing more to the notion than that.

how does the superintelligence know it should assume the uniform distribution, and not some other distribution?

Symmetry arguments? And since our superintelligence understands the working of your brain minus this qubit, the symmetry isn't between choices A and B, but rather between the points on the Bloch sphere of the qubit. Learning that in some microscopically independent trial a qubit had turned out in such a way that you chose B doesn't give the superintelligence any information about your qubit, and so wouldn't change its prediction.

A less-super intelligence, who was uncertain about the function (your brain) that mapped qubits onto decisions, *would* update in favor of the functions that produced B - the degree to which this mattered would depend on its probability distribution over functions.

I don't deny the enormous value of MaxEnt and other Bayesian-prior-choosing heuristics in countless statistical applications. Indeed, if you forced me at gunpoint to bet on something about which I had Knightian uncertainty, then I too would want to use Bayesian methods, making judicious use of those heuristics!

This still seems weird, though I believe in freebits by your requirement. Why would you want to use Bayesian methods if no guess (in the form of a probability, to be scored according to some rule that rewards good guesses) is better than another on Knightian problems? And if some guess *is* better than another, why not use the best guess? That's what using probability is all about - if you didn't have incomplete information, you wouldn't need to guess at all.

Well, I can try to make my best guess if forced to -- using symmetry arguments or any other heuristic at my disposal -- but my best guess might differ from some other, equally-rational person's best guess. What I mean by a probabilistic system's being "mechanistic" is that the probabilities can be calculated in such a way that no two rational people will disagree about them (as with, say, a radioactive decay time, or the least significant digit of next week's Dow Jones average).

Also, the point of my "Earth C" example was that symmetry arguments can only be used once we know the reference class of things to symmetrize over -- but which reference class to choose is precisely the sort of thing about which rational people can disagree, with no mathematical or empirical way to resolve the disagreement. (And moreover, where it's not even obvious that there *is* a "true, pre-existing answer" out there in the world, or what it would mean for something to count as such an answer.)

Hm. So then do we have *two* types of problems you're claiming Bayesian inference isn't good enough for? One is problems involving freebits, and another is problems involving disagreements about reference classes?

The reason I don't think "Earth C" had an impact on the perfect-prediction-except-for-isolated-qubits case is because I'd turned the reference class problem into an information content problem, which actually *does* have a correct solution.

where it's not even obvious that there is a "true, pre-existing answer" out there in the world

I think this is the normal and acceptable state of affairs for *all* probability assignments.

How would one test experimentally whether the uncertainty in question is Knightian? Assuming we do our best to make many repeatable runs of some experiment, what set of outcomes would point to the MaxEnt (or any) prior being inadequate?

shminux: I don't know any way, even in principle, to *prove* that uncertainty is Knightian. (How do you decisively refute someone who claims that if only we had a better theory, we could calculate the probabilities?) Though even here, there's an interesting caveat. Namely, I *also* would have thought as a teenager that there could be no way, even in principle, to "prove" something is "truly probabilistic," rather than deterministic but with complicated hidden parameters. But that was before I learned the Bell/CHSH theorem, which does pretty much exactly that (if you grant some mild locality assumptions)! So it's at least logically possible that some future physical theory could *demand* Knightian uncertainty in order to make internal sense, in much the same way that quantum mechanics demands probabilistic uncertainty.
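For readers unfamiliar with Bell/CHSH, here is a bare-bones numeric sketch (standard textbook material, my own illustration rather than anything from the essay): with the optimal measurement angles, the quantum singlet-state correlations give a CHSH value of 2*sqrt(2), exceeding the bound of 2 that any local hidden-variable account must satisfy:

```python
import math

def corr(a, b):
    """Singlet-state correlation E(a, b) = -cos(a - b) for measurement angles a, b."""
    return -math.cos(a - b)

# Standard optimal CHSH angle choices for Alice (a, a2) and Bob (b, b2).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(corr(a, b) - corr(a, b2) + corr(a2, b) + corr(a2, b2))
print(round(S, 3))  # 2.828, i.e. 2*sqrt(2); local hidden variables obey S <= 2
```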

But setting aside that speculative possibility, there's a much more important point in practice: namely, it's much easier to *rule out* that a given source of uncertainty is Knightian, or at least to place upper bounds on how much Knightian uncertainty it can have. To do so, you "merely" have to give a model for the system so detailed that, by using it, you can:

(1) calculate the probability of any event you want, to any desired accuracy,

(2) demonstrate, using repeated tests, that your probabilities are *well-calibrated* (e.g., of the things you say will happen roughly 60% of the time, roughly 60% of them indeed happen, and moreover the subset of those things that happen passes all the standard statistical tests for not having any further structure), and

(3) crucially, provide evidence that your probabilities *don't* merely reflect epistemic ignorance. In practice, this would almost certainly mean providing the causal pathways by which the probabilities can be traced down to the quantum level.
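Step (2) is the easiest of the three to mechanize. Below is a minimal sketch of such a calibration check (my own illustration; the simulated forecaster stands in for the detailed probabilistic model, and by construction its events really do occur with the stated probabilities, so it should pass):

```python
import random

def calibration_table(forecasts, outcomes, n_bins=10):
    """Bucket forecasts by stated probability; compare stated vs. observed frequency."""
    bins = [[] for _ in range(n_bins)]
    for p, happened in zip(forecasts, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, happened))
    return [(sum(p for p, _ in b) / len(b),   # mean stated probability
             sum(h for _, h in b) / len(b),   # observed frequency
             len(b))
            for b in bins if b]

# Simulated well-calibrated forecaster: each event occurs with exactly the
# probability the forecaster states for it.
rng = random.Random(0)
forecasts = [rng.random() for _ in range(100_000)]
outcomes = [rng.random() < p for p in forecasts]

for stated, observed, count in calibration_table(forecasts, outcomes):
    print(f"stated {stated:.2f}  observed {observed:.2f}  (n={count})")
```

A forecaster whose stated 60%-events happen 40% of the time would show a large stated/observed gap in the corresponding bucket, failing step (2) before step (3) is ever reached.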

Admittedly, (1)-(3) sound like a tall order! But I'd say that they've already been done, more or less, for all sorts of complicated multi-particle quantum systems (in chemistry, condensed-matter physics, etc.): we can calculate the probabilities, compare them against observation, *and* trace the origin of the probabilities to the Born rule.

Of course, if you have a large ensemble of identical copies of your system (or things you regard as identical copies), then that makes validating your probabilistic model a lot more straightforward, for then you can replace step (2) by direct experimental *estimation* of the probabilities. But in the above, I was careful never to assume that we had lots of identical copies --- since *if* the freebit picture were accepted, then in many cases of interest to us we wouldn't!

How do you decisively refute someone who claims that if only we had a better theory, we could calculate the probabilities?

This seems like too strong a statement. After all, if one knows exactly the initial quantum state at the Big Bang, then one also knows all the freebits. I believe that what you are after is not proving that no theory would allow us to calculate the probabilities, but rather that our current best theory does not: in your example, that knowing any number of macrofacts from the past still would not allow us to calculate the probabilities of some future macrofacts. My question was about a potential experimental signature of such a situation.

I suspect that this would be a rather worthwhile question to seriously think about, potentially leading to Bell-style insights. I wonder what could be a simple toy model of a situation like that: a general theory G, a partial theory P, and a set of experimental data E from which one can conclude that there is no well-calibrated set of probabilities P->p(E) derivable from P only, even though there is one from G, G->p(E). Hmm, I might be letting myself get carried away a bit.

Why should human choices being randomized by some hypothetical primordial 'freebits' be any different in practice from them being randomized by the seething lottery-ball bounces of trillions of molecules at dozens of meters per second inside cells? That's pretty damn random.

In both cases, the question that interests me is whether an external observer could build a model of the human, by non-invasive scanning, that let it forecast the probabilities of future choices in a well-calibrated way. If the freebits or the trillions of bouncing molecules inside cells served only as randomization devices, then they wouldn't create any obstruction to such forecasts. So the relevant possibility here is that the brain, or maybe other complex systems, *can't* be cleanly decomposed into a "digital computation part" and a "microscopic noise part," such that the former sees the latter purely as a random number source. Again, I certainly don't know that such a decomposition is impossible, but I also don't know any strong arguments from physics or biology that assure us it's possible -- as they say, I hope future research will tell us more.

In the paper you also wrote

With the brain, by contrast, it’s not nearly so obvious that the “Knightian indeterminism source” can be physically swapped out for a different one, without destroying or radically altering the brain’s cognitive functions as well.

But given the relatively large amplitude of the microscopic thermal noise that CellBioGuy points to, what evolutionary reason would favor a strong role for quantum freebits? After all, thermal noise is far beyond the comprehension of any rival or predator organism. So the organism is safe from being too predictable, even if it harnesses only probabilistic randomization sources. Or it might amplify both types of randomness, thermal noise and quantum freebits alike. But in that case I'd expect the thermal noise to dominate the cognitive and behavioral results, just because thermal noise is so richly available.

My thoughts exactly. Real randomness and sufficiently advanced pseudo-randomness would be equally good for practical purposes, all other things being equal, but it might well have been easier to tap into noise than to evolve a PRNG. So we may have ended up with incompatibilist free will by a kind of accident.

"Causality is based on entropy increase, so it can only make sense to draw causal arrows “backwards in time,” in those rare situations where entropy is not increasing with time. [...] where physical systems are allowed to evolve reversibly, free from contact with their external environments." E.g. the normal causal arrows break down for, say, CMB photons. -- Not sure how Scott jumps from reversible evolution to backward causality.

It's a few paragraphs up, where he says:

Now, the creation of reliable memories and records is essentially always associated with an increase in entropy (some would argue by definition). And in order for us, as observers, to speak sensibly about “A causing B,” we must be able to create records of A happening and then B happening. But by the above, this will essentially never be possible unless A is closer in time than B to the Big Bang.

That is, we are only capable of remembering (by any means) things closer to the Big Bang, because memories require entropy increase; and furthermore, memories are necessary for drawing a causal arrow that orders past vs. future. But if there is a system that stays isentropic, it needn't have such an ordering.

Note: this is actually very close to Drescher's resolution of Loschmidt's paradox ("why is physics time-symmetric but entropy isn't?") in *Good and Real*: since entropy determines what we (or any observers) regard as pastward, we will necessarily *observe* only those time histories of increasing entropy.

I think one can also justify talk of backward causality along the lines of what Scott says on p. 23:

our choices today might play a role in selecting one past from a giant ensemble of macroscopically-identical but microscopically-different pasts.

If we are considering actions A and B now, and these correspond to microscopically different past facts X and Y, and there is no other route to knowledge of X or Y, it seems reasonable to agree with Scott that we are "selecting one past".

Rancor commonly arises when STEM discussions in general, and discussions of quantum mechanics in particular, focus upon personal beliefs and/or personal aesthetic sensibilities, as contrasted with verifiable mathematical arguments and/or experimental evidence and/or practical applications.

In this regard, a pertinent quotation is the self-proclaimed "personal belief" that Scott asserts on page 46:

"One obvious way to enforce a macro/micro distinction would be via a dynamical collapse theory. ... I personally cannot believe that Nature would solve the problem of the 'transition between microfacts and macrofacts' in such a seemingly ad hoc way, a way that does so much violence to the clean rules of linear quantum mechanics."

Scott's personal belief calls to mind Nature's solution to the problem of gravitation; a solution that (historically) has been alternatively regarded as both "clean" or "unclean". His quantum beliefs map onto general relativity as follows:

General relativity is "unclean": "We can be confident that Nature will not do violence to the clean rules of linear Euclidean geometry; the notion is so repugnant that the ideas of general relativity CANNOT be correct."

as contrasted with

General relativity is "clean": "Matter tells space how to curve; space tells matter how to move; this principle is so natural and elegant that general relativity MUST be correct!"

Of course, nowadays we are mathematically comfortable with the latter point-of-view, in which Hamiltonian dynamical flows are naturally associated to non-vanishing Lie derivatives of the metric structure g, that is L_X(g) != 0.

This same mathematical toolset allows us to frame the ongoing debate between Scott and his colleagues in mathematical terms, by focusing our attention not upon the metric structure g, but similarly upon the complex structure J.

In this regard a striking feature of Scott's essay is that it provides precisely *one* numbered equation (perhaps this is a deliberate echo of Stephen Hawking's **A Brief History of Time**, which also has precisely one equation?). Fortunately, this lack is admirably remedied by the discussion in Section 8.2 "Holomorphic Objects" of Andrei Moroianu's textbook **Lectures on Kahler Geometry**. See in particular the proof arguments that are associated to Moroianu's Lemma 8.7, which conveniently appears as Lemma 2.7 of an early draft of the textbook, freely available on the arXiv server as **arXiv:math/0402223v1**. Moroianu's draft textbook is short and good, and his completed textbook is longer and better!

Scott's aesthetic personal beliefs naturally join with Moroianu's mathematical toolset to yield a crucial question: Should/will 21st Century STEM researchers embrace with enthusiasm, or reject with disdain, dynamical theories in which L_X(J) != 0?

Scott's essay is entirely correct to remind us that this crucial question is (in our present state-of-knowledge) not susceptible to *any* definitively verifiable arguments from mathematics, physical science, or philosophy (although plenty of arguments from plausibility have been set forth). But on the other hand, students of STEM history will appreciate that the community of *engineers* has rendered a unanimous verdict: L_X(J) != 0 in essentially all modern large-scale quantum simulation codes (matrix product-state calculations provide a prominent example).

So to the extent that biological systems (including brains) are accurately and efficiently simulable by these emerging dynamic-J methods, then Scott's *definition* of quantum dynamical systems may have only marginal relevance to the *practical* understanding of brain dynamics (and it is plausible AFAICT that this proposition is entirely consonant with Scott's notion of "freebits").

Here too there is ample precedent in history: early 19th Century textbooks like Nathaniel Bowditch's renowned *New American Practical Navigator* (1807) succinctly presented **the key mathematical elements of non-Euclidean geometry** (many decades in advance of Gauss, Riemann, and Einstein).

Will 21st Century adventurers learn to navigate nonlinear quantum state-spaces with the same exhilaration that adventurers of earlier centuries learned to navigate first the Earth's nonlinear oceanography, and later the nonlinear geometry of near-earth space-time (via **GPS satellites**, for example)?

**Conclusion** Scott's essay is right to remind us: *We don't know whether Nature's complex structure is comparably dynamic to Nature's metric structure, and finding out will be a great adventure!* Fortunately (for young people especially) textbooks like Moroianu's provide a well-posed roadmap for helping mathematicians, scientists, engineers --- and philosophers too --- in setting forth upon this great adventure. Good!

I honestly don't understand why you invoke Killing vectors to make your point. I am also not sure what this "complex structure J" means (is it some tensor?) in the QM context and what it would mean to take a Lie derivative of J with respect to some vector field.

The dynamicist Vladimir Arnold had a wonderful saying:

"Every mathematician knows that it is impossible to understand any elementary course in thermodynamics."

This saying is doubly true of quantum mechanics. For example, the undergraduate quantum-physics notion of "multiply a quantum vector by *i*" is not so easy to convey without mentioning the number "*i*". Here's how the trick is accomplished. We regard Hilbert space as a real manifold M that is equipped with a symplectic form ω and a metric g. Given an (arbitrary) vector field X on M, we can construct an endomorphism J by first "flatting" X with ω and then "sharping" with g; as a matrix, J = g^{-1}ω. The physicist's equation i^2 = -1 thus is naturally instantiated as the endomorphic condition J∘J = -id.
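A concrete sanity check of this construction (my own illustration; the matrices and sign conventions below are assumptions, since conventions for ω vary):

```python
import numpy as np

# Model the one-dimensional Hilbert space C as the real manifold R^2,
# identifying the complex number x + i*y with the real vector (x, y).
g = np.eye(2)                      # Euclidean metric on R^2
omega = np.array([[0., -1.],       # standard symplectic form on R^2
                  [1.,  0.]])      # (this sign choice makes J act as multiplication by i)

# "Flatting" with omega and "sharping" with g is, in matrix form, J = g^{-1} omega.
J = np.linalg.inv(g) @ omega

# The endomorphic condition J∘J = -id, i.e. the physicist's i^2 = -1:
assert np.allclose(J @ J, -np.eye(2))

# J acts exactly as multiplication by i: i*(x + i*y) = -y + i*x.
x, y = 3.0, 4.0
assert np.allclose(J @ np.array([x, y]), np.array([-y, x]))
```

On flat Hilbert space J is the same matrix at every point, which is exactly why physics students never need to notice it; the geometric point is that on a curved state-space J can vary from point to point.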

**The Point** To a geometer, the Lie derivative of *i* has no very natural definition, but the Lie derivative of the endomorphism J is both mathematically well-defined and (on non-flat quantum state-spaces) need not be zero. The resulting principle that "J is not necessarily constant" thus is entirely natural to geometers, yet well-nigh inconceivable to physics students!

Are you describing

- a (nonstandard?) formalism equivalent to other standard ways of doing quantum mechanics?
- an actually existing theory of quantumish mechanics, genuinely different from others, that makes predictions that match experiment as well as the existing theories do?
- a sketch of how you hope some future theory might look?
- something else?

It sounds as if your endomorphism J is supposed to play the role of *i* somehow, but how? What do you actually do with it, and why?

What is your manifold M actually supposed to be, and why? Is it just a formal feature of the theory, or is it meant to be spacetime, or some kind of phase space, or what?

JLM, the mathematically natural answer to your questions is:

• the quantum *dynamical* framework of (say) Abhay Ashtekar and Troy Schilling's *Geometrical Formulation of Quantum Mechanics* arXiv:gr-qc/9706069v1, and

• the quantum *measurement* framework of (say) Carlton Caves' on-line notes *Completely positive maps, positive maps, and the Lindblad form*, both pullback naturally onto

• the *varietal* frameworks of (say) Joseph Landsberg's *Tensors: Geometry and Applications*

Textbooks like Andrei Moroianu's *Lectures on Kahler Geometry* and Mikio Nakahara's *Geometry, Topology and Physics* are helpful in joining these pieces together, but definitely there is at present no single textbook (or article either) that grinds through all the details. It would have to be a long one.

For young researchers especially, the present literature gap is perhaps a good thing!

(Who's JLM?)

I don't think you actually answered any of my questions; was that deliberate? Anyway, it seems that (1) the general description in terms of Kähler manifolds is a somewhat nonstandard way of formulating "ordinary" quantum mechanics; (2) *J* does indeed play the role of *i*, kinda, since one way you can think about Kähler manifolds is that you start with a symplectic manifold and then give it a local complex structure; (3) yes, M is basically a phase space; (4) you see some great significance in the idea that some Lie derivative of J might be nonzero, but haven't so far explained (a) whether that is a possibility *within* standard QM or a *generalization* beyond standard QM, or (b) along what vector field *V* you're taking the Lie derivative (it looks -- though I don't understand this stuff well at all -- as if it's more natural to take the derivative of something else along J, rather than the derivative of J along something else), or (c) why you regard this as important.

And I still don't see that there's any connection between this and Scott's stuff about free will. (That paragraph you added -- is it somehow suggesting that "dynamic-J methods" for *simulation* can somehow pull out information that according to Scott is in principle inaccessible? Or what?)

gjm asks: "Along what vector field V are you taking the Lie derivative?"

The natural answer is, along a *Hamiltonian* vector field. Now you have all the pieces needed to ask (and even answer!) a broad class of questions like the following:

Alice possesses a computer of exponentially large memory and clock speed, upon which she unravels the Hilbert-space trajectories that are associated to the overall structure (M, g, ω, J, L, H), where M is a Hilbert space (considered as a manifold), g is its metric, ω is its symplectic form, J is the complex structure induced by (g, ω), and (L, H) are the (stochastic, smooth) Lindblad and Hamiltonian potentials that are associated to a physical system that Alice is simulating. Alice thereby computes a (stochastic) classical data-record as the output of her unraveling.

Bob pulls back these structures onto his lower-dimensional varietal manifold (per Joseph Landsberg's recipes), upon which he unravels the pulled-back trajectories, thus obtaining (like Alice) a classical data-record as the output of his unraveling (but using far fewer computational resources).

Then it is natural to consider questions like the following:

**Question** For hot noisy quantum dynamical systems (like brains), what is the lowest-dimension varietal state-space for which Bob's simulation data-record cannot be verifiably distinguished from Alice's simulation data-record? In particular, do polynomially-many varietal dimensions suffice for Bob's record to be indistinguishable from Alice's?

Is this a mathematically *well-posed* question? Definitely! Is it a scientifically *open* question? Yes! Does it have engineering consequences (and even medical consequences) that are *practical* and *immediate*? Absolutely!

What *philosophical* implications would a "yes" answer have for Scott's freebit thesis? Philosophical questions are of course tougher to answer than mathematical, scientific, or engineering questions, but one reasonable answer might be "The geometric foaminess of varietal state-spaces induces Knightian uncertainty in quantum trajectory unravelings that is computationally indistinguishable from the Knightian uncertainty that, in Hilbert-space dynamical systems, can be ascribed to primordial freebits."

Are these questions *interesting*? Here it is neither feasible, nor necessary, nor desirable that everyone think alike!

I cannot tell whether your writing style indicates an inability to bridge an inferential gap or an attempt at status smash ("I'm so smart, look at all the math I know, relevant or not!"). I will assume that it's the former, but will disengage, anyway, given how unproductive this exchange has been so far. Next time, consider using the language appropriate for your audience, if you want to get your point across.

**[deleted]**· 2013-06-18T21:33:24.201Z · score: 4 (4 votes) · LW · GW

I believe you're being uncharitable. JS is a bit effervescent and waxes poetic in a few places, but doesn't say anything obviously wrong.

I would have assumed (perhaps wrongly) that you'd know how to take the Lie derivative of a (1, 1)-tensor field, and there's only a short Googling necessary to ascertain that complex structures are certain kinds of (1, 1)-tensor fields. The linked draft is pretty clear about what L_X(J) = 0 means, and that makes it clear what L_X(J) != 0 means -- X doesn't generate holomorphic flows.

In another comment, he shows how to construct a compatible (almost) complex structure J from a Riemannian structure g and a symplectic structure w. This is actually a special case of a theorem of Arnol'd, which states that fixing any two yields a compatible choice of the third. (I've always heard this called the "two out of three" theorem, but apparently some computer science thing has overtaken this name.) This shows that J is actually relevant to the dynamics of the underlying system -- just as relevant as the symplectic structure is.
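A minimal numerical sketch of the "two out of three" direction described above (my own, assuming the standard polar-decomposition recipe: given a metric g and a nondegenerate 2-form ω, the orthogonal part of A = g^{-1}ω is an almost complex structure compatible with g):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4  # dimension must be even for a nondegenerate skew form to exist

g = np.eye(n)  # Euclidean metric, for simplicity

# A random nondegenerate skew-symmetric form omega.
B = rng.standard_normal((n, n))
omega = B - B.T

# Polar-decomposition recipe: A = g^{-1} omega, and J is the orthogonal
# (polar) part of A, i.e. J = A (A^T A)^{-1/2}.
A = np.linalg.inv(g) @ omega
evals, evecs = np.linalg.eigh(A.T @ A)          # A^T A is symmetric positive-definite
inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
J = A @ inv_sqrt

assert np.allclose(J @ J, -np.eye(n))           # J is an almost complex structure
assert np.allclose(J.T @ g @ J, g)              # J is compatible with g (an isometry)
```

The compatibility assertion is the sense in which J is "just as relevant as the symplectic structure": fixing g and ω pins J down.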

From that, it's not too much of a stretch to make a metaphor between this situation and contrasting the study of non-conservative flows with the study of conservative flows. Seems reasonable enough to me!

but doesn't say anything obviously wrong.

I don't believe I claimed that he did. When I expressed not understanding the relevance or even meaning of his notation, all I got in reply was more poetic waxing. By contrast, your clear and to the point explanation made sense to this lowly ex-physicist. So, I am not sure which part of my reply was uncharitable... Anyway, JS is now on my rather short list of people not worth engaging with in an online discussion.

**[deleted]**· 2013-06-18T22:26:40.267Z · score: 1 (1 votes) · LW · GW

So, I am not sure which part of my reply was uncharitable...

The part where you accuse him of a "status smash" after he directly answered what J was (that is, the composition of those two maps) and why it was important (that is, because it reflects the symplectic structure). The only lack of productivity in this thread is yours.

Anyway, JS is now on my rather short list of people not worth engaging with in an online discussion.

These sorts of declarations always remind me of *Sophist*:

STRANGER: The attempt at universal separation is the final annihilation of all reasoning; for only by the union of conceptions with one another do we attain to discourse of reason.

Huh, I guess I am not alone in being Sidles-averse, for the same reasons.

Shminux, perhaps some *Less Wrong* readers will enjoy the larger reflection of our differing perspectives that is provided by Arthur Jaffe and Frank Quinn’s *‘Theoretical mathematics’: Toward a cultural synthesis of mathematics and theoretical physics* (Bull. AMS 1993, arXiv:math/9307227, 188 citations); an article that was notable for its **biting criticism of Bill Thurston's geometrization program**.

Thurston's gentle, thoughtful, and scrupulously polite response *On proof and progress in mathematics* (Bull. AMS 1994, arXiv:math/9404236, 389 citations) has emerged as a classic of the mathematical literature, and is recommended to modern students by many mathematical luminaries (Terry Tao's weblog sidebar has a permanent link to it, for example).

**Conclusion** It is no bad thing for students to be familiar with this literature, which plainly shows us that it is neither necessary, nor feasible, nor desirable for everyone to think alike!

**[deleted]**· 2013-06-18T22:50:17.676Z · score: 1 (1 votes) · LW · GW

These sorts of declarations always remind me of Sophist:

That's a little out of context...I think the stranger literally means the union of *concepts*, not the union of opinions or points of view.

I see. Well, I very much appreciate your feedback, it's good to know how what I say comes across. I will ponder it further.

Shminux, it may be that you will find that your concerns are substantially addressed by Joshua Landsberg's *Clash of Cultures* essay (2012), which is **cited above**.

"These conversations [are] very stressful to all involved ... there are language and even philosophical barriers to be overcome."

Thank you for your gracious remarks, Paper-Machine. Please let me add, that few (or possibly none) of the math/physics themes of the preceding posts are original to me (that's why I give so many references!)

Students of quantum history will find pulled-back/non-linear metric and symplectic quantum dynamical flows discussed as far back as Paul Dirac's seminal *Note on exchange phenomena in the Thomas atom* (1930); a free-as-in-freedom review of the nonlinear quantum dynamical frameworks that came from Dirac's work (nowadays called the "Dirac-Frenkel-McLachlan variational principle") is Christian Lubich's recent *On Variational Approximations In Quantum Molecular Dynamics* (*Math. Comp.*, 2004).

Shminux, perhaps your appetite for nonlinear quantum dynamical theories would be whetted by reading the most-cited article in the history of physics, which is Walter Kohn and Lu Jeu Sham's *Self-Consistent Equations Including Exchange and Correlation Effects* (1965, cited by 29670); a lively followup article is Walter Kohn's *Electronic Structure of Matter*, which can be read as a good-humored celebration of the practical merits of varietal pullbacks ... or as Walter Kohn calls them, *variational Ansatzes*, having a varietal product form.

There is a considerable overlap between Scott Aaronson's "freebit" hypothesis and the view of quantum mechanics that Walter Kohn expresses in his *Electronic Structure of Matter* lecture (views whose origin Kohn ascribes to Van Vleck):

**Kohn's Provocative Statement** In general the many-electron wavefunction Ψ(r_1, ..., r_N) for a system of N electrons is not a legitimate scientific concept when N ≥ N_0, where N_0 ≈ 10^3.

Scott's essay would (as it seems to me) be stronger if it referenced the views of Kohn (and Van Vleck too) ... especially given Walter Kohn's unique status as the most-cited quantum scientist in all of history!

Walter Kohn's vivid account of how his "magically" powerful quantum simulation formalism grew from strictly "muggle" roots---namely, the study of disordered intermetallic alloys---is plenty of fun too, and eerily foreshadows some of the hilarious scientific themes of Eliezer Yudkowsky's *Harry Potter and the Methods of Rationality*.

In view of these nonpareil theoretical, experimental, mathematical (and nowadays) engineering successes, sustained over many decades, it is implausible (as it seems to me) that the final word has been said in praise of nonlinear quantum dynamical flows! Happy reading Shminux (and everyone else too)!

Quantum aficionados in the mold of **Eliezer Yudkowsky** will have fun looking up "Noether's Theorem" in the index to Michael Spivak's **well-regarded** *Physics for Mathematicians: Mechanics I*, because near to it we notice an irresistible index entry "Muggles, 576", which turns out to be a link to:

**Theorem** The flow of any Hamiltonian vector field consists of canonical transformations

**Proof** (Hogwarts version) ...

**Proof** (Muggles version) ...

**Remark** It is striking that Dirac's *The Principles of Quantum Mechanics* (1930), Feynman's *Lectures on Physics* (1965), Nielsen and Chuang's *Quantum Computation and Quantum Information* (2000)---and Scott Aaronson's essay *The Ghost in the Quantum Turing Machine* (2013) too---all frame their analysis exclusively in terms of (what Michael Spivak aptly calls) Muggle mathematical methods! :)

**Observation** Joshua Landsberg has written an essay **Clash of Cultures** (2012) that discusses the sustained tension between Michael Spivak's "Hogwarts math" and "Muggle math". The tension has historical roots that extend at least as far back as Carl Friedrich Gauss's celebrated apprehension regarding "**the clamor of the Boeotians**" (aka Muggles).

**Conclusion** Michael Spivak's wry mathematical jokes and Eliezer Yudkowsky's wonderfully funny *Harry Potter and the Methods of Rationality* both help us to appreciate that outdated Muggle-mathematical idioms of standard textbooks and philosophical analysis are a substantial impediment to 21st Century learning and rational discourse of all varieties---including philosophical discourse.

Joshua Landsberg has written an essay [...]

The thing you link to is not anything by Joshua Landsberg, but another of your own comments.

That in turn does link to something by Landsberg that has a section headed "Clash of cultures" but it could not by any reasonable stretch be called an essay. It's only a few paragraphs long and about half of it is a quotation from Plato. (It also makes no explicit allusion to Spivak's Hogwarts-Muggles distinction, though I agree it's pointing at much the same divergence.)

gjm avers: "Landsberg ... has a section headed 'Clash of cultures' but it could not by any reasonable stretch be called an essay. It's only a few paragraphs long."

LOL ... gjm, you must *really* dislike Lincoln's ultra-short *Gettysburg Address!*

More seriously, isn't the key question whether Landsberg's essay is *correct* to assert that "there are language and even philosophical barriers to be overcome", in communicating modern geometric insights to STEM researchers trained in older mathematical techniques?

Most seriously of all, gjm, please let me express the hope that the various references that you have pursued have helped to awaken an appreciation of the severe and regrettable mathematical limitations that are inherent in the essays of *Less Wrong's* **Quantum Physics Sequence**, including in particular Eliezer_Yudkowsky's essay *Quantum Physics Revealed As Non-Mysterious*.

The burgeoning 21st century literature of geometric dynamics helps us to appreciate that the 20th century mathematical toolkit of *Less Wrong's* quantum essays perhaps will turn out to be not so much "less wrong" as "not even wrong," in the sense that *Less Wrong*'s quantum essays are devoid of the geometric dynamical ideas that are flowering so vigorously in the contemporary STEM literature.

This is of course very good news for young researchers! :)

you must *really* dislike Lincoln's ultra-short *Gettysburg Address!*

No, I think it's excellent (though I prefer the PowerPoint version), but it isn't an essay.

let me express the hope that the various references [...] have helped to awaken an appreciation of the [...] mathematical limitations that are inherent in [...]

Less Wrong's Quantum Physics Sequence

That sentence appears to me to embody some assumptions you're not in a position to make reliably. Notably: That I think, or thought until John Sidles kindly enlightened me, that Eliezer's QM essays are anything like a complete exposition of QM. As it happens, that wasn't my opinion; for that matter I doubt it is or was even Eliezer's.

In particular, when Eliezer says that QM is "non-mysterious" I don't think he means that everything about it is understood, that there are no further scientific puzzles to solve. He certainly doesn't mean it isn't possible to pick a mathematical framework for talking about QM and then contemplate generalizations. He's arguing against a particular sort of mysterianism one often hears in connection with QM -- the sort that says, roughly, "QM is counterintuitive, which means no one really understands it or can be expected to understand it, so the right attitude towards QM is one of quasi-mystical awe", which is the kind of thing that makes Chopraesque quantum woo get treated with less contempt than it deserves.

Even Newtonian mechanics is mysterious in the sense that there are unsolved problems associated with it. (For instance: What are all the periodic 3-body trajectories? What is the right way to think about the weird measure-zero situations -- involving collisions of more than two particles -- in which the usual rules of Newtonian dynamics *constrain* what happens next without, prima facie, fully *determining* it?) But no one talks about Newtonian mechanics in the silly way some people talk about quantum mechanics, and it's that sort of quantum silliness Eliezer is (at least, as I understand it) arguing against.

I think at least one of us has a serious misunderstanding of what's generally meant by the phrase "not even wrong". To me, it means "sufficiently vague or confused that it doesn't even yield the sort of testable predictions that would allow it to be refuted", which doesn't seem to me to be an accurate description of conventional 20th-century quantum mechanics. It might, indeed, turn out that conventional 20th-century QM is a severely incomplete description of reality, and that in some situations it gives badly wrong predictions, and that some such generalization as you favour will do better, much as relativity and QM improved on classical physics in ways that radically revised our picture of what the universe fundamentally is. But classical physics was not "not even wrong". It was an *excellent* body of ideas. It explained a lot, and it enabled a lot of correct predictions and useful inventions. It was wrong but clear and useful. It was the exact opposite of "not even wrong".

Finally and incidentally: What impression do you think your tone gives to your readers? What impression are you hoping for? I ask because I think the reality may not match your hopes.

gjm avers: "When Eliezer says that QM is 'non-mysterious' ... he's arguing against a particular sort of mysterianism."

That may or may not be the case, but there is zero doubt that this assertion provides rhetorical foundations for the essay ** And the Winner is... Many-Worlds!**.

A valuable service of the mathematical literature relating to geometric mechanics is that it instills a prudent humility regarding assertions like "the Winner is... Many-Worlds!" A celebrated meditation of Alexander Grothendieck expresses this humility:

"A different image came to me a few weeks ago. The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration ... the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it ... yet it finally surrounds the resistant substance."

Surely, in regard to quantum mechanics, the water of our understanding is far from covering the rocks of our ignorance!

As for the tone of my posts, the intent is that people who enjoy references and quotations will take no offense, and people who do *not* enjoy them can simply pass by.

It seems perfectly possible to me -- I make no claims about whether it's actually true -- that the following could all be the case. (1) Of the various physical theories in the possession of the human race that are definite enough to be assessed, one of the clear winners is the Standard Model. (2) Of the various ways to understand the quantum mechanics involved in the Standard Model, the clear winner is "many worlds". (3) The known lacunae in our understanding of physics make it clear that further conceptual advances will be needed before we can claim to understand everything. (4) Those conceptual advances could take just about any form, and everything we currently think we know is potentially up for grabs. (5) "Many worlds" is not *uniquely* under threat from these future conceptual advances -- *everything* is up for grabs -- and the possibility of future conceptual revolutions doesn't call for any more caution about "many worlds" than it does for caution about, say, the inseparability of space and time.

In other words: The fact that science is hard and not yet finished is indeed reason for epistemic humility -- about *everything*; but pointing to some *particular* thing alleged to be a discovery of modern science and saying "no, wait, it could turn out to be wrong" is not justifiable by that fact alone, unless you are happy to do the same for all other alleged discoveries of modern science.

My guess is that you have some *other* reasons for being skeptical about the many-worlds interpretation, besides the very general fact that quantum mechanics might some day be the subject of a great scientific upheaval. But you haven't said what they are.

My point about your tone is not concerned with the fact that you include references and quotations, and *taking offence* isn't the failure mode you might need to worry about. The danger, rather, is that you come across as pushing, with an air of smug superiority, a non-standard view of the present state of science, and that this is liable to pattern-match in people's brains to a host of outright cranks. If you prefer not to be dismissed as a crank, you might want to adjust your tone.

(If, on the other hand, you don't care whether you are dismissed as a crank, then the question in some minds will be "Why should I take him seriously when he doesn't seem to do so himself?".)

gjm asserts: "Of the various ways to understand the quantum mechanics involved in the Standard Model, the clear winner is 'many worlds'."

LOL ... by that lenient standard, the first racehorse out of the gate, or the first sprinter out of the blocks, can reasonably be proclaimed "the clear winner" ... before the race is even finished!

That's a rational announcement only for very short races. Surely there is very little evidence that the course that finishes at comprehensive understanding of Nature's dynamics ... is a short course?

As for my own opinions in regard to quantum dynamical systems, they are more along the lines of **here are some questions that are mathematically well-posed and are interesting to engineers and scientists alike** ... and definitely *not* along the lines of "here are the answers to those questions"!

Actually I clearly and explicitly went out of my way to say I *wasn't* asserting that.

Bored of being laughed at out loud now. (Twice in one short thread is enough.) Bye.

Goodbye, gjm. The impetus that your posts provided to post thought-provoking mathematical links will be missed. :)

Shminux, there are plenty of writers---mostly far more skilled than me!---who have attempted to connect our physical understanding of dynamics to our mathematical understanding of dynamical flows. So please don't let my turgid expository style needlessly deter you from reading this literature!

In this regard, Michael Spivak's works are widely acclaimed; in particular his early gem *Calculus on Manifolds: a Modern Approach to Classical Theorems of Advanced Calculus* (1965) and his recent tome *Physics for Mathematicians: Mechanics I* (2010) (and in a comment on *Shtetl Optimized* I have suggested some short articles by David Ruelle and Vladimir Arnold that address these same themes).

Lamentably, there are (at present) no texts that deploy this modern mathematical language in service of explaining the physical ideas of (say) Nielsen and Chuang's *Quantum Computation and Quantum Information* (2000). Such a text would (as it seems to me) very considerably help to upgrade the overall quality of discussion of quantum questions.

On the other hand, surely it is no bad thing for students to read these various works---each of them terrifically enjoyable in their own way---while wondering: How do these ideas fit together?

Ooo, looks like LW's comment-parsing code just lets HTML through blindly or something. I wonder if I can fix the overhanging italics like this: ? [EDIT: no, apparently not. So whatever did John do?]

Gjm, after a round of suffering with the LaTeX equation editor, any and all markup glitches in the above comment now seem to be fixed (at least, the comment now parses OK on Firefox/OSX and Safari/OSX). Please let me know if you are encountering any remaining markup-related problems, and if so, I will attempt to fix them.

No, it's OK now. The funny thing is that the glitch doesn't seem to have been near any of your LaTeX bits.

I confess that I don't really see the connection between your comment and Scott's essay, beyond the fact that both have something to do with Scott's opinions on quantum mechanics.

Thank you gjm. To the best of my understanding, (1) all markup glitches are fixed; (2) all links are live; and (3) an added paragraph (fourth-from-last) now explicitly links dynamic-J methods to Scott's notion of "freebits".

This last speculative idea could be tested if it is shown that small quantum fluctuations can be chaotically amplified to macroscopic levels.

For this not to be the case would require a heck of a lot of new physics.

Also and separately, it seems very hard to me to prepare an experiment to test this assertion.

For this not to be the case would require a heck of a lot of new physics.

"**Not** to be the case"? Not sure what you mean; new physics for what? During my stint in biophysics long ago I observed that channel-level fluctuations wash out and manifest at most as slight variations in the delay and amplitude of the aggregate impulse. Even that is probably more due to a number of unrelated issues, like synaptic vesicle size, neurotransmitter concentration, and other macroscopic events.

it seems very hard to me to prepare an experiment to test this assertion.

That can be decided once someone figures out what exactly ought to be tested, inputs and outputs. It's not clear to me at this point what it would even mean to test this.

I'm guessing you meant that the question is whether they're actually amplified up to macroscopic levels *in this situation*. I should have figured that that was what you meant just from context.

The point of dissolving the free will question was that it doesn't matter what physics we run on. There is in fact no physics which could possibly cause me to believe I had "free will" in the sense of somehow determining my actions outside of physics, because any method for determining my actions is a physical process. In every possible consistent physics, where the laws of physics are more or less constant, I will believe I have "free will" in the sense that the output of my brain corresponds to the algorithm I feel like I'm implementing. It's logically impossible for me to determine my own algorithm, because I'd need some starting algorithm to select one, regressing infinitely. Not to mention that when I say "me" I'm really just referring to the algorithm I'm currently running anyway.

I accept (as I think everyone does intuitively) that I exist due to forces outside my control. My mental algorithm was shaped over time by forces I could not control. Even now my algorithm can be changed by events outside my control, although this usually happens only in very small degrees and/or with consent from my then-current algorithm. But when I actually make a decision, the output of my body consistently lines up with what feels like the output of my mental algorithm. That's what free will means, and that's where the feeling comes from: when I act, my thoughts and actions are in sync. That's how free will gets dissolved, physics be damned.

In fact, randomness and unpredictability make me feel *less* in control than otherwise. If I really have free will, I ought to be perfectly predictable! You could just simulate the algorithm I run on, to whatever degree that's actually accurate to how my mental architecture operates. If my decisions are unpredictable, even to me, then I start to worry about getting some fundamental part of me randomly erased, or finding that my mental narrative is a lie and I don't control my own actions. Since this doesn't happen (much), I can fairly safely say that random factors are not strongly important in my mental processes, though it's possible I could be mistaken (humans are good at making up consistent narratives).

**[deleted]**· 2013-06-19T05:15:08.912Z · score: 2 (2 votes) · LW · GW

There is in fact no physics which could possibly cause me to believe I had "free will" in the sense of somehow determining my actions outside of physics, because any method for determining my actions is a physical process.

I don't think this is a valid argument without the premise that if my will is a physical cause, it must itself be subject to physical causality. In other words, why couldn't my will be a 'one-way' physical cause, able to cause things in the physical world, but unable to be affected by the physical world? Are you certain that *every* possible physics excludes this kind of one-way causation? Because while the idea strikes me as wildly implausible, it doesn't seem to be contradictory or anything.

I haven't seen an argument anywhere in the sequences (or elsewhere) defending this premise. This has always bothered me, so I'd appreciate it if you could supply the argument or point me to wherever I may have missed it.

In other words, why couldn't my will be a 'one-way' physical cause, able to cause things in the physical world, but unable to be affected by the physical world?

The simplest examples of one-way causes may be the laws of physics. They cause the physical universe to have its properties, but the universe does not cause them to exist or affect their nature. Theoretically there could be other "laws of wills" governing our behavior in a similar way, but I would hesitate to call them actual (or especially individual) wills because of their effective non-agency. Agents' behavior is caused by interaction with the physical universe, whereas the nature of laws is apparently not caused by interaction with the physical universe. A one-way will would be completely sensory-deprived and thus lack effective agency.

**[deleted]**· 2013-06-23T20:35:25.764Z · score: 0 (0 votes) · LW · GW

I think this is a very interesting thought, one famously articulated by Kant: the CI (categorical imperative) is essentially a law in the style of natural law, only pertaining to the will. He agrees with you that the law can't be identified with each individual will (for one thing, some of us are bad or irrational). This avoids the 'sensory deprivation' problem, but keeps the idea that insofar as we're governed by the law of the will, we're free. The result is that we're free only to the extent that we're good.

Actually, that premise isn't needed. No matter what the causes are, or even if I am an ontologically basic mental entity, it still remains true that I did not cause *myself*. I may not have been caused by anything else, but I certainly didn't determine which algorithm is making my decisions. Not to mention that the rest of the world has to affect my will somehow, or I couldn't actually perceive it or act on it intentionally (it's a simple inversion of the argument against epiphenomenalism). One-way causation is easily possible; I could write a computer program that worked like that. But the very act of writing the computer program violates "free will" in the strict knee-jerk sense of "determining my own actions". Determining your own actions requires cyclic causality, and even then I would struggle to accept that I really was determining my actions, rather than just saying that they happened basically by chance (I cannot recall where, but I recently saw something by EY about circular causality and time-turners in Conway's Game of Life, in which he said that the only way to calculate it with a Turing machine is to iterate over all possible universes and rule out the ones where it doesn't happen by chance).
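That brute-force approach to circular causality can be sketched concretely. The following is only a toy illustration of the idea, not EY's actual construction: instead of Conway's Game of Life it uses a much smaller deterministic universe (the Rule 90 cellular automaton on a ring of cells), and the names `step`, `consistent_histories`, and the "time-turner sends cell 0's value at time `t` back to time 0" convention are all choices made here for the example. A Turing machine finds the consistent histories by enumerating every possible initial state and discarding the ones where the "message from the future" contradicts the assumed past.

```python
from itertools import product

def step(cells):
    # Rule 90 on a ring: each cell becomes the XOR of its two neighbours.
    n = len(cells)
    return tuple(cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n))

def consistent_histories(n, t):
    # A "time-turner" sends cell 0's value at time t back to time 0.
    # Brute force over all 2^n initial states, keeping only histories
    # where the assumed past value agrees with the evolved future value.
    results = []
    for cells in product((0, 1), repeat=n):
        state = cells
        for _ in range(t):
            state = step(state)
        if state[0] == cells[0]:  # message from the future is self-consistent
            results.append(cells)
    return results

histories = consistent_histories(n=4, t=3)
print(len(histories), "self-consistent initial states out of", 2 ** 4)
```

Note that nothing inside any single history "causes" its own consistency; consistency is imposed from outside by the filtering step, which is exactly the sense in which such outcomes happen "by chance" rather than by anyone's choice.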

**[deleted]**· 2013-06-20T00:10:27.474Z · score: 1 (1 votes) · LW · GW

It sounds like the premise is not just needed, but quite complicated!

The point of dissolving the free will question was that it doesn't matter what physics we run on. There is in fact no physics which could possibly cause me to believe I had "free will" in the sense of somehow determining my actions outside of physics, because any method for determining my actions is a physical process. In every possible consistent physics, where the laws of physics are more or less constant, I will believe I have "free will" in the sense that the output of my brain corresponds to the algorithm I feel like I'm implementing. It's logically impossible for me to determine my own algorithm, because I'd need some starting algorithm to select one, recurring infinitely. Not to mention that when I say "me" I'm really just referring to the algorithm I'm currently running anyway.

This only makes sense because you've heavily internalized how our actual universe seems to work. There's no reason a priori that you couldn't have a universe with ontologically irreducible mental entities which aren't subject to any restrictions on their actions (this is not too distant from the suggested idea of "freebits").

Even if I were an ontologically basic mental entity, it remains true that I did not determine my own algorithm; I merely execute it. I mean, "freedom" to do anything doesn't actually feel like freedom unless it's curiously structured so that I can do precisely those things I choose to do and no others. Added randomness makes me a lot less confident in my free will, because then my actions might not line up with what I would otherwise have chosen. There aren't supposed to be *any* possible worlds in which I make a different decision (given identical perception), merely worlds in which a different "I" is there to make the decision. I can counterfactually imagine making all sorts of decisions, but which one I will actually make ought to be determined by what algorithm I actually implement; and if I don't implement any consistent algorithm then I ought not to feel like I have free will, since I will notice that my decisions are based on chance rather than on my own preferences and decision algorithm. "Freedom" to do anything is meaningless; either I actually do things at random, or I am constrained by my own thought processes (which ought to be consistent and which cannot have been originally caused by me).

Even if I was an ontologically basic mental entity, it remains true that I did not determine my own algorithm

Among other issues, this assumes that you have an algorithm. It looks like stuff in our universe works that way, and it is important to develop intuitions (in so far as intuitions are internalized experience) that actually reflect how our universe works, but that shouldn't stop you from imagining very different universes.

I mean, if I don't have a consistent algorithm (Turing-computable or not) then I won't feel like I have free will, at least not in the sense that I do right now. Unpredictable is equivalent to random; and if I'm making random decisions then I won't feel like my decisions match up to some internal narrative. The more I notice that my decisions are random, unpredictable in foresight, the more I will feel like I have no free will (more precisely, feel like I have no will rather than that it is not free). But I'm not sure it's even coherent to have those sorts of feelings without implementing some kind of consistent algorithm in the background (I'm not sure it isn't, either, but it certainly feels like there's a potential problem there).

Not to mention, even if I do not implement any consistent algorithm, it does not follow that "I" (whatever on earth "I" can actually mean in such a case) am able to determine my own decisions. Unpredictable decisions do not in any way support the idea of me having free will, or that whatever is determining my decisions is itself a mental entity which has free will.

I suspect that you are using free will in a way that's very different from how many people use the term or have intuitions about it. Many when discussing free will emphasize lack of predictability as a basic part. Have you read Scott's piece? He discusses why that makes sense in some detail. Maybe I should ask what do you think free will would look/feel like?

I think that the initial knee-jerk intuitions most people have about free will are incoherent. It wouldn't look like anything if they were actually true, because they are logically incoherent; there is no way the world could be where a conscious entity decides its own actions, unless potentially cyclic causality is involved. (A quick example of a problem, though perhaps not unsolvable: if I determine all of my own actions, and my state of mind determines my actions -- this must be true in order for me to feel like I have free will, regardless of whether I do for some definition -- then I must determine my own state of mind; and how did I determine the very first state I was in?)

However, that's a very different question from why people, including me, feel like we have free will. The feeling of free will is pretty well linked to our feeling of being able to conceive of counterfactuals in which we make all sorts of different decisions, and then decide between those decisions. Regardless of how overdetermined (read: predictable) our decision is, we feel like we could, if we just wanted to, make a different decision. The key is that we can't actually make ourselves want to make a different decision from the one we do in fact want to make. We can imagine that we could want something different, because we don't know yet what choice we will want to make, but at no point can we actually change which decision we want to make, in a way which does not regress to us wanting to do so.

I also hold, though this was not explored on Less Wrong that I remember, that the existence of a consistent internal narrative is key for free will. Without it we would feel, at the least, that we were not in complete control of our decisions; we would decide to do one thing but then do another, or remember doing things without being able to understand why. To the extent these phenomena actually happen in real life, it seems that this holds (and it certainly seems to hold in fantasy, where mind control is often illustrated as feeling like this).

I should also note that while I do not hold what I understand to be the standard Compatibilist conception of free will, Compatibilism certainly also holds that unpredictability is not a requirement for free will. Certainly this is not a new idea, and at least part of my understanding does fall into standard Compatibilism as I understand it. My views are also derived from the free will subsequence here on LW. These ideas are debatable, but they are certainly not all that special. Perhaps I was assuming too little inferential distance; I didn't attempt to derive the entire argument here, nor did I give a link to the sequence which formed my beliefs; I think I assumed that many people would notice the connection, but perhaps not.

You're strawmanning. An entity controlling its own behaviour is so non-contradictory that there is a branch of engineering dedicated to it: cybernetics.

There's plenty of evidence that people rationalise decisions after the event, so you would have a feeling of narrative under any circumstances.

If there are N things you might want to do, why not make a random choice between them?

You may be constrained, but that doesn't imply constrained down to one.

Free will isn't generally defined as necessarily operating outside of physics, so that is more an attack on a strawman than a dissolution of the question.

Free will also doesn't mean a feeling.

If you didn't know the output of your algorithm, you might feel free, and if you had free will you might feel free too. EY's solution isn't unique.