Comments

Comment by ScottAaronson on [link] Scott Aaronson on free will · 2013-06-20T13:15:45.061Z · LW · GW

(1) Well, that's the funny thing about "should": if copyable entities have a definite goal (e.g., making as many additional copies as possible, taking over the world...), then we simply need to ask what form of reasoning will best help them achieve the goal. If, on the other hand, the question is, "how should a copy reason, so as to accord with its own subjective experience? e.g., all else equal, will it be twice as likely to 'find itself' in a possible world with twice as many copies?" -- then we need some account of the subjective experience of copyable entities before we can even start to answer the question.

(2) Yes, certainly it's possible that we're all living in a digital simulation -- in which case, maybe we're uncopyable from within the simulation, but copyable by someone outside the simulation with "sysadmin access." But in that case, what can I do, except try to reason based on the best theories we can formulate from within the simulation? It's no different than with any "ordinary" scientific question.

(3) Yes, I raised the possibility that copyable minds might have no subjective experience or a different kind of subjective experience, but I certainly don't think we can determine the truth of that possibility by introspection -- or for that matter, even by "extrospection"! :-) The most we could do, maybe, is investigate whether the physical substrate of our minds makes them uncopyable, and therefore whether it's even logically coherent to imagine a distinction between them and copyable minds.

Comment by ScottAaronson on [link] Scott Aaronson on free will · 2013-06-20T07:29:05.274Z · LW · GW

(1) I agree that we can easily conceive of a world where most entities able to pass the Turing Test are copyable. I agree that it's extremely interesting to think about what such a world would be like --- and maybe even try to prepare for it if we can. And as for how the copyable entities will reason about their own existence -- well, that might depend on the goals of whoever or whatever set them loose! As a simple example, the Stuxnet worm eventually deleted itself, if it decided it was on a computer that had nothing to do with Iranian centrifuges. We can imagine that each copy "knew" about the others, and "knew" that it might need to kill itself for the benefit of its doppelgangers. And as for why it behaved that way --- well, we could answer that question in terms of the code, or in terms of the intentions of the people who wrote the code. Of course, if the code hadn't been written by anyone, but was instead (say) the outcome of some evolutionary process, then we'd have to look for an explanation in terms of that process. But of course it would help to have the code to examine!

(2) You argue that, if I were copyable, then the copies would wonder about the same puzzles that the "uncopyable" version wonders about -- and for that reason, it can't be legitimate even to try to resolve those puzzles by assuming that I'm not copyable. Compare to the following argument: if I were a character in a novel, then that character would say exactly the same things I say for the same reasons, and wonder about the same things that I wonder about. Therefore, when reasoning about (say) physics or cosmology, it's illegitimate even to make the tentative assumption that I'm not a character in a novel. This is a fun argument, but there are several possible responses, among them: haven't we just begged the question, by assuming there is something it's like to be a copyable em or a character in a novel? Again, I don't declare with John Searle that there's obviously nothing that it's like, if you think there is then you need your head examined, etc. etc. On the other hand, even if I were a character in a novel, I'd still be happy to have that character assume it wasn't a character -- that its world was "real" -- and see how far it could get with that assumption.

(3) No, I absolutely don't think that we can learn whether we're copyable or not by "introspecting on the quality of our subjective experience," or that we'll ever be able to do such a thing. The sort of thing that might eventually give us insight into whether we're copyable or not would be understanding the effect of microscopic noise on the sodium-ion channels, whether the noise can be grounded in past macroscopic determinants (PMDs), etc. If you'll let me quote from Sec. 2.1 of my essay: "precisely because one can’t decide between conflicting introspective reports, in this essay I’ll be exclusively interested in what can be learned from scientific observation and argument. Appeals to inner experience—including my own and the reader’s—will be out of bounds."

Comment by ScottAaronson on Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" · 2013-06-19T16:56:18.101Z · LW · GW

shminux: I don't know any way, even in principle, to prove that uncertainty is Knightian. (How do you decisively refute someone who claims that if only we had a better theory, we could calculate the probabilities?) Though even here, there's an interesting caveat. Namely, I also would have thought as a teenager that there could be no way, even in principle, to "prove" something is "truly probabilistic," rather than deterministic but with complicated hidden parameters. But that was before I learned the Bell/CHSH theorem, which does pretty much exactly that (if you grant some mild locality assumptions)! So it's at least logically possible that some future physical theory could demand Knightian uncertainty in order to make internal sense, in much the same way that quantum mechanics demands probabilistic uncertainty.
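
To make the Bell/CHSH point concrete, here's a minimal numerical sketch (illustrative only, not from the essay): the standard quantum strategy on a singlet pair attains a CHSH value of 2*sqrt(2), exceeding the bound of 2 that any local "hidden-parameter" account must obey.

    import numpy as np

    # Singlet state |psi-> = (|01> - |10>)/sqrt(2)
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

    def corr(a, b):
        """Quantum correlation E(a,b) when the two qubits of the singlet are
        measured along angles a and b in the x-z plane (outcomes +/-1)."""
        def obs(theta):
            return np.array([[np.cos(theta),  np.sin(theta)],
                             [np.sin(theta), -np.cos(theta)]])
        return float(psi @ np.kron(obs(a), obs(b)) @ psi)

    a1, a2 = 0.0, np.pi / 2          # Alice's two measurement settings
    b1, b2 = np.pi / 4, -np.pi / 4   # Bob's two measurement settings

    S = corr(a1, b1) + corr(a1, b2) + corr(a2, b1) - corr(a2, b2)
    print(abs(S))   # ~2.828 = 2*sqrt(2); any local hidden-parameter model gives at most 2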

But setting aside that speculative possibility, there's a much more important point in practice: namely, it's much easier to rule out that a given source of uncertainty is Knightian, or at least to place upper bounds on how much Knightian uncertainty it can have. To do so, you "merely" have to give a model for the system so detailed that, by using it, you can:

(1) calculate the probability of any event you want, to any desired accuracy,

(2) demonstrate, using repeated tests, that your probabilities are well-calibrated (e.g., of the things you say will happen roughly 60% of the time, roughly 60% of them indeed happen, and moreover the subset of those things that happen passes all the standard statistical tests for not having any further structure), and

(3) crucially, provide evidence that your probabilities don't merely reflect epistemic ignorance. In practice, this would almost certainly mean providing the causal pathways by which the probabilities can be traced down to the quantum level.

Admittedly, (1)-(3) sound like a tall order! But I'd say that they've already been done, more or less, for all sorts of complicated multi-particle quantum systems (in chemistry, condensed-matter physics, etc.): we can calculate the probabilities, compare them against observation, and trace the origin of the probabilities to the Born rule.
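
For concreteness, here's a toy version of the calibration test in step (2), run on hypothetical forecasts that are well-calibrated by construction (the data and code are illustrative only, not from the essay):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical forecasts: the model assigns each event a probability,
    # and we record whether the event actually happened.  Here the outcomes
    # are generated to match the forecasts, so the test should pass.
    forecasts = rng.uniform(0, 1, size=100_000)
    outcomes = rng.uniform(0, 1, size=forecasts.size) < forecasts

    # Bin the forecasts and compare predicted vs. observed frequencies.
    edges = np.linspace(0, 1, 11)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (forecasts >= lo) & (forecasts < hi)
        print(f"forecast {lo:.1f}-{hi:.1f}: predicted ~{forecasts[mask].mean():.2f}, "
              f"observed {outcomes[mask].mean():.2f}")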

Of course, if you have a large ensemble of identical copies of your system (or things you regard as identical copies), then that makes validating your probabilistic model a lot more straightforward, for then you can replace step (2) by direct experimental estimation of the probabilities. But in the above, I was careful never to assume that we had lots of identical copies --- since if the freebit picture were accepted, then in many cases of interest to us we wouldn't!

Comment by ScottAaronson on [link] Scott Aaronson on free will · 2013-06-18T19:24:42.306Z · LW · GW

Alright, consider the following questions:

  • What's it like to be simulated in homomorphically encrypted form (http://en.wikipedia.org/wiki/Homomorphic_encryption)---so that someone who saw the entire computation (including its inputs and outputs), and only lacked a faraway decryption key, would have no clue that the whole thing is isomorphic to what your brain is doing?

  • What's it like to be simulated by a reversible computer, and immediately "uncomputed"? Would you undergo the exact same set of experiences twice? Or once "forwards" and then once "backwards" (whatever that means)? Or, since the computation leaves no trace of its ever having happened, and is "just a convoluted implementation of the identity function," would you not experience anything?

  • Once the code of your brain is stored in a computer, why would anyone even have to bother running the code to evoke your subjective experience? And what counts as running it? Is it enough to do a debugging trace with pen and paper?

  • Suppose that, purely for internal error-correction purposes, a computer actually "executes" you three times in parallel, then outputs the MAJORITY of the results. Is there now one conscious entity or three? (Or maybe 7, one for every nonempty subset of the executions?)

Crucially, unlike some philosophers (e.g. John Searle), I don't pound the table and declare it "obvious" that there's nothing that it's like to be simulated in the strange ways above. All I say is that I don't think I have any idea what it's like, in even the same imperfect way that I can imagine what it's like to be another human being (or even, say, an unclonable extraterrestrial) by analogy with my own case. And that's why I'm not as troubled as some people are, if some otherwise-plausible cosmological theory predicts that the overwhelming majority of "copies" of me should be Boltzmann brains, computer simulations, etc. I view that as a sign, not that I'm almost certainly a copy (though I might be), but simply that I don't yet know the right way to think about this issue, and maybe that there's a good reason (lack of freebits??) why the supposed "copies" shouldn't even be included in my reference class.

Comment by ScottAaronson on Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" · 2013-06-18T18:21:52.257Z · LW · GW

Well, I can try to make my best guess if forced to -- using symmetry arguments or any other heuristic at my disposal -- but my best guess might differ from some other, equally-rational person's best guess. What I mean by a probabilistic system's being "mechanistic" is that the probabilities can be calculated in such a way that no two rational people will disagree about them (as with, say, a radioactive decay time, or the least significant digit of next week's Dow Jones average).

Also, the point of my "Earth C" example was that symmetry arguments can only be used once we know the reference class of things to symmetrize over -- but which reference class to choose is precisely the sort of thing about which rational people can disagree, with no mathematical or empirical way to resolve the disagreement. (And moreover, where it's not even obvious that there is a "true, pre-existing answer" out there in the world, or what it would mean for something to count as such an answer.)

Comment by ScottAaronson on [link] Scott Aaronson on free will · 2013-06-18T14:10:40.389Z · LW · GW

Well, all I can say is that "getting a deity off the hook" couldn't possibly be further from my motives! :-) For the record, I see no evidence for a deity anything like that of conventional religions, and I see enormous evidence that such a deity would have to be pretty morally monstrous if it did exist. (I like the Yiddish proverb: "If God lived on earth, people would break His windows.") I'm guessing this isn't a hard sell here on LW.

Furthermore, for me the theodicy problem isn't even really connected to free will. As Dostoyevsky pointed out, even if there is indeterminist free will, you would still hope that a loving deity would install some "safety bumpers," so that people could choose to do somewhat bad things (like stealing hubcaps), but would be prevented from doing really, really bad ones (like mass-murdering children).

One last clarification: the whole point of my perspective is that I don't have to care about so-called "causal determination"---either the theistic kind or the scientific kind---until and unless it gets cashed out into actual predictions! (See Sec. 2.6.)

Comment by ScottAaronson on [link] Scott Aaronson on free will · 2013-06-18T13:15:46.295Z · LW · GW

Wei, I completely agree that people should "directly attack the philosophical problems associated with copyable minds," and am glad that you, Eliezer, and others have been trying to do that! I also agree that I can't prove I'm not living in a simulation --- nor that that fact won't be revealed to me tomorrow by a being in the meta-world, who will also introduce me to dozens of copies of myself running in other simulations. But as long as we're trading hypotheticals: what if minds (or rather, the sorts of minds we have) can only be associated with uncopyable physical substrates? What if the very empirical facts that we could copy a program, trace its execution, predict its outputs using an abacus, run the program backwards, in heavily-encrypted form, in one branch of a quantum computation, at one step per millennium, etc. etc., were to count as reductios that there's probably nothing that it's like to be that program --- or at any rate, nothing comprehensible to beings such as us?

Again, I certainly don't know that this is a reasonable way to think. I myself would probably have ridiculed it, before I realized that various things that confused me for years and that I discuss in the essay (Newcomb, Boltzmann brains, the "teleportation paradox," Wigner's friend, the measurement problem, Bostrom's observer-counting problems...) all seemed to beckon me in that direction from different angles. So I decided that, given the immense perplexities associated with copyable minds (which you know as well as anyone), the possibility that uncopyability is essential to our subjective experience was at least worth trying to "steelman" (a term I learned here) to see how far I could get with it. So, that's what I tried to do in the essay.

Comment by ScottAaronson on Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" · 2013-06-18T12:33:08.676Z · LW · GW

The relevant passage of the essay (p. 65) goes into more detail than the paraphrase you quoted, but the short answer is: how does the superintelligence know it should assume the uniform distribution, and not some other distribution? For example, suppose someone tips it off about a third Earth, C, which is "close enough" to Earths A and B even if not microscopically identical, and in which you made the same decision as in B. Therefore, this person says, the probabilities should be adjusted to (1/3,2/3) rather than (1/2,1/2). It's not obvious whether the person is right---is Earth C really close enough to A and B?---but the superintelligence decides to give the claim some nonzero credence. Then boom, its prior is no longer uniform. It might still be close, but if there are thousands of freebits, then the distance from uniformity will quickly get amplified to almost 1.
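
A toy calculation of that amplification (added for illustration, not from the essay): even a 1% per-bit deviation from the uniform prior pushes the total variation distance between the two priors on n bits toward 1 as n grows.

    import numpy as np
    from scipy.stats import binom

    def tv_uniform_vs_biased(n, eps):
        """Total variation distance between the uniform prior on n bits and an
        i.i.d. prior with per-bit probability 1/2 + eps.  Since the likelihood
        ratio depends only on the number of 1s, this equals the distance
        between Binomial(n, 1/2) and Binomial(n, 1/2 + eps)."""
        k = np.arange(n + 1)
        return 0.5 * np.abs(binom.pmf(k, n, 0.5) - binom.pmf(k, n, 0.5 + eps)).sum()

    for n in (10, 1_000, 100_000):
        print(n, round(tv_uniform_vs_biased(n, 0.01), 3))
    # The distance climbs toward 1 as the number of bits grows.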

Your prescription corresponds to E. T. Jaynes's "MaxEnt principle," which basically says to assume a uniform (or more generally, maximum-entropy) prior over any degrees of freedom that you don't understand. But the conceptual issues with MaxEnt are well-known: the uniform prior over what, exactly? How do you even parameterize "the degrees of freedom that you don't understand," in order to assume a uniform prior over them? (You don't want to end up like the guy interviewed on Jon Stewart, who earnestly explained that, since the Large Hadron Collider might destroy the earth and might not destroy it, the only rational inference was that it had a 50% chance of doing so. :-) )
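
A toy sketch of that parameterization problem, using the LHC example from the same paragraph (illustrative only): the "maximum-entropy" answer changes as soon as the outcomes are carved up differently.

    import numpy as np
    from scipy.stats import entropy

    # With no constraints, the maximum-entropy distribution over k outcomes is uniform.
    def maxent(k):
        return np.full(k, 1.0 / k)

    # But "uniform over what?" depends on how the possibilities are parameterized.
    coarse = maxent(2)   # {LHC destroys Earth, it doesn't}              -> P(destroys) = 0.50
    fine   = maxent(3)   # {black hole, strangelet, it doesn't destroy}  -> P(destroys) = 0.67

    print(coarse[0], fine[0] + fine[1])                     # 0.5 vs. 0.666...
    print(entropy(coarse, base=2), entropy(fine, base=2))   # each is maximal for its own outcome space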

To clarify: I don't deny the enormous value of MaxEnt and other Bayesian-prior-choosing heuristics in countless statistical applications. Indeed, if you forced me at gunpoint to bet on something about which I had Knightian uncertainty, then I too would want to use Bayesian methods, making judicious use of those heuristics! But applied statisticians are forced to use tricks like MaxEnt, precisely because they lack probabilistic models for the systems they're studying that are anywhere near as complete as (let's say) the quantum model of the hydrogen atom. If you believe there are any systems in nature for which, given the limits of our observations, our probabilistic models can never achieve hydrogen-atom-like completeness (even in principle)---so that we'll always be forced to use tricks like MaxEnt---then you believe in freebits. There's nothing more to the notion than that.

Comment by ScottAaronson on Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" · 2013-06-18T08:19:17.231Z · LW · GW

As a point of information, I too am only interested in predicting macroscopic actions (indeed, only probabilistically), not in what you call "absolute prediction." The worry, of course, is that chaotic amplification of small effects would preclude even "pretty good" prediction.

Comment by ScottAaronson on Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" · 2013-06-17T19:19:50.629Z · LW · GW

"Even if we could not, by physical law, possibly know the fact, this still does not equate to the fact having inherent unknowability."

I think the sentence above nicely pinpoints where I part ways from you and Eliezer. To put it bluntly, if a fact is impossible for any physical agent to learn, according to the laws of physics, then that's "inherently unknowable" enough for me! :-) Or to say it even more strongly: I don't actually care much whether someone chooses to regard the unknowability of such a fact as "part of the map" or "part of the territory" -- any more than, if a bear were chasing me, I'd worry about whether aggression was an intrinsic attribute of the bear, or an attribute of my human understanding of the bear. In the latter case, I mostly just want to know what the bear will do. Likewise in the former case, I mostly just want to know whether the fact is knowable -- and if it isn't, then why! I find it strange that, in the free-will discussion, so many commentators seem to pass over the empirical question (in what senses can human decisions actually be predicted?) without even evincing curiosity about it, in their rush to argue over the definitions of words. (In AI, the analogue would be the people who argued for centuries about whether a machine could be conscious, without --- until Turing --- ever cleanly separating out the "simpler" question, of whether a machine could be built that couldn't be empirically distinguished from entities we regard as conscious.) A central reason why I wrote the essay was to try to provide a corrective to this (by my lights) anti-empirical tendency.

"you would be mistaken if you tried to draw on that fundamental "randomness" in any way that was not exactly equivalent to any other uncertainty, because on the map it looks exactly the same."

Actually, the randomness that arises from quantum measurement is empirically distinguishable from other types of randomness. For while we can measure a state |psi> in a basis not containing |psi>, and thereby get a random outcome, we also could've measured |psi> in a basis containing |psi> -- in which case, we would've confirmed that a measurement in the first basis must give a random outcome, whose probability distribution is exactly calculable by the Born rule, and which can't be explained in terms of subjective ignorance of any pre-existing degrees of freedom unless we give up on locality.
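
A small numerical sketch of the two measurements just described (illustrative only): by the Born rule, measuring |psi> in a basis containing |psi> gives a deterministic outcome, while measuring it in another basis gives exactly calculable probabilities.

    import numpy as np

    theta = 0.3
    psi = np.array([np.cos(theta), np.sin(theta)])   # a qubit state cos(theta)|0> + sin(theta)|1>

    def born_probs(state, basis):
        """Born-rule outcome probabilities for measuring `state` in an
        orthonormal `basis` (rows are the basis vectors)."""
        return np.abs(basis @ state) ** 2

    # Basis containing |psi> itself: the outcome is certain.
    basis_with_psi = np.array([psi, [-psi[1], psi[0]]])
    print(born_probs(psi, basis_with_psi))   # [1. 0.]

    # The computational basis {|0>, |1>}: an irreducibly random outcome.
    print(born_probs(psi, np.eye(2)))        # [cos^2(theta), sin^2(theta)]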

But the more basic point is that, if freebits existed, then they wouldn't be "random," as I use the term "random": instead they'd be subject to Knightian uncertainty. So they couldn't be collapsed with the randomness arising from (e.g.) the Born rule or statistical coarse-graining, for that reason even if not also for others.

"Refusing to bet is, itself, just making a different bet."

Well, I'd regard that statement as the defining axiom of a certain limiting case of economic thinking. In practice, however, most economic agents exhibit some degree of risk-aversion, which could be defined as "that which means you're no longer in the limiting case where everything is a bet, and the only question is which bet maximizes your expected utility."
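
A standard textbook illustration of risk-aversion in the narrow, expected-utility sense (added for concreteness; it does not capture Knightian uncertainty itself): an agent with concave, logarithmic utility declines a fair 50/50 bet that a pure expected-money maximizer would be indifferent to.

    import math

    wealth, stake = 100.0, 50.0

    # A fair 50/50 bet leaves expected *money* unchanged...
    ev_money = 0.5 * (wealth + stake) + 0.5 * (wealth - stake)

    # ...but lowers expected *log utility*, so a risk-averse agent refuses it.
    u = math.log
    eu_bet = 0.5 * u(wealth + stake) + 0.5 * u(wealth - stake)
    eu_decline = u(wealth)

    print(ev_money == wealth)        # True: the bet is "fair"
    print(eu_bet < eu_decline)       # True: the risk-averse agent still declines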

Comment by ScottAaronson on Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" · 2013-06-17T17:44:08.401Z · LW · GW

In both cases, the question that interests me is whether an external observer could build a model of the human, by non-invasive scanning, that let it forecast the probabilities of future choices in a well-calibrated way. If the freebits or the trillions of bouncing molecules inside cells served only as randomization devices, then they wouldn't create any obstruction to such forecasts. So the relevant possibility here is that the brain, or maybe other complex systems, can't be cleanly decomposed into a "digital computation part" and a "microscopic noise part," such that the former sees the latter purely as a random number source. Again, I certainly don't know that such a decomposition is impossible, but I also don't know any strong arguments from physics or biology that assure us it's possible -- as they say, I hope future research will tell us more.

Comment by ScottAaronson on Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" · 2013-06-17T17:19:40.314Z · LW · GW

Hi Eliezer,

(1) One of the conclusions I came to from my own study of QM was that we can't always draw as sharp a line as we'd like between "map" and "territory." Yes, there are some things, like Stegosauruses, that seem clearly part of the "territory"; and others, like the idea of Stegosauruses, that seem clearly part of the "map." But what about (say) a quantum mixed state? Well, the probability distribution aspect of a mixed state seems pretty "map-like," while the quantum superposition aspect seems pretty "territory-like" ... but oops! we can decompose the same mixed state into a probability distribution over superpositions in infinitely-many nonequivalent ways, and get exactly the same experimental predictions regardless of what choice we make.
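
A minimal numerical illustration of that last point (a standard textbook fact, added here for concreteness): the maximally mixed qubit state arises equally well from a 50/50 mixture of |0> and |1> or from a 50/50 mixture of |+> and |->, and the two ensembles give the same density matrix and hence identical predictions.

    import numpy as np

    def density(ensemble):
        """Density matrix of a probabilistic mixture of pure states;
        `ensemble` is a list of (probability, state-vector) pairs."""
        return sum(p * np.outer(v, v.conj()) for p, v in ensemble)

    ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)

    rho_A = density([(0.5, ket0), (0.5, ket1)])    # 50/50 mixture of |0>, |1>
    rho_B = density([(0.5, plus), (0.5, minus)])   # 50/50 mixture of |+>, |->

    print(np.allclose(rho_A, rho_B))   # True: same mixed state, two different "stories" about it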

(Since you approvingly mentioned Jaynes, I should quote the famous passage where he makes the same point: "But our present QM formalism is not purely epistemological; it is a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature --- all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble.")

Indeed, this strikes me as an example where -- to put it in terms LW readers will understand -- the exact demarcation line between "map" and "territory" is empirically sterile; it doesn't help at all in constraining our anticipated future experiences. (Which, again, is not to deny that certain aspects of our experience are "definitely map-like," while others are "definitely territory-like.")

(2) It's not entirely true that the ideas I'm playing with "make no new experimental predictions" -- see Section 9.

(3) I don't agree that Knightian uncertainty must always be turned into betting odds, on pain of violating this or that result in decision theory. As I said in the essay, if you look at the standard derivations of probability theory, they typically make a crucial non-obvious assumption, like "given any bet, a rational agent will always be willing to take either one side or the other." If that assumption is dropped, then the path is open to probability intervals, Dempster-Shafer, and other versions of Knightian uncertainty.
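
To make the alternative concrete, here's a minimal Dempster-Shafer sketch (illustrative only, not from the essay): a mass function that deliberately leaves some mass on the whole frame of possibilities yields an interval [belief, plausibility] for each event, rather than a single betting probability.

    frame = frozenset({"rain", "no_rain"})

    # A Dempster-Shafer mass function: some evidence for rain, some against,
    # and the remaining mass left on the whole frame (unresolved ignorance).
    mass = {
        frozenset({"rain"}): 0.3,
        frozenset({"no_rain"}): 0.2,
        frame: 0.5,
    }

    def belief(event):
        return sum(m for subset, m in mass.items() if subset <= event)

    def plausibility(event):
        return sum(m for subset, m in mass.items() if subset & event)

    rain = frozenset({"rain"})
    print(belief(rain), plausibility(rain))   # 0.3 0.8 -- an interval, not a single probability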

Comment by ScottAaronson on Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" · 2013-06-17T11:37:13.293Z · LW · GW

Just as a quick point of information, these arguments are all addressed in Sections 2.2 and 3.1. In particular, while I share the common intuition that "random" is just as incompatible with "free" as "predictable" is, the crucial observation is that "unpredictable" does not in any way imply "random" (in the sense of governed by some knowable probability distribution). But there's a broader point. Suppose we accepted, for argument's sake, that unpredictability is not "fundamental to freedom" (whatever we take "freedom" to mean). Wouldn't the question of whether human choices are predictable or not remain interesting enough in its own right?

Comment by ScottAaronson on Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" · 2013-06-17T08:51:56.260Z · LW · GW

shminux: Thanks so much for compiling these notes and quotes! But I should say that I thought the other LW thread was totally fine. Sure, lots of people strongly disagreed with me, but I'd be disappointed if LW readers didn't! And when one or two people who hadn't read the paper got things wrong, they were downvoted and answered by others who had. Kudos to LW for maintaining such a high-quality discussion about a paper that, as DanielVarga put it, "moves in a direction that's very far from any kind of LW consensus."

Comment by ScottAaronson on [link] Scott Aaronson on free will · 2013-06-16T20:18:50.294Z · LW · GW

Hi Paul. I completely agree that I see no reason why you couldn't "get a functional human out of a brain scan" --- though even there, I probably wouldn't convert my failure to see such a reason into a bet at more than 100:1 odds that there's no such reason. (Building a scalable quantum computer feels one or two orders of magnitude easier to me, and I "merely" staked $100,000 on that being possible --- not my life or everything I own! :-) )

Now, regarding "whether there can be important aspects of your identity or continuity of experience that are locked up in uncopyable quantum state": well, I regard myself as sufficiently confused about what we even mean by that idea, and how we could decide its truth or falsehood in a publicly-verifiable way, that I'd be hesitant to accept almost ANY bet about it, regardless of the odds! If you like, I'm in a state of Knightian uncertainty, to whatever extent I even understand the question. So, I wrote the essay mostly just as a way of trying to sort out my thoughts.

Comment by ScottAaronson on [link] Scott Aaronson on free will · 2013-06-16T19:56:09.698Z · LW · GW

"Intertemporal solidarity is just as much a choice today as it will be should teleporters arrive."

I should clarify that I see no special philosophical problem with teleportation that necessarily destroys the original copy, as quantum teleportation would (see the end of Section 3.2). As you suggest, that strikes me as hardly more perplexing than someone's boarding a plane at Newark and getting off at LAX.

For me, all the difficulties arise when we imagine that the teleportation would leave the original copy intact, so that the "new" and "original" copies could then interact with each other, and you'd face conundrums like whether "you" will experience pain if you shoot your teleported doppelganger. This sort of issue simply doesn't arise with the traditional problem of intertemporal identity, unless of course we posit closed timelike curves.

Comment by ScottAaronson on [link] Scott Aaronson on free will · 2013-06-16T19:42:09.502Z · LW · GW

"But calling this Knightian unpredictability 'free will' just confuses both issues."

torekp, a quick clarification: I never DO identify Knightian unpredictability with "free will" in the essay. On the contrary, precisely because "free will" has too many overloaded meanings, I make a point of separating out what I'm talking about, and of referring to it as "freedom," "Knightian freedom," or "Knightian unpredictability," but never free will.

On the other hand, I also offer arguments for why I think unpredictability IS at least indirectly relevant to what most people want to know about when they discuss "free will" -- in much the same way that intelligent behavior (e.g., passing the Turing Test) is relevant to what people want to know about when they discuss consciousness. It's not that I'm unaware of the arguments that there's no connection whatsoever between the two; it's just that I disagree with them!