[link] Scott Aaronson on free will

post by DanielVarga · 2013-06-10T23:24:07.259Z · LW · GW · Legacy · 109 comments

Scott Aaronson has a new 85-page essay up, titled "The Ghost in the Quantum Turing Machine". (Abstract here.) In Section 2.11 (Singulatarianism) he explicitly mentions Eliezer as an influence. But that's just a starting point, and he then moves in a direction that's very far from any kind of LW consensus. Among other things, he suggests that a crucial qualitative difference between a person and a digital upload is that the laws of physics prohibit making perfect copies of a person. Personally, I find the arguments completely unconvincing, but Aaronson is always thought-provoking and fun to read, and this is a good excuse to read about things like (I quote the abstract) "the No-Cloning Theorem, the measurement problem, decoherence, chaos, the arrow of time, the holographic principle, Newcomb's paradox, Boltzmann brains, algorithmic information theory, and the Common Prior Assumption". This is not just a shopping list of buzzwords; these are all important components of the author's main argument. That argument still seems weak to me, but the time spent reading the essay is not wasted at all.

109 comments

Comments sorted by top scores.

comment by cousin_it · 2013-06-11T13:19:58.149Z · LW(p) · GW(p)

The main disagreement between Aaronson's idea and LW ideas seems to be this:

If any of these technologies—brain-uploading, teleportation, the Newcomb predictor, etc.—were actually realized, then all sorts of “woolly metaphysical questions” about personal identity and free will would start to have practical consequences. Should you fax yourself to Mars or not? Sitting in the hospital room, should you bet that the coin landed heads or tails? Should you expect to “wake up” as one of your backup copies, or as a simulation being run by the Newcomb Predictor? These questions all seem “empirical,” yet one can’t answer them without taking an implicit stance on questions that many people would prefer to regard as outside the scope of science.

(...)

As far as I can see, the only hope for avoiding these difficulties is if—because of chaos, the limits of quantum measurement, or whatever other obstruction—minds can’t be copied perfectly from one physical substrate to another, as can programs on standard digital computers. So that’s a possibility that this essay explores at some length. To clarify, we can’t use any philosophical difficulties that would arise if minds were copyable, as evidence for the empirical claim that they’re not copyable. The universe has never shown any particular tendency to cater to human philosophical prejudices! But I’d say the difficulties provide more than enough reason to care about the copyability question.

LW mostly prefers to bite the bullet on such questions, by using tools such as UDT. I'd be really curious to see Aaronson's response to Wei's UDT post.

Replies from: Wei_Dai, Qiaochu_Yuan, gjm, Kawoomba
comment by Wei Dai (Wei_Dai) · 2013-06-17T09:29:41.749Z · LW(p) · GW(p)

As far as I can see, the only hope for avoiding these difficulties is if—because of chaos, the limits of quantum measurement, or whatever other obstruction—minds can’t be copied perfectly from one physical substrate to another, as can programs on standard digital computers.

Even if Aaronson's speculation that human minds are not copyable turns out to be correct, that doesn't rule out copyable minds being built in the future, either de novo AIs or what he (on page 58) calls "mockups" of human minds that are functionally close enough to the originals to fool their close friends. The philosophical problems with copyable minds will still be an issue for those minds, and therefore minds not being copyable can't be the only hope of avoiding these difficulties.

To put this another way, suppose Aaronson definitively shows that according to quantum physics, minds of biological humans can't be copied exactly. But how does he know that he is actually one of the original biological humans, and not for example a "mockup" living inside a digital simulation, and hence copyable? I think that is reason enough for him to directly attack the philosophical problems associated with copyable minds instead of trying to dodge them.

Replies from: ScottAaronson
comment by ScottAaronson · 2013-06-18T13:15:46.295Z · LW(p) · GW(p)

Wei, I completely agree that people should "directly attack the philosophical problems associated with copyable minds," and am glad that you, Eliezer, and others have been trying to do that! I also agree that I can't prove I'm not living in a simulation --- nor that that fact won't be revealed to me tomorrow by a being in the meta-world, who will also introduce me to dozens of copies of myself running in other simulations. But as long as we're trading hypotheticals: what if minds (or rather, the sorts of minds we have) can only be associated with uncopyable physical substrates? What if the very empirical facts that we could copy a program, trace its execution, predict its outputs using an abacus, run the program backwards, in heavily-encrypted form, in one branch of a quantum computation, at one step per millennium, etc. etc., were to count as reductios that there's probably nothing that it's like to be that program --- or at any rate, nothing comprehensible to beings such as us?

Again, I certainly don't know that this is a reasonable way to think. I myself would probably have ridiculed it, before I realized that various things that confused me for years and that I discuss in the essay (Newcomb, Boltzmann brains, the "teleportation paradox," Wigner's friend, the measurement problem, Bostrom's observer-counting problems...) all seemed to beckon me in that direction from different angles. So I decided that, given the immense perplexities associated with copyable minds (which you know as well as anyone), the possibility that uncopyability is essential to our subjective experience was at least worth trying to "steelman" (a term I learned here) to see how far I could get with it. So, that's what I tried to do in the essay.

Replies from: Wei_Dai, gjm
comment by Wei Dai (Wei_Dai) · 2013-06-19T03:13:00.752Z · LW(p) · GW(p)

But as long as we're trading hypotheticals: what if minds (or rather, the sorts of minds we have) can only be associated with uncopyable physical substrates?

If that turns out to be the case, I don't think it would much diminish either my intellectual curiosity about how problems associated with mind copying ought to be solved or the practical importance of solving such problems (to help prepare for a future where most minds will probably be copyable, even if my own isn't).

various things that confused me for years and that I discuss in the essay (Newcomb, Boltzmann brains, the "teleportation paradox," Wigner's friend, the measurement problem, Bostrom's observer-counting problems...) all seemed to beckon me in that direction from different angles

It seems likely that in the future we'll be able to build minds that are very human-like, but copyable. For example we could take someone's gene sequence, put them inside a virtual embryo inside a digital simulation, let it grow into an infant and then raise it in a virtual environment similar to a biological human child's. I'm assuming that you don't dispute this will be possible (at least in principle), but are saying that such a mind might not have the same kind of subjective experience as we do. Correct?

Now suppose we built such a mind using your genes, and gave it an upbringing and education similar to yours. Wouldn't you then expect it to be puzzled by all the things that you mentioned above, except it would have to solve those puzzles in some way other than by saying "I can get around these confusions if I'm not copyable"? Doesn't that suggest to you that there have to be solutions to those puzzles that do not involve "I'm not copyable", and therefore that the existence of the puzzles shouldn't have beckoned you in the direction of thinking that you're uncopyable?

So I decided that, given the immense perplexities associated with copyable minds (which you know as well as anyone), the possibility that uncopyability is essential to our subjective experience was at least worth trying to "steelman" (a term I learned here) to see how far I could get with it.

If you (or somebody) eventually succeed in showing that uncopyability is essential to our subjective experience, that would mean that by introspecting on the quality of our subjective experience, we would be able to determine whether or not we are copyable, right? Suppose we take a copyable mind (such as the virtual Scott Aaronson clone mentioned above), make another copy of it, then turn one of the two copies into an uncopyable mind by introducing some freebits into it. Do you think these minds would be able to accurately report whether they are copyable, and if so, by what plausible mechanism?

Replies from: ScottAaronson
comment by ScottAaronson · 2013-06-20T07:29:05.274Z · LW(p) · GW(p)

(1) I agree that we can easily conceive of a world where most entities able to pass the Turing Test are copyable. I agree that it's extremely interesting to think about what such a world would be like --- and maybe even try to prepare for it if we can. And as for how the copyable entities will reason about their own existence -- well, that might depend on the goals of whoever or whatever set them loose! As a simple example, the Stuxnet worm eventually deleted itself, if it decided it was on a computer that had nothing to do with Iranian centrifuges. We can imagine that each copy "knew" about the others, and "knew" that it might need to kill itself for the benefit of its doppelgangers. And as for why it behaved that way --- well, we could answer that question in terms of the code, or in terms of the intentions of the people who wrote the code. Of course, if the code hadn't been written by anyone, but was instead (say) the outcome of some evolutionary process, then we'd have to look for an explanation in terms of that process. But of course it would help to have the code to examine!

(2) You argue that, if I were copyable, then the copies would wonder about the same puzzles that the "uncopyable" version wonders about -- and for that reason, it can't be legitimate even to try to resolve those puzzles by assuming that I'm not copyable. Compare to the following argument: if I were a character in a novel, then that character would say exactly the same things I say for the same reasons, and wonder about the same things that I wonder about. Therefore, when reasoning about (say) physics or cosmology, it's illegitimate even to make the tentative assumption that I'm not a character in a novel. This is a fun argument, but there are several possible responses, among them: haven't we just begged the question, by assuming there is something it's like to be a copyable em or a character in a novel? Again, I don't declare with John Searle that there's obviously nothing that it's like, if you think there is then you need your head examined, etc. etc. On the other hand, even if I were a character in a novel, I'd still be happy to have that character assume it wasn't a character -- that its world was "real" -- and see how far it could get with that assumption.

(3) No, I absolutely don't think that we can learn whether we're copyable or not by "introspecting on the quality of our subjective experience," or that we'll ever be able to do such a thing. The sort of thing that might eventually give us insight into whether we're copyable or not would be understanding the effect of microscopic noise on the sodium-ion channels, whether the noise can be grounded in PMDs, etc. If you'll let me quote from Sec. 2.1 of my essay: "precisely because one can’t decide between conflicting introspective reports, in this essay I’ll be exclusively interested in what can be learned from scientific observation and argument. Appeals to inner experience—including my own and the reader’s—will be out of bounds."

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-06-20T08:11:10.227Z · LW(p) · GW(p)

And as for how the copyable entities will reason about their own existence

I'm not interested so much in how they will reason, but in how they should reason.

The sort of thing that might eventually give us insight into whether we're copyable or not would be understanding the effect of microscopic noise on the sodium-ion channels, whether the noise can be grounded in PMDs, etc.

When you say "we" here, do you literally mean "we" or do you mean "biological humans"? Because I can see how understanding the effect of microscopic noise on the sodium-ion channels might give us insight into whether biological humans are copyable, but it doesn't seem to tell us whether we are biological humans or for example digital simulations (and therefore whether your proposed solution to the philosophical puzzles is of any relevance to us). I thought you were proposing that if your theory is correct then we would eventually be able to determine that by introspection, since you said copyable minds might have no subjective experience or a different kind of subjective experience.

Replies from: ScottAaronson
comment by ScottAaronson · 2013-06-20T13:15:45.061Z · LW(p) · GW(p)

(1) Well, that's the funny thing about "should": if copyable entities have a definite goal (e.g., making as many additional copies as possible, taking over the world...), then we simply need to ask what form of reasoning will best help them achieve the goal. If, on the other hand, the question is, "how should a copy reason, so as to accord with its own subjective experience? e.g., all else equal, will it be twice as likely to 'find itself' in a possible world with twice as many copies?" -- then we need some account of the subjective experience of copyable entities before we can even start to answer the question.

(2) Yes, certainly it's possible that we're all living in a digital simulation -- in which case, maybe we're uncopyable from within the simulation, but copyable by someone outside the simulation with "sysadmin access." But in that case, what can I do, except try to reason based on the best theories we can formulate from within the simulation? It's no different than with any "ordinary" scientific question.

(3) Yes, I raised the possibility that copyable minds might have no subjective experience or a different kind of subjective experience, but I certainly don't think we can determine the truth of that possibility by introspection -- or for that matter, even by "extrospection"! :-) The most we could do, maybe, is investigate whether the physical substrate of our minds makes them uncopyable, and therefore whether it's even logically coherent to imagine a distinction between them and copyable minds.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-06-20T20:29:20.498Z · LW(p) · GW(p)

The most we could do, maybe, is investigate whether the physical substrate of our minds makes them uncopyable, and therefore whether it's even logically coherent to imagine a distinction between them and copyable minds.

If that's the most you're expecting to show at the end of your research program, then I don't understand why you see it as a "hope" of avoiding the philosophical difficulties you mentioned. (I mean I have no problems with it as a scientific investigation in general, it's just that it doesn't seem to solve the problems that originally motivated you.) For example according to Nick Bostrom's Simulation Argument, most human-like minds in our universe are digital simulations run by posthumans. How do you hope to conclude that the simulations "shouldn't even be included in my reference class" if you don't hope to conclude that you, personally, are not copyable?

comment by gjm · 2013-06-18T14:07:23.356Z · LW(p) · GW(p)

What would make them "count as reductios that there's probably nothing that it's like to be that program", and how?

Replies from: ScottAaronson
comment by ScottAaronson · 2013-06-18T19:24:42.306Z · LW(p) · GW(p)

Alright, consider the following questions:

  • What's it like to be simulated in homomorphically encrypted form (http://en.wikipedia.org/wiki/Homomorphic_encryption)---so that someone who saw the entire computation (including its inputs and outputs), and only lacked a faraway decryption key, would have no clue that the whole thing is isomorphic to what your brain is doing?

  • What's it like to be simulated by a reversible computer, and immediately "uncomputed"? Would you undergo the exact same set of experiences twice? Or once "forwards" and then once "backwards" (whatever that means)? Or, since the computation leaves no trace of its ever having happened, and is "just a convoluted implementation of the identity function," would you not experience anything?

  • Once the code of your brain is stored in a computer, why would anyone even have to bother running the code to evoke your subjective experience? And what counts as running it? Is it enough to do a debugging trace with pen and paper?

  • Suppose that, purely for internal error-correction purposes, a computer actually "executes" you three times in parallel, then outputs the MAJORITY of the results. Is there now one conscious entity or three? (Or maybe 7, for every nonempty subset of executions?)

Crucially, unlike some philosophers (e.g. John Searle), I don't pound the table and declare it "obvious" that there's nothing that it's like to be simulated in the strange ways above. All I say is that I don't think I have any idea what it's like, in even the same imperfect way that I can imagine what it's like to be another human being (or even, say, an unclonable extraterrestrial) by analogy with my own case. And that's why I'm not as troubled as some people are, if some otherwise-plausible cosmological theory predicts that the overwhelming majority of "copies" of me should be Boltzmann brains, computer simulations, etc. I view that as a sign, not that I'm almost certainly a copy (though I might be), but simply that I don't yet know the right way to think about this issue, and maybe that there's a good reason (lack of freebits??) why the supposed "copies" shouldn't even be included in my reference class.
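To make the first bullet in that list a little more tangible, here is a toy sketch of additively homomorphic encryption (textbook Paillier with absurdly small parameters; my own illustration, not anything from the essay). The evaluator adds two numbers correctly while never seeing either one in the clear, which is the flavour of "computation you can watch without understanding" that the bullet asks about:

# Toy Paillier cryptosystem (additively homomorphic). Parameters are far too small
# for real security and are chosen only to illustrate "computing on data whose
# plaintext you never see"; this is an illustration, not anything from the essay.
from math import gcd
import random

p, q = 101, 103                      # toy primes; real keys use ~2048-bit primes
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                 # valid because we take g = n + 1 below

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(17), encrypt(25)
c_sum = (c1 * c2) % n2               # multiplying ciphertexts adds the plaintexts
assert decrypt(c_sum) == 42          # the party doing the addition never held the key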

comment by Qiaochu_Yuan · 2013-06-12T01:16:01.497Z · LW(p) · GW(p)

I really don't like the term "LW consensus" (isn't there a LW post about how you should separate out bundles of ideas and consider them separately because there's no reason to expect the truth of one idea in a bundle to correlate strongly with the truth of the others? If there isn't, there should be). I've been using "LW memeplex" instead to emphasize that these ideas have been bundled together for not necessarily systematically good reasons.

Replies from: cousin_it
comment by cousin_it · 2013-06-12T07:20:08.310Z · LW(p) · GW(p)

OK, replaced with "LW ideas".

comment by gjm · 2013-06-11T15:32:10.417Z · LW(p) · GW(p)

I think that last paragraph you quote needs the following extra bit of context:

To clarify, we can’t use any philosophical difficulties that would arise if minds were copyable, as evidence for the empirical claim that they’re not copyable. The universe has never shown any particular tendency to cater to human philosophical prejudices! But I’d say the difficulties provide more than enough reason to care about the copyability question.

... because otherwise it looks as if Aaronson is saying something really silly, which he isn't.

Replies from: cousin_it
comment by cousin_it · 2013-06-11T15:38:07.665Z · LW(p) · GW(p)

Good point, thanks! Added that bit.

comment by Kawoomba · 2013-06-11T13:44:44.427Z · LW(p) · GW(p)

If we could fax ourselves to Mars, or undergo uploading, and then still wonder whether we're still "us" -- the same way we wonder now, when such capabilities are just theoretical hypotheticals -- that should count as a strong indication that such questions are not very practically relevant, contrary to Aaronson's assertion. Surely we'd need some legal rules, but the basis for those wouldn't be much different from any basis we have now -- we'd still be none the wiser about what identity means, even standing around with our clones.

For example, if we were to wonder about a question like "what effect will a foom-able AI have on our civilization", surely asking after the fact would yield different answers from asking before. With copies/uploads etc., you and your perfect copy could hold a meeting contemplating who stays married to the wife, and still start from the same basis, with the same difficulty of finding the "true" answer, as if you'd discussed the topic with a pal roleplaying your clone in the present time.

comment by Qiaochu_Yuan · 2013-06-11T04:15:11.182Z · LW(p) · GW(p)

This paper has some useful comments on methodology that seem relevant to recent criticism of MIRI's research, e.g. the discussion in Section 2.2 about replacing questions with other questions, which is arguably what both the Löb paper and the prisoner's dilemma paper do.

Replies from: lukeprog
comment by lukeprog · 2013-06-12T03:30:49.237Z · LW(p) · GW(p)

In particular:

whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.

Of course, even if Q′ is solved, centuries later philosophers might still be debating the exact relation between Q and Q′! And further exploration might lead to other scientific or mathematical questions — Q′′, Q′′′, and so on — which capture aspects of Q that Q′ left untouched. But from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.

Successful examples of this breaking-off process fill intellectual history. The use of calculus to treat infinite series, the link between mental activity and nerve impulses, natural selection, set theory and first-order logic, special relativity, Gödel’s theorem, game theory, information theory, computability and complexity theory, the Bell inequality, the theory of common knowledge, Bayesian causal networks — each of these advances addressed questions that could rightly have been called “philosophical” before the advance was made. And after each advance, there was still plenty for philosophers to debate about truth and provability and infinity, space and time and causality, probability and information and life and mind. But crucially, it seems to me that the technical advances transformed the philosophical discussion as philosophical discussion itself rarely transforms it! And therefore, if such advances don’t count as “philosophical progress,” then it’s not clear that anything should.

Appropriately for this essay, perhaps the best precedent for my bait-and-switch is the Turing Test... with legendary abruptness, Turing simply replaced the original question by a different one: “Are there imaginable digital computers which would do well in the imitation game?”...

...The claim is not that the new question, about the imitation game, is identical to the original question about machine intelligence. The claim, rather, is that the new question is a worthy candidate for what we should have asked or meant to have asked, if our goal was to learn something new rather than endlessly debating definitions. [Luke adds: I'm reminded of Dennett's quip that "Philosophy... is what you have to do until you figure out what questions you should have been asking in the first place."] In math and science, the process of revising one’s original question is often the core of a research project, with the actual answering of the revised question being the relatively easy part!

A good replacement question Q′ should satisfy two properties:

(a) Q′ should capture some aspect of the original question Q — so that an answer to Q′ would be hard to ignore in any subsequent discussion of Q.

(b) Q′ should be precise enough that one can see what it would mean to make progress on Q′: what experiments one would need to do, what theorems one would need to prove, etc.

The Turing Test, I think, captured people’s imaginations precisely because it succeeded so well at (a) and (b). Let me put it this way: if a digital computer were built that aced the imitation game, then it’s hard to see what more science could possibly say in support of machine intelligence being possible. Conversely, if digital computers were proved unable to win the imitation game, then it’s hard to see what more science could say in support of machine intelligence not being possible. Either way, though, we’re no longer “slashing air,” trying to pin down the true meanings of words like “machine” and “think”: we’ve hit the relatively-solid ground of a science and engineering problem. Now if we want to go further we need to dig (that is, do research in cognitive science, machine learning, etc). This digging might take centuries of backbreaking work; we have no idea if we’ll ever reach the bottom. But at least it’s something humans know how to do and have done before. Just as important, diggers (unlike air-slashers) tend to uncover countless treasures besides the ones they were looking for.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-12-26T20:53:24.293Z · LW(p) · GW(p)

whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.

Yes, this is what modern causal inference did (I suppose by taking Hume's counterfactual definition of causation, and various people's efforts to deal with confounding/incompatibility in data analysis, as starting points).

comment by buybuydandavis · 2013-06-12T03:44:53.660Z · LW(p) · GW(p)

I'm not a perfect copy of myself from one moment to the next, so I just don't see the force of his objection.

Fundamentally, those willing to teleport themselves will and those unwilling won't. Intertemporal solidarity is just as much a choice today as it will be should teleporters arrive. Practically, it will be convenient for both the teleporters and the nonteleporters to treat the teleporters as if they have continuous identity.

Replies from: torekp, ScottAaronson, Locaha
comment by torekp · 2013-06-15T18:55:37.666Z · LW(p) · GW(p)

Intertemporal solidarity is just as much a choice today as it will be should teleporters arrive.

That is admirably concise, correct, and totally on target.

Replies from: ESRogs
comment by ESRogs · 2013-06-19T02:24:21.564Z · LW(p) · GW(p)

Is the meaning of this statement that it's a choice whether I consider me-at-different-times to be the same person as me-now?

Replies from: torekp
comment by torekp · 2013-06-21T16:34:13.850Z · LW(p) · GW(p)

That would be one way to look at it. Another would be to put aside the "same person?" question and just answer the "do I intend to work for the benefit of this future person?" question more directly, using the facts about causal connections, personality similarities, etc.

Replies from: ESRogs
comment by ESRogs · 2013-06-21T19:38:15.460Z · LW(p) · GW(p)

Ah, thanks!

comment by ScottAaronson · 2013-06-16T19:56:09.698Z · LW(p) · GW(p)

"Intertemporal solidarity is just as much a choice today as it will be should teleporters arrive."

I should clarify that I see no special philosophical problem with teleportation that necessarily destroys the original copy, as quantum teleportation would (see the end of Section 3.2). As you suggest, that strikes me as hardly more perplexing than someone's boarding a plane at Newark and getting off at LAX.

For me, all the difficulties arise when we imagine that the teleportation would leave the original copy intact, so that the "new" and "original" copies could then interact with each other, and you'd face conundrums like whether "you" will experience pain if you shoot your teleported doppelganger. This sort of issue simply doesn't arise with the traditional problem of intertemporal identity, unless of course we posit closed timelike curves.

Replies from: cousin_it
comment by cousin_it · 2013-06-17T13:22:58.151Z · LW(p) · GW(p)

Sometimes you don't need copying to get a tricky decision problem; amnesia or invisible coinflips are enough. For example, we have the Sleeping Beauty problem, the Absent-Minded Driver, which is a good test case for LW ideas, or Psy-Kosh's problem, which doesn't even need amnesia.
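For readers unfamiliar with it, here is a minimal sketch of the Absent-Minded Driver using the standard Piccione-Rubinstein payoffs (exit at the first intersection: 0; exit at the second: 4; drive past both: 1), which the comment doesn't spell out. The driver can't tell the two intersections apart, so the only planning-stage policy is a single probability p of continuing:

# Absent-Minded Driver, standard payoffs assumed (first exit 0, second exit 4,
# driving past both 1). The driver cannot distinguish the intersections, so the
# policy is one "continue" probability p; the planning optimum is p = 2/3.
def expected_payoff(p):
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

best_p = max((i / 10_000 for i in range(10_001)), key=expected_payoff)
print(best_p, expected_payoff(best_p))   # ~0.6667, ~1.333

# The puzzle: once actually standing at an intersection, naive updating on
# "I am at an intersection now" appears to recommend deviating from this plan,
# which is what makes the problem a test case for decision theories.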

comment by Locaha · 2013-06-12T10:33:59.024Z · LW(p) · GW(p)

You are not a COPY (perfect or otherwise) of yourself from one moment to the next. Not by any meaningful definition of the word copy.

Replies from: buybuydandavis
comment by buybuydandavis · 2013-06-12T10:40:58.581Z · LW(p) · GW(p)

The whole copying language kind of begs the question.

Compare Dan(t=n) and Dan(t=n+1). Not identical. That's as true now as it will be in a teleporting and replicating future. Calling it "the same" Dan or a "different" Dan is a choice.

Replies from: Locaha, Eugine_Nier
comment by Locaha · 2013-06-12T11:04:14.199Z · LW(p) · GW(p)

"Copy" implies having more than 1 object : The Copy and the Original at the same point of time, but not space. Dan(t=n) and Dan(t=n+1) are not copies. Dan(Time=n, Location=a) and Dan(Time=n, Location=b) are copies.

Replies from: naasking, buybuydandavis
comment by naasking · 2013-06-16T15:24:47.209Z · LW(p) · GW(p)

"Copy" implies having more than 1 object : The Copy and the Original at the same point of time, but not space.

Why privilege space over time? Time is just another dimension, after all. buybuydandavis's definition of "copy" seems to avoid privileging a particular dimension, and so seems more general.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-18T00:17:43.121Z · LW(p) · GW(p)

You may want to read up on the no-cloning theorem in quantum mechanics.

The simple answer to your question is that time interacts differently with causality from space.
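For reference, the standard no-cloning argument is short (a textbook statement, not specific to this exchange): if a single unitary U could copy two distinct states,

\[
U\bigl(|\psi\rangle \otimes |0\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle,
\qquad
U\bigl(|\varphi\rangle \otimes |0\rangle\bigr) = |\varphi\rangle \otimes |\varphi\rangle,
\]

then taking the inner product of the two equations and using unitarity gives

\[
\langle \psi | \varphi \rangle = \langle \psi | \varphi \rangle^{2},
\]

so \(\langle\psi|\varphi\rangle \in \{0,1\}\): only orthogonal or identical states can be copied. An unknown quantum state, unlike a classical bit string, therefore cannot be duplicated.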

comment by buybuydandavis · 2013-06-13T03:07:25.732Z · LW(p) · GW(p)

I don't think so, and I don't think the original author assumes as much. If your digital copy is created through a process that destroys you, is it not a copy?

Replies from: Locaha
comment by Locaha · 2013-06-13T07:18:22.562Z · LW(p) · GW(p)

Hmmm...

I suppose it is. But are you saying you are being constantly destroyed and remade from one moment to the next? I know Pratchett used the idea in "Thief of Time", but that's a fantasy author...

But even if we assume that what happens to the atoms of your body from one moment to the next can be described as destruction and recreation (I'm not even sure those words have meaning when we talk about atoms), you will still have the burden of proving that the process is analogous to whatever way you are going to teleport yourself to Mars.

Replies from: buybuydandavis
comment by buybuydandavis · 2013-06-13T09:13:19.831Z · LW(p) · GW(p)

There's not much analogy required, as he argued from "not a perfect copy", i.e., the existence of difference.

But are you saying you are being constantly destroyed and remade from one moment to the next?

No, but that intertemporal solidarity is my choice, just as someone's intertemporal solidarity with their teleported copy would be their choice.

Replies from: Locaha
comment by Locaha · 2013-06-13T10:23:53.718Z · LW(p) · GW(p)

You can choose to think whatever you like, but I don't think it changes the laws of the universe. You either have a continuous existence in time or you don't. You may decide that your Copy on Mars is you, but it is not. Your mind won't continue to operate on Mars if you shoot yourself on Earth.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-13T15:50:00.564Z · LW(p) · GW(p)

(shrug) The laws of the universe, in the sense you mean the term here, are silent on many things.

Is Sam my friend, or not? Basically, I choose. If I decide Sam is, then Sam is (although I might not be Sam's). If I decide Sam's not, then Sam's not. There's no fact of the matter beside that. The laws of the universe are silent.*

Is my copy on Mars me, or isn't it? Perhaps the laws of the universe are equally silent.

* - of course, at another level of description this is false, since the laws of the universe also constrain what choice I make, but at that level "you can choose to think whatever you like" is false, so I assume that's not the level you are referencing.

Replies from: Locaha
comment by Locaha · 2013-06-13T23:52:16.542Z · LW(p) · GW(p)

Is my copy on Mars me, or isn't it? Perhaps the laws of the universe are equally silent.

So, if you decide that your brain after being shot is still you and then shoot yourself, you will not die?

Can I decide I'm Bill Gates? Like, for a couple of days?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-14T00:08:05.391Z · LW(p) · GW(p)

So, if you decide that your brain after being shot is still you and then shoot yourself, you will not die?

Yes. In fact, this isn't hypothetical; lots of people on this site in fact do believe that their brains after they've been shot, if adequately cryopreserved, are still them and that they haven't necessarily died.

Can I decide I'm Bill Gates? Like, for a couple of days?

I don't know, can you? Have you tried? (Of course, that won't alter what the legal system does.)

Replies from: Locaha
comment by Locaha · 2013-06-14T08:47:46.587Z · LW(p) · GW(p)

I don't know, can you? Have you tried?

Yeah, it's not working. If I were Bill Gates, I'd be in a different body and location.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-14T16:40:03.432Z · LW(p) · GW(p)

Then: no, apparently you can't. Your notion of personal identity seems to be tied to a particular body and location, if I'm reading you right. Which also implies that your notion of personal identity can't survive death, and can't be simultaneously present on Earth and Mars.

Which of course does not preclude the possibility of someone on Mars, or existing after your death, who would pass all conceivable tests of being you as well as you would.

Replies from: Matthew_Opitz
comment by Matthew_Opitz · 2014-05-20T18:55:43.888Z · LW(p) · GW(p)

TheOtherDave, you seem to be implying that Locaha is unusual in not being able to experience Bill Gates's reality, and that in principle it should be possible to "identify with" Bill Gates and then suddenly "wake up" in Bill Gates's body with all of his memories and whatnot, thinking that you had always been Bill Gates and being none the wiser that you had just been experiencing a different body's reality a moment ago.

If that is possible, then how do we know that we aren't doing this all the time? Also, if this were possible, then we would not really have to worry about death necessarily entailing non-existence. We would just "wake up" as someone else the next second with all of that person's memories, thinking that we had always been that person. (Of course, that then raises the question: who would we wake up as? Perhaps the person with the most similar brain to our former one, since that seems to be how we stick with our existing brain as it changes incrementally from moment to moment?)

Replies from: TheOtherDave
comment by TheOtherDave · 2014-05-20T19:23:30.416Z · LW(p) · GW(p)

I don't think Locaha's inability to experience themselves as Bill Gates is unusual in the slightest. I suspect most of us are unable to do so.

Also, I haven't said a word about Bill Gates' memory and whatnot. If having all Bill Gates' memories and whatnot is necessary for someone to be Bill Gates, then very few people indeed are capable of it. (Indeed, there are plausible circumstances under which Bill Gates himself would no longer be capable of being Bill Gates.)

comment by Eugine_Nier · 2013-06-13T00:28:53.120Z · LW(p) · GW(p)

The point is that teleported Dan may be different from non-teleported Dan in ways that are very different (meta-different?) from the differences between Dan(t=n) and Dan(t=n+1).

This is certainly how quantum systems work.

Replies from: buybuydandavis
comment by buybuydandavis · 2013-06-13T03:06:12.068Z · LW(p) · GW(p)

Maybe. But the teleported differences aren't necessarily worse.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-15T02:32:30.366Z · LW(p) · GW(p)

They won't necessarily exist either. I'm describing a way the world might turn out to be, I never said this is the only way.

comment by torekp · 2013-06-15T19:38:06.172Z · LW(p) · GW(p)

I tend to see Knightian unpredictability as a necessary condition for free will

But it's not. (In the link, I use fiction to defang the bugbear and break the intuition pumps associating prediction and unfreedom.) ETA: Aaronson writes

even if Alice can’t tell Bob what he’s going to do, it’s easy enough for her to demonstrate to him afterwards that she knew.

But that's not a problem for Bob's freedom or free will, even if Bob finds it annoying. That's the point of my story.

"Knightian freedom" is a misnomer, in something like the way "a wine margarita" is. Except that the latter at least contains alcohol, something one usually wants from a margarita. Sometimes it's good to be predictable (coordinating with friends); sometimes it's bad (facing enemies). But at no time is it crucial to freedom. Prediction isn't control.

None of this is to deny the potential interest of Aaronson's arguments regarding the feasibility of brain scanning, etc. But calling this Knightian unpredictability "free will" just confuses both issues.

Replies from: ScottAaronson, shminux, torekp, Manfred
comment by ScottAaronson · 2013-06-16T19:42:09.502Z · LW(p) · GW(p)

"But calling this Knightian unpredictability 'free will' just confuses both issues."

torekp, a quick clarification: I never DO identify Knightian unpredictability with "free will" in the essay. On the contrary, precisely because "free will" has too many overloaded meanings, I make a point of separating out what I'm talking about, and of referring to it as "freedom," "Knightian freedom," or "Knightian unpredictability," but never free will.

On the other hand, I also offer arguments for why I think unpredictability IS at least indirectly relevant to what most people want to know about when they discuss "free will" -- in much the same way that intelligent behavior (e.g., passing the Turing Test) is relevant to what people want to know about when they discuss consciousness. It's not that I'm unaware of the arguments that there's no connection whatsoever between the two; it's just that I disagree with them!

Replies from: torekp
comment by torekp · 2013-06-18T00:05:01.241Z · LW(p) · GW(p)

Sorry about misrepresenting you. I should have said "associating it with free will" instead of "calling it free will". I do think the association is a mistake. Admittedly it fits with a long tradition, in theology especially, of seeing freedom of action as being mutually exclusive with causal determination. It's just that the tradition is a mistake. Probably a motivated one (it conveniently gets a deity off the hook for creating and raising such badly behaved "children").

Replies from: ScottAaronson
comment by ScottAaronson · 2013-06-18T14:10:40.389Z · LW(p) · GW(p)

Well, all I can say is that "getting a deity off the hook" couldn't possibly be further from my motives! :-) For the record, I see no evidence for a deity anything like that of conventional religions, and I see enormous evidence that such a deity would have to be pretty morally monstrous if it did exist. (I like the Yiddish proverb: "If God lived on earth, people would break His windows.") I'm guessing this isn't a hard sell here on LW.

Furthermore, for me the theodicy problem isn't even really connected to free will. As Dostoyevsky pointed out, even if there is indeterminist free will, you would still hope that a loving deity would install some "safety bumpers," so that people could choose to do somewhat bad things (like stealing hubcaps), but would be prevented from doing really, really bad ones (like mass-murdering children).

One last clarification: the whole point of my perspective is that I don't have to care about so-called "causal determination"---either the theistic kind or the scientific kind---until and unless it gets cashed out into actual predictions! (See Sec. 2.6.)

comment by Shmi (shminux) · 2013-06-17T04:12:48.544Z · LW(p) · GW(p)

Downvoted for extremely uncharitable reading of the paper.

Replies from: MrMind
comment by MrMind · 2013-06-17T14:43:42.716Z · LW(p) · GW(p)

Upvoted for being one of the very few downvoters who explain why. Without feedback there's no improvement.

comment by torekp · 2013-06-15T23:46:10.791Z · LW(p) · GW(p)

It's worse than I thought. Aaronson really does want to address the free will debate in philosophy - and utterly botches the job.

Aaronson speaks for some of his interlocutors (who are very smart - i.e. they say what I would say ;) ):

Suppose that only “ordinary” quantum randomness and classical chaos turned out to be involved: how on earth could that matter, outside the narrow confines of free-will debates? Is the variety of free will that apparently interests you—one based on the physical unpredictability of our choices—really a variety “worth wanting,” in Daniel Dennett’s famous phrase [27]?

and answers:

As a first remark, if there’s anything in this debate that all sides can agree on, hopefully they can agree that the truth (whatever it is) doesn’t care what we want, consider “worth wanting,” or think is necessary or sufficient to make our lives meaningful!

Uh, so: the truth about the modal number of hairs on a human foot, for example, doesn't care what we want. But, if I were to claim that (a good portion of) a famous philosophical debate amounted to the question of how many hairs are on our feet, you could reject the claim immediately. And you could cite the fact that nobody gives a damn about that as a sufficient reason to reject the substitution of the new question for the old. Sure, in this toy example, you could cite many other reasons too. But with philosophical (using the next word super broadly) definitions, sometimes the most obvious flaw is that nobody gives a damn about the definiens, while lots care passionately about the concept supposedly defined. Dennett rightly nails much philosophizing about free will, on precisely this point.

Interlocutors:

But if the event is undetermined, it isn’t “free” either: it’s merely arbitrary, capricious, and random.

Aaronson:

An event can be “arbitrary,” in the sense of being undetermined by previous events, without being random in the narrower technical sense of being generated by some known or knowable probabilistic process.

But this doesn't address the point, unless Knightian-uncertain "actions" somehow fall more under the control of the agent than do probabilistic processes. The truth is the opposite. I have the most control over actions that flow deterministically from my beliefs and desires. I have almost as much, if it is highly probable that my act will be the one that the beliefs and desires indicate is best. And I have none, if it is completely uncertain. For example, if I am pondering whether to eat a berry and then realize that this type of berry is fatally poisonous, this realization should ideally be decisive, with certainty. But I'll settle for 99.99...% probability that I don't eat it. If it is wide-open uncertain whether I will eat it - arbitrary in a deep way - that does not help my sense of control and agency. To put it mildly!

Finally, Aaronson has a close brush with the truth when he rejects the following premise of the Consequence Argument (section 2.9):

(ii) The state of the universe 100 million years ago is clearly outside our ability to alter.

Aaronson replies:

there might be many possible settings of the past microfacts—the polarizations of individual photons, etc. [...] our choices today might play a role in selecting one past from a giant ensemble of macroscopically-identical but microscopically-different pasts.

Only one microscopic past is consistent with our choice today. But the same can be said of the whole past on a classically-determinist scientific picture. As Carl Hoefer says in the Stanford Encyclopedia

they can be viewed as bi-directionally deterministic. That is, a specification of the state of the world at a time t, along with the laws, determines not only how things go after t, but also how things go before t. Philosophers, while not exactly unaware of this symmetry, tend to ignore it when thinking of the bearing of determinism on the free will issue.

And that's a mistake, as Hoefer indicates in his conclusion. The physics of deterministic theories (and also, I would add, any probabilistic quantum theory that gives sufficient probabilistic weight to our beliefs, desires, and reasoning in producing action) supports strong compatibilism, not Aaronson's weak version.

Missed it by (fingers close together) that much!

Replies from: ESRogs
comment by ESRogs · 2013-06-19T02:38:40.164Z · LW(p) · GW(p)

I agree with you on unpredictability not being important for agency, but I don't understand the last third of this comment. What is the point that you are trying to make about bi-directional determinism? Specifically, could you restate the mistake you think Aaronson is making in the "our choices today might play a role" quote?

Replies from: torekp
comment by torekp · 2013-06-21T16:48:46.821Z · LW(p) · GW(p)

Sorry, I was unclear. I don't think that's a mistake at all! The only "problem" is that it may be an understatement. On a bi-directional determinist picture, our choices today utterly decisively select one past, in a logical sense. That is, statements specifically describing a single past follow logically from statements describing our choices today plus other facts of today's universe. The present still doesn't cause the past, but that's a mere tautology: we call the later event the "effect" and the earlier one the "cause".
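A toy illustration of that logical point (my own example, not from the thread): a second-order reversible rule in the style of Fredkin's construction, in which two consecutive states pin down the entire future and the entire past.

# Toy second-order reversible dynamics: next = f(current) XOR previous.
# Because XOR is invertible, two consecutive states determine the whole
# trajectory in BOTH time directions -- bi-directional determinism in miniature.
def step(prev, cur):
    n = len(cur)
    f = [cur[(i - 1) % n] ^ cur[i] ^ cur[(i + 1) % n] for i in range(n)]
    return [f[i] ^ prev[i] for i in range(n)]

def unstep(cur, nxt):
    # Run the same rule backwards: recover the state that preceded `cur`.
    n = len(cur)
    f = [cur[(i - 1) % n] ^ cur[i] ^ cur[(i + 1) % n] for i in range(n)]
    return [f[i] ^ nxt[i] for i in range(n)]

past    = [0, 1, 1, 0, 1, 0, 0, 1]
present = [1, 0, 0, 1, 0, 1, 1, 0]
future  = step(past, present)
assert unstep(present, future) == past   # facts about "now" logically fix the past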

Replies from: ESRogs
comment by ESRogs · 2013-06-21T19:41:44.073Z · LW(p) · GW(p)

our choices today utterly decisively select one past, in a logical sense

That's not necessarily true if multiple pasts are consistent with the state of the present, right? In other words, if there is information loss as you move forward in time.

Replies from: torekp
comment by torekp · 2013-06-22T01:24:27.184Z · LW(p) · GW(p)

Indeed. Those wouldn't be bi-directional determinist theories, though. Interestingly, QM gets portrayed a lot like a bi-directional determinist theory in the wiki article on the black hole information paradox. (I don't know enough QM to know how accurate that is.)

comment by Manfred · 2013-06-15T23:39:52.353Z · LW(p) · GW(p)

Not bad at all :)

comment by [deleted] · 2013-06-11T04:08:13.201Z · LW(p) · GW(p)

A better summary of Aaronson's paper:

I want to know:

Were Bohr and Compton right or weren’t they? Does quantum mechanics (specifically, say, the No-Cloning Theorem or the uncertainty principle) put interesting limits on an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems, or doesn’t it?

EY is mentioned once, for his work in popularizing cryonics, and not for anything fundamental to the paper. Several other LW luminaries like Silas Barta and Jaan Tallinn show up in the acknowledgements.

If you have followed Aaronson at all in the past couple years, the new stuff begins around section 3.3, page 36. His definition of "freedom" is at first glance interesting, and may dovetail slightly with the standard reduction of free will.

Replies from: SilasBarta
comment by SilasBarta · 2013-06-11T18:43:28.207Z · LW(p) · GW(p)

Eh, I don't think I count as a luminary, but thanks :-)

Aaronson's crediting me is mostly due to our exchanges on the blog for his paper/class about philosophy and theoretical computer science.

One of them was about Newcomb's problem, where my main criticisms were:

a) that he overstates the level and kind of precision you would need when measuring a human for prediction; and

b) that the interesting philosophical implications of Newcomb's problem follow from already-achievable predictor accuracies.

The other was about average-human performance on 3SAT, where I was skeptical that the average person actually notices global symmetries like the pigeonhole principle. (And, to a lesser extent, whether the order in which you stack objects affects their height...)
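To put a number on point (b), here is a quick check with the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one); the calculation is the usual expected-value argument, not something from the comment itself:

# Standard Newcomb payoffs assumed: $1,000,000 in the opaque box iff one-boxing is
# predicted, $1,000 always in the transparent box. Shows how little predictor
# accuracy is needed before one-boxing has the higher expected value.
BIG, SMALL = 1_000_000, 1_000

def expected_value(strategy, accuracy):
    if strategy == "one-box":
        return accuracy * BIG                    # opaque box filled iff prediction was right
    return (1 - accuracy) * BIG + SMALL          # two-box: filled iff prediction was wrong

for acc in (0.51, 0.60, 0.90):
    print(acc, expected_value("one-box", acc), expected_value("two-box", acc))

# One-boxing pulls ahead once accuracy exceeds (BIG + SMALL) / (2 * BIG) ~ 0.5005,
# far below the near-perfect prediction the thought experiment is usually assumed to need.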

comment by [deleted] · 2013-06-11T04:00:25.932Z · LW(p) · GW(p)

That Aaronson mentions EY isn't exactly a surprise; the two shared a well-known discussion on AI and MWI several years ago. EY mentions it in the Sequences.

comment by JohnSidles · 2013-06-15T19:32:06.976Z · LW(p) · GW(p)

Rancor commonly arises when STEM discussions in general, and discussions of quantum mechanics in particular, focus upon personal beliefs and/or personal aesthetic sensibilities, as contrasted with verifiable mathematical arguments and/or experimental evidence and/or practical applications.

In this regard, a pertinent quotation is the self-proclaimed "personal belief" that Scott asserts on page 46:

"One obvious way to enforce a macro/micro distinction would be via a dynamical collapse theory. ... I personally cannot believe that Nature would solve the problem of the 'transition between microfacts and macrofacts' in such a seemingly ad hoc way, a way that does so much violence to the clean rules of linear quantum mechanics."

Scott's personal belief calls to mind Nature's solution to the problem of gravitation, a solution that (historically) has been regarded as both "clean" and "unclean". His quantum beliefs map onto general relativity as follows:

General relativity is "unclean": "We can be confident that Nature will not do violence to the clean rules of linear Euclidean geometry; the notion is so repugnant that the ideas of general relativity CANNOT be correct."

as contrasted with

General relativity is "clean": "Matter tells space how to curve; space tells matter how to move; this principle is so natural and elegant that general relativity MUST be correct!"

Of course, nowadays we are mathematically comfortable with the latter point-of-view, in which Hamiltonian dynamical flows are naturally associated to non-vanishing Lie derivatives of the metric structure g, that is, ℒ_X g ≠ 0.

This same mathematical toolset allows us to frame the ongoing debate between Scott and his colleagues in mathematical terms, by focusing our attention not upon the metric structure g, but upon the complex structure J.
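(For readers who want these objects pinned down: the following are the standard textbook compatibility relations on a Kähler manifold, not anything specific to Scott's essay,

\[
J^2 = -\mathrm{Id}, \qquad g(JX, JY) = g(X, Y), \qquad \omega(X, Y) = g(JX, Y), \qquad \nabla J = 0,
\]

and the "dynamic-J" question below is whether a physically realized flow X can have \(\mathcal{L}_X J \neq 0\), in the same way that general relativity allows \(\mathcal{L}_X g \neq 0\).)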

In this regard a striking feature of Scott's essay is that it provides precisely one numbered equation (perhaps this is a deliberate echo of Stephen Hawking's A Brief History of Time, which also has precisely one equation?). Fortunately, this lack is admirably remedied by the discussion in Section 8.2 "Holomorphic Objects" of Andrei Moroianu's textbook Lectures on Kähler Geometry. See in particular the proof arguments that are associated to Moroianu's Lemma 8.7, which conveniently is freely available as Lemma 2.7 of an early draft of the textbook, available on the arXiv as arXiv:math/0402223v1. Moroianu's draft textbook is short and good, and his completed textbook is longer and better!

Scott's aesthetic personal beliefs naturally join with Moroianu's mathematical toolset to yield a crucial question: Should/will 21st Century STEM researchers embrace with enthusiasm, or reject with disdain, dynamical theories in which ℒ_X J ≠ 0?

Scott's essay is entirely correct to remind us that this crucial question is (in our present state-of-knowledge) not susceptible to any definitively verifiable arguments from mathematics, physical science, or philosophy (although plenty of arguments from plausibility have been set forth). But on the other hand, students of STEM history will appreciate that the community of engineers has rendered a unanimous verdict: ℒ_X J ≠ 0 in essentially all modern large-scale quantum simulation codes (matrix product-state calculations provide a prominent example).

So to the extent that biological systems (including brains) are accurately and efficiently simulable by these emerging dynamic-J methods, Scott's definition of quantum dynamical systems may have only marginal relevance to the practical understanding of brain dynamics (and it is plausible AFAICT that this proposition is entirely consonant with Scott's notion of "freebits").

Here too there is ample precedent in history: early 19th Century textbooks like Nathaniel Bowditch's renowned New American Practical Navigator (1807) succinctly presented the key mathematical elements of non-Euclidean geometry (many decades in advance of Gauss, Riemann, and Einstein).

Will 21st Century adventurers learn to navigate nonlinear quantum state-spaces with the same exhilaration that adventurers of earlier centuries learned to navigate first the Earth's nonlinear oceanography, and later the nonlinear geometry of near-earth space-time (via GPS satellites, for example)?

Conclusion: Scott's essay is right to remind us that we don't know whether Nature's complex structure J is comparably dynamic to Nature's metric structure g, and finding out will be a great adventure! Fortunately (for young people especially) textbooks like Moroianu's provide a well-posed roadmap for helping mathematicians, scientists, engineers --- and philosophers too --- in setting forth upon this great adventure. Good!

comment by MrMind · 2013-06-11T08:47:17.682Z · LW(p) · GW(p)

I feel that his rebuttal of the Libet-like experiments (Section 2.12) is strikingly weak, exactly where it should have been one of the strongest points. Scott says:

My own view is that the quantitative aspects are crucial when discussing these experiments.

What? Just because predicting human behaviour one minute before it happens with 99% accuracy is more impressive, it doesn't mean that it involves a different kind of process than predicting human behaviour 5 seconds before with 60% accuracy. Admittedly, it might imply a different kind of process, maybe even an unachievable or uncomputable kind, but it may also be just a matter of better probes and more computational power. Lack of impressiveness is not a refutation at all. Also:

So better-than-chance predictability is just too low a bar for clearing it to have any relevance to the free-will debate.

This is plainly wrong, as any Bayesian-minded person will know: it all depends on the prior information you are using. Predicting with 99.99% accuracy that any person, put in front of the dilemma of tasting a pleasant cake or receiving a kick in the teeth (or, to stay in the Portal metaphor, being burned alive), will choose the cake is clearly not relevant to the free will debate.
At the same time, predicting which button will be pressed next, exclusively from neurological data (and very broadly aggregated data, as is the case with fMRI), with 60% accuracy, is in direct tension with the Knightian unpredictability thesis.

Replies from: ESRogs, None, Eugine_Nier
comment by ESRogs · 2013-06-11T19:22:47.466Z · LW(p) · GW(p)

Predicting with 99.99% accuracy that any person, put in front of the dilemma of tasting a pleasant cake or receiving a kick in the teeth (or, to stay in the Portal metaphor, being burned alive), will choose the cake is clearly not relevant to the free will debate.

Is there an existing principled explanation for why this is not relevant to the free will debate, but predicting less obvious behaviors is?

Replies from: MrMind
comment by MrMind · 2013-06-12T07:37:25.398Z · LW(p) · GW(p)

Because any working system evolved for self-preservation would do that. It doesn't add a single bit of information, even though it's a prediction with striking accuracy.

Replies from: ESRogs
comment by ESRogs · 2013-06-12T18:24:36.981Z · LW(p) · GW(p)

That seems to have already conceded the point by acknowledging that our behaviors are determined by systems. No?

It seems that the argument must be that some of our behaviors are determined and some are the result of free will -- I'm wondering if there's a principled defense of this distinction.

Replies from: MrMind
comment by MrMind · 2013-06-13T09:15:33.368Z · LW(p) · GW(p)

The way I see it is this: if pressing a button of your choice is not an expression of free will, then nothing is, because otherwise you can just say that free will determines whatever in the brain is determined by quantum noise, and it becomes an empty concept.
That said, it's true that we don't know very much about the inner workings of the brain, but I believe we know enough to say that it doesn't store and use quantum bits for computation.
But even before invoking that, Libet-like experiments directly link free will with available neuronal data: I'm not saying that it's a direct refutation, but it's a possible direct refutation.
My pet peeve is that the author doesn't acknowledge this conclusion, and instead says that the experiments were not impressive enough to constitute a refutation of his claim.

comment by [deleted] · 2013-06-11T13:28:55.397Z · LW(p) · GW(p)

It is completely not about being more or less impressive.

So better-than-chance predictability is just too low a bar for clearing it to have any relevance to the free-will debate.

This is plainly wrong, as any Bayesian-minded person will know: it all depends on the prior information you are using.

If you can throw out the fMRI data and get better predictive power, something is wrong with the fMRI data.

At the same time, predicting someone's next choice in the button-pressing game exclusively from neurological data (and very coarsely aggregated data at that, as with fMRI) with 60% accuracy directly conflicts with the Knightian unpredictability thesis.

The fMRI results are not relevant, because quantum effects in the brain are noise on an fMRI. Aaronson explicitly locates any Knightian noise left in the system at the microscopic level; see the third paragraph under section 3.

TL;DR: 2.12 is about forestalling a bad counterargument (that being the heading of 2.12) and does not give evidence against Knightian unpredictability.

Replies from: MrMind
comment by MrMind · 2013-06-12T07:56:11.666Z · LW(p) · GW(p)

It is completely not about being more or less impressive.

Care to elaborate? Because otherwise I can say "it totally is!", and we leave it at that.

If you can throw out the fMRI data and get better predictive power, something is wrong with the fMRI data.

Absolutely not. You can always add the two and get even more predictive power.
Notice in particular that the algorithm Scott uses looks at past entries in the button-pressing game, while the fMRI data concern only the upcoming entry. They are two very different kinds of prior information, and of course they have different predictive power. It doesn't mean that one or the other is wrong.

The fMRI results are not relevant, because quantum effects in the brain are noise on an fMRI.

That's exactly the point: if (and I reckon it's a big if) the noise is irrelevant to predicting a person's behaviour, then in the limit it's also irrelevant to the uploading/emulation process.
This is what the Libet-like experiments show, and the fact that with very poor prior information, like an fMRI, a person can be predicted with 60% accuracy four seconds in advance is to me a strong indication in that direction. It is not such for the author, who reduces the argument to an issue of impressiveness and of why those experiments are not a direct refutation.
As far as I can see, the correct conclusion should have been: the experiments show that it's possible to aggregate high-level neuronal data, very far from quantum noise, and use it to predict people's behaviour in advance with better-than-chance accuracy. This shows that, at least for this kind of task, quantum irreproducible noise is not relevant to the emulation or free will problem. Of course, nothing excludes (but at the same time, nothing warrants) that different kinds of phenomena will emerge in the investigation of higher-resolution experiments.

Replies from: Ronak, None
comment by Ronak · 2013-06-13T19:28:32.088Z · LW(p) · GW(p)

Care to elaborate? Because otherwise I can say "it totally is!", and we leave it at that.

Basically, signals take time to travel. If it is ~.1 s, then predicting it that much earlier is just the statement that your computer has faster wiring.

However, if it is a minute earlier, we are forced to consider the possibility - even if we don't want to - that something contradicting classical ideas of free will is at work (though we can't throw out travel and processing time either).

comment by [deleted] · 2013-06-12T13:19:34.508Z · LW(p) · GW(p)

It is completely not about being more or less impressive.

Care to elaborate? Because otherwise I can say "it totally is!", and we leave it at that.

That's why it wasn't the entirety of my comment. Sigh.

Absolutely not. You can always add the two and get even more predictive power.

This is plainly wrong, as any Bayesian-minded person will know. P(X|A, B) = P(X|A) is not a priori forbidden by the laws of probability.

Saying "absolutely not" when nobody's actually done the experiment yet (AFAIK) is disingenuous.

Of course, nothing excludes (but at the same time, nothing warrants) that different kinds of phenomena will emerge in the investigation of higher-resolution experiments.

If you actually believe this, then this conversation is completely pointless, and I'm annoyed that you've wasted my time.

comment by Eugine_Nier · 2013-06-13T00:32:22.371Z · LW(p) · GW(p)

What? Just because predicting human behaviour one minute before it happens with 99% accuracy is more impressive, it doesn't mean that it involves any different kind of process than predicting human behaviour 5 seconds before with 60% accuracy. Admittedly, it might imply a different kind of process, maybe even an unachievable or uncomputable one, but it may also be just a matter of better probes and more computational power.

So would you have been willing to draw the same conclusion from an experiment that predicted the button pushing 1 second before with 99.99999% probability by scanning the neurons in the arm?

Replies from: MrMind
comment by MrMind · 2013-06-13T09:03:07.654Z · LW(p) · GW(p)

As I said in another comment: no, because that doesn't add information, since pushing the button = neurons in the arm firing. The threshold is when the processing leaves the brain.

comment by Ronak · 2013-06-13T12:55:42.608Z · LW(p) · GW(p)

I like his causal answer to Newcomb's problem:

In principle, you could base your decision of whether to one-box or two-box on anything you like: for example, on whether the name of some obscure childhood friend had an even or odd number of letters. However, this suggests that the problem of predicting whether you will one-box or two-box is “you-complete.” In other words, if the Predictor can solve this problem reliably, then it seems to me that it must possess a simulation of you so detailed as to constitute another copy of you (as discussed previously). But in that case, to whatever extent we want to think about Newcomb’s paradox in terms of a freely-willed decision at all, we need to imagine two entities separated in space and time—the “flesh-and-blood you,” and the simulated version being run by the Predictor—that are nevertheless “tethered together” and share common interests. If we think this way, then we can easily explain why one-boxing can be rational, even without backwards-in-time causation. Namely, as you contemplate whether to open one box or two, who’s to say that you’re not “actually” the simulation? If you are, then of course your decision can affect what the Predictor does in an ordinary, causal way.

Replies from: Manfred, TheOtherDave
comment by Manfred · 2013-06-15T23:09:19.394Z · LW(p) · GW(p)

Simple but misleading.

This is because Newcomb's problem is not reliant on the predictor being perfectly accurate. All they need to do is predict you so well that people who one-box walk away with more expected utility than people who two-box. This is easy - even humans can predict other humans this well (though we kinda evolved to be good at it).

So if it's still worth it to one-box even if you're not being copied, what good is an argument that relies on you being copied to work?
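For the record, here is a minimal sketch of that threshold with the standard Newcomb payoffs ($1,000 and $1,000,000); the accuracy values are illustrative, and "accuracy" here means the probability that the predictor correctly predicts this particular agent's choice.

```python
def expected_payoff(one_box: bool, p: float) -> float:
    """Expected dollars for an agent type, where p = P(predictor gets this agent right)."""
    if one_box:
        return p * 1_000_000                # box B is full iff the prediction was "one-box"
    return (1 - p) * 1_000_000 + 1_000      # two-boxers get the full box only when mispredicted

for p in (0.5, 0.5005, 0.6, 0.9):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
# One-boxers pull ahead as soon as p > 0.5005 -- far short of perfect prediction.
```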

Replies from: Ronak, Eugine_Nier
comment by Ronak · 2013-06-16T16:51:59.469Z · LW(p) · GW(p)

In response to this, I want to roll back to saying that while you may not actually be simulated, having the programming to one-box is what causes there to be a million dollars in there. But, I guess that's the basic intuition behind one-boxing/the nature of prediction anyway so nothing non-trivial is left (except the increased ability to explain it to non-LW people).

Also, the calculation here is wrong.

comment by Eugine_Nier · 2013-06-16T03:04:42.968Z · LW(p) · GW(p)

Ok, in that case, am I allowed to roll a die to determine whether to one-box?

Replies from: Manfred
comment by Manfred · 2013-06-16T05:42:53.772Z · LW(p) · GW(p)

Depends on the rules. Who do I look like, Gary Drescher?

What sort of rules would you implement to keep Newcomb's problem interesting in the face of coins that you can't predict?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-18T00:22:17.784Z · LW(p) · GW(p)

Why would I want to keep the problem interesting? I want to solve it.

Replies from: Ronak
comment by Ronak · 2013-06-18T17:55:23.828Z · LW(p) · GW(p)

Because the solution to the problem is worthless except to the extent that it establishes your position on an issue the problem was constructed to illuminate.

comment by TheOtherDave · 2013-06-13T15:30:55.994Z · LW(p) · GW(p)

It seems way simpler to leave out the "freely willed decision" part altogether.

If we posit that the Predictor can reliably predict my future choice based on currently available evidence, it follows that my future choice is constrained by the current state of the world. Given that, what remains to be explained?

Replies from: Ronak
comment by Ronak · 2013-06-13T18:49:06.788Z · LW(p) · GW(p)

Yes, I agree with you - but when you tell this to some people, the question arises of what is in the big-money box after Omega leaves... and the answer is "if you're considering this, nothing."

A lot of others (non-LW people) I tell this to say it doesn't sound right. The quoted bit just shows that the seemingly closed loop is not actually a closed loop, in a very simple and intuitive way** (oh, and it actually agrees with 'there is no free will'), and it also made me think of the whole thing in a new light (maybe other things that look like closed loops can be shown not to be, in similar ways).

** Anna Salamon's cutting argument is very good too, but a) it doesn't make the closed-loop-seeming thing any less closed-loop-seeming, and b) it's hard for most people to understand, and I'm guessing it will look like garbage to people who don't default to compatibilism.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-13T19:20:35.879Z · LW(p) · GW(p)

I suppose.
When dealing with believers in noncompatibilist free will, I typically just accept that on their view a reliable Predictor is not possible in the first place, and so they have two choices... either refuse to engage with the thought experiment at all, or accept that for purposes of this thought experiment they've been demonstrated empirically to be wrong about the possibility of a reliable Predictor (and consequently about their belief in free will).

That said, I can respect someone refusing to engage with a thought experiment at all, if they consider the implications of the thought experiment absurd.

As long as we're here, I can also respect someone whose answer to "Assume Predictor yadda yadda what do you do?" is "How should I know what I do? I am not a Predictor. I do whatever it is someone like me does in that situation; beats me what that actually is."

Replies from: Ronak
comment by Ronak · 2013-06-14T19:11:58.331Z · LW(p) · GW(p)

I usually deal with people who don't have strong opinions either way, so I try to convince them. Given total non-compatibilists, what you do makes sense.

Also, it struck me today that this gives a way of one-boxing within CDT. If you naively black-box prediction, you get an expected-utility table {{1000, 0}, {1e6+1e3, 1e6}} (rows: predicted two-box / predicted one-box; columns: you two-box / you one-box), where two-boxing always gives you 1000 dollars more.

But, once you realise that you might be the simulated version, the expected utility of one-boxing is 1e6 but that of two-boxing is now 5e5+1e3. So, one-box.

A similar analysis applies to the counterfactual mugging.

Further, this argument actually creates immunity to the response 'I'll just find a qubit arbitrarily far back in time and use the measurement result to decide.' I think a self-respecting TDT would also have this immunity, but there's a lot to be said for finding out where theories fail - and Newcomb's problem (if you assume the argument about you-completeness) seems not to be such a place for CDT.

Disclaimer: My formal knowledge of CDT is from Wikipedia and can be summarised as 'choose the A that maximises $V(A) = \sum_i P(A \rightarrow O_i)\, D(O_i)$, where D is the desirability function and P the probability function.'
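As a sanity check on that formula, here is a minimal sketch applying it to the naively black-boxed table from the comment above; the predictor's marginal q is a made-up placeholder, and the "maybe I'm the simulation" adjustment is only gestured at in a comment, since the right probabilities for it are exactly what is in dispute.

```python
# CDT as in the disclaimer: choose the A that maximises sum_i P(A -> O_i) * D(O_i).
def cdt_value(action, p_outcome_given_action, desirability):
    return sum(p * desirability[outcome]
               for outcome, p in p_outcome_given_action[action].items())

desirability = {"empty+1k": 1_000, "empty+0": 0, "full+1k": 1_001_000, "full+0": 1_000_000}

# Naive black-boxing: the prediction is treated as independent of my action,
# with some fixed marginal q of "one-box" having been predicted (q = 0.5 here, hypothetical).
q = 0.5
p_outcome = {
    "two-box": {"empty+1k": 1 - q, "full+1k": q},
    "one-box": {"empty+0": 1 - q, "full+0": q},
}
for action in p_outcome:
    print(action, cdt_value(action, p_outcome, desirability))
# Two-boxing comes out ahead by exactly 1,000 for every q -- the first table above.
# The "maybe I'm the simulation" move changes p_outcome itself, since my choice and
# the prediction are then no longer independent; that is where one-boxing re-enters.
```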

comment by IlyaShpitser · 2013-06-11T10:11:53.241Z · LW(p) · GW(p)

But that's just a starting point, and he then moves in a direction that's very far from any kind of LW consensus.

If he says:

"In this essay I’ll argue strongly for a different perspective: that we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question [scanning of minds possible?] is yes, and other such worlds where the answer is no."

and he's right, then LW consensus is religion (in other words, you made up your mind too early).

Replies from: Luke_A_Somers, paulfchristiano, Viliam_Bur, MrMind, Will_Newsome
comment by Luke_A_Somers · 2013-06-11T17:32:31.010Z · LW(p) · GW(p)

I'm not quite sure what you mean here. Do you mean that if he's right, then LW consensus is wrong, and that makes LW consensus a religion?

That seems both wrong and rather mean to both LW consensus and religion.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-06-11T18:16:16.103Z · LW(p) · GW(p)

LW consensus is not necessarily wrong, even if Scott is right. However, making up your mind on unsettled empirical questions (which is what LW has done, if Scott is right) is a dangerous practice.


I found the phrasing "he then moves in a direction that's very far from any kind of LW consensus" broadly similar to "he's not accepting the Nicene Creed, good points though he may make." Is there even a non-iffy reason to say this about an academic paper?

Replies from: DanielVarga, Luke_A_Somers
comment by DanielVarga · 2013-06-11T19:59:22.962Z · LW(p) · GW(p)

I was trying to position the paper in terms of LW opinions, because my target audience were LW readers. (That's also the reason I mentioned the tangential Eliezer reference.) It's beneath my dignity to list all the different philosophical questions where my opinion is different from LW consensus, so let's just say that I used the term as a convenient reference point rather than a creed.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-06-11T21:36:37.509Z · LW(p) · GW(p)

I was trying to position the paper in terms of LW opinions

Why?

Replies from: somervta
comment by somervta · 2013-06-13T05:35:17.100Z · LW(p) · GW(p)

because my target audience were LW readers.

Presumably, he wanted some relatively quick way to tell people why he was posting it to lesswrong, and what they should expect from it.

comment by Luke_A_Somers · 2013-06-11T20:07:16.642Z · LW(p) · GW(p)

However, making up your mind on unsettled empirical questions

Either this is self-contradictory, or it means 'never be wrong'. If you're always right, you're making too few claims and therefore being less effective than you could be. Being wrong doesn't mean you're doing it wrong.

As for iffiness, I read that phrase more as "Interesting argument ahead!"

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-06-12T10:06:06.781Z · LW(p) · GW(p)

Either this is self-contradictory, or it means 'never be wrong'.

I think if you are making up your mind on unsettled empirical questions, you are a bad Bayesian. You can certainly make decisions under uncertainty, but you shouldn't make up your mind. And anyway, I am not even sure how to assign priors for the upload-fidelity questions.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-06-12T13:58:33.693Z · LW(p) · GW(p)

In that case, you're the one who made the jump from 'goes against consensus' to 'this was assigned 0 probability'. If we all agreed that some proposition was 0.0001% likely, then claiming that this proposition is true would seem to me to be going against consensus.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-06-13T09:57:01.765Z · LW(p) · GW(p)

Ok, what exactly is your posterior belief that uploads are possible? What would you say is the average LW posterior belief in the same? Where did this number come from? How much 'cognitive effort' is spent at LW thinking about the future where uploads are possible vs. the future where they are not?

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-06-13T14:29:05.294Z · LW(p) · GW(p)

To answer the last question first - not a heck of a lot, but some. It was buried in an 'impossible possible world', but lack of uploading was not what made it the impossible possible world, so that doesn't mean that it's considered impossible.

To answer your questions:

-- Somewhere around 99.5% that it's possible for me. The reasons for it to be possible are pretty convincing.

-- I would guess that the median estimate of likelihood among active posters who even have an estimate would be above 95%, but that's a pretty wild guess. Taking the average would probably amount to a bit less than the fraction of people who think it'll work, so that's not very meaningful. My estimate of that is rough - I checked the survey, but the most applicable question was cryonics, and of course cryonics can be a bad idea even if uploading is possible (if you think you'll end up being thawed instead of uploaded). And of course, if you somehow think you could be healed instead of uploaded, it could go the other way. 60% were on the fence or in favor of getting cryonically preserved, which means they think that the total product of the cryo Drake equation is noticeable. Most cryo discussions I've seen here treat organization as the main problem, which suggests that a majority consider recovery a much less severe problem. Being pessimistic, for a lower bound, that gives me 95%.

-- The part of uploading most likely to fail is the scanning. Existing scanning technology can take care of anything as large as a dendrite (though in an unreasonably large amount of time). So, for uploading to be impossible, it would have to require either dynamical features or features that would necessarily be destroyed by any fixing process, with no other viable mechanism available.

  • The former seems tremendously unlikely, because personality can recover from some pretty severe shocks to the system, like electrocution, anaerobic metabolic stasis, and inebriation (or other neurotoxins). I'd say the chance that there is some relevant dynamical process containing crucial, non-deducible information is maybe 1 in 100,000, ballpark. Small enough that it's not significant.

  • The latter seems fairly unlikely as well - that plastination or freezing erases some dendritic state, and that this state encodes personality information. It seems very unlikely indeed that there's literally no way around this at all - that no choice of fixing method possible within the laws of physics will work. Maybe one in 20 that we can't recover that state... and maybe one in 20 that it was vital to determining long-term psychological features (for the reasons outlined above, though weakened, since we're allowing that this information is not transient, just fragile). Orders of magnitude, here.

Certainly, our brains are far larger than they need to be, and so it seems like you're not going to run into the limits of physics. Heisenberg is irrelevant, and the observer effect won't come and bite you at full strength because you have probes much less energetic than the cells in question. If nothing else, you should be able to insinuate something into the brain and measure it that way.

But of course I could have screwed up my reasoning, which accounts for the rest of the 0.5%. Maybe our brains are sufficiently fragile that you're going to lose a lot when you poke them hard enough to get the information out. I doubt it to the tune of 199:1. As a check, I would feel comfortable taking a 20:1 bet on the subject, and not comfortable with a 2000:1 bet on it.
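For what it's worth, one way to read how those numbers combine into the quoted 0.5% is sketched below; the additive combination and the size of the reasoning-slip allowance are inferred from the comment, not stated in it.

```python
# Rough, assumed-additive combination of the failure-mode estimates given above.
p_dynamical      = 1 / 100_000     # crucial info lives only in a transient dynamical process
p_unrecoverable  = 1 / 20          # fixing erases some dendritic state with no workaround
p_vital          = 1 / 20          # ...and that state was vital to long-term psychology
p_reasoning_slip = 0.0025          # back-filled from "accounts for the rest of the 0.5%"

p_upload_impossible = p_dynamical + p_unrecoverable * p_vital + p_reasoning_slip
print(f"{p_upload_impossible:.4f}")  # ~0.0050, i.e. roughly the quoted 99.5% that uploading is possible
```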

~~~~

Of course, the real reason that we don't talk too much about what happens if uploading isn't possible is that that would just make the future that much more like the present. We know how to deal with living in meat bodies already. If it works out that that's the way we're stuck, then, well, I guess we don't need to worry about em catastrophes, and any FAI will really want to work on its biotech.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-06-17T13:42:05.541Z · LW(p) · GW(p)

Ok -- thanks for a detailed response. To be honest, I think you are quibbling. If your posterior is 99.5% (or 95% if being pessimistic), you have made up your mind essentially as far as a mind can be made up in practice. If the answer to the upload question depends on an empirical test that has not yet been done (because of lack of tech), then you made up your mind too soon.

Of course, the real reason that we don't talk too much about what happens if uploading isn't possible is that that would just make the future that much more like the present.

I think a cynic would say you talk about the upload future more because it's much nicer (e.g. you can conquer death!).

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-06-18T16:08:11.467Z · LW(p) · GW(p)

If your posterior is 99.5% (or 95% if being pessimistic), you have made up your mind essentially as far as a mind can be made up in practice. If the answer to the upload question depends on an empirical test that has not yet been done (because of lack of tech), then you made up your mind too soon.

These two statements clash very strongly. VERY strongly.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-06-18T16:30:33.043Z · LW(p) · GW(p)

They don't. 99.5% is far too much.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-06-18T20:58:13.925Z · LW(p) · GW(p)

If you can predict the outcome of the empirical test with that degree of confidence or higher, then they're perfectly compatible. We're talking about what's physically possible with any plan of action and physically possible capabilities, not merely what can be done with today's tech. The negative you're pushing is actually a very, very strong nonexistence statement.

comment by paulfchristiano · 2013-06-11T14:44:40.179Z · LW(p) · GW(p)

I would guess that he thinks that the probability of this hypothetical---worlds in which brain scanning isn't possible---is pretty low (based on having discussed it briefly with him). I'm sure everyone around here thinks it is possible as well, it's just a question of how likely it is. It may be worth fleshing out the perspective even if it is relatively improbable.

In particular, the probability that you can't get a functional human out of a brain scan seems extremely low (indeed, basically 0 if you interpret "brain scan" liberally), and this is the part that's relevant to most futurism.

Whether there can be important aspects of your identity or continuity of experience that are locked up in uncopyable quantum state is more up for grabs, and I would be much more hesitant to bet against that at 100:1 odds. Again, I would guess that Scott takes a similar view.

Replies from: ScottAaronson
comment by ScottAaronson · 2013-06-16T20:18:50.294Z · LW(p) · GW(p)

Hi Paul. I completely agree that I see no reason why you couldn't "get a functional human out of a brain scan" --- though even there, I probably wouldn't convert my failure to see such a reason into a bet at more than 100:1 odds that there's no such reason. (Building a scalable quantum computer feels one or two orders of magnitude easier to me, and I "merely" staked $100,000 on that being possible --- not my life or everything I own! :-) )

Now, regarding "whether there can be important aspects of your identity or continuity of experience that are locked up in uncopyable quantum state": well, I regard myself as sufficiently confused about what we even mean by that idea, and how we could decide its truth or falsehood in a publicly-verifiable way, that I'd be hesitant to accept almost ANY bet about it, regardless of the odds! If you like, I'm in a state of Knightian uncertainty, to whatever extent I even understand the question. So, I wrote the essay mostly just as a way of trying to sort out my thoughts.

comment by Viliam_Bur · 2013-06-11T11:46:43.436Z · LW(p) · GW(p)

we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question [scanning of minds possible?] is yes, and other such worlds where the answer is no

If it is so easy, could someone please explain the main idea to me in less than 85 pages?

(Let's suppose that the scanned mind does not have to be an absolutely perfect copy; that differences as big as the difference between me now and me 1 second later are acceptable.)

Replies from: IlyaShpitser, Ronak
comment by IlyaShpitser · 2013-06-11T12:52:37.030Z · LW(p) · GW(p)

Absolutely, here's the relevant quote:

"The question also has an “empirical core” that could turn out one way or another, depending on details of the brain’s physical organization that are not yet known. In particular, does the brain possess what one could call a clean digital abstraction layer: that is, a set of macroscopic degrees of freedom that

(1) encode everything relevant to memory and cognition,

(2) can be accurately modeled as performing a classical digital computation, and

(3) “notice” the microscopic, quantum-mechanical degrees of freedom at most as pure random-number sources, generating noise according to prescribed probability distributions?"

You could do worse things with your time than read the whole thing, in my opinion.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-06-11T15:12:26.907Z · LW(p) · GW(p)

Thank you for the quote! (I tried to read the article, but after a few pages it seemed to me the author makes too many digressions, and I didn't want to know his opinions on everything, only on the technical problems with scanning brains.)

Do I understand it correctly that the question is, essentially, whether there exists a more efficient way of modelling the brain than modelling all particles of the brain?

Because if there is no such efficient way, we can probably forget about running the uploaded brains in real time.

Then, even assuming we could successfully scan brains, we could get some kind of immortality, but we could not get greater speed, or make life cheaper... which is necessary for the predicted economic consequences of "ems".

Some smaller economic impacts could still be possible, for example if a person were so miraculously productive that even running them at 100× slower speed and 1000× higher cost would be worthwhile. (Not easy to imagine, but technically not impossible.) Or perhaps, if the quality of life increases globally, the costs of real humans could grow faster than the costs of emulated humans, so at some point emulation could become economically meaningful.

Still, my guess is that there probably is a way to emulate the brain more efficiently, because it is a biological mechanism made by evolution, so it has a lot of backwards compatibility and chemistry (all those neurons have metabolism).

Replies from: IlyaShpitser, Ronak
comment by IlyaShpitser · 2013-06-11T17:20:20.640Z · LW(p) · GW(p)

Do I understand it correctly that the question is, essentially, whether there exists a more efficient way of modelling the brain than modelling all particles of the brain?

I don't presume to speak for Scott, but my interpretation is that it's not a question of efficiency but fidelity (that is, it may well happen that classical sims of brains are closely related to the brain/person scanned but aren't the same person, or may indeed not be a person of any sort at all. Quantum sims are impossible due to no-cloning).

For more detailed questions I am afraid you will have to read the paper.

comment by Ronak · 2013-06-13T19:18:44.980Z · LW(p) · GW(p)

No, his thesis is that it is possible that even a maximal upload wouldn't be human in the same way. His main argument goes like this:

a) There is no way to find out the universe's initial state, thanks to no-cloning, the requirement of low entropy, and there being only one copy.

b) So we have to talk about uncertainty about wavefunctions - something he calls Knightian uncertainty (roughly, uncertainty that can't be captured by any single probability distribution, only by a set of candidate distributions; see the toy sketch after this list).

c) It is conceivable that particles in which the Knightian uncertainties linger (i.e. they have never interacted with anything macroscopic enough for decoherence to happen) mess around with us, and it is likely that our brain, and only our brain, is sensitive enough to a single photon for that to change how it would otherwise behave (he proposes Na-ion pathways).

d) We define "non-free" as something that can be predicted by a superintelligence without destroying the system (i.e. you can mess around with everything else if you want, though within reasonable bounds, the interior of which we can see extensively).

e) Because of Knightian uncertainty it is impossible to predict people, if such an account is true.
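To make item (b) slightly more concrete, here is a toy sketch of prediction under a set of distributions rather than a single one; it illustrates the general idea of Knightian (imprecise) probability only, is not taken from the essay, and the interval is invented.

```python
# The predictor does not know a single probability p for "subject presses left",
# only that p lies somewhere in a set (here, an interval).
credal_set = [p / 100 for p in range(35, 66)]   # p anywhere in [0.35, 0.65]

lower, upper = min(credal_set), max(credal_set)
print(f"P(left) is only bounded: [{lower:.2f}, {upper:.2f}]")

# Because the interval straddles 0.5, no single guess is guaranteed to beat chance
# against every distribution in the set: always predicting "left" has a worst-case
# accuracy of only `lower`, and always predicting "right" only 1 - upper.
print(f"worst-case accuracy, always 'left':  {lower:.2f}")
print(f"worst-case accuracy, always 'right': {1 - upper:.2f}")
```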

My disagreements (well, not quite - more, why I'm still compatibilist after reading this):

a) Predictability is different from determinism - his argument never contradicts determinism (modulo probability distributions, but we never gave a shit about that anyway) unless we consider Knightian uncertainties ontological rather than epistemic (and I should warn you that physics has a history of things jumping from one to the other rather suddenly). And if it's not deterministic, according to my interpretation of the word, we wouldn't have free will any more.

b) This freedom is still basically random. It has more to do with your identification of personality than with anything Penrose ever said, because these freebits only hit you rarely and only at one place in your brain - but when they do affect it, they affect it randomly among the considered possibilities.

I'd say I benefited from reading it, because it is a stellar example of steelmanning a seemingly (and really, I can say now that I'm done) incoherent position (well, or of being the steel man of said position). Here's a bit of his conclusion that seems relevant here:

To any “mystical” readers, who want human beings to be as free as possible from the mechanistic chains of cause and effect, I say: this picture represents the absolute maximum that I can see how to offer you, if I confine myself to speculations that I can imagine making contact with our current scientific understanding of the world. Perhaps it’s less than you want; on the other hand, it does seem like more than the usual compatibilist account offers! To any “rationalist” readers, who cheer when consciousness, free will, or similarly woolly notions get steamrolled by the advance of science, I say: you can feel vindicated, if you like, that despite searching (almost literally) to the ends of the universe, I wasn’t able to offer the “mystics” anything more than I was! And even what I do offer might be ruled out by future discoveries.

comment by Ronak · 2013-06-13T19:21:25.622Z · LW(p) · GW(p)

For less than 85 pages, his main argument is in sections 3 and 4, ~20 pages.

comment by MrMind · 2013-06-11T10:36:01.911Z · LW(p) · GW(p)

Easily imagining worlds doesn't mean they are possible or even consistent, as per the p-zombie world.

This is not an argument against Aaronson's paper in general (although I think it's far from correct), but against your deduction.

Plus, I think there exist multiple, reasonable, and independent arguments that favor the LW consensus, and this is evidential weight against Aaronson's paper, not the opposite.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-06-11T10:40:37.416Z · LW(p) · GW(p)

I think he proposes an empirical question the answer to which influences whether e.g. uploading is possible. Do you think his question has already been answered? Do you have links explaining this, if so?

Replies from: MrMind
comment by MrMind · 2013-06-11T12:23:04.287Z · LW(p) · GW(p)

I have yet to read the full paper, so a full reply will have to wait. But I've already commented that he hand-waves away a sensible argument against his thesis, so this is not promising.

comment by Will_Newsome · 2013-06-11T11:19:26.801Z · LW(p) · GW(p)

LW consensus is religion

What an offensive analogy! Please don't tar a vast, nuanced thing like religion with hasty analogies to something as trifling and insignificant as LW consensus. After all, denotation may win arguments, technically, but connotation changes minds—so I beg thee, be careful.