Anyone have any questions for David Chalmers?
post by Solvent · 2012-03-10T21:57:53.248Z · LW · GW · Legacy · 25 comments
I'm doing an undergraduate course on the Free Will Theorem, with three lecturers: a mathematician, a physicist, and David Chalmers as the philosopher. The course is a bit pointless, but the company is brilliant. Chalmers is a pretty smart guy. He studied computer science and math as an undergraduate, before "discovering that he could get paid for doing the kind of thinking he was doing for free already". He's friendly; I've been chatting with him after the classes.
So if anyone has any questions for him that seem interesting enough, I could approach him with them.
Emails to him also work, of course, but discussion in person lets more understanding happen faster. For example, in a short discussion with him I understood his position on consciousness way better than I would have just from reading his papers on the topic.
25 comments
Comments sorted by top scores.
comment by Manfred · 2012-03-11T01:30:27.323Z · LW(p) · GW(p)
Wait, the "free will theorem" meaning the paper co-authored by Conway that proved that if electrons are deterministic, then humans are also deterministic, and if you can violate determinism for humans you can also violate it for electrons? 'Cuz really they just used the definition of "free will" as "violating determinism."
Sounds like a short class.
EDIT: Okay, so I suppose you could cover the history of the idea of free will. And you could teach the physics used in the paper to people who didn't already know it. Or, possibly, you could rehash the same debates a dozen times.
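(For readers meeting the result for the first time, here is a loose, from-memory sketch of its shape under Conway and Kochen's three axioms; this is not the paper's exact statement.)

```latex
% A from-memory sketch of the shape of the Conway-Kochen theorem, not the
% paper's exact statement. SPIN, TWIN and MIN are the paper's three physical
% axioms.
\[
\mathrm{SPIN} \wedge \mathrm{TWIN} \wedge \mathrm{MIN}
\;\Longrightarrow\;
\bigl(\text{experimenters' choices are not functions of the prior state of the universe}
\;\Longrightarrow\;
\text{particles' responses are not functions of the prior state of the universe}\bigr)
\]
% Manfred's paraphrase above is the contrapositive of the inner implication:
% if the particles are deterministic, then so are the experimenters.
```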
Replies from: Solvent
comment by antigonus · 2012-03-11T01:43:15.496Z · LW(p) · GW(p)
Questions on anything, or just topics that relate to the class? If the former, I'd like to hear his response to Drew McDermott's critique of his Singularity article in JCS, even though I think he's going to publish a response to it and others in the next issue.
Replies from: lukeprog
↑ comment by lukeprog · 2012-03-11T08:29:30.460Z · LW(p) · GW(p)
The response Anna and I give in our forthcoming chapter "Intelligence Explosion: Evidence and Import" is the following:
Chalmers (2010) suggested that AI will lead to intelligence explosion if an AI is produced by an "extendible method," where an extendible method is "a method that can easily be improved, yielding more intelligent systems." McDermott (2012a, 2012b) replies that if P≠NP (see Goldreich 2010 for an explanation) then there is no extendible method. But McDermott's notion of an extendible method is not the one essential to the possibility of intelligence explosion. McDermott's formalization of an "extendible method" requires that the program generated by each step of improvement under the method be able to solve in polynomial time all problems in a particular class — the class of solvable problems of a given (polynomially step-dependent) size in an NP-complete class of problems. But this is not required for an intelligence explosion in Chalmers' sense (and in our sense). What intelligence explosion (in our sense) would require is merely that a program self-improve to vastly outperform humans, and we argue for the plausibility of this in section 3 of our chapter. Thus while we agree with McDermott that it is probably true that P≠NP, we do not agree that this weighs against the plausibility of intelligence explosion. (Note that due to a miscommunication between McDermott and the editors, a faulty draft of McDermott (2012a) was published in Journal of Consciousness Studies. We recommend reading the corrected version at http://cs-www.cs.yale.edu/homes/dvm/papers/chalmers-singularity-response.pdf.)
I sent this to Drew and he said he agreed with our rebuttal.
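A rough way to see the contrast in symbols (my own notation, sketching how I read the passage above; it is neither McDermott's nor the chapter's actual formalism):

```latex
% Let P_1, P_2, ... be the successively improved programs produced by the
% method, and let Pi be some fixed NP-complete problem (all notation mine).
%
% McDermott-style extendibility, as paraphrased above: at each step n the
% improved program must solve, in polynomial time, every instance of Pi up to
% a polynomially growing size bound p(n):
\[
\forall n \;\; \forall x \in \Pi \text{ with } |x| \le p(n) : \quad
P_n \text{ decides } x \text{ in time } \mathrm{poly}(|x|).
\]
% The weaker condition the chapter says an intelligence explosion needs: some
% self-improved program eventually far outperforms humans on the relevant tasks,
\[
\exists n : \quad \mathrm{perf}(P_n) \gg \mathrm{perf}(\mathrm{human}),
\]
% which does not require solving NP-complete problems in polynomial time.
```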
Replies from: antigonus, Solvent
↑ comment by antigonus · 2012-03-11T18:48:21.428Z · LW(p) · GW(p)
Do you feel this is a full rebuttal to McDermott's paper? I agree that his generalized argument against "extendible methods" is a straw man; however, he has other points about Chalmers' failure to argue for existing extendible methods being "extendible enough."
Replies from: lukeprog
↑ comment by Solvent · 2012-03-11T11:13:35.755Z · LW(p) · GW(p)
Do you mean "I sent this to Chalmers and he said he agreed with our rebuttal."?
Replies from: lukeprog
↑ comment by lukeprog · 2012-03-11T12:05:52.187Z · LW(p) · GW(p)
No. I sent the rebuttal to Drew McDermott, and Drew McDermott agreed with our rebuttal of Drew McDermott.
Replies from: Normal_Anomaly, Solvent
↑ comment by Normal_Anomaly · 2012-03-12T13:55:57.221Z · LW(p) · GW(p)
Good for Drew McDermott!
comment by wantstojoinin · 2012-03-11T12:19:50.178Z · LW(p) · GW(p)
I'd like to ask him for an explanation of what the hard problem is and why it's an actual problem, in a way that I can understand it (without reference to undefinable things like "qualia" or "subjective experience"). Would probably have to discuss it in person with him and even then doubt either of us would get anywhere though.
Replies from: XiXiDu
↑ comment by XiXiDu · 2012-03-11T13:06:12.538Z · LW(p) · GW(p)
I'd like to ask him for an explanation of what the hard problem is and why it's an actual problem, in a way that I can understand it (without reference to undefinable things like "qualia" or "subjective experience").
Oh yeah, I'd love to see a consciousness for dummies or consciousness 101 written by him. So far I haven't read up on the whole issue but have merely thought about consciousness a few times myself and ran into intractable problems rather quickly. Reading up on the vast literature on the topic, and the hard problem in particular, won't become a priority any time soon, given the massive amount of time it would take me to digest it and the virtually zero expected payoff I assign to doing so.
It would also be interesting to see whether he is able to circumscribe the problem in layman's terms. If he can't, that might hint at the possibility that a lot of the literature is made up of language games, throwing around mysterious terminology that nobody ever bothered to define.
comment by David_Gerard · 2012-03-12T19:53:26.605Z · LW(p) · GW(p)
I'm trying and failing to think of something polite to ask about p-zombies.
Replies from: Tyrrell_McAllister, Grognor
↑ comment by Tyrrell_McAllister · 2012-03-13T22:13:14.648Z · LW(p) · GW(p)
That's okay. The p-zombies won't feel offended.
↑ comment by Grognor · 2012-03-14T19:10:13.891Z · LW(p) · GW(p)
In fairness to Chalmers, from here:
I think that most arguments that use zombies can actually be rephrased in a zombie-free way, so that these arguments can be set aside if one prefers; but zombies at least provide a vivid and provocative illustration.
He does believe in the logical possibility of zombies (though I can't understand why) but does not feel that they are necessary to his position. You can certainly read his papers if you want to know anything more specific.
comment by Will_Newsome · 2012-03-11T18:09:47.681Z · LW(p) · GW(p)
For example, in a short discussion with him I understood his position on consciousness way better than I would have just from reading his papers on the topic.
My girlfriend says the same about Searle.
comment by XiXiDu · 2012-03-11T13:11:03.797Z · LW(p) · GW(p)
I'd like to ask him if it would be possible to frame the hard problem in terms of computer science (e.g. in terms of a cellular automaton), ideally coming up with a mathematical description of the problem.
If Turing-completeness is insufficient to compute consciousness then it should be perfectly possible to pinpoint where computer science breaks down.
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2012-03-11T14:08:38.507Z · LW(p) · GW(p)
ideally coming up with a mathematical description of the problem.
That would be like asking for a mathematical description of the problem "why is there something rather than nothing?"
One way in which people lose their sensitivity to such questions is that they train themselves to turn every problem into something that can be solved by their favorite formalized methods. So if it can't be turned into a program, or a Bayesian formula, or..., it's deemed to be meaningless.
But every formalism starts life as an ontology. Before we "formalized" logic or arithmetic, we related to the ontological content of those topics: truth, reasoning, numbers... The quest for a technical statement of philosophical hard problems often amounts to an evasion of a real ontological problem that underlies or transcends the formalism or discipline of choice.

XiXiDu, you don't strike me as someone who would deliberately do this, so maybe you're just being a little naive - you want to think rigorously, so you're reaching for a familiar model of rigor. But the really hard questions are characterized by the fact that we don't know how to think rigorously about them - we don't have a method, ready at hand, which allows us to mechanically compute the answer.

There was a time when there was no such thing as algebra, or calculus, or propositional logic. How were they invented? Look into that question, and you will be investigating how rigor and method was introduced where previously it did not exist. That is the level at which "hard problems" live.
The combination of computationalism and physicalism has become a really potent ossifier of thought, because it combines the rule-following of formalism with the empirical relevance of physics. "We know it's all atoms, so if we can reduce it to atoms we're done, and neurocomputationalism means we can focus on explaining why a question was asked, rather than on engaging with its content" - that's how this particular reductionism works.
There must be a Zen-like art to reawakening fresh perception of reality in individuals attached to particular formalisms, formulas, and abstractions, but it would require considerable skill, because you have to enter into the formalism while retaining awareness of the ontological context it supposedly represents: you have to reach the heart of the conceptual labyrinth where the reifier of abstractions is located, and then lead them out, so they can see directly again the roots in reality of their favorite constructs, and thereby also see the aspects of reality that aren't represented in the formalism, but which are just as real as those which are.
If Turing-completeness is insufficient to compute consciousness then it should be perfectly possible to pinpoint where computer science breaks down.
But that isn't the problem. Chalmers never asserts that you can't simulate consciousness, in the sense of making an abstract state-machine model that imitates the causal relations of consciousness with the world. The question is why it feels like something to be what we are: why is there any awareness. (There are, again, ways to evade the question here, e.g. by defining awareness behavioristically.)
Replies from: Will_Newsome, XiXiDu
↑ comment by Will_Newsome · 2012-03-20T15:07:51.776Z · LW(p) · GW(p)
There was a time when there was no such thing as algebra, or calculus, or propositional logic. How were they invented? Look into that question, and you will be investigating how rigor and method was introduced where previously it did not exist.
The history of the concept of computation seems very analogous to the development of the concept of justification. I think we're at roughly the Leibniz stage of figuring out justification. (I sorta wanna write up a thorough analysis of this somewhere.)
↑ comment by XiXiDu · 2012-03-11T15:20:06.326Z · LW(p) · GW(p)
One way in which people lose their sensitivity to such questions is that they train themselves to turn every problem into something that can be solved by their favorite formalized methods.
I tried to ask a question that best fits this community. The answer to it would be interesting even if it is a wrong question.
Besides, I am not joking when I say that I haven't thought about the whole issue. It simply does not have any priority at this point, because it is a very complex issue and I still have to learn other things first. I admit my ignorance here. Yet I used the chance to indirectly ask one of the leading experts in the field a question that I thought would suit the Less Wrong community.
So if it can't be turned into a program, or a Bayesian formula, or..., it's deemed to be meaningless.
I don't think that way at all. I think it is a fascinating possibility, I am very glad that people like you take it seriously, and I encourage you to keep it up and not let yourself be discouraged by negative reactions.
Yet I don't know what it would mean for a problem to be algorithmically or mathematically undefinable. I can only rely on my intuition here and admit that I feel that there are such problems. But that's it.
But every formalism starts life as an ontology.
You pretty much lost me at ontology. All I really know about the term "ontology" is the Wikipedia abstract (I haven't read your LW posts on the topic either).
Please just stop here if I am wasting your time. I really didn't do the necessary reading yet.
To better be able to fathom what you are talking about, here are three points you might or might not agree with:
1) "Experiences like "green" are ontologically basic, are not reducible to interacting nonmental parts."
Well, I don't know what "mental" denotes so I am unable to discern it from that which is "nonmental".
Isn't the reason that we perceive "green" instead of "particle interactions" the same as the reason we don't perceive the earth to be a sphere? Due to a lack of information and our spatial relation to the earth, we perceive it to be round from afar and flat at close range.
If you view "green" as an approximation then the fact that there are "subjects" that experience "green" can be deduced from fundamental limitations in the propagation and computation of information.
It is simply impossible for a human being to see the earth as a sphere; we are only able to deduce it. Does that mean that the flatness of the earth is ontologically basic, because it does not follow from physics? Well, it does follow from physics.
2) "If consciousness is the spatial arrangement of particles, then you ought to be able to build a conscious machine in the Game of Life. It is Turing-complete after all, and most physicalists insist that consciousness is computable as well.
But I find that absurd. Sure you can get complex patterns, but how would these patterns share information? Individual cells aren't aware of whatever pattern they're in. If there is awareness, it should either cover only individual cells (no unified field of consciousness, only pixel-like awareness) or cover all cells - panpsychism, which isn't true either. It would be really weird to look at the board and say, this 10000x10000 section of the board is consciousness, but the 10x10 section to the left isn't, nor is this 10k x 10k section of noise.
Where is the information coming from that allows someone to make this judgment about consciousness? It doesn't seem to be in the board. So if you stick to the automaton, you have only two options."
The above has been written by user:muflax in a reply on Google+. I don't know enough about cellular automata, but it sure sounds like a convincing argument that Turing-completeness is insufficient. (For concreteness, a minimal Game of Life step is sketched just after this list.)
3) See my post here.
How is what I wrote there related to the overall problem, if at all, and do you agree?
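For concreteness, here is a minimal Game of Life step in Python (the standard rules, nothing specific to muflax's argument; the function name and the blinker example are just for illustration). The point it makes vivid is the locality the argument above leans on: each cell's next state is a function of its own state and its eight neighbours only, so no cell "sees" the larger pattern it belongs to.

```python
from collections import Counter

def life_step(live_cells):
    """One update of Conway's Game of Life.

    live_cells: a set of (x, y) coordinates of live cells.
    Returns the set of live cells after one step.
    """
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next step if it has exactly 3 live neighbours,
    # or if it is currently live and has exactly 2.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker" oscillates between a horizontal and a vertical bar of three cells.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))  # the vertical bar {(1, 0), (1, 1), (1, 2)} (set order may vary)
```

Whether patterns on such a board could or couldn't "share information" in the sense muflax is after is exactly the question, of course; the code only shows what the substrate does.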
Before we "formalized" logic or arithmetic, we related to the ontological content of those topics: truth, reasoning, numbers...
I am not sure that truth, reasoning or numbers really exist in any meaningful sense because I don't know what it means for something to "exist". There are no borders out there.
There was a time when there was no such thing as algebra, or calculus, or propositional logic. How were they invented? Look into that question...
I don't like the word "invented" very much. I think that everything is discovered. And the reason for the discovery is that on the level that we reside, given that we all have similar computational limits, the world appears to have distinguishable properties that can be labeled by the use of shapes. But that's simply a result of our limitations rather than hinting at something more fundamental.
The question is why it feels like something to be what we are: why is there any awareness.
Is the human mind a unity? Different parts seem to observe each other. What you call an "experience" might be the interactive inspection of conditioned data. A sort of hierarchical computation that decreases quickly. Your brain might have a strong experience of green but a weaker experience that you are experiencing green. That you have an experience of the experience of the experience of green is an induction that completely replaces the original experience of green and is observed instead.
I have this vision of consciousness that is similar to two cameras behind semi-transparent mirrors facing each other, fainting as all computational resources are being exhausted.
I don't know how this could possibly work out given a cellular automaton though.
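A toy numerical sketch of the "decreases quickly" part of that picture (made-up numbers and a made-up function name, nothing more than an illustration): if each meta-level of observation gets only a fixed fraction of the resources of the level it observes, the regress peters out after a handful of levels rather than continuing forever.

```python
def observation_levels(total_budget=100.0, fraction=0.1, minimum=0.5):
    """Toy model only: each level of 'observing the level below' gets
    `fraction` of the resources of the level it observes; levels whose budget
    falls below `minimum` never get off the ground. All numbers are arbitrary."""
    levels = []
    budget = total_budget
    while budget >= minimum:
        levels.append(budget)
        budget *= fraction  # the observer of this level gets far less
    return levels

print(observation_levels())
# [100.0, 10.0, 1.0]: a strong experience of green, a weaker experience of
# experiencing green, a barely-there third level, and nothing after that.
```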
Chalmers never asserts that you can't simulate consciousness, in the sense of making an abstract state-machine model that imitates the causal relations of consciousness with the world.
Wow okay, I seem to confuse him with someone else.
comment by Anubhav · 2012-03-17T17:08:08.948Z · LW(p) · GW(p)
He studied computer science and math as an undergraduate, before "discovering that he could get paid for doing the kind of thinking he was doing for free already".
Somehow, this had never occurred to me, although connecting the dots it was obvious someone had to be paying these guys.
Intriguing idea, though. How likely is a J. Random Philosopher to make a good living?
comment by FiftyTwo · 2012-03-13T00:53:58.308Z · LW(p) · GW(p)
If you get a chance, go drinking with him. I met him at a conference at my university and he was great fun; in my experience, most major philosophical discussion actually happens in the bar, not the lecture theatre. (That conference also resulted in karaoke about p-zombies, if I recall correctly.)
Replies from: khafra↑ comment by khafra · 2012-03-14T12:12:13.331Z · LW(p) · GW(p)
It was probably the 21st Century Monads, Chalmers' favorite band.
comment by hankx7787 · 2012-03-16T20:29:37.713Z · LW(p) · GW(p)
How would you respond to this?
I would defend Type A materialism:
http://consc.net/papers/nature.html
"One way to argue for type-A materialism is to argue that there is some intermediate X such that (i) explaining functions suffices to explain X, and (ii) explaining X suffices to explain consciousness. One possible X here is representation: it is often held both that conscious states are representational states, representing things in the world, and that we can explain representation in functional terms. If so, it may seem to follow that we can explain consciousness in functional terms. On examination, though, this argument appeals to an ambiguity in the notion of representation. There is a notion of functional representation, on which P is represented roughly when a system responds to P and/or produces behavior appropriate for P. In this sense, explaining functioning may explain representation, but explaining representation does not explain consciousness. There is also a notion of phenomenal representation, on which P is represented roughly when a system has a conscious experience as if P. In this sense, explaining representation may explain consciousness, but explaining functioning does not explain representation. Either way, the epistemic gap between the functional and the phenomenal remains as wide as ever. Similar sorts of equivocation can be found with other X's that might be appealed to here, such as "perception" or "information.""
The function of the brain is not merely to take in the phenomena of the environment and automatically output a behavior, as the functional "representations" of a simple circuit do - with such a model one could not say that functional representation explains a phenomenal, volitional consciousness. Rather, the function of the brain is a very sophisticated, active physical process involving representations complex and dynamic enough to be suitable for reason - representations of the environment, of one's imagination/thoughts/deliberation, and of one's volitional actions - i.e. the phenomenal, volitional experiences of consciousness. In this way, explaining functioning explains representation, and explaining representation explains consciousness. Hence there is no epistemic gap between physical and phenomenal truths, and there is no "hard problem" of explaining consciousness remaining once one has solved the easy problems of explaining the various cognitive, behavioral, and environmental functions.