Eight questions for computationalists
post by dfranke · 2011-04-13T12:46:40.017Z · LW · GW · Legacy · 89 comments
This post is a followup to "We are not living in a simulation" and intended to help me (and you) better understand the claims of those who took a computationalist position in that thread. The questions below are aimed at you if you think the following statement both a) makes sense, and b) is true:
"Consciousness is really just computation"
I've made it no secret that I think this statement is hogwash, but I've done my best to make these questions as non-leading as possible: you should be able to answer them without having to dismantle them first. Of course, I could be wrong, and "the question is confused" is always a valid answer. So is "I don't know".
1. As it is used in the sentence "consciousness is really just computation", is computation:
a) Something that an abstract machine does, as in "No oracle Turing machine can compute a decision to its own halting problem"?
b) Something that a concrete machine does, as in "My calculator computed 2+2"?
c) Or, is this distinction nonsensical or irrelevant?
2. If you answered "a" or "c" to question 1: is there any particular model, or particular class of models, of computation, such as Turing machines, register machines, lambda calculus, etc., that needs to be used in order to explain what makes us conscious? Or, is any Turing-equivalent model equally valid?
3. If you answered "b" or "c" to question 1: unpack what "the machine computed 2+2" means. What is that saying about the physical state of the machine before, during, and after the computation?
4. Are you able to make any sense of the concept of "computing red"? If so, what does this mean?
5. As far as consciousness goes, what matters in a computation: functions, or algorithms? That is, does any computation that gives the same outputs for the same inputs feel the same from the inside (this is the "functions" answer), or do the intermediate steps matter (this is the "algorithms" answer)?
6. Would an axiomatization (as opposed to a complete exposition of the implications of these axioms) of a Theory of Everything that can explain consciousness include definitions of any computational devices, such as "and gate"?
7. Would an axiomatization of a Theory of Everything that can explain consciousness mention qualia?
8. Are all computations in some sense conscious, or only certain kinds?
ETA: By the way, I probably won't engage right away with individual commenters on this thread except to answer requests for clarification. In a few days I'll write another post analyzing the points that are brought up.
comment by cousin_it · 2011-04-13T13:23:29.502Z · LW(p) · GW(p)
I don't know the answer to any of these questions, and I don't know which of them are confused.
Here's a way to make the statement "consciousness is computation" a little less vague; let's call the new version X: "you can simulate a human brain on a fast enough computer, and the simulation will be conscious in the same sense that regular humans are, whatever that means". I'm not completely sure if X is meaningful, but I assign about 80% probability to its being meaningful and true, because the current scientific consensus says individual neurons operate in the classical regime: they're too large for quantum effects to be significant.
But even if X turns out to be meaningful and true, I will still have leftover object-level questions about consciousness. In particular, knowing that X is true won't help me solve anthropic problems until I learn more about the laws that govern multiple instantiations of isomorphic conscious thingies, whatever that means. Consciousness could "be" one instantiated computation, or an equivalence class of computations, or an equivalence class plus probability-measure, or something even more weird. I don't believe we can enumerate all the possibilities today, much less choose one.
comment by XiXiDu · 2011-04-13T14:01:29.438Z · LW(p) · GW(p)
There is too much vagueness involved here. A better question would be whether there is any reason to believe that, even though evolution could create consciousness, we cannot.
No doubt we don't know much about intelligence and consciousness. Do we even know enough to be able to tell that the use of the term "consciousness" makes sense? I don't know. But what I do know is that we know a lot about physics and biological evolution, and that we are physical and an effect of evolution.
We know a bit less about the relation between evolutionary processes and intelligence but we do know that there is an important difference and that the latter can utilize the former.
Given all that we know, is it reasonable to doubt the possibility that we can create "minds", conscious and intelligent agents? I don't think so.
Replies from: byrnema, Laoch, lessdazed
↑ comment by byrnema · 2011-04-13T15:56:01.040Z · LW(p) · GW(p)
A better question would be whether there is any reason to believe that, even though evolution could create consciousness, we cannot.
Very good point! Even if consciousness does require something mysterious and metaphysical we don't know about, if it's harnessed within us (and robustly passes from parent to child over billions of births), we can harness it elsewhere.
↑ comment by Laoch · 2012-08-24T17:19:48.343Z · LW(p) · GW(p)
I reject "Consciousness is really just computation" if you define computation as the operation of contemporary computers rather than brains, but I wholeheartedly agree that we are physical and an effect of evolution, as is our subjective experience. I just don't think that the mind/consciousness is solely the neural connections of one's brain. Cell metabolism, whole-organism metabolism, and the environment of that organism also define the conscious experience. If it's reduced to a neural net, important factors will most certainly be lost.
Replies from: shminux, Dolores1984
↑ comment by Shmi (shminux) · 2012-08-24T20:01:50.454Z · LW(p) · GW(p)
Does this mean that amputees should be less conscious?
Replies from: gwern, Laoch
↑ comment by gwern · 2012-08-24T22:36:02.594Z · LW(p) · GW(p)
Maybe not with humans, but definitely for octopuses!
(More seriously, depending on how seriously you take embodied cognition, there may be some small loss. I mean, we know that your gut bacteria influence your mood via the nerves to the gut; so there are connections. And once there are connections, it becomes much more plausible that cut connections may decrease consciousness. After a few weeks in a float tank, how conscious would you be? Not very...)
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-08-24T23:22:26.702Z · LW(p) · GW(p)
I'm pretty sure that you agree that none of this means that a human brain in a vat with proper connections to the environment, real or simulated, is inherently less conscious than one attached to a body.
Replies from: gwern
↑ comment by Dolores1984 · 2012-08-24T18:06:23.886Z · LW(p) · GW(p)
Well, that ought to be testable. If we upload a human, and the source of consciousness is lost, they should stop feeling it. Provided they're honest, we can just ask them.
Replies from: Laoch, None
↑ comment by lessdazed · 2011-04-13T14:23:16.865Z · LW(p) · GW(p)
Do we even know enough to be able to tell that the use of the term "consciousness" makes sense? I don't know.
Is there a better word than "consciousness" for the explanation for why (I think I) say "I see red" and "I am conscious"? I do (think I) claim those things, so there is a causal explanation.
Replies from: Pfft
↑ comment by Pfft · 2011-04-14T01:01:18.181Z · LW(p) · GW(p)
I think any word would be better than "consciousness"! :) It really is a very confusing term, since it is often used (vaguely) to refer to quite different concepts.
Cognitive scientists often use it to mean something similar to "attention" or as the opposite of "unconscious". This is an "implementation level" view -- it refers to certain mechanisms used by the brain to process information.
Then there is what Ned Block calls "access consciousness", "the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior" (to quote Wikipedia). This is a "functional specification level" view: consciousness is correctly implemented if it lets you accurately describe the world around you or the state of your own mind.
Then finally there's "phenomenological consciousness" or qualia or whatever you want to call it -- the mystical secret sauce.
No doubt these are all interrelated in complicated ways, but it certainly does not help matters to use terminology which further blurs the distinction. Especially since they are not equally mysterious: the actual implementation in the brain will take a long time to figure out, and as for the qualia it's hard to say even what a successful answer would look like. But at the functional specification level, it seems quite easy to give a (teleological) explanation. That is, it's easy to see that an agent benefits from being able to represent the world (and be able to say "I see a red thing") and to reason about itself ("each time I see a red thing I feel hungry"). So it's not very mysterious that we have mental concepts for "what I'm currently feeling", etc.
comment by Perplexed · 2011-04-13T16:51:27.102Z · LW(p) · GW(p)
The distinction doesn't make sense to me. But then neither does the statement "Consciousness is really just computation." The only charitable reading I can give that statement is "Consciousness is really just and, as you will notice, the only really powerful or mysterious component of that system is computation". But even with that clarification, I really don't understand what you are getting at with the a vs b distinction. I get the impression that you attach a lot more importance to the (abstract vs concrete) distinction than I do.
It is probably responsive here to state that there is one aspect of consciousness that the Turing machine model of computation completely fails to capture. This is the fact that consciousness is inherently (essentially) embedded in time. One thinks conscious thoughts concurrently with ongoing processing of sense data. Google for the CompSci research papers of Peter Wegner and Dina Goldin to see how the Turing machine model of computation is inadequate for many purposes. Including, IMHO, a central role in the explanation of consciousness.
I'm not sure what you are getting at here, but yes, I do think that some of the more physical aspects of computation - such as that it takes time and produces entropy - may be relevant regarding its use in modeling consciousness.
I am unable to make sense of the concept of "experiencing red"! I am not just a 'qualia' agnostic. I favor the torture and burning of anyone who even mentions 'qualia'. Particularly color qualia.
Ah. Extensional functions vs intensional algorithms. You are flirting with the right ideas here, but not quite nailing the essential issues. As mentioned above in #2, the key question is how the computation is embedded in time rather than whether the individual steps of the computation are achieved intensionally or extensionally.
Interesting question. Just how fundamental is computation? I suppose you realize that a ToE might have a variety of axiomatizations, and that some of them might include AND gates as primitives, while others do not.
No. And take care lest ye be taken before the Inquisitor.
No computations are conscious. But all consciousnesses embed computations. And some computations (but no special kinds of computations) are embedded in consciousnesses, or, more generally, embedded in minds.
Now some questions for you. Somewhat rhetorical, so you only need respond to the overall implicit argument.
- What is it like to be a bat?
- What is it like to be a pocket calculator?
- What would it be like to be an embedded controller for this thing? More like being a bat than being a pocket calculator?
- Is your answer based more on extensional (how does it appear to behave?) or intensional (how is it implemented internally?) thinking?
- Why is this appropriate?
- Is your evidence regarding the nature of your own consciousness extensional or intensional?
- Are you sure of that?
↑ comment by PhilGoetz · 2011-04-14T23:07:14.638Z · LW(p) · GW(p)
I am unable to make sense of the concept of "experiencing red"! I am not just a 'qualia' agnostic. I favor the torture and burning of anyone who even mentions 'qualia'. Particularly color qualia.
Then why favor torturing and burning them, instead of feeding them ice cream?
Please explain - to me, it sounds like you are claiming to be a p-zombie. Even p-zombies shouldn't do that.
comment by ata · 2011-04-14T03:23:11.878Z · LW(p) · GW(p)
I think you're completely mistaken about what computationalism claims. It's not that consciousness is a mysterious epiphenomenon of computation-in-general; it's more that we expect consciousness to be fully reducible to specific algorithms. "Consciousness is really just computation" left at that would be immediately rejected as a mysterious answer, fake causality, attempting to explain away what needs only to be explained, and other related mistakes; 'computation' only tells us where we should be looking for an explanation of consciousness, it can't claim to be one itself.
(But my answers would be: (1) c, irrelevant; (2) that's not the level of abstraction where you'd be explaining consciousness, any more than you'd talk about Turing machines or register machines in explaining Photoshop; (4) no; (5) I'm leaning toward functions, but with a broad view of what should be considered "input" and "output" — e.g. your internal monologue serves as both; (6)(7) what's the relevance of a Theory of Everything here? (8) only certain kinds, obviously.)
comment by RobinZ · 2011-04-13T13:18:18.260Z · LW(p) · GW(p)
Tentatively, my gut reactions are:
1. (c)
2. Any Turing, I expect.
3. To say that a machine computed 2+2 means that it had taken data representing "2" and "2" and performed an operation which, based on the same interpretation that establishes the isomorphism, is equivalent to addition of an arbitrary pair of numbers. (A minimal sketch follows this list.)
4. "Computing red" makes as much sense as "computing 2", and for roughly the same reasons. "Red" is a symbol representing either emissive or reflective color.
5. Algorithm, at a guess, but the distinction is moot - the output of the algorithm will include references to consciousness in such detail that I find it implausible that any other algorithm implementing the same function will fail to share the relevant features.
6. I have no idea what this question means.
7. If you're talking about physics, the question makes no sense - a ToE in physics is no more likely to explain consciousness than it is to explain economics.
8. Consciousness appears to be a self-reflective behavior to me. At a minimum, any conscious algorithm would have to reflect this.
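A minimal Python sketch of the reading in answer 3 (the toy "machine", its single update step, and the encode/decode interpretation are all invented for illustration): "the machine computed 2+2" means that, under a fixed interpretation of its physical states, the state it ends in represents the sum of the numbers its starting state represented.

```python
# Toy "machine": its physical state is just a pair of token counts.
def step(state):
    """One physical update: pour the second pile of tokens into the first."""
    a, b = state
    return (a + b, 0)

# The interpretation that establishes the isomorphism between physical
# states and numbers (here, trivially, "count the tokens in pile 0").
def encode(x, y):
    return (x, y)

def decode(state):
    return state[0]

if __name__ == "__main__":
    initial = encode(2, 2)      # physical state representing "2" and "2"
    final = step(initial)       # the machine runs
    assert decode(final) == 4   # under the same interpretation, it computed 2+2
    print(decode(final))
```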
↑ comment by Peterdjones · 2011-04-13T18:19:50.182Z · LW(p) · GW(p)
(4) It is easy to see how "red" could be computed in that sense. The OP clearly thinks it is difficult, so presumably had another sense in mind.
Replies from: RobinZ
↑ comment by RobinZ · 2011-04-13T18:57:29.137Z · LW(p) · GW(p)
If so, further elaboration would be helpful.
Replies from: Peterdjones
↑ comment by Peterdjones · 2011-04-13T19:59:29.505Z · LW(p) · GW(p)
Well, the existence of qualia is a classic objection to computationalism and physicalism, and red is the classic quale.
Replies from: RobinZ
comment by [deleted] · 2011-04-13T16:48:54.308Z · LW(p) · GW(p)
I think you will find this paper useful--Daniel Dennett answers some of these questions and explains why he thinks Searle is wrong about consciousness. Pretty much all of the positions Dennett endorses therein are computationalist, so it should help you organize your thoughts.
comment by Giles · 2011-04-13T16:44:20.411Z · LW(p) · GW(p)
I feel that dfranke's questions make all kinds of implicit assumptions about the reader's worldview, which makes them difficult for most computationalists to answer. I've prepared a different list - I'm not really interested in answers, just an opinion as to whether they're reasonable questions to ask people or whether they only make sense to me.
But you can answer them if you like.
For probability estimates, I'm talking about subjective probability. If you believe it doesn't make sense to give a probability, try answering as a yes/no question and then guess the probability that your reasoning is flawed.
1: Which of these concepts are at least somewhat meaningful?
a) consciousness
b) qualia
2: Do you believe that an agent is conscious if and only if it experiences qualia?
3: Are qualia epiphenomenal?
4: If yes:
a) Would you agree that there is no causal connection between the things we say about qualia and the actual qualia we experience?
b) Are there two kinds of qualia: the ones we talk about and the ones we actually experience?
5: Is it possible to build a computer simulation of a human to any required degree of accuracy?
a) If you did, what is the probability that simulation would be conscious/experience qualia?
b) Would this probability depend on how the simulation is constructed or implemented?
6: What is the probability that we are living in a simulation?
a) If you prefer to talk about how much "measure of our existence" comes from simulations, give that instead
7: What is the probability that a Theory of Everything would explain consciousness?
8: Would you agree that it makes sense to describe a universe as "real" if and only if it contains conscious observers?
9: Suppose the universe that we see can be described completely by a particular initial state and evolution rule. Suppose also for the sake of simplicity that we're not in a simulation.
a) What is the probability that our universe is the only "real" one?
b) What is the probability that all such describable universes are "real"?
c) If they are all "real", are they all equally real or does each get a different "measure"? How is that measure determined?
d) Are simulated universes "real"? How much measure do they inherit from their parent universe?
10: Are fictional universes "real"? Do they contain conscious observers? (Or give a probability)
a) If you answered "no" here but answered "yes" for simulated universes, explain what makes the simulation special and the fiction not.
11: Is this entire survey nonsense?
Replies from: dfranke, Giles, TheOtherDave, Perplexed, cousin_it
↑ comment by dfranke · 2011-04-13T18:12:49.936Z · LW(p) · GW(p)
I'll save my defense of these answers for my next post, but here are my answers:
1. Both of them.
2. Yes. The way I understand these words, this is a tautology.
3. No. Actually, hell no.
4. N/A
5. Yes; a. I'm not quite sure how to make sense of "probability" here, but something strictly between 0 and 1; b. Yes.
6. Negligibly larger than 0.
7. 1, tautologically.
8. For the purposes of this discussion, "No". In an unrelated discussion about epistemology, "No, with caveats."
9. This question is nonsense.
10. No.
11. If I answered "yes" to this, it would imply that I did not think question 11 was nonsense, leading to contradiction.
↑ comment by Giles · 2011-04-13T20:11:43.737Z · LW(p) · GW(p)
I'll try and clarify the questions which came out as nonsense merely due to being phrased badly (rather than philosophical disagreement).
5: I basically meant, "can you simulate a human brain on a computer?". The "any degree of accuracy" thing was just to try and prevent arguments of the kind "well you haven't modelled every single atom in every single neuron", while accepting that a crude chatbot isn't good enough.
7: By "Theory of everything" I mean a set of axioms that will in principle predict the result of any physics experiment. Would you expect to see equations such as "consciousness = f(x), qualia = g(x)"? Or would you instead say "these equations describe the physical world to any required level of detail, yet I still don't see where the consciousness comes from"? (EDIT: I'm still not making sense here, so it may be best just to ignore this one)
8: People seem more eager to taboo the word "real" than the word "conscious". Not sure there's much I can do to rephrase this one. I wrote it in order to frame q9, which was easier to phrase in terms of reality than consciousness.
9: Sorry for the inferential distance. I was basically referring to the concept some people here call "reality fluid". A better question might be: how do you resolve Eliezer Yudkowsky's little confusion here?
http://lesswrong.com/lw/19d/the_anthropic_trilemma/
11: This question is referring to q2-10 only.
↑ comment by TheOtherDave · 2011-04-13T19:24:47.632Z · LW(p) · GW(p)
Oh, all right. I'm bored and suggestible.
1 - Both potentially meaningful
2 - That's a question about the meanings of words. I don't object to those constraints on the meanings of those words, though I don't feel strongly about them.
3 - If "qualia" is meaningful (see 1), then no.
4 - N/A
5 - Ugh. "Any required degree" is damningly vague. Labeling confidence levels as follows:
- C1 that it's in-principle-possible to build as good a simulation of a particular human as any other human is.
- C2 that it's ipp to build a good enough simulation of a human that no currently existing test could reliably tell it apart from the original.
- C3 that it's ipp to build one that could pass an "interview test" (a la Turing) with the most knowledgeable currently available judges.
...I'd say C1 > C2 > C3 > 99%, though C2 would require also implementing the computer in neurons in a cloned body.
5a - Depends on the required level of accuracy: ~0% for a stone statue, for example. For any of the above examples, I'd expect it to do so as much as the original does.
5b - Not in the sense you mean.
6 - I am not sure that question makes sense. If it does, accurate priors are beyond me. For lack of anything better, I go with a universal prior of 50%.
7 - Mostly that's a question about definitions... if it doesn't explain consciousness, is it really a Theory of Everything? But given what I think you mean by ToE: 99+%.
8 - Question about definitions. I'm willing to constrain my definition of "real" that way, for the sake of discussion.
9 - I have no idea and am not convinced the questions make sense, x4.
10 - x5.
11 - Not entirely, though it is a regular student at a nonsensei-run dojo.
↑ comment by Perplexed · 2011-04-13T17:28:08.338Z · LW(p) · GW(p)
Is this entire survey nonsense?
No, though parts of it were. Of course, people here who agree with me on that will likely disagree as to which parts those are.
The main virtue of this list, and of dfranke's list that led to its production, is that the list stimulates thinking. For example, your question 9c struck me as somewhat nonsensical, and I think I learned something by trying to read some sense into it. (A space can have many measures. One imposes a particular measure for some purpose. What are we trying to accomplish by imposing a measure here?)
Another thought stimulated by your list of questions was whether it might be interesting/useful/fun to produce a LessWrong version of the Philpapers survey. My conclusion was that it would probably require more work than it would be worth. But YMMV, so I will put the idea "out there".
comment by zaph · 2011-04-13T13:29:24.610Z · LW(p) · GW(p)
I would describe myself as a computationalist by default, in that I can't come up with an ironclad argument against it. So, here are my stabs:
1) I'm not sure what you mean by an abstract machine (and please excuse me if that's a formal term). Is that a potential or theoretical machine? That's how I'm reading it. If that's the case, I would say that CIRJC means both a and b. It's a computation of an extremely sophisticated algorithm, the way 2 + 2 = 4 is the computation of a "simple" one (that still needs something really big like math to execute).
2) I don't know if there needs to be a particular class of models; do you mean we know in advance what the particular human consciousness model is? I'd probably say we'd need several models operating in parallel, and that set would be the "human consciousness model".
3) To me, that just means that a simple state machine took in an input, executed some steps, and provided an output on a screen. There was some change of register positions via electricity.
4) Computing red: here's where qualia are going to make things messy. In a video game, I don't have any problem imagining someone issuing a command to a Sim to "move the red box", and the Sim would do so. That's all computation (I don't think there's "really" a Sim, or a red box for that matter, living in my TV set), but the video game executed what I was picturing in my head via internal qualia. So it seems like there would be an approximation of "computing" red.
5) I don't have any problem saying the algorithm would be very important. I can put this in completely human terms. A psychopath can perfectly imitate emotions, and enact the exact same behavioral output as someone else in similar circumstances. The internal algorithm, if you will, is extremely different however.
6) I would say this is an emphatic yes. Neurons, for instance, serve as some sort of gate analog.
7) I think it would mention qualia, in as much as people would ask about it (so there would at least be enough of an explanation to explain it away, so to speak).
8) I don't think computations are conscious in and of themselves. If I'm doing math in a notebook, I don't think the equations are conscious. I don't think the circuitry of a calculator or a computer is conscious. That said, I don't think individual cells of my brain are conscious, nor that if you were to remove a portion of a person's brain (surgery for cancer, for example) those portions would remain conscious, or that the person would be less conscious by the percentage of tissue removed. Consciousness, to me, may be algorithmically based, but is still the awareness of self, history, etc. that makes humans human. Saying CIRJC doesn't remove the complexity of the calculation.
I haven't read that other thread; can I ask what your opinions are? Briefly of course, and while I can't speak for everyone else, I promise to read them as thumbnails and not absolute statements to be used against you. You could point to writers (Searle? Penrose?) if you like.
Replies from: dfranke, dfranke
↑ comment by dfranke · 2011-04-13T16:36:57.887Z · LW(p) · GW(p)
I haven't read that other thread; can I ask what your opinions are? Briefly of course, and while I can't speak for everyone else, I promise to read them as thumbnails and not absolute statements to be used against you. You could point to writers (Searle? Penrose?) if you like.
Searle, to a zeroth approximation. His claims need some surgical repair, but you can do that surgery without killing the patient. See my original post for some "first aid".
↑ comment by dfranke · 2011-04-13T15:58:16.213Z · LW(p) · GW(p)
I'm not sure what you mean by an abstract machine (and please excuse me if that's a formal term)
I'd certainly regard anything defined within the framework of automata theory as an abstract machine. I'd probably accept substitution of a broader definition.
comment by Kaj_Sotala · 2011-04-14T09:04:13.199Z · LW(p) · GW(p)
1. I'm not sure what exactly you're after with this question, or what the question would even mean.
2. Any Turing-equivalent model seems equally valid.
3. In my mind, "a machine computed X" means that we can use the machine to figure out the answer to X. For instance, John Searle claims that any physical process can be interpreted to instantiate any computation, given a complex enough interpretation. According to this view, e.g. an arbitrary wall can be said to be computing 2+2 as well as 583403 + 573493. But the flaw here is that you cannot actually use the wall to tell you the answer to 583403 + 573493. If you already know the answer, you can come up with a contrived mapping of the wall's atoms to a computation giving the result, but then you cannot use this mapping to tell you the result of any other calculation. So "the machine computed 2+2" means that after the computation, the machine was in such a state that you could somehow read "2+2 = 4" from its state. (A toy illustration follows this list.)
4. "Computing red" means that a system has an internal representation of the external world, where a specific kind of sensory data produced by the eyes (or equivalent sensors) is coded as being that specific type. This coding is subjectively experienced as the color red.
5. The intermediate steps matter. A giant look-up table wouldn't be conscious, though the process that originally produced the table could be.
6. I have no idea what a physical theory explaining consciousness would be like. So I don't know.
7. See above.
8. Only certain kinds, though I'm unsure of the exact properties required.
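A toy Python illustration of the point in answer 3 (the wall stand-in, the contrived mapping, and the adder are all invented for this sketch): a post-hoc "interpretation" only answers the question whose answer was baked into it, while a genuine computer answers novel questions.

```python
import random

wall_state = [random.random() for _ in range(10)]  # stand-in for a wall's atoms

def contrived_interpretation(known_question, known_answer):
    """Built only after the answer is known: it 'reads' that answer off the
    wall for that one question and is silent about everything else."""
    table = {known_question: known_answer}
    def interpret(state, question):
        return table.get(question)   # the wall's state is never actually used
    return interpret

def adder(question):
    """A usable computer: it answers questions nobody has answered before."""
    x, y = question
    return x + y

interpret = contrived_interpretation((2, 2), 4)
print(interpret(wall_state, (2, 2)))            # 4 -- only because we supplied it
print(interpret(wall_state, (583403, 573493)))  # None -- the wall is no help here
print(adder((583403, 573493)))                  # 1156896 -- computed on demand
```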
comment by DanielLC · 2011-04-14T06:58:57.685Z · LW(p) · GW(p)
I suppose I can consider myself a weak computationalist. I think a computer running a human mind will generate qualia, if it's a simple enough computer. After all, you could interpret a rock in such a way that it's a computer running a human mind.
It's the algorithm that matters.
- c, or at least I don't understand the distinction.
- Any sufficiently simple Turing machine. Since there's nothing that can clearly be called the output, if you didn't limit it in some way, you could say that a clock is a Turing machine if you map each time to the state the Turing machine would have at that time.
- A function was given two inputs that map to two by a sufficiently simple mapping. The result of the function tends to be the sum. In this case, it maps to four.
- You could have an image-recognition program that involves computations about the color red. It would generate a qualia distinct from what it calls "green". I very much doubt it would resemble the qualia of red you feel. In fact, I doubt we even have the same qualia for red.
- Algorithms, although at some point you get to steps where it's just functions.
- What do you mean by "consciousness"? If you mean that it generates qualia, yes. If you mean intelligence, of course not. You could run an intelligent program on Conway's Game of Life.
- Yes.
- I'd say all, but not all equally conscious.
comment by XiXiDu · 2011-04-13T18:57:10.258Z · LW(p) · GW(p)
I'm currently having an exchange with Massimo Pigliucci of Rationally Speaking, who might be known here due to his Bloggingheads debate with Eliezer Yudkowsky, where he claimed that "you can simulate the 'logic' of photosynthetic reactions in a computer, but you ain't gonna get sugar as output." I have a hard time wrapping my mind around his line of reasoning, but I'll try:
Let's assume that you wanted to simulate gold. What does it mean to simulate gold?
According to Wikipedia to simulate something means to represent certain key characteristics or behaviours of a selected physical system.
If we were going to simulate the chemical properties of gold, would we be able to use it as a vehicle for monetary exchange on the gold market? Surely not, some important characteristics seem to be missing. We do not assign the same value to a simulation of gold that we assign to gold itself.
What would it take to simulate the missing properties? A particle accelerator or nuclear reactor.
In conclusion, we need to create gold to get gold, no simulation apart from the creation of the actual physically identical substance will do the job. Consequently, in the case of gold at least, substrate neutrality is false.
Replies from: AdeleneDawner, None, timtyler, luminosity, None
↑ comment by AdeleneDawner · 2011-04-13T19:36:44.681Z · LW(p) · GW(p)
Don't paper money and electronic money represent gold's 'key characteristic' of being useable for monetary exchange?
↑ comment by [deleted] · 2011-04-13T23:27:48.561Z · LW(p) · GW(p)
According to Wikipedia to simulate something means to represent certain key characteristics or behaviours of a selected physical system.
The key word here is "represent", which is not to be confused with "reproduce".
If we were going to simulate the chemical properties of gold, would we be able to use it as a vehicle for monetary exchange on the gold market? Surely not, some important characteristics seem to be missing. We do not assign the same value to a simulation of gold that we assign to gold itself.
What would it take to simulate the missing properties? A particle accelerator or nuclear reactor.
No, we don't need a nuclear reactor or particle accelerator to simulate, i.e. to represent the missing properties. We need them to reproduce the missing properties. But to simulate something is to represent characteristics of it, not reproduce them.
Now, there's an obvious opening here for someone to try to build an argument based on the fact that a simulation need not reproduce characteristics. It would then be necessary to argue that mere representation of certain characteristics is sufficient to reproduce others. But that would be a new argument, and I'm just addressing this one.
Replies from: jtk3
↑ comment by jtk3 · 2011-04-15T04:00:02.752Z · LW(p) · GW(p)
When I run an old 8 bit game on a Commodore-64 emulator it seems to me that the emulation functionally reproduces a Commodore-64. The experience of playing the game can clearly be faithfully reproduced.
Hasn't something been reproduced if one cannot tell the difference between the operation of the original system and that of the simulation?
Replies from: kurokikaze
↑ comment by kurokikaze · 2011-04-18T11:22:47.149Z · LW(p) · GW(p)
In the case of the C64 emulator, the game is represented and your experience is reproduced. As for the second question, I think it's purely subjective, since it depends on what level of output you expect from the simulation. For a gamer, the emulated game can be a "reproduction"; for an engineer seeking details of the Commodore's inner workings, it can be just an approximation of the "real thing" and of no use to him.
↑ comment by timtyler · 2011-04-24T11:38:52.693Z · LW(p) · GW(p)
In conclusion, we need to create gold to get gold, no simulation apart from the creation of the actual physically identical substance will do the job. Consequently, in the case of gold at least, substrate neutrality is false.
That just seems confused to me. Simulated gold would be exchanged on simulated gold markets - where it would work just fine.
You can simulate anything - at least according to the Church–Turing–Deutsch principle.
Replies from: XiXiDu
↑ comment by XiXiDu · 2011-04-24T16:47:35.350Z · LW(p) · GW(p)
Simulated gold would be exchanged on simulated gold markets - where it would work just fine.
See my longer comment here.
↑ comment by luminosity · 2011-04-13T22:53:01.294Z · LW(p) · GW(p)
Gold in a simulation is less useful to us because we can't use it for everything we could use 'real' gold for. However that gold should be just as useful to anything inside the simulation as our gold is to us, barring changes in value due to changes in quantity. Does anyone really think that we would simulate gold in order to use it in exactly the ways we want to use real gold?
↑ comment by [deleted] · 2011-04-13T23:10:59.524Z · LW(p) · GW(p)
But what about Eliezer's reply to Pigliucci's photosynthesis argument? As I understand it, Eliezer's counterargument was that intelligence and consciousness are like math in the sense that the simulation is the same as the real thing. In other words, we don't care about simulated sugar because we want the physical stuff itself, but we aren't so particular when it comes to arithmetic--the same answer in any form will do.
As far as I can tell, this argument still applies to gold unless there are good reasons to think that consciousness is substrate dependent. But as Eliezer pointed out in that diavlog, that doesn't seem likely.
Replies from: mkehrt, XiXiDu
↑ comment by mkehrt · 2011-04-14T01:59:19.492Z · LW(p) · GW(p)
That reply is entirely begging the question. Whether consciousness is a phenomenon "like math" or a phenomenon "like photosynthesis" is exactly what is being argued about. So it's not an answering argument; it's an assertion.
Replies from: None
↑ comment by [deleted] · 2011-04-14T02:04:54.266Z · LW(p) · GW(p)
I completely agree--XiXiDu was summarizing Massimo Pigliucci's argument, so I figured I'd summarize Eliezer's reply. The real heart of the question, then, is figuring out which one consciousness is really like. I happen to think that consciousness is closer to math than sugar because we know that intelligence is so, and it seems to me that the rest follows logically from Minsky's idea that minds are simply what brains do. That is, if consciousness is what an intelligent algorithm feels like from the inside, then it wouldn't make much sense for it to be substrate-dependent.
↑ comment by XiXiDu · 2011-04-14T10:15:26.493Z · LW(p) · GW(p)
As I understand it, Eliezer's counterargument was that intelligence and consciousness are like math in the sense that the simulation is the same as the real thing.
This morning I followed another discussion on Facebook between David Pearce and someone else about the same topic and he mentioned a quote by Stephen Hawking:
What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe.
What David Pearce and others seem to be saying is that physics doesn't disclose the nature of the "fire" in the equations. For this and other reasons I am increasingly getting the impression that the disagreement all comes down to the question of whether the Mathematical universe hypothesis is correct, i.e. whether Platonism is correct.
None of them seem to doubt that we will eventually be able to "artificially" create intelligent agents. They don't even doubt that we will be able to use different substrates. The basic disagreement seems to be that, as Constant notices in another comment, a representation is distinct from a reproduction.
People like David Pearce or Massimo Pigliucci seem to be arguing that we don't accept the crucial distinction between software and hardware.
For us the only difference between a mechanical device, a physical object and software is that the latter is the symbolic (formal language) representation of the former. Software is just the static description of the dynamic state sequence exhibited by an object. One can then use that software (algorithm) and some sort of computational hardware and evoke the same dynamic state sequence so that the machine (computer) mimics the relevant characteristics of the original object.
Massimo Pigliucci and others actually agree about the difference between a physical thing and its mathematical representation, but they don't agree that you can represent the most important characteristics as long as you do not reproduce the physical substrate.
The position held by those people who disagree with the Less Wrong consensus on this topic is probably best represented by the painting La trahison des images. It is a painting of a pipe. It represents a pipe, but it is not a pipe; it is an image of a pipe.
Why would people concerned with artificial intelligence care about all this? That depends on the importance and nature of consciousness, and on the extent to which general intelligence is dependent upon the brain as a biological substrate and its properties (e.g. the chemical properties of carbon versus silicon).
(Note that I am just trying to account for the different positions here and not argue in favor of substrate-dependence.)
comment by lessdazed · 2011-04-13T14:16:39.899Z · LW(p) · GW(p)
1) I don't know. I also think there is a big difference between c) "nonsensical" and c) "irrelevant". To me, "irrelevant" means all possible worlds are instantiated, and those also computed by machines within such worlds are unfathomably thicker.
2) I don't know.
3) Probably causation between before and after is important, because I doubt a single time slice has any experience due to the locality of physics.
4) Traditionally I go point at things, a stop sign, a fire truck, an apple, and say "red" each time. Then I point at the grass and sky and say "not red". Red is a relational property within the system of: me plus the object. Each part of the system can in principle be replaced by a different, potentially Rube-Goldberg part with identical output without affecting the rest of the system. The computation is the part inside my brain. Whether the stop sign is real or I am blind and my nervous system is being stimulated by mad scientists makes no difference in that respect.
5) In the red system consisting of me and the stop sign, generally the stuff outside my skull can be replaced by functions, the inside stuff needs specific algorithms to produce sensations.
6) Note to self: when giving a list of questions, include something that doesn't actually mean anything and see what the answers to it are like. My best guess is that you're not doing that, but I have no idea what this means.
7) Why would it have to? Meaning no, any patterns larger than the smallest are explained by their components.
8) I can't think of any output that in principle couldn't be produced by a conscious computational process. But not all computational processes are conscious.
Replies from: Peterdjones, dfranke
↑ comment by Peterdjones · 2011-04-13T18:25:31.135Z · LW(p) · GW(p)
(4) The question of identical inputs and outputs is a tricky one. No two physically different systems produce unconditionally identical inputs and outputs under all circumstances, since that would imply that there are no circumstances under which their physical difference could be observed or measured. The "identity" of outputs required by functional equivalence means either
a) identity under an abstract definition which subsumes a number of physical differences (e.g. a "1" or "0" can be multiply realised), or
b) absolute identity of a subset of outputs, with the rest being deemed irrelevant, e.g. we can regard two systems as being computationally equivalent although they produce different amounts of heat and noise when running.
Replies from: lessdazed
↑ comment by lessdazed · 2011-04-13T19:55:56.288Z · LW(p) · GW(p)
that would imply that there are no circumstances under which their physical difference could be observed or measured.
How, exactly? I am allowing any section of the system to become as if a black box, replaceable with a different black box. As the insides of the boxes are different, they are not identical. Open the boxes, and see the differences. All I'm arguing is that so long as the boxes are closed, they may do the same thing.
As an example, imagine a pair of motors that take in sunlight and oil and create heat and energy. One has inefficient sun and oil to energy converters, the other has an efficient oil engine and simply wastes the sunlight as heat. Arbitrarily, its program regulates its efficiency as a function of the sunlight it receives.
Or imagine a modern PC emulating a mac OS emulating Windows, as against a slightly older PC.
Bear in mind that I didn't understand your a) or b).
Replies from: Peterdjones
↑ comment by Peterdjones · 2011-04-13T20:07:13.958Z · LW(p) · GW(p)
One black box is equivalent to another so long as you don't peek inside. So the outputs you get if, for instance, you X-ray it are not part of the subset of outputs under which they are equivalent.
If such official, at-the-edge outputs are all that matters for computationalism, then dumb-but-fast Look Up Tables could be conscious, which is a problem.
If the inner workings of black boxes count, then the Turing Test is flawed, for similar reasons.
Replies from: TheOtherDave, lessdazed
↑ comment by TheOtherDave · 2011-04-13T20:33:43.023Z · LW(p) · GW(p)
dumb-but-fast Look Up Tables could be conscious, which is a problem
Sincere question: why would this be a problem?
I mean, I get that LUTs violate our intuitions about what ought to be necessary to get genuine consciousness, but then they also violate my intuitions about what ought to be necessary to get a convincing simulation of it. If I throw out the latter intuitions to accept a convincing LUT, I'm not sure why I shouldn't be willing to throw out the former intuitions as well.
Is there more here than just dueling intuitions?
Replies from: PhilGoetz, Peterdjones
↑ comment by PhilGoetz · 2011-04-16T03:36:39.260Z · LW(p) · GW(p)
Sincere question: why would this be a problem?
See my lower bound for consciousness. Lookup tables don't satisfy the lower bound. The lower bound is that point at which Quine's theory of ontological relativity / confirmation holism is demonstrably false, and so "meaning" can exist.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-04-16T04:39:10.089Z · LW(p) · GW(p)
Do you expect lookup tables to be able to demonstrate convincing consciousnesslike behavior (a la Searle's Chinese Room), while still not satisfying your lower bound?
If not, would encountering such a convincing GLUT-based system (that is, one that violated your expectations) change your opinions at all about where the lower bound actually is?
Because in general, I agree with you that there exists a lower bound and GLUTs don't satisfy it, but I don't think a GLUT can convincingly simulate consciousness, and if I encountered one that did (as I initially understood Peter to be proposing) I'd have to significantly update my beliefs in this whole area.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2011-04-17T14:39:59.183Z · LW(p) · GW(p)
Do you expect lookup tables to be able to demonstrate convincing consciousnesslike behavior (a la Searle's Chinese Room), while still not satisfying your lower bound?
I expect them to be theoretically able to exhibit conscious-like behavior, but don't endorse the idea that Searle's Chinese Room is a lookup table, or unconscious. Searle's Chinese Room is carrying out algorithms; and Searle's commentary on it is incoherent, and I disagree with his definitions, assumptions, arguments, and conclusions.
In practice, I don't expect a lookup table to produce any such behavior until long after we have learned much more about consciousness. A lookup table might be theoretically incapable of exhibiting human-like behavior due to the limited memory and computational capacity of this universe.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-04-17T15:48:19.706Z · LW(p) · GW(p)
A lookup table might be theoretically incapable of exhibiting human-like behavior due to the limited memory and computational capacity of this universe.
Yeah, that's my expectation. So confirming the actual existence of a human-like GLUT would cause me to sharply revise many of my existing beliefs on the whole subject.
My confidence, in that scenario, that the GLUT was not conscious would not be very high.
↑ comment by Peterdjones · 2011-04-13T20:52:11.947Z · LW(p) · GW(p)
You shouldn't, because they are different intuitions. In fact I don't know why you have the intuition that you can't simulate complex processing with a Giant Look Up Table. All you have to do is record a series of inputs and outputs from a piece of software, and there is the database for your GLUT. Of course, that GLUT will only be convincing if it is asked the right questions. If any software is Gluttable up to a point, the Consciousness Programme is Gluttable UTAP. But we don't have to believe a programme that is spitting out pre-recorded digits of pi is calculating pi. We can keep that intuition.
Replies from: rwallace, lessdazed, TheOtherDave
↑ comment by rwallace · 2011-04-13T21:46:09.998Z · LW(p) · GW(p)
In fact I don't know why you have the intuition that you can't simulate complex processing with a Giant Look Up Table. All you have to do is record a series of inputs and outputs from a piece of software, and there is the database for your GLUT.
That's not a lookup table, that's just a transcript. I only ever heard of one person believing a transcript is conscious. A lookup table gives the right answers for all possible inputs.
The reason we have the intuition that you can't simulate complex processing with a lookup table is that it's physically impossible - the size would be exponential in the amount of state, making it larger than the visible universe for anything nontrivial. But it is logically possible to have a lookup table for, say, a human mind for the duration of a human lifespan; and such a thing would, yes, contain consciousness.
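A rough Python sketch of the distinction being drawn here (the squaring function and the tiny input space are placeholders): a transcript only replays exchanges that were actually recorded, a lookup table covers every possible input at the cost of precomputing them all, and an algorithm computes answers on demand.

```python
def algorithm(x):
    return x * x                      # stand-in for "a piece of software"

# Transcript: record a few observed input/output pairs.
transcript = {x: algorithm(x) for x in [2, 3, 5]}

# GLUT: tabulate *all* inputs in some (here tiny) input space up front.
INPUT_SPACE = range(1000)
glut = {x: algorithm(x) for x in INPUT_SPACE}

print(transcript.get(7))   # None -- the transcript is silent off-script
print(glut[7])             # 49   -- the GLUT answers, at the cost of having
                           #         precomputed the whole input space
print(algorithm(7))        # 49   -- the algorithm computes it on demand
```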
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-04-13T22:37:39.282Z · LW(p) · GW(p)
Note that the piece of the original comment you don't quote attempts to engage with this, by admitting that such a GLUT "will only be convincing if it is asked the right questions" and thus only simulates the original "up to a point."
Which is trivially true with even a physically possible LUT. Heck, a one-line perl script that prints "Yes" every time it is given input simulates the behavior of any person you might care to name, as long as the questioner is sufficiently constrained.
Whether Peterdjones intends to generalize from that to a less trivial result, I don't know.
↑ comment by lessdazed · 2011-04-13T21:28:05.433Z · LW(p) · GW(p)
If I say a GLUT can't compute the output that is consciousness (suppose we have a consciousness detecting machine, the output will be whatever causes the needle on that machine to jump) without a model of a person equivalent to a person, you'll probably say I'm begging the question. I can't think of a way around that, but if you could refute that thought of mine, that would probably resolve a lot for me.
↑ comment by TheOtherDave · 2011-04-13T21:01:25.086Z · LW(p) · GW(p)
I agree that if the questioner is sufficiently constrained, then a GLUT (or even a Tiny Lookup Table) can simulate any process's responses to that questioner, however complex or self-referential the process.
So, yes, any process -- including conscious processes -- can be simulated UTAP by a simple look-up table, in the same sense that living biological systems can be simulated by rocks UTAP.
I've lost track of why that is important.
Replies from: Peterdjones
↑ comment by Peterdjones · 2011-04-13T21:07:28.616Z · LW(p) · GW(p)
If the intuition that look-up is not sufficient computation for consciousness is correct, then a flaw in the Turing Test is exposed. If a complex Computation Programme could pass the TT, then a GLUT version must be able to as well.
Replies from: TheOtherDave, Nornagest
↑ comment by TheOtherDave · 2011-04-13T22:25:28.083Z · LW(p) · GW(p)
Sure, I agree that with a sufficiently constrained questioner, the Turing Test is pretty much useless.
↑ comment by Nornagest · 2011-04-13T21:23:09.432Z · LW(p) · GW(p)
The values of the GLUT have to be populated somehow, which means matching an instance of the associated computation against an identical stimulus by some means at some point in the past. Intuitively it seems likely that a GLUT is too simple to instantiate consciousness on its own, but it seems to be better viewed as one component of a larger system that must in practice include a conscious agent, albeit one temporally and spatially removed from the thought experiment's present.
Isn't this basically a restatement of the Chinese Room?
↑ comment by lessdazed · 2011-04-13T21:23:37.111Z · LW(p) · GW(p)
If such official, at-the-edge outputs are all that matters for computationalism, then dumb-but-fast Look Up Tables could be conscious, which is a problem.
That's not what I claimed, in fact, I was trying to be careful to discredit that. I said the system can be arbitrarily divided, and replacing any part with a different part/black box that gives the same outputs as the original would have would not affect the rest of the system. Some patterns of replacement of parts remove the conscious parts. Some do not.
This is important because I am trying to establish "red" and other phenomena as relational properties of a system containing both me and a red object. This is something that I think distinguishes my answer from others'.
I'm distinguishing further between removing my eyes and the red object and replacing them with a black box sending inputs into my optic nerves, which preserves consciousness, and replacing my brain with a black box lookup table and keeping my eyes and the object intact, which removes the conscious subsystem of the larger system. Note that some form of the larger system is a requirement for seeing red.
My answer highlights how only some parts of the conscious system are necessary for the output we call consciousness, and makes sure we don't confuse ourselves and think that all elements of the conscious computing system are essential to consciousness, or that all may be replaced.
The algorithm is sensitive to certain replacement of its parts with functions, but not others.
comment by prase · 2011-04-13T13:51:37.811Z · LW(p) · GW(p)
Hm. I am not a 100% computationalist, but let me try.
1. b) 80%, c) 15%, a) 5% (there should be a physical structure, but its details probably don't matter. I can imagine several intuition pumps supporting all answers).
2. Don't know.
3. There is an isomorphism between instantaneous physical states of the machine and (a subclass of) mathematical formulae, and the machine went from a state representing "2+2" to a state representing "4".
4. There is an isomorphism between the physical states of the machine and colors (say represented by RGB), and the machine has arrived at the state whose partner (as given by the isomorphism) is close to (255,0,0). (A toy illustration follows this list.)
5. Algorithms. I can learn that a ball is round by seeing it or by touching it, and it certainly feels different. However, among the algorithms there may be equivalence classes with respect to consciousness.
6. Don't know. I am not even convinced that axiomatic logic would not need to be replaced by something more general before arriving at the said Theory of Everything, or whether such a Theory can be constructed.
7. (If such a Theory exists, then) no 75%, yes 25%. Partly depends on what "explain consciousness" means: if by "explanation" we mean a set of statements which, when properly communicated, cause people to feel that there is nothing mysterious about consciousness, there may be some need for qualia. Strongly depends on the meaning of "qualia": the definition presently used by philosophers will probably be useless for any axiomatisation.
8. Consciousness isn't a binary property, so, in some sense, yes (90%).
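A toy Python illustration of answer 4 (the distance threshold and RGB encoding are made up): "computing red" as ending in a state whose partner under the state/color mapping is close to (255, 0, 0).

```python
def distance(c1, c2):
    """Euclidean distance between two RGB triples."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def is_red_state(rgb, threshold=60):
    """True if the state's color-partner is close to pure red."""
    return distance(rgb, (255, 0, 0)) < threshold

print(is_red_state((250, 10, 5)))   # True
print(is_red_state((10, 200, 30)))  # False
```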
By the way, upvoted for asking interesting questions.
comment by XiXiDu · 2011-04-14T13:34:12.300Z · LW(p) · GW(p)
Here is another attempt to rephrase one of the opinions held within the philosophy camp:
Imagine 3 black boxes, each of them containing a quantum-level emulation of some existing physical system. Two boxes contain the emulations of two different human beings and one box the emulation of an environment.
Assume that if you were to connect all 3 black boxes and observe the behavior of the two humans and their interactions you would be able to verify that the behavior of the humans, including their utterances, would equal that of the originals.
If one were to disconnect one of the black boxes containing the emulation of a human and place it within the original physical environment, containing the other original human being, the new system would not exhibit the same behavior as either the system of black boxes or the genuinely physical system.
A system made up of black boxes containing emulations of physical objects and genuinely physical objects does not equal a system made up of only black boxes or physical objects alone.
- Emulations only exhibit emulated behavior.
- The black boxes only exhibit a representation of the behavior of the physical systems they are emulating.
- The black boxes are only able to emulate a representation of behavior given an equally emulated environment.
The representations of the original physical systems that are being emulated within the black boxes are one level removed from the originals. A composition of those levels will exhibit a different interrelationship.
Once you enable the black box to interact with the higher level in which it resides, the system made up of the black box, the original environment and the human being (representation-level / physical-level / physical-level) will approach the behavior exhibited in a context of emulated systems and the original physical system.
You can equip the black box with sensors and loudspeakers, yet it will not exhibit the same behavior. You can further equip it with an avatar; still, the original and emulated human will treat an avatar differently than another original, respectively emulated, human. You can give it a robot body. The behavior will still not equal the behavior that a system consisting of the original physical systems would exhibit, nor the behavior that would be exhibited in the context of a system made up of emulations.
You may continue to tweak what was once the black box containing an emulation of a human being. But as you approach a system that exhibits the same behavior as the original system, you are slowly reproducing the original human being: you are turning the representation into a reproduction.
Replies from: Dolores1984↑ comment by Dolores1984 · 2012-08-24T20:47:27.525Z · LW(p) · GW(p)
...This argument strikes me as, pardon me, tremendously silly. Just off the top of my head, it seems to still hold if you replace the 'quantum level simulation of a person' with an exact duplicate of the original brain in a saline bath, hooked up to a feed of oxygenated blood. Should we therefore conclude that human brains are not conscious?
EDIT: Oh blast, didn't realize this was from months ago.
comment by Peterdjones · 2011-04-13T18:08:05.417Z · LW(p) · GW(p)
(2) Humans can manually compute any algorithm that a TM can compute (this is just the Church-Turing conjecture in reverse), so a human has to be at least a UTM. The significant part of the computationalist claim is that humans are at most a UTM.
(4) No.
(5) If intermediate steps matter, the Turing Test is invalidated, since a Giant Lookup Table could produce the same results with trivial intermediate steps. However, computationalists do not have to subscribe to the TT.
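To make the lookup-table point concrete, here is a toy sketch (the prompts and table entries are invented): both responders compute the same input-output function over the listed prompts, but one does it with trivial intermediate steps.

```python
# A "computed" responder: does some intermediate work on the prompt.
def computed_responder(prompt):
    words = prompt.lower().split()
    return f"You said {len(words)} words, the last one was '{words[-1]}'."

# A giant-lookup-table responder: same outputs, trivial intermediate steps.
GLUT = {
    "hello there": "You said 2 words, the last one was 'there'.",
    "is this thing conscious": "You said 4 words, the last one was 'conscious'.",
}

def glut_responder(prompt):
    return GLUT[prompt.lower()]

for p in ["hello there", "is this thing conscious"]:
    assert computed_responder(p) == glut_responder(p)  # same function, different algorithm
```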
(7) An "And gate" looks like a piece of hardware, but it is really anything that computes a certain abstract function. Computationalism requires that a mind is essentially a class of programmes. Computationalism probably does not require a very fine grained description of the Consciousness Programme, since that would make it hard to explain how numerous different people with different brains and life expreiences could all be conscious. (Computationalism has to retrodict human consciousness as well as predict AI). That being the case, computationalism can stop short of a description of computation that does not go all the way down to primitive elements. (Computation is of course not essentially tied to the binary system. You could have a decimal UTM).
(6) Either qualia don't exist at all, even as mere appearances, or they have to be mentioned. I go for the latter. (A complete theory of optics has to be able to explain mirages!)
(8) I don't think any computationalist thinks all computations are conscious. Consciousness, for computationalists, cannot be any programme, or just one programme, but must be a set of programmes each embedding a UTM.
comment by byrnema · 2011-04-13T15:30:10.304Z · LW(p) · GW(p)
(b) Consciousness is something that a concrete machine does, as in "My calculator computed 2+2".
Instructed to skip.
Unpack what "the machine computed 2+2" means. (I'll try.) A machine computes 2+2 if it has an algorithm (perhaps a subroutine) that accepts two inputs a and b (where a and b are some set of numbers containing at least the natural numbers through 5) and generally (almost always) outputs the sum a+b. The machine may output a+b by any means whatsoever -- even just using a look up table or appending two strings of symbols. (In contrast, an algorithm that always outputs '4' does not compute 2+2.)
Unpack what 'computing red' means. At the first level, computing red means identifying that a set of wavelengths is within a certain range. At the subjective, qualia level, computing red means identifying the range of wavelengths "red" with a network of information about that range (possibly including past experience with that range). The experience of the quale is observing the association of the range of wavelengths with the network of related associations; it is the observation that "red" is passed through a node and redirected to a set of other nodes. (This is why consciousness is required to experience qualia.) I observe that I have a little bit of control over which associations red is passed to (different subsets of the network of associations), and asking myself 'what does red feel like?' or 'what is red?' is an exercise in lingering around this locus of control: associating, pulling back, reassociating, etc. When I actually look at something red, the shape and texture dominate my associations, so I don't think 'red' is a very strong quale for me, or not a very good way to divide up my experience of it.
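A sketch of just the two levels described above; the wavelength bounds and the association network are made-up illustrative values, not a claim about human vision.

```python
# First level: is the wavelength within the "red" range? (Approximate, illustrative bounds.)
def classify_wavelength(nm):
    if 620 <= nm <= 750:
        return "red"
    if 495 <= nm < 570:
        return "green"
    return "other"

# Second level (very loosely): route the label into a network of associations.
ASSOCIATIONS = {
    "red": ["stop sign", "ripe tomato", "warmth"],
    "green": ["grass", "go"],
}

def compute_color(nm):
    label = classify_wavelength(nm)
    return label, ASSOCIATIONS.get(label, [])

print(compute_color(650))  # ('red', ['stop sign', 'ripe tomato', 'warmth'])
```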
At this level of questioning, I'm not sure what 'feel' means. Feelings about mental processes depend upon associations with physical feelings, including immediate or remembered physical sensations that the brain produces while thinking, whatever those are. So if I'm on drugs, thinking should feel different ... is this functional or algorithmic?
Don't think so.
No, I don't imagine qualia are an essential ingredient of consciousness.
I agree with RobinZ, consciousness requires self-reflective behavior -- which I see as an algorithm modeling itself to some unspecified extent, though not necessarily containing any component of self-awareness.
I just remembered that I wrote up my thoughts about consciousness in this post titled Hypotheses for Dualism. There are likely to be some comments relevant to this discussion there.
comment by Cyan · 2011-04-13T13:55:14.062Z · LW(p) · GW(p)
My own answers, before reading anyone else's, were:
- (b)
- The calculator processes information. In the same system that gives the interpretation of inputs and outputs as rational numbers, the information processing of the calculator can be seen to be isomorphic to addition.
- I can make many senses of the phrase, depending on the context in which it is used. Here's one: light with wavelength around 650 nm was reflected off the petals and entered the two-year-old's eyes; electrochemical signal processing occurred in her brain such that she reported, "The rose is red!".
- I originally answered algorithms, but I find RobinZ's answer persuasive.
- I don't know, but I doubt it. I expect consciousness has something to do with self-reflection, and Gödel showed that self-reflection is pervasive even if you don't build it in explicitly at the start.
- I don't know, but I doubt it.
- Only certain kinds.
comment by kurokikaze · 2011-04-18T11:05:25.705Z · LW(p) · GW(p)
Okay, here are my answers. Please note that full answers would be too big, so expect some vagueness:
1) B.
3) Big topic. For me, it can use the result of "computation".
4) Invoking memory or associations? Mostly no.
5) Hard to say yet. I'll take a guess that it's mostly functions, with maybe some parts where steps really matter.
6) I think it's possible.
7) I guess so.
8) They have something in common, but I think it depends on your definition of "conscious". They are most certainly not self-conscious, though.
comment by DuncanS · 2011-04-17T23:25:37.956Z · LW(p) · GW(p)
I think the logic behind this argument is actually much, much simpler.
Let us suppose that consciousness is not a type of computation.
Rational argument, and hence rational description IS a type of computation - it can be made into forms that are computable.
Therefore consciousness, if it is not a type of computation, is also not describable within, or reducible to, rational argument.
I call this type of thing the para-rational - it's not necessarily against rationality to suppose that something exists which isn't rationally describable. What doesn't make sense is to go on to:
a) attempt to rationally describe it in detail afterwards; or b) use it as an excuse to avoid thinking rationally about things you CAN think about in a rational way; or c) try to use its properties in a logical argument - all this gives you, on the whole, is an illogical argument.
So yes, there might be an aspect of consciousness which is beyond the rational, and which is always associated with certain types of existent being. But I would prefer the proposition that this is para-rational - alongside the rational realm, rather than irrational - joined to the rational realm, and making it non-rational after all.
This is a difficult area - as one should necessarily believe para-rational things for para-rational reasons (whatever THAT means). But I can't see how we could rule out other types of 'existence'. However, I can see good reasons not to make it a subject of too much rational discussion - if you can't rationally describe something, don't attempt to....
comment by PhilGoetz · 2011-04-14T22:59:03.807Z · LW(p) · GW(p)
There is no such thing as an abstract machine, nor an abstract computation. If you imagine a machine adding two and two, the computation is implemented in your brain, which holds a representation of the operations. Physics is information; information is also physics. There is no information without a physical embodiment; there is no computation without physical operations.
Humans don't have infinite memory, and thus are less powerful than Turing machines.
"Computing red": Please put more words into that phrase. It's too ambiguous to deal with.
Functions vs. algorithms: This is a good question. Can a lookup table be conscious? I said no. Therefore I must choose 'algorithm'.
A theory that explains consciousness should be statable in abstract terms using mathematical operations.
Yes, a theory that explains consciousness must explain qualia.
Answering that question would mislead people more than it would inform.
comment by see · 2011-04-14T09:21:07.524Z · LW(p) · GW(p)
Either "qualia" are ultimately a type of experience that can be communicated to a conscious being who hasn't had the experience, or they cannot. If they can be, they cease to have any distinction from any other communicable fact. If they cannot, you can't actually use them to determine if something is conscious, because nobody can communicate to you their own individual qualia. Either way, qualia by necessity drop out of any theory of consciousness that can classify whether something as inert as a brick is a conscious being or not. And if a theory of consciousness does not predict, either way, whether or not a brick is conscious, then it is a waste of time.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-04-14T10:23:10.013Z · LW(p) · GW(p)
Either "qualia" are ultimately a type of experience that can be communicated to a conscious being who hasn't had the experience, or they cannot.
That's the sort of dilemma I don't trust as a reasoning step. What if they can partially or vaguely or approximately (but not precisely and entirely) be communicated to a conscious being who hasn't had the experience?
Replies from: PhilGoetz, see↑ comment by see · 2011-04-14T22:05:29.045Z · LW(p) · GW(p)
That's the sort of dilemma I don't trust as a reasoning step. What if they can partially or vaguely or approximately (but not precisely and entirely) be communicated to a conscious being who hasn't had the experience?
Insofar as they are communicable, the communication can be emitted by someone who doesn't experience them, and thus doesn't serve as evidence that the communicating being experiences the quale. (In the classic "Mary the color scientist" formulation, Mary, who has never experienced seeing red, can tell people partially/vaguely what it's like to see red, since she knows every communicable fact about seeing red, including how people describe it.)
Replies from: ArisKatsaris, PhilGoetz↑ comment by ArisKatsaris · 2011-04-14T22:48:33.292Z · LW(p) · GW(p)
Let's say you speak to an alien from another universe, and they give you mathematical equations for a phenomenon that only people in that universe experience. For example, a weird slight periodic shift in the gravitational constant.
I can communicate this information further, even though I don't experience such shifts to the gravitational constant myself. And yet you're saying that the alien who first originated those equations, that isn't evidence for their own experiences either?
Perhaps you mean it isn't proof, but to say it's not evidence at all is a rather big claim.
Replies from: see↑ comment by see · 2011-04-15T03:04:20.682Z · LW(p) · GW(p)
How would his formulating equations give me any evidence that he feels the shift in the gravitational constant? Newton's laws weren't evidence that Newton ever experienced orbiting another body.
Look, back to the basic point of the sterility of qualia, how would you go about distinguishing whether I actually experience qualia, or whether I am just programmed by evolution to mimic the responses of other people when asked about their experiences of qualia?
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-04-15T08:58:29.614Z · LW(p) · GW(p)
How would his formulating equations give me any evidence that he feels the shift in the gravitational constant? Newton's laws weren't evidence that Newton ever experienced orbiting another body.
Newton did orbit the sun, while riding the Earth; and his laws were certainly evidence for that rather than against it.
Saying that event A is zero evidence for event B really means that the two events are completely uncorrelated with each other -- do you really mean to argue that the existence of Newton's equations is completely uncorrelated with the fact Newton lived in a (to the limits of his understanding) Newtonian universe?
Look, back to the basic point of the sterility of qualia, how would you go about distinguishing whether I actually experience qualia, or whether I am just programmed by evolution to mimic the responses of other people when asked about their experiences of qualia?
Occam's razor can be useful there, I think, until we have enough understanding of neuroscience to be able to tell between a brain doing mimicry, and a brain doing an honest and lucid self-evaluation.
Replies from: see↑ comment by see · 2011-04-18T04:31:43.804Z · LW(p) · GW(p)
Newton did orbit the sun, while riding the Earth
And he had no particular qualia that would distinguish that from any of a billion other arrangements.
do you really mean to argue that the existence of Newton's equations is completely uncorrelated with the fact Newton lived in a (to the limits of his understanding) Newtonian universe?
No, I mean to argue that the existence of Newton's equations is completely uncorrelated with whether Newton experienced any qualia. A properly-designed curve-fitting algorithm, given the right data, could produce them as well; there is no evidence of consciousness (at least distinct from computation) as a result.
Occam's razor can be useful there, I think, until we have enough understanding of neuroscience to be able to tell between a brain doing mimicry, and a brain doing an honest and lucid self-evaluation.
Aliens arrive to visit Earth. Their knowledge of their own neural architecture is basically useless when evaluating ours. How do they determine that humans "actually experience" qualia, rather than humans simulating the results of experience of qualia as a result of evolution?
The Occam's Razor result that "they act in a manner consistent with having qualia, therefore they probably experience qualia, therefore they are probably conscious" is immediately displaced by the Occam's Razor result that "they act in a manner consistent with being conscious, therefore they probably are conscious". The qualia aren't necessary, and therefore drop out of the axiomatization of a theory of consciousness.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-04-19T09:24:58.190Z · LW(p) · GW(p)
No, I mean to argue that the existence of Newton's equations is completely uncorrelated with whether Newton experienced any qualia
You misunderstood my argument. I wasn't talking about qualia when I talked about Newton, I was talking about gravity, another phenomenon. Newton was affected by gravity -- this was highly correlated with the fact he talked about gravity. We talk about qualia -- this is therefore evidence in favour of us being affected by qualia.
How do they determine that humans "actually experience" qualia, rather than humans simulating the results of experience of qualia as a result of evolution?
What would be the evolutionary benefit of simulating the results of experience of qualia, in a world where nobody experiences qualia for real? That's like an alien parrot simulating the voice of a human on a planet where no humans exist. Highly unlikely to be stumbled upon coincidentally by evolution.
The Occam's Razor result that "they act in a manner consistent with having qualia, therefore they probably experience qualia, therefore they are probably conscious" is immediately displaced by the Occam's Razor result that "they act in a manner consistent with being conscious, therefore they probably are conscious". The qualia aren't necessary, and therefore drop out of the axiomization of a theory of consciousness.
What do you mean by "conscious"? Self-aware? Not sleeping or knocked out? These seem different and more complex constructs than qualia, who have the benefit of current seeming irreducability at some level (I might be able to reduce individidual color qualia to separate qualia of red/green/blue and brightness, but not further).
Replies from: AlephNeil↑ comment by AlephNeil · 2011-04-19T12:53:19.828Z · LW(p) · GW(p)
What makes qualia problematic - the only thing that makes them problematic - is that they're tied up with the notion of subjectivity.
Subjective facts are not 'objective'. Any attempt to define qualia objectively, as something a scientist could detect by careful study of your behaviour and/or neurophysiology, will give you a property X such that Chalmers' hard question remains "and why does having property X feel like this from the inside?"
I think it's helpful to consider the analogy (perhaps it's more than an analogy) between subjectivity and indexicality. Obviously science is not going to explain why the universe views itself through my eyes, or why the year is 2011. It's only by 'borrowing' the existence of something called 'you', who is 'here', that indexical statements can have truth values. I think that similarly, you need to 'borrow' the fact that red looks like this in order for red to look like this. The statements that you make in between 'borrowing' subjectivity and 'paying it back' simply do not belong to science - they are not "objectively true or false".
Of course the question of who or what does the 'borrowing' is Deeply Mysterious - in fact it's something that even in principle we can have no knowledge of, because it's not something that happens within the universe. (Gee, this is getting dangerously theological. I guess I'm confused about something...)
(On this view, whatever kind of fact it is that 'rabbits have colour qualia', it cannot be a fact with an evolutionary explanation. It's not really a fact at all, except from the perspective of a rabbit. And there isn't even such a thing as 'the perspective of a rabbit' except from the perspective of a rabbit.)
comment by kpreid · 2011-04-13T16:12:55.768Z · LW(p) · GW(p)
- c. The abstraction wouldn't be a very good abstraction if it fails to be similar to real machines except in what it abstracts away.
- Tautologically, all equivalent models of computation are equivalent for this purpose.
- The portion of the machine which is responsible for that computation was in a state which is isomorphic to "2+2" and is now in a state which is isomorphic to "4".
- The phrase “computing red” is too vague/lacking context to interpret.
- Functions. Your report of how your algorithm feels from the inside is part of the output of the algorithm, and therefore of the function; a mind made of modules with no side-outputs would not have the corresponding “feelings”.
- Neither consciousness nor computation exists in the axioms of a sensible Theory of Everything; both are descriptions of certain sorts of systems which are possible in any sufficiently complex universe, just as "7" occurs in descriptions of some states of any universe containing at least 7 objects.
- No.
- Consciousness is not a sharp-edged category. More usefully-described-as-conscious computations are less frequent in the space of all possible computations.
comment by TheOtherDave · 2011-04-13T15:37:33.628Z · LW(p) · GW(p)
My $0.02, without reading other answers:
1. I'm not sure, but I lean towards (b).
Unpacking a bit: As it is used in the sentence "the sum of 1 and 1 to yield 2 is a computation", my intuition is that something like (a) is meant. That said, it seems likely that this intuition comes from me reasoning about a category of computations as a cognitive shortcut, and then sloppily reifying the category. Human brains do that a lot. So I'm inclined to discard that intuition and assert that statements about 1+1=2 are statements about an abstract category in my mind of concretely instantiated computations (including hypothetical and counterfactual instantiations).
I'm perfectly comfortable using "machine" here as a catchall for anything capable of instantiating a computation, though if you mean something more specific by the use of that word then I might balk.
2. N/A
3. Oh, hey, look at that! (I wrote the above before reading this question.)
A couple of caveats: It's not saying anything terribly precise about the physical state of the machine, but I suppose we can speak very loosely about it. And there's an important use-mention distinction here; I can program a computer to reliably and meaningfully compute "blue" as the result of "2+2" by overriding the standard meanings of "2" and "+". Less absurdly, the question of whether 2+2=4 or 2 + 2 = 4.00000 actually can come up in real life.
Waving all that stuff aside, though, it's saying that prior to performing that computation the machine had some data structure(s) reliably isomorphic to two instances of values at a particular (identical) point along a number line, and to an operation that, given a pair of such values, returns a third value with a particular relationship to them. (I don't feel like trying to define addition right now.)
4. Sure.
It means different things for different kinds of computing devices. For example, for the Pantone-matching software on my scanner, computing (a particular shade of) red means looking up the signals it's getting from the scanner in a lookup table and returning the corresponding code, which downstream processes can use for other purposes (e.g., figuring out what CMYK values to display on my monitor).
For my eye and visual cortex, it means something very roughly similar, though importantly different. The most important aspects of the difference for this discussion have to do with what sorts of downstream processes use the resulting signal for what sorts of operations.
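A rough sketch of the scanner-side, lookup-table version described two paragraphs up; the color codes, RGB readings, and CMYK values are all made up for illustration, and real Pantone matching is considerably more involved.

```python
# Made-up mapping from scanner RGB readings to color codes, and from codes to CMYK.
RGB_TO_CODE = {
    (237, 28, 36): "CODE-RED-032",
    (0, 133, 66):  "CODE-GREEN-355",
}

CODE_TO_CMYK = {
    "CODE-RED-032":   (0.0, 0.90, 0.86, 0.0),
    "CODE-GREEN-355": (0.91, 0.0, 0.92, 0.24),
}

def nearest_known_rgb(rgb):
    """Find the closest known RGB reading in the table (squared distance)."""
    return min(RGB_TO_CODE, key=lambda k: sum((a - b) ** 2 for a, b in zip(k, rgb)))

reading = (235, 30, 40)                       # signal coming from the scanner
code = RGB_TO_CODE[nearest_known_rgb(reading)]
print(code, CODE_TO_CMYK[code])               # downstream processes use the code, e.g. for CMYK display
```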
5. Either the question is confused, or I am.
Unpacking: How it feels to be a particular computation is an artifact of the structure of that computation; changing that structure in a particular way might change how it feels, or might not, depending on specifics. I'm not sure how to map the labels "outputs", "inputs", "inside", and "intermediate" in your question to that answer, though.
6. That hinges a bit on how precisely we define "computational devices"; there are no doubt viable definitions for which I'd answer "yes," although my answer for an AND gate is "no".
Come to that, it also hinges on what axiomatization you're using; I suppose you could construct one that did, if you wanted to. I would just consider it unnecessarily complex.
Having said all that: No, I wouldn't expect a "simplest possible axiomatization" of a "theory of everything" to contain any "computational devices." (Scare quotes used to remind myself that I'm not sure I know what those phrases mean.)
7. As above; it might, much as it might mention AND gates, but I would look askance at one that did. (Also as above, this hinges a fair bit on what counts as a quale, but my answer for "the perception of red" is "no".)
8. Mostly, I think this question is a question about words, rather than about their referents.
There are computations to which my intuitive notion of the label "conscious" simply does not apply. But I suppose I could accept for the purposes of discussion a definition of "conscious" that applies in some sense to all computations, if someone were to propose one, though it would be a counterintuitive one.
comment by bogus · 2011-04-13T15:27:13.597Z · LW(p) · GW(p)
Would an axiomatization of a Theory of Everything that can explain consciousness mention qualia?
LessWrong User:Mitchell_Porter has made some headway on this very interesting question. See his submissions How to think like a quantum monadologist and Consciousness.
comment by JenniferRM · 2011-04-13T14:31:46.850Z · LW(p) · GW(p)
My answers:
Your terminology is confused and the question is ill-formed. There is a difference between mathematically abstract computation and implementation. Implementation usually requires energy to carry out, and (based on concerns around reversible computing) it will always take energy to communicate the output of an implemented computation to some other physical system.
The Church-Turing Thesis is probably correct. Moreover, any one of these formalisms can emulate any other with a runtime hit of some constant plus a scalar multiplier.
That a calculator "computes 2+2" means that something in its environment put it into a configuration where these three symbols were represented within its machinery and set its symbol manipulation rules chug along deterministically until its machinery represent the symbol four.
Saying something "computes red" is non-idiomatic and demonstrates confusion. If I was being generous, I would say that this involves a physical system having part of itself manipulated by light at a certain freq