Consciousness affecting the world
post by DavidPlumpton · 2013-12-06T19:37:27.685Z · LW · GW · Legacy · 57 comments
In Zombies! Zombies? Eliezer mentions that one aspect of consciousness is that it can causally affect the real world, e.g. cause you to say "I feel conscious right now", or result in me typing out these words.
Even if a generally accepted mechanism of consciousness has not been found yet, are there any tentative explanations for this "can change world" property? Googling around I was unable to find anything (although Zombies are certainly popular).
I had an idea of how this might work, but just wanted to see if it was worth the effort of writing up.
57 comments
Comments sorted by top scores.
comment by ThrustVectoring · 2013-12-06T19:50:29.658Z · LW(p) · GW(p)
It's fairly straightforward - "consciousness" is nothing more than a compact way to talk about certain kinds of very complicated interactions of neurons, and those neurons are part of and affect the world. We might not know exactly which neuron firing patterns correspond to "consciousness", but we know that some patterns (say, the one you're experiencing right now) correspond to it, and others (say, an MMA fighter immediately after getting choked to unconsciousness) don't.
Replies from: passive_fist, asr, Will_Newsome
↑ comment by passive_fist · 2013-12-07T01:43:11.030Z · LW(p) · GW(p)
That's not the meaning of the word 'consciousness' that Eliezer talks about. He's talking about the hard problem of consciousness, i.e. the 'hearer'.
The Zombie Argument is that if the Zombie World is possible—not necessarily physically possible in our universe, just "possible in theory", or "imaginable", or something along those lines—then consciousness must be extra-physical, something over and above mere atoms. Why? Because even if you somehow knew the positions of all the atoms in the universe, you would still have to be told, as a separate and additional fact, that people were conscious—that they had inner listeners—that we were not in the Zombie World, as seems possible.
You're talking, instead, about reductionism (EDIT: I previously said it was dualism, which is incorrect) i.e. the idea that consciousness results naturally from the firing of neurons.
EDIT: If you're trying to answer the question "how can Zombies write about consciousness", then you are absolutely correct.
Replies from: FeepingCreature, ThrustVectoring
↑ comment by FeepingCreature · 2013-12-07T01:54:44.791Z · LW(p) · GW(p)
You're talking, instead, about dualism i.e. the idea that consciousness actually causally affects the firing of neurons in a way that is not explainable by physical theory alone.
No, he's talking about reductionism, the idea that consciousness is a thing that arises from physical laws "naturally", without requiring special consideration in them.
Substance dualism is the claim that consciousness is a thing that exists, but has separate laws controlling its behavior.
Replies from: ThrustVectoring, passive_fist
↑ comment by ThrustVectoring · 2013-12-07T03:08:23.395Z · LW(p) · GW(p)
Yup, exactly one thing is going on in our brains - a vast sea of elementary particles and fundamental fields interacting. It's very inconvenient to talk about things this way, so we can refer to the gross effect of these interactions in the brain as thoughts, neurons firing, consciousness, chemical reactions, etc., as it becomes convenient to do so.
Replies from: pragmatist, hyporational
↑ comment by pragmatist · 2013-12-08T13:37:29.855Z · LW(p) · GW(p)
This does not accurately describe what most people mean when they talk about consciousness. It may turn out that conscious experience can be reduced to certain neuronal interactions, but such a reduction (even establishing the existence of such a reduction) would count as a major empirical advance. If all our talk about consciousness was in fact merely a matter of convenience, then the existence of the reduction wouldn't be an empirical question. It would be true by definition. Analogy: we now understand that heat is simply molecular motion, but that doesn't mean that when people were talking about heat in the 18th century, it was merely a convenient way for them to talk about certain molecular motions. They were talking about a phenomenon with which they had direct experience, that subsequently turned out to be reducible to molecular motion.
When most people talk about consciousness, they are referring to phenomenal experience (the fact that it feels like something to be awake and observing the universe). Now it may turn out that this phenomenon is completely reducible to neuronal interactions, but to simply say "consciousness" is a convenient label for a set of neuronal interactions is to elide an important and substantive problem. Dualists aren't confused about what "consciousness" means, they disagree with you on the possibility of a reduction. It's generally not a good idea to settle a debate by mere definition.
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2013-12-09T03:42:56.821Z · LW(p) · GW(p)
but such a reduction (even establishing the existence of such a reduction) would count as a major empirical advance.
I think Eliezer makes the point that the specificity of the various neurological disorders and breakdowns forms some very strong evidence for how stuff happens in the brain.
to simply say "consciousness" is a convenient label for a set of neuronal interactions is to elide an important and substantive problem.
No, because you're here, talking about phenomenal experience, and it feeling like things. Imagine following that causal trail back; where do you expect to end up as the intermediary of going from world to photons to retina to brain to ? to brain to fingers to talking about phenomenally perceiving the world? In an area underlaid with completely different physical laws, that attaches to the brain by a bridging system? What would that even explain? Nothing. It's not an explanation, it's mysticism slotted in to explain a mysterious-to-you process. We have not seen any evidence that the physics of the brain are in any way magical. We've taken apart brains and no mysterious things have happened. We've cut into living brains and we've practically watched specific pieces of functionality fall away. With modern scanning tech, we can watch as memories are retrieved. We can literally look at reconstructions of your visual imagination. What does it take?
The entirety of dualism is an argument from ignorance. "I cannot imagine how conscious experience of reality could arise from physical laws." -> "Hence, physics is not the complete answer." No, hence your imagination is limited.
Replies from: pragmatist
↑ comment by pragmatist · 2013-12-09T06:17:24.972Z · LW(p) · GW(p)
Many post-Newtonian physicists believed that physical reality consisted of just matter in motion in absolute space. For them, physicalism meant reducing everything to particle interactions. The only nodes in the fundamental causal graph were properties of particles. However, it gradually became clear that reducing all physical phenomena to this basis was a very difficult, if not insurmountable, task. But by introducing a new element into the fundamental physical ontology -- fields -- one could in fact formulate much better physical theories.
Now consider a hypothetical early physicalist resisting the introduction of these new nodes into the fundamental causal graph. He might have objected to simply introducing a new primitive object into the ontology on the grounds that this object is "mysterious" or "magical". The correct response would have been, "Why is my primitive any more mysterious or magical than yours?" If introducing a new primitive allows us to construct more and better explanations, it might be worth it. Of course, the new nodes in the graph need to be doing some work, they need to be allowing for the construction of better theories and all that. You can object to them on the grounds that they do not do that work, but simply objecting to them on the grounds that they are new irreducible nodes isn't much of an argument.
My impression is that property dualists like Chalmers think of irreducible psychological properties and laws along these lines. They are new irreducible nodes in the causal graph, yes, but they are essential for constructing good explanations. Without these "psycho-physical" laws, we don't have any good explanations of the phenomenal quality of conscious experience. Maybe the dualists are wrong about this, but they are not (at least not necessarily) simply positing an idle dangler that does no explanatory work. They are claiming we need to add new nodes to our fundamental causal graph, and that these nodes will do important explanatory work. Also, the nodes are linked to the pre-existing nodes by law-like relationships (hence the term "psycho-physical law"), just as the new "field" nodes were linked to the pre-existing "particle" nodes by law-like relationships. If we have predictable, empirically discoverable laws connecting these nodes, I don't know why one would describe it as "mysterious" or "magic". Dualists of this stripe aren't simply helping themselves to a semantic stopsign, they're writing a promissory note. They are saying that incorporating psychological (or, perhaps more plausibly, broadly informational) properties into our fundamental ontology will lead to better theories.
All that said, I AM NOT A DUALIST. I just don't think the rejection of dualism is virtually a priori true, that there is no non-magical alternative to reductionism. I think there is a substantive empirical disagreement about what our best theories of consciousness will look like. This is not a question that can be settled by simply defining one position to be true, by declaring that "consciousness" is just a label for a set of neural interactions. If that is what one means by "consciousness", then the dualist position doesn't even make sense. And if your definition of a word renders your opponents (some of whom are quite smart) incoherent, then chances are you are talking past them rather than genuinely refuting their position.
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2013-12-09T06:38:09.819Z · LW(p) · GW(p)
The correct response would have been, "Why is my primitive any more mysterious or magical than yours?"
Because yours doesn't do any work; as Eliezer said, if you postulate different laws for consciousnesses then you don't actually end up less confused about how consciousness works. Besides, I've never seen anybody even attempt to write down such a law, they're just referred to as this amorphous anyblob.
I'd be more charitable if dualists got their alternative laws to do actual predictive work, even if they just predicted properties of personal experience. But no, it happens that the people who do all the useful work are neurologists. This is the universe saying "hint" pretty loudly.
Without these "psycho-physical" laws, we don't have any good explanations of the phenomenal quality of conscious experience
But that's my point, psychophysical laws don't explain anything either, especially if you never get called on actually providing them! The only definition those laws have is that they ought to allow you to explain consciousness, but they never actually get around to doing so! This isn't reason, it's an escape hatch.
All that said, I AM NOT A DUALIST. I just don't think the rejection of dualism is virtually a priori true
Neither do I, I think it's evidentially true. (If they haven't argued themselves into epiphenomenality again, then it's a priori true, or at least obvious.)
Replies from: pragmatist, pragmatist
↑ comment by pragmatist · 2013-12-09T07:04:36.976Z · LW(p) · GW(p)
A question, just to make sure we're not totally talking past each other: What is the kind of evidence that would lead you to update in the direction of dualism being true?
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2013-12-09T07:10:08.806Z · LW(p) · GW(p)
Scientists cutting into a living brain and finding the interface points, brains doing physically impossible computational feats, physical systems behaving differently in the presence of brains. Hard to say more than that in the absence of specific predictions.
Replies from: pragmatist
↑ comment by pragmatist · 2013-12-09T07:12:57.628Z · LW(p) · GW(p)
I gave a prediction in my other comment. Do you agree that the continuing absence of a substantive reductive theory, or even an adequate approach to a reductive theory, of phenomenal consciousness is (weak) evidence against reductionism? If so, do you consider it (weak) evidence for dualism?
Also, not all dualists are substance dualists. Chalmers doesn't believe that brains are getting signals from some non-physical realm.
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2013-12-09T07:24:02.101Z · LW(p) · GW(p)
Chalmers has been declared silly in an Eliezer article somewhere on here; I agree with it completely, so just read that instead.
Regarding evidence: in fact, I'll go the other way around, and say that the brain is a triumph of reductionism in progress. Starting from when we thought the brain was there to cool the blood, and we had no idea where reason happened, the realm of dualism has only gotten smaller; motor control, sensory processing, reflexes, neuronal disorders disabling specific aspects of cognition - the reductionist foundation for our minds has been gaining strength so predictably that I'd call a complete reductionist explanation of consciousness a matter of when, not if.
You failed to count all the myriad aspects of minds that have reductionist explanations. Consciousness is what's left.
Replies from: pragmatist
↑ comment by pragmatist · 2013-12-09T07:41:53.475Z · LW(p) · GW(p)
You failed to count all the myriad aspects of minds that have reductionist explanations. Consciousness is what's left.
I don't see how this alters the claim that the continuing absence of a reductive theory of consciousness is evidence against reductionism. Counting all the myriad aspects doesn't change that fact, and that's the only claim I made. I didn't say that the continuing absence of a reduction has demonstrated that reductionism is false. I'm only claiming that Pr(Reductionism | No Reduction of Consciousness available) < Pr(Reductionism).
I think the existence of the Bible is evidence for Jesus's divinity. That doesn't mean I'm discounting the overwhelming evidence telling against his divinity.
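To make the direction of that update concrete, here is a toy Bayes calculation. The numbers are made up purely for illustration (they are nobody's actual credences); only the direction of the shift matters.

```python
# Toy Bayes update: why "no reduction of consciousness yet" is (weak) evidence
# against reductionism. All numbers below are illustrative assumptions.
p_R = 0.9                  # prior probability of reductionism about consciousness
p_red_given_R = 0.5        # chance a reduction would already exist if reductionism is true
p_red_given_notR = 0.0     # a genuine reduction cannot exist if reductionism is false

p_no_red = (1 - p_red_given_R) * p_R + (1 - p_red_given_notR) * (1 - p_R)
p_R_given_no_red = (1 - p_red_given_R) * p_R / p_no_red

print(p_R_given_no_red)    # ~0.82 < 0.9: the continuing absence lowers Pr(Reductionism)
```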
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2013-12-09T09:23:01.393Z · LW(p) · GW(p)
Fair enough. I just think that seen in the context of the human mind, so far the evidence in general comes down fairly solidly on the side of reductionism, so I wouldn't recommend clinging to consciousness as the dualist liferaft in the metaphorical reductionist storm.
↑ comment by pragmatist · 2013-12-09T07:00:02.535Z · LW(p) · GW(p)
Because yours doesn't do any work; as Eliezer said, if you postulate different laws for consciousnesses then you don't actually end up less confused about how consciousness works.
I don't see why this has to be the case. We posited different laws for fields (they don't behave like particles), but that doesn't mean they don't do any work. The dualists I'm describing (and an actual example may or may not exist) aren't describing some completely parallel psychological realm unconnected to the physical realm. They believe one can build good theories where fundamental psychological variables are causally entangled with physical variables, kind of like field variables are causally entangled with particle variables.
I agree that if these psychological properties are completely epiphenomenal then they do no work, but I don't see why they'd have to be. That's a substantive question. Maybe it will turn out that laws like the Weber-Fechner law are the best we can do in the relevant domain, that we can't come up with equally useful generalizations that don't appeal to sensations (a hypothetical example; for all I know, we have already done better in this particular case). In that case, our best theory has sensations as an irreducible component, but I don't see why that makes it mysterious or magical.
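(For reference, the Weber-Fechner law mentioned above is the classic example of such a sensation-involving generalization: perceived sensation S grows roughly logarithmically with stimulus intensity I, S = k ln(I / I0), where I0 is the detection threshold and k an empirically fitted constant. The point is that the left-hand side, S, is a psychological variable.)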
If successful reductions are evidence for the general thesis of reductionism, then the absence of a successful reduction is evidence against the thesis. Weak evidence, perhaps, but evidence nonetheless. And the longer the absence persists, the stronger evidence it is.
I'd be more charitable if dualists got their alternative laws to do actual predictive work, even if they just predicted properties of personal experience. But no, it happens that the people who do all the useful work are neurologists. This is the universe saying "hint" pretty loudly.
Well, psychophysics is a field, even though it doesn't presume dualism. Dualists are claiming that we can't do better. Their position is largely a negative one, and so difficult to construct a research program around. I generally dislike positions of that kind in science, but that doesn't mean they couldn't be right. Also, I suspect the intersection of "dualists" and "neurologists" is not the empty set. Some of the neurologists doing useful work might be dualists of some stripe.
In any case, I didn't intend to debate the efficacy (or lack thereof) of dualists. Like I said, I'm no dualist. Perhaps all dualists are crappy philosophers, terrible scientists and horribly confused individuals. Doesn't affect the point I was making.
Neither do I, I think it's evidentially true.
Um... OK, then I don't see where we disagree. In the original comment you responded to, I was simply saying that "consciousness" isn't just a label for a set of neural interactions. If you think dualism is false based on evidence, then I presume you agree. After all, if you believed that "consciousness" simply meant a set of neural interactions, then "consciousness is not reducible to neural interactions" would be false based on the meanings of the words alone, not based on empirical evidence.
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2013-12-09T07:18:52.581Z · LW(p) · GW(p)
I don't see why this has to be the case. We posited different laws for fields
Sorry, let me restate my point.
if you postulate [merely the existence of] different laws for consciousnesses then you don't actually end up less confused about how consciousness works.
Actually stating the bridging laws might help with this.
good theories where fundamental psychological variables are causally entangled with physical variables, kind of like field variables are causally entangled with particle variables.
I don't see what making the psychology fundamental even buys you.
In that case, our best theory has sensations as an irreducible component
My standard is "could a superintelligence reduce these laws to underlying simple physics?" It's possible that psychology will turn out to be practically irreducible; I have no beef with that claim. I don't buy that it's fundamentally irreducible though.
If successful reductions are evidence for the general thesis of reductionism, then the absence of a successful reduction is evidence against the thesis
to the extent that a reduction would have been expected. Give neurology some time. We're making good progress. Remember, there was a time we didn't even know what the brain was for. In that time, dualism would have had a much easier time of it, and its island has only gotten smaller since. Winds of evidence and all.
Dualists are claiming that we can't do better.
That we can't, even in theory, do better. That we, as in cognitively limited humans, can't do better is merely implausible.
Um... OK, then I don't see where we disagree. In the original comment you responded to, I was simply saying that "consciousness" isn't just a label for a set of neural interactions.
I think consciousness is just physics. I don't perceive consciousness as just physics, but then again, I don't perceive anything as just physics, even things that unambiguously are, like rocks and air and stuff. I can imagine a causal path in the brain that starts with "photons hitting a rose" and ends with me talking about the effing redness of red, and I can, in my imagination, identify this path with "redness". I suspect this will get clearer as we become able to stimulate specific parts of the brain more easily.
↑ comment by hyporational · 2013-12-07T05:27:03.256Z · LW(p) · GW(p)
It's also inconvenient that going from the brain doing work to neurons firing doesn't necessarily mean we've done any reduction in the causal sense; it could also mean we've just described exactly the same physical events at different levels of magnification, which is what I think you're trying to say here.
Replies from: ThrustVectoring
↑ comment by ThrustVectoring · 2013-12-07T15:44:52.059Z · LW(p) · GW(p)
I'm not sure what you mean exactly by "causal reduction" as opposed to a magnification. Surely the rules governing neural activity are simpler than descriptions of thoughts - even if the number and complexity of these neural interactions leave little hope of human comprehension.
↑ comment by passive_fist · 2013-12-07T07:04:11.432Z · LW(p) · GW(p)
Thanks. Corrected. My point still stands, though - it's not what Eliezer was talking about in his zombies article.
↑ comment by ThrustVectoring · 2013-12-07T02:56:52.871Z · LW(p) · GW(p)
I'm not talking about dualism - my explanation is fully explained by the neurons alone. The "hearer" is nothing more than certain patterns of neuron firing, with absolutely zero mysteriousness left over (given sufficient detail on those neuron firing patterns).
Taking consciousness out of a person means physically changing their responsible neural patterns - which means at least something as severe as a lobotomy. Taking a person, leaving their brain alone, and removing 'consciousness' is physical nonsense.
Replies from: passive_fist, hyporational
↑ comment by passive_fist · 2013-12-07T07:02:26.893Z · LW(p) · GW(p)
Yes, I removed the bit about dualism in my post. However, again, what you're saying is not the same thing Eliezer is saying.
Replies from: hyporational
↑ comment by hyporational · 2013-12-07T08:27:56.631Z · LW(p) · GW(p)
I think you should spell out what you think they're saying. I don't see any signs that ThrustVectoring is misunderstanding the Zombie argument or the hard problem.
Replies from: passive_fist
↑ comment by hyporational · 2013-12-07T05:22:38.395Z · LW(p) · GW(p)
The "hearer" is nothing more than certain patterns of neuron firing
I agree, but of course you don't literally mean what you're saying here if you're truly a reductionist. I think the problem people have thinking about this is that they focus on what we can currently see neurons doing with modern science, rather than on what the neurons are actually doing: the computations and physics they actually reduce to.
Replies from: ThrustVectoring
↑ comment by ThrustVectoring · 2013-12-07T15:41:52.516Z · LW(p) · GW(p)
Yes, I do literally mean this. Of course, when I say "patterns of neurons firing", that is nothing more than a compact way to talk about the biochemistry and cellular activity going on in the brain.
↑ comment by asr · 2013-12-08T13:15:32.605Z · LW(p) · GW(p)
I don't think this is the LW consensus. This view implies that "consciousness" is a property of neurons, rather than a property of programs or agents. Surely the LW view is that uploads can be conscious.
I also don't think this is straightforward. Where I get confused is when I try to formalize the sense that it's about programs. This implies that there's such a thing as a conscious Turing machine. Is it still conscious if it stops executing? If I implement it on paper?
Replies from: ThrustVectoring
↑ comment by ThrustVectoring · 2013-12-08T14:25:13.106Z · LW(p) · GW(p)
I may have not been explicit enough about this: it's not the neurons, but the patterns of neural activity. Implement the patterns using not-neurons, and you still have consciousness.
Replies from: asr
↑ comment by asr · 2013-12-08T18:48:30.028Z · LW(p) · GW(p)
This doesn't address my confusion. Suppose I just wrote down on paper the states of a simulated brain at time t, t1, t2, etc. Would the model be conscious? Is it still conscious if I simulate step t1, and then wander away for a while?
My sense is that consciousness is a property of biological humans in the physical world and that it's not necessarily useful as a description for anything else. It's a social construction and probably doesn't have any useful summary in terms of physics or computer science or properties of neurons.
Replies from: Viliam_Bur, fubarobfusco
↑ comment by Viliam_Bur · 2013-12-09T08:42:09.278Z · LW(p) · GW(p)
Would the model be conscious?
Yes.
Is it still conscious if I simulate step t1, and then wander away for a while?
Would a human be conscious, if you could magically stop time? No, while time is stopped, there is no consciousness.
Consciousness is an activity; it happens in time. It's like running. Is the simulated robot running? Yes. If you pause the simulation, is the robot still running? No, it's frozen in the middle of running, it is not running while paused.
Replies from: asr
↑ comment by asr · 2013-12-09T17:55:53.120Z · LW(p) · GW(p)
Consciousness is an activity; it happens in time. It's like running. Is the simulated robot running? Yes. If you pause the simulation, is the robot still running? No, it's frozen in the middle of running, it is not running while paused.
This makes me think that consciousness cannot be a property of minds or programs. "Active" and "inactive" aren't really properties that a program can have. A program execution is just a sequence of states; there's no well-defined interval of time in which the program is "running".
So this makes me think consciousness isn't a property of minds, but rather of how minds relate to the passage of external time. Is there any particular harm in drawing a definition of "consciousness" that excludes disembodied uploads and AIs?
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-12-09T18:08:44.133Z · LW(p) · GW(p)
consciousness isn't a property of minds, but rather of how minds relate to the passage of external time.
I don't think I understand what you're saying here.
I would certainly agree that consciousness occurs over time, and that therefore to say that a system "is conscious" at a particular instant is true enough for normal conversation, but is technically speaking problematic, just like to say that the system "is falling" or "is burning" at a particular instant is technically speaking problematic.
Is there any particular harm in drawing a definition of "consciousness" that excludes disembodied uploads and AIs?
No more so than drawing a definition of "falling" that excludes what skydivers do when they jump from airplanes. These are English words, we can define them however we choose.
Are you suggesting we adopt such a definition?
Why?
↑ comment by asr · 2013-12-10T15:10:53.206Z · LW(p) · GW(p)
I would certainly agree that consciousness occurs over time, and that therefore to say that a system "is conscious" at a particular instant is true enough for normal conversation, but is technically speaking problematic, just like to say that the system "is falling" or "is burning" at a particular instant is technically speaking problematic.
Hrm? In the ordinary physical world, we can talk perfectly well about the interval in which an object is falling -- it's the period in which it's moving downward as a result of gravity. And likewise it's possible to say "the building started burning at time t, and was on fire until time t1, at which point it smouldered until t2". If you think chemical or positional change requires time, we can do the usual move of asking about "the small interval of time around t."
The thing that confuses me is that it seems like we have two conflicting desires or intuitions:
To say that consciousness is a property of minds, which is to say, of programs.
We also want to talk about consciousness as though it's an ordinary physical state, like "walking", or "digesting" -- the property starts (possibly gradually), continues for a period, and then fades out. It seems natural to say "I become conscious when I wake up in the morning, and then am conscious until night-time when I go to sleep."
And the problem is that our intuitions for physical processes and time don't match with what we know how to define about programs. There's no formal way to talk about "the program is executing at time t"; that's an implementational detail that isn't an intrinsic property of the program or even of the program's execution history.
I am indeed suggesting we adopt a definition of consciousness that only applies to the physical embodiment of a mind. The advantage of this definition is that it captures all the conventional uses of the term in everyday life, without forcing us to deal with definitional puzzles in other contexts.
As a rough example of such a definition: "Consciousness is the property of noticing changes over time in one's mental state or beliefs."
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-12-10T16:18:02.031Z · LW(p) · GW(p)
Ah, I think I understand your position better now. Thanks for clarifying.
So, OK... if I understand you correctly, you're saying that when a building is burning, we can formally describe that process as a property of the building, but when a program is executing, that isn't a property of the program, but rather of the system executing the program, such as a brain.
And therefore you want to talk about consciousness as a property of program-executors (physical embodiments of minds) rather than of programs themselves.
Yes? Have I got that more or less right?
Sure, I would agree with that as far as it goes, though I don't think it's particularly important.
I would similarly say that I can't actually write a program that calculates the square root of 200, because programs can't do any such thing. Rather, it is the program executor that performs the calculation, making use of the program.
And I would similarly say that this distinction, while true enough, isn't particularly important.
Digressing a little... it's pretty common for humans to use metonymy to emphasize what we consider the important part of the system. So it's no surprise that we're in the habit of talking metonymically about what "the program" does, since most of us consider the program the important part... that is, we consider two different program-executors running the same program identical in the most important ways, and consider a single program-executor running two different programs (at different times) to be different in the most important ways.
So it's unlikely that we'll stop talking that way, since it expresses something we consider important.
But, sure, I agree that when we want more precision for some reason it can be useful to recognize that it's actually the program executor that is exhibiting the behavior which the program specifies (such as consciousness, or calculating the square root of 200), not the program itself.
So, going back to your original comment with that context in mind:
Suppose I just wrote down on paper the states of a simulated brain at time t, t1, t2, etc. Would the model be conscious?
In normal conversation I would say "yes."
But to be more precise about this: in this case there is a system (call it S) comprising your brain, a piece of paper, and various other things. S is executing some unspecified program that generates a series of mind-executing-system states (in this case, simulated brain-states). Not all programs cause consciousness when executed, of course, any more than all programs cause the calculation of the square root of 200. But if S is executing such a program, then S is conscious while executing it.
I'm not exactly sure what "the model" refers to in this case, but if the model includes the relevant parts of S then yes, the model is conscious.
Is it still conscious if I simulate step t1, and then wander away for a while?
This, too, is to some extent a matter of semantics.
If system S2 spends 5 seconds calculating the first N digits of pi, pauses to do something else for a second, then spends 5 seconds calculating the next N digits of pi, it's not particularly interesting to ask whether S2 spent 10 or 11 seconds calculating those 2N digits. If I want to be more precise, I can say S2 is calculating pi for the intervals of time around each step in the execution, so if we choose intervals short enough S2's calculation is intermittent, while if we choose intervals long enough it is continuous. But there is no right answer about the proper interval length to choose; it depends on what kind of analysis we're doing.
Similarly, if S generates state t1 and then disintegrates (for example, you wander away for a while) and then later reintegrates (you return) and generates t2, whether we consider S to remain conscious in the interim is essentially a function of the intervals we're using for analysis.
My sense is that consciousness is a property of biological humans in the physical world and that it's not necessarily useful as a description for anything else. It's a social construction and probably doesn't have any useful summary in terms of physics or computer science or properties of neurons.
Why do you believe this? It doesn't seem to follow from the above: even if consciousness is a property of program-executors rather than of the programs themselves, it doesn't follow that only biological humans can execute the appropriate programs.
Replies from: asr
↑ comment by asr · 2013-12-10T16:35:34.857Z · LW(p) · GW(p)
Appreciate the time and care you put into your response -- I find it helpful to work through this a step at a time.
I would similarly say that I can't actually write a program that calculates the square root of 200, because programs can't do any such thing. Rather, it is the program executor that performs the calculation, making use of the program.
One of the big achievements of programming language research, in the 1960s and 1970s, was to let us make precise assertions about the meaning of a program, without any discussion of running it. We can say "this program emits the number that's the best possible approximation to the square root of its input", without running the program, without even having any physical system that can run the program. There are some non-trivial and important properties of programs, as separate from a particular physical execution.
Not all properties are of this form. "This program is currently paged out" is a property of the particular concrete execution, not of an abstract execution history or of the program itself.
I assume there are properties that human minds possess that are of both sorts. For example, I suspect "this is a sad brain" is a meaningful assertion even if the brain is frozen or if we're looking at one time-step of a simulation. However, I think consciousness, the way we usually use the word, is the second sort of property.
And the reason the distinction matters is that I want a sense what aspects of minds are relevant when discussing, e.g., an upload that isn't currently running or a synthesized AGI.
Why do you believe this? It doesn't seem to follow from the above: even if consciousness is a property of program-executors rather than of the programs themselves, it doesn't follow that only biological humans can execute the appropriate programs.
I agree with this. I didn't mean to say "nothing but a biological human can ever be conscious" -- just that "not all physical embodiments would have the right property." I expect that uploaded-human-attached-to-robot would be usefully described as conscious, and that a paper-and-pencil simulation would NOT.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-12-10T17:03:47.465Z · LW(p) · GW(p)
We can say "this program emits the number that's the best possible approximation to the square root of its input", without running the program, without even having any physical system that can run the program. There are some non-trivial and important properties of programs, as separate from a particular physical execution. Not all properties are of this form. "This program is currently paged out" is a property of the particular concrete execution, not of an abstract execution history or of the program itself.
I agree with this as far as it goes, but I observe that your examples conflate two different distinctions. Consider the following statements about a program:
P1: "This program emits the number that's the best possible approximation to the square root of its input"
P2: "This program is currently paged out"
P3: "This program is currently emitting the number that's the best possible approximation to the square root of its input"
P4: "This program pages out under the following circumstances"
I agree with you that P1 is about the abstract program and P2 is about the current instantiation. But P3 is also about the instantiation, and P4 is about the abstract program.
That said, I would also agree that P4 is not actually something that's meaningful to say about most programs, and might not even be coherently specifiable. In practice, the more sensible statement is:
P5: "This program-executor pages programs out under the following circumstances."
And similarly in the other direction:
P6: "This program-executor executes programs which emit the best possible approximations to the square roots of their input"
is a goofy thing to say.
So, OK, yes. Some aspects of a program's execution (like calculating square roots) are properties of the program, and some aspects (like page-swapping) are properties of the program-executor, and some aspects (like assisting in the derivation of novel mathematical theorems) are properties of still larger contexts.
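To make that split concrete, here is a small sketch; the function, the Newton's-method choice, and the tolerance are mine, purely for illustration:

```python
import math

def sqrt_approx(x: float) -> float:
    """Approximate the square root of x by Newton's method."""
    guess = x if x > 1 else 1.0
    for _ in range(50):
        guess = 0.5 * (guess + x / guess)
    return guess

# A P1-style claim is about the program text itself and can be argued without
# ever running it: for any x > 0, the iteration converges toward sqrt(x).
#
# A P3-style claim is about one concrete execution, settled only by some
# executor actually running the program:
assert abs(sqrt_approx(200.0) - math.sqrt(200.0)) < 1e-9

# P2-style claims ("this program is currently paged out") describe a concrete
# instantiation, and P5-style claims describe the executor; neither is a
# property of the program text.
```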
I suspect "this is a sad brain" is a meaningful assertion even if the brain is frozen or if we're looking at one time-step of a simulation. However, I think consciousness, the way we usually use the word, is the second sort of property.
Why do you suspect/think this? It doesn't seem to follow from anything we've said so far.
And the reason the distinction matters is that I want a sense what aspects of minds are relevant when discussing, e.g., an upload that isn't currently running or a synthesized AGI.
Sure... I'm all in favor of having language available that makes precise distinctions when we want to be precise, even if we don't necessarily use that language in casual conversation.
I expect that uploaded-human-attached-to-robot would be usefully described as conscious, and that a paper-and-pencil simulation would NOT.
Well, I expect a paper-and-pencil simulation of an appropriate program to be essentially impossible, but that's entirely for pragmatic reasons. I wouldn't expect an "uploaded human" running on hardware as slow as a human with paper and pencil to work, either.
I suspect your reasons are different, but I'm not sure what your reasons are.
↑ comment by fubarobfusco · 2013-12-08T20:56:54.796Z · LW(p) · GW(p)
Consciousness is something that we each notice about ourselves.
(Or rather, "consciousness" is a philosopher's way of talking about that thing which we notice about ourselves that we can't directly notice about others, though we infer it about others.)
It seems to me that part of the question, then, is "What is noticing?"
↑ comment by Will_Newsome · 2013-12-07T17:48:26.535Z · LW(p) · GW(p)
Replies from: ThrustVectoring, IlyaShpitser
It's fairly straightforward - "consciousness" is nothing more than a compact way to talk about certain kinds of very complicated interactions of neurons
↑ comment by ThrustVectoring · 2013-12-08T05:12:12.736Z · LW(p) · GW(p)
No, this is pretty much the standard LW orthodox response to the vast majority of philosophical problems - reductionism and mixed reference. I'm not saying that consciousness doesn't exist, I'm saying that it's nothing more than a kind of brain activity.
As far as I know, my position should not be surprising or novel in the least if you've read the sequences.
↑ comment by IlyaShpitser · 2013-12-09T14:51:02.473Z · LW(p) · GW(p)
My favorite hypothesis for consciousness skepticism is the "colorblindness" theory (advocated by one of Yvain's old profs, apparently). That is, just as some people cannot tell red from green, some people have a hard time with qualia.
Under this hypothesis, there are observable ramifications of a lack of (or difficulty with) qualia. Folks who have difficulty with qualia actually write about consciousness differently from folks with qualia. If you have qualia, you write like Chalmers. If you don't, you write like Dennett (or the grandparent) :).
comment by Strilanc · 2013-12-07T17:42:10.944Z · LW(p) · GW(p)
are there any tentative explanations for this "can change world" property?
No, we don't know how what we refer to as consciousness affects the world, and (congruently) we don't know why we think we have consciousness (as in experiencing qualia). There clearly are effects, but we don't know the nature of the mechanism. Frankly, it's not even clear that we're making sense when we talk about consciousness.
I think once we understand what consciousness is, there will be clear answers to questions like:
- Why do we experience high-level things (seeing a dog) instead of low-level things (figuring out that inputs correspond to a dog)?
- Are there high-level things we don't experience (e.g. a conspiracy in your brain intentionally planning to make you reproduce)?
- How do we distinguish consciousness from introspection in terms of effects on the world?
- Are we conscious when we're asleep, but just don't remember?
- Are we ever not conscious when we're awake?
- How do we distinguish consciousness from short term memory in terms of effects on the world?
- How do we detect consciousness?
- Is all matter conscious? Some matter? All people? Some people? Space? Forces?
- How do we increase/decrease/create/prevent/control consciousness?
But for the moment I'm so confused that I have no idea what a satisfying answer to those questions will even look like.
comment by Shmi (shminux) · 2013-12-06T20:13:33.842Z · LW(p) · GW(p)
The cosmologist G.F.R. Ellis once wrote a one-page essay in Nature about it: http://www.mth.uct.ac.za/~ellis/nature.pdf
A simple statement of fact: there is no physics theory that explains the nature of, or even the existence of, football matches, teapots, or jumbo-jet aircraft. The human mind is physically based, but there is no hope whatever of predicting the behaviour it controls from the underlying physical laws. Even if we had a satisfactory fundamental physics ‘theory of everything’, this situation would remain unchanged: physics would still fail to explain the outcomes of human purpose, and so would provide an incomplete description of the real world around us.
[...]
the higher levels in the hierarchy of complexity have autonomous causal powers that are functionally independent of lower-level processes. Top-down causation takes place as well as bottom-up action, with higher-level contexts determining the outcome of lower-level functioning, and even modifying the nature of lower-level constituents.
Replies from: Cyan
↑ comment by Cyan · 2013-12-06T22:31:08.854Z · LW(p) · GW(p)
Top-down causation takes place as well as bottom-up action, with higher-level contexts determining the outcome of lower-level functioning, and even modifying the nature of lower-level constituents.
I think this is a hugely unappreciated fact about the universe. Macroscopic variation can be insensitive to virtually all microscopic variation, in the sense that some small set of macroscopic variables obeys some relation without special regard to the particular microstate in existence, e.g., PV=nRT. And yet, interactions that can be described entirely at the macroscopic level may end up causing huge changes to microscopic states.
E. T. Jaynes had an important insight about these sorts of things: if something macroscopic happens reproducibly in spite of no fine control over the microstate, then it must be the case that the process is insensitive to microscopic variation; (no duh, right? But --) therefore we will be able to make macroscopic predictions in spite of having no microstate knowledge just by picking the probability distribution over microstates that maximizes entropy subject to the constraints of our macroscopic knowledge.
Consider what this means for so-called "emergent properties". If a system reproducibly displays some "emergent" property once enough constituent parts are aggregated and the aggregation process is largely or entirely uncontrolled, then we ought to be able to predict the emergence of that property by taking a maximum entropy distribution over the details of the aggregation process. (And if control of some aspect of the aggregation process is important, we can incorporate that fact as a constraint in the entropy maximization.)
And consciousness is sometimes said to be an emergent property of brain processes...
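A minimal sketch of Jaynes's recipe on a toy system; the four energy levels, the mean-energy constraint, and the use of scipy's brentq root-finder are all my own illustrative choices:

```python
import numpy as np
from scipy.optimize import brentq

# Toy system: a "microstate" is one of four discrete energy levels, and the only
# macroscopic knowledge is the mean energy. Maximizing entropy subject to that
# constraint gives p_i proportional to exp(-beta * E_i), with beta chosen so the
# distribution reproduces the constraint.
energies = np.array([0.0, 1.0, 2.0, 3.0])   # hypothetical microstate energies
target_mean = 1.2                            # the single macroscopic constraint

def mean_energy(beta):
    w = np.exp(-beta * energies)
    p = w / w.sum()
    return p @ energies

beta = brentq(lambda b: mean_energy(b) - target_mean, -50.0, 50.0)
p = np.exp(-beta * energies)
p /= p.sum()

# Macroscopic predictions now follow from p without knowing which microstate
# actually obtains, e.g. the predicted spread in energy:
print("maxent distribution:", p.round(4))
print("predicted energy variance:", (p @ energies**2) - (p @ energies) ** 2)
```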
Replies from: Dentin, ThrustVectoring, passive_fist
↑ comment by Dentin · 2013-12-17T19:35:02.885Z · LW(p) · GW(p)
This is a classic "Microsoft help desk" answer: while technically correct, it doesn't really help solve the problem. Predicting via entropy distribution for complex systems is hugely more complicated than other methods, and the only places it can really work are things like the ideal gas law and rubber bands. Put together a bunch of systems capable of exporting entropy to each other and interacting, and you'll see the difficulty ramp up absurdly fast.
Replies from: Cyan
↑ comment by Cyan · 2013-12-17T20:40:01.108Z · LW(p) · GW(p)
Since this is meta-level advice, there is no "the problem" in sight. Your criticism would seem to apply to cases not covered by my claim, to wit, cases where the phenomenological macrostate predictions are sharp even though the microstates are uncontrolled. If you're saying the cases that are covered are rare, I do not deny it.
↑ comment by ThrustVectoring · 2013-12-07T03:00:48.795Z · LW(p) · GW(p)
My philosophy/epistemology holds that the word "emergent" can be replaced with "reducible" with no loss of meaning. People try to sneak in anti-reductionist ideas with the concept of emergence, so what I usually do is replace the word as I'm reading and see if it still makes sense.
Replies from: Gunnar_Zarncke
↑ comment by Gunnar_Zarncke · 2013-12-07T14:11:34.958Z · LW(p) · GW(p)
Yes. That is also my view. Except that "reducible" carries with it the view from below, often with the goal of explaining or deriving the macroscopic behavior (and often with the implied valuation that the macroscopic effects have no merit of their own), whereas "emergent" carries with it the view from above, where the interplay on the macroscopic level has merit and is of interest in its own right.
Replies from: ThrustVectoring
↑ comment by ThrustVectoring · 2013-12-07T15:27:59.294Z · LW(p) · GW(p)
I fully support using high-level models. After all, doing any sort of macroscopic work by modeling elementary particles is computationally intractable on a fundamental level.
The problem with emergence is that it's used to sneak in nonreducible magic too often. I need the reminder that however interesting airplanes are, there's nothing there that isn't the result of elementary particles and fundamental fields.
↑ comment by passive_fist · 2013-12-08T21:53:56.139Z · LW(p) · GW(p)
That's all 'emergence' is, really. Macroscopic behavior that is common among a huge variety of systems, possibly with widely differing microscopic details.
comment by Manfred · 2013-12-06T20:08:08.733Z · LW(p) · GW(p)
A good first step could be to ask what our brain does when we ask "am I conscious?"
But this is not as good a way to identify "consciousness" as it could be because of question substitution. Just because we can answer the question quickly does not necessarily mean it's easy - it just means we've done something easy in order to answer the question. How do we reduce question substitution? Engage system 2. So maybe a good second step is to ask what our brain does when we write a philosophy paper about consciousness.
Some things we might check on through introspection: plan-making ("I can make choices"), world-modeling ("I know where my keys are"), basic sense constructs ("red sure does look red"), membership in humanity ("humans are conscious by definition!"), emotional state ("I'm bored"). Personally, when doing the quick version of the question, I seem to just answer the question about my senses.
comment by [deleted] · 2013-12-08T22:26:45.743Z · LW(p) · GW(p)
Dora Marsden wrote about this topic at length in the 1915 - 1919 run of her magazine The Egoist. Her multi-year arguments were atomized into numbered sections. A sensible quote would be too long, a short quote wouldn't make sense. I found her arguments about the origin of consciousness and language compelling enough to put her work into circulation for the first time in over a century. How many other people (much less young women) were writing about these topics, about legalized prostitution and open marriages, about comic books and automated factories... one hundred years ago?
comment by passive_fist · 2013-12-07T02:19:14.540Z · LW(p) · GW(p)
My take on Eliezer's position is this: If consciousness cannot affect the real world, then our behavior is exactly the same behavior as would be expected if we were zombies, and so the fact that zombies (us) can somehow talk about aspects of consciousness (such as the "mysterious redness of red"), without any sort of input from the consciousness itself, and somehow have these aspects of consciousness be in full agreement with what the non-zombie would think, seems unlikely. It would be like watching a show on TV and the presenter telling the viewer exactly what the viewer is thinking.
The 'changing the world' thing he's talking about refers to the idea that this startling agreement between what zombies and non-zombies think must be the result of some causal effect of consciousness on the world. Say, a phone call from the viewer to the TV show.
Or otherwise just stop assuming that consciousness exists as a separate thing. Or, even better, admit our ignorance (which I agree with).
It's an interesting argument but it requires a bit more effort to prove. One possibility is that the physical world and consciousness are both causal effects of some deeper underlying reality. In this case consciousness and physics would be entirely mutually consistent, but one need not have any causal effect on the other.
Replies from: hyporational
↑ comment by hyporational · 2013-12-07T09:43:25.460Z · LW(p) · GW(p)
I understand and agree with what you're saying here. I still don't understand your objections to ThrustVectoring.
Did you think he conflated consciousness and awakeness? I don't think he did, although the example he used might make you think so. The hard-problem-consciousness depends on being awake (or dreaming), so patterns of brain activity you have now correspond to hard-problem-consciousness more than patterns of brain activity you'd have if you were choked to sleep.
Nitpick: Eliezer, not Elezier.
Replies from: passive_fist
↑ comment by passive_fist · 2013-12-07T22:09:31.442Z · LW(p) · GW(p)
My objection is that he's talking about the purely physical activity of the brain causing someone to write about consciousness and the 'mysterious redness of red', which is something a zombie could also do (by Chalmers' argument). Eliezer, on the other hand, is trying to explain what's wrong with Chalmers' argument. He's talking about the effect of that metaphysical 'hearer' on the world, something which Chalmers says is zero. That's also what DavidPlumpton is asking about, I think.
Replies from: ThrustVectoring, hyporational
↑ comment by ThrustVectoring · 2013-12-08T05:30:25.513Z · LW(p) · GW(p)
I don't believe that p-zombies are well defined in the first place - since consciousness is nothing more than the normal action of the brain, and p-zombies have normal brain action, p-zombies experience consciousness. This is a contradiction, which is a big problem for those who believe that p-zombies are a logically coherent concept.
Replies from: passive_fist
↑ comment by passive_fist · 2013-12-08T05:50:36.739Z · LW(p) · GW(p)
Yes I guessed this might be your position, hence my reply to your top-level comment.
↑ comment by hyporational · 2013-12-08T04:42:45.939Z · LW(p) · GW(p)
Eliezer is arguing for the hearer being physical, i.e. affecting the world. Ditch the meta.
Whatever qualia are, they happen in the brain, and are physical. Presumably in the future you can connect them to particular measurable brain activity because people can report them.
comment by hyporational · 2013-12-07T05:35:30.896Z · LW(p) · GW(p)
Even if a generally accepted mechanism of consciousness has not been found yet, are there any tentative explanations for this "can change world" property?
You need to unravel this question a bit more, I don't understand what you're asking.