My recent thoughts on consciousness
post by AlexLundborg · 2015-06-24T00:37:58.695Z · LW · GW · Legacy · 64 comments
I have lately come to seriously consider the view that the everyday notion of consciousness doesn't refer to anything that exists out there in the world, but is rather a confused (yet useful) projection made by purely physical minds onto their depiction of themselves in the world. The main influences on my thinking are Dan Dennett (I assume most of you are familiar with him) and, to a lesser extent, Yudkowsky (1) and Tomasik (2). To use Dennett's line of thought: we say that honey is sweet, that metal is solid, or that a falling tree makes a sound, but the character tag of sweetness or sound is not in the world; it is in the brain's internal model of it. Sweetness is not an inherent property of the glucose molecule; instead, we are wired by evolution to perceive it as sweet, as a reward for calorie intake in our ancestral environment, and there is no need for any non-physical sweetness-juice in the brain – no, it's coded (3). We can talk about sweetness and sound as if they were out there in the world, but in reality they are useful fictions of sorts that we "project" out into the world. The default model of our surroundings and ourselves that we use in our daily lives (the manifest image, or 'umwelt') is puzzling to reconcile with the scientific perspective of gluons and quarks. We can use this insight to look critically at how we perceive a very familiar part of the world: ourselves. It might be that we are projecting useful fictions onto our model of ourselves as well. Our normal perception of consciousness is perhaps like the sweetness of honey: something we think exists in the world, when it is in fact a judgement about the world made (unconsciously) by the mind.
What we are pointing at with the judgement "I am conscious" is perhaps the competence we have to access states of the world, form expectations about those states, and judge their value to us, coded in by evolution. Under this view, that is equivalent to saying that sugar is made of glucose molecules, not sweetness-magic. In everyday language we can talk about sugar as sweet and consciousness as "something-it-is-like-ness" or "having qualia", which is useful and probably necessary for us to function, but that is a somewhat misleading projection made by the world-accessing and world-assessing consciousness that really does exist in the world. That notion of consciousness is not subject to the Hard Problem: it may not be an easy problem to figure out how consciousness works, but it does not appear impossible to explain it scientifically, as pure matter like anything else in the natural world, at least in theory. I'm pretty confident that we will solve consciousness, if by consciousness we mean the competence of a biological system to access states of the world, make judgements, and form expectations. That is, however, not what most people mean when they say consciousness. Just as "real" magic refers to the magic that isn't real, while the magic that is real – that can be performed in the world – is not "real magic", "real" consciousness turns out to be a useful but misleading assessment (4). We should perhaps keep the word consciousness but adjust what we mean when we use it, for diplomacy.
Having said that, I still find myself baffled by the idea that I might not be conscious in the way I had found completely obvious before. Consciousness seems so mysterious and unanswerable, so it's not surprising that the explanation provided by physicalists like Dennett isn't the most satisfying. Despite that, I think it's the best explanation I've found so far, so I'm trying to cope with it as best I can. One of the problems I've had with the idea is how it has required me to rethink my views on ethics. I sympathize with moral realism, the view that there exist moral facts, by pointing to the strong intuition that suffering seems universally bad and well-being seems universally good. Nobody wants to suffer agonizing pain, everyone wants beatific eudaimonia, and it doesn't feel like an arbitrary choice to care about the realization of these preferences in all sentience to a high degree, instead of any other possible goal like paperclip maximization. It appeared to me to be an inescapable fact about the universe that agonizing pain really is bad (ought to be prevented) and that intelligent bliss really is good (ought to be pursued), just as a label the brain uses to distinguish wavelengths of light really is red, and that you can build up moral values from there. I have a strong gut feeling that the well-being of sentience matters, and that the more capacity a creature has for pain and pleasure, the more weight it should be given – say, a gradient from beetles to posthumans that could perhaps be understood by further inquiry into the brain (5). However, if it turns out that pain and pleasure are nothing more than convincing judgements by a biological computer network in my head, no different in kind from any other computation or judgement, the sense of seriousness and urgency about suffering appears to fade away.
Recently, I've loosened up a bit and accepted a weaker grounding for morality: I still think that my own well-being matters, and I would be inconsistent if I didn't think the same about other collections of atoms that appear functionally similar to 'me' and that also claim, or appear, to care about their well-being. I can't answer why I should care about my own well-being, though; I just have to. Speaking of 'me': personal identity also looks very different (nonexistent?) under physicalism than in the everyday manifest image (6).
Another difficulty I confront is why, under this explanation, colors and sounds look and sound the way they do, or why they have any quality at all. Where do they come from if they're only labels my brain uses to distinguish inputs from the senses? Where does the yellowness of yellow come from? Maybe it's not a sensible question, but only the murmuring of a confused primate. Then again, where does anything come from? If we can learn to quiet our bafflement about consciousness and sensibly reduce it down to physics – fair enough, but where does physics come from? That mystery remains, and it will possibly always be out of reach, at least until we have advanced superintelligent philosophers. For now, understanding how a physical computational system represents the world and forms judgements and expectations from perception presents enough of a challenge. It seems a good starting point to explore, anyway (7).
I did not really put forth any particularly new ideas here; these are just some of my thoughts and repetitions of what I have read and heard others say, so I'm not sure if this post adds any value. My hope is that someone will at least find some of my references useful, and that the post can provide a starting point for discussion. Take into account that this is my first post here – I am very grateful to receive input and criticism! :-)
1. Check out Eliezer's hilarious teardown of philosophical zombies if you haven't already
2. http://reducing-suffering.org/hard-problem-consciousness/
3. [Video] TED talk by Dan Dennett: http://www.ted.com/talks/dan_dennett_cute_sexy_sweet_funny
4. http://ase.tufts.edu/cogstud/dennett/papers/explainingmagic.pdf
5. Reading "The Moral Landscape" by Sam Harris increased my confidence in moral realism. Whether moral realism is true or false can obviously have implications for approaches to the value learning problem in AI alignment, and for the factual accuracy of the orthogonality thesis
6. http://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf
7. For anyone interested in getting a grasp of this scientific challenge, I strongly recommend the book "A User's Guide to Thought and Meaning" by Ray Jackendoff.
Edit: made some minor changes and corrections. Edit 2: made additional changes in the first paragraph for increased readability.
64 comments
Comments sorted by top scores.
comment by Shmi (shminux) · 2015-06-24T01:21:22.463Z · LW(p) · GW(p)
Consider reading Scott Aaronson on the matter. Here is an excerpt:
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).
http://www.scottaaronson.com/blog/?p=1799
http://www.scottaaronson.com/blog/?p=1951
Aaronson asks the right questions, rather than settling for "dissolving" the way Eliezer tends to do.
↑ comment by eternal_neophyte · 2015-06-24T01:38:35.259Z · LW(p) · GW(p)
All "explanations" of consciousness reduce to assertions about magic. It's strange that the attitude that's most mystical on first examination – that even a rock is conscious (if in a sense orders below the consciousness of a gnat) – provokes the fewest quasi-magical auxiliary assertions to support it (e.g. attempts to ground consciousness in quantum phenomena: it seems like confusion about two topics naturally leads to an attempt to synthesize them).
So is Aaronson really asking "the right question"? Asking "which physical systems are conscious" is begging the question somewhat. Here's a thought: is someone who assigns greater probability to the notion that physical systems display consciousness only when in particular configurations (without knowing any possible reason for such a restriction) than to the notion that physical systems in general are conscious committing the conjunction fallacy?
↑ comment by Shmi (shminux) · 2015-06-24T03:36:13.081Z · LW(p) · GW(p)
He is asking a question whose answer would be testable, so no "assertions about magic". That's the best one can do.
↑ comment by eternal_neophyte · 2015-06-24T03:39:33.426Z · LW(p) · GW(p)
How does one test a machine for consciousness?
↑ comment by Shmi (shminux) · 2015-06-24T07:28:48.629Z · LW(p) · GW(p)
The same way Tononi's IIT is testable (and false): it predicts that a Vandermonde matrix multiplier is conscious – more so than you and I, if the matrix is large enough.
↑ comment by eternal_neophyte · 2015-06-24T12:15:43.432Z · LW(p) · GW(p)
Where can I find the experiments that tested the Vandermonde matrix multiplier for consciousness?
↑ comment by Richard_Kennaway · 2015-06-24T07:31:12.640Z · LW(p) · GW(p)
How does one test a machine for consciousness?
Nobody knows yet. That's what makes it the Pretty-Hard Problem. Our ignorance of how to test it should not be projected onto the world and mistaken for proof that it is untestable.
↑ comment by eternal_neophyte · 2015-06-24T12:22:27.197Z · LW(p) · GW(p)
It's not merely the "pretty hard problem". It's the "impossible to attack, by definition" problem. If you use "consciousness" as a synonym for something such as "intelligence" then it becomes tractable, at least in principle, but you will always have those who insist you've simply changed the subject (including me).
Our ignorance of how to test for something we insist exists is not proof of anything, but it is strong evidence of a conceptual muddle.
↑ comment by Richard_Kennaway · 2015-06-24T12:40:29.135Z · LW(p) · GW(p)
Our ignorance of how to test for something we insist exists is not proof of anything, but it is strong evidence of a conceptual muddle.
In the case of consciousness, we do not have to insist on anything. We can simply point to our internal experience, and say, "this is what we mean, when we talk about consciousness." That we have no explanation for how there could possibly be any such thing does not invalidate the experience, for even a faultily conceptualised experience is still an experience. No matter how the experience is reinterpreted, it obstinately remains an experience.
The conceptual muddle is in thinking that because we do not understand a thing, it therefore does not exist.
Descartes said all that in three words.
↑ comment by eternal_neophyte · 2015-06-24T12:52:40.040Z · LW(p) · GW(p)
The conceptual muddle is in thinking that because a thing exists, we must be capable of understanding it!
↑ comment by Unknowns · 2015-06-24T08:05:14.454Z · LW(p) · GW(p)
No, you are misinterpreting the conjunction fallacy. If someone assigns a greater probability to the claim that "humans are conscious and rocks are not" than to the claim that "humans are conscious", that would be the conjunction fallacy. But it would also be the conjunction fallacy to believe that "physical systems in general are conscious" is more likely than "humans are conscious."
The conjunction fallacy is simply not relevant to comparing "humans are conscious and rocks are not" with "both humans and rocks are conscious."
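The probability rule behind this exchange can be made concrete with a toy calculation. The credences below are invented purely for illustration; the only point is that a conjunction can never be more probable than either of its conjuncts:

```python
from fractions import Fraction

# Hypothetical credences over four exhaustive, mutually exclusive "worlds",
# keyed by (humans conscious?, rocks conscious?). Numbers are invented.
worlds = {
    (True,  True):  Fraction(1, 10),
    (True,  False): Fraction(7, 10),
    (False, True):  Fraction(1, 10),
    (False, False): Fraction(1, 10),
}

# P(humans conscious): sum over all worlds where humans are conscious.
p_humans = sum(p for (h, _), p in worlds.items() if h)

# P(humans conscious AND rocks not conscious): a strictly narrower event.
p_humans_and_not_rocks = sum(p for (h, r), p in worlds.items() if h and not r)

# P(A and B) <= P(A) always holds; believing otherwise is the fallacy.
assert p_humans_and_not_rocks <= p_humans
print(p_humans, p_humans_and_not_rocks)  # 4/5 7/10
```

Whatever numbers are plugged in, the inequality holds, which is why the fallacy concerns only conjunction-versus-conjunct comparisons and not, as the grandparent suggested, comparisons between two different conjunctions.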
↑ comment by eternal_neophyte · 2015-06-24T12:34:13.144Z · LW(p) · GW(p)
the claim that "humans are conscious and rocks are not" than to the claim that "humans are conscious", then this will be the conjunction fallacy
Indeed. Thank you.
it will also be the conjunction fallacy to believe that it is more likely that "physical systems in general are conscious" than that "humans are conscious."
Forget humans for a second. Just focus on the statement "only physical systems in a particular type of configuration will be conscious"; without knowing which type you mean. You cannot assign a higher probability to any particular system without already having some deciding criteria. It's when you fix your deciding criteria on the statement "x is human" that the conjunction fallacy comes back on you.
Of course it's not more likely for a human and a rock to be conscious than just for the human; you have to grant the latter just to avoid being obtuse. But who's arguing that being human is the deciding criterion for whether a system may be conscious? That defenestrates any hope of investigating the phenomenon in other systems, which does not do much to assist an empiricist framework for it.
↑ comment by Unknowns · 2015-06-24T12:45:19.725Z · LW(p) · GW(p)
"Only humans are conscious" should indeed have a lower prior probability than "Only physical systems specified in some way are conscious", since the latter must be true for the former to be true but not the other way around.
However, whether or not only humans are conscious is not the issue. Most people think that many or all animals are conscious, but they do not think that "all physical systems are conscious." And this is not because of the prior probabilities, but is a conclusion drawn from evidence. The reason people think this way is that they see that they themselves appear to do certain things because they are conscious, and other people and animals do similar things, so it is reasonable to suppose that the reason they do these things is that they are conscious as well. There is no corresponding reason to believe that rocks are conscious. It is not even clear what it would mean to say that they are, since it would take away the ordinary meaning of the word (e.g. you yourself are sometimes conscious and sometimes not, so it cannot be universal).
↑ comment by eternal_neophyte · 2015-06-24T13:04:59.289Z · LW(p) · GW(p)
"Only humans are conscious" should indeed have a lower prior probability than "Only physical systems specified in some way are conscious"
Yes, but not lower than "only physical systems specified in some way are conscious, and that specification criterion is not 'x is one of {human, dog, parakeet...}'". If your idea of a "particular configuration" is defined by a set of exemplars, then yes, "only physical systems of some particular configuration" follows. Given that, as you yourself say, whether humans are conscious or not is not the issue, we should consider "particular configurations" determined by some theoretical principle instead. And it seems to me my original argument concerning the conjunction fallacy does hold, given these caveats.
It is not even clear what it would mean to say that they are
That I must concede. But it's not clear what it means to say a human being is conscious either (were it clear, there would not be so many impenetrable tomes of philosophy on the topic). Of course it's even less clear in the case of rocks, but at least it admits the possibility of the rock's inherent, latent consciousness being amplified by rearranging it into some particular configuration of matter, as opposed to flashing into awareness all at once upon reconfiguration.
comment by Epictetus · 2015-06-24T03:59:35.675Z · LW(p) · GW(p)
We can talk about sweet and sound being “out there” in the world but in reality it is a useful fiction of sorts that we are “projecting” out into the world.
I hate to put on my Bishop Berkeley hat. Sweet and sound are things we can directly perceive. The very notion of something being "out there" independent of us is itself a mental model we use to explain our perceptions. We say that our sensation of sweetness is caused by a thing we call glucose. We can talk of glucose in terms of molecules, but as we can't actually see a molecule, we have to speak of it in terms of the effect it produces on a measurement apparatus.
The same holds for any scientific experiment. We come up with a theory that predicts that some phenomenon is to occur. To test it, we devise an apparatus and say that the phenomenon occurred if we observe the apparatus behave one way, and that it did not occur if we observe the apparatus to behave another way.
There's a bit of circular reasoning. We can come up with a scientific explanation of our perception of taste or color, but the very science we use depends upon the perceptions it tries to explain. The very notion of a world outside of ourselves is a theory used to explain certain regularities in our perceptions.
This is part of what makes consciousness a hard problem. Since consciousness is responsible for our perception of the world, it's very hard to take an outside view and define it in terms of other concepts.
↑ comment by AlexLundborg · 2015-06-24T11:49:31.691Z · LW(p) · GW(p)
The very notion of something being "out there" independent of us is itself a mental model we use to explain our perceptions.
Yes, I think that's right: the conviction that something exists in the world is also an (unconscious) judgement made by the mind that could be mistaken. However, when we want to explain why we have the perceptual data, and its regularities, it makes sense to attribute it to external causes, though this conviction could perhaps be mistaken too. The underpinnings of rational reasoning seem to bottom out in unconsciously formed convictions as well; basic arithmetic is obviously true, but can I trust these convictions? Justifying logic with logic is indeed circular. At some point we just have to accept them in order to function in the world. The signs that these convictions are often useful suggest to me that we have some access to objective reality. But for all I know, we could be Boltzmann brains floating around in high entropy with false convictions. Despite this, I think the assessment that objective reality exists, and that our access to and knowledge of it is limited but expandable, is a sensible working hypothesis.
↑ comment by eternal_neophyte · 2015-06-24T12:25:42.259Z · LW(p) · GW(p)
Solipsism is not really workable due to changes in perceptual data that you cannot predict. Even if you're hallucinating, the data factory is external to the conscious self. So assuming an "objective reality" (whether generated by physics or by DMT) is nothing to apologize for.
↑ comment by [deleted] · 2015-06-26T00:29:26.996Z · LW(p) · GW(p)
This is part of what makes consciousness a hard problem. Since consciousness is responsible for our perception of the world, it's very hard to take an outside view and define it in terms of other concepts.
Really? What's the quale of a number? I think we can investigate consciousness scientifically precisely because science is one of our very few investigation methods that doesn't amount to introspecting on intuitions and qualia. It keeps working, where previous introspective philosophy kept failing.
↑ comment by Epictetus · 2015-06-26T03:42:36.925Z · LW(p) · GW(p)
If you're arguing that the scientific method is our best known way of investigating consciousness, I don't think anyone disputes that. If we assume the existence of an external world (as common sense would dictate), we have a great deal of confidence in science. My concern is that it's hard to investigate consciousness without a good definition.
Any definition ultimately depends on undefined concepts. Let's take numbers. For example, "three" is a property shared by all sets that can be put in one-to-one correspondence with the set { {}, {{}}, { {{}}, {} } } (to use Peano's construction). A one-to-one correspondence between two sets A and B is simply a subset of the Cartesian product A x B that satisfies certain properties. So numbers can be thought of in terms of sets. But what is a set? Well, it's a collection of objects. We can then ask what collections are and what objects are, etc. At some point we have to decide upon primitive elements that remain undefined and build everything up around those. It all rests on intuitions in the end. We decide which intuitions are the most trustworthy and go from there.
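The set construction described here can be played out directly in code. A minimal sketch of the von Neumann-style encoding (which, as the reply notes, is what the quoted sets actually are): each numeral is the set of all smaller numerals, so the numeral for n has exactly n elements.

```python
def von_neumann(n):
    """Return the von Neumann numeral for n as a nested frozenset."""
    num = frozenset()           # 0 is the empty set
    for _ in range(n):
        num = num | {num}       # successor(k) = k ∪ {k}
    return num

three = von_neumann(3)
assert len(three) == 3          # |n| = n: the numeral for n has n elements
assert von_neumann(2) in three  # membership encodes order: 2 ∈ 3 means 2 < 3
```

The point of the surrounding comment survives the sketch: `frozenset` itself is taken as primitive here, just as set theory takes "set" and "membership" as undefined starting points.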
So, if we want to define "consciousness", we are going to have to found it upon some elementary concepts. The trouble is that, since our consciousness forms an important part of all our perceptions and even our very thoughts, it's difficult to get a good outside perspective and see how the edifice is built.
↑ comment by eternal_neophyte · 2015-06-27T01:07:53.880Z · LW(p) · GW(p)
{ {}, {{}}, { {{}}, {} } } (to use Peano's construction)
These are Von Neumann's ordinals!
Want to register agreement with your post, though it seems incongruous to say, on the one hand, that consciousness seems to escape definition and, on the other, that the scientific method is the best known tool for explaining it.
comment by Elo · 2015-06-24T23:58:27.845Z · LW(p) · GW(p)
Meta: You appear to have various negative responses; I am not completely clear as to why.
I found this idea useful to discover; while I can't really see its applications in modifying the way I access the real world, it certainly does raise some interesting ethical ideas.
I immediately thought of a person X that I know, in relation to the idea of ethics and consciousness. X (is real and) does not have the same ethics model as commonly found in people. They value themselves over and beyond other humans, both near and far (while this is not unlike many people, it is particularly abundant in their life). A classic label for this is "having a big ego" or "narcissism". If consciousness is reduced to "nothing but brain chemicals", the value of other entities is considerably lower than the value an entity might put on itself (because it can). This does seem like an application of the fundamental attribution error (and a kind of reverse typical-mind fallacy [i.e. "everyone else does not have a mind like mine"]), in that the value one places internally is higher than the value one places on other, external entities.
When you add the idea that "not much makes up consciousness", potentially unethical narcissistic actions turn into boring, single-entity self-maximisation actions.
An entity which lacks the capacity to reflect outwardly to the same degree that it reflects inwardly would have a narcissism problem (if it is a problem).
Should we value outside entities as much as we do ourselves? Why?
↑ comment by AlexLundborg · 2015-06-26T15:49:57.908Z · LW(p) · GW(p)
Should we value outside entities as much as we do ourselves? Why?
Nate Soares recently wrote about problems with using the word "should" that I think are relevant here, if we assume meta-ethical relativism (if there are no objective moral shoulds). I think his post "Caring about something larger than yourself" could be valuable in providing a personal answer to the question, if you accept meta-ethical relativism.
comment by eternal_neophyte · 2015-06-24T01:15:31.253Z · LW(p) · GW(p)
My problem with materialist reductionism is that it entails that explanations should suffice to provide descriptions. The taste of honey refers to something entirely descriptive, both without the power of furnishing an explanation of anything about the honey, and incapable of being grasped by means of anything that does explain something about the honey.
You could model the world that provokes you into experiencing sensations without any access to the sensations themselves (you wouldn't know what you were modelling however) using nothing but a collection of flavourless tokens related by explanatory mechanisms.
Neither you nor I will ever know what a glucose molecule is in the way that we know what an orange tastes like. If you found out tomorrow that molecular theory has been a grand, ingenious, astounding and improbable swindle, nothing about your beliefs concerning the taste of oranges will change in the least. I really cannot see how explanatory statements can bear descriptive burdens, which is ultimately what the problem of "qualia" is driving at.
There are even further problems I see. For example, if your language is restricted to explanatory statements, then your ability to communicate is effectively restricted to statements about the states of some machine (for the sake of completeness of argument, take this to be a Turing-complete computer) and changes in this state. This leaves no room for the possibility of a description; any statement concerning "qualia" could not augment the information we have about the state of the machine or the rules by which it changes. It follows that all "senses" available to the machine could at best be different schemes for drawing up a more compact declaration about machine states and programs; there could be no sense of "sight" as distinct from the sense of "sound" or "emotion". For a machine, the sensation of "sound" might as well be identical to the sensation of sight, but always accompanied by a unique, peculiar shade of blue. If you deny a human being the ability to separate out information according to the sense by which it arrives, he's hardly capable of communicating anything whatsoever.
Explanations are useful because they organize descriptive information, rather than vice versa.
Pan-psychism resolves all the philosophical hitches I've been able to come up with, and I would argue that it's not a "mysterious answer" but simply an assumption that our inner experience of consciousness is in fact a feature of the world, as mass-energy is assumed to be – without need of further explanation, and not admitting of any such possibility. Now, I don't actually enjoy insisting that no explanation is possible of what is a very confusing topic, but in this case it's not the topic that needs to be explained but the process by which we confused ourselves over it.
↑ comment by Richard_Kennaway · 2015-06-24T07:32:42.144Z · LW(p) · GW(p)
Pan-psychism resolves all philosophical hitches I've been able to come up with
Does it answer such questions as "how does consciousness work?" and "how can we make one (by other than the traditional method)?"?
↑ comment by eternal_neophyte · 2015-06-24T12:11:31.119Z · LW(p) · GW(p)
I don't believe it does, it suggests no particular theory of how to build one machine that is more conscious than the other. But that is, I believe, precisely its strength. If consciousness is a basic feature of the universe, your theory of consciousness ought not to provide you with any such information, otherwise your theory would have some internal components and consciousness would not be the terminal level of analysis. Would you expect string theory to tell you how to make strings?
↑ comment by Richard_Kennaway · 2015-06-24T12:33:34.986Z · LW(p) · GW(p)
Would you expect string theory to tell you how to make strings?
I would expect it at least to make testable predictions. The difficulty of doing so is an argument made by some physicists against its value, notably Lee Smolin. But at least there is something there, ideas with mathematical structure and parts, that one can study to attempt to get observable consequences from. I don't see even that much with panpsychism. We don't know what consciousness is, only that we experience it ourselves and recognise it from outward signs in people and to various extents in other animals. What can I do with the claim that rocks are conscious? Or trees, or bacteria?
↑ comment by eternal_neophyte · 2015-06-24T12:47:16.118Z · LW(p) · GW(p)
Strictly speaking, what can you do with the claim that "apples are red"? Can you test apples for their inherent redness in a way that doesn't rely on your own petulant insistence that you can magically intuit the redness of apples? Of course not: whatever chemicals you show to exist in the skin of an apple to prove its redness will rely on an association between that chemical and that colour, which you will defend on the grounds of being able to see that the chemical produces redness in certain circumstances. Your ability to use the concept of redness to distinguish red apples from yellow ones similarly relies on your having direct, unmediated knowledge of redness. Conceptual analysis has to terminate somewhere, and it might as well (and arguably ought to) terminate with whatever ideas we find necessary but impossible to investigate.
What can I do with the claim that rocks are conscious? Or trees, or bacteria?
What can you do with the claim that people are conscious?
↑ comment by Richard_Kennaway · 2015-06-24T13:01:04.031Z · LW(p) · GW(p)
Strictly speaking, what can you do with the claim that "apples are red"?
I can tell this to someone who is unfamiliar with them, and they will be able to predict what they will look like. (Of course, we are both glossing over the irrelevant detail that apples come in a variety of colours.) They will also be able to predict something of their objectively measurable reflectance properties.
But this is well into the land of Proves-Too-Much. What can I do with the claim that water is made of hydrogen and oxygen, that doesn't rely on "your own petulent insistence to magically intuit" (we're into the land of Straw Men also) its constitution? No, whatever (etc.etc., paralleling your own paragraph).
What can you do with the claim that people are conscious?
I can describe my sensation of my own presence, and say that this is what I am talking about. If the other person experiences something that my words seem to describe, then they will recognise what I am talking about.
Can you do anything similar with the claim that rocks are conscious? You can say, whatever conscious experience is, that you and I recognise, rocks have it as well. But that doesn't help me recognise it in a rock.
If rocks are conscious, so presumably are corpses. How does the consciousness of a corpse relate to the consciousness that animated it in life?
Replies from: eternal_neophyte↑ comment by eternal_neophyte · 2015-06-24T13:14:44.028Z · LW(p) · GW(p)
I can tell this to someone who is unfamiliar with them, and they will be able to predict what they will look like.
There's an ancient philosophical chestnut: "is my red your red"? So this is in fact not clear at all.
They will also be able to predict something of their objectively measurable reflectance properties.
Same argument as for the chemicals applies. You won't be able to make any useful prediction that doesn't ultimately rely on your ability to simply perceive red.
What can I do with the claim that water is made of hydrogen and oxygen, that doesn't rely on "your own petulant insistence to magically intuit" (we're into the land of Straw Men also) its constitution?
Derive its chemical properties. There is some intuition involved in your knowledge of mathematics, but that's not the same as relying on an innate intuition as to its constitution. There was some point in time when the constitution of water was unknown, and anyone with enough knowledge of chemistry would have been able to make valuable predictions about the behaviour of water under various experiments once they learned how it was constructed, predictions that did not rely on their ability to intuit the H2O-ness of water.
But that doesn't help me recognise it in a rock.
It doesn't help you recognise it in somebody with total bodily and facial paralysis either. Does it mean that it's nonsensical to ascribe consciousness to such persons?
How does the consciousness of a corpse relate to the consciousness that animated it in life?
By degree of complexity and organization, if nothing else.
Replies from: Richard_Kennaway, Viliam↑ comment by Richard_Kennaway · 2015-06-24T15:55:44.365Z · LW(p) · GW(p)
I can tell this to someone who is unfamiliar with them, and they will be able to predict what they will look like.
There's an ancient philosophical chestnut: "is my red your red"? So this is in fact not clear at all.
I don't know what point you're making now. Of course I see my red and from my description of the apple he will know to expect his red. It makes no difference to the present topic whether his red is the same as mine or not. It will make a difference if one of us is colourblind, but colourblindness is objectively measurable.
How does the consciousness of a corpse relate to the consciousness that animated it in life?
By degree of complexity and organization, if nothing else.
How do we measure the complexity and organization of the consciousness of a corpse at above zero?
Replies from: eternal_neophyte↑ comment by eternal_neophyte · 2015-06-24T16:56:15.971Z · LW(p) · GW(p)
I don't know what point you're making now.
That there are meaningful statements that cannot be empirically grounded - and the fact that you cannot communicate your own specific experience of redness to someone else shows that it's not empirically grounded: nobody would (or at least, I've never found anybody who seemed to) argue that the concepts of molecular theory or other statements about the material world are similarly ineffable. Insisting that any characterization of the nature of conscious experience in general is superfluous if it yields no predictive power (even if it resolves conceptual issues) is to insist that, categorically, statements that don't make any predictions about the material world are vacuous. The experience of colour as such serves as one particular counterexample.
How do we measure the complexity and organization of the consciousness of a corpse at above zero?
My entire point is that the idea that you could measure consciousness under any circumstances whatsoever, of a rock, a tree, a person or a corpse, follows from an incorrect application of empirical epistemic standards to conceptual problems.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-06-25T09:02:59.639Z · LW(p) · GW(p)
That there are meaningful statements that cannot be empirically grounded - and the fact that you cannot communicate your own specific experience of redness to someone else shows that it's not empirically grounded
A blind man once said that although he had never experienced red, he imagined that it was something like the sound of a trumpet, which I think is pretty good. And fictionally:
Menahem sighed. 'How can one explain colours to a blind man?'
'One says', snapped Rek, 'that red is like silk, blue is like cool water, and yellow is like sunshine on the face.'
— David Gemmell "Legend"
My entire point is that the idea that you could measure consciousness under any circumstances whatsoever, of a rock, a tree, a person or a corpse, follows from an incorrect application of empirical epistemic standards to conceptual problems.
Medics routinely assess the state of consciousness of patients. People routinely, automatically assess the states of the people around them: whether they are asleep or awake, whether they are paying attention or daydreaming.
To me, our experience that we have experience, and our simultaneous inability to explain it, amount to our ignorance about the matter, not a proof that there is any conceptual error in seeking an explanation.
ETA: BTW, I'm not the one who's giving you a -1 on every post in this thread, and I wouldn't do that even if I were not one of the participants.
Replies from: eternal_neophyte↑ comment by eternal_neophyte · 2015-06-25T10:34:16.503Z · LW(p) · GW(p)
Medics routinely assess the state of consciousness of patients. People routinely, automatically assess the states of the people around them: whether they are asleep or awake, whether they are paying attention or daydreaming.
What they're testing is the patient's responsiveness - if the internal, private experience of consciousness were open to measurement, we could simply knock a tree with a rubber hammer or what have you instead of analysing the problem. Any metric of consciousness that you could invent, applicable to humans, would entail some assumptions about how consciousness manifests in humans, or at best in animals. You'd be excluding the possibility of measuring it in non-living matter a priori. In effect you'd be defining consciousness to mean whatever is measurable: responsiveness, intelligence, capacity for memory, etc. This is why it's a conceptual problem - if consciousness is conceptually distinct from those measurable qualities, then how could you justify the use of any particular metric?
our experience that we have experience, and our simultaneous inability to explain it, amount to our ignorance about the matter, not a proof that there is any conceptual error in seeking an explanation
It's not a matter of proof; I find panpsychism appealing on abductive grounds - if it were true then it wouldn't be surprising that human beings are capable of consciousness.
Re. downvotes: I march toward the sound of gunfire, so I'll probably be in negative reputation before too long.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-06-25T12:52:07.836Z · LW(p) · GW(p)
Any metric of consciousness that you could invent, applicable to humans, would entail some assumptions about how consciousness manifests in humans or at best in animals.
The gross signs that doctors measure are just a pragmatic method that does the job the doctors are interested in (saving lives), not a definition of consciousness. The only definition we have of consciousness is the extensional one of pointing to our own experiences. Everything we observe about how this experience is modulated by physical circumstances suggests that it is specifically a physical process of the brain. We may ascribe it also to other animals, but we observe nothing to suggest that it is something a rock could have.
I find panpsychism appealing on abductive grounds - if it were true then it wouldn't be surprising that human beings are capable of consciousness.
"X implies Y, therefore Y implies X" does not work as an argument, especially when we already know Y ("humans are conscious") to be true. Any number of things imply Y, including, for example "only humans are conscious", "all terrestrial animals with a nervous system are conscious", or "any physically faithful simulation of a conscious entity is conscious." I don't see any reason to favour "everything is conscious" over any of these.
Replies from: eternal_neophyte↑ comment by eternal_neophyte · 2015-06-25T16:31:43.388Z · LW(p) · GW(p)
The gross signs that doctors measure are just a pragmatic method that does the job the doctors are interested in (saving lives), not a definition of consciousness.
If you already acknowledge this then why did you bring medical tests up to begin with?
Everything we observe about how this experience is modulated by physical circumstances suggests that it is specifically a physical process of the brain
Some physical process in the brain may just as likely simply be involved in organizing and amplifying consciousness as be totally responsible for it.
we observe nothing to suggest that it is something a rock could have
What specifically do we observe in other people to suggest that they could have it? Isn't the entire point of the "p-zombie" concept to show that nothing we observe about other people could possibly evince consciousness?
"X implies Y, therefore Y implies X" does not work as an argument, especially when we already know Y ("humans are conscious") to be true
The whole process of model-building is to find X's which imply Y's where Y is already known. That's pretty much what science is, right? Nothing about the world is known purely through deduction or induction. It "does not work as an argument" insofar as it's not a species of deductive (or, narrowly speaking, inductive) activity; but that's not to say it's epistemically inert.
Any number of things imply Y, including, for example "only humans are conscious", "all terrestrial animals with a nervous system are conscious", or "any physically faithful simulation of a conscious entity is conscious." I don't see any reason to favour "everything is conscious" over any of these.
Because "everything is conscious" is vastly less arbitrary than any of the other choices you've identified.
↑ comment by Viliam · 2015-06-30T07:53:39.159Z · LW(p) · GW(p)
"is my red your red"?
We need to look at the brain activity, whether seeing "red" activates the same parts of the brain for different people.
Take one person, show them a red screen, a green screen, a blue screen. Record the brain activity. Do the same thing with another person. Based on the first person's data, looking at the brain activity of the second person, could you tell what color they see?
Thoughts and feelings are not immaterial, they can be detected, even if we still have a problem decoding them. Even if we don't know how exactly a given pattern of brain data creates the feeling of "red", these things could be simple enough so that we could compare patterns from different people, and see whether they are similar.
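Viliam's proposal is, in effect, cross-subject neural decoding: learn color-response patterns from one person, then test whether they predict what a second person is seeing. A minimal sketch of the idea, using simulated response patterns rather than real brain data (the feature count, noise level, and nearest-centroid classifier here are all illustrative assumptions, not a description of any actual study):

```python
import numpy as np

rng = np.random.default_rng(0)

colors = ["red", "green", "blue"]
n_features = 20   # stand-in for voxels/electrodes
n_trials = 30     # trials per color per subject

# Assume a shared underlying response structure across subjects,
# with subject- and trial-specific noise on top of it.
prototypes = rng.normal(size=(len(colors), n_features))

def simulate_subject(noise=0.5):
    """Return per-color trial patterns, shape (colors, trials, features)."""
    return prototypes[:, None, :] + rng.normal(
        scale=noise, size=(len(colors), n_trials, n_features))

subject_a = simulate_subject()
subject_b = simulate_subject()

# "Train" on subject A: one centroid pattern per color.
centroids = subject_a.mean(axis=1)

# Test on subject B: does the nearest centroid recover the color shown?
correct = 0
for i, trials in enumerate(subject_b):
    for trial in trials:
        dists = np.linalg.norm(centroids - trial, axis=1)
        correct += int(np.argmin(dists) == i)

accuracy = correct / (len(colors) * n_trials)
print(f"cross-subject decoding accuracy: {accuracy:.2f}")
```

If accuracy stays far above chance (1/3 here), the two subjects' patterns are "similar" in the sense Viliam describes. In real experiments the hard part is aligning brains that don't share a coordinate system, which this toy simulation assumes away.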
Replies from: eternal_neophyte↑ comment by eternal_neophyte · 2015-06-30T12:26:47.200Z · LW(p) · GW(p)
Such an experimental procedure depends on materialism; and materialism itself is the topic under scrutiny. Which is to say its results would under-determine the materialist/psychist dichotomy.
↑ comment by IlyaShpitser · 2015-06-24T08:27:40.529Z · LW(p) · GW(p)
It can in principle, in the same way that atomic theory eventually told us how to transmute lead into gold. It's the right approach -- decompose into simple parts and understand their laws.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-06-24T09:37:56.166Z · LW(p) · GW(p)
It's a long stretch from Epicurean atoms to nuclear physics, too long for me to regard the former as an explanation of the latter. Atomic theory wasn't of any use until Bernoulli used the idea to derive properties of gases, and Dalton to explain stoichiometric ratios. Pan-psychism consists of nothing more than hitching the word "consciousness" to the word "matter", and offers no direction for further investigation. Principles that suggest no practice are vanity.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-06-24T16:31:55.038Z · LW(p) · GW(p)
It's a long stretch from Epicurean atoms to nuclear physics, too long for me to regard the former as an explanation of the latter.
Ok, but if you have a choice of theory while being an ancient Greek, the rightest you could have been was sticking with the atomic theory they had. Maybe you are an ancient Greek now.
Panpsychism offers a way forward in principle, by reverse-engineering self-report. Folks like Dennett aren't even addressing the problem.
Replies from: Richard_Kennaway, eternal_neophyte↑ comment by Richard_Kennaway · 2015-06-24T20:46:05.873Z · LW(p) · GW(p)
Ok, but if you have a choice of theory while being an ancient Greek, the rightest you could have been was sticking with the atomic theory they had. Maybe you are an ancient Greek now.
What could they do, what did they do, with their atomic theory? Conceive of the world running without gods, and that's about it, which may be significant in the history of religion, but is no more than a footnote to the history of atomic theory.
What can we do with panpsychism?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-06-24T21:33:06.072Z · LW(p) · GW(p)
What can we do with panpsychism?
In principle, try to construct a mapping between experience self-report and arrangements of "atoms of experience" corresponding to it.
What could they do, what did they do, with their atomic theory?
Even if they ended up doing nothing, they were still better off sticking with the atomic theory, than with an alternative theory.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-06-25T08:41:01.892Z · LW(p) · GW(p)
What can we do with panpsychism?
In principle, try to construct a mapping between experience self-report and arrangements of "atoms of experience" corresponding to it.
Rocks can't talk. Experience self-report only helps for those systems that are capable of reporting their experience.
Panpsychism might be an interesting idea to think about, but it is a question, not an answer. Does everything have a soul? (I use the shorter word for convenience.) If I split a rock in two, do I split a soul in two? If not, what happens when I separate the pieces? Or grind them into dust? Are the sounds of a blacksmith's work the screams of tortured metal in agony? Do the trees hear us when we talk to them? Do we murder souls when we cut them down? Does the Earth have a single soul, or are we talking about some sort of continuum of soul-stuff, parallel to the continuum of rock, that is particularly concentrated in brains? Is this soul-stuff a substance separate from matter, or a property of the arrangement of matter? An arrangement that doesn't have to be the sort we see (brains) in the definitive examples (us), but almost any arrangement at all will have a non-zero amount of soul-nature?
Plenty of fantasy story-seeds there, but I see nothing more.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-06-25T12:47:00.781Z · LW(p) · GW(p)
Experience self-report only helps for those systems that are capable of reporting their experience.
Yup. Still useful (just very very hard).
Plenty of fantasy story-seeds there, but I see nothing more.
Not super interested in arguments from incredulity.
Note that I am not aware of any competitor in the marketplace of ideas that offers any way forward at all.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-06-25T13:03:58.075Z · LW(p) · GW(p)
Not super interested in arguments from incredulity.
That was an argument from the current absence of any way of answering these questions. It is not that the hypothesis is absurd, but that it is useless. As I said before, panpsychism merely utters the word "conscious" when pointing to everything.
Note that I am not aware of any competitor in the market place of ideas that offers any way forward at all.
You can do experiments on people to investigate how consciousness is affected by various interventions. Drugs, TMS, brain imaging, etc. There's lots of this.
Here's a rock. It's on my bookshelves. How does panpsychism suggest I investigate the soul that it claims it has?
Replies from: eternal_neophyte, IlyaShpitser↑ comment by eternal_neophyte · 2015-06-25T22:28:06.566Z · LW(p) · GW(p)
It is not that the hypothesis is absurd, but that it is useless
All philosophical concepts are in a sense useless except insofar as they can limit what you attempt to do, rather than open new avenues for investigation. Panpsychism limits the possibility of investigating the ultimate nature of mind in the same sense that materialism limits the possibility of investigating the ultimate nature of matter - given that everything is made of mass-energy, you could never disconfirm "X is composed of mass-energy". Materialism is quite useless, in the same way as panpsychism.
↑ comment by IlyaShpitser · 2015-06-27T13:05:24.199Z · LW(p) · GW(p)
You keep saying panpsychism is useless, and I keep saying it's not. Do you understand why I am saying that? I am not proposing we ask a rock. I am proposing we ask a human, and try to reverse engineer from a human's self report. That is very very hard, but not in principle impossible.
How does panpsychism suggest I investigate the soul that it claims it has?
Panpsychism of the kind I am talking about does not make claims about souls, it makes claims about "consciousness as a primitive in physics." Adding primitives when forced to has a long history in science/math.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-06-28T09:38:30.190Z · LW(p) · GW(p)
Panpsychism of the kind I am talking about does not make claims about souls, it makes claims about "consciousness as a primitive in physics." Adding primitives when forced to has a long history in science/math.
I was just using "soul" to avoid typing out "consciousness" all the time. But perhaps we are talking at cross purposes? My understanding of the word "panpsychism" is the doctrine that everything ("pan-") has whatever-you-want-to-call-it ("-psych-"), and from the etymology, dictionaries, philosophical encyclopedias, and the internet generally, that is how the word is universally used and understood.
"Consciousness as a primitive" is independent of that doctrine, and needs a different name. "Psychism"? (Materialists will call it "magic", but that's a statement of disagreement with the doctrine, rather than a name for it.)
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-06-28T09:52:52.209Z · LW(p) · GW(p)
I was just going by my understanding of what Chalmers calls panpsychism. Did I misunderstand Chalmers?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-06-28T10:18:22.218Z · LW(p) · GW(p)
Chalmers here begins, "Panpsychism, taken literally, is the doctrine that everything has a mind", which agrees with the general use. Then he redefines the word to mean "the thesis that some fundamental physical entities have mental states".
His "taken literally" qualification implies that the universal quantification of the "pan-" prefix is usually limited in some unspecified way, making his redefinition seem less of a break, but I do not think that the SEP article on panpsychism supports a limitation as drastic as the one he is making. His "some" could accommodate consciousness being present only in humans; no historical use of "panpsychism" in the SEP article can.
So you did not misunderstand Chalmers, but Chalmers would better have picked a different word. I think "psychism" fits the bill.
If some entities have a soul and others do not, there remains the same question as for the materialistic doctrine: why these and not those, and how does it work? We then get "emergent psychism", where what emerges from unensouled matter is not the right configuration to be a soul, but the right configuration to have a soul. And if answers to these questions are found, we end up with materialist psychism, with an expanded set of materials. At which point materialist philosophers can point out that this was materialism all along.
↑ comment by eternal_neophyte · 2015-06-24T17:14:01.272Z · LW(p) · GW(p)
Panpsychism offers a way forward in principle, by reverse-engineering self-report.
This is new to me, but googling "panpsychism reverse engineering", "panpsychism reverse-engineering self-report", "panpsychism self-report" doesn't bring anything that seems relevant. Has this been discussed anywhere?
comment by gurugeorge · 2015-06-27T13:37:39.307Z · LW(p) · GW(p)
Sweetness isn't an intrinsic property of the thing, but it is a relational property of the thing - i.e. the thing's sweetness comes into existence when we (with our particular characteristics) interact with it. And objectively so.
It's not right to mix up "intrinsic" or "inherent" with objective. They're different things. A property doesn't have to be intrinsic in order to be objective.
So sweetness isn't a property of the mental model either.
It's an objective quality (of a thing) that arises only in its interaction with us. An analogy would be how we're parents to our children, colleagues to our co-workers, lovers to our lovers. We are not parents to our lovers, or intrinsically or inherently parents, but that doesn't mean our parenthood towards our children is solely a property of our children's perception, or that we're not really parents because we're not parents to our lovers.
And I think Dennett would say something like this too; he's very much against "qualia" (at least to a large degree, he does allow some use of the concept, just not the full-on traditional use).
When we imagine, visualize or dream things, it's like the activation of our half of the interaction on its own. The other half that would normally make up a veridical perception isn't there, just our half.
Replies from: eternal_neophyte↑ comment by eternal_neophyte · 2015-06-27T15:32:30.414Z · LW(p) · GW(p)
If the brain were rewired to find lemons sweet, would sweetness then be an objective quality of lemons?
Replies from: None, gurugeorge, ChristianKl↑ comment by [deleted] · 2015-06-27T16:14:47.122Z · LW(p) · GW(p)
No need to rewire the brain, just eat some Synsepalum dulcificum and lemons will be sweet, for a while.
↑ comment by gurugeorge · 2015-06-30T12:05:47.107Z · LW(p) · GW(p)
Yes, for that person. Remember, we're not talking about an intrinsic or inherent quality, but an objective quality. Test it however many times you like, the lemon will be sweet to that person - i.e. it's an objective quality of the lemon for that person.
Or to put it another way, the lemon is consistently "giving off" the same set of causal effects, which produce "tart" in one person and "sweet" in another.
The initial oddness arises precisely because we think "sweetness" must itself be an intrinsic quality of something, because there's several hundred years of bad philosophy that tells us there are qualia, which are intrinsically private, intrinsically subjective, etc.
Replies from: eternal_neophyte↑ comment by eternal_neophyte · 2015-06-30T12:31:09.876Z · LW(p) · GW(p)
So whenever you could wire a brain to undergo some particular set of sensory experiences given stimulation of a particular type by a particular object, the sensory experiences are then an objective quality of the object. Surely it follows that all qualities are objective qualities? It's a category of quality that doesn't tell us anything.
Replies from: gurugeorge↑ comment by gurugeorge · 2015-07-06T13:29:56.673Z · LW(p) · GW(p)
All purely sensory qualities of an object are objective, yes. Whatever sensory experience you have of an object is just precisely how that object objectively interacts with your sensory system. The perturbation that your being (your physical substance) undergoes upon interaction with that object via the causal sensory channels is precisely the perturbation caused by that object on your physical system, with the particular configuration ("wiring") it has.
There are still subjective perceived qualities of objects though - e.g. illusory ones (like the Müller-Lyer illusion, but not "illusions" like the famous "bent" stick in water, which is a sensory experience), pleasant, inspiring, etc.
I'm calling "sensory" here the experience (perturbation of one's being) itself, and "perception" the interpretation of it (i.e. the hypothetical projection of a cause of the perturbation outside the perturbation itself). Of course in doing this I'm "tidying up" what is in ordinary language often mixed (e.g. what I'm calling sensory experiences are sometimes called "perceptions", and vice-versa). At least, there are these two quite distinct things or processes going on in reality. There may also be caveats about at what level the brain leaves off sensorily receiving and starts actively interpreting, but I'm not 100% sure about that.
↑ comment by ChristianKl · 2015-06-27T16:27:17.453Z · LW(p) · GW(p)
If the brain were rewired to find lemons sweet, would sweetness then be an objective quality of lemons?
It would be an objective quality of your relation to the lemon.
Replies from: eternal_neophyte↑ comment by eternal_neophyte · 2015-06-27T16:29:34.539Z · LW(p) · GW(p)
What is a subjective quality if not a "quality of [someone's] relation to [something]"?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-06-27T16:55:10.989Z · LW(p) · GW(p)
I can run an objective experiment where I tell people under hypnosis that the lemon tastes sweet. Given good hypnosis subjects, the result will be that a bunch of the people do feel the qualia of sweetness in relation to the lemon.
Replies from: eternal_neophyte↑ comment by eternal_neophyte · 2015-06-27T17:10:20.242Z · LW(p) · GW(p)
Well OK. I'm not sure if what I think we're talking about is what you think we're talking about. I'm wondering if there's any difference between a subjective quality of a thing and an "objective quality of a relation" between a subject and a thing. Is this what your hypothetical is meant to be addressing?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-06-27T17:21:27.712Z · LW(p) · GW(p)
It's subjective if it's the relationship that you have to something. It's objective if you talk about the relationship someone else has with something.
Replies from: eternal_neophyte↑ comment by eternal_neophyte · 2015-06-27T17:33:36.824Z · LW(p) · GW(p)
So you could say that its being a subjective relation to you is not an objective relation between you and the object? Or is it?