Book review: Rethinking Consciousness
post by Steven Byrnes (steve2152) · 2020-01-10T20:41:27.352Z · LW · GW · 56 comments
(Update September 2024: My thinking has developed a bit since 2020. See [Intuitive self-models] 2. Conscious Awareness [LW · GW] for my own Rethinking-Consciousness-adjacent theorizing.)
Princeton neuroscientist Michael Graziano wrote the book Rethinking Consciousness (2019) to explain his "Attention Schema" theory of consciousness (endorsed by Dan Dennett![1]). If you don't want to read the whole book, you can get the short version in this 2015 article.
I'm particularly interested in this topic because, if we build AGIs, we ought to figure out whether they are conscious, and/or whether that question matters morally. (As if we didn't already have our hands full thinking about the human impacts of AGI!) This book is nice and concrete and computational, and I think it at least offers a start to answering the first part of that question.
What is attention schema theory?
There are two ingredients.
For the first ingredient, you should read Kaj Sotala's excellent review of Consciousness and the Brain by Stan Dehaene [? · GW] (or read the actual book!). To summarize, there is a process in the brain whereby certain information gets promoted up to a "Global Neuronal Workspace" (GNW), a special richly-connected high-level subnetwork of the brain. Only information in the GNW can be remembered and described—i.e., this is the information of which we are "aware". For example, if something flashes in our field of view too quickly for us to "notice", it doesn't enter the GNW. It does still get processed to some extent, and can cause local brain activity that persists for a couple of seconds, but it will not cascade into a large, widespread signal with long-lasting effects.
Every second of every day, information is getting promoted to the GNW, and the GNW is processing it and pushing information into other parts of the brain. This process does not constitute all of cognition, but it's an important part. Graziano calls this process attention.[2]
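To make that concrete, here is a minimal toy sketch in Python of the general flavor. It is entirely my own illustration, with made-up names, numbers, and a made-up salience competition, not anything from Dehaene or Graziano: lots of signals get processed locally, but only the winner of the competition "ignites" the workspace and gets broadcast to the rest of the system, where it can leave lasting traces.

```python
# Toy "global workspace" (my own cartoon, not a model from the book): many
# local signals compete; only the most salient one, if it clears a threshold,
# is promoted to the workspace and broadcast, where it can leave a memory trace.

from dataclasses import dataclass

@dataclass
class Signal:
    content: str
    salience: float          # how strongly this signal competes for the workspace

IGNITION_THRESHOLD = 0.5     # below this, nothing reaches the workspace

def global_workspace_step(local_signals, long_term_memory):
    """One 'moment': pick the winning signal, broadcast it, store a trace."""
    winner = max(local_signals, key=lambda s: s.salience, default=None)
    if winner is None or winner.salience < IGNITION_THRESHOLD:
        # Subliminal: processed locally and briefly, but never "noticed".
        return None
    # Broadcast: the winner becomes available to memory, speech, planning, etc.
    long_term_memory.append(winner.content)
    return winner

memory = []
signals = [Signal("apple in view", 0.9), Signal("faint flicker", 0.2)]
attended = global_workspace_step(signals, memory)
print(attended.content, memory)   # the apple "enters awareness"; the flicker does not
```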
The second ingredient is that the brain likes to build predictive models of things—Graziano calls them "schemas" or "internal models". If you know what an apple is, your brain has an "apple model" that describes apples' properties, behavior, affordances, etc. Likewise, we all have a "body schema", a deeply-rooted model that tracks where our body is, what it's doing, and how it works. If you have a phantom limb, that means your body schema has a limb where your actual body does not. As the phantom limb example illustrates, these schemas are deeply rooted, and not particularly subject to deliberate control.
Now put the two together, and you get an "attention schema", an internal model of attention (i.e., of the activity of the GNW). The attention schema is supposedly key to the mystery of consciousness.
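Continuing the toy sketch above (and again, this is my own illustrative guess at how to cash the idea out in code, not the book's model): an attention schema would be a second, much cruder model that tracks and predicts what the workspace is doing, while representing none of its machinery.

```python
# Toy "attention schema" (my own sketch, not from the book): a simplified
# predictive model of the workspace. It tracks only *that* something currently
# has attention and roughly how firmly; it knows nothing about salience
# competitions, thresholds, or neurons. That ignorance is deliberate: it models
# the workspace's observable behavior, not its guts.

class AttentionSchema:
    def __init__(self):
        self.current_focus = None     # what the schema thinks attention "holds"
        self.predicted_grip = 0.0     # how firmly it thinks attention holds it

    def observe(self, attended_content):
        """Update from the workspace's observable behavior only."""
        if attended_content is None:
            self.current_focus = None
            self.predicted_grip = 0.0
        else:
            self.current_focus = attended_content
            # Crude running estimate; no access to the real salience numbers.
            self.predicted_grip = 0.5 * self.predicted_grip + 0.5

    def report(self):
        """The kind of first-person statement this model would generate."""
        if self.current_focus is None:
            return "I am not aware of anything in particular."
        return f"I am aware of: {self.current_focus}"

schema = AttentionSchema()
schema.observe("apple in view")   # e.g. fed by the workspace sketch above
print(schema.report())            # -> "I am aware of: apple in view"
```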
Why does the brain build an attention schema? Graziano offers two reasons, and I'll add a third.
- First, it's important that we control attention (it being central to cognition), and control theory says it's impossible to properly control something unless you're modeling it. Graziano offers an example of trying to ignore a distraction. Experiments show that, other things equal, this is easier if we are aware of the distraction. That's counter-intuitive, and supports his claim. (A toy numerical sketch of this "control requires a model" point follows the list.)
- Second, the attention schema can also be used to model other people's attention, which is helpful for interacting with them, understanding them, deceiving them, etc.
- Third (I would add), the brain is a thing that by default builds internal models of everything it encounters. The workings of the GNW obviously have a giant impact on the signals going everywhere in the brain, so of course the brain is going to try to build a predictive model of it! I mention this partly because of my blank-slate-ish sympathies [LW · GW], but I think it's an important possibility to keep in mind, because it would mean that even if we desperately want to build a human-cognition-like AGI without an attention schema (if we want AGIs to be unconscious for ethical reasons; more on which below), it might be essentially impossible.
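Here is the toy sketch promised in the first bullet, illustrating the control-theory point with made-up numbers (my own toy, not from the book): try to hold a sluggish variable at a setpoint when your corrections only take effect after a delay. A controller that models its own in-flight corrections settles down; one that reacts only to the currently observed error overshoots and oscillates.

```python
# "You can't properly control what you don't model": a toy delayed-feedback
# simulation (entirely my own illustration, with arbitrary numbers).

DELAY = 3          # steps before a correction actually takes effect
SETPOINT = 1.0

def run(use_model, steps=40):
    x = 0.0                      # the variable we're trying to control
    pipeline = [0.0] * DELAY     # corrections "in flight", not yet applied
    total_sq_error = 0.0
    for _ in range(steps):
        x += pipeline.pop(0)                     # oldest correction lands now
        if use_model:
            # Model-based: account for corrections already in the pipeline.
            anticipated = x + sum(pipeline)
            correction = 0.5 * (SETPOINT - anticipated)
        else:
            # Model-free: react only to the currently observed error.
            correction = 0.5 * (SETPOINT - x)
        pipeline.append(correction)
        total_sq_error += (SETPOINT - x) ** 2
    return total_sq_error

print("cumulative error without a model:", run(False))   # overshoots, oscillates
print("cumulative error with a model:   ", run(True))    # settles near the setpoint
```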
To be clear, if GNW is "consciousness" (as Dehaene describes it), then the attention schema is "how we think about consciousness". So this seems to be at the wrong level! This is a book about consciousness; shouldn't we be talking directly about the nature of consciousness itself?? I was confused about this for a while. But it turns out, he wants to be one level up! He thinks that's where the answers are, in the "meta-problem of consciousness". See below.
When people talk about consciousness, they're introspecting about their attention schema
Let's go through some examples.
Naive description: I have a consciousness, and I can be aware of things, like right now I'm aware of this apple.
...and corresponding sophisticated description: One of my internal models is an attention schema. According to that schema, attention has a particular behavior wherein attention kinda "takes possession" of a different internal model, e.g. a model of a particular apple. Objectively, we would say that this happens when the apple model becomes active in the GNW.
Naive description: My consciousness is not a physical thing with color, shape, texture. So it's sorta metaphysical, although I guess it's roughly located in my head.
...and corresponding sophisticated description: Just as my internal model of "multiplication" has no property of "saltiness", by the same token, my attention schema describes attention as having no color, shape, or texture.
Naive description: I have special access to my own consciousness. I alone can truly experience my experiences.
...and corresponding sophisticated description: The real GNW does not directly interact with other people; it only interacts with the world by affecting my own actions. Reflecting that fact, my attention schema describes attention as a thing to which I have privileged access.
Naive description: An intimate part of my consciousness is its tie to long-term memory. If you show me a video of me going scuba diving this morning, and I have absolutely no memory whatsoever of it, and you can prove that the video is real, well, I mean, I don't know what to say, I must have been unconscious or something!
...and corresponding sophisticated description: Essentially everything that enters the GNW leaves at least a slight trace in long-term memory. Thus, one aspect of my attention schema is that it describes attention and memory as inextricably linked. According to my internal models, when attention "takes possession" of some piece of information, it leaves a trace in long-term memory, and conversely, nothing can get into long-term memory unless attention first takes possession of it.
Naive description: Hey, hey, what are you going on about "internal models" and "attention schema"? I don't know anything about that. I know what my consciousness is, I can feel it. It's not a model, it's not a computation, it's not a physical thing. (And don't call me naive!)
...and corresponding sophisticated description: All my internal models are simplified entities, containing their essential behavior and properties, but not usually capturing the nuts-and-bolts of how they work in the real world. (In a programming analogy, you could say that we're modeling the GNW's API & documentation, not its implementation.) Thus, my attention schema does not involve neurons or synapses or GNWs or anything like that, even if, in reality, that's what it's modeling.
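To unpack that programming analogy a little further (my own elaboration, not the book's): a client that knows only a module's documented interface can model its behavior perfectly well while being unable to say anything about its internals, and two wildly different implementations look identical from that vantage point.

```python
# Two very different implementations behind the same tiny "API" (my own
# illustrative example, not from the book). A caller that models only the
# interface cannot tell them apart -- just as, on this view, introspection
# models the GNW's observable behavior and is silent about neurons.

from typing import Protocol

class Attention(Protocol):
    def attend(self, item: str) -> None: ...
    def what_am_i_aware_of(self) -> str: ...

class MessyNeuralImplementation:
    def __init__(self):
        self._activations = {}       # stand-in for neurons, synapses, the GNW...
    def attend(self, item: str) -> None:
        self._activations = {item: 1.0}
    def what_am_i_aware_of(self) -> str:
        return max(self._activations, key=self._activations.get, default="nothing")

class TidySymbolicImplementation:
    def __init__(self):
        self._focus = "nothing"
    def attend(self, item: str) -> None:
        self._focus = item
    def what_am_i_aware_of(self) -> str:
        return self._focus

def introspect(attention: Attention) -> str:
    # Knows only the interface; gives an identical answer either way.
    attention.attend("this apple")
    return attention.what_am_i_aware_of()

print(introspect(MessyNeuralImplementation()))    # "this apple"
print(introspect(TidySymbolicImplementation()))   # "this apple"
```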
The meta-problem of consciousness
The "hard problem of consciousness" is "why is there an experience of consciousness; why does information processing feel like anything at all?"
The "meta-problem of consciousness" is "why do people believe that there's a hard problem of consciousness?"
The meta-problem has the advantage of having obvious and non-confusing methods of attack: the belief that there's a hard problem of consciousness is an observable output of the brain, and can be studied by normal cognitive neuroscience.
But the real head-scratcher is: If we have a complete explanation of the meta-problem, is there anything left to explain regarding the hard problem? Graziano's answer seems to be a resounding "No!", and we end up with conversations like these:
Normal Person: What about qualia?
Person Who Has Solved The Meta-Problem Of Consciousness: Let me explain why the brain, as an information processing system, would ask the question "What about qualia"...
NP: What about subjective experience?
PWHSTMPOC: Let me explain why the brain, as an information processing system, would ask the question "What about subjective experience"...
NP: You're not answering my questions!
PWHSTMPOC: Let me explain why the brain, as an information processing system, would say "You're not answering my questions"...
...
The book goes through this type of discussion several times. I feel a bit torn. One side of me says: obviously Graziano's answers are correct, and obviously no other answer is possible. The other side of me says: No no no, he did not actually answer these questions!!
On reflection, I have to side with "Obviously Graziano's are correct, and no other answer is possible." But I still find it annoying and deeply unsatisfying.
(Update: A commenter points me to Luke Muehlhauser's report on consciousness Appendix F for ideas and further reading. Having read a bit more, I still find this line of thought counterintuitive, but less so.) (Update 2: Ditto Joe Carlsmith's blog.)
Illusionism
Graziano says that his theory is within the philosophical school of thought called "Illusionism". But he thinks that term is misleading. He says it's not "illusion as in mirage", but "illusion as in mental construction", like how everything we see is an "illusion" rather than raw perceptual data.
Edited to add: Graziano makes illusionism sound very straightforward and unobjectionable. Maybe he has a way to think about it such that it really is straightforward and unobjectionable. ...Or maybe he's dancing around the counterintuitive or controversial aspects of his theory, to make it more palatable to a broad audience. I'm inclined to think it's the latter. There's another example of this elsewhere in the book: his discussion of Integrated Information Theory. He could have just said "IIT is baloney" and I would have been totally on board. I think IIT is fundamentally wrong; it was an interesting idea to look into, but let's now put it in the garbage and move on. And that's exactly what Graziano's theory implies. But Graziano doesn't say that. Instead, as I recall, he has a scrupulously non-confrontational discussion of how the GNW stuff he talks about involves a lot of integration of information in a way that the IIT "Φ" calculation would endorse as conscious. So, I think he wants to pick his battles, and that's why he dances around how weird and unintuitive illusionism really is. I could be wrong.
Emulations
He has a fun chapter on brain uploading, which is not particularly related to the rest of the book. He discusses some fascinating neuroscience aspects of brain-scanning, like the mystery of whether glial cells do computations, but spends most of the time speculating about the bizarre implications for society.
Implications for AGI safety
He suggests that, since humans are generally pro-social, and part of that comes from modeling each other using attention schemas, perhaps the cause of AGI Safety could be advanced by deliberately building conscious AGIs with attention schemas (and, I presume, other human-like emotions). Now, he's not a particular expert on AGI Safety, but I think this is not an unreasonable idea; in fact it's one that I'm very interested in myself. (We don't have to blindly copy human emotions ... we can turn off jealousy etc.)
Implications for morality
One issue where Graziano is largely silent is the implications for moral philosophy.
For example, someday we'll have to decide: When we build AGIs, should we assign them moral weight? Is it OK to turn them off? Are our AGIs suffering? How would we know? Should we care? If humans go extinct but conscious AGIs have rich experiences as they colonize the universe, do we think of them as our children/successors? Or as our hated conquerors in a now-empty clockwork universe?
I definitely share the common intuition that we should care about the suffering of things that are conscious (and/or sentient; I'm not sure what the difference is). However, in attention schema theory, there does not seem to be a sharp dividing line between "things with an attention schema" and "things without an attention schema", especially in the wide space of all possible computations. There are (presumably) computations that arguably involve something like an "attention schema" but with radically alien properties. There doesn't seem to be any good reason that, out of all the possible computational processes in the universe, we should care only and exactly about computations involving an attention schema. Instead, the picture I get is more like we're taking an ad-hoc abstract internal model and thoughtlessly reifying it. It's like if somebody worshipped the concept of pure whiteness, and went searching the universe for things that match that template, only to discover that white is a mixture of colors, and thus pure whiteness—when taken to be a literal description of a real-world phenomenon—simply doesn't exist. What then?
It's a mess.
So, as usual when I start thinking too hard about philosophy, I wind up back at Dentin's Prayer of the Altruistic Nihilist [LW(p) · GW(p)]:
Why do I exist? Because the universe happens to be set up this way. Why do I care (about anything or everything)? Simply because my genetics, atoms, molecules, and processing architecture are set up in a way that happens to care.
So, where does that leave us? Well, I definitely care about people. If I met an AGI that was pretty much exactly like a nice person, inside and out, I would care about it too (for direct emotional reasons), and I would feel that caring about it is the right thing to do (for intellectual consistency reasons). For AGIs running more alien types of algorithms—man, I just have no idea.
(thanks Tan Zhi Xuan for comments on a draft.)
More specifically, I went to a seminar where Graziano explained his theory, and then Dan Dennett spoke and said that he had essentially nothing to disagree with concerning what Graziano had said. I consider that more-or-less an "endorsement", but I may be putting words in his mouth. ↩︎
I found his discussion of "attention" vs "awareness" confusing. I'm rounding to the nearest theory that makes sense to me, which might or might not be exactly what he was trying to describe. ↩︎
56 comments
Comments sorted by top scores.
comment by romeostevensit · 2020-01-11T18:58:57.195Z · LW(p) · GW(p)
I think representationalism helps much more than the illusionist stance for loading the right intuitions. If you get close to a screen you see that really it's a bunch of RGB pixels. Is it helpful to call the projected image you see from farther away an illusion? Better to say we have multiple representations at different levels of abstraction and seem to get different sorts of computational advantages out of crunching on these different representations.
↑ comment by ESRogs · 2020-01-12T02:28:45.994Z · LW(p) · GW(p)
Semi-relatedly, I'm getting frustrated with the term "illusionist". People seem to use it in different ways. Within the last few weeks I listened to the 80k podcast with David Chalmers and the Rationally Speaking podcast with "illusionist" Keith Frankish.
Chalmers seemed to use the term to mean that consciousness was an illusion, such that it means we don't really have consciousness. Which seems very dubious.
Frankish seemed to use the term to mean that many of the properties that other philosophers think our consciousness has are illusory, but that of course we are conscious.
From listening to the latter interview, it's not clear to me that Frankish (who, according to Wikipedia, is "known for his 'illusionist' stance in the theory of consciousness") believes anything different from the view described in this post (which I assume you're classing as "representationalism").
Maybe I'm just confused. But it seems like leading philosophers of today still haven't absorbed the lesson of Wittgenstein and are still talking past each other with confusing words.
↑ comment by ESRogs · 2020-01-12T02:34:36.325Z · LW(p) · GW(p)
I guess my request of philosophers (and the rest of us) is this: when you are using an everyday term like "free will" or "consciousness", please don't define it to mean one very specific thing that bakes in a bunch of philosophical assumptions. Because then anyone who questions some of those assumptions ends up arguing whether the thing exists. Rather than just saying it's a little different than we thought before.
It'd be like if we couldn't talk about "space" or "time" anymore after Einstein. Or if half of us started calling ourselves "illusionists" w.r.t. space or time. They're not illusions! They exist! They're just a little different than we thought before.
(See also this comment [LW(p) · GW(p)], and remember that all abstractions are leaky!)
↑ comment by Steven Byrnes (steve2152) · 2020-01-12T14:05:22.337Z · LW(p) · GW(p)
Can you suggest a reference which you found helpful for "loading the right intuitions" about consciousness?
↑ comment by romeostevensit · 2020-01-19T20:32:20.671Z · LW(p) · GW(p)
Unfortunately I don't know of a good overview. Chalmers might have one. Lukeprog's post on consciousness has some pointers.
↑ comment by Steven Byrnes (steve2152) · 2020-01-20T01:25:47.104Z · LW(p) · GW(p)
Thanks! I just read Luke's report Appendix F on illusionism and it's definitely pointing me in fruitful directions.
comment by [deleted] · 2020-01-11T10:38:41.286Z · LW(p) · GW(p)
On reflection, I have to side with "Obviously Graziano's are correct, and no other answer is possible." But I still find it annoying and deeply unsatisfying.
Sounds like you are noticing you are still confused, having received a mysterious answer to a mysterious question. That seems a clear indication that your intuition is correct and the hard problem of consciousness has not been resolved by this theory.
↑ comment by Steven Byrnes (steve2152) · 2020-01-12T02:25:44.157Z · LW(p) · GW(p)
Ha! Maybe!
Or maybe it's like the times I've read poorly-written math textbooks, and there's a complicated proof of Theorem X, and I'm able to check that every step of the proof is correct, but all the steps seem random, and then out of nowhere, the last step says "Therefore, Theorem X is true". OK, well, I guess Theorem X is true then.
...But if I had previously found Theorem X to be unintuitive ("it seems like it shouldn't be true"), I'm now obligated to fix my faulty intuitions and construct new better ones to replace them, and doing so can be extremely challenging. In that sense, reading and verifying the confusing proof of Theorem X is "annoying and deeply unsatisfying".
(The really good math books offer both a rigorous proof of Theorem X and an intuitive way to think about things such that Theorem X is obviously true once those intuitions are internalized. That saves readers the work of searching out those intuitions for themselves from scratch.)
So, I'm not saying that Graziano's argument is poorly-written per se, but having read the book, I find myself more-or-less without any intuitions about consciousness that I can endorse upon reflection, and this is an annoying and unsatisfying situation. Hopefully I'll construct new better intuitions sooner or later. Or—less likely I think—I'll decide that Graziano's argument is baloney after all :-)
↑ comment by [deleted] · 2020-01-16T15:18:10.341Z · LW(p) · GW(p)
Sorry, but you can be better than that. You should not be trusting textbook authors when they say that Theorem X is true. If you don't follow the chain of reasoning and see for yourself why it works, then you shouldn't take it at face value. You can do better.
This is an unpopular opinion because people don't like doing the work. But if you've read the memoirs of anyone who has achieved greatness through originality in their work, like Richard Feynman for example, there is a consistent lesson: don't trust what you don't understand yourself.
In a community where the explicit goal is to be less wrong, then I cannot think of a stronger mandate than to not trust authority and to develop your own intuitive understanding of everything. Anyone who says this isn't possible hasn't really tried.
↑ comment by Steven Byrnes (steve2152) · 2020-01-17T02:46:35.673Z · LW(p) · GW(p)
develop your own intuitive understanding of everything
I agree 100%!! That's the goal. And I'm not there yet with consciousness. That's why I used the word "annoying and unsatisfying" to describe my attempts to understand consciousness thus far. :-P
You should not be trusting textbook authors when they say that Theorem X is true
I'm not sure you quite followed what I wrote here.
I am saying that it's possible to understand a math proof well enough to have 100% confidence—on solely one's own authority—that the proof is mathematically correct, but still not understand it well enough to intuitively grok it. This typically happens when you can confirm that each step of the proof, taken on its own, is mathematically correct.
If you haven't lived this experience, maybe imagine that I give you a proof of the Riemann hypothesis in the form of 500 pages of equations kinda like this, with no English-language prose or variable names whatsoever. Then you spend 6 months checking rigorously that every line follows from the previous line (or program a computer to do that for you). OK, you have now verified on solely your own authority that the Riemann hypothesis is true. But if I now ask you why it's true, you can't give any answer better than "It's true because this 500-page argument shows it to be true".
So, that's a bit like where I'm at on consciousness. My "proof" is not 500 pages, it's just 4 steps, but that's still too much for me to hold the whole thing in my head and feel satisfied that I intuitively grok it.
1. I am strongly disinclined to believe (as I think David Chalmers has suggested) that there's a notion of p-zombies, in which an unconscious system could have exactly the same thoughts and behaviors as a conscious one, even including writing books about the philosophy of consciousness, for reasons described here [LW · GW] and elsewhere.
2. If I believe (1), it seems to follow that I should endorse the claim "if we have a complete explanation of the meta-problem of consciousness, then there is nothing left to explain regarding the hard problem of consciousness". The argument more specifically is: Either the behavior in which a philosopher writes a book about consciousness has some causal relation to the nature of consciousness itself (in which case, solving the meta-problem requires understanding the nature of consciousness), or it doesn't (in which case, unconscious p-zombies should (bizarrely) be equally capable of writing philosophy books about consciousness).
3. I think that Attention Schema Theory offers a complete and correct answer to every aspect of the meta-problem of consciousness, at least every aspect that I can think of.
4. ...Therefore, I conclude that there is nothing to consciousness beyond the processes discussed in Attention Schema Theory.
I keep going through these steps and they all seem pretty solid, and so I feel somewhat obligated to accept the conclusion in step 4. But I find that conclusion highly unintuitive, I think for the same reason most people do—sorta like, why should any information processing feel like anything at all?
So, I need to either drag my intuitions into line with 1-4, or else crystallize my intuitions into a specific error in one of the steps 1-4. That's where I'm at right now. I appreciate you and others in this comment thread pointing me to helpful and interesting resources! :-)
↑ comment by TAG · 2020-01-17T15:44:28.784Z · LW(p) · GW(p)
I am strongly disinclined to believe (as I think David Chalmers has suggested) that there’s a notion of p-zombies, in which an unconscious system could have exactly the same thoughts and behaviors as a conscious one, even including writing books about the philosophy of consciousness, for reasons described here and elsewhere.
Again: Chalmers doesn't think p-zombies are actually possible.
If I believe (1), it seems to follow that I should endorse the claim “if we have a complete explanation of the meta-problem of consciousness, then there is nothing left to explain regarding the hard problem of consciousness”.
That doesn't follow from (1). It would follow from the claim that everyone is a zombie, because then there would be nothing to consciousness except false claims to be conscious. However, if you take the view that reports of consciousness are caused by consciousness per se, then consciousness per se exists and needs to be explained separately from reports and behaviour.
↑ comment by Steven Byrnes (steve2152) · 2020-01-17T17:36:42.576Z · LW(p) · GW(p)
Hmm. I do take the view that reports of consciousness are (at least in part) caused by consciousness (whatever that is!). (Does anyone disagree with that?) I think a complete explanation of reports of consciousness necessarily include any upstream cause of those reports. By analogy, I report that I am wearing a watch. If you want a "complete and correct explanation" of that report, you need to bring up the fact that I am in fact wearing a watch, and to describe what a watch is. Any explanation omitting the existence of my actual watch would not match the data. Thus, if reports of consciousness are partly caused by consciousness, then it will not be possible to correctly explain those reports unless, somewhere buried within the explanation of the report of consciousness, there is an explanation of consciousness itself. Do you see where I'm coming from?
↑ comment by TAG · 2021-01-01T02:49:01.512Z · LW(p) · GW(p)
If explaining reports of consciousness involves solving the hard problem, then no one has explained reports of consciousness, since no one has solved the HP.
Of course, some people (eg. Dennett) think that reports of consciousness can be explained ... and don't accept that there is an HP.
And the HP isn't about consciousness in general, it is about qualia or phenomenal consciousness, the very thing that illusionism denies.
Edit: the basic problem with what you are saying is that there are disagreements about what explanation is, and about what needs to be explained. The Dennett side thinks that once you have explained all the objective phenomena objectively, you have explained everything. The Chalmers side thinks that leaves out the most important stuff.
comment by Shmi (shminux) · 2020-01-11T03:49:57.453Z · LW(p) · GW(p)
This reminds me of Eliezer's classic post Dissolving the Question [LW · GW].
From your post:
The "hard problem of consciousness" is "why is there an experience of consciousness; why does information processing feel like anything at all?"
The "meta-problem of consciousness" is "why do people believe that there's a hard problem of consciousness?"
From Eliezer's post:
Your assignment is not to argue about whether people have free will, or not.
Your assignment is not to argue that free will is compatible with determinism, or not.
Your assignment is not to argue that the question is ill-posed, or that the concept is self-contradictory, or that it has no testable consequences.
You are not asked to invent an evolutionary explanation of how people who believed in free will would have reproduced; nor an account of how the concept of free will seems suspiciously congruent with bias X. Such are mere attempts to explain why people believe in "free will", not explain how.
Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument.
Is there anything else to the book you review beyond what Eliezer captured back 12 years ago?
↑ comment by Shmi (shminux) · 2020-01-11T03:54:08.146Z · LW(p) · GW(p)
And an even simpler summary in a follow-up post, Righting a Wrong Question [? · GW]:
When you are faced with an unanswerable question—a question to which it seems impossible to even imagine an answer—there is a simple trick which can turn the question solvable.
Compare:
- "Why do I have free will?"
- "Why do I think I have free will?"
↑ comment by [deleted] · 2020-01-11T11:14:27.105Z · LW(p) · GW(p)
That approach doesn’t work in this case, however. It works great for free will, where uncovering the way in which we made decisions feels like “free will” from the inside. It is a problem that dissolves entirely upon answering the meta question.
But there are other problems which do not get dissolved by answering the meta question. “Why do I think reality exists?” for example. You could conceivably convince me that we are living inside the matrix and that what I think is immutable reality is actually manipulatable data in a running computer program. But what you can never convince me of is that there is NO reality, that I do not exist.
For the exact same reasons, you cannot convince me that I am "not conscious," or expect that explaining why the mostly deterministic computational process which is my brain asks questions about consciousness is a suitable answer for why I, or anything, subjectively feel conscious. "I think therefore I am" is not dissolved by knowing how thinking works.
Free will is an artifact of how a decision process feels from the inside. The hard problem of consciousness is why ANY process “feels” anything at all, which cannot be resolved in the same way.
I am really puzzled as to why people think the question of consciousness can be resolved in this way. The best I can come up with is that this is a form of belief in belief. People have seen the meta question resolve similar sounding problems before, so far without exception. Dennett goes to great lengths in his books to explain that asking “why” must ALWAYS be transformed into asking “how.” So they assume it must work the same for consciousness. But the hard problem of consciousness is one of the unique exceptions because it deals with subjective experience, specifically why we have subjective experience at all. (It is, in fact, a variant of the first-cause problem.)
↑ comment by Shmi (shminux) · 2020-01-11T21:55:47.403Z · LW(p) · GW(p)
“Why do I think reality exists?”
Is already answerable. You can list a number of reasons why you hold this belief. You are not supposed to dissolve the new question, only reformulate the original one so that it becomes answerable.
why ANY process “feels” anything at all
Is harder because we do not have a good handle on what physical process creates feelings, or in Dennett's approach, how feelings form. But at least we know what kind of research needs to be conducted in order to make progress in that area. In that way the question is answerable, at least in principle; we are just lacking a good understanding of how the human brain works. So the question is ultimately about the neuroscience and the algorithms.
But the hard problem of consciousness is one of the unique exceptions because it deals with subjective experience, specifically why we have subjective experience at all. (It is, in fact, a variant of the first-cause problem.)
That's the "dangling unit" (my grade 8 self says "lol!" at the term) Eliezer was talking about. There are no "unique exceptions", we are algorithms, and some of the artifacts of running our algorithms are "feelings" or "qualia" or "subjective experiences". If this leaves you saying "but... but... but...", then the next quote from Eliezer already anticipates that:
This dangling unit feels like an unresolved question, even after every answerable query [? · GW] is answered. No matter how much anyone proves to you that no difference of anticipated experience depends on the question, you're left wondering: "But does the falling tree really make a sound, or not?"
↑ comment by [deleted] · 2020-01-12T00:49:58.921Z · LW(p) · GW(p)
I agree with this post. However once you take this line of thinking to its conclusion, the result is panpsychism (which Tegmark professes) rather than the "explain away" belief of Dennett et al.
↑ comment by Shmi (shminux) · 2020-01-12T02:41:05.876Z · LW(p) · GW(p)
I am not sure how this leads to panpsychism. What are the logical steps there?
↑ comment by [deleted] · 2020-01-16T15:09:46.940Z · LW(p) · GW(p)
1. I exist. (Cogito, ergo sum). I'm a thinking, conscious entity that experiences existence at this specific point in time in the multiverse.
2. Our understanding of physics is that there is no fundamental thing that we can reduce conscious experience down to. We're all just quarks and leptons interacting.
These appear to be in conflict. Taking (2) to its logical conclusion seems to imply that we live in a deterministic block universe, or at least we can frame our physical understanding as if we do. But if that's true, and if the universe is big enough (it is a big place!), then somewhere out there in space or time is a computational process that resembles the me-of-right-now. Maybe a Boltzmann brain, or maybe a simulation of me in the future, or maybe just the split off Everett branches of alternate histories. Since there are multiple instances of me out there, how come I'm stuck in the "now"?
Any fundamental theory of physics must explain ALL the evidence we have available to us. This includes both highly precise quantum measurements, and the fact that I'm a thinking, conscious entity that experiences existence at this specific point in time in the multiverse. One of the chief problems here is that physics, so far as we can tell, is entirely local. We expect future physical laws to also be local. But our best guess at understanding consciousness is that it is information processing, and is only really described at a much higher level than quarks and leptons. So our prior is that we need a physical theory that explains consciousness at the level of quarks and leptons, but that seems irreconcilable with our current understanding of biological consciousness. I'd accept an alternative theory if it led to testable predictions, but I'm not willing to bite the bullet of non-local physical theories of consciousness without experimental evidence. The prior for locality in fundamental physics is simply too high to realistically consider alternatives otherwise.
The jump to panpsychism is not an inference from evidence but rather a deduction from a reasonable prior: a local theory of consciousness would imply that (1) a single lepton interacting with a field (an electron emitting or absorbing a photon) has some epsilon experience of consciousness; and (2) conscious experience locally aggregates. So two electrons exchanging a photon is a single consciousness event of, say, 2*epsilon magnitude (although the relationship need not be linear). Higher order structure further aggregates this singular experience of consciousness, in a progression from quarks and leptons to atoms, to molecules, to organelles, to cell structures, to tissue, to organs, to entire organisms. However at some point the system interacts with a non-factorable stochastic boundary, the environment, which prevents further aggregation. The singularly conscious entity interacts with the environment, but each interaction is isolated and either peels off or adds epsilon consciousness stochastically, like the steady-state boundary between a liquid and a gas.
This so-far qualitatively descriptive theory explains why information-processing systems like our brains (or AI) have singular experiences of consciousness, without having to invoke theories like epiphenomenalism with a questionable physical basis. My evidence for it is simply an Occam prior: it's the simplest theory with local physics which explains the evidence. But as you expect of any local theory, what's true of one part of the universe is true of another. If we have subjective experiences (and I'm not willing to bite the bullet of rejecting Descartes' Cogito), then so does a rock. And the ocean. And every little thing in the universe. Indeed the universe itself is conscious, to whatever degree that makes sense in an inflationary universe with local physics, and we are just factorable complex interactions within that universal consciousness that experience our own subjective sense of self. When we "die", our experience doesn't stop... but it does stop being interesting from a human standpoint, as we return to the stochastic random noise of the environment in which we live. [*]
This is the basis of a physical theory of consciousness I thought up almost two decades ago when I first encountered the quantum teleportation thought experiment in a philosophy class, but it is also basically the same as Max Tegmark's panpsychic theory of consciousness, so I'll just point you to his articles for more detail.
[*] Aside: if this is true, being cremated might be the worst possible outcome after death. Being worm food is better than being perfectly split up into the perfectly stochastic entities (gas molecules) and dispersed in the environment... It would also mean that cryonics works, however, but destructive mind uploading is a kill-and-copy operation.
↑ comment by Shmi (shminux) · 2020-01-17T07:56:09.767Z · LW(p) · GW(p)
Not surprisingly, I have a few issues with your chain of reasoning.
1. I exist. (Cogito, ergo sum). I'm a thinking, conscious entity that experiences existence at this specific point in time in the multiverse.
Cogito is an observation. I am not arguing with that one. Ergo sum is an assumption, a model. The "multiverse" thing is a speculation.
Our understanding of physics is that there is no fundamental thing that we can reduce conscious experience down to. We're all just quarks and leptons interacting.
This is very much simplified. Sure, we can do reduction, but that doesn't mean we can do synthesis. There is no guarantee that it is even possible to do synthesis. In fact, there are mathematical examples where synthesis might not be possible, simply because the relevant equations cannot be solved uniquely. I made a related point here [LW(p) · GW(p)]. Here is an example. Consciousness can potentially be reduced to atoms, but it may also be reduced to bits, a rather different substrate. Maybe there are other reductions possible.
And it is also possible that constructing consciousness out of quarks and leptons is impossible because of "hard emergence". Of the sorites kind. There is no atom of water. A handful of H2O molecules cannot be described as a solid, liquid or gas. A snowflake requires trillions of trillions of H2O molecules together. There is no "snowflakiness" in a single molecule. Just like there is no consciousness in an elementary particle. There is no evidence for panpsychism, and plenty against it.
↑ comment by [deleted] · 2020-01-17T08:08:03.971Z · LW(p) · GW(p)
Postulating hard emergence requires a non-local postulate. I’m not willing to accept that without testable predictions.
I don't really see how "ergo sum" is an assumption. If anything it is a direct inference, but not an assumption. Something exists that is perceiving. Any theory that says otherwise must be incorrect.
↑ comment by TAG · 2020-01-17T13:42:37.753Z · LW(p) · GW(p)
Postulating hard emergence requires a non-local postulate.
That is not obvious.
↑ comment by [deleted] · 2020-01-17T17:09:11.393Z · LW(p) · GW(p)
If consciousness only "emerges" when an information processing system is constructed at a higher level, then that implies that the whole is something different than the aggregate of its many individual interactions. This is unlike shminux's description of liquid water emerging from H2O interactions, which is a confusion of map and territory. If a physical description stated that an interaction is conscious if and only if it is part of an information processing system, that is something that cannot be determined with local information at the exact time and place of the individual interactions.
I'm biting the bullet of QM (the standard model, or whatever quantum gravity formulation wins out) being all there is. If that is true, then explaining subjective experience requires a local postulate, not an added rule, which results in panpsychism.
↑ comment by TAG · 2020-01-17T13:40:51.910Z · LW(p) · GW(p)
Taking (2) to its logical conclusion seems to imply that we live in a deterministic block universe,
That was not implied by (2) as stated, and isn't implied by physics in general. Both the block universe and determinism are open questions (and not equivalent to each other).
One of the chief problems here is that physics, so far as we can tell, is entirely local.
[emph. added]
Nope. What is specifically ruled out by tests of Bell's inequalities is the conjunction of (local, deterministic). The one thing we know is that the two things you just asserted are not both true. What we don't know is which is false.
↑ comment by Shmi (shminux) · 2020-01-19T08:36:33.957Z · LW(p) · GW(p)
Actually the superdeterminism models allow for both to be true. There is a different assumption that breaks.
↑ comment by Steven Byrnes (steve2152) · 2020-01-17T15:05:43.185Z · LW(p) · GW(p)
What is specifically ruled out by tests of Bell's inequalities is the conjunction of (local, deterministic). The one thing we know is that the two things you just asserted are not both true. What we don't know is which is false.
I think you're nitpicking here. While we don't know the fundamental laws of the universe with 100% confidence, I would suggest that based on what we do know, they are extremely likely to be local and non-deterministic (as opposed to nonlocal hidden variables). Quantum field theory (QFT) is in that category, and adding general relativity doesn't change anything except in unusual extreme circumstances (e.g. microscopic black holes, or the Big Bang—where the two can't be sensibly combined). String theory doesn't really have a meaningful notion of locality at very small scales (Planck length, Planck time), but at larger scales in normal circumstances it approaches QFT + classical general relativity, which again is local and non-deterministic. (So yes, probably our everyday human interactions have nonlocality at a part-per-googolplex level or whatever, related to quantum fluctuations of the geometry of space itself, but it's hard to imagine that this would matter for anything.)
(By non-deterministic I just mean that the Born rule involves true randomness. In Copenhagen interpretation you say that collapse is a random process. In many-worlds you would say that the laws of physics are deterministic but the quasi-anthropic question "what branch of the wavefunction will I happen to find myself in?" has a truly random answer. Either way is fine; it doesn't matter for this comment.)
↑ comment by TAG · 2020-01-17T15:48:35.436Z · LW(p) · GW(p)
Well, I wasn't nitpicking you. Friedenbach was asserting locality+determinism. You are asserting locality+nondeterminism, which is OK.
↑ comment by [deleted] · 2020-01-17T16:21:52.319Z · LW(p) · GW(p)
FWIW I was asserting this:
In many-worlds you would say that the laws of physics are deterministic
The only thing non-deterministic in QM is the Born rule, which isn’t part of a MWI block universe formulation. (You need a source of randomness to specify where “you” will end up in the future evolution of the universe, but not to specify all paths you might end up in.)
↑ comment by Steven Byrnes (steve2152) · 2020-01-17T10:34:59.340Z · LW(p) · GW(p)
Interesting!
We also need (I would think) for the experience of consciousness to somehow cause your brain to instruct your hands to type "cogito ergo sum". From what you wrote, I'm sorta imagining physical laws plus experience glued to it ... and that physical laws without experience glued to it would still lead to the same nerve firing pattern, right? Or maybe you'll say physical laws without experience is logically impossible? Or what?
↑ comment by [deleted] · 2020-01-17T17:34:17.143Z · LW(p) · GW(p)
I don't find the question relevant. That's a physicist's application of Occam's razor: extra postulates about consciousness don't affect physical calculations, so we should ignore them--just like MWI vs CI doesn't affect experimental predictions, so a physicist shouldn't care what interpretation is used.
But my concern is the intersection of physics and philosophy: what moral weight should I give in my utilitarian assessment of possible future outcomes? Whether a life form is conscious or not doesn't matter much from a physicist's perspective because it doesn't affect the biochemical calculations, but it does matter to the question "should I protect this life?"
There is a division in the transhumanist community between whether one should identify with the instance of a computation, or the description of a computation. This has practical, real-world consequences: should I sign up for cryonics (with the possibility of revival, but you suffer some damage) or brain preservation (less damage, but only destructive uploading options)?
If the panpsychic consciousness-in-every-interaction postulate I stated is true, then morality and personal utility comes down instance of computation, not description of computation camp. This means cryonics (long sleep) is favored over brain preservation (kill-and-copy), and weird stuff like quantum suicide are also ruled out as options.
↑ comment by TAG · 2020-01-12T12:21:44.063Z · LW(p) · GW(p)
“Why do I think reality exists?” Is already answerable. You can list a number of reasons why you hold this belief.
There are also reasons for believing in non-illusory forms of free will and consciousness. If that argument is sufficient to establish realism in some cases, it is sufficient in all cases.
You are not supposed to dissolve the new question, only reformulate the original one so that it becomes answerable.
Supposed by whom? EY gives some instructions in the imperative voice, but that's not how logic works.
His argument is that if free will is possibly an illusion then it is an illusion. If valid, this argument would also show that consciousness and material reality are definitely illusions.
So it disproves too much.
But there is a valid form of the argument where you argue against the reality of X in addition to arguing for the possible illusory nature of X.
There are no “unique exceptions”, we are algorithms,
That's much more conjectural than most of the claims made here.
↑ comment by Steven Byrnes (steve2152) · 2020-01-11T09:38:50.076Z · LW(p) · GW(p)
Yep! I agree with you: Rethinking Consciousness and those two Eliezer posts are coming from a similar place.
(Just to be clear, the phrase "meta-problem of consciousness" comes from David Chalmers, not Graziano. More generally, I don't know exactly which aspects of really anything here are original Graziano inventions, versus Graziano synthesizing ideas from the literature. I'm not familiar with the consciousness literature, and also I listened to the audio book which omits footnotes and references.)
↑ comment by TAG · 2020-01-11T13:16:26.975Z · LW(p) · GW(p)
Except that EY is not an illusionist about consciousness! When considering free will, he assumes right off the bat that it can't possibly be real, and has to be explained away instead. But in the generalised anti zombie principle [LW · GW], he goes in the opposite direction, insisting that reports of consciousness are always caused by consciousness. [*]
So there is no unique candidate for being an illusion. Anything can be. Some people think consciousness is all, and matter is an illusion.
Leading to the anti-Aumann principle: two parties will never agree if they are allowed to dismiss each others evidence out of hand.
[*] Make no mistake, asserting illusionism about consciousness is asserting you yourself are a zombie.
↑ comment by Steven Byrnes (steve2152) · 2020-01-11T14:26:34.880Z · LW(p) · GW(p)
If you say that free will and consciousness are by definition non-physical, then of course naturalist explanations explain them away. But you can also choose to define the terms to encompass what you think is really going on. This is called "compatibilism" for free will, and this is Graziano's position on consciousness. I'm definitely signed up for compatibilism on free will and have been for many years, but I don't yet feel 100% comfortable calling Graziano's ideas "consciousness" (as he does), or if I do call it that, I'm not sure which of my intuitions and associations about "consciousness" are still applicable.
↑ comment by TAG · 2020-01-11T15:36:30.297Z · LW(p) · GW(p)
If you say that free will and consciousness are by definition non-physical, then of course naturalist explanations explain them away.
Object level reply: I don't. Most contemporary philosophers don't. If you see that sort of thing it is almost certainly a straw man.
meta level reply: And naturally idealists reject any notion of matter except as a bundle of sensation. Just because something is normal and natural, does not mean it is normatively correct. It is normal and natural to be tribal, biased and otherwise irrational. Immunity to evidence is a Bad Thing from the point of view of rationality.
But you can also choose to define the terms to encompass what you think is really going on
You can if you really know, but confusing assumptions and knowledge is another Bad Thing. We know that atoms can be split, so redefining an atom to be a divisible unit of matter is fine.
I’m definitely signed up for compatibilism on free will and have been for many years
Explaining compatibilist free will is automatically explaining away libertarian free will. So what is the case against libertarian free will? It isn't false because of naturalism, since it isn't supernatural by definition -- and because naturalism needs to be defeasible to mean anything. EY dismisses libertarian free will out of hand. That is not knowledge.
but I don’t yet feel 100% comfortable calling Graziano’s ideas “consciousness” (as he does), or if I do call it that, I’m not sure which of my intuitions and associations about “consciousness” are still applicable.
What would it take for it to be false? If the answer is "nothing", then you are looking at suppression of evidence.
↑ comment by Steven Byrnes (steve2152) · 2020-01-12T14:27:25.788Z · LW(p) · GW(p)
Sorry for being sloppy, you can ignore what I said about "non-physical", I really just meant the more general point that "consciousness doesn't exist (if consciousness is defined as X)" is the same statement as "consciousness does not mean X, but rather Y", and I shouldn't have said "non-physical" at all. You sorta responded to that more general point, although I'm interested in whether you can say more about how exactly you define consciousness such that illusionism is not consciousness. (As I mentioned, I'm not sure I'll disagree with your definition!)
What would it take for it to be false?
I think that if attention schema theory can explain every thought and feeling I have about consciousness (as in my silly example conversation in the "meta-problem of consciousness" section), then there's nothing left to explain. I don't see any way around that. I would be looking for (1) some observable thought / behavior that AST cannot explain, (2) some reason to think those explanations are wrong, or (3) a good argument that true philosophical zombies are sensible, i.e. that you can have two systems whose every observable thought / behavior is identical but exactly one of them is conscious, or (4) some broader framework of thinking that accepts the AST story as far as it goes, and offers a different way to think about it intuitively and contextualize it.
↑ comment by TAG · 2020-01-12T18:23:03.407Z · LW(p) · GW(p)
I really just meant the more general point that “consciousness doesn’t exist (if consciousness is defined as X)” is the same statement as “consciousness does not mean X, but rather Y”
If you stipulate that consciousness means Y consciousness, not X consciousness, you haven't proven anything about X consciousness.
If I stipulate that when I say, "duck", I mean mallards, I imply nothing about the existential status of muscovies or teals. In order to figure out what is real, you have to look, not juggle definitions.
If you have an infallible way of establishing what really exists, that in some way bypasses language, and a normative rule that every term must have a real-world referent, then you might be in a place where you can say what a word really means.
Otherwise, language is just custom.
I’m interested in whether you can say more about how exactly you define consciousness such that illusionism is not consciousness. (As I mentioned, I’m not sure I’ll disagree with your definition!)
Illusionism is not consciousness because it is a theory of consciousness.
Illusionism explicitly does not explain consciousness as typically defined, but instead switches the topic to third person reports of consciousness.
Edit1:
I think that if attention schema theory can explain every thought and feeling I have about consciousness (as in my silly example conversation in the “meta-problem of consciousness” section), then there’s nothing left to explain
Explaining consciousness as part of the hard problem of consciousness is different to explaining-away consciousness (or explaining reports of consciousness) as part of the meta problem of consciousness.
Edit2:
There are two ways of not knowing the correct explanation of something: the way where no one has any idea, and the way where everyone has an idea... but no one knows which explanation is right because they are explaining different things in different ways.
Having an explanation is only useful in the first situation. Otherwise, the whole problem is the difference between "an explanation" and "the explanation".
↑ comment by Steven Byrnes (steve2152) · 2020-01-19T14:12:57.123Z · LW(p) · GW(p)
Explaining consciousness as part of the hard problem of consciousness is different to explaining-away consciousness (or explaining reports of consciousness) as part of the meta problem of consciousness.
I commented here [LW(p) · GW(p)] why I think that it shouldn't be possible to fully explain reports of consciousness without also fully explaining the hard problem of consciousness in the process of doing so. I take it you disagree (correct?) but do you see where I'm coming from? Can you be more specific about how you think about that?
comment by ESRogs · 2020-01-11T06:23:31.381Z · LW(p) · GW(p)
Now put the two together, and you get an "attention schema", an internal model of the activity of the GNW, which he calls attention.
To clarify, he calls "the activity of the GNW" attention, or he calls "an internal model of the activity of the GNW" attention?
My best guess interpretation of what you're saying is that it's the former, and when you add "an internal model of" on the front, that makes it a schema. Am I reading that right?
↑ comment by Steven Byrnes (steve2152) · 2020-01-11T09:15:40.326Z · LW(p) · GW(p)
Yes! I have edited to make that clearer, thanks.
comment by Gordon Seidoh Worley (gworley) · 2020-01-14T22:18:22.957Z · LW(p) · GW(p)
Now put the two together, and you get an "attention schema", an internal model of attention (i.e., of the activity of the GNW). The attention schema is supposedly key to the mystery of consciousness.
The idea of an attention schema helps make sense of a thing talked about in meditation. In zen we talk sometimes about it via the metaphor of the mind like a mirror such that it sees itself reflecting in itself. In The Mind Illuminated it's referred to as metacognitive awareness. The point is that the process by which the mind operates can be observed by itself even as it operates, and perhaps the attention schema is an important part of what it means to do that, specifically causing the attention schema to be able to model itself.
comment by torekp · 2020-07-06T00:52:11.542Z · LW(p) · GW(p)
When I read
To be clear, if GNW is "consciousness" (as Dehaene describes it), then the attention schema is "how we think about consciousness". So this seems to be at the wrong level! [...] But it turns out, he wants to be one level up!
I thought, thank goodness, Graziano (and steve2152) gets it. But in the moral implications section, you immediately start talking about attention schemas rather than simply attention. Attention schemas aren't necessary for consciousness or sentience; they're necessary for meta-consciousness. I don't mean to deny that meta-consciousness is also morally important, but it strikes me as a bad move to skip right over simple consciousness.
This may make little difference to your main points. I agree that "There are (presumably) computations that arguably involve something like an 'attention schema' but with radically alien properties." And I doubt that I could see any value in an attention schema with sufficiently alien properties, nor would I expect it to see value in my attentional system.
comment by Rafael Harth (sil-ver) · 2020-12-31T14:29:40.652Z · LW(p) · GW(p)
I guess it was too nice that I tend to agree with everything you say about the brain, so there had to be an exception.
Normal Person: What about qualia?
Person Who Has Solved The Meta-Problem Of Consciousness: Let me explain why the brain, as an information processing system, would ask the question "What about qualia"...
NP: What about subjective experience?
PWHSTMPOC: Let me explain why the brain, as an information processing system, would ask the question "What about subjective experience"...
NP: You're not answering my questions!
PWHSTMPOC: Let me explain why the brain, as an information processing system, would say "You're not answering my questions"...
It seems to me like PWHSTMPOC is being chicken here. The real answer is "there is no qualia" followed by "however, I can explain why your brain outputs the question about qualia". Right?
If so, well, I know that there's qualia because I experience it, and I genuinely don't understand why that's not the end of the conversation. It's also true that a brain like mine could say this if it weren't true, but this doesn't change anything about the fact that I experience qualia. (Unless the claim isn't that there's no qualia, in which case I don't understand illusionism.)
I'm also not following your part on morality. If consciousness isn't real, why doesn't that just immediately imply nihilism? (This isn't an argument for it being real, of course.) Anyway, please feel free to ignore this paragraph if the answer is too complicated.
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2020-12-31T15:56:02.339Z · LW(p) · GW(p)
It's also true that a brain like [mine] could say this if it weren't true
This is the p-zombie thing [LW · GW], but I think there's a simpler way to think about it. You wrote down "I know that there's qualia because I experience it". There was some chain of causation that led to you writing down that statement. Here's a claim:
Claim: Your experience of qualia played no role whatsoever in the chain of causation in your brain that led to you writing down the statement "I know that there's qualia because I experience it".
This is a pretty weird claim, right? I mean, you remember writing down the statement. Would you agree with that claim? No way, right?
Well, if we reject that claim, then we're kinda stuck saying that if there are qualia, they are somewhere to be found within that chain of causation. And if there's nothing to be found in the chain of causation that looks like qualia, then either there are no qualia, or else qualia are not what they look like.
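In symbols (a rough sketch; $Q$, $W$, and $L$ are labels I'm introducing here, not notation from the post or the book): write $Q$ for "qualia exist", $W$ for "qualia appear somewhere in the chain of causation that produced the written statement", and $L$ for "something in that chain looks like qualia". Rejecting the Claim amounts to accepting $Q \Rightarrow W$; if inspecting the chain then gives $\neg L$, we're left with $\neg Q \vee (W \wedge \neg L)$: either there are no qualia, or qualia are in the chain but are not what they look like.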
(Unless the claim isn't that there's no qualia, in which case I don't understand illusionism.)
I can't say I understand it very well either, and see also Luke's report Appendix F and Joe's blog post. From where I'm at right now, there's a set of phenomena that people describe using words like "consciousness" and "qualia", and nothing we say will make those phenomena magically disappear. However, it's possible that those phenomena are not what they appear to be.
We all perceive that we have qualia. You can think of statements like "I perceive X" as living on a continuum, like a horizontal line. On the left extreme of the line, we can perceive things because those things are out there in the world and our senses are accurately and objectively conveying them to us. On the right extreme of the line, we can perceive things because of quirks of our perceptual systems.
So those motion illusions are closer to the right end, and "I see a rock" is closer to the left end. But as Graziano points out, there is nothing that's all the way at the left end—even "I see a rock" is very much a construction built by our brain that has some imperfect correlation with the configuration of atoms in the world. (I'm talking about when you're actually looking at a rock, in the everyday sense. If "I see a rock" when I'm hallucinating on LSD, then that's way over on the right end.)
"I have qualia" is describing a perception. Where is it on that line? I say it's over towards the right end. That's not necessarily the same as saying "no such thing as qualia". You could also say "qualia is part of our perception of the world". And so what if it is? Our perception of the world is pretty important, and I'm allowed to care about it...
If consciousness isn't real, why doesn't that just immediately imply nihilism?
There's a funny thing about nihilism: It's not decision-relevant. Imagine being a nihilist, deciding whether to spend your free time trying to bring about an awesome post-AGI utopia, vs sitting on the couch and watching TV. Well, if you're a nihilist, then the awesome post-AGI utopia doesn't matter. But watching TV doesn't matter either. Watching TV entails less exertion of effort. But that doesn't matter either. Watching TV is more fun (well, for some people). But having fun doesn't matter either. There's no reason to throw yourself at a difficult project. There's no reason not to throw yourself at a difficult project. Isn't it funny?
I don't have a grand ethical theory, I'm not ready to sit in judgment of anyone else, I'm just deciding what to do for my own account. There's a reason I ended the post with "Dentin's prayer of the altruistic nihilist"; that's how I feel, at least sometimes. I choose to care about information-processing systems that are (or "perceive themselves to be"?) conscious in a way that's analogous to how humans do that, with details still uncertain. I want them to be (or "to perceive themselves to be"?) happy and have awesome futures. So here I am :-D
Replies from: TAG, sil-ver↑ comment by TAG · 2021-01-01T02:58:57.490Z · LW(p) · GW(p)
Well, if we reject that claim, then we’re kinda stuck saying that if there are qualia, they are somewhere to be found within that chain of causation. And if there’s nothing to be found in the chain of causation that looks like qualia,
Looks from the inside, or looks from the outside?
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2021-01-01T13:56:05.642Z · LW(p) · GW(p)
I guess outside, see my comment about the watch [LW(p) · GW(p)] for what I was trying to get at there.
Replies from: TAG↑ comment by TAG · 2021-01-02T22:58:22.344Z · LW(p) · GW(p)
If you start from the assumption that only "outside" -- third-person, objective -- evidence counts, then it is easy to come to the conclusion that only physical causation counts. Qualia are found in the chain, subjectively, because, subjectively, they seem to cause things.
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2021-01-02T23:19:05.815Z · LW(p) · GW(p)
I don't really understand what you're getting at, and I suspect it would take more than one sentence for me to get it. If there's an article or other piece of writing that you'd suggest I read, please let me know. :-)
↑ comment by Rafael Harth (sil-ver) · 2020-12-31T17:19:25.026Z · LW(p) · GW(p)
There's a funny thing about nihilism: It's not decision-relevant. Imagine being a nihilist, deciding whether to spend your free time trying to bring about an awesome post-AGI utopia, vs sitting on the couch and watching TV. Well, if you're a nihilist, then the awesome post-AGI utopia doesn't matter. But watching TV doesn't matter either. Watching TV entails less exertion of effort. But that doesn't matter either. Watching TV is more fun (well, for some people). But having fun doesn't matter either. There's no reason to throw yourself at a difficult project. There's no reason not to throw yourself at a difficult project. Isn't it funny?
I agree except for the funny part.
I don't have a grand ethical theory, I'm not ready to sit in judgment of anyone else, I'm just deciding what to do for my own account. There's a reason I ended the post with "Dentin's prayer of the altruistic nihilist"; that's how I feel, at least sometimes. I choose to care about information-processing systems that are (or "perceive themselves to be"?) conscious in a way that's analogous to how humans do that, with details still uncertain. I want them to be (or "to perceive themselves to be"?) happy and have awesome futures. So here I am :-D
Thanks for describing this. I'm both impressed and a bit shocked that you're being consistent.
This is a pretty weird claim, right? I mean, you remember writing down the statement. Would you agree with that claim? No way, right?
Let's assume I do. (I think I would have agreed a few years ago, or at least assigned significant probability to this.) I still think (and thought then) that there is a slam-dunk chain from 'I experience consciousness' to 'therefore, consciousness exists'.
Let $E$ := "I experience consciousness" and $C$ := "consciousness exists". Clearly $E \Rightarrow C$, because experiencing anything is already sufficient for what I call consciousness. Furthermore, clearly $E$ is true. Hence $C$ is true. Nothing about your Claim contradicts any step of this argument.
I think the reason why intuitions differ so much on this topic is that we are comparing very low-probability theories against each other, and the question is which one is lower. (And operations with low numbers are prone to higher errors than operations with higher numbers.) At least my impression (correct me if I'm wrong) is that the subjective proof of consciousness would be persuasive, except that it seems to imply Claim, and Claim is a no-go, so therefore the subjective proof has to give in. I.e., you have both $P(\text{Claim}) \approx 0$ and $P(\neg E \vee \neg(E \Rightarrow C)) \approx 0$, and therefore whichever you judge lower has to give.
My main point is that it doesn't make sense to assign anything lower probability than $E$ and $E \Rightarrow C$, because $E$ is immediately proven by the fact that you experience stuff, and $E \Rightarrow C$ is true by the definition of $C$, so it is utterly trivial. You can make a coherent-sounding (if far-fetched) argument for why Claim is true, but I'm not familiar with any coherent argument that $E$ or $E \Rightarrow C$ is false (other than that it must be false because of what it implies, which is again the argument above).
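One way to see why that ordering is forced (a small sketch; the inequality just restates the argument above rather than adding anything new): since $E$ together with $E \Rightarrow C$ entails $C$,
$$P(C) \;\ge\; P(E \wedge (E \Rightarrow C)) \;\ge\; 1 - P(\neg E) - P(\neg (E \Rightarrow C)),$$
so $C$ can only be doubted to the extent that $E$ or $E \Rightarrow C$ is itself doubted.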
My probabilities (not adjusted for the fact that one of them must be true) look something like this:
- $E$ or $E \Rightarrow C$ is false
- Consciousness is an emergent phenomenon. (I.e., matter is unconscious, but consciousness appears as a result of information processing and has no causal effect on the world. This would imply Claim.)
- Something weird like dual-aspect monism (consciousness and materialism are two views on the same process; in particular, all matter is conscious).
Hence what I said earlier: I don't believe Claim right now because I think there is actually a not-super-low-probability explanation, but even if there weren't, it would still not change anything, because $P(\text{Claim is true})$ is a lot more than $P(E \text{ or } E \Rightarrow C \text{ is false})$. I do remember finding EY's anti-p-zombie post persuasive, although it's been years since I've read it.
I can't say I understand it very well either, and see also Luke's report Appendix F and Joe's blog post. From where I'm at right now, there's a set of phenomena that people describe using words like "consciousness" and "qualia", and nothing we say will make those phenomena magically disappear. However, it's possible that those phenomena are not what they appear to be.
We all perceive that we have qualia. You can think of statements like "I perceive X" as living on a continuum, like a horizontal line. On the left extreme of the line, we can perceive things because those things are out there in the world and our senses are accurately and objectively conveying them to us. On the right extreme of the line, we can perceive things because of quirks of our perceptual systems.
I think that's just dodging the problem, since any amount of subjective experience is enough for $C$. The question isn't how accurately your brain reports on the outside world, it's why you have subjective experience of any kind.
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2020-12-31T18:50:01.491Z · LW(p) · GW(p)
Thanks! I'm sympathetic to everything you wrote, and I don't have a great response. I'd have to think about it more. :-D
comment by Pattern · 2020-01-11T07:14:12.126Z · LW(p) · GW(p)
Errata:
school if thought
school of thought
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2020-01-11T09:23:16.341Z · LW(p) · GW(p)
Fixed it, thanks!