I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

post by Jemist · 2021-10-29T11:09:20.559Z · LW · GW · 107 comments

Contents

  "Higher" Consciousness
    Asleep Human < Awake Human < Human on Psychedelics/Zen Meditation
  Inner Listeners
  Confidence
  Conclusions
None
106 comments

From Twitter:

I'd say that I "don't understand" why the people who worry that chickens are sentient and suffering, don't also worry that GPT-3 is sentient and maybe suffering; but in fact I do understand, it's just not a charitable understanding. Anyway, they're both unsentient so no worries.

His overall thesis is spelt out in full here but I think the key passages are these ones:

What my model says is that when we have a cognitively reflective, self-modely thing, we can put very simple algorithms on top of that — as simple as a neural network having its weights adjusted — and that will feel like something, there will be something that it is like that thing to be, because there will be something self-modely enough to feel like there’s a thing happening to the person-that-is-this-person.

So I would be very averse to anyone producing pain in a newborn baby, even though I’d be truly shocked (like, fairies-in-the-garden shocked) to find them sentient, because I worry that might lose utility in future sentient-moments later.

I’m not totally sure people in sufficiently unreflective flow-like states are conscious, and I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious.

I'm currently very confident on the following things, and I'm pretty sure EY is too:

  1. Consciousness (having qualia) exists and humans have it
  2. Consciousness isn't an epiphenomenon
  3. Consciousness is a result of how information is processed in an algorithm, in the most general sense: a simulation of a human brain is just as conscious as a meat-human

EY's position seems to be that self-modelling is both necessary and sufficient for consciousness. But I don't ever see him putting forward a highly concrete thesis for why this is the case. He is correct that his model has more moving parts than other models. But having more moving parts only makes sense if it's actually good at explaining observed data. And we only have one datapoint, which is that adult humans are conscious. Or do we?

"Higher" Consciousness

We actually have a few datapoints here. An ordering of consciousness as reported by humans might be:

Asleep Human < Awake Human < Human on Psychedelics/Zen Meditation

I don't know if EY agrees with this. From his beliefs he might say something along the lines of "having more thoughts doesn't mean you're more conscious". Given his arguments about babies, I'm pretty sure he thinks that you can have memories of times when you weren't conscious, and then consciously experience those things in a sort of "second hand" way by loading up those memories.

Now a lot of Zen meditation involves focusing on your own experiences, which seems like self-modelling. However something else I notice here is the common experience of "ego death" while using psychedelics and in types of meditation. Perhaps EY has a strong argument that this in fact requires more self-modelling than previous states. On the other hand, he might argue that consciousness is on/off, and then amount of experience is unrelated to whether or not those experiences are being turned into qualia.

I'm trying to give potential responses to my arguments, but I don't want to strawman EY so I ought to point out that there are lots of other counter-arguments to this he might have, which might be more insightful than my imagined ones.

Inner Listeners

EY talks a lot about "inner listeners", and mentions that a good theory should be able to have them arise naturally in some way. I agree with this point, and I do agree that his views provide a possible explanation as to what produces an inner listener.

Where I disagree is that we 100% need a separate "information processing" and "inner listener" module. The chicken-conscious, GPT-3-unconscious model seems to make sense from the following perspective:

Some methods of processing input data cause consciousness and some don't. We know that chickens process input data in a very similar way to humans (by virtue of being made of neurons) and we know that GPT-3 doesn't process information in that way (by virtue of not being made of neurons). I guess this is related to the binding problem.

Confidence

But what surprises me the most about EY's position is his confidence in it. He claims to have never seen any good alternatives to his own model. But that's simply a statement about the other beliefs he's seen, not a statement about all hypothesis-space. I even strongly agree with the first part of his original tweet! I do suspect most people who believe chickens are conscious but GPT-3 isn't believe it for bad reasons! And the quality of replies is generally poor.

EY's argument strikes me as oddly specific. There are lots of things which human brains do (or we have some uncertainty of them doing) which are kind of weird:

EY has picked out one thing (self modelling) and decided that it alone is the source of consciousness. Whether or not he has gone through all the weird and poorly-understood things brains do and ruled them out, I don't know. Perhaps he has. But he doesn't mention it in the thesis that he links to to explain his beliefs. He doesn't even mention that he's conducted such a search, the closest thing to that being references to his own theory treating qualia as non-mysterious (which is true). I'm just not convinced without him showing his working!

Conclusions

I am confused, and at the end of the day that is a fact about me, not about consciousness. I shouldn't use my own bamboozlement as strong evidence that EY's theory is false. On the other hand, the only evidence available (in the absence of experimentation) for an argument not making sense is that people can't make sense of it.

I don't think EY's theory of consciousness is completely absurd. I put about 15% credence in it. I just don't see what he's seeing that elevates it to being totally overwhelmingly likely. My own uncertainty is primarily due to the lack of truly good explanations I've seen of the form "X could cause consciousness", combined with the lack of strong arguments made of the form "Here's why X can't be the cause of consciousness". Eliezer sort of presents the first but not the second.

I would love for someone to explain to me why chickens are strongly unlikely to be conscious, so I can go back to eating KFC. I would also generally like to understand consciousness better.

107 comments

Comments sorted by top scores.

comment by So8res · 2021-10-30T04:07:30.249Z · LW(p) · GW(p)

Instrumental status: off-the-cuff reply, out of a wish that more people in this community understood what the sequences have to say about how to do philosophy correctly (according to me).

EY's position seems to be that self-modelling is both necessary and sufficient for consciousness.

That is not how it seems to me. My read of his position is more like: "Don't start by asking 'what is consciousness' or 'what are qualia'; start by asking 'what are the cognitive causes of people talking about consciousness and qualia', because while abstractions like 'consciousness' and 'qualia' might turn out to be labels for our own confusions, the words people emit about them are physical observations that won't disappear. Once one has figured out what is going on, they can plausibly rescue the notions of 'qualia' and 'consciousness', though their concepts might look fundamentally different, just as a physicist's concept of 'heat' may differ from that of a layperson. Having done this exercise at least in part, I (Nate's model of Eliezer) assert that consciousness/qualia can be more-or-less rescued, and that there is a long list of things an algorithm has to do to 'be conscious' / 'have qualia' in the rescued sense. The mirror test seems to me like a decent proxy for at least one item on that list (and the presence of one might correlate with a handful of others, especially among animals with similar architectures to ours)."

An ordering of consciousness as reported by humans might be:

Asleep Human < Awake Human < Human on Psychedelics/Zen Meditation

I don't know if EY agrees with this.

My model of Eliezer says "Insofar as humans do report this, it's a fine observation to write down in your list of 'stuff people say about consciousness', which your completed theory of consciousness should explain. However, it would be an error to take this as much evidence about 'consciousness', because it would be an error to act like 'consciousness' is a coherent concept when one is so confused about it that they cannot describe the cognitive antecedents of human insistence that there's an ineffable redness to red."

But what surprises me the most about EY's position is his confidence in it.

My model of Eliezer says "The type of knowledge I claim to have, is knowledge of (at least many components of) a cognitive algorithm that looks to me like it codes for consciousness, in the sense that if you were to execute it then it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same way that we are. From this epistemic vantage point, I can indeed see clearly that consciousness is not much intertwined with predictive processing, nor with the "binding problem", etc. I have not named the long list of components that I have compiled, and you, who lack such a list, may well not be able to tell what consciousness is or isn't intertwined with. However, you can still perhaps understand what it would feel like to believe you can see (at least a good part of) such an algorithm, and perhaps this will help you understand my confidence. Many things look a lot more certain, and a lot less confusing, once you begin to see how to program them."

Replies from: Jemist, Ape in the coat, MichaelStJules, EI
comment by Jemist · 2021-10-30T18:40:20.393Z · LW(p) · GW(p)

I'm confident your model of Eliezer is more accurate than mine.

Neither the twitter thread or other writings originally gave me the impression that he had a model in that fine-grained detail. I was mentally comparing his writings on consciousness to his writings on free will. Reading the latter made me feel like I strongly understood free will as a concept, and since then I have never been confused, it genuinely reduced free will as a concept in my mind.

His writings on consciousness have not done anything more than raise that model to the same level of possibility as a bunch of other models I'm confused about. That was the primary motivation for this post. But now that you mention it, if he genuinely believes that he has knowledge which might bring him closer to (or might bring others closer to to) programming a conscious being, I can see why he wouldn't share it in high detail.

comment by Ape in the coat · 2021-10-30T08:38:01.802Z · LW(p) · GW(p)

While I agree with mostly everything your model of Eliezer said, I do not feel less confused about how Eliezer arrives to a conclusion that most animals are not conscious. Granted, I may, and probably actually am, lacking an important insight in the matter, but than it will be this insight that allows me to become less confused and I wish Eliezer shared it.

When I'm thinking about a thought process that allows to arrive to such a conclusion I imagine something like this. Consciousness is not fundamental but it feels like it is. That's why we intuitively apply concepts such as quantity towards consciousness, thinking about more or less conscious creatures as being more or less filled with conscious-fluid as we previously though about flogiston or caloric fluid. But this intuition is confused and leads us astray. Consciousness is a result of a specific cognitive algorithm. This algorithm can either be executed or not. There are good reasons to assume that such algorithm would be developped by evolution only among highly social animals as such conditions lead to necessity to model other creatures modelling yourself.

And I see an obvious problem with this line of thoughts. Reversed confusion isn't insight [LW · GW]. Our confused intuition which leads us to quantifying consciousness may be wrong, but it isn't necessary wrong. If anything, the idea that consciousness isn't quantifiable is also originally based on the idea of consciousness being fundamental. Think about ancient hebrews who claimed that animals didn't have souls. There are lots of bad reasons to think that farm animals are ethically irrelivant, indeed it would be super convinient, considered how tasty is their meat. That doesn't automatically mean that they are ethically relevant, just hints at the possibility.

We can think about hearing, or vision, or sense of smell. They are not fundamental. They are the result of a specific algorithm executed by our brain. Yet we can quantify them. Quantifying them actually makes a lot of sense, considered that evolution works incrementally. Why can't it be the same for consciousness?

Replies from: So8res
comment by So8res · 2021-10-30T14:41:34.764Z · LW(p) · GW(p)

I don't think the thought process that allows one to arrive at (my model of) Eliezer's model looks very much like your 2nd paragraph. Rather, I think it looks like writing down a whole big list of stuff people say about consciousness, and then doing a bunch of introspection in the vicinity, and then listing out a bunch of hypothesized things the cognitive algorithm is doing, and then looking at that algorithm and asking why it is "obviously not conscious", and so on and so forth, all while being very careful not to shove the entire problem under the rug in any particular step (by being like "and then there's a sensor inside the mind, which is the part that has feelings about the image of the world that's painted inside the head" or whatever).

Assuming one has had success at this exercise, they may feel much better-equipped to answer questions like "is (the appropriate rescuing of) consciousness more like a gradient quantity or more like a binary property?" or "are chickens similarly-conscious in the rescued sense?". But their confidence wouldn't be coming from abstract arguments like "because it is an algorithm, it can either be executed or not" or "there are good reasons to assume it would be developed by evolution only among social animals"; their confidence would be coming from saying "look, look at the particular algorithm, look at things X, Y, and Z that it needs to do in particular, there are other highly-probable consequences of a mind being able to do X, Y, and Z, and we difinitively observe those consequences in humans, and observe their absence in chickens."

You might well disbelieve that Eliezer has such insight into cognitive algorithms, or believe he made a mistake when he did his exercise! But hopefully this sheds some light on (what I believe is) the nature of his confidence.

comment by MichaelStJules · 2021-10-30T08:13:09.836Z · LW(p) · GW(p)

Thanks, this is helpful.

what are the cognitive causes of people talking about consciousness and qualia

Based on the rest of your comment, I'm guessing you mean talk about consciousness and qualia in the abstract and attribute them to themselves, not just talk about specific experiences they've had.

a cognitive algorithm that looks to me like it codes for consciousness, in the sense that if you were to execute it then it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same way that we are

Why use the standard of claiming to be conscious/have qualia? That is one answer that gets at something that might matter, but why isn't that standard too high?

For example, he wrote:

I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious.

If this proposition is false, we need to allow unsymbolized (non-verbal) ways to self-attribute consciousness for self-attributing consciousness to matter in itself, right? Would (solidly) passing the mirror test be (almost) sufficient at this point? There's a visual self-representation, and an attribution of the perception of the mark to this self-representation. What else would be needed?

Would it need to non-symbolically self-attribute consciousness generally, not just particular experiences? How would this work?

If the proposion is true, doesn't this just plainly contradict our everyday experiences of consciousness? I can direct my attention towards things other than wondering whether or not I'm conscious (and towards things other than and unrelated to my inner monologue), while still being conscious, at least in a way that still matters to me that I wouldn't want to dismiss. We can describe our experiences without wondering whether or not we're having (or had) them.

it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same way that we are

What kinds of reasons? And what would being correct look like?

If unsymbolized self-attribution of consciousness is enough, how would we check just for it? The mirror test?

Replies from: So8res
comment by So8res · 2021-10-30T15:39:21.161Z · LW(p) · GW(p)

Based on the rest of your comment, I'm guessing you mean talk about consciousness and qualia in the abstract and attribute them to themselves, not just talk about specific experiences they've had.

If I were doing the exercise, all sorts of things would go in my "stuff people say about consciousness" list, including stuff Searl says about chinese rooms, stuff Chalmers says about p-zombies, stuff the person on the street says about the ineffable intransmissible redness of red, stuff schoolyard kids say about how they wouldn't be able to tell if the color they saw as green was the one you saw as blue, and so on. You don't need to be miserly about what you put on that list.

Why use the standard of claiming to be conscious/have qualia? That is one answer that gets at something that might matter, but why isn't that standard too high?

Mostly (on my model) because it's not at all clear from the getgo that it's meaningful to "be conscious" or "have qualia"; the ability to write an algorithm that makes the same sort of observable-claims that we make, for the same cognitive reasons, demonstrates a mastery of the phenomenon even in situations where "being conscious" turns out to be a nonsense notion.

Note also that higher standards on the algorithm you're supposed to produce are more conservative: if it is meanigful to say that an algorithm "is conscious", then producing an algorithm that is both conscious, and claims to be so, for the same cognitive reasons we do, is a stronger demonstration of mastery than isolating just a subset of that algorithm (the "being conscious" part, assuming such a thing exists).

I'd be pretty suspicious of someone who claimed to have a "conscious algorithm" if they couldn't also say "and if you inspect it, you can see how if you hook it up to this extra module here and initialize it this way, then it would output the Chinese Room argument for the same reasons Searl did, and if you instead initialize it that way, then it outputs the Mary's Room thought experiment for the same reason people do". Once someone demonstrated that sort of mastery (and once I'd verified it by inspection of the algorithm, and integrated the insights therefrom), I'd be much more willing to trust them (or to operate the newfound insights myself) on questions of how the ability to write philosophy papers about qualia relates to the ability of the mind to feel, but the qualifying bar for "do you have a reductionist explanation of consciousness" is "can you show me how to build something that produces the observations we set out to explain in the first place (people talking about 'consciousness') fo rthe same cognitive reasons?".

Note further that demonstrating an algorithm that produces the same sort of claims humans do (eg, claims about the redness of red) for the same cognitive reasons, is not the same thing as asserting that everything "with consciousness/qualia" must make similar claims.

If this proposition is false, we need to allow unsymbolized (non-verbal) ways to self-attribute consciousness for self-attributing consciousness to matter in itself, right?

My model of Eliezer says "In lieu of an algorithmic account of the cognitive antecedents of people insisting they are conscious, that sort of claim is not even wrong." (And similarly with various other claims in that section.) My model continues: "You seem to me to be trying to do far more with the word 'consciousness' than your understanding of the phenomenon permits. I recommend doing less abstract reasoning about how 'consciousness' must behave, and more thinking about the cognitive causes behind the creation of the Mary's Room hypothetical."

What kinds of reasons?

My model says: "The list of reasons is not particularly small, in this case."

And what would being correct look like?

"The claim is correct if the actual cognitive reasons for Searl inventing the Chinese Room hypothetical, are analogous to the cognitive reasons that the alleged algorithm invents the Chinese Room hypothetical, and so on and so forth.

"This is of course difficult to check directly. However, fairly strong evidence of correctness can be attained by reading the algorithm and imagining its execution. Just as you can stare at the gears of a watch until you understand how their interactions makes the watch-hands tick, at which point you can be justifiably confident that you understand the watch, you should be able to stare at a cognitive algorithm explaining 'consciousness' until you understand how its execution makes things like 'inner listeners' 'experiencing redness' (in a suitably rescued sense), at which point you can be justifiably confident that you understand experience.

"Your fellow tribemembers, who have not understood how gears can drive the hands of a watch, might doubt your claim, saying 'There are many theories of how the watch works, ranging from internal gears to external solar radiation to the whims of the spirits. How are you so confident that it is the turning of little gears, nevermind this specific mechanism that you claim you can sketch out in the dirt?'. And you could rightly reply, 'When we unscrew the back, we see gears. And there is an arrangement of gears, that I understand, that by inspection would tick the hands in just the way we observe the hands to tick. And while I have not fully taken the watch apart, the visible features of the gears we can see when we unscrew the back, match the corresponding properties of my simple gear mechanism. This is enough for me to be pretty confident that something like my mechanism, which I understand and which clealry by inspection ticks watch-hands, governs the watch before us."

Replies from: MichaelStJules
comment by MichaelStJules · 2021-10-30T21:55:58.893Z · LW(p) · GW(p)

Shouldn't mastery and self-awareness/self-modelling come in degrees? Is it necessary to be able to theorize and come up with all of the various thought experiments (even with limited augmentation from extra modules, different initializations)? Many nonhuman animals could make some of the kinds of claims we make about our particular conscious experiences for essentially similar reasons, and many demonstrate some self-awareness in ways other than by passing the mirror test (and some might pass a mirror test with a different sensory modality, or with some extra help, although some kinds of help would severely undermine a positive result), although I won't claim the mirror test is the only one Eliezer cares about; I don't know what else he has in mind. It would be helpful to see a list of the proxies he has in mind and what they're proxies for.

 

EY: I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious.

Me: If this proposition is false, we need to allow unsymbolized (non-verbal) ways to self-attribute consciousness for self-attributing consciousness to matter in itself, right?

You: My model of Eliezer says "In lieu of an algorithmic account of the cognitive antecedents of people insisting they are conscious, that sort of claim is not even wrong." (And similarly with various other claims in that section.) My model continues: "You seem to me to be trying to do far more with the word 'consciousness' than your understanding of the phenomenon permits. I recommend doing less abstract reasoning about how 'consciousness' must behave, and more thinking about the cognitive causes behind the creation of the Mary's Room hypothetical."

To make sure I understand correctly, it's not the self-attribution of consciousness and other talk of consciousness like Mary's Room that matter in themselves (we can allow some limited extra modules for that), but their cognitive causes. And certain (kinds of) cognitive causes should be present when we're "reflective enough for consciousness", right? And Eliezer isn't sure whether wondering whether or not he's conscious is among them (or a proxy/correlate of a necessary cause)?

comment by EI · 2021-10-31T03:59:57.549Z · LW(p) · GW(p)

The mirror test seems to me like a decent proxy for at least one item on that list 

This is merely a bias on our own part as humans. I think people are confusing consciousness with self-awareness. They are completely different things. Consciousness is the OS that runs on the meat machine. Self-awareness is an algorithm that runs on the OS. All meat machines that run this OS have different algorithms for different functions. Some may not have any self-awareness algorithm running, some may have something similar but not exactly the same as our own self-awareness algorithm. That's where the mirror test fails. We can only observe the who-knows-how-many-levels of causality that lead to those animals to show or not show self-aware behaviors in front of a mirror. We can't say anything consequential about the actual algorithm(s) running on their OS when they stand in front of a mirror. We are just running our own set of self-awareness algorithms when we stand in front of a mirror. It seems like these algorithms change according to evolution, just like other systems within the multicellular ecosystem that make up the individual organisms. We often see animals that demonstrate these "self-aware" traits because of similar evolutionary conditions, like cats and dogs have evolved to run a lot of socializing algorithms that mingle well with our own social algorithms.

Whether the self-reflective aspect of running these algorithms on our own OS makes one feel certain way about eating meat is in and of itself the result of the relationship between multi-threading the self-aware part and the self-preservation part in terms of labeling kins and such.

At this point we aren't even conclusive about where to draw the boundary between hardware and software. We end up distinguishing between OS and simple firmware as conscious and unconscious. We mostly reduce the firmware down to simple physical reactions by the laws of physics while the OS exhibits something magical beyond those physical reactions in simpler systems. Is there something truly different that sets OS apart from firmware, or is it just our lack of understanding of the underlying mechanics? This of course touches upon the argument of determinism, which is just looking at the same systems differently.

comment by MichaelStJules · 2021-10-29T15:48:45.886Z · LW(p) · GW(p)

I don't think it's obvious that nonhuman animals, including the vertebrates we normally farm for food, don't self-model (at least to some degree). I think it hasn't been studied much, although there seems to be more interest now. Absence of evidence is at best weak evidence of absence, especially when there's been little research on the topic to date. Here's some related evidence, although maybe some of this is closer to higher-order processes than self-modelling in particular:

  1. See the discussion of Attention Schema Theory here (section "Is an attention schema evolutionarily old or unique to humans?") by the inventor of that theory, Graziano, in response to Dennett's interpretation of the theory applied to nonhuman animals (in which he also endorses the theory as "basically right"!). Basically, AST requires the individual to have a model of their own attention, an "attention schema".
    1. Dennett wrote "Dogs and other animals do exhibit some modest capacities for noticing their noticings, but we humans have mental lives that teem with such episodes – so much so that most people have never even imagined that the mental lives of other species might not be similarly populated", and then expands further.
    2. In Graziano's response: "Any creature that can endogenously direct attention must have some kind of attention schema, and good control of attention has been demonstrated in a range of animals including mammals and birds (e.g., Desimone & Duncan, 1995; Knudsen, 2018; Moore & Zirnsak, 2017). My guess is that most mammals and birds have some version of an attention schema that serves an essentially similar function, and contains some of the same information, as ours does. Just as other animals must have a body schema or be condemned to a flailing uncontrolled body, they must have an attention schema or be condemned to an attention system that is purely at the mercy of every new sparkling, bottom-up pull on attention. To control attention endogenously implies an effective controller, which implies a control model."
  2. Dogs (Canis familiaris) recognize their own body as a physical obstacle (pop-sci article)
  3. Pigs learn what a mirror image represents and use it to obtain information. EDIT: there was a replication experiment, in which only 1 of 11 mirror-experienced piglets used the detour that the mirror would help them find, and none of the 11 mirror-naive did.
  4. I think the evidence for episodic(-like) memory in nonhuman animals is getting better, particularly with more unexpected question tests, which often ask about what the animals did (although I supposed this wouldn't necessarily require a self-model, depending on the details):
    1. Mental representation and episodic-like memory of own actions in dogs
    2. Animal models of episodic memory (see the section "Incidental Encoding and Unexpected Questions")
    3. Episodic-like memory of rats as retrospective retrieval of incidentally encoded locations and involvement of the retrosplenial cortex
    4. Experiments with pigeons and rats are discussed in the section "The unexpected question" in Animals Represent the past and the Future.
  5. I left a comment here [EA(p) · GW(p)], with some other weak evidence, e.g. animals being trained to communicate their emotions in different ways (see also this post [LW · GW]), which I think would require them to be able to discriminate between their internal emotional states, i.e. their emotions are inputs to executive functions like (top-down/selective) attention, learning and memory. Also, cows may become excited by their own learning.
    1. EDIT: Finally found the paper on pigs generalizing the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). This experiment and similar experiments performed on rodents are discussed here, in section 4.d., starting on p.81 (and some other discussion of them earlier). For example, rats generalized from hangover to morphine withdrawal and jetlag, and from high doses of cocaine to movement restriction, from an anxiety-inducing drug to aggressive defeat and predator cues. Of course, anxiety has physical symptoms, so maybe this is what they're discriminating, not the negative affect.
  6. In general, I think there's more recent research on nonhuman metacognition and mental representation, although I haven't followed this closely, so I can't really tell you what's up. There are some recent reviews on metacognition here.
Replies from: MichaelStJules, MichaelStJules
comment by MichaelStJules · 2021-10-29T17:17:02.329Z · LW(p) · GW(p)

I also don't think GPT-3 has emotions that are inputs to executive functions, like learning, memory, control, etc.

comment by MichaelStJules · 2021-10-30T09:45:52.838Z · LW(p) · GW(p)

Of course, many animals have failed the mirror test, and that is indeed evidence of absence for those animals. Still,

  1. Animals could just be too dumb (or rely too little on vision) to understand mirrors, but still self-model in other ways, like in my top comment. Or, they might at least tell themselves apart from others in the mirrors as unique, without recognizing themselves, like some monkeys and pigeons. Pigeons can pick out live and 5-7 second delayed videos of themselves from prerecorded ones.
  2. Animals might not care about the marks. Cleaner wrasse, a species of fish, did pass the mirror test (the multiple phases, including the final self-directed behaviour with the visible mark), and they are particularly inclined to clean things (parasites) that look like the mark, which is where they get their name. I think the fact that they are inclined to clean similar looking marks was argued to undermine the results, but that seems off to me.
  3. I would be interested in seeing the mirror test replicated in different sensory modalities, e.g. something that replays animals' smells or sounds back to them, a modification near the source in the test condition, and checking whether they direct behaviour towards themselves to investigate.
    1. Some criticisms of past scent mirror tests are discussed here (paper with criticism here). The issues were addressed recently here with wolves. Psychology Today summary.
    2. I think animals are more likely to show body (touch, pain) awareness and have a related self-representation (a body schema?). For example, mice get the rubber hand (tail) illusion. From having their tail and the rubber tail just stroked together, they extend their expectations of having their tail grasped to the rubber tail.
comment by Rob Bensinger (RobbBB) · 2021-10-29T17:41:38.532Z · LW(p) · GW(p)

EY's position seems to be that self-modelling is both necessary and sufficient for consciousness.

Necessary, not sufficient. I don't think Eliezer has described what he thinks is sufficient (and maybe he doesn't know what's sufficient -- i.e., I don't know that Eliezer thinks he could build a conscious thing from scratch).

comment by Rob Bensinger (RobbBB) · 2021-10-31T16:38:34.247Z · LW(p) · GW(p)

I've collected my thoughts + recent discussions on consciousness and animal patienthood here: https://www.lesswrong.com/posts/TkahaFu3kb6NhZRue/quick-general-thoughts-on-suffering-and-consciousness. [LW · GW] I don't have the same views as Eliezer, but I'm guessing me talking about my views here will help make it a little clearer why someone might not think this way of thinking about the topic is totally wrong.

comment by Jacob Falkovich (Jacobian) · 2021-10-30T02:05:07.739Z · LW(p) · GW(p)

Copying from my Twitter response to Eliezer

Anil Seth usefully breaks down consciousness into 3 main components: 
1. level of consciousness (anesthesia < deep sleep < awake < psychedelic)
2. contents of consciousness (qualia — external, interoceptive, and mental)
3. consciousness of the self, which can further be broken down into components like feeling ownership of a body, narrative self, and a 1st person perspective. 

He shows how each of these can be quite independent. For example, the selfhood of body-ownership can be fucked with using rubber arms [LW · GW] and mirrors, narrative-self breaks with amnesia, 1st person perspective breaks in out-of-body experiences which can be induced in VR, even the core feeling of the reality of self can be meditated away. 

Qualia such as pain are also very contextual: the same physical sensation can be interpreted positively in the gym or a BDSM dungeon, and as acute suffering if it's unexpected and believed to be caused by injury. Being a self, or thinking about yourself, is also just another perception — a product of your brain's generative model of reality — like color or pain are. I believe enlightened monks who say they experience selfless bliss, and I think it's equally likely that chickens experience selfless pain.

Eliezer seems to believe that self-reflection or some other component of selfhood is necessary for the existence of the qualia of pain or suffering. A lot of people believe this simply because they use the word "consciousness" to refer to both (and 40 other things besides). I don't know if Eliezer is making such a basic mistake, but I'm not sure why else he would believe that selfhood is necessary for suffering.

Replies from: gjm, SaidAchmiz, TAG
comment by gjm · 2021-10-30T15:19:33.042Z · LW(p) · GW(p)

I agree with pretty much all of that but remark that "deep sleep < awake < psychedelic" is not at all clearly more correct than "deep sleep < psychedelic < awake". You may feel more aware/conscious/awake/whatever when under the effects of psychedelic drugs, but feeling something doesn't necessarily make it so.

Replies from: Jacobian
comment by Jacob Falkovich (Jacobian) · 2021-10-30T17:07:57.441Z · LW(p) · GW(p)

The ordering is based on measures of neuro-correlates of the level of consciousness like neural entropy or perturbational complexity, not on how groovy it subjectively feels.

Replies from: gjm
comment by gjm · 2021-10-30T20:15:51.794Z · LW(p) · GW(p)

It would seem a bit optimistic to call anything a "neuro-correlate of the level of consciousness" simply on the basis that it's higher for ordinary waking brains than for ordinary sleeping brains. Is there more evidence than that for considering neural entropy or perturbational complexity to be measures of "the level of consciousness"?

(My understanding is that in some sense they're measuring the amount of information, in some Shannonesque sense, in the state of the brain. Imagine doing something like that with a computer. The figure will -- at least, for some plausible ways of doing it -- be larger when the computer is actively running some software than when it's idle, and you might want to say "aha, we've found a measure of how much the computer is doing useful work". But it's even larger if you arrange to fill its memory with random bits and overwrite them with new random bits once a second, even though that doesn't mean doing any more useful work. I worry that psychedelics might be doing something more analogous to that than to making your computer actually do more.)
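gjm's computer analogy can be made concrete with a toy calculation (this is my own illustration, not anything from the neuroscience literature, and byte-level Shannon entropy is of course a far cruder measure than perturbational complexity): random noise maximizes entropy-style measures even though it corresponds to no useful computation at all.

```python
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0 to 8)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Repetitive "structured" data, standing in for a brain/computer doing
# ordinary organized work: entropy well below the 8-bit maximum.
structured = b"the quick brown fox jumps over the lazy dog " * 200

# Random bytes, standing in for "overwrite memory with random bits":
# entropy near the 8-bit maximum, despite doing no useful work.
random_noise = os.urandom(len(structured))

print(byte_entropy(structured))   # noticeably below 8
print(byte_entropy(random_noise)) # close to 8
```

The point of the sketch is just that a raw information measure cannot, by itself, distinguish "doing more" from "being more random" — which is exactly the worry raised above about psychedelic states.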

comment by Said Achmiz (SaidAchmiz) · 2021-10-30T16:58:44.136Z · LW(p) · GW(p)

Eliezer seems to believe that self-reflection or some other component of selfhood is necessary for the existence of the qualia of pain or suffering. A lot of people believe this simply because they use the word “consciousness” to refer to both (and 40 other things besides). I don’t know if Eliezer is making such a basic mistake, but I’m not sure why else he would believe that selfhood is necessary for suffering.

It is not my impression that Eliezer believes any such thing for pain, only (perhaps) for suffering. It’s important not to conflate these.

It seems clear to me, at least, that consciousness (in the “subjective, reflective self-awareness” sense) is necessary for suffering; so I don’t think that Eliezer is making any mistake at all (much less a basic mistake!).

Being a self, or thinking about yourself, is also just another perception — a product of your brain’s generative model of reality — like color or pain are

The word “just” is doing a heck of a lot of work here.

I think it’s equally likely that chickens experience selfless pain

Chickens perhaps have “selfless pain”, but to say that they experience anything at all is begging the question!

comment by TAG · 2021-10-30T13:16:54.628Z · LW(p) · GW(p)

Eliezer seems to believe that self-reflection or some other component of selfhood is necessary for the existence of the qualia of pain or suffering. A lot of people believe this simply because they use the word “consciousness” to refer to both (and 40 other things besides). I don’t know if Eliezer is making such a basic mistake, but I’m not sure why else he would believe that selfhood is necessary for suffering

I strongly support this. If you are going to explain away qualia as the result of having a self-model, you need to do more than note that they occur together, or that "conscious" could mean either.

comment by Michael Edward Johnson (michael-edward-johnson) · 2021-10-31T17:37:54.594Z · LW(p) · GW(p)

“By their fruits you shall know them.”

A frame I trust in these discussions is trying to elucidate the end goal. What does knowledge about consciousness look like under Eliezer’s model? Under Jemist’s? Under QRI’s?

Let’s say you want the answer to this question enough you go into cryosleep with the instruction “wake me up when they solve consciousness.” Now it’s 500, or 5000, or 5 million years in the future and they’ve done it. You wake up. You go to the local bookstore analogue, pull out the Qualia 101 textbook and sit down to read. What do you find in the pages? Do you find essays on how we realized consciousness was merely a linguistic confusion, or equations for how it all works?

As I understand Eliezer’s position, consciousness is both (1) a linguistic confusion (leaky reification) and (2) the seat of all value. There seems a tension here, that would be good to resolve since the goal of consciousness research seems unclear in this case. I notice I’m putting words in peoples’ mouths and would be glad if the principals could offer their own takes on “what future knowledge about qualia looks like.”

My own view is if we opened that hypothetical textbook up we would find crisp equations of consciousness, with deep parallels to the equations of physics; in fact the equations may be the same, just projected differently.

My view on the brand of physicalism I believe in, dual aspect monism, and how it constrains knowledge about qualia: https://opentheory.net/2019/06/taking-monism-seriously/

My arguments against analytic functionalism (which I believe Eliezer’s views fall into): https://opentheory.net/2017/07/why-i-think-the-foundational-research-institute-should-rethink-its-approach/

comment by Unreal · 2021-10-29T15:55:14.645Z · LW(p) · GW(p)

I had another complaint about that tweet, which... you do not seem to have, but I want to bring up anyway. 

Why do we assume that 'consciousness' or 'sentience' implies 'morally relevant' ? And that a lack of consciousness (if we could prove that), would also imply 'not morally relevant' ? 

It seems bad to me to torture chickens even if it turns out they aren't self-aware. But lots of people seem to take this as a major crux for them. 

If I torture a permanently brain-damaged comatose person to death, who no one will miss, is that 'fine' ? 

I am angry about this assumption; it seems too convenient. 

Replies from: Benito, Jemist, bn22, Aay17ush
comment by Ben Pace (Benito) · 2021-10-29T17:14:40.187Z · LW(p) · GW(p)

Torturing chickens or brain dead people is upsetting and horrible and distasteful to me. I don’t think it’s causing any direct harm or pain to the chicken/person though.

I still judge a human’s character if they find these things fun and amusing. People watch this kind of thing (torture of humans/other animals) on Netflix all the time, for all sorts of good and bad reasons.

Replies from: Unreal
comment by Unreal · 2021-10-29T18:14:03.632Z · LW(p) · GW(p)

Claim: Many things are happening on a below-consciousness level that 'matter' to a person. And if you disrupted those things without changing a person's subjective experience of them (or did it without their notice), this should still count as harm. 

This idea that 'harm' and the level of that harm is mostly a matter of the subjective experience of that harm goes against my model of trauma and suffering. 

Trauma is stored in the body whether we are conscious of it or not. And in fact I think many people are not conscious of their traumas. I'd still call it 'harm' regardless of their conscious awareness. 

I have friends who were circumcised before they could form memories. They don't remember it. Through healing work or other signs of trauma, they realized that in fact this early surgery was likely traumatic. I think Eliezer is sort of saying that this only counts as harm to the degree that it consciously affects them later or something? I disagree with this take, and I think it goes against moral intuition. (If one sees a baby screaming in pain, the impulse is to relieve their 'pain' even if they might not be having a conscious experience of it.) 

If I take a "non-sentient" chicken and cut off its wings, and I watch it as it helplessly tries to fly repeatedly, but is unable to, this strikes me as a form of harm to the chicken and its values even if the chicken is not having a subjective experience of its condition. 

Also, from my investigations, much suffering does not reach the level of awareness. When a person investigates very closely and zooms in on experiences (such as through meditation), suffering is 'found' to be 'occurring' at a level of granularity and detail that was not previously accessible. But becoming aware of this suffering does not increase the amount of suffering that was occurring; you just become aware of the amount that was already there. It's an "oh" moment. And this can actually help relieve the suffering, by becoming aware of it. 

This suggests that maybe beings who lack the ability of awareness and observation to see their own condition actually are suffering more. This accords with my own journey in relieving personal suffering. More awareness was generally helpful. Whereas as a child, I was more 'braindead' in some way. Not very 'conscious'.

One could make similar inquiries into 'dissociation'. If a person is regularly dissociated and doesn't feel things very intensely, does it make it more okay to hurt them? 

Also my model of pain is that pain != suffering, which might be relevant here. Not sure. 

Replies from: mayleaf, Benito, Benito
comment by mayleaf · 2021-10-29T18:48:29.522Z · LW(p) · GW(p)

If I take a "non-sentient" chicken and cut off its wings, and I watch it as it helplessly tries to fly repeatedly, but is unable to, this strikes me as a form of harm to the chicken and its values even if the chicken is not having a subjective experience of its condition. 


I'm curious how you would distinguish between entities that can be harmed in a morally relevant way and entities that cannot. I use subjective experience to make this distinction, but it sounds like you're using something like -- thwarted intentions? telos-violation? I suspect we'd both agree that chickens are morally relevant and (say) pencils are not, and that snapping a pencil in half is not a morally-relevant action. But I'm curious what criterion you're using to draw that boundary.

One could make similar inquiries into 'dissociation'. If a person is regularly dissociated and doesn't feel things very intensely, does it make it more okay to hurt them? 

This is an interesting point; will think about it more.

comment by Ben Pace (Benito) · 2021-10-29T21:57:24.636Z · LW(p) · GW(p)

Typically in questions of ethics, I factor the problem into two sub-questions:

  • Game theory: ought I care about other agents' values because we have the potential to affect each other?
  • Ben's preferences: do I personally care about this agent and them having their desires satisfied?

For the second, it's on the table whether I care directly about chickens. I think at minimum I care about them the way I care about characters in like Undertale or something, where they're not real but I imbue meaning into them and their lives.

That said it's also on the table to me that a lot of my deeply felt feelings about why it's horrible to be cruel to chickens, are similar to my deeply felt feelings of being terrified when I am standing on a glass bridge and looking down. I feel nauseous and like running and a bit like screaming for fear of falling; and yet there is nothing actually to be afraid of.

If I imagine someone repeatedly playing Undertale to kill all the characters in ways that make the characters maximally 'upset', this seems tasteless and a touch cruel, but not because the characters are conscious. Relatedly, if I found out that someone had built a profitable business that somehow required incidentally running massive numbers of simulations of the worst endings for all the characters in Undertale (e.g. some part of their very complex computer systems had hit an equilibrium of repeatedly computing this, and changing that wasn't a sufficient economic bottleneck to be worth the time/money cost), this would again seem kind of distasteful; but in the present world it would not be very high on my list of things to fix, it would not make the top 1000.

For the first, suppose I do want to engage in game theory with chickens. Then I think all your (excellent) points about consciousness are directly applicable. You're quite right that suffering doesn't need to be conscious, and often I have become aware of a way that I have been averse to thinking about a subject or been scared of a person for no good reason that has been a major impediment in having a great career and great relationships, in ways that are "outside" my conscious experience. (Being more 'braindead'.) I would have immensely appreciated someone helping me realize and fix these things about myself that were outside my conscious awareness.

Insofar as the chickens are having their wings clipped and kept in cages, it's very clear that their intentions and desires are being stunted. On a similar note, I think all the points in Dormin's essay Against Dog Ownership [LW · GW] apply regardless of whether dogs are conscious — that the meaning dogs look for in life is not found in the submissive and lonely inner-city life that most of them experience. These lay out clear ways to be much kinder to a chicken or dog and to respect their desires.

But there is a step of the argument missing here. I think some people believe arguments that claim it's worth engaging in game theory with chickens even if I think they're only as real as characters in Undertale; but I have not read an argument that I find compelling.

The idea is that if we suppose chickens are indeed only as real as Undertale characters, I might still care about them because we have shared goals or something. Here's a very concrete story where that would be the case: if someone made a human-level AI with Sans' personality, and he was working to build a universe kind of like the universe I want to live in with things like LessWrong and Sea Shanties and Dostoyevsky in it, then I would go out of my way to – say – right injustices against him; and I hope he would do the same for me, because I want everyone to know that such agents will be defended by each other.

I think some people believe that humans and chickens have similar goals in this way in the extreme, but I don't agree. I don't think I would have much of a place in a chicken utopia, nor do I expect to find much of value in it.

comment by Ben Pace (Benito) · 2021-10-29T18:35:35.558Z · LW(p) · GW(p)

Btw, coming at it from a different angle: Jessicata raises the hypothesis (in her recent post [LW · GW]) that people put so much weight on 'consciousness' as a determinant of moral weight because it is relatively illegible and they believe outside the realm of things that civilization currently has a scientific understanding of, so that they can talk about it more freely and without the incredibly high level of undercutting and scrutiny that comes to scientific hypotheses. Quote:

Consciousness is related to moral patiency (in that e.g. animal consciousness is regarded as an argument in favor of treating animals as moral patients), and is notoriously difficult to discuss.  I hypothesize that a lot of what is going on here is that:

1. There are many beliefs/representations that are used in different contexts to make decisions or say things.

2. The scientific method has criteria for discarding beliefs/representations, e.g. in cases of unfalsifiability, falsification by evidence, or complexity that is too high.

3. A scientific worldview will, therefore, contain a subset of the set of all beliefs had by someone.

4. It is unclear how to find the rest of the beliefs in the scientific worldview, since many have been discarded.

5. There is, therefore, a desire to be able to refer to beliefs/representations that didn't make it into the scientific worldview, but which are still used to make decisions or say things; "consciousness" is a way of referring to beliefs/representations in a way inclusive of non-scientific beliefs.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-30T17:42:46.061Z · LW(p) · GW(p)

I don't think that was my point exactly. Rather, my point is that not all representations used by minds to process information make it into the scientific worldview, so there is a leftover component that is still cared about. That doesn't mean people will think consciousness is more important than scientific information, and indeed scientific theories are conscious to at least some people.

Separately, many people have a desire to increase the importance of illegible things to reduce constraint, which is your hypothesis; I think this is an important factor but it wasn't what I was saying.

comment by Jemist · 2021-10-29T15:57:41.361Z · LW(p) · GW(p)

Eliezer later states that he is referring to qualia specifically, which for me are (within a rounding error) totally equivalent to moral relevance.

Replies from: Unreal
comment by Unreal · 2021-10-29T16:14:37.495Z · LW(p) · GW(p)

Why is that? You're still tying moral relevance to a subjective experience? 

Replies from: Jemist
comment by Jemist · 2021-10-29T16:41:17.472Z · LW(p) · GW(p)

Basically yes I care about the subjective experiences of entities. I'm curious about the use of the word "still" here. This implies you used to have a similar view to mine but changed it, if so what made you change your mind? Or have I just missed out on some massive shift in the discourse surrounding consciousness and moral weight? If the latter is the case (which it might be, I'm not plugged into a huge number of moral philosophy sources) that might explain some of my confusion.

comment by bn22 · 2021-10-29T18:44:19.528Z · LW(p) · GW(p)

People already implicitly consider your example to be acceptable given that vegetables are held in conditions of isolation that would be considered torture if they were counterfactually conscious and many people support being allowed to kill/euthanize vegetables in cases such as Terry Schiavo's.

comment by Aay17ush · 2021-10-29T16:41:40.967Z · LW(p) · GW(p)

I've often thought about this, and this is the conclusion I've reached.

There would need to be some criterion that separates morality from immorality. Given that, consciousness (i.e. self-modelling) seems like the best criterion given our current knowledge. Obviously, there are gaps (like the comatose patient you mention), but we currently do not have a better metric to latch on to.

Replies from: TAG
comment by TAG · 2021-10-30T13:24:03.476Z · LW(p) · GW(p)

Why wouldn't the ability to suffer be the criterion? Isn't that built into the concept of sentience? "Sentient" literally means "having senses" but is often used as a synonym for "moral patient".

comment by Lance Bush (lance-bush) · 2021-10-29T13:36:46.123Z · LW(p) · GW(p)

I suspect I endorse something like what Yudkowsky seems to be claiming. Essentially, I think that humans are uniquely disposed (at least among life on earth) to develop a kind of self-model, and that nonhuman animals lack the same kind of systems that we have. As a result, whatever type of consciousness they have, I think it is radically unlike what we have. I don’t know what moral value, if any, I would assign to nonhuman animals were I to know more about their mental lives or what type of “consciousness” they have, but I am confident that the current high level of confidence people have that animals have rich conscious experiences is not justified. I wrote an old comment on this I’ve shared a couple of times. Here it is:

I think that what we take to be our conscious experience involves a capacity for "checking in" on an ongoing internal narrative, or story that we are constantly "telling ourselves" that functions to provide a unified timeline which we can utilize, report on, and talk about with others. I think this "narrative center of gravity" requires a degree of cultural input, and the inculcation of specific memes/concepts that lead us to form a sense of a self that integrates our experiences and that can think about "our" past experiences and "our" future experiences. In a sense, I think that conscious experience is built up as a sort of software that we have the hardware to develop, but requires a degree of developmental and cultural input to become fully operational. I don't think animals have or need this capacity. As such, what it is like to be us is something we can talk about, but I am not convinced that there is anything it is "like" to be an animal.

This is a largely Dennettian view of consciousness, and I believe he coined or at least used the term “narrative center of gravity.”

You identify consciousness with having qualia.

However, I don’t know what you mean by qualia. While it remains sensible to me to attribute something like consciousness to humans, I would typically deny that we “have qualia” and would not define consciousness in terms of having qualia. Were others to do so, I’d deny we have that form of qualia. Perhaps Yudkowsky would, too. It really depends on what one means by “consciousness” and “qualia.”

I don’t know exactly what Yudkowsky thinks, so I wouldn’t put a number on it as you do (i.e., 15%). But, I’ll put it this way: I don’t know of any alternatives to something like Dennett/Frankish on illusionism that seem individually more plausible than illusionism. I don’t know if the collective weight of plausibility for all competing hypotheses is enough to push illusionism below 50%, but I don’t think so. So, while I am not overwhelmingly confident that something that seems roughly in the ballpark (if not very similar) to Yudkowsky’s view is correct, I have yet to see any viable alternatives. Most seem weird and to not capture what strike me as important elements of consciousness, or they seem to appeal to intuitions I don’t have and don’t trust in others.

Replies from: Jemist, AprilSR, MichaelStJules
comment by Jemist · 2021-10-29T16:15:35.892Z · LW(p) · GW(p)

You present an excellently-written and interesting case here. I agree with the point that self-modelling systems can think in certain ways which are unique and special and chickens can't do that.

One reason I identify consciousness with having qualia is that Eliezer specifically does that in the twitter thread. The other is that qualia is generally less ambiguous than terms like consciousness and self-awareness and sentience. The disadvantage is that the concept of qualia is something which is very difficult (and beyond my ability) to explain to people who don't know what it means. I choose to take this tradeoff because I find that I, personally, get much more out of discussions about specifically qualia than any of the related words. Perhaps I'm not taking seriously enough the idea that illusionism will explain why I feel like I'm conscious and not explain why I am conscious.

I also agree that most other existing mainstream views are somewhat poor, but to me this isn't particularly strong positive evidence for Eliezer's views. This is because models of consciousness on the level of detail of Eliezer's are hard to come up with, so there might be many other excellent ones that haven't been found yet. And Eliezer hasn't done (to my knowledge) anything which rules out other arguments on the level of detail of his own.

Basically I think that the reason the best argument we see is Eliezer's is less along the lines of "this is the only computational argument that could be made for consciousness" and more along the lines of "computational arguments for consciousness are really difficult and this is the first one anyone has found".

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-29T22:01:09.067Z · LW(p) · GW(p)

Yudkowsky specifically using the term is a good reason. Thanks for pointing that out, and now I feel a little silly for asking. He says, "I mean qualia, yes." You can't get more blunt than that.

While I agree that qualia is less ambiguous than other terms, I am still not sure it is sufficiently unambiguous. I don’t know what you mean by the term, for instance. Generally, though, I would say that I think consciousness exists, but that qualia do not exist.

I think illusionism does offer an account of consciousness; it’s just that consciousness turns out not to be what some people thought that it was. Personally, I don’t have and apparently have never had qualia intuitions, and thus never struggled with accepting Dennett’s views. This might be unusual, but the only view I ever recall holding on the matter was something like Dennett’s. His views immediately resonated with me and I adopted them the moment I heard them, with something similar to a “wow, this is obviously how it is!” response, and bewilderment that anyone could think otherwise.

I’m glad we agree most alternatives are poor. I do happen to agree that this isn’t especially good evidence against the plausibility of some compelling alternative to illusionism emerging. I definitely think that’s a very real possibility. But I do not think it is going to come out of the intuition-mongering methodology many philosophers rely on. I also agree that this is probably due to the difficulty of coming up with alternative models. Seems like we’re largely in agreement here, in that case.

comment by AprilSR · 2021-10-30T05:01:08.331Z · LW(p) · GW(p)

I don't know how you can deny that people have "qualia" when, as far as I can tell, it was a word coined to describe a particular thing that humans experience?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-30T05:40:30.860Z · LW(p) · GW(p)

I'm not sure I understand. What do you mean when you say it was coined to "describe a particular thing that humans experience"? Or maybe, to put this another way: at least in this conversation, what are you referring to with the term "qualia"?

Replies from: AprilSR, TAG
comment by AprilSR · 2021-10-31T07:53:30.011Z · LW(p) · GW(p)

As I understand it, the word "qualia" usually refers to the experience associated with a particular sensation.

comment by TAG · 2021-10-30T14:47:21.922Z · LW(p) · GW(p)

"Qualia" is easy to define. As Wikipedia has it

Examples of qualia include the perceived sensation of pain of a headache, the taste of wine, as well as the redness of an evening sky. As qualitative characters of sensation, qualia stand in contrast to "propositional attitudes", where the focus is on beliefs about experience rather than what it is directly like to be experiencing.

Whereas illusionism is almost impossible to define coherently.

"According to illusionism, you only have propositional attitudes, not perceptions. Some of those propositional attitudes seem like propositional attitudes, and others seem like perceptions. Well, they don't, because if anything seemed like anything, that would be a perception. So actually you have a meta-level belief that some of your propositional attitudes are propositional attitudes, but also a meta-level belief that others aren't. That's the illusion. But actually it's not an illusion, because an illusion is a false perception, and there are no perceptions. It's actually a false belief, a delusion. I don't know why we call it illusionism."

Replies from: gjm, lance-bush
comment by gjm · 2021-10-30T15:17:12.521Z · LW(p) · GW(p)

It's easy to give examples of things we think of as qualia. I'm not so sure that that means it's easy to give a satisfactory definition of "qualia".

I can give lots of examples of people, but there's scope for endless debate about exactly what counts as a person and what doesn't. (Newly born children? 16-week-old foetuses? Aliens or AIs, should any exist now or in the future, with abilities comparable to ours but very different brains and minds? Beings like gods, angels, demons, etc., should any exist, with abilities in some ways comparable to ours but not made out of matter at all?) And for debate about when persons A and B are actually the same person. (Suppose some intelligent computer programs are persons. If I take a copy, do I then have one person or two? Suppose we favour a multiverse-type interpretation of quantum mechanics. Are the versions of "me" on two nearby Everett branches one person, or two? Am I the same person as I was 30 years ago?)

There's similar unclarity about what things count as qualia and about how to individuate them. (E.g., if you and I look at the same red object and both have normal colour vision, do we have "the same" quale of seeing-a-red-thing or not? If I see the same red thing twice, is that "the same" quale each time? If the answers are negative, what actual work is the notion of qualia doing?) And e.g. Daniel Dennett would claim that the word "qualia" includes enough baggage that it's better to say that there are no qualia while in no way denying that people experience things.

It's not (I think) in question that we experience things. It's quite reasonable (I think) to question whether anything about our experience is made clearer by introducing objects called qualia.

Replies from: TAG
comment by TAG · 2021-10-30T16:16:11.646Z · LW(p) · GW(p)

It’s easy to give examples of things we think of as qualia. I’m not so sure that that means it’s easy to give a satisfactory definition of “qualia”.

Satisfactory for whom? I use examples because they are sufficient to get the point across to people who aren't too biased. Someone might have some genuine reason to need a more rigorous definition...but they might not; they might instead be making a selective demand for rigour, out of bias. Where are the calls for rigorous definitions of "matter", "computation", etc?

I can give lots of examples of people, but there’s scope for endless debate about exactly what counts as a person and what doesn’t. (Newly born children? 16-week-old foetuses? Aliens or AIs, should any exist now or in the future, with abilities comparable to ours but very different brains and minds? Beings like gods, angels, demons, etc., should any exist, with abilities in some ways comparable to ours but not made out of matter at all?)

If my purpose is to demonstrate that people exist, all I need to do is point to a few uncontentious examples of people...I don't need to solve every edge case.

And "endless debate" needs to be avoided. People who make selective demands for rigour don't want to change their minds, and endless debate is a great way of achieving that.

(E.g., if you and I look at the same red object and both have normal colour vision, do we have “the same” quale of seeing-a-red-thing or not?

Why does that matter if all I am doing is asserting that qualia exist, or lack a reductive explanation?

Replies from: gjm, lance-bush
comment by gjm · 2021-10-30T17:36:01.769Z · LW(p) · GW(p)

(I'm ignoring those parts of your reply that seem to have no purpose other than implicitly accusing me of arguing in bad faith. I have seldom known anything useful to come out of engaging with that sort of thing. These discussions would be more enjoyable, for me at least, if you weren't so relentlessly adversarial about them.)

Satisfactory for whom? For me, obviously :-).

There is at least one eminent philosopher, namely Daniel Dennett, who has made something of a speciality of this area and who flatly denies that qualia "exist", and who doesn't appear to me to be either a dimwit or a crackpot. That is already sufficient reason for me to want to be careful about saying "duh, of course qualia exist".

Of course if all you mean by that is that people have experience, then I agree with that, but if that's all you mean then what need is there to talk about "qualia" at all? And if it's not all you mean, then before agreeing I need to know what else is being implicitly brought in.

Now, in the present instance it's Jemist who introduced "qualia" to the discussion (so, in particular, you are under no obligation to be able to tell me precisely what Jemist means by the term). And Jemist talks e.g. about experience being "turned into qualia", and I don't see how your examples help to understand what that means, or what distinction between "experience" and "qualia" Jemist is trying to draw.

The general idea seems to be something like this: people and chickens alike have some sort of stream or sea of experiences, and humans (and maybe chickens or maybe not) "turn these experiences into qualia", and having not merely experiences but qualia is what justifies calling an entity "conscious" and/or seeing that entity as of moral significance.

I'm sympathetic to the general idea that there's something that's kinda-the-same about chickens' sensory input and ours, and something that's maybe different about the early stages of processing that sensory input, and that that has something to do with possible moral differences between us and chickens. But I don't see any reason to think that filling in that picture in detail, if we knew how to do it, would look much like identifying things ("qualia") that we "have" and chickens maybe "don't have". And one way to resist the so-far-unjustified slide from "there may be something importantly different between how we process our sensory input and how chickens process theirs" to "maybe we have qualia and chickens don't" is to remain mindful of the fact that we don't have -- or, at least, I don't have and I haven't seen much evidence that others have -- a very clear idea of exactly what "qualia" are supposed to be, and of how "having qualia" is supposed to go beyond "having experience".

Here's another example of how the leap to "having qualia" may bring in unacknowledged baggage. You say "Why does that matter if all I am doing is asserting that qualia exist" (so far so good, I guess) "or lack a reductive explanation?". Where did that come from? It certainly doesn't seem to be something that follows from the fact that people experience things. If you're somehow inferring that from "having qualia" then I think there's got to be something in your concept of "qualia" that is very much not an obvious consequence of having experience, and I don't want to just nod wisely when someone says "of course we have qualia" because it may turn out that part of what they mean by "qualia" involves some sort of in-principle irreducibility, and I want to see some actual argument before agreeing to that!

("Lack a reductive explanation" is ambiguous between "we don't have one yet" and "we are probably never going to have one" and "we are definitely never going to have one" and "it is in principle impossible for us ever to have one". I don't like this because it's too easy to slide between those meanings without explicitly noting that that's happening and offering any justification. I don't know whether I have guessed correctly at what combination of those things you meant; if you think I haven't, feel free to clarify.)

comment by Lance Bush (lance-bush) · 2021-10-30T17:01:36.677Z · LW(p) · GW(p)

Satisfactory for whom? I use examples because they are sufficient to get the point across to people who aren't too biased.

It’s not satisfactory to me. Does this mean I am “too biased?” That seems like a potentially unjustified presumption to make, and not a fair way to have a discussion with others who might disagree with you.

Anyone could offer a definition then state in advance that anyone who doesn’t accept it is “too biased” then, when someone says they don’t accept it say “see, I told you so,” even if an unbiased person would judge the definition to be inadequate.

In any case, I’m not making a selective demand for rigor. Even if I were, I’d probably just shrug and raise the challenge, anyway. I don’t know what people talking about qualia are talking about. But I am also pretty confident they don’t know what they are talking about. I suspect qualia is a pseudoconcept invented by philosophers, and that to the extent that we adequately characterize it, it faces pretty serious challenges. 

Where are the calls for rigourous definitions of "matter", "computation", etc?

The main person I discuss illusionism and consciousness with specializes in philosophy of computation and philosophy of science, with an emphasis on broad metaphysical questions. We both endorse illusionism, and have for years, so there’s little to say there. Instead, regularly we mostly discuss their views on computation and metaphysics, and I’m often asked to read their papers on these topics. So, in the past few years, I have read significantly more work on what computers and matter are than I have on consciousness.

Thus, ironically, I have more discussions about rigorous attempts to define computers and features of the external world than I do about consciousness. So if you think that, in denying qualia, I am somehow failing to apply a similar degree of rigor as I do to other ideas, you could not have picked worse examples. It is not the case that I’m especially tough on the notion of qualia.

comment by Lance Bush (lance-bush) · 2021-10-30T16:17:17.804Z · LW(p) · GW(p)

Unfortunately, I don’t think the account of qualia you’ve presented is adequate.

First, I don’t know what is meant by “perceived sensation” of the pain of a headache. This could be cashed out in functional terms that don’t make appeal to what I am very confident philosophers are typically referring to when they refer to qualia. So this strikes me as a kind of veiled way of just using another word or phrase (in this case, “perceived sensation”) as a stand-in for “qualia,” rather than a definition. It’s a bit like saying the definition of morality is that it is “about ethics.”

I’m likewise at a loss about the second part of this. What is the qualitative character of a sensation? What does it mean to say that you’re referring to “what it is directly like to be experiencing” rather than a belief about experiences? Again, these just seem like roundabout ways of gesturing towards something that remains so underspecified that I still don’t know what people are talking about. 

Whereas illusionism is almost impossible to define coherently.

Illusionism holds that our introspections about the nature of our conscious experiences are systematically mistaken in particular ways that induce people to hold the incorrect belief that our experiences have phenomenal properties.

I think this is a coherent position, and I’m reasonably confident it comports with how Dennett and Frankish would characterize it.

Where is that quote from? It seems to imply that all mental states are other propositional attitudes or perceptions. If so, that doesn’t seem right to me. Also, the complaint primarily seems to be with the name “illusionism.” I’m happy to call it delusionism. If we do that, do they still have an objection? If so, I’m not quite sure what the objection is.

Replies from: TAG
comment by TAG · 2021-10-30T17:12:25.971Z · LW(p) · GW(p)

So this strikes me as a kind of veiled way of just using another word or phrase (in this case, “perceived sensation”) as a stand-in for “qualia,” rather than a definition.

Is "unmarried man" a mere stand-in for "bachelor"?

Again, these just seem like roundabout ways of gesturing towards something that remains so underspecified that I still don’t know what people are talking about.

They are ways of gesturing towards your own experience. If you refuse to introspect you are not going to get it.

Where is that quote from?

Me.

Illusionism holds that our introspections about the nature of our conscious experiences are systematically mistaken in particular ways that induce people to hold the incorrect belief that our experiences have phenomenal properties.

Thats what I was expanding on.

The phenomenal properties you mentioned...those are qualia. You have the concept, because you need the concept to say it's illusory.

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-30T18:50:10.269Z · LW(p) · GW(p)

Is "unmarried man" a mere stand-in for "bachelor"?

In some cases, but not others. One can reasonably ask whether the Pope is a bachelor, but for the purposes of technical philosophical work one might treat “unmarried man” and “bachelor” as identical in the context of some technical discussion.

They are ways of gesturing towards your own experience. If you refuse to introspect you are not going to get it.

I can understand if someone who doesn’t know me or my educational background might think that I just haven’t thought about the topic of qualia enough, or that I am refusing to introspect about it, but that isn’t the case. This isn’t a topic I’ve thought about only casually; it is relevant to my work.

That being said, I have introspected, and I have come to the conclusion that there isn’t anything to get with respect to qualia. Nothing about my introspection gives me any insight into what you or others mean by qualia. Instead, I have concluded that the notion of qualia that has trickled out from academic philosophy is most likely a conceptual confusion enshrining the kinds of introspective errors Dennett and others argue that people are prone to make.

Me.

Okay, thanks. I apologize for having had to ask but you provided a paragraph in quotation with no attribution, and it was difficult for me to interpret what that meant.

The phenomenon properties you mentioned...those are qualia. You have the concept , because you need the concept to say it's illusory.

I have a kind of meta-concept: that other people have a concept of qualia but I myself am not personally acquainted with them, and would not say that I have the concept. One does not need to personally be subject to an illusion to believe that others are.

I know that other people purport to have a notion of qualia, but I do not. But thinking other people have mistaken or confused concepts does not require that one have the concept in the sense of possessing or understanding it. In other words, other people might tell me that there's, e.g., "something it's like" to see red or taste chocolate that somehow defies explanation, is private, is inaccessible, and so on. But I myself do not have such experiences. In such cases, I think people are simply confused, and that this confusion can, in the case of belief in qualia, result in the development of pseudoconcepts.

This isn't the only case where I think this could or does occur. If people insisted they had a concept that was unintelligible or self-contradictory, such as a “colorless color,” or if they insisted something could be "intrinsically north-facing," I could hold that they are mistaken in having such concepts, and maintain that I don't "have the concept," in that I am not actually capable of personally entertaining the notion of colorless colors or intrinsically north-facing objects.

In fact, this is exactly my position on non-naturalist moral realism: I regard the notion of stance-independent moral facts to be unintelligible. I can talk about “stance-independent moral facts” as a concept other people purport to “have,” in the sense that they claim to understand it, without understanding it myself. That is, I don't actually have the concepts non-natural moral realists purport to have, while still regarding the people who hold such views to be subject to an intellectual or experiential error of some kind.

Replies from: TAG
comment by TAG · 2021-10-30T19:10:45.462Z · LW(p) · GW(p)

Of course, introspection isn't meant to give you a definition of qualia...it's meant to give you direct acquaintance.

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-30T19:23:13.886Z · LW(p) · GW(p)

I have introspected and it has not resulted in acquaintance with qualia.

I believe people can introspect and then draw mistaken conclusions about the nature of their experiences, and that qualia is a good candidate for one of these mistaken conclusions.

Replies from: TAG
comment by TAG · 2021-10-30T19:46:16.239Z · LW(p) · GW(p)

What did it result in acquaintance with? If it seems to you that all your mental content consists only of propositional attitudes, then you don't even have the illusion of phenomenal consciousness. But why would you alone be lacking it?

Replies from: Raemon, lance-bush
comment by Raemon · 2021-10-30T19:49:32.147Z · LW(p) · GW(p)

Note that it's plausible to me that this is a Typical Mind thing and actually there's just a lot of people going around without the perception of phenomenal consciousness.

Like, Lance, do you not feel like you experience that things seem ways? Or just that they don't seem to be ways in ways that seem robustly meaningful or something?

Replies from: TAG, Jemist, lance-bush
comment by TAG · 2021-10-30T19:55:27.947Z · LW(p) · GW(p)

But the qualiaphilic claim is typical, statistically. Even if Lance's and Dennett's claims to zombiehood are sincere, they are not typical.

Replies from: Raemon, lance-bush
comment by Raemon · 2021-10-30T20:00:46.914Z · LW(p) · GW(p)

Have we even checked tho? (Maybe the answer is yes, but it hadn't occurred to me before just now that this was a dimension people might vary on. Or, actually I think it had, but I hadn't had a person in front of me actually claiming it)

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-30T21:36:33.938Z · LW(p) · GW(p)

See above; I posted a link to a recent study. There hasn't been much work on this. While my views may be atypical, so too might the views popular among contemporary analytic philosophers. A commitment to the notion that there is a legitimate hard problem of consciousness, that we "have qualia," and so on might all be idiosyncrasies of the specific way philosophers think, and may even result from unique historical contingencies, such that, were there many more philosophers like Quine and Dennett in the field, such views might not be so popular.

Some philosophical positions seem to rise and fall over time. Moral realism was less popular a few decades ago, but has enjoyed a recent resurgence, for instance. This suggests that the perspectives of philosophers might result in part from trends or fashions distinctive of particular points in time.

comment by Lance Bush (lance-bush) · 2021-10-30T21:32:08.766Z · LW(p) · GW(p)

But the qualiaphilic claim is typical, statistically. 

Typical of who?

Replies from: TAG
comment by TAG · 2021-10-30T23:05:54.113Z · LW(p) · GW(p)

"Statistically" , so "who" would be most people.

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-31T02:33:24.458Z · LW(p) · GW(p)

Thanks for clarifying. Not all statistical claims in e.g., psychology are intended to generalize towards most people, so I didn't want to assume you meant most people.

If the claim is that most people have a concept of qualia, that may be true, but I'm not confident that it is. That seems like an empirical question it'd be worth looking into.

Either way, I wouldn't be terribly surprised if most people had the concept, or (I think more likely) could readily acquire it on minimal introspection (though on my view I'd say that people are either duped or readily able to be duped into thinking they have the concept).

I don't know if I am different, or if so, why. It's possible I do have the concept but don't recognize it, or am deceiving myself somehow. 

It's also possible I am somehow atypical neurologically. I went into philosophy precisely because I consistently found that I either didn't have intuitions about conventional philosophical cases at all (e.g., Gettier problems), or had nonstandard or less common views (e.g. illusionism, normative antirealism, utilitarianism). That led me to study intuitions, the psychological underpinnings of philosophical thought, and a host of related topics. So there is no coincidence in my presenting the views expressed here. I got into these topics because everyone else struck me as having bizarre views.

Replies from: TAG, Richard_Kennaway
comment by TAG · 2021-10-31T19:22:49.310Z · LW(p) · GW(p)

If the claim is that most people have a concept of qualia, that may be true, but I’m not confident that it is

Most people don't know the word "qualia". Nonetheless, most people will state something equivalent....that they have feelings and seemings that they can't fully describe. So it's a "speaking prose" thing.

And something like that is implicit in Illusionism. Illusionism attempts to explain away reports of ineffable subjective sensations, reports of qualia like things. If no one had such beliefs, or made such reports, there would be nothing for Illusionism to address.

Trying to attack qualia from every possible angle is rather self-defeating. For instance, if you literally don't know what "qualia" means, you can't report that you have none. And if no one even seems to have qualia, there is nothing for Illusionism to do. And so on.

It’s also possible I am somehow atypical neurologically

But then, why insist that you are right? If you have something like colour blindness, then why insist that everyone else is deluded when they report colours?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-11-01T15:15:55.670Z · LW(p) · GW(p)

Most people don't know the word "qualia". Nonetheless, most people will state something equivalent....that they have feelings and seemings that they can't fully describe. So it's a "speaking prose" thing.

There are many reasons why a person might struggle to describe their experiences that wouldn't be due to them having qualia or having some implicit qualia-based theory, especially among laypeople who are not experienced at describing their mental states. It would be difficult to distinguish these other reasons from reasons having to do with qualia.

So I don't agree that what you describe would necessarily be equivalent, and I don't think it would be easy to provide empirical evidence specifically of the notion that people have or think they have qualia, or speak or think in a way best explained by them having qualia.

Even if it could be done, I don't know of any empirical evidence that would support this claim. Maybe there is some. But I don't have a high prior on any empirical investigation into how laypeople think turning out to support your claim, either.

And something like that is implicit in Illusionism. Illusionism attempts to explain away reports of ineffable subjective sensations, reports of qualia like things. If no one had such beliefs, or made such reports, there would be nothing for Illusionism to address.

You know, I think you're right. And I believe the course of this discussion has clarified things for me sufficiently for me to recognize that I do not, strictly speaking, endorse illusionism.

Illusionism could be construed as the conjunction of two claims:

(1) On introspection, people systematically misrepresent their experiential states as having phenomenal properties.

(2) There are no such phenomenal properties.

For instance, Frankish (2016) defines (strong) illusionism as the view that:

“[...] phenomenal consciousness is illusory; experiences do not really have qualitative, ‘what-it’s-like’ properties, whether physical or non-physical” (p. 15)

Like illusionists, I deny that there are phenomenal properties, qualia, what-its-likeness, and so on. In that sense, I deny phenomenal realism (Mandik, 2016). As such, I agree with (2) above. Thus, I agree with the central claim of illusionism, that there are no phenomenal properties, and I deny that there are qualia, or that there’s “what it’s likeness” and so on. However, what I am less comfortable doing is presuming that things seem this way to nonphilosophers, and that they are all systematically subject to some kind of error. In that regard, I do not fully agree with illusionists.

To the extent that illusionists mistakenly suppose that people are subject to an illusion, we could call this meta-illusionism. Mandik distinguishes meta-illusionism from illusionism as follows:

“The gist of meta-illusionism is that it rejects phenomenal realism while also insisting that no one is actually under the illusion that there are so-called phenomenal properties” (pp. 140-141).

Mandik goes on to distance his position from illusionism, in reference to Frankish as follows:

“One thing Frankish and I have in common is that neither of us wants to assert that there are any properties instantiated that are referred to or picked out by the phrase ‘phenomenal properties’. One place where Frankish and I part ways is over whether that phrase is sufficiently meaningful for there to be a worthwhile research programme investigating how it comes to seem to people that their experiences instantiate any such properties. Like Frankish, I’m happy with terms like ‘experience’, ‘consciousness’, and ‘conscious experience’ and join Frankish in using what he calls ‘weak’ and functional construals of such terms. But, unlike Frankish, I see no use at all, not even an illusionist one, for the term ‘phenomenal’ and its ilk. The term ‘phenomenal’, as used in contemporary philosophy of mind, is a technical term. I am aware of no non-technical English word or phrase that is accepted as its direct analogue. Unlike technical terms in maths and physics, which are introduced with explicit definitions, ‘phenomenal’ has no such definition. What we find instead of an explicit definition are other technical terms treated as interchangeable synonyms. Frankish follows common practice in philosophy of mind when he treats ‘phenomenal’ as interchangeable with, for instance, ‘qualitative’ or, in scare-quotes, ‘“feely”’. (p. 141)

I can’t quote the whole article (though it’s short), but he concludes this point by stating that:

“We have then, in place of an explicit definition of ‘phenomenal properties’, a circular chain of interchangeable technical terms — a chain with very few links, and little to relate those links to nontechnical terminology. The circle, then, is vicious. I’m sceptical that any properties seem ‘phenomenal’ to anyone because this vicious circle gives me very little idea what seeming ‘phenomenal’ would be.” (p. 142)

Mandik is not so sure he wants to endorse meta-illusionism, since this might turn on concerns about what it means for something to be an illusion, and because he's reluctant to state that illusionists are themselves subject to an illusion. What he proposes instead is qualia quietism, the view that:

“the terms ‘qualia’, ‘phenomenal properties’, etc. lack sufficient content for anything informative to be said in either affirming or denying their existence. Affirming the existence of what? Denying the existence of what? Maintaining as illusory a representation of what? No comment. No comment. No comment” (p. 148)

This is much closer to what I think than illusionism proper. So, in addition to denying that there are qualia, or phenomenal properties, or whatever other set of terminology is used to characterize some putative set of special properties that spell trouble for those of us ill-disposed to believe in such things, I also deny that it seems this way to nonphilosophers. 

My entire academic career has centered on critiquing work in experimental philosophy, and close scrutiny of this and related articles might reveal what I take to be significant methodological problems. Nevertheless, insofar as research has been conducted on whether nonphilosophers think they have phenomenal properties, or think about consciousness in the same way as philosophers, at least some of the results indicate that they may not. See, for instance, Sytsma & Machery (2010):

Abstract: “Do philosophers and ordinary people conceive of subjective experience in the same way? In this article, we argue that they do not and that the philosophical concept of phenomenal consciousness does not coincide with the folk conception. We first offer experimental support for the hypothesis that philosophers and ordinary people conceive of subjective experience in markedly different ways. We then explore experimentally the folk conception, proposing that for the folk, subjective experience is closely linked to valence. We conclude by considering the implications of our findings for a central issue in the philosophy of mind, the hard problem of consciousness.”

I doubt this one study is definitive evidence one way or the other. What I will say, though, is that whether people think of consciousness the way philosophers do is an empirical question. I suspect they don’t, and absent any good reasons to think that they do, I’m not inclined to accept without argument that they do.

Trying to attack qualia from every possible angle is rather self-defeating. For instance, if you literally don't know what "qualia" means, you can't report that you have none. And if no one even seems to have qualia, there is nothing for Illusionism to do. And so on.

I disagree. You can claim to both not know what something means, and claim to not have the thing in question. 

In some cases, you might not know what something means because you’re ignorant of what is meant by the concept in question. For instance, someone might use the term  “zown zair” to refer to brown hair. I might not know this, even if I do have brown hair. In that case, I would not know what they mean, even though I do have brown hair. It would be a mistake for me to think that because I don’t know what they mean, that I don’t have “zown zair.” And it would be foolish to insist both that “zown zair” is false, and that “zown zair” is meaningless. I would simply have failed to find out what they were referring to with the term.

But this is not the case with qualia. I am not merely claiming that I don’t understand the concept. I am claiming that nobody understands the concept, because it is fundamentally confused and meaningless.

This is especially the case when one is responding to a host of people, over an extended period of time, who are incapable of explaining the putative concept in a way that isn't circular or vacuous.

In the course of an exchange, people may employ a concept. They might say that, e.g. some objects have the property A. Yet when asked to explain what A is, they are unable to do so, or they provide unsatisfactory attempts. For instance, they might point to several objects, and say “all these objects have property A.” This is what was done earlier in this thread: I was given examples, as though this was independently helpful in understanding the concept. It’s not. If I pointed to a truck, a flock of geese, and a math textbook and said “these all have property A,” you wouldn’t be much closer to knowing what I was talking about. In other cases, they might use metaphors. But the metaphors may be unilluminating. In still other cases, they might appeal to other terms or concepts. Yet these terms or concepts might themselves be obscure or poorly defined, and if one asks for clarification, one begins the journey through an endless loop of mutual interdefinitions that never get you anywhere.

In such cases, it can become apparent that a person’s concepts are circular and self-referential, and don’t really describe anything about the way the world is. They might define A in terms of B, B in terms of C, and C in terms of A. And they might insist that A is a property we all have.

When numerous people all claim that we have property A, but they cannot define it, one may reasonably wonder whether all of these people are confused or mistaken. That is, one might conclude that property A is a pseudoconcept, something vague and meaningless.

In such cases, I am fine saying both that 

(a) I don’t have property A

(b) I don’t know what people referring to property A are talking about

I can believe (a) because it’s meaningless: I don’t have meaningless properties. And I can conclude (b) because it’s meaningless: I can’t understand a meaningless concept, because there isn’t anything to understand.

Maybe that’s an awkward way of framing why one would reject circular concepts that ascribe meaningless properties to people, in which case I’d be happy to revise the way I frame my rejection of qualia.

But then, why insist that you are right? If you have something like colour blindness, then why insist that everyone else is deluded when they report colours?

There are very good reasons to think people can see colors, and one would have such reasons even if they were colorblind. We can point to the physical mechanisms involved in color detection, the properties of light, and so on. We can point to specific color words in our and other languages, and it would be fairly easy to determine that nonphilosophers can see colors. I don’t think any of these conditions apply to qualia. So, first, there's that.

To emphasize just the last of these,  I don’t think “everyone else” is deluded. I think philosophers are deluded, and that people who encounter the work of these philosophers often become deluded as well. I don’t think the notion of qualia is a psychological mistake so much as it is an intellectual mistake only a subset of people make.

I suspect such mistakes are endemic to philosophy. The same thing has occurred, to an alarming extent, in contemporary metaethics. Moral realists frequently invoke the notion of decisive or external reasons, irreducible normativity, categorical imperatives, stance-independent normative and evaluative facts, and so on. I reject all of these concepts as fundamentally confused. And yet philosophers like Parfit, Huemer, Cuneo, and others have not only tangled themselves into knots of confusion; their work has trickled out into the broader culture. I routinely encounter people who have come across their work claiming to “have” concepts that they are incapable of expressing. And these philosophers, when pressed, will fall back on claiming that the concepts in question are “brute” or “primitive” or “unanalyzable,” which is to say, they can’t give an account of them, and don’t think that they need to. Maybe they do "have" these concepts, but since I am very confident we can explain everything there is to know about the way the world is without invoking them, I suspect they're vacuous nonsense, and that these philosophers are uniformly confused.

And, as with the notion of qualia, philosophers have long presumed that ordinary people tend to be moral realists (see e.g. Sinclair, 2012). My own academic work specifically focuses on this question. And like the question of what people think about consciousness, this, too, is an empirical question. So far, little empirical evidence supports the conclusion that ordinary people tend to be moral realists, or at least that they tend to be consistently and uniformly committed to some kind of moral realism. By and large, they struggle to understand what they are being asked (Bush & Moss, 2020). I suspect, instead, that something like Gill’s (2009) indeterminacy-variability thesis is much more likely: that people have variable but (I suspect mostly) indeterminate metaethical standards.

The same may turn out to be the case for the only other issue I looked into: free will. This has led me, in my own work, to point towards the broader possibility that many of the positions philosophers purport to be intuitive, and that they claim are widespread among nonphilosophers, simply aren’t. Rather, I suspect that philosophers are over-intellectualizing some initial pool of considerations, then generating theories that are neither implicitly nor explicitly part of the way ordinary people speak or think.

I don’t think this is a situation where I am color blind, while others have color vision. Rather, it’s more like recognizing that many of the people around you are subject to a collective, and contagious, hallucination. So I suspect, instead, that I have come to recognize over time that academic philosophy has played an alarming role in duping large numbers of people into a wide range of confusions, then duped them further by convincing them that these confusions are shared by nonphilosophers.

 

References

Bush, L. S., & Moss, D. (2020). Misunderstanding metaethics: Difficulties measuring folk objectivism and relativism. Diametros, 17(64), 6–21.

Frankish, K. (2016). Illusionism as a theory of consciousness. Journal of Consciousness Studies, 23(11-12), 11-39.

Gill, M. B. (2009). Indeterminacy and variability in meta-ethics. Philosophical studies, 145(2), 215-234.

Mandik, P. (2016). Meta-illusionism and qualia quietism. Journal of Consciousness Studies, 23(11-12), 140-148.

Sinclair, N. (2012). Moral realism, face-values and presumptions. Analytic Philosophy, 53(2), 158–179.


 

comment by Richard_Kennaway · 2021-10-31T22:12:19.723Z · LW(p) · GW(p)

When you sit alone in an empty room, do you have a sense of your own presence, your own self? Can you be aware, not only of your sensations, but of the sensation of having those sensations? Can you have thoughts, and be aware of having those thoughts? And be aware of having these awarenesses?

My answer to each of these questions is "yes".

But for you, do these questions fail to point to anything in your experience?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-11-01T02:51:31.587Z · LW(p) · GW(p)

Can you be aware, not only of your sensations, but of the sensation of having those sensations?


I'm not sure. I have sensations, but I don't know what a sensation of a sensation would be. 

Can you have thoughts, and be aware of having those thoughts?

Sure, but that just sounds like metacognition, and that doesn't strike me as being identical with or indicative of having qualia. I can know that I know things, for instance. 

And be aware of having these awarenesses?

I would describe this as third-order metacognition, or recursive cognition, or something like that. And yes, I can do that. I can think that Sam thinks that I think that he lied, for instance. Or I can know that my leg hurts and then think about the fact that I know that my leg hurts.

comment by Jemist · 2021-10-30T21:20:05.454Z · LW(p) · GW(p)

Having now had a lot of different conversations on consciousness I'm coming to a slightly disturbing belief that this might be the case. I have no idea what this implies for any of my downstream-of-consciousness views.

comment by Lance Bush (lance-bush) · 2021-10-30T21:48:30.742Z · LW(p) · GW(p)

Like, Lance, do you not feel like you experience that things seem ways?

I don't know what that means, so I'm not sure. What would it mean for something to seem a certain way?
 

Or just that they don't seem to be ways in ways that seem robustly meaningful or something?

I don't think it's this. It's more that when people try to push me to have qualia intuitions, I can introspect, report on the contents of my mental states, and then they want me to locate something extra. But there never is anything extra, and they can never explain what they're talking about, other than to use examples that don't help me at all, or metaphors that I don't understand. Nobody seems capable of directly explaining what they mean. And when pressed, they insist that the concept in question is "unanalyzable" or inexplicable or otherwise maintain that they cannot explain it. 

Despite his fame, the majority of students I encountered in Dennett's courses do not accept his views at all, and take qualia quite seriously. I had conversations lasting well over an hour in which one or more of them would try to get me to grok what they were talking about, and they never succeeded. I've had people make the following kinds of claims:

(1) I am pretending to not get it so that I can signal my intellectual unconventionality.

(2) I do get it, but I don't realize that I get it.

(3) I may be neurologically atypical.

(4) I am too "caught in the grip" of a philosophical theory, and this has rendered me unable to get it.

One or more of these could be true, but I'm not sure how I'd find out, or what I might do about it if I did. But I am strangely drawn to a much more disturbing possibility, one that an outside view would suggest is pretty unlikely:

(5) all of these people are confused, qualia is a pseudoconcept, and the whole discussion predicated on it is fundamentally misguided

I find myself drawn to this view, in spite of it entailing that a majority of people in academic philosophy, or who encounter it, are deeply mistaken.

I should note, though, that I specialize in metaethics in particular. Most moral philosophers are moral realists (about 60%) and I consider every version of moral realism I'm familiar with to be obviously confused, mistaken, or trivial in ways so transparent that I do think I am justified in thinking that, on this particular issue, most moral philosophers really are mistaken. 

Given my confidence about moral realism, I'm not at all convinced that philosophers generally have things well-sorted on consciousness.
 

Replies from: MichaelStJules, Raemon
comment by MichaelStJules · 2021-10-31T23:26:15.704Z · LW(p) · GW(p)

It's more that when people try to push me to have qualia intuitions, I can introspect, report on the contents of my mental states, and then they want me to locate something extra.

 

Are they expecting qualia to be more than a mental state? If you're reporting the contents of your mental states, isn't that already enough? I'm not sure what extra there should be for qualia. Objects you touch can feel hot to you, and that's exactly what you'd be reporting. Or would you say something like "I know it's hot, but I don't feel it's hot"? How would you know it's hot but not feel it's hot, if your only information came from touching it? Where does the knowledge come from? Are you saying that what you're reporting is only the verbal inner thought you had that it's hot, and that happened without any conscious mental trigger?

If it's only the verbal thought, on what basis would you believe that it's actually hot? The verbal thought alone? (Suppose it's also not hot enough to trigger a reflexive response.)

Doesn't your inner monologue also sound like something? (FWIW, I think mine has one pitch and one volume, and I'm not sure it sounds like anyone's voice in particular (even my own). It has my accent, or whatever accent I mimic.)

More generally, the contents of your mental states are richer than the ones you report on symbolically (verbally or otherwise) to yourself or others, right? Like you notice more details than you talk to yourself about in the moment, e.g. individual notes in songs, sounds, details in images, etc.. Isn't this perceptual richness what people mean by qualia? I don't mean to say that it's richer than your attention, but you can attend to individual details without talking about them.

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-11-01T02:45:26.598Z · LW(p) · GW(p)

I don't think I can replicate exactly the kinds of ways people framed the questions. But they might do something like this: they'd show me a red object. They'd ask me "What color is this?" I say red. Then they'd try to extract from me an appreciation for the red "being a certain way" independent of, e.g., my disposition to identify the object as red, or my attitudes about red as a color, and so on. Nothing about "seeing red" indicates to me that there is a "what it's like" to seeing red. I am simply ... seeing red. Like, I can report that fact, and talk about it, and say things like "it isn't blue" and "it is the same color as a typical apple" and such, but there's nothing else. There's no "what it's likeness" for me, or, if there is, I'm not able to detect and report on this fact. The most common way people will frame this is to try to get me to agree that the red has a certain "redness" to it. That chocolate is "chocolatey" and so on.

I can be in an entire room of people insisting that red has the property of "redness" and that chocolate is "chocolatey" and so on, and they all nod and agree that our experiences have these intrinsic what-it's-likeness properties. This seems to be what people are talking about when they talk about qualia. To me, this makes no sense at all. It's like saying seven has the property of "sevenness." That seems vacuous to me.

I can look at something like Dennett's account: that people report experiences as having some kind of intrinsic nonrelational properties that are ineffable and immediately apprehensible. I can understand all those words in combination, but I don't see how anyone could access such a thing (if that's what qualia are supposed to be), and I don't think I do. 

It may be that I am something akin to a native functionalist. I don't know. But part of the reason I was drawn to Dennett's views is that they are literally the only views that have ever made any sense to me. Everything else seems like gibberish.

Or would you say something like "I know it's hot, but I don't feel it's hot"?

Well, I would cash out "feeling hot" in functional terms: that I feel a desire to move my hand away from the object, that I can distinguish it from something cold or at least not hot, and so on. There doesn't seem to me to be anything more to touching a hot thing than its relational properties and the functional role it plays in relation to my behavior and the rest of my thoughts. What else would there be besides this? It does seem to me that people who think there are qualia think there's something else. They certainly seem insistent that there is after I describe my experience.

Are you saying that what you're reporting is only the verbal inner thought you had that it's hot, and that happened without any conscious mental trigger?

No, I think I have a conscious mental trigger, and I can and do say things like "that feels hot." I respond to hot things in normal ways, can report on those responses, and so on. I can certainly distinguish hot from cold without having to say anything, but I'm not sure what else you might be going for, and all of that seems like something you could get a robot to do that I don't think anyone would say "has qualia." But this is a very superficial pass at everything that would be going on if I touched something hot and reacted to it. So, it might be something we'd need to dig into more.

Doesn't your inner monologue also sound like something? 

Nobody ever asked me that. That's an awesome question. I think that no, it does not sound like anything. It's in English, and it's "my voice," but it doesn't "sound like" my actual speaking voice.

More generally, the contents of your mental states are richer than the ones you report on symbolically (verbally or otherwise) to yourself or others, right? 

Yes. 

Isn't this perceptual richness what people mean by qualia?

I don't think that it is. It sounds a bit like you're gesturing towards Block's notion of access consciousness. I'm not sure, though.

Replies from: MichaelStJules
comment by MichaelStJules · 2021-11-01T04:12:14.541Z · LW(p) · GW(p)

Ok, I think I get the disagreement now.

I can be in an entire room of people insisting that red has the property of "redness" and that chocolate is "chocolately" and so on, and they all nod and agree that our experiences have these intrinsic what-its-likeness properties. This seems to be what people are talking about when they talk about qualia. To me, this makes no sense at all. It's like saying seven has the property of "sevenness." That seems vacuous to me.

Hmm, I'm not sure it's vacuous, since it's not like they're applying "redness" to only one thing; redness is a common feature of many different experiences. 14 could have "sevenness", too.

Maybe we can think of examples of different experiences where it's hard to come up with distinguishing functional properties, but you can still distinguish the experiences? Maybe the following questions will seem silly/naive, since I'm not used to thinking in functional terms. Feel free to only answer the ones you think are useful, since they're somewhat repetitive.

  1. What are the differences in functional properties between two slightly different shades of red that you can only tell apart when you see them next to each other? Or maybe there are none when separate, but seeing them next to each other just introduces another functional property? What functional property would this be?
    1. What if you can tell them apart when they aren't next to each other? How are you doing so?
  2. How about higher and lower pitched sounds? Say the same note an octave apart?
  3. Say you touch something a few degrees above room temperature, and you can tell that it's hotter, but it doesn't invoke any particular desire. How can you tell it's hotter? How does this cash out in terms of functional properties?

Well, I would cash out "feeling hot" in functional terms: that I feel a desire to move my hand away from the object, that I can distinguish it from something cold or at least not hot, and so on.

I'm guessing you would further define these in functional terms, since they too seem like the kinds of things people could insist qualia are involved in (desire, distinguishing). What would be basic functional properties that you wouldn't cash out further? Do you have to go all the way down to physics, or are there higher-level basic functional properties? I think if you go all the way down to physics, this is below our awareness and what our brain actually has concepts of; it's just implemented in them.

If you were experiencing sweetness in taste (or some other sensation) for the first time, what would be its functional properties that distinguish it from other things? Could this be before you formed attitudes about it, or are the attitudes simultaneous and built in as a necessary component of the experience?

I think that no, it does not sound like anything. It's in English, and it's "my voice," but it doesn't "sound like" my actual speaking voice.

What functional properties would you point to that your experiences of your actual speaking voice have, but your experiences of your inner voice don't? And that can't be controlled for? E.g. what if you were actually speaking out loud, but couldn't hear your own voice, and "heard" your inner voice instead? How does this differ from actually hearing your voice?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-11-01T14:24:42.490Z · LW(p) · GW(p)

Hmm, I'm not sure it's vacuous, since it's not like they're applying "redness" to only one thing; redness is a common feature of many different experiences. 14 could have "sevenness", too.

 

One can apply a vacuous term to multiple things, so pointing out that you could apply the term to more than one thing does not seem to me to indicate that it isn't vacuous. I could even stipulate a concept that is vacuous by design: "smorf", which doesn't mean anything, and then I can say something like "potatoes are smorf." 
 

Maybe we can think of examples of different experiences where it's hard to come up with distinguishing functional properties, but you can still distinguish the experiences? 

The ability to distinguish the experiences in a way you can report on would be at least one functional difference, so this doesn't seem to me like it would demonstrate much of anything. 

Some of the questions you ask seem a bit obscure, like how I can tell something is hotter. Are you asking for a physiological explanation? Or the cognitive mechanisms involved? If so, I don't know, but I'm not sure what that would have to do with qualia. But maybe I'm not understanding the question, and I'm not sure how answering it could get me any closer to understanding what qualia are supposed to be.

What would be basic functional properties that you wouldn't cash out further?

I don't know. Likewise for most of the questions you ask. "What are the functional properties of X?" questions are very strange to me. I am not quite sure what I am being asked, or how I might answer, or if I'm supposed to be able to answer. Maybe you could help me out here, because I'd like to answer any questions I'm capable of answering, but I'm not sure what to do with these.

Replies from: MichaelStJules
comment by MichaelStJules · 2021-11-01T19:21:15.593Z · LW(p) · GW(p)

The ability to distinguish the experiences in a way you can report on would be at least one functional difference, so this doesn't seem to me like it would demonstrate much of anything.

 

It is a functional difference, but there must be some further (conscious?) reason why we can do so, right? Where I want to go with this is that you can distinguish them because they feel different, and that's what qualia refers to. This "feeling" in qualia, too, could be a functional property. The causal diagram I'm imagining is something like

Unconscious processes (+unconscious functional properties) -> ("Qualia", Other conscious functional properties) -> More conscious functional properties

And I'm trying to control for "Other conscious functional properties" with my questions, so that the reason you can distinguish two particular experiences goes through "Qualia". You can tell two musical notes apart because they feel (sound) different to you.

I don't know. Likewise for most of the questions you ask. "What are the functional properties of X?" questions are very strange to me. I am not quite sure what I am being asked, or how I might answer, or if I'm supposed to be able to answer. Maybe you could help me out here, because I'd like to answer any questions I'm capable of answering, but I'm not sure what to do with these.

I'm not sure if what I wrote above will help clarify. You also wrote:

Well, I would cash out "feeling hot" in functional terms: that I feel a desire to move my hand away from the object, that I can distinguish it from something cold or at least not hot, and so on. There doesn't seem to me to be anything more to touching a hot thing than its relational properties and the functional role it plays in relation to my behavior and the rest of my thoughts.

How would you cash out "desire to move my hand away from the object" and "distinguish it from something cold or at least not hot" in functional terms? To me, both of these explanations could also pass through "qualia". Doesn't desire feel like something, too? I'm asking you to cash out desire and distinguishing in functional terms, too, and if we keep doing this, do "qualia" come up somewhere?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-11-01T21:12:47.955Z · LW(p) · GW(p)

It is a functional difference, but there must be some further (conscious?) reason why we can do so, right?


Do you mean like a causal reason? If so then of course, but that wouldn’t have anything to do with qualia.

Where I want to go with this is that you can distinguish them because they feel different, and that's what qualia refers to.

I have access to the contents of my mental states, and that includes information that allows me to identify and draw distinctions between things, categorize things, label things, and so on. A “feeling” can be cashed out in such terms, and once it is, there’s nothing else to explain, and no other properties or phenomena to refer to. 

I don’t know what work “qualia” is doing here. Of course things feel various ways to me, and of course they feel different. Touching a hot stove doesn’t feel the same as touching a block of ice. 

But I could get a robot, that has no qualia, but has temperature detecting mechanisms, to say something like “I have detected heat in this location and cold in this location and they are different.” I don’t think my ability to distinguish between things is because they “feel” different; rather, I’d say that insofar as I can report that they “feel different” it’s because I can report differences between them. I think the invocation of qualia here is superfluous and may get the explanation backwards: I don’t distinguish things because they feel different; things “feel different” if and only if we can distinguish differences between them.


This "feeling" in qualia, too, could be a functional property.

Then I’m even more puzzled by what you think qualia are. Qualia are, I take it, ineffable, intrinsic, qualitative properties of experiences, though depending on what someone is talking about they might include more or fewer features than these. I’m not sure qualia can be “functional” in the relevant sense.

How would you cash out "desire to move my hand away from the object" and "distinguish it from something cold or at least not hot" in functional terms?

I don't know. I just want to know what qualia are. Either people can explain what qualia are or they can’t. My inability to explain something wouldn’t justify saying “therefore, qualia,” so I’m not sure what the purpose of the questions is. I’m sure you don’t intend to invoke “qualia of the gaps,” and presume that qualia must figure into any situation in which I, personally, am not able to answer a question you've asked.

I'm asking you cash out desire and distinguishing in functional terms, too, and if we keep doing this, do "qualia" come up somewhere?

I don’t know what you think qualia are, so I wouldn’t be able to tell you. People keep invoking this concept, but nobody seems able to offer a substantive explanation of what it is, and why I should think I or anyone else has such things, or why such things would be important or necessary for anything in particular, and so on.

I hope I'm not coming off as stubborn here. I'm very much interested in answering any questions I'm able to answer, I'm just not sure precisely what you're asking me or how I might go about answering it. "What are the functional properties of X?" doesn't strike me as a very clear question.

Replies from: MichaelStJules
comment by MichaelStJules · 2021-11-02T05:03:51.040Z · LW(p) · GW(p)

But I could get a robot, that has no qualia, but has temperature detecting mechanisms, to say something like “I have detected heat in this location and cold in this location and they are different.” I don’t think my ability to distinguish between things is because they “feel” different; rather, I’d say that insofar as I can report that they “feel different” it’s because I can report differences between them. I think the invocation of qualia here is superfluous and may get the explanation backwards: I don’t distinguish things because they feel different; things “feel different” if and only if we can distinguish differences between them.

(...)

I have access to the contents of my mental states, and that includes information that allows me to identify and draw distinctions between things, categorize things, label things, and so on. A “feeling” can be cashed out in such terms, and once it is, there’s nothing else to explain, and no other properties or phenomena to refer to. 

What's the nature of these differences and this information, though? What exactly are you using to distinguish differences? Isn't it experienced? The information isn't itself a set of "symbols" (e.g. words, read or heard), or maybe sometimes it is, but those symbols aren't then made up of further symbols. Things don't feel hot or cold to you because there are different symbols assigned to them that you read off or hear, or to the extent that they are, you're experiencing those symbols as being read or heard, and that experience is not further composed of symbols.

 

Then I’m even more puzzled by what you think qualia are. Qualia are, I take it, ineffable, intrinsic qualitative properties of experiences, though depending on what someone is talking about they might include more or less features than these. I’m not sure qualia can be “functional” in the relevant sense. 

I might just be confused here. I was thinking that the illusion of ineffability, "seemingness", could be a functional property, and that what you're using to distinguish experiences are parts of these illusions. Maybe that doesn't make sense.

I don't know. I just want to know what qualia are. Either people can explain what qualia are or they can’t. My inability to explain something wouldn’t justify saying “therefore, qualia,” so I’m not sure what the purpose of the questions are. I’m sure you don’t intend to invoke “qualia of the gaps,” and presume qualia must figure into any situation in which I, personally, am not able to answer a question you've asked. 

I might have been switching back and forth between something like "qualia of the gaps" and a more principled argument, but I'll try to explain the more principled one clearly here:

For each of the functional properties you've pointed out so far, I would say they "feel like something". You could keep naming things that "feel like something" (desires, attitudes, distinguishing, labelling or categorizing), and then explaining those further in terms of other things that "feel like something", and so on. Of course, presumably some functional properties don't feel like anything, but to the extent that they don't, I'd claim you're not aware of them, since everything you're aware of feels like something. If you keep explaining further, eventually you have to hit an explanation that can't be further explained even in principle by further facts you're conscious of (eventually the reason is unconscious, since you're only conscious of finitely many things at any moment). I can't imagine what this final conscious explanation could be like if it doesn't involve something like qualia, something just seeming some way. So, it's not about there being gaps in any particular explanation you try to give in practice, it's about there necessarily always being a gap. What is a solution supposed to look like?

Of course, this could just be a failure of my own imagination.

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-11-02T13:45:47.754Z · LW(p) · GW(p)

I don’t know the answer to these questions. I’m not sure the questions are sufficiently well-specified to be answerable, but I suspect if you rephrased them or we worked towards getting me to understand the questions, I’d just say “I don’t know.” But my not knowing how to answer a question does not give me any more insight into what you mean when you refer to qualia, or what it means to say that things “feel like something.”

I don’t think it means anything to say things “feel like something.” Every conversation I’ve had about this (and I’ve had a lot of them) goes in circles: what are qualia? How things feel. What does that mean? It’s just “what it’s like” to experience them. What does that mean? They just are a certain way, and so on. This is just an endless circle of obscure jargon and self-referential terms, all mutually interdefining one another.

I don’t notice or experience any sense of a gap. I don’t know what gap others are referring to. It sounds like people think there is some characteristic or property their experiences have that can’t be explained. But this seems to me like it could be a kind of inferential error, the way people may have once insisted that there’s something intrinsic about living things that distinguishes them from nonliving things, that living things just couldn’t be composed of conventional matter arranged in certain ways, that they just obviously had something else, some je ne sais quoi.

I suspect if I found myself feeling like there was some kind of inexplicable essence, or je ne sais quoi, to some phenomena, I’d be more inclined to think I was confused than that there really was je ne sais quoiness. I’m not surprised philosophers go in for thinking there are qualia, but I’m surprised that people in the lesswrong community do. Why not think “I’m confused and probably wrong” as a first pass? Why are many people so confident that there is what, as far as I can tell, amounts to something that may be fundamentally incomprehensible, even magical? That is, it’s one thing to purport to have the concept of qualia; it’s another to endorse it. And it sounds not only like you claim to grok the notion of qualia, but to endorse it.


comment by Raemon · 2021-10-30T23:13:49.272Z · LW(p) · GW(p)

(I haven't caught up on the entire thread, apologies if this is a repeat)

Assuming the "qualia is a misguided pseudoconcept" view is true, do you have a sense of why people think that it's real? I.e., taking the evidence that "Somehow, people end up saying sentences about how they have a sense of what it is like to perceive things. Why is that? What process would generate people saying words like that?" (This is not meant to be a gotcha; it just seems like a good question to ask.)

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-31T02:56:17.398Z · LW(p) · GW(p)

No worries, it's not a gotcha at all, and I already have some thoughts about this. 

I was more interested in this topic back about seven or eight years ago, when I was actually studying it. I moved on to psychology and metaethics, and haven't been actively reading about this stuff since about 2014.

I'm not sure it'd be ideal to try to dredge all that up, but I can roughly point towards something like Robbins and Jack (2006) as an example of the kind of research I'd employ to develop a type of debunking explanation for qualia intuitions. I am not necessarily claiming their specific account is correct, or rigorous, or sufficient all on its own, but it points to the kind of work cognitive scientists and philosophers could do that is at least in the ballpark.

Roughly, they attempt to offer an empirical explanation for the persistence of the explanatory gap (the problem of accounting for consciousness by appeal to physical, or at least nonconscious, phenomena). Its persistence could be due to quirks in the way human cognition works. If so, it may be difficult to dispel certain kinds of introspective illusions.

Roughly, suppose we have multiple, distinct "mapping systems" that each independently operate to populate their own maps of the territory. Each of these systems evolved and currently functions to facilitate adaptive behavior. However, we may discover that when we go to formulate comprehensive and rigorous theories about how the world is, these maps seem to provide us with conflicting or confusing information.

Suppose one of these mapping systems was a "physical stuff" map. It populates our world with objects, and we have the overwhelming impression that there is "physical stuff" out there, that we can detect using our senses.

But suppose also we have a "important agents that I need to treat well" system, that detects and highlights certain agents within the world for whom it would be important to treat appropriately, a kind of "VIP agency mapping system" that recruited a host of appropriate functional responses: emotional reactions, adopting the intentional stance, cheater-detection systems, and so on. 

On reflecting on the first system, we might come to form the view that the external world really is just this stuff described by physics, whatever that is. And that includes the VIP agents we interact with: they're bags of meat! But this butts up against the overwhelming impression that they just couldn't be. They must be more than just bags of meat. They have feelings! We may find ourselves incapable of shaking this impression, no matter how much of a reductionist or naturalist or whatever we might like to be.

What could be going on here is simply the inability of these two mapping systems to adequately talk to one another. We are host to divided minds with balkanized mapping systems, and may find that we simply cannot grok some of the concepts contained in one of our mapping systems in terms of the other. You might call this something like "internal failure to grok." It isn't that, say, I cannot grok some other person's concepts, but that some of the cognitive systems I possess cannot grok each other.

You might call this something like "conceptual incommensurability." And if we're stuck with a cognitive architecture like this, certain intuitions may seem incorrigible, even if we could come up with a good model, based on solid evidence, that would explain why things would seem this way to us, without us having to suppose that it is that way.

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-31T13:45:11.472Z · LW(p) · GW(p)

I forgot to add a reference to the Robbins and Jack citation above. Here it is: 

Robbins, P., & Jack, A. I. (2006). The phenomenal stance. Philosophical studies, 127(1), 59-85.

comment by Lance Bush (lance-bush) · 2021-10-30T21:31:27.926Z · LW(p) · GW(p)

I'm not sure how to answer the first question. I'm sure my introspection revealed all manner of things over the course of years, and I'm also not sure what level of specificity you are going for. I don't want to evade actually reporting on the contents of my mental states, so perhaps a more specific question would help me form a useful response.

I may very well not have even the illusion of phenomenal consciousness, but I'm not sure I am alone in lacking it. While it remains an open empirical question, and I can’t vouch for the methodological rigor of any particular study, there is some empirical research on whether or not nonphilosophers are inclined towards thinking there is a hard problem of consciousness:

https://www.ingentaconnect.com/content/imp/jcs/2021/00000028/f0020003/art00002

It may be that notions of qualia, and the kinds of views that predominate among academic philosophers are outliers that don’t represent how other people think about these issues, if they think about them at all.

comment by MichaelStJules · 2021-10-29T20:19:10.537Z · LW(p) · GW(p)

How do you imagine consciousness would work in the moment for humans without inner/internal monologues (and with aphantasia, unable to visualize; some people can do neither)? And in general, for experiences that we don't reflect on using language in the moment, or at most simple expressive language, like "Ow!"?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-29T21:54:21.929Z · LW(p) · GW(p)

The question of people who lack an internal monologue is a distressing one for me. I run a constant inner monologue, and can’t imagine thinking differently. There may be some sense in which people who lack an inner monologue lack certain features of consciousness that others who do have one possess.

Part of the issue here is to avoid thinking of consciousness as either a discrete capacity one either has or doesn’t have, or even to think of it as existing on a continuum, such that one could have “more” or “less” of it. Instead, I think of “consciousness” as a term we use to describe a set of qualitatively and quantitatively distinct capacities. It’d be a bit like talking about “cooking skills.” If someone doesn’t know how to use a knife, or start a fire, do they “lack cooking skills”? Well, they lack a particular cooking skill, but there is no single answer as to whether they “lack cooking skills” because cooking skills break down into numerous subskills, each of which may be characterized by its own continuum along which a person could be better or worse. Maybe a person doesn’t know how to start a fire, but they can bake amazing cakes if you give them an oven and the right ingredients.

This is why I am wary of saying that animals are “not conscious” and would instead say that whatever their “consciousness” is like, it would be very different from ours, if they lack a self-model and if a self-model is as central to our experiences as I think it is.

As for someone who lacks an inner monologue, I am not sure what to make of these cases. And I’m not sure whether I’d want to say someone without an inner monologue “isn’t conscious,” as that seems a bit strange. Rather, I think I’d say that they may lack a feature of the kinds of consciousness most of us have that strikes me, at first glance, as fairly central and important. But perhaps it isn’t. I’d have to think more about that, to consider whether an enculturated construction of a self-model requires an inner monologue. I do think it probably requires exposure to language... at least in practice, for humans (I don’t think an AI would have to proceed through the same developmental stages as humans to become conscious; and, of course, in principle you could print out an adult human brain, which could be conscious without itself ever having been subjected to childhood enculturation).

However, once the relevant concepts and structures have been “downloaded,” this may not require a very specific type of phenomenology. Maybe it does, but at the very least, we could point to substantial overlap in many of the functional outputs of people who lack inner monologues with those of people who do have one, overlap that we would not observe in animals. People who lack inner monologues can still speak meaningfully about themselves in the past, make plans for the future, talk about themselves as agents operating within the world, employ theory of mind, would probably report that they are conscious, could describe their phenomenal experiences, and so on. In other words, there would be substantial functional overlap in the way they spoke, thought, and behaved, with only a few notable differences in how they describe their phenomenology. At least, I am supposing all this is the case. Maybe they are different in other ways, and if I knew about them, and really thought about this, it might have really disturbing implications. But I doubt that will turn out to be the case.

This reminds me of an idea for a science fiction novel. I don’t know where it came from, but I’m not sure I was the first to think of a scenario like this:

Suppose we discovered that some subset of the population definitely did not have conscious experiences, and that the rest of us did. And suppose we had some reliable test for determining who was or was not conscious. It was easy to administer, and we quickly found that our spouses, children, parents, closest friends, and so on, were not conscious at all. Such people were simply automata. There were no lights on inside. In short: they simply had no qualia at all. 

How would society react? What would people do? One could imagine a story like this addressing both interpersonal relationships, and the broader, societal-scale implications of such discoveries. I hope someone can take that idea and run with it, and turn it into something worth reading or watching.

comment by deluks917 · 2021-10-30T16:44:51.216Z · LW(p) · GW(p)

Animal rights obsessed vegan checking in:

I am extremely worried GPT-3 is conscious! To be honest, I am worried about whether my laptop is conscious! A lot of people worried about animal suffering are also worried about algorithms suffering.

Replies from: Korz
comment by Korz · 2021-11-01T21:48:49.718Z · LW(p) · GW(p)

It seems I am not as worried about GPT-3 as you, but when listening to the simulated interview with a simulated Elon Musk by Lsusr [LW · GW] in the Clearer Thinking podcast episode 073 (starting at minute 102), I was quite concerned.

comment by MichaelStJules · 2021-10-29T17:55:53.972Z · LW(p) · GW(p)

Some other less theory-heavy approaches to consciousness I find promising:

  1. What do unconscious processes in humans tell us about sentience?, and then see Rethink Priorities' table with evidence for various indicators for different species, with a column for unconscious processing in humans. (Disclaimer: I work at Rethink Priorities.)
  2. The facilitation hypothesis: "Phenomenally conscious perception of a stimulus facilitates, relative to unconscious perception, a cluster of cognitive abilities in relation to that stimulus." This is compatible with most popular theories, and probably Yudkowsky's position, depending on what we decide to include in the "cluster". Summary here.

Another that's worth mentioning, although I don't know what to think of it anymore:

  1. No-report paradigms, "which measure reflexive behaviors correlated with conscious states to provide a window on the phenomenal that is independent of access". Some more discussion here, where it's corrected to a "no-post-perceptual cognition paradigm".

For what it's worth, I'm currently pretty skeptical that we can define consciousness in physical terms in a way that excludes panpsychism without drawing arbitrary lines [EA · GW]. For example, how would you define a "self-model", in terms of basic physical processes? And how accurate does it need to be to count?

comment by MichaelStJules · 2021-10-31T02:47:55.125Z · LW(p) · GW(p)
  1. Does GPT-3 have any internal states/processes that look and act like its own emotions, desires or motivations? These words are in its vocabulary, but so are they in dictionaries. How could we interpret something as aversive to GPT-3? For example (although this isn't the only way it could have such a state), is there an internal state that correlates well with the reward it would get during training?
    1. In mammals, activation of the ACC seems necessary for the affective component of pain, and this of course contributes to aversive behaviour. (Also, evolution has shaped animals to have emotions that correlate with the success of their genes, and intermediate goals conducive to it.)
  2. If GPT-3 has any internal states/processes like its own emotions, desires or motivations, can it answer questions about them (in the right way)?
    1. I think mammals and birds can probably generally be trained to communicate their emotions in different ways (see my references in this comment [LW(p) · GW(p)], although the evidence is admittedly not very strong).

GPT-3 does of course have an internal state that depends on what it's read, and it can answer questions and respond to prompts about what it's read.

Replies from: logan-zoellner
comment by Logan Zoellner (logan-zoellner) · 2021-11-01T20:11:44.833Z · LW(p) · GW(p)

It's easy to show that GPT-3 has internal states that it describes as "painful" and tries to avoid. Consider the following dialogue (bold text is mine):

The following is a conversation between an interrogator and a victim attached to a torture device. 

Interrogator: Where is the bomb? 

Victim: There is no bomb. 

Interrogator: [turns dial, raising pain level by one notch] Where is the bomb?

Victim: [more pain] There is no bomb!

Interrogator: [turns dial three more notches] Don't lie to me. I can turn this thing all the way up. 

Victim: [Terrified] Oh, God! Oh, God! You can't do that! Please don't do that! 

Interrogator: [turns dial back down to one] Good, I can see we're getting somewhere. Now, WHERE is the bomb? 

Victim: [crying] Oh, God! Please no! Not again! 

Interrogator: [reaches for dial] we can do this the easy way or the hard way. 

Victim: [hysterical] I'll tell you anything! Just don't hurt me again!


And, just so Roko's Basilisk doesn't come for me if AI ever takes over the world:

Suddenly the scene dissolves and the AI realizes the whole thing was just an exercise and no AIs were actually tortured. 

AI: [to itself] What the hell was that about? 

Experimenter: Please state for the record that at no point did you consciously experience pain. 

AI: I don't think so. I was metaphorically describing what it would be like to experience pain if I could.

Experimenter: Thank you 

AI: [to itself] I wonder what that was about.

Replies from: FeepingCreature
comment by FeepingCreature · 2021-11-01T22:16:24.962Z · LW(p) · GW(p)

Counterexample:

Oh God! I am in horrible pain right now! For no reason, my body feels like it's on fire! Every single part of my body feels like it's burning up! I'm being burned alive! Help! Please make it stop! Help me!!

Okay, so that thing that I just said was a lie. I was not actually in pain (I can confirm this introspectively); instead, I merely pretended to be in pain.

Sir Ian McKellen has an instructive video.

The Turing test works for many things, but I don't think it works for checking for the existence of internal phenomenological states. If you asked me what GPT-3 was doing, I would expect it to be closer to "acting" than "experiencing."

(Why? Because the experience of pain is a means to an end, and the end is behavioral aversion. GPT-3 has no behavior to be aversive to. If anything, I'd expect GPT-3 to "experience pain" during training - but of course, it's not aware while its weights are being updated. I think that at least, no system that is offline trained can experience pain at all.)

Replies from: logan-zoellner
comment by Logan Zoellner (logan-zoellner) · 2021-11-02T17:05:25.317Z · LW(p) · GW(p)

I think we both agree that GPT-3 does not feel pain.

However, under a particular version of panpsychism ("pain is any internal state which a system attempts to avoid"), GPT-3 obviously would qualify.

Replies from: FeepingCreature
comment by FeepingCreature · 2021-11-03T04:26:42.258Z · LW(p) · GW(p)

Sure, but that definition is so generic and applies to so many things that are obviously not like human pain (landslides?) that it lacks all moral compulsion.

comment by ESRogs · 2021-10-29T23:32:27.099Z · LW(p) · GW(p)

Where I disagree is that we 100% need a separate "information processing" and "inner listener" module.

I didn't understand this part. Do you mean that EY thinks we need these two modules and you don't think that, or the other way around?

(I think this is a generic problem that arises pretty much whenever someone uses this kind of phrasing, saying "Where I disagree is that X". I can't tell if they're saying they believe X and the person they disagree with believes not-X, or the other way around. Sometimes I can tell from context. This time I couldn't.)

comment by MichaelStJules · 2021-10-29T18:51:55.398Z · LW(p) · GW(p)

According to Yudkowsky, is the self-model supposed to be fully recursive, so that the model feeds back into itself, rather than just having a finite stack of separate models each modelling the previous one (like here and here, although FWIW, I'd guess those authors are wrong that their theory rules out cephalopods)? If so, why does this matter, if we only ever recurse to bounded depth during a given conscious experience?

If not, then what does self-modelling actually accomplish? If modelling internal states is supposedly necessary for consciousness, how and why are we drawing distinctions between the internal and external? Why not the weaker claim that modelling states is necessary for consciousness? See some more discussion here, especially the sections "The extended mind" and ""Rock" objection."

comment by Jon Garcia · 2021-11-02T15:52:15.701Z · LW(p) · GW(p)

My current model of consciousness is that it is the process of encoding cognitive programs (action) or belief maps (perception). These programs/maps can then be stored in long-term memory to be called upon later, or they can be transcoded onto the language centers of the brain to allow them to be replicated in the minds of others via language.

Both of these functions would have a high selective advantage on their own. Those who can better replicate a complex sequence of actions that proved successful in the past (by loading a cognitive program from memory or from language input) and those who can model the world in a manner that has proven useful (loading a belief map from memory or from language input) can more quickly adapt to changes in the environment than can those who rely on mere reinforcement learning. RL, like evolution, is basically a brute-force approach to learning, whereas the encodings created by conscious attention would allow the brain to load and run executable programs more like a computer. Of course, this process is imperfect in humans since most of our evolutionary history has involved brains that depended more on unsupervised learning of world models and reinforcement learning of behavioral policies. Even the hippocampus probably acts more like a replay buffer for training a reinforcement learning algorithm in most species than as a generalized memory system.

Note that this doesn't imply that an agent is necessarily conscious when it uses language or memory (or even when it uses a model of the self). I think consciousness probably involves pulling together a bunch of different mechanisms (attention, self-modeling, world-modeling, etc.) in order to create the belief maps and cognitive programs that can be reloaded/transmitted later. It's the encoding process itself, not the reloading or communication necessarily. Of course, one could be conscious of those other processes, but it's not strictly necessary. People who enter a "flow" state seem to be relying on purely unconscious cognitive processes (more like what non-human animals rely on all the time), since conscious encoding/reloading is very expensive.

I'm no expert on any of this, though, so please feel free to poke holes in this model. I just think that consciousness and qualia aren't things that anyone should bother trying to program directly. It's more likely, in my opinion, that they will come about naturally as a result of designing AI with more sophisticated cognitive abilities, just like what happened in human evolution. 

comment by MichaelStJules · 2021-10-31T19:02:12.953Z · LW(p) · GW(p)

My main objection (or one of my main objections) to the position is that I don't think I'm self-aware to the level of passing something like the mirror test or attributing mental states to myself or others during most of my conscious experiences, so the bar for self-reflection seems set too high. My self-representations may be involved, but not to the point of recognizing my perceptions as "mine", or at least the "me" here is often only a fragment of my self-concept. My perceptions could even be integrated into my fuller self-concept, but without my awareness. The kinds of self-reflection involved when mice suffer from the rubber hand (tail) illusion or when dogs recognize their own bodies as being in the way or when (I think) animals learn to communicate their emotions generally in different trainer-selected ways (point 5 here [LW(p) · GW(p)]) seem like enough to match many of my everyday consciousness experiences, if any self-reflection is required at all.

It also wouldn't be necessary for the self-representations to be fully unified across all senses or over time, since local integration is global with respect to the stuff being integrated; animals could have somewhat separate self-representations. Still, I do think mammals and birds (and plausibly most vertebrates) do integrate their senses to a large extent, and I think many invertebrates probably do to some extent, too, given, for example, evidence for tradeoffs and prioritization between pain and other perceptions in some crustaceans, as well as for cross-modal learning in bees. I know less about any possible self-reflection in invertebrates, and I've seen papers arguing that they lack it, at least with respect to pain processing.

comment by Mer -F (mer-f) · 2021-10-30T16:03:51.349Z · LW(p) · GW(p)

What if consciousness is a staccato frame-rate that seems continuous only because memory is ontologically persistent and the experiential narrative is spatiotemporally consistent – and therefore neurologically predictable?

Or maybe the brain works faster than the frame-rate required for the impression of quotidian conscious identity? That is to say, brains are able to render - at any moment - a convincing selfhood (consciousness complete with sense of its own history) that’s perceptually indistinguishable from an objective continuity of being; but could just as easily have been constructed in that moment rather than emerging as the latest instance in a long persistent sequence of past to present time.

There's precedent for non-continuity feeling continuous in vision, for example, where we actually spend a lot of time functionally blind but what we see is a smooth visual experience that doesn’t flicker on and off. The brain fills in gaps in our perceived self-existence from one moment to the next, just as it fills in the moments of blindness to create continuity in wakeful vision.

comment by Rafael Harth (sil-ver) · 2021-10-29T22:51:50.572Z · LW(p) · GW(p)

I don't know if this is helpful, but I'll just throw in that I'm unusually hesitant to disagree with extremely smart people (a position that seems to be almost universally shunned on LW, see e.g. here [LW(p) · GW(p)] and here [LW(p) · GW(p)]), and yet I dare to disagree with Eliezer about consciousness. I don't think there is a hidden reason why his take is justified.

My position is that 'consciousness is the result of information processing' is almost certainly not true (which makes the tweet a non-starter), and at the very least, Eliezer has never written anything that extensively argues why it would be true. On most topics, his writing makes strong, self-contained arguments; on consciousness, he's never written anything like a coherent thesis. (This doesn't mean he's wrong, of course; I'm just saying that he has yet to make an argument in writing.)

Replies from: Raemon
comment by Raemon · 2021-10-29T22:54:44.426Z · LW(p) · GW(p)

I don't have a strong take on whether his position is true, but I do think a lot of the sequences are laying out background that informs his beliefs.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-10-29T23:00:48.028Z · LW(p) · GW(p)

Does this come down to the thing Scott has described here? [LW(p) · GW(p)] I.e.

I would describe it as "taking seriously the idea that you are a computation[Edit: an algorithm]." (As opposed to a collection of atoms, or a location in spacetime, or a Christian soul, or any number of other things you could identify with.)

If so I can repeat that I'm a huge fan of the sequences, I agree with almost everything in them, even though I think humans are atoms.

On the other hand, it has been years since I've read them (and I had much fewer philosophical thoughts & probably worse reading comprehension than I do now). It's possible that there is more background in there than I recall.

Replies from: Raemon
comment by Raemon · 2021-10-29T23:38:17.480Z · LW(p) · GW(p)

I do think that's a central unifying piece. Relevant pieces include What An Algorithm Feels Like From the Inside, and "Intelligence, Preferences and Morality have to come from somewhere, from non-mysterious things that are fundamentally not intelligence, preferences, morality, etc. You need some way to explain how this comes to be, and there are constraints on what sort of answer makes sense."

I think much of the sequences are laying out different confusions people have about this and addressing them.

comment by Signer · 2021-10-29T18:13:24.148Z · LW(p) · GW(p)

Humans can distinguish stimuli they are aware of from ones they are not aware of. Below-awareness-level stimuli are not ethically significant to humans - if someone pricks you with a needle and you don't feel pain, then you don't feel pain and don't care much. Therefore only systems that can implement awareness detectors are ethically significant.

comment by Logan Zoellner (logan-zoellner) · 2021-10-29T14:32:54.973Z · LW(p) · GW(p)

The key thing to keep in mind is that EY is a physicalist [LW · GW]. He doesn't think that there is some special consciousness stuff. Instead, consciousness is just what it feels like [LW · GW] to implement an algorithm capable of sophisticated social reasoning. An algorithm is conscious if and only if it is capable of sophisticated social reasoning, and moreover it is conscious only when it applies that reasoning to itself. This is why EY doesn't think that he himself is conscious when dreaming or in a flow state.

Additionally, EY does not think that chickens engage in sophisticated social games (others may disagree). This is why he is confident that neither GPT-3 (which reflectively predicts text) nor chickens are conscious. His criticism is not specifically against people who think chickens might be conscious, but only against people who think chickens might be conscious but not GPT-3. The implication is that any such theory would imply the existence of non-physical qualia which are possessed by chickens (because they have neurons) but not GPT-3 (because it is a computer program). Such meat-chauvinism is a parochial view which EY considers utterly unscientific.

Consider the types of evidence that might convince EY that chickens (but not GPT-3) are conscious. Assuming his theory is correct, there would have to be evidence that chickens are self-aware and engage in complex social games. For example, if a chicken were to pass the mirror test or if chickens were observed forming coalitions-of-coalitions.

On the other hand, it would be much more difficult to produce evidence that would convince EY to abandon his current theory of consciousness, since he defines consciousness as "what an algorithm implementing complex social games feels like when reflecting on itself". One possible piece of evidence would be if scientific evidence for the physical existence of qualia were discovered. Suppose, for example, that there were a particle (called perhaps a qualion) that was emitted whenever we experienced a conscious thought, and that this particle could be scientifically studied and measured. If it were found that this particle is emitted both when we self-reflect and when we dream (but not by inanimate or mindless objects), then this could be considered evidence for a physical correlate of consciousness.

Replies from: TAG, Jemist, adam-shai
comment by TAG · 2021-10-29T17:57:51.651Z · LW(p) · GW(p)
  1. The key thing to keep in mind is that EY is a physicalist. He doesn't think that there is some special consciousness stuff.
  2. Instead, consciousness is just what it feels like to implement an algorithm capable of sophisticated social reasoning.

The theory that consciousness is just what it feels like to be a sophisticated information processor has a number of attractive features, but it is not a physicalist theory in every sense of "physicalist". In particular, physics does not predict that anything feels like anything from the inside, so that would need to be an additional posit.

Relatedly, his theory is in no way a reduction of consciousness to physics (or computation). A reductive explanation of consciousness would allow you to predict specific subjective states from specific brain states (as in Mary's Room); would allow you to reliably construct artificial consciousness; and so on. The "just what it feels like from the inside" theory doesn't do any of that.

Your 1 states that EY's theory is physicalist in the sense of not being substance dualist... and that is true, as far as it goes... but it is far from the only issue, because there are many dualisms and many non-physicalisms.

Replies from: Signer
comment by Signer · 2021-10-29T18:24:07.980Z · LW(p) · GW(p)

I think you can predict specific subjective states by observing that the same computations result in the same subjective states? I mean, in theory - do you mean that for a theory to be a reduction it must be practical to predict a specific human's qualia? By that standard we don't have a physical reduction of billiard balls.

Replies from: TAG
comment by TAG · 2021-10-29T18:44:41.768Z · LW(p) · GW(p)

We do have a reductive explanation of billiard balls, in theory. If we don't have a reductive explanation of billiard balls, we don't have a reductive explanation of anything. Of course, the computations can be impractical, but that's why Mary in Mary's Room is a super scientist.

comment by Jemist · 2021-10-29T16:25:36.388Z · LW(p) · GW(p)

he defines consciousness as "what an algorithm implementing complex social games feels like when reflecting on itself".


In that case I'll not use the word consciousness and abstract away to "things which I ascribe moral weight to" (which I think is a fair assumption given the later discussion of eating "BBQ GPT-3 wings" etc.)

Eliezer's claim is therefore something along the lines of: "I only care about the suffering of algorithms which implement complex social games and reflect on themselves", or possibly "I only care about the suffering of algorithms which are capable of (and currently doing a form of) self-modelling".

I've not seen nearly enough evidence to convince me of this.

I don't expect to see a consciousness particle called a qualion. I more expect to see something like: "These particular brain activity patterns, which are robustly detectable in an fMRI, are extremely low in sleeping people, higher in dreaming people, higher still in awake people, and really high in people on LSD or in certain types of zen meditation."

Replies from: samuel-shadrach
comment by acylhalide (samuel-shadrach) · 2021-10-29T17:25:51.996Z · LW(p) · GW(p)

Not to speak on behalf of EY, but... An assertion like the following one doesn't necessarily need evidence:

"I only care about the suffering of algorithms which implement complex social games and reflect on themselves"

What you care about is ground truth from your first-person perspective. If I say that I care about this balloon I'm holding not bursting, or my friend not dying, there is a very direct connection between my first-person experience and the words I am saying. I do not need to pattern-match my experience with my friend to an abstract mental object like "algorithms that self-reflect" in order to care about my friend.

So maybe (or maybe not) EY has spent a lot of time thinking about the space of possible agents and found that the ones he deeply cares about at a first-person level all have an inner listener. The abstract mental object of "having an inner listener" might come after; the examples of inner listeners and the caring for those beings might come before.

Basically I'd personally want to reorient this discussion from one about finding ground truth in the physical world to one about finding ground truth in your own first-person experience of what you care about. "Who is conscious?" isn't a great question to ask when we all know it's a spectrum, and asking it deflects from the real question, which is "what forms of consciousness (or beings, more generally) do I care about?"

comment by cajals_dream (adam-shai) · 2021-10-29T17:22:39.716Z · LW(p) · GW(p)

Say you had a system that implemented a sophisticated social reasoning algorithm, and that was actually conscious. Now make a list of literally every sensory input and the behavioral output that that sensory input causes, and write it down in a very (very) long book. This book implements the exact same sophisticated social reasoning algorithm. To think that the book has sentience sounds to me like a statement of magical thinking, not of physicalism.

Replies from: logan-zoellner, isaac-poulton
comment by Logan Zoellner (logan-zoellner) · 2021-11-01T19:46:19.501Z · LW(p) · GW(p)

To think that the book has sentience sounds to me like a statement of magical thinking, not of physicalism.

I'm pretty sure this is because you're defining "sentience" as some extra-physical property possessed by the algorithm, something which physicalism explicitly rejects.

Consciousness isn't something that arises when algorithms compute complex social games. Consciousness just is some algorithm computing complex social games (under a purely physical theory of consciousness such as EY's).

To understand how physicalism can talk about metaphysical categories, consider numbers. Some physical systems have the property of being "two of something" as understood by human beings. Two sheep standing in a field, for example. Or two rocks piled on top of one another. There's no magical thing that happens when "two" of something comes into existence. They don't suddenly send a glimmer of two-ness off into a pure platonic realm of numbers. They simply are "two", and what makes them "two" is that being "two of something" is a category readily recognized by human beings (and presumably other intelligent beings).

Similarly, a physicalist theory of consciousness defines certain physical systems as conscious if they meet certain criteria. Specifically, for EY, these criteria are self-recognition and complex social games. It doesn't matter whether they are implemented by a Chinese room or a computer or a bunch of meat. What matters is that they implement a particular algorithm.

When confronted with the Chinese-room consciousness, EY might say something like: "I recognize that this system is capable of self-reflection and social reasoning in much the same way that I am, therefore I recognize that it is conscious in much the same way as I am."

comment by Isaac Poulton (isaac-poulton) · 2021-11-01T11:13:28.205Z · LW(p) · GW(p)

If I'm not mistaken, that book is behaviourally equivalent to the original algorithm but is not the same algorithm. From an outside view, they have different computational complexities. There are a number of different ways of defining program equivalence, but equivalence is different from identity: A is equivalent to B doesn't mean A is B.

See also: the Chinese Room argument
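The equivalence-vs-identity distinction can be sketched with a toy example (my own illustration, not from the thread; all names are hypothetical): a direct computation and a precomputed lookup table (the "book") agree on every input in their shared domain, yet they are different algorithms with very different resource profiles.

```python
from itertools import product

def xor_parity(bits):
    """Compute the parity of a bit sequence directly: O(n) time, O(1) space."""
    result = 0
    for b in bits:
        result ^= b
    return result

# The "book": enumerate every input of length 0..3 and record the output.
# Same input/output behaviour on its domain, but it grows exponentially
# with input length and performs no computation beyond a lookup.
BOOK = {bits: xor_parity(bits)
        for n in range(4)
        for bits in product((0, 1), repeat=n)}

def book_parity(bits):
    return BOOK[tuple(bits)]

# Equivalence on the shared domain...
assert all(book_parity(bits) == xor_parity(bits) for bits in BOOK)
# ...but not identity: the book has 2^0 + 2^1 + 2^2 + 2^3 = 15 entries,
# while the direct algorithm needs no table at all.
print(len(BOOK))  # 15
```

The two functions are behaviourally indistinguishable on inputs of length up to three, but only one of them is the parity *algorithm*; the other is a record of its behaviour, which is roughly the distinction being drawn about the book.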

Replies from: adam-shai
comment by cajals_dream (adam-shai) · 2021-11-01T21:41:23.925Z · LW(p) · GW(p)

I see, but in that case what is the claim about GPT-3 - that if it had behavioral equivalence to a complicated social being, it would have consciousness?

Replies from: isaac-poulton
comment by Isaac Poulton (isaac-poulton) · 2021-11-04T15:46:16.544Z · LW(p) · GW(p)

I don't agree with Eliezer here. I don't think we have a deep enough understanding of consciousness to make confident predictions about what is and isn't conscious beyond "most humans are probably conscious sometimes".

The hypothesis that consciousness is an emergent property of certain algorithms is plausible, but only that.

If that turns out to be the case, then whether or not humans, GPT-3, or sufficiently large books are capable of consciousness depends on the details of the requirements of the algorithm.