I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

post by J Bostock (Jemist) · 2021-10-29T11:09:20.559Z · LW · GW · 120 comments

Contents

  "Higher" Consciousness
    Asleep Human < Awake Human < Human on Psychedelics/Zen Meditation
  Inner Listeners
  Confidence
  Conclusions

From Twitter:

I'd say that I "don't understand" why the people who worry that chickens are sentient and suffering, don't also worry that GPT-3 is sentient and maybe suffering; but in fact I do understand, it's just not a charitable understanding. Anyway, they're both unsentient so no worries.

His overall thesis is spelt out in full here, but I think the key passages are these:

What my model says is that when we have a cognitively reflective, self-modely thing, we can put very simple algorithms on top of that — as simple as a neural network having its weights adjusted — and that will feel like something, there will be something that it is like that thing to be, because there will be something self-modely enough to feel like there’s a thing happening to the person-that-is-this-person.

So I would be very averse to anyone producing pain in a newborn baby, even though I’d be truly shocked (like, fairies-in-the-garden shocked) to find them sentient, because I worry that might lose utility in future sentient-moments later.

I’m not totally sure people in sufficiently unreflective flow-like states are conscious, and I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious.

I'm currently very confident about the following things, and I'm pretty sure EY is too:

  1. Consciousness (having qualia) exists and humans have it
  2. Consciousness isn't an epiphenomenon
  3. Consciousness is a result of how information is processed in an algorithm, in the most general sense: a simulation of a human brain is just as conscious as a meat-human

EY's position seems to be that self-modelling is both necessary and sufficient for consciousness. But I don't ever see him putting forward a highly concrete thesis for why this is the case. He is correct that his model has more moving parts than other models. But having more moving parts only makes sense if it's actually good at explaining observed data. And we only have one datapoint, which is that adult humans are conscious. Or do we?

"Higher" Consciousness

We actually have a few datapoints here. An ordering of consciousness as reported by humans might be:

Asleep Human < Awake Human < Human on Psychedelics/Zen Meditation

I don't know if EY agrees with this. From his beliefs he might say something along the lines of "having more thoughts doesn't mean you're more conscious". Given his arguments about babies, I'm pretty sure he thinks that you can have memories of times when you weren't conscious, and then consciously experience those things in a sort of "second hand" way by loading up those memories.

Now a lot of Zen meditation involves focusing on your own experiences, which seems like self-modelling. However, something else I notice here is the common experience of "ego death" while using psychedelics and in some types of meditation. Perhaps EY has a strong argument that this in fact requires more self-modelling than previous states. On the other hand, he might argue that consciousness is on/off, and that the amount of experience is unrelated to whether or not those experiences are being turned into qualia.

I'm trying to give potential responses to my arguments, but I don't want to strawman EY, so I ought to point out that there are lots of other counter-arguments he might have, which might be more insightful than my imagined ones.

Inner Listeners

EY talks a lot about "inner listeners", and mentions that a good theory should be able to have them arise naturally in some way. I agree with this point, and I do agree that his views provide a possible explanation as to what produces an inner listener.

Where I disagree is with the claim that we definitely need separate "information processing" and "inner listener" modules. The chicken-conscious, GPT-3-unconscious model seems to make sense from the following perspective:

Some methods of processing input data cause consciousness and some don't. We know that chickens process input data in a very similar way to humans (by virtue of being made of neurons) and we know that GPT-3 doesn't process information in that way (by virtue of not being made of neurons). I guess this is related to the binding problem.

Confidence

But what surprises me the most about EY's position is his confidence in it. He claims to have never seen any good alternatives to his own model. But that's simply a statement about the other beliefs he's seen, not a statement about all hypothesis-space. I even strongly agree with the first part of his original tweet! I do suspect most people who believe chickens are conscious but GPT-3 isn't believe it for bad reasons! And the quality of replies is generally poor.

EY's argument strikes me as oddly specific. There are lots of things which human brains do (or which we're uncertain whether they do) that are kind of weird.

EY has picked out one thing (self-modelling) and decided that it alone is the source of consciousness. Whether or not he has gone through all the weird and poorly-understood things brains do and ruled them out, I don't know. Perhaps he has. But he doesn't mention it in the thesis he links to in order to explain his beliefs. He doesn't even mention that he's conducted such a search, the closest thing to that being references to his own theory treating qualia as non-mysterious (which is true). I'm just not convinced without him showing his working!

Conclusions

I am confused, and at the end of the day that is a fact about me, not about consciousness. I shouldn't use my own bamboozlement as strong evidence that EY's theory is false. On the other hand, the only evidence available (in the absence of experimentation) for an argument not making sense is that people can't make sense of it.

I don't think EY's theory of consciousness is completely absurd. I put about 15% credence in it. I just don't see what he's seeing that elevates it to being totally overwhelmingly likely. My own uncertainty is primarily due to the lack of truly good explanations I've seen of the form "X could cause consciousness", combined with the lack of strong arguments made of the form "Here's why X can't be the cause of consciousness". Eliezer sort of presents the first but not the second.

I would love for someone to explain to me why chickens are strongly unlikely to be conscious, so I can go back to eating KFC. I would also generally like to understand consciousness better.

120 comments

Comments sorted by top scores.

comment by So8res · 2021-10-30T04:07:30.249Z · LW(p) · GW(p)

Instrumental status: off-the-cuff reply, out of a wish that more people in this community understood what the sequences have to say about how to do philosophy correctly (according to me).

EY's position seems to be that self-modelling is both necessary and sufficient for consciousness.

That is not how it seems to me. My read of his position is more like: "Don't start by asking 'what is consciousness' or 'what are qualia'; start by asking 'what are the cognitive causes of people talking about consciousness and qualia', because while abstractions like 'consciousness' and 'qualia' might turn out to be labels for our own confusions, the words people emit about them are physical observations that won't disappear. Once one has figured out what is going on, they can plausibly rescue the notions of 'qualia' and 'consciousness', though their concepts might look fundamentally different, just as a physicist's concept of 'heat' may differ from that of a layperson. Having done this exercise at least in part, I (Nate's model of Eliezer) assert that consciousness/qualia can be more-or-less rescued, and that there is a long list of things an algorithm has to do to 'be conscious' / 'have qualia' in the rescued sense. The mirror test seems to me like a decent proxy for at least one item on that list (and the presence of one might correlate with a handful of others, especially among animals with similar architectures to ours)."

An ordering of consciousness as reported by humans might be:

Asleep Human < Awake Human < Human on Psychedelics/Zen Meditation

I don't know if EY agrees with this.

My model of Eliezer says "Insofar as humans do report this, it's a fine observation to write down in your list of 'stuff people say about consciousness', which your completed theory of consciousness should explain. However, it would be an error to take this as much evidence about 'consciousness', because it would be an error to act like 'consciousness' is a coherent concept when one is so confused about it that they cannot describe the cognitive antecedents of human insistence that there's an ineffable redness to red."

But what surprises me the most about EY's position is his confidence in it.

My model of Eliezer says "The type of knowledge I claim to have, is knowledge of (at least many components of) a cognitive algorithm that looks to me like it codes for consciousness, in the sense that if you were to execute it then it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same way that we are. From this epistemic vantage point, I can indeed see clearly that consciousness is not much intertwined with predictive processing, nor with the "binding problem", etc. I have not named the long list of components that I have compiled, and you, who lack such a list, may well not be able to tell what consciousness is or isn't intertwined with. However, you can still perhaps understand what it would feel like to believe you can see (at least a good part of) such an algorithm, and perhaps this will help you understand my confidence. Many things look a lot more certain, and a lot less confusing, once you begin to see how to program them."

Replies from: acgt, Jemist, Ape in the coat, MichaelStJules, EI
comment by acgt · 2022-02-20T22:58:00.535Z · LW(p) · GW(p)

Your comments here and some comments Eliezer had made elsewhere seem to imply he believes he has at least in large part “solved” consciousness. Is this fair? And if so, is there anywhere he has written up this theory/analysis in depth? Because surely, if correct, this would be hugely important.

I’m kind of assuming that whatever Eliezer’s model is, the bulk of the interestingness isn’t contained here and still needs to be cashed out, because the things you/he list (needing to examine consciousness through the lens of the cognitive algorithms causing our discussions of it, the centrality of self-modely reflexive things to consciousness, etc.) are already pretty well explored and understood in mainstream philosophy, e.g. Dennett.

Or is the idea here that Eliezer believes some of these existing treatments (maybe modulo some minor tweaks and gaps) are sufficient for him to feel like he has answered the question to his own satisfaction?

Basically struggling to understand which of the 3 below is wrong, because all three being jointly true seems crazy:

  1. Eliezer has a working theory of consciousness
  2. This theory differs in important ways from existing attempts
  3. Eliezer has judged that it is not worthwhile writing this up
comment by J Bostock (Jemist) · 2021-10-30T18:40:20.393Z · LW(p) · GW(p)

I'm confident your model of Eliezer is more accurate than mine.

Neither the Twitter thread nor his other writings originally gave me the impression that he had a model in that fine-grained detail. I was mentally comparing his writings on consciousness to his writings on free will. Reading the latter made me feel like I strongly understood free will as a concept, and since then I have never been confused; it genuinely reduced free will as a concept in my mind.

His writings on consciousness have not done anything more than raise that model to the same level of possibility as a bunch of other models I'm confused about. That was the primary motivation for this post. But now that you mention it, if he genuinely believes that he has knowledge which might bring him (or others) closer to programming a conscious being, I can see why he wouldn't share it in high detail.

comment by Ape in the coat · 2021-10-30T08:38:01.802Z · LW(p) · GW(p)

While I agree with mostly everything your model of Eliezer said, I do not feel less confused about how Eliezer arrives at the conclusion that most animals are not conscious. Granted, I may be, and probably actually am, lacking an important insight in the matter, but then it will be this insight that allows me to become less confused, and I wish Eliezer shared it.

When I'm thinking about a thought process that allows one to arrive at such a conclusion, I imagine something like this. Consciousness is not fundamental but it feels like it is. That's why we intuitively apply concepts such as quantity to consciousness, thinking about more or less conscious creatures as being more or less filled with conscious-fluid, as we previously thought about phlogiston or caloric fluid. But this intuition is confused and leads us astray. Consciousness is a result of a specific cognitive algorithm. This algorithm can either be executed or not. There are good reasons to assume that such an algorithm would be developed by evolution only among highly social animals, as such conditions lead to the necessity of modelling other creatures modelling yourself.

And I see an obvious problem with this line of thought. Reversed confusion isn't insight [LW · GW]. Our confused intuition which leads us to quantifying consciousness may be wrong, but it isn't necessarily wrong. If anything, the idea that consciousness isn't quantifiable is also originally based on the idea of consciousness being fundamental. Think about the ancient Hebrews who claimed that animals didn't have souls. There are lots of bad reasons to think that farm animals are ethically irrelevant; indeed it would be super convenient, considering how tasty their meat is. That doesn't automatically mean that they are ethically relevant, just hints at the possibility.

We can think about hearing, or vision, or the sense of smell. They are not fundamental. They are the result of a specific algorithm executed by our brain. Yet we can quantify them. Quantifying them actually makes a lot of sense, considering that evolution works incrementally. Why can't it be the same for consciousness?

Replies from: So8res
comment by So8res · 2021-10-30T14:41:34.764Z · LW(p) · GW(p)

I don't think the thought process that allows one to arrive at (my model of) Eliezer's model looks very much like your 2nd paragraph. Rather, I think it looks like writing down a whole big list of stuff people say about consciousness, and then doing a bunch of introspection in the vicinity, and then listing out a bunch of hypothesized things the cognitive algorithm is doing, and then looking at that algorithm and asking why it is "obviously not conscious", and so on and so forth, all while being very careful not to shove the entire problem under the rug in any particular step (by being like "and then there's a sensor inside the mind, which is the part that has feelings about the image of the world that's painted inside the head" or whatever).

Assuming one has had success at this exercise, they may feel much better-equipped to answer questions like "is (the appropriate rescuing of) consciousness more like a gradient quantity or more like a binary property?" or "are chickens similarly-conscious in the rescued sense?". But their confidence wouldn't be coming from abstract arguments like "because it is an algorithm, it can either be executed or not" or "there are good reasons to assume it would be developed by evolution only among social animals"; their confidence would be coming from saying "look, look at the particular algorithm, look at things X, Y, and Z that it needs to do in particular, there are other highly-probable consequences of a mind being able to do X, Y, and Z, and we definitively observe those consequences in humans, and observe their absence in chickens."

You might well disbelieve that Eliezer has such insight into cognitive algorithms, or believe he made a mistake when he did his exercise! But hopefully this sheds some light on (what I believe is) the nature of his confidence.

comment by MichaelStJules · 2021-10-30T08:13:09.836Z · LW(p) · GW(p)

Thanks, this is helpful.

what are the cognitive causes of people talking about consciousness and qualia

Based on the rest of your comment, I'm guessing you mean talk about consciousness and qualia in the abstract and attribute them to themselves, not just talk about specific experiences they've had.

a cognitive algorithm that looks to me like it codes for consciousness, in the sense that if you were to execute it then it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same way that we are

Why use the standard of claiming to be conscious/have qualia? That is one answer that gets at something that might matter, but why isn't that standard too high?

For example, he wrote:

I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious.

If this proposition is false, we need to allow unsymbolized (non-verbal) ways to self-attribute consciousness for self-attributing consciousness to matter in itself, right? Would (solidly) passing the mirror test be (almost) sufficient at this point? There's a visual self-representation, and an attribution of the perception of the mark to this self-representation. What else would be needed?

Would it need to non-symbolically self-attribute consciousness generally, not just particular experiences? How would this work?

If the proposition is true, doesn't this just plainly contradict our everyday experiences of consciousness? I can direct my attention towards things other than wondering whether or not I'm conscious (and towards things other than and unrelated to my inner monologue), while still being conscious, at least in a way that still matters to me that I wouldn't want to dismiss. We can describe our experiences without wondering whether or not we're having (or had) them.

it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same way that we are

What kinds of reasons? And what would being correct look like?

If unsymbolized self-attribution of consciousness is enough, how would we check just for it? The mirror test?

Replies from: So8res
comment by So8res · 2021-10-30T15:39:21.161Z · LW(p) · GW(p)

Based on the rest of your comment, I'm guessing you mean talk about consciousness and qualia in the abstract and attribute them to themselves, not just talk about specific experiences they've had.

If I were doing the exercise, all sorts of things would go in my "stuff people say about consciousness" list, including stuff Searle says about Chinese rooms, stuff Chalmers says about p-zombies, stuff the person on the street says about the ineffable intransmissible redness of red, stuff schoolyard kids say about how they wouldn't be able to tell if the color they saw as green was the one you saw as blue, and so on. You don't need to be miserly about what you put on that list.

Why use the standard of claiming to be conscious/have qualia? That is one answer that gets at something that might matter, but why isn't that standard too high?

Mostly (on my model) because it's not at all clear from the get-go that it's meaningful to "be conscious" or "have qualia"; the ability to write an algorithm that makes the same sort of observable-claims that we make, for the same cognitive reasons, demonstrates a mastery of the phenomenon even in situations where "being conscious" turns out to be a nonsense notion.

Note also that higher standards on the algorithm you're supposed to produce are more conservative: if it is meaningful to say that an algorithm "is conscious", then producing an algorithm that is both conscious, and claims to be so, for the same cognitive reasons we do, is a stronger demonstration of mastery than isolating just a subset of that algorithm (the "being conscious" part, assuming such a thing exists).

I'd be pretty suspicious of someone who claimed to have a "conscious algorithm" if they couldn't also say "and if you inspect it, you can see how if you hook it up to this extra module here and initialize it this way, then it would output the Chinese Room argument for the same reasons Searle did, and if you instead initialize it that way, then it outputs the Mary's Room thought experiment for the same reason people do". Once someone demonstrated that sort of mastery (and once I'd verified it by inspection of the algorithm, and integrated the insights therefrom), I'd be much more willing to trust them (or to operate the newfound insights myself) on questions of how the ability to write philosophy papers about qualia relates to the ability of the mind to feel, but the qualifying bar for "do you have a reductionist explanation of consciousness" is "can you show me how to build something that produces the observations we set out to explain in the first place (people talking about 'consciousness') for the same cognitive reasons?".

Note further that demonstrating an algorithm that produces the same sort of claims humans do (eg, claims about the redness of red) for the same cognitive reasons, is not the same thing as asserting that everything "with consciousness/qualia" must make similar claims.

If this proposition is false, we need to allow unsymbolized (non-verbal) ways to self-attribute consciousness for self-attributing consciousness to matter in itself, right?

My model of Eliezer says "In lieu of an algorithmic account of the cognitive antecedents of people insisting they are conscious, that sort of claim is not even wrong." (And similarly with various other claims in that section.) My model continues: "You seem to me to be trying to do far more with the word 'consciousness' than your understanding of the phenomenon permits. I recommend doing less abstract reasoning about how 'consciousness' must behave, and more thinking about the cognitive causes behind the creation of the Mary's Room hypothetical."

What kinds of reasons?

My model says: "The list of reasons is not particularly small, in this case."

And what would being correct look like?

"The claim is correct if the actual cognitive reasons for Searl inventing the Chinese Room hypothetical, are analogous to the cognitive reasons that the alleged algorithm invents the Chinese Room hypothetical, and so on and so forth.

"This is of course difficult to check directly. However, fairly strong evidence of correctness can be attained by reading the algorithm and imagining its execution. Just as you can stare at the gears of a watch until you understand how their interactions makes the watch-hands tick, at which point you can be justifiably confident that you understand the watch, you should be able to stare at a cognitive algorithm explaining 'consciousness' until you understand how its execution makes things like 'inner listeners' 'experiencing redness' (in a suitably rescued sense), at which point you can be justifiably confident that you understand experience.

"Your fellow tribemembers, who have not understood how gears can drive the hands of a watch, might doubt your claim, saying 'There are many theories of how the watch works, ranging from internal gears to external solar radiation to the whims of the spirits. How are you so confident that it is the turning of little gears, nevermind this specific mechanism that you claim you can sketch out in the dirt?'. And you could rightly reply, 'When we unscrew the back, we see gears. And there is an arrangement of gears, that I understand, that by inspection would tick the hands in just the way we observe the hands to tick. And while I have not fully taken the watch apart, the visible features of the gears we can see when we unscrew the back, match the corresponding properties of my simple gear mechanism. This is enough for me to be pretty confident that something like my mechanism, which I understand and which clealry by inspection ticks watch-hands, governs the watch before us."

Replies from: MichaelStJules
comment by MichaelStJules · 2021-10-30T21:55:58.893Z · LW(p) · GW(p)

Shouldn't mastery and self-awareness/self-modelling come in degrees? Is it necessary to be able to theorize and come up with all of the various thought experiments (even with limited augmentation from extra modules, different initializations)? Many nonhuman animals could make some of the kinds of claims we make about our particular conscious experiences for essentially similar reasons, and many demonstrate some self-awareness in ways other than by passing the mirror test (and some might pass a mirror test with a different sensory modality, or with some extra help, although some kinds of help would severely undermine a positive result). I won't claim the mirror test is the only proxy Eliezer cares about; I don't know what else he has in mind. It would be helpful to see a list of the proxies he has in mind and what they're proxies for.

 

EY: I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious.

Me: If this proposition is false, we need to allow unsymbolized (non-verbal) ways to self-attribute consciousness for self-attributing consciousness to matter in itself, right?

You: My model of Eliezer says "In lieu of an algorithmic account of the cognitive antecedents of people insisting they are conscious, that sort of claim is not even wrong." (And similarly with various other claims in that section.) My model continues: "You seem to me to be trying to do far more with the word 'consciousness' than your understanding of the phenomenon permits. I recommend doing less abstract reasoning about how 'consciousness' must behave, and more thinking about the cognitive causes behind the creation of the Mary's Room hypothetical."

To make sure I understand correctly, it's not the self-attribution of consciousness and other talk of consciousness like Mary's Room that matter in themselves (we can allow some limited extra modules for that), but their cognitive causes. And certain (kinds of) cognitive causes should be present when we're "reflective enough for consciousness", right? And Eliezer isn't sure whether wondering whether or not he's conscious is among them (or a proxy/correlate of a necessary cause)?

comment by EI · 2021-10-31T03:59:57.549Z · LW(p) · GW(p)

The mirror test seems to me like a decent proxy for at least one item on that list 

This is merely a bias on our own part as humans. I think people are confusing consciousness with self-awareness. They are completely different things. Consciousness is the OS that runs on the meat machine. Self-awareness is an algorithm that runs on the OS. All meat machines that run this OS have different algorithms for different functions. Some may not have any self-awareness algorithm running, some may have something similar but not exactly the same as our own self-awareness algorithm. That's where the mirror test fails. We can only observe the who-knows-how-many levels of causality that lead those animals to show or not show self-aware behaviors in front of a mirror. We can't say anything consequential about the actual algorithm(s) running on their OS when they stand in front of a mirror. We are just running our own set of self-awareness algorithms when we stand in front of a mirror. It seems like these algorithms change according to evolution, just like other systems within the multicellular ecosystem that make up the individual organisms. We often see animals that demonstrate these "self-aware" traits because of similar evolutionary conditions; for example, cats and dogs have evolved to run a lot of socializing algorithms that mingle well with our own social algorithms.

Whether the self-reflective aspect of running these algorithms on our own OS makes one feel a certain way about eating meat is in and of itself the result of the relationship between multi-threading the self-aware part and the self-preservation part, in terms of labeling kin and such.

At this point we aren't even conclusive about where to draw the boundary between hardware and software. We end up distinguishing between OS and simple firmware as conscious and unconscious. We mostly reduce the firmware down to simple physical reactions by the laws of physics while the OS exhibits something magical beyond those physical reactions in simpler systems. Is there something truly different that sets OS apart from firmware, or is it just our lack of understanding of the underlying mechanics? This of course touches upon the argument of determinism, which is just looking at the same systems differently.

comment by MichaelStJules · 2021-10-29T15:48:45.886Z · LW(p) · GW(p)

I don't think it's obvious that nonhuman animals, including the vertebrates we normally farm for food, don't self-model (at least to some degree). I think it hasn't been studied much, although there seems to be more interest now. Absence of evidence is at best weak evidence of absence, especially when there's been little research on the topic to date. Here's some related evidence, although maybe some of this is closer to higher-order processes than self-modelling in particular:

  1. See the discussion of Attention Schema Theory here (section "Is an attention schema evolutionarily old or unique to humans?") by the inventor of that theory, Graziano, in response to Dennett's interpretation of the theory applied to nonhuman animals (in which he also endorses the theory as "basically right"!). Basically, AST requires the individual to have a model of their own attention, an "attention schema".
    1. Dennett wrote "Dogs and other animals do exhibit some modest capacities for noticing their noticings, but we humans have mental lives that teem with such episodes – so much so that most people have never even imagined that the mental lives of other species might not be similarly populated", and then expands further.
    2. In Graziano's response: "Any creature that can endogenously direct attention must have some kind of attention schema, and good control of attention has been demonstrated in a range of animals including mammals and birds (e.g., Desimone & Duncan, 1995; Knudsen, 2018; Moore & Zirnsak, 2017). My guess is that most mammals and birds have some version of an attention schema that serves an essentially similar function, and contains some of the same information, as ours does. Just as other animals must have a body schema or be condemned to a flailing uncontrolled body, they must have an attention schema or be condemned to an attention system that is purely at the mercy of every new sparkling, bottom-up pull on attention. To control attention endogenously implies an effective controller, which implies a control model."
  2. Dogs (Canis familiaris) recognize their own body as a physical obstacle (pop-sci article)
  3. Pigs learn what a mirror image represents and use it to obtain information. EDIT: there was a replication experiment, in which only 1 of 11 mirror-experienced piglets used the detour that the mirror would help them find, and none of the 11 mirror-naive did.
  4. I think the evidence for episodic(-like) memory in nonhuman animals is getting better, particularly with more unexpected question tests, which often ask about what the animals did (although I suppose this wouldn't necessarily require a self-model, depending on the details):
    1. Mental representation and episodic-like memory of own actions in dogs
    2. Animal models of episodic memory (see the section "Incidental Encoding and Unexpected Questions")
    3. Episodic-like memory of rats as retrospective retrieval of incidentally encoded locations and involvement of the retrosplenial cortex
    4. Experiments with pigeons and rats are discussed in the section "The unexpected question" in Animals Represent the Past and the Future.
  5. I left a comment here [EA(p) · GW(p)], with some other weak evidence, e.g. animals being trained to communicate their emotions in different ways (see also this post [LW · GW]), which I think would require them to be able to discriminate between their internal emotional states, i.e. their emotions are inputs to executive functions like (top-down/selective) attention, learning and memory. Also, cows may become excited by their own learning.
    1. EDIT: Finally found the paper on pigs generalizing the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). This experiment and similar experiments performed on rodents are discussed here, in section 4.d., starting on p. 81 (and some other discussion of them earlier). For example, rats generalized from hangover to morphine withdrawal and jetlag, from high doses of cocaine to movement restriction, and from an anxiety-inducing drug to aggressive defeat and predator cues. Of course, anxiety has physical symptoms, so maybe this is what they're discriminating, not the negative affect.
  6. In general, I think there's more recent research on nonhuman metacognition and mental representation, although I haven't followed this closely, so I can't really tell you what's up. There are some recent reviews on metacognition here.
Replies from: MichaelStJules, MichaelStJules
comment by MichaelStJules · 2021-10-30T09:45:52.838Z · LW(p) · GW(p)

Of course, many animals have failed the mirror test, and that is indeed evidence of absence for those animals. Still,

  1. Animals could just be too dumb (or rely too little on vision) to understand mirrors, but still self-model in other ways, like in my top comment. Or, they might at least tell themselves apart from others in the mirrors as unique, without recognizing themselves, like some monkeys and pigeons. Pigeons can pick out live and 5-7 second delayed videos of themselves from prerecorded ones.
  2. Animals might not care about the marks. Cleaner wrasse, a species of fish, did pass the mirror test (the multiple phases, including the final self-directed behaviour with the visible mark), and they are particularly inclined to clean things (parasites) that look like the mark, which is where they get their name. I think the fact that they are inclined to clean similar looking marks was argued to undermine the results, but that seems off to me.
  3. I would be interested in seeing the mirror test replicated in different sensory modalities, e.g. something that replays animals' smells or sounds back to them, a modification near the source in the test condition, and checking whether they direct behaviour towards themselves to investigate.
    1. Some criticisms of past scent mirror tests are discussed here (paper with criticism here). The issues were addressed recently here with wolves. Psychology Today summary.
    2. I think animals are more likely to show body (touch, pain) awareness and have a related self-representation (a body schema?). For example, mice get the rubber hand (tail) illusion. From having their tail and the rubber tail just stroked together, they extend their expectations of having their tail grasped to the rubber tail.
comment by MichaelStJules · 2021-10-29T17:17:02.329Z · LW(p) · GW(p)

I also don't think GPT-3 has emotions that are inputs to executive functions, like learning, memory, control, etc.

comment by Rob Bensinger (RobbBB) · 2021-10-29T17:41:38.532Z · LW(p) · GW(p)

EY's position seems to be that self-modelling is both necessary and sufficient for consciousness.

Necessary, not sufficient. I don't think Eliezer has described what he thinks is sufficient (and maybe he doesn't know what's sufficient -- i.e., I don't know that Eliezer thinks he could build a conscious thing from scratch).

comment by Rob Bensinger (RobbBB) · 2021-10-31T16:38:34.247Z · LW(p) · GW(p)

I've collected my thoughts + recent discussions on consciousness and animal patienthood here: https://www.lesswrong.com/posts/TkahaFu3kb6NhZRue/quick-general-thoughts-on-suffering-and-consciousness. [LW · GW] I don't have the same views as Eliezer, but I'm guessing me talking about my views here will help make it a little clearer why someone might not think this way of thinking about the topic is totally wrong.

comment by Michael Edward Johnson (michael-edward-johnson) · 2021-10-31T17:37:54.594Z · LW(p) · GW(p)

“By their fruits you shall know them.”

A frame I trust in these discussions is trying to elucidate the end goal. What does knowledge about consciousness look like under Eliezer’s model? Under Jemist’s? Under QRI’s?

Let’s say you want the answer to this question enough you go into cryosleep with the instruction “wake me up when they solve consciousness.” Now it’s 500, or 5000, or 5 million years in the future and they’ve done it. You wake up. You go to the local bookstore analogue, pull out the Qualia 101 textbook and sit down to read. What do you find in the pages? Do you find essays on how we realized consciousness was merely a linguistic confusion, or equations for how it all works?

As I understand Eliezer’s position, consciousness is both (1) a linguistic confusion (leaky reification) and (2) the seat of all value. There seems a tension here, that would be good to resolve since the goal of consciousness research seems unclear in this case. I notice I’m putting words in peoples’ mouths and would be glad if the principals could offer their own takes on “what future knowledge about qualia looks like.”

My own view is if we opened that hypothetical textbook up we would find crisp equations of consciousness, with deep parallels to the equations of physics; in fact the equations may be the same, just projected differently.

My view on the brand of physicalism I believe in, dual aspect monism, and how it constrains knowledge about qualia: https://opentheory.net/2019/06/taking-monism-seriously/

My arguments against analytic functionalism (which I believe Eliezer’s views fall into): https://opentheory.net/2017/07/why-i-think-the-foundational-research-institute-should-rethink-its-approach/

comment by Jacob Falkovich (Jacobian) · 2021-10-30T02:05:07.739Z · LW(p) · GW(p)

Copying from my Twitter response to Eliezer

Anil Seth usefully breaks down consciousness into 3 main components: 
1. level of consciousness (anesthesia < deep sleep < awake < psychedelic)
2. contents of consciousness (qualia — external, interoceptive, and mental)
3. consciousness of the self, which can further be broken down into components like feeling ownership of a body, narrative self, and a 1st person perspective. 

He shows how each of these can be quite independent. For example, the selfhood of body-ownership can be fucked with using rubber arms [LW · GW] and mirrors, narrative-self breaks with amnesia, 1st person perspective breaks in out-of-body experiences which can be induced in VR, even the core feeling of the reality of self can be meditated away. 

Qualia such as pain are also very contextual, the same physical sensation can be interpreted positively in the gym or a BDSM dungeon and as acute suffering if it's unexpected and believed to be caused by injury. Being a self, or thinking about yourself, is also just another perception — a product of your brain's generative model of reality — like color or pain are. I believe enlightened monks who say they experience selfless bliss, and I think it's equally likely that chickens experience selfless pain.

Eliezer seems to believe that self-reflection or some other component of selfhood is necessary for the existence of the qualia of pain or suffering. A lot of people believe this simply because they use the word "consciousness" to refer to both (and 40 other things besides). I don't know if Eliezer is making such a basic mistake, but I'm not sure why else he would believe that selfhood is necessary for suffering.

Replies from: gjm, SaidAchmiz, TAG
comment by gjm · 2021-10-30T15:19:33.042Z · LW(p) · GW(p)

I agree with pretty much all of that but remark that "deep sleep < awake < psychedelic" is not at all clearly more correct than "deep sleep < psychedelic < awake". You may feel more aware/conscious/awake/whatever when under the effects of psychedelic drugs, but feeling something doesn't necessarily make it so.

Replies from: Jacobian
comment by Jacob Falkovich (Jacobian) · 2021-10-30T17:07:57.441Z · LW(p) · GW(p)

The ordering is based on measures of neuro-correlates of the level of consciousness like neural entropy or perturbational complexity, not on how groovy it subjectively feels.

Replies from: gjm
comment by gjm · 2021-10-30T20:15:51.794Z · LW(p) · GW(p)

It would seem a bit optimistic to call anything a "neuro-correlate of the level of consciousness" simply on the basis that it's higher for ordinary waking brains than for ordinary sleeping brains. Is there more evidence than that for considering neural entropy or perturbational complexity to be measures of "the level of consciousness"?

(My understanding is that in some sense they're measuring the amount of information, in some Shannonesque sense, in the state of the brain. Imagine doing something like that with a computer. The figure will -- at least, for some plausible ways of doing it -- be larger when the computer is actively running some software than when it's idle, and you might want to say "aha, we've found a measure of how much the computer is doing useful work". But it's even larger if you arrange to fill its memory with random bits and overwrite them with new random bits once a second, even though that doesn't mean doing any more useful work. I worry that psychedelics might be doing something more analogous to that than to making your computer actually do more.)
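(To make the analogy concrete, here is a minimal, hypothetical sketch in Python. It is only an illustration of the worry about entropy-like measures, with made-up "memory snapshots"; it is not the actual methodology behind neural entropy or perturbational complexity. A naive Shannon-entropy score rates random bytes higher than either idle memory or memory holding structured, useful content.)

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical "memory snapshots" (all invented for illustration):
idle = bytes(4096)                                                        # mostly zeros
working = (b"update ledger; render page; schedule jobs. " * 100)[:4096]  # structured, useful content
random_fill = os.urandom(4096)                                            # memory overwritten with random bits

for label, blob in [("idle", idle), ("working", working), ("random", random_fill)]:
    print(f"{label:>8}: {shannon_entropy(blob):.2f} bits/byte")
```

(By this crude measure the random fill scores highest even though it corresponds to no useful work, which is the worry about reading such numbers as "level of consciousness".)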

comment by Said Achmiz (SaidAchmiz) · 2021-10-30T16:58:44.136Z · LW(p) · GW(p)

Eliezer seems to believe that self-reflection or some other component of selfhood is necessary for the existence of the qualia of pain or suffering. A lot of people believe this simply because they use the word “consciousness” to refer to both (and 40 other things besides). I don’t know if Eliezer is making such a basic mistake, but I’m not sure why else he would believe that selfhood is necessary for suffering.

It is not my impression that Eliezer believes any such thing for pain, only (perhaps) for suffering. It’s important not to conflate these.

It seems clear to me, at least, that consciousness (in the “subjective, reflective self-awareness” sense) is necessary for suffering; so I don’t think that Eliezer is making any mistake at all (much less a basic mistake!).

Being a self, or thinking about yourself, is also just another perception — a product of your brain’s generative model of reality — like color or pain are

The word “just” is doing a heck of a lot of work here.

I think it’s equally likely that chickens experience selfless pain

Chickens perhaps have “selfless pain”, but to say that they experience anything at all is begging the question!

comment by TAG · 2021-10-30T13:16:54.628Z · LW(p) · GW(p)

Eliezer seems to believe that self-reflection or some other component of selfhood is necessary for the existence of the qualia of pain or suffering. A lot of people believe this simply because they use the word “consciousness” to refer to both (and 40 other things besides). I don’t know if Eliezer is making such a basic mistake, but I’m not sure why else he would believe that selfhood is necessary for suffering

I strongly support this. If you are going to explain away qualia as the result of having a self-model, you need to do more than note that they occur together, or that "conscious" could mean either.

comment by sapphire (deluks917) · 2021-10-30T16:44:51.216Z · LW(p) · GW(p)

Animal rights obsessed vegan checking in:

I am extremely worried GPT-3 is conscious! To be honest I am worried about whether my laptop is conscious! A lot of people worried about animal suffering are also worried about algorithms suffering.

Replies from: Korz
comment by Mart_Korz (Korz) · 2021-11-01T21:48:49.718Z · LW(p) · GW(p)

It seems I am not as worried about GPT-3 as you, but when listening to the simulated interview with simulated Elon Musk by Lsusr [LW · GW] in the Clearer Thinking podcast episode 073 (starts at minute 102), I was quite concerned.

comment by Unreal · 2021-10-29T15:55:14.645Z · LW(p) · GW(p)

I had another complaint about that tweet, which... you do not seem to have, but I want to bring up anyway. 

Why do we assume that 'consciousness' or 'sentience' implies 'morally relevant'? And that a lack of consciousness (if we could prove that) would also imply 'not morally relevant'?

It seems bad to me to torture chickens even if it turns out they aren't self-aware. But lots of people seem to take this as a major crux for them.

If I torture a permanently brain-damaged comatose person to death, who no one will miss, is that 'fine' ? 

I am angry about this assumption; it seems too convenient. 

Replies from: Benito, Jemist, bn22, Aay17ush
comment by Ben Pace (Benito) · 2021-10-29T17:14:40.187Z · LW(p) · GW(p)

Torturing chickens or brain dead people is upsetting and horrible and distasteful to me. I don’t think it’s causing any direct harm or pain to the chicken/person though.

I still judge a human’s character if they find these things fun and amusing. People watch this kind of thing (torture of humans/other animals) on Netflix all the time, for all sorts of good and bad reasons.

Replies from: Unreal
comment by Unreal · 2021-10-29T18:14:03.632Z · LW(p) · GW(p)

Claim: Many things are happening on a below-consciousness level that 'matter' to a person. And if you disrupted those things without changing a person's subjective experience of them (or did it without their notice), this should still count as harm. 

This idea that 'harm' and the level of that harm is mostly a matter of the subjective experience of that harm goes against my model of trauma and suffering. 

Trauma is stored in the body whether we are conscious of it or not. And in fact I think many people are not conscious of their traumas. I'd still call it 'harm' regardless of their conscious awareness. 

I have friends who were circumcised before they could form memories. They don't remember it. Through healing work or other signs of trauma, they realized that in fact this early surgery was likely traumatic. I think Eliezer is sort of saying that this only counts as harm to the degree that it consciously affects them later or something? I disagree with this take, and I think it goes against moral intuition. (If one sees a baby screaming in pain, the impulse is to relieve their 'pain' even if they might not be having a conscious experience of it.) 

If I take a "non-sentient" chicken and cut off its wings, and I watch it as it helplessly tries to fly repeatedly, but is unable to, this strikes me as a form of harm to the chicken and its values even if the chicken is not having a subjective experience of its condition. 

Also, from my investigations, much suffering does not reach the level of awareness. When a person investigates very closely and zooms in on experiences (such as through meditation), suffering is 'found' to be 'occurring' at a level of granularity and detail that was not previously accessible. But becoming aware of this suffering does not increase the amount of suffering that was occurring; you just become aware of the amount that was already there. It's an "oh" moment. And this can actually help relieve the suffering, by becoming aware of it. 

This suggests that maybe beings who lack the ability of awareness and observation to see their own condition actually are suffering more. This accords with my own journey in relieving personal suffering. More awareness was generally helpful. Whereas as a child, I was more 'braindead' in some way. Not very 'conscious'.

One could make similar inquiries into 'dissociation'. If a person is regularly dissociated and doesn't feel things very intensely, does it make it more okay to hurt them? 

Also my model of pain is that pain != suffering, which might be relevant here. Not sure. 

Replies from: mayleaf, Benito, Benito
comment by mayleaf · 2021-10-29T18:48:29.522Z · LW(p) · GW(p)

If I take a "non-sentient" chicken and cut off its wings, and I watch it as it helplessly tries to fly repeatedly, but is unable to, this strikes me as a form of harm to the chicken and its values even if the chicken is not having a subjective experience of its condition. 


I'm curious how you would distinguish between entities that can be harmed in a morally relevant way and entities that cannot. I use subjective experience to make this distinction, but it sounds like you're using something like -- thwarted intentions? telos-violation? I suspect we'd both agree that chickens are morally relevant and (say) pencils are not, and that snapping a pencil in half is not a morally-relevant action. But I'm curious what criterion you're using to draw that boundary.

One could make similar inquiries into 'dissociation'. If a person is regularly dissociated and doesn't feel things very intensely, does it make it more okay to hurt them? 

This is an interesting point; will think about it more.

comment by Ben Pace (Benito) · 2021-10-29T21:57:24.636Z · LW(p) · GW(p)

Typically in questions of ethics, I factor the problem into two sub-questions:

  • Game theory: ought I care about other agents' values because we have the potential to affect each other?
  • Ben's preferences: do I personally care about this agent and them having their desires satisfied?

For the second, it's on the table whether I care directly about chickens. I think at minimum I care about them the way I care about characters in like Undertale or something, where they're not real but I imbue meaning into them and their lives.

That said it's also on the table to me that a lot of my deeply felt feelings about why it's horrible to be cruel to chickens, are similar to my deeply felt feelings of being terrified when I am standing on a glass bridge and looking down. I feel nauseous and like running and a bit like screaming for fear of falling; and yet there is nothing actually to be afraid of.

If I imagine someone repeatedly playing Undertale to kill all the characters in ways that make the characters maximally 'upset', this seems tasteless and a touch cruel, but not because the characters are conscious. Relatedly, if I found out that someone had built a profitable business that somehow required incidentally running massive numbers of simulations of the worst endings for all the characters in Undertale (e.g. some part of their very complex computer systems had hit an equilibrium of repeatedly computing this, and changing that wasn't a sufficient economic bottleneck to be worth the time/money cost), this would again seem kind of distasteful, but in the present world it would not be very high on my list of things to fix, it would not make the top 1000.

For the first, suppose I do want to engage in game theory with chickens. Then I think all your (excellent) points about consciousness are directly applicable. You're quite right that suffering doesn't need to be conscious, and often I have become aware of a way that I have been averse to thinking about a subject or been scared of a person for no good reason that has been a major impediment in having a great career and great relationships, in ways that are "outside" my conscious experience. (Being more 'braindead'.) I would have immensely appreciated someone helping me realize and fix these things about myself that were outside my conscious awareness.

Insofar as the chickens are having their wings clipped and kept in cages, it's very clear that their intentions and desires are being stunted. On a similar note, I think all the points in Dormin's essay Against Dog Ownership [LW · GW] apply regardless of whether dogs are conscious — that the meaning dogs look for in life is not found in the submissive and lonely inner-city life that most of them experience. These lay out clear ways to be much kinder to a chicken or dog and to respect their desires.

But there is a step of the argument missing here. I think some people believe arguments that claim it's worth engaging in game theory with chickens even if I think they're only as real as characters in Undertale; but I have not read an argument that I find compelling.

The idea is that if we suppose chickens are indeed only as real as Undertale characters, I might still care about them because we have shared goals or something. Here's a very concrete story where that would be the case: if someone made a human-level AI with Sans' personality, and he was working to build a universe kind of like the universe I want to live in with things like LessWrong and Sea Shanties and Dostoyevsky in it, then I would go out of my way to – say – right injustices against him; and I hope he would do the same for me, because I want everyone to know that such agents will be defended by each other.

I think some people believe that humans and chickens have similar goals in this way in the extreme, but I don't agree. I don't think I would have much of a place in a chicken utopia, nor do I expect to find much of value in it.

comment by Ben Pace (Benito) · 2021-10-29T18:35:35.558Z · LW(p) · GW(p)

Btw, coming at it from a different angle: Jessicata raises the hypothesis (in her recent post [LW · GW]) that people put so much weight on 'consciousness' as a determinant of moral weight because it is relatively illegible and, they believe, outside the realm of things that civilization currently has a scientific understanding of, so that they can talk about it more freely and without the incredibly high level of undercutting and scrutiny that comes with scientific hypotheses. Quote:

Consciousness is related to moral patiency (in that e.g. animal consciousness is regarded as an argument in favor of treating animals as moral patients), and is notoriously difficult to discuss.  I hypothesize that a lot of what is going on here is that:

1. There are many beliefs/representations that are used in different contexts to make decisions or say things.

2. The scientific method has criteria for discarding beliefs/representations, e.g. in cases of unfalsifiability, falsification by evidence, or complexity that is too high.

3. A scientific worldview will, therefore, contain a subset of the set of all beliefs had by someone.

4. It is unclear how to find the rest of the beliefs in the scientific worldview, since many have been discarded.

5. There is, therefore, a desire to be able to refer to beliefs/representations that didn't make it into the scientific worldview, but which are still used to make decisions or say things; "consciousness" is a way of referring to beliefs/representations in a way inclusive of non-scientific beliefs.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-30T17:42:46.061Z · LW(p) · GW(p)

I don't think that was my point exactly. Rather, my point is that not all representations used by minds to process information make it into the scientific worldview, so there is a leftover component that is still cared about. That doesn't mean people will think consciousness is more important than scientific information, and indeed scientific theories are conscious to at least some people.

Separately, many people have a desire to increase the importance of illegible things to reduce constraint, which is your hypothesis; I think this is an important factor but it wasn't what I was saying.

comment by J Bostock (Jemist) · 2021-10-29T15:57:41.361Z · LW(p) · GW(p)

Eliezer later states that he is referring to qualia specifically, which for me are (within a rounding error) totally equivalent to moral relevance.

Replies from: Unreal
comment by Unreal · 2021-10-29T16:14:37.495Z · LW(p) · GW(p)

Why is that? You're still tying moral relevance to a subjective experience? 

Replies from: Jemist
comment by J Bostock (Jemist) · 2021-10-29T16:41:17.472Z · LW(p) · GW(p)

Basically yes I care about the subjective experiences of entities. I'm curious about the use of the word "still" here. This implies you used to have a similar view to mine but changed it, if so what made you change your mind? Or have I just missed out on some massive shift in the discourse surrounding consciousness and moral weight? If the latter is the case (which it might be, I'm not plugged into a huge number of moral philosophy sources) that might explain some of my confusion.

comment by bn22 · 2021-10-29T18:44:19.528Z · LW(p) · GW(p)

People already implicitly consider your example to be acceptable, given that vegetables are held in conditions of isolation that would be considered torture if they were counterfactually conscious, and many people support being allowed to kill/euthanize vegetables in cases such as Terri Schiavo's.

comment by Aay17ush · 2021-10-29T16:41:40.967Z · LW(p) · GW(p)

I've often thought about this, and this is the conclusion I've reached.

There would need to be some criterion that separates morality from immorality. Given that, consciousness (i.e. self-modelling) seems like the best criterion given our current knowledge. Obviously, there are gaps (like the comatose patient you mention), but we currently do not have a better metric to latch on to.

Replies from: TAG
comment by TAG · 2021-10-30T13:24:03.476Z · LW(p) · GW(p)

Why wouldn't the ability to suffer be the criterion? Isn't that built into the concept of sentience? "Sentient" literally means "having senses" but is often used as a synonym for "moral patient".

comment by Lance Bush (lance-bush) · 2021-10-29T13:36:46.123Z · LW(p) · GW(p)

I suspect I endorse something like what Yudkowsky seems to be claiming. Essentially, I think that humans are uniquely disposed (at least among life on earth) to develop a kind of self-model, and that nonhuman animals lack the same kind of systems that we have. As a result, whatever type of consciousness they have, I think it is radically unlike what we have. I don’t know what moral value, if any, I would assign to nonhuman animals were I to know more about their mental lives or what type of “consciousness” they have, but I am confident that the current high level of confidence people have that animals have rich conscious experiences is not justified. I wrote an old comment on this I’ve shared a couple of times. Here it is:

I think that what we take to be our conscious experience involves a capacity for "checking in" on an ongoing internal narrative, or story that we are constantly "telling ourselves" that functions to provide a unified timeline which we can utilize, report on, and talk about with others. I think this "narrative center of gravity" requires a degree of cultural input, and the inculcation of specific memes/concepts that lead us to form a sense of a self that integrates our experiences and that can think about "our" past experiences and "our" future experiences. In a sense, I think that conscious experience is built up as a sort of software that we have the hardware to develop, but requires a degree of developmental and cultural input to become fully operational. I don't think animals have or need this capacity. As such, what it is like to be us is something we can talk about, but I am not convinced that there is anything it is "like" to be an animal.

This is a largely Dennettian view of consciousness, and I believe he coined or at least used the term “narrative center of gravity.”

You identify consciousness with having qualia.

However, I don’t know what you mean by qualia. While it remains sensible to me to attribute something like consciousness to humans, I would typically deny that we “have qualia” and would not define consciousness in terms of having qualia. Were others to do so, I’d deny we have that form of qualia. Perhaps Yudkowsky would, too. It really depends on what one means by “consciousness” and “qualia.”

I don’t know exactly what Yudkowsky thinks, so I wouldn’t put a number on it as you do (i.e., 15%). But, I’ll put it this way: I don’t know of any alternatives to something like Dennett/Frankish on illusionism that seem individually more plausible than illusionism. I don’t know if the collective weight of plausibility for all competing hypotheses is enough to push illusionism below 50%, but I don’t think so. So, while I am not overwhelmingly confident that something that seems roughly in the ballpark (if not very similar) to Yudkowsky’s view is correct, I have yet to see any viable alternatives. Most seem weird and to not capture what strike me as important elements of consciousness, or they seem to appeal to intuitions I don’t have and don’t trust in others.

Replies from: AprilSR, Jemist, MichaelStJules
comment by AprilSR · 2021-10-30T05:01:08.331Z · LW(p) · GW(p)

I don't know how you can deny that people have "qualia" when, as far as I can tell, it was a word coined to describe a particular thing that humans experience?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-30T05:40:30.860Z · LW(p) · GW(p)

I'm not sure I understand. What do you mean when you say it was coined to "describe a particular thing that humans experience"? Or maybe, to put this another way: at least in this conversation, what are you referring to with the term "qualia"?

Replies from: AprilSR, TAG
comment by AprilSR · 2021-10-31T07:53:30.011Z · LW(p) · GW(p)

As I understand it, the word "qualia" usually refers to the experience associated with a particular sensation.

comment by TAG · 2021-10-30T14:47:21.922Z · LW(p) · GW(p)

"Qualia" is easy to define. As Wikipedia has it

Examples of qualia include the perceived sensation of pain of a headache, the taste of wine, as well as the redness of an evening sky. As qualitative characters of sensation, qualia stand in contrast to "propositional attitudes",[1] where the focus is on beliefs about experience rather than what it is directly like to be experiencing.

Whereas illusionism is almost impossible to define coherently.

"According to illusionism, you only have propositional attitudes, not perceptions. Some of those propositional attitudes seem like propositional attitudes , and others seem like perceptions. Well they don't, because if anything seemed like anything, that would be a perception. So actually you have a meta legal belief that some of your propositional attitudes are propositional attitudes, but also a meta level belief that others aren't. That's the illusion. But actually it's not an illusion,because an illusion is a false perception , and there are no perceptions. Its actually a false belief, a delusion I don't know why we call it illusionism"

Replies from: gjm, lance-bush
comment by gjm · 2021-10-30T15:17:12.521Z · LW(p) · GW(p)

It's easy to give examples of things we think of as qualia. I'm not so sure that that means it's easy to give a satisfactory definition of "qualia".

I can give lots of examples of people, but there's scope for endless debate about exactly what counts as a person and what doesn't. (Newly born children? 16-week-old foetuses? Aliens or AIs, should any exist now or in the future, with abilities comparable to ours but very different brains and minds? Beings like gods, angels, demons, etc., should any exist, with abilities in some ways comparable to ours but not made out of matter at all?) And for debate about when persons A and B are actually the same person. (Suppose some intelligent computer programs are persons. If I take a copy, do I then have one person or two? Suppose we favour a multiverse-type interpretation of quantum mechanics. Are the versions of "me" on two nearby Everett branches one person, or two? Am I the same person as I was 30 years ago?)

There's similar unclarity about what things count as qualia and about how to individuate them. (E.g., if you and I look at the same red object and both have normal colour vision, do we have "the same" quale of seeing-a-red-thing or not? If I see the same red thing twice, is that "the same" quale each time? If the answers are negative, what actual work is the notion of qualia doing?) And e.g. Daniel Dennett would claim that the word "qualia" includes enough baggage that it's better to say that there are no qualia while in no way denying that people experience things.

It's not (I think) in question that we experience things. It's quite reasonable (I think) to question whether anything about our experience is made clearer by introducing objects called qualia.

Replies from: TAG
comment by TAG · 2021-10-30T16:16:11.646Z · LW(p) · GW(p)

It’s easy to give examples of things we think of as qualia. I’m not so sure that that means it’s easy to give a satisfactory definition of “qualia”.

Satisfactory for whom? I use examples because they are sufficient to get the point across to people who aren't too biased. Someone might have some genuine reason to need a more rigorous definition...but they might not; they might instead be making a selective demand for rigour, out of bias. Where are the calls for rigorous definitions of "matter", "computation", etc.?

I can give lots of examples of people, but there’s scope for endless debate about exactly what counts as a person and what doesn’t. (Newly born children? 16-week-old foetuses? Aliens or AIs, should any exist now or in the future, with abilities comparable to ours but very different brains and minds? Beings like gods, angels, demons, etc., should any exist, with abilities in some ways comparable to ours but not made out of matter at all?)

If my purpose is to demonstrate that people exist, all I need to do is point to a few uncontentious examples of people...I don't need to solve every edge case.

And "endless debate" needs to be avoided. People who make selective demands for rigour don't to change their minds, and endless debate is a great way of achieving that

(E.g., if you and I look at the same red object and both have normal colour vision, do we have “the same” quale of seeing-a-red-thing or not

Why does that matter if all I am doing is asserting that qualia exist, or lack a reductive explanation?

Replies from: gjm, lance-bush
comment by gjm · 2021-10-30T17:36:01.769Z · LW(p) · GW(p)

(I'm ignoring those parts of your reply that seem to have no purpose other than implicitly accusing me of arguing in bad faith. I have seldom known anything useful to come out of engaging with that sort of thing. These discussions would be more enjoyable, for me at least, if you weren't so relentlessly adversarial about them.)

Satisfactory for whom? For me, obviously :-).

There is at least one eminent philosopher, namely Daniel Dennett, who has made something of a speciality of this area and who flatly denies that qualia "exist", and who doesn't appear to me to be either a dimwit or a crackpot. That is already sufficient reason for me to want to be careful about saying "duh, of course qualia exist".

Of course if all you mean by that is that people have experience, then I agree with that, but if that's all you mean then what need is there to talk about "qualia" at all? And if it's not all you mean, then before agreeing I need to know what else is being implicitly brought in.

Now, in the present instance it's Jemist who introduced "qualia" to the discussion (so, in particular, you are under no obligation to be able to tell me precisely what Jemist means by the term). And Jemist talks e.g. about experience being "turned into qualia", and I don't see how your examples help to understand what that means, or what distinction between "experience" and "qualia" Jemist is trying to draw.

The general idea seems to be something like this: people and chickens alike have some sort of stream or sea of experiences, and humans (and maybe chickens or maybe not) "turn these experiences into qualia", and having not merely experiences but qualia is what justifies calling an entity "conscious" and/or seeing that entity as of moral significance.

I'm sympathetic to the general idea that there's something that's kinda-the-same about chickens' sensory input and ours, and something that's maybe different about the early stages of processing that sensory input, and that that has something to do with possible moral differences between us and chickens. But I don't see any reason to think that filling in that picture in detail, if we knew how to do it, would look much like identifying things ("qualia") that we "have" and chickens maybe "don't have". And one way to resist the so-far-unjustified slide from "there may be something importantly different between how we process our sensory input and how chickens process theirs" to "maybe we have qualia and chickens don't" is to remain mindful of the fact that we don't have -- or, at least, I don't have and I haven't seen much evidence that others have -- a very clear idea of exactly what "qualia" are supposed to be, and of how "having qualia" is supposed to go beyond "having experience".

Here's another example of how the leap to "having qualia" may bring in unacknowledged baggage. You say "Why does that matter if all I am doing is asserting that qualia exist" (so far so good, I guess) "or lack a reductive explanation?". Where did that come from? It certainly doesn't seem to be something that follows from the fact that people experience things. If you're somehow inferring that from "having qualia" then I think there's got to be something in your concept of "qualia" that is very much not an obvious consequence of having experience, and I don't want to just nod wisely when someone says "of course we have qualia" because it may turn out that part of what they mean by "qualia" involves some sort of in-principle irreducibility, and I want to see some actual argument before agreeing to that!

("Lack a reductive explanation" is ambiguous between "we don't have one yet" and "we are probably never going to have one" and "we are definitely never going to have one" and "it is in principle impossible for us ever to have one". I don't like this because it's too easy to slide between those meanings without explicitly noting that that's happening and offering any justification. I don't know whether I have guessed correctly at what combination of those things you meant; if you think I haven't, feel free to clarify.)

Replies from: TAG
comment by TAG · 2022-02-13T22:30:59.372Z · LW(p) · GW(p)

There is at least one eminent philosopher, namely Daniel Dennett, who has made something of a speciality of this area and who flatly denies that qualia “exist”, and who doesn’t appear to me to be either a dimwit or a crackpot. That is already sufficient reason for me to want to be careful about saying “duh, of course qualia exist”.

By that standard, there is no satisfactory definition of anything, since there are philosophers who doubt their own existence, your existence, the existence of an external world, the existence of matter and so on.

But a definition is not supposed to count as a proof all by itself. A definition of X should allow two people who are having a conversation about X to understand each other. A definition that is satisfactory for that purpose does not need to constitute a proof or settle every possible edge case.

The general idea seems to be something like this: people and chickens alike have some sort of stream or sea of experiences, and humans (and maybe chickens or maybe not) “turn these experiences into qualia”, and having not merely experiences but qualia is what justifies calling an entity “conscious” and/or seeing that entity as of moral significance.

I'm not sure why it's my job to explain what Jemist means.

If you want a hint as to what an "experience" could be other than a quale, then look at what qualia sceptics think an experience is...apparently some sort of disposition to answer "yes I see red" when asked what they see.

If you are anything like most people, you probably have no compunction against destroying machinery, or the virtual characters in a game. And you probably don't care too much if the characters say "aaagh!" or the machinery reports damage. So it's as if you think there is something about living organisms that goes beyond damage and reports of damage ... something like pain, maybe?

and having not merely experiences but qualia is what justifies calling an entity “conscious” and/or seeing that entity as of moral significance.

More than one thing could make an entity morally significant, and there are arguments for the existence of qualia other than moral significance.

But I don’t see any reason to think that filling in that picture in detail, if we knew how to do it, would look much like identifying things (“qualia”) that we “have” and chickens maybe “don’t have”.

Well, if we fill in the picture by adding in more fine grained structure and function, we are probably not going to find the qualia for the same reason that we haven't already. Nonetheless, we have good reason to think that our qualia are there, and rather less good reason to believe that the from-the-outside approach is literally everything, so that qualia have to be disregarded if they cannot be located on that particular map.

and I don’t want to just nod wisely when someone says “of course we have qualia” because it may turn out that part of what they mean by “qualia” involves some sort of in-principle irreducibility, and I want to see some actual argument before agreeing to that!

I just quoted a definition of qualia which says nothing about in-principle irreducibility. Do you agree with that definition?

Is it reasonable to reject X, for which there is evidence, on the basis that someone might be smuggling in Y? Have you noticed that creationists do this -- they don't accept random mutation and natural selection because they are afraid they might end up agreeing that there is no god?

Here’s another example of how the leap to “having qualia” may bring in unacknowledged baggage. You say “Why does that matter if all I am doing is asserting that qualia exist” (so far so good, I guess) “or lack a reductive explanation?”. Where did that come from?

"Lack a reductive explanation" is a separate claim, not an implication of "have qualia".

“Lack a reductive explanation” is ambiguous between “we don’t have one yet” and “we are probably never going to have one” and “we are definitely never going to have one” and “it is in principle impossible for us ever to have one”.

The obvious interpretation is "we don’t have one yet". Moreover, it is also true, so charitableness recommends it as the interpretation.

it’s too easy to slide between those meanings without explicitly noting that that’s happening and offering any justification.

I don't do that.

What I would like to be able to do is be able to build up a step-by-step argument. But that can only work with a reader who is willing to judge each step on its own merits.

Replies from: gjm
comment by gjm · 2022-02-14T00:57:50.542Z · LW(p) · GW(p)

By that standard, there is no satisfactory definition of anything

I think you may have misunderstood what I was saying. (My fault, no doubt, for not being clearer and more explicit.) I was not arguing that because some eminent philosophers deny the "existence" of "qualia" it follows that the term has no satisfactory definition. (I do say that I've not seen a really satisfactory definition, but that's a separate claim.) But it seemed that you were saying that the "existence" of "qualia" is just obvious and I was explaining one reason why I can't agree.

(Why all the scare-quotes? Because if I just say "it is not obvious that qualia exist" then someone may take "qualia exist" to mean the same thing as "people experience things" and think that I am saying it isn't obvious that people experience things. That is not what I'm doubtful about. I am doubtful about the wisdom of reifying that experience into things-called-qualia, and I am doubtful about some of the philosophical baggage that "qualia" sometimes seem to be carrying.)

A definition of X should allow two people who are having a conversation about X to understand one another.

Yup. But (unless I've misunderstood you) you're wanting to define "qualia" by pointing to some examples of people having experience, and that is definitely not sufficient for me to understand exactly what you mean by "qualia" and by "having qualia".

I'm not sure why it's my job to explain what Jemist means.

It isn't, and in the comment to which you were replying I explicitly said that it isn't. I'm not sure why you think I think it's your job to explain what Jemist means, and if it's because I said something that implies that or looks like it did then I hope you will accept my apologies, because I didn't intend to do any such thing.

If you want a hint [...] something like pain, maybe?

It seems to me that the comment to which you were replying already sketched exactly the argument you're making, and then went on to explain why I don't find that argument sufficient reason to say that we "have qualia", even though (of course!) I agree that it indicates that there is something going on that has something to do with what people are pointing at when they talk about qualia.

(Perhaps the following analogy will help. Suppose you were instead arguing that people "have souls". I say I'm not convinced that that's true, and it's not entirely clear just what "souls" are meant to be. You say: of course there are souls, they're what make us us, they're what enable us to have moral values and appreciate beauty. I say: of course I agree that we are who we are and that we have moral values and appreciate beauty, but I don't see any good reason to think that those things are best explained by postulating things that we call "souls", and some of the specific things people say about "souls" -- e.g., that they are inherently immortal and persist after our brains decompose -- seem probably false. I don't know how you feel about souls, but if like me you are skeptical about them then it might be helpful to know that what you're actually saying about "qualia" seems very much like my imaginary-you is saying about "souls".)

I just quoted a definition of qualia which says nothing about in-principle irreducibility. Do you agree with that definition?

I think you're referring here to what you quoted from Wikipedia several comments back. (If not, my apologies, and please let me know what you meant instead.) I'm not sure what you mean by "agree with". I generally avoid using the term "qualia", in part because I don't think it's always clear what people mean by it, so I can't "agree with" any definition of it in the sense of using that definition myself. If you mean do I think it's a definition that a reasonable person might use, then I guess kinda -- except that there isn't actually a definition in what you quoted. You could extract from it something like "qualia are perceived sensations" (from the list of examples) or "qualia are qualitative characters of sensation" (from something later in what you quoted), but those two seem to me like they're different things. If I have a headache, is the quale (1) my "perceived sensation" of pain in my head, or (2) the "qualitative character" of that sensation?

I've no problem with any of this if it's taken informally. I might talk about "having a headache", and you might say that in that case I "have a quale of headachiness", and in some sense we're referring to the same thing, and this is all fine. But "quale" is meant to be a technical philosophical term, to be deployed with precision, and for that purpose I worry that it assumes too much. I believe in headaches, in the sense that of course sometimes people's heads hurt. I am not so sure that I want an ontology that actually literally has things called headaches in it; it might be more accurate to treat having-a-headache as an intransitive verb, for instance. "Qualia" might be like "headaches" in this sense.

I agree, for the avoidance of doubt, that "qualia are perceived sensations" and "qualia are qualitative characters of sensation" do not imply that they are in principle irreducible.

Is it reasonable to reject X, for which there is evidence, on the basis that someone might be smuggling in Y?

Nope. But it might be reasonable to be cautious about X and want it very precisely defined, to make sure such smuggling doesn't happen. If you're talking about evolution with a fundamentalist, then you should be very careful not to slide between "evolution means change of allele frequencies in a population" and "evolution means mutation and natural selection" and "evolution means mutation and natural selection, all happening through purely natural processes". (And I think sometimes people do slide between those, and if they do then the creationists are right to complain.)

What I would like to be able to do is to build up a step-by-step argument. But that can only work with a reader who is willing to judge each step on its own merits.

Fine with me, with of course the proviso that sometimes it's perfectly reasonable to say things like "oh, I hadn't realised you meant that" or "hmm, that inference seems valid, so let me reconsider something I agreed to before -- yeah, on reflection I shouldn't have been so quick to agree, and I now think it's probably wrong".

I agree that people experience things. You are apparently very keen that I should say not only that but also that people have qualia. Before agreeing with that, I want to know precisely what you mean by "having qualia" and how it goes beyond "experiencing things". The thing you quoted from Wikipedia doesn't really help me understand that. Could you be more explicit about what it is that you think I should be more confident of than I am? It surely isn't just that you specifically want me to say the particular words "we have qualia".

Replies from: TAG
comment by TAG · 2022-04-04T16:34:46.569Z · LW(p) · GW(p)

But it seemed that you were saying that the “existence” of “qualia” is just obvious and I was explaining one reason why I can’t agree.

But you weren't disagreeing with anything actually in the definition. You have been saying that the definition doesn't make it explicit enough that qualia aren't irreducible, immaterial, etc. Merely failing to mention reducibility, etc, one way or the other isn't enough for you.

I am doubtful about some of the philosophical baggage that “qualia” sometimes seem to be carrying.

"Seem" to whom? From my perspective, you keep insisting that I have smuggled in non-materialistic assumptions ... but I don't even see how that would work.

If I offer you one definition, then swap it for another, isn't that a blatant cheat on my part? And if it is, why worry?

Or if I argue that qualia are immaterial based on other evidence and theories and whatever ... so that the conclusion isn't begged by definition alone ... that's legitimate argumentation.

Suppose you were instead arguing that people “have souls”. I say I’m not convinced that that’s true, and it’s not entirely clear just what “souls” are meant to be. You say: of course there are souls, they’re what make us us, they’re what enable us to have moral values and appreciate beauty. I say: of course I agree that we are who we are and that we have moral values and appreciate beauty, but I don’t see any good reason to think that those things are best explained by postulating things that we call “souls”, and some of the specific things people say about “souls”—e.g., that they are inherently immortal and persist after our brains decompose—seem probably false. I don’t know how you feel about souls, but if like me you are skeptical about them then it might be helpful to know that what you’re actually saying about “qualia” seems very much like my imaginary-you is saying about “souls”.

You are asking me to tell you what qualia are ontologically. But that's not a definition, that's a theory. Theories explain evidence. Evidence has to be spoken about separately from theories. When I define qualia, I am defining something that needs to be explained, not offering an explanation. I want the definition to be ontologically non-committal so that the process of theory building can proceed without bias. But neutrality isn't enough for you: you are committed to a theory, and you won't consider something as relevant evidence unless you can be guaranteed that it won't disrupt the theory.

I agree that people experience things. You are apparently very keen that I should say not only that but also that people have qualia

"Experience things" doesn't convey enough information, because it can too easily be taken in a naive realist sense.

The point isn't that you are seeing a tomato, it is that you are seeing it in a certain way.

According to science, our senses are not an open window on the world that portrays it exactly as it is. Instead, the sensory centres of our brains are connected to the outside world by a complex causal chain, during which information, already limited by our sensory modalities, is filtered and reprocessed in various ways.

So scientific accounts of perception require there to be a way-we-perceive-things... quite possibly, an individual one. Which might as well be called "qualia" as anything else.

Replies from: gjm
comment by gjm · 2022-04-04T21:31:01.677Z · LW(p) · GW(p)

It is simply not true that

[I] have been saying that the definition doesn't make it explicit enough that qualia aren't irreducible, immaterial, etc.

I have been saying that the things you offer by way of defining "qualia" don't make it clear enough what the term means. And that I don't want to affirm the existence of something whose meaning is not clear to me, one reason (not the only one) being that that opens the way for bait-and-switch moves where I say "sure, X exists" and then the person I'm talking to says "aha, so you agree that Y" where Y is something that now turns out to be part of what they meant by X that hadn't been made explicit before.

That doesn't mean that I need a definition that says "these things aren't irreducible or immaterial". It means I need a definition clear enough that I can tell whether irreducibility, or immateriality, or a dozen other things, are part of what the term means.

So far, you've (1) pointed to a few things and said "look, these are qualia" (which obviously doesn't enable me to tell what is and isn't part of what you mean by the term), and (2) cited a definition in a Wikipedia article which, as I explained above, seems actually to be at least two different definitions that say different things. And, in your latest comment, (3) said some things about the processes of perception that don't help me understand what you mean by "qualia" for reasons I'll get to below.

It's very likely that what you mean by "qualia" doesn't in fact presuppose immateriality or irreducibility or whatever! But I can't tell because you have so far not chosen to tell me, in terms I am able to understand with confidence, just what you mean by the term.

you keep insisting that I have smuggled in non-materialistic assumptions

Nope. I keep insisting that I can't tell what assumptions, if any, you might have smuggled in or might smuggle in later, because I can't tell exactly what you mean by the term. Which is problematic for all sorts of reasons other than possible assumption-smuggling.

If I offer you one definition, then swap it for another, isn't that a blatant cheat on my part?

Yup, and as you say that would be fine because then I could just say "look, you cheated and here's how". But what you're actually doing is offering me no definitions concrete enough to tell what exactly you mean. And, again, the problem with that isn't just that if you were inclined to be dishonest you could smuggle things in. It's that you're getting cross at me for not saying "yes, I agree that there are qualia" even though I still don't know exactly what you would take that affirmation to mean.

(I remark that that really is the full extent of my denial. I haven't said "there are no qualia" or "your alleged notion of qualia is incoherent" or anything of the sort. I have merely explained why I am not positively affirming that there are qualia, and from this you make extravagant deductions about how I am inflexibly attached to some theory -- you don't say what theory -- which might possibly be troubled by the implications of qualia -- you don't say what implications. I think it is entirely possible that at some point in this discussion I will say "OK, I believe that by having qualia you mean X, and I completely agree that in that sense we have qualia". To the best of my knowledge I am not committed to any theory that would make that impossible.)

I agree that there is nothing illegitimate about arguing from actual evidence on which we agree that qualia are immaterial, or whatever. That's fine. But obviously we first need to figure out what evidence we agree on. You would like "qualia exist" to be one piece of evidence. Before I agree with that I need to know exactly what you mean by it. I am trying to find out, so far with less success than I'd like.


In the next bit of what you write it seems to me that you're doing two things that I don't see how to square with one another, while accusing me of bad faith for allegedly doing something that it looks to me as if you are doing. It would be nice to clear this up. So, first of all, the two things I don't see how to square with one another.

On the one hand, you insist that when you talk about "qualia" you're doing so in a way that doesn't have any sort of theory attached, that merely amounts to presenting "something that needs to be explained", something ontologically non-committal.

On the other hand, you aren't content with e.g. "people experience things"; you want something on top of that that's denoted by the word "qualia".

Now, it seems to me that what uncontroversially needs to be explained is simply that people experience things. (Which, at the risk of rererepeating myself, I agree that we do.) Maybe what-you-mean-by-qualia is another thing that uncontroversially needs to be explained, but (1) I still don't know exactly what you mean by "qualia" and (2) it looks to me as if the term goes beyond what uncontroversially needs to be explained.

One thing that going from "people experience things" to "people have qualia" adds, I think, is a reification of ways-of-experiencing, an implicit assertion something along the lines of "all those times people experience red, there is some single common thing shared between them". (Not necessarily exactly that, of course. But something along those lines.) But that's all those things you say you aren't doing: it introduces a theory to the effect that different experiences share the same quale, it introduces an ontological commitment to there being actual things for the term "qualia" to attach to, and in doing these things it assumes something that (it seems to me) you don't get to demand an explanation for, to treat as part of what needs to be explained, until you've given reason to think it's real. Which I don't think you've done yet.

(You have pointed to examples of people experiencing things. I agree that they are examples of people experiencing things. Before agreeing that they are, beyond that, examples of "qualia", I need to understand exactly what that means to you that goes beyond "people experiencing things", and so far I do not.)

Maybe I am misunderstanding what you mean by "qualia" and what it actually commits us to. That's very possible. Because I keep trying to get you to be more explicit about it and you keep not being explicit about it.

So I'll ask again. Please, please, could you have another go at being explicit and clear about what exactly you mean by "qualia" and what you consider someone is and isn't committed to by saying that "qualia exist"? If Alice believes that there are qualia and Bob believes that people experience things but there aren't qualia, can you describe what should be different between Alice's world and Bob's without needing the word "qualia"? In Bob-world, do you expect people not to say things like "what a beautifully red rose" or "that book cover is the exact same shade of blue as my lover's eyes"? Or do you expect people to say and do the same things but not really experience those things they're talking about? Or something else? (I think probably something else. But I would like a better understanding of what else.)


Finally, you offer (unless I misunderstand?) an argument for why "people experience things" isn't enough and we need "people have qualia" instead. But I don't understand the argument.

I agree (of course) that our perception of the external world is a complicated affair, with lots of processing and filtering along the way. But I have no inkling how you get from there to the assertion that in addition to saying "people experience things" we need a reification of ways-of-perceiving-things. (It seems almost as if you're equivocating between "ways of" meaning "processes by which" and "ways of" meaning something like "varieties of"; the fact that our perception is complicated and indirect might[1] give reason to believe in processes-by-which-perception-happens, but that's not the sort of "ways of perceiving" that qualia are supposed to be at all, surely?)

[1] Though I'm not 100% convinced those need reifying either.

Anyway, 

Replies from: TAG, TAG
comment by TAG · 2022-04-05T15:22:01.349Z · LW(p) · GW(p)

I have been saying that the things you offer by way of defining “qualia” don’t make it clear enough what the term means.

Define "matter".

Replies from: gjm
comment by gjm · 2022-04-05T19:54:10.855Z · LW(p) · GW(p)

Why?

(We haven't been discussing matter. I haven't been insisting that you affirm the existence of matter. There aren't any circumstances parallel to those involving "qualia".)

But, since you ask, here's the best I can do on short notice.

First, purely handwavily and to give some informal idea of the boundaries, here are some things that I would call "matter" and some possibly-similar things that I would not. Matter: electrons, neutrons, bricks, stars, air, people, the London Philharmonic Orchestra (considered as a particular bunch of particular people). Not matter: photons, electric fields, empty space (to whatever extent such a thing exists), the London Philharmonic Orchestra (considered as a thing whose detailed composition changes over time), the god believed in by Christians (should he exist), minds. Doubtful: black holes; the gods believed in by the ancient Greeks (should they exist).

"Matter" is a kind of stuff rather than a kind of thing; that is, in general if some things are "matter" then so is what we get by considering them together, and so are whatever parts they might have. (This might need revision if e.g. it turns out that things I consider "matter" and things I don't are somehow merely different arrangements of some more fundamental stuff.)

Conditional on the universe working roughly the way I currently model it as doing (or, more precisely, allow other people better at these things to model it as doing), I think the actually-existing things I call "matter" are coextensive with "things made from excitations of fermionic quantum fields". If the way the universe works is very different from how I think it does, then depending on the details I might want (1) to continue to say that matter is excitations of fermionic quantum fields, and to declare that contrary to appearances some things we've all been thinking of as matter are something else, or (2) to continue to say that the things we naïvely think of as matter should be called matter, even though some of them are made of other things than excitations of fermionic quantum fields, or (3) to abandon the notion of "matter" as ill-adapted for the way the world actually turns out to be.

If faced with someone denying, or reluctant to positively affirm, the existence of "matter", I would be interested to know whether they mean that some or all concrete things I regard as "matter" are fictions or simulations or imaginations or something, or whether they agree that those things are real but disagree somehow about their fundamental nature (in which case it would be nice to know what), or whether as in our case they don't find my usage of the term clear enough to endorse or reject.

(In our case, I think the experiencing you point at when you refer to "qualia" is real; I do not know whether you are intending to point at something more noun-like, nor whether the things in question are real; I don't think the term "qualia" generally presupposes any detailed view about the underlying nature of whatever-it-points-at but would want a clearer understanding of how my interlocutor is using the term before being confident of that in a specific case.)

comment by TAG · 2022-04-05T15:21:11.375Z · LW(p) · GW(p)

But I have no inkling how you get from there to the assertion that in addition to saying “people experience things” we need a reification of ways-of-perceiving-things.

I didn't say anything explicit about reification. And it's not an implication, either. Merely using a noun is not reification. "Action", "event", "state", "property", "process" and "nothingness" are all nouns, yet none of them refer to things.

a reification of ways-of-experiencing, an implicit assertion something along the lines of “all those times people experience red, there is some single common thing shared between them”.

Again, that would be an ontology of qualia. Again, I am offering a definition, not a complete theory. Again, your grounds for saying that the definition is inadequate are that it isn't answering every question you might have -- and that it might have implications you don't like. If the way qualia actually work, ontologically -- a subject about which I have said nothing so far -- involves the literal sharing of a universal between identical subjective sensations, then you should believe it, because it is true, and not object to it dogmatically. Definitions are supposed to have implications. It's not reasonable to object to them for having implications ... and it's not reasonable to object to them for having implications you don't like, because you are supposed to decide theories on the evidence.

Notice that in raising the issue, you are already using a good-enough definition of qualia. To object to qualia on the basis that they involve a Platonic shared universal, rather than some other solution to the problem of universals, you have to be able to talk about them, even if without using the word "qualia". But of course, you always have to have pre-theoretic definitions in order to build a theory.

It’s very likely that what you mean by “qualia” doesn’t in fact presuppose immateriality or irreducibility or whatever! But I can’t tell because you have so far not chosen to tell me, in terms I am able to understand with confidence, just what you mean by the term.

Whether qualia are immaterial or irreducible or whatever depends on all the evidence -- on a theory. It should not be begged by a single definition. Question begging definitions are bad, m'kay.

I agree that there is nothing illegitimate about arguing from actual evidence on which we agree that qualia are immaterial

But we would first need to agree that qualia exist at all. That's how theory building works ... step by step. Nobody could come to any conclusion about anything if they had to start with completely clear and exhaustive definitions. Ordinary definitions are not as exhaustive as encyclopedia articles, for instance. You are engaging in a selective demand for rigour.

If Alice believes that there are qualia and Bob believes that people experience things but there aren’t qualia, can you describe what should be different between Alice’s world and Bob’s without needing the word “qualia”?

I've already answered that: if Bob differs from Alice , he differs in being a naive realist.

give reason to believe in processes-by-which-perception-happens, but that’s not the sort of “ways of perceiving” that qualia are supposed to be at all, surely?)

No, they are supposed to be subjectively experienced ways of perceiving, as I have already said several times. I wasn't putting forward ways-of-perceiving as an exhaustive definition, I was pointing out the inadequacy of your definition.

Replies from: gjm
comment by gjm · 2022-04-05T19:24:12.202Z · LW(p) · GW(p)

Using a noun is, by default, reification. Or, at the very least, should be presumed so in the absence of some statement along the lines of "of course when I'm asking you to agree that people have qualia, I am not asking you to commit yourself to there being any such things as qualia".

Qualia without reification seem to me to amount to "people have experiences". I understand that it doesn't seem that way to you, but I don't understand why; I don't yet understand just what you mean by "qualia", and the one thing you've said that seems to be an attempt to explain why you want something that goes beyond "people have experiences" in the direction you're calling "qualia" -- the business about perception being a complex multi-stage process involving filtering and processing and whatnot -- didn't help me, for the reasons I've already given.

Again, that would be an ontology of qualia. Again, I am offering a definition, not a complete theory.

I wish you would offer a definition. You are repeatedly declining to do so, and then complaining that I object to your definition (which you haven't given) or have another definition of my own (which I don't) or that I am immovably committed to some theory (you don't say what) that conflicts (you don't say how) with something (you don't say what) about qualia. Maybe you're right -- for instance, I might be committed to some theory without even recognizing the fact, because it seems so obvious to me. But if so, the only way you're going to correct my error (I assume, at least for the sake of argument, that if I am wrong you do want to help me get less wrong, rather than merely to gloat at how wrong I am) is by showing me what I'm doing wrong, which you seem very unwilling to do. You just want to keep saying that I'm wrong, which so far as I can see accomplishes nothing.

Definitions are supposed to have implications. It's not reasonable to object to them for having implications

That's OK, because I'm not doing that, as I already tried to make clear. My problem is that I can't tell what implications your definition has. Because you won't tell me what it is.

in raising the issue, you are already using a good-enough definition of qualia

It seems to me that in order for a definition I'm using to be good enough, a minimum requirement should be for me to have a reasonable idea what it is. In the present case, this requirement is not met.

Further, what I require to raise the issue is precisely the thing you are now saying is not part of what you mean by "qualia", namely the understanding that insisting on the term "qualia" rather than merely saying that people experience things amounts to reifying whatever-it-is-"qualia"-is-about. You seem to be saying simultaneously (1) that speaking of "qualia" doesn't commit one to reifying anything, and (2) that if I say "I don't like the term 'qualia' because it seems like it may be reifying things that might be better not reified" I thereby demonstrate that I understand what you mean by "qualia" well enough. How can those possibly both be true?

Question begging definitions are bad, m'kay.

I'm glad you agree with me about that. But you seem to be ... offended? ... that I might be concerned at the possibility that you might be trying to make a question-begging definition. Do you consider that the way you've been conducting your side of this discussion demonstrates your bona fides so clearly that no reasonable person in my position could be concerned that there might be such an issue?

if Bob differs from Alice, he differs in being a naïve realist.

Splendid! I believe, like Alice, that people have experience. I am not a naïve realist. Therefore, apparently, I believe in qualia in whatever sense you have in mind, even though I don't yet understand what that sense is. Problem solved!

(At least, I think I am not a naïve realist. But -- I'm sorry if this seems to be becoming a theme -- I don't really understand exactly what you mean by "naïve realist", so maybe I'm wrong in thinking I'm not one. At any rate, I agree with what you said in apparent criticism of naïve realism, namely that when we perceive things it happens by means of a complicated process in which lots of things happen along the way.)

I wasn't putting forward ways-of-perceiving as an exhaustive definition, I was pointing out the inadequacy of your definition.

Not possible, since I wasn't proposing to define anything. I assume you mean the inadequacy of my proposal of "people experience things" as the uncontroversially-true-and-needing-explaining part of what you might possibly mean by "people have qualia". But, for the reasons I already gave and which you seem to have simply ignored without comment, I do not understand how what you said about naïve realism and the complexity of perception does anything to explain how "people have qualia" is better than "people experience things".

It occurs to me that maybe you're taking "things" in "people experience things" to be e.g. sunsets, and "people experience things" to mean something like "a straightforward X-experiences-Y relation holds between people and sunsets", which might explain why you are accusing me of naïve realism. That isn't what I mean; the difficulty I am having here is that the language we have available for talking about this stuff is full of implicit reifications and the like. Perhaps just "people experience" would be better. What is uncontroversial is not that people directly-interact-perceptually-with sunsets, nor that people have actual-reified-experience-types-called qualia, but that experiencing-events (-processes? all these terms are still too committal if taken literally) of the kind we are describing when we say things like "I saw a lovely sunset" happen. All the details of just what objects are involved are, I think, up for grabs, and one of the things that makes me uneasy about "people have qualia" is that it strongly suggests particular answers to some of those questions that -- because "question-begging definitions are bad, m'kay" -- would be better left un-suggested.

You are engaging in a selective demand for rigor.

Only in the following sense: you seem offended that I will not declare that people have qualia. (Again, that's the full extent of what I'm doing, so if something else is what's offending you then you might consider saying explicitly what it is.) So I want to know what you mean when you say that people have qualia, and how that differs from weaker propositions like "people experience things".

So, sure, it's selective: I am selectively seeking clarification of the things you're talking about where I don't understand what you mean. If you find that problematic, then I would like some help understanding why.


It is possible that something you say at the end of your comment above is intended to be an answer to my often-repeated question. (Of course you claim that you have "already said [it] several times", and of course that is not true, but never mind.)

they [sc. qualia] are supposed to be subjectively experienced ways of perceiving

Well, for sure I agree that perceiving happens, and that how that feels to us is subjective (because how anything feels to us is subjective, because that's what "subjective" means). Does that mean that I agree with what you mean when you say "we have qualia", or not?

(That isn't meant to be some sort of gotcha-question. I genuinely don't know what the answer is. I am still trying to avoid needless reification, and I suspect that for reasons I am not currently able to anticipate you will find it unsatisfactory that I don't use the exact phrase "have qualia"; it seems to me that what I said above precisely captures what you get by taking "we have qualia" + "qualia are subjectively experienced ways of perceiving" and trying to change that as little as possible while avoiding the (apparent?) reification I'm worried about; but maybe I'm wrong. And a part of my brain still harbours the uncharitable suspicion that despite your assurances above the reification is actually the point and you will not accept any wording that doesn't carry the ontological commitment you say you aren't trying to beg the question of. Again, this is an uncharitable suspicion and I am aware that it is; but I am having trouble figuring out better explanations for why you're so concerned that I specifically embrace "qualia", in so many words.)

Replies from: TAG
comment by TAG · 2022-05-17T19:46:29.615Z · LW(p) · GW(p)

Using a noun is, by default, reification. Or, at the very least, should be presumed so in the absence of some statement along the lines of “of course when I’m asking you to agree that people have qualia, I am not asking you to commit yourself to there being any such things as qualia”.

I've already said that I'm using "qualia" in an ontologically non-committal way.

I note from your 2016 comment that you use the word noncommittally yourself.

"Qualia are what happens in our brains (or our immaterial souls, or wherever we have experiences) in response to external stimulation, or similar things that arise in other ways (e.g., in dreams)."

Qualia without reification seem to me to amount to “people have experiences”.

As I have explained, equating qualia and experiences doesn't sufficiently emphasise the subjective aspects.

"Experience" can be used in contexts like "experience a sunset" where the thing experienced is entirely objective, or contexts like "experience existential despair" ,where it's a subjective feeling. Only the second kind of use overlaps with "qualia". Hence, "qualia" is often briefly defined as "subjective experience".

Note that "experience" is just as much of a noun as "quale", so it has just as much of reification issue.


I am still trying to avoid needless reification,

Then don't reify. The reification issue exists only in your imagination.

I understand that it doesn’t seem that way to you, but I don’t understand why; I don’t yet understand just what you mean by “qualia”,

How do you know it's different from what you mean? You were comfortable using the word in 2016. This conversation started when I used a series of examples to define "qualia", which you objected to as not being a real definition.

"It’s easy to give examples of things we think of as qualia. I’m not so sure that that means it’s easy to give a satisfactory definition of “qualia”.'

But when I asked you to define "matter"...you started off with a list of examples!

"First, purely handwavily and to give some informal idea of the boundaries, here are some things that I would call “matter” and some possibly-similar things that I would not. Matter: electrons, neutrons, bricks, stars, air, people, the London Philharmonic Orchestra (considered as a particular bunch of particular people). Not matter: photons, electric fields, empty space (to whatever extent such a thing exists), the London Philharmonic Orchestra (considered as a thing whose detailed composition changes over time), the god believed in by Christians (should he exist), minds. Doubtful: black holes; the gods believed in by the ancient Greeks (should they exist)."

The only thing I'm doing that is different is going for a minimal and common sense approach, rather than a technical definition along the lines of "that which is ineffable, incorrigible, irreducible and repeatable". Hence the list of examples: it's hard to deny that one's pains feel like something, even when one can quibble about incorrigibility or whatever.

and the one thing you’ve said that seems to be an attempt to explain why you want something that goes beyond “people have experiences” in the direction you’re calling “qualia”—the business about perception being a complex multi-stage process involving filtering and processing and whatnot—didn’t help me, for the reasons I’ve already given.

Again, that would be an ontology of qualia. Again, I am offering a definition, not a complete theory. Again, you shouldn't be rejecting evidence because you don't like its theoretical implications.

Splendid! I believe, like Alice, that people have experience. I am not a naïve realist.

Naive realism is not the denial of experience: it's treating experience as objective.

At least, I think I am not a naïve realist. But—I’m sorry if this seems to be becoming a theme—I don’t really understand exactly what you mean by “naïve realist”

You can look up definitions, just as you can for "qualia".

so maybe I’m wrong in thinking I’m not one. At any rate, I agree with what you said in apparent criticism of naïve realism, namely that when we perceive things it happens by means of a complicated process in which lots of things happen along the way.)

Which have an objective aspect -- things happen differently in the brains of different perceivers -- and a subjective aspect -- things seem different to different observers. Again, the subjective aspect is what's relevant.

It occurs to me that maybe you’re taking “things” in “people experience things” to be e.g. sunsets, and “people experience things” to mean something like “a straightforward X-experiences-Y relation holds between people and sunsets”, which might explain why you are accusing me of naïve realism. That isn’t what I mean; the difficulty I am having here is that the language we have available for talking about this stuff is full of implicit reifications

No, it just seems to you that way.

they [sc. qualia] are supposed to be subjectively experienced ways of perceiving

Well, for sure I agree that perceiving happens, and that how that feels to us is subjective (because how anything feels to us is subjective, because that’s what “subjective” means).

No, it doesn't mean anything so vacuous. If two people perform mental arithmetic, that is not subjective, because maths is objective: they get the same answer, or one of them is wrong. "Subjective" doesn't just mean that individual apprehensions vary; it means there is no right or wrong about the variation. Some people like the way marmite tastes to them, others don't. Neither is right or wrong, but the marmite is always the exact same substance.

Does that mean that I agree with what you mean when you say “we have qualia”, or not?

Well, you seem to be having trouble understanding what "subjective" means.

Replies from: gjm
comment by gjm · 2022-05-18T11:52:42.914Z · LW(p) · GW(p)

Your accusations of inconsistency

Yup, I used the term "qualia" in 2016 (in response to someone else making an argument that used the term). I don't always pick every possible fight :-).

(In that case, turchin was making another specific argument and used the word "qualia" in passing. I disagreed with the other specific argument and argued against that. The specific word "qualia" was a side issue at most. Here, the specific point at issue is whether everyone needs to agree that "we have qualia".)

You asked for a definition of "matter" and I (1) gave a list of examples and counterexamples and near-the-boundary examples, (2) prefaced with an explicit note that this was preliminary handwaving, (3) followed by an attempt at a precise definition distinguishing matter from not-matter. You haven't done any of that for "qualia", just given a list of examples, and that (not the fact that you did give a list of examples) is what I was complaining about. "It's easy to give examples ... I'm not so sure that that means it's easy to give a satisfactory definition".

Your accusations of wilful ignorance and/or laziness

Yes, I could look up definitions of "naïve realism" or of "qualia". As it happens, I have. They don't tell me what you mean by those terms, and definitions of them do not always agree with one another. Which is why I keep asking you what you mean by terms you are using, and get frustrated when you seem reluctant to tell me.

For instance, here we read that "the naïve realist claims that, when we successfully see a tomato, that tomato is literally a constituent of that experience, such that an experience of that fundamental kind could not have occurred in the absence of that object". Here we read that "naïve realism is the human tendency to believe that we see the world around us objectively, and that people who disagree with us must be uninformed, irrational, or biased". In a comment of yours [LW(p) · GW(p)] elsewhere in this thread you say "People generally and incorrectly assume that colours are objective properties (hence the consternation caused, amongst some, by the dress illusion). That's called naive realism, and it's scientifically wrong."

(I remark in passing that you also said that the difference between two people who agree that people experience things, one of whom says that we have qualia and one of whom doesn't, is that the latter has to be a naïve realist; if you are in fact claiming that "we have qualia" means something that is straightforwardly implied by "colours are not objective properties of the objects whose colours they are" then, yay, it turns out that I believe that we have qualia and we can stop arguing. But I'm pretty sure this will not in fact be enough.)

These three things are not entirely unlike one another, but no two of them are the same. Your comment is offering an example rather than a definition, but it is not in fact an example of the first definition and I'm doubtful about its being an example of the second.

Or I could look up "qualia" in, say, the MIT Encyclopedia of Cognitive Science, whose entry for that subject begins as follows -- my annotations in square brackets.

The terms quale and qualia (pl.) are most commonly used ["most commonly", so plainly what is about to follow is acknowledged not to be a definition that covers everyone's use of the terms -- gjm] to characterize the qualitative, experiential, or felt properties of mental states. ["most commonly used to characterize" is quite a long way from defining anything -- gjm] Some philosophers take qualia to be essential features of all conscious mental states; others only of sensations and perceptions. [Again, not everyone uses the word the same way. -- gjm]

Here we have clear acknowledgement that the term is used differently by different people, and (I think deliberately) something that falls short of an actual clear definition.

Or I could go to an ordinary dictionary, say the superlative Oxford English Dictionary, the relevant part of whose definition reads: "a quality or property as perceived or experienced by a person". Which seems to presuppose (at least one version of) "naïve realism": these qualities or properties are presumably of things in the external world and this definition seems at risk of importing the objects themselves into our qualia.

(But: people have experiences; often those experiences are "of" qualities or properties of objects in the world around them; if you don't consider that to be acknowledging that "people have qualia", what do you think is missing other than a specific shibboleth-phrasing?) 

Actual discussion of the issues

I'm not sure whether "that which is ineffable, incorrigible, irreducible and repeatable" is a (presumably very partial) description of what you mean when you talk about qualia, rather than just a random list of adjectives. If it is, then it's a nice illustration of why I am reluctant to say (as I think you want me to) "yes, sure, I agree that there are qualia". I strongly suspect that what most people who talk about "qualia" mean when they use that word goes way beyond the specific handwavy examples they offer, and I don't want to be thought (by them or by others) to be endorsing any of the other stuff until we've talked about that other stuff explicitly. I agree with you that people experience things; I do not think it's clear that they have experiences, or equivalence-classes-of-experiences, or whatever, that are ineffable, incorrigible, irreducible and repeatable. Maybe they do. I'm not saying they definitely don't. Just that those further claims are substantial claims, and that I think they are the sort of thing that "qualia" commonly means even to people who say "oh, no, all I mean by qualia is that stuff that happens when you look at a red thing", and that part of the difference between "I agree that people experience things" and I agree that people have qualia" is that the latter is liable to give the impression that I endorse those substantial claims, and that if you want my assent to those claims then we actually need to discuss them.

Saying that you're going for a minimal and common-sense approach is all very well, but in practice people are not that great at using words that have connotations in ways that abandon those connotations. This is also why I don't think an atheist should play along if a certain kind of theist says "well, let's just take 'God' as a sort of shorthand for humanity's highest and best aspirations", and I don't think a creationist should play along if a biologist says "look, all 'evolution' means is change of allele frequencies in populations".

You distinguish (I think this is the nearest you get to answering my question about what exactly you mean by qualia) between two kinds of things we might say about experience: "objective" (things happen differently in the brains of different people) and "subjective" (things seem different to different people) and say it's the subjective side you're concerned with. I am not actually convinced that "things seem different to different people" is a very meaningful proposition -- that seems to presuppose some way of comparing experiences across people, and while obviously it's easy to say that my experience of looking at a green thing is not the same as your experience of being struck gently on the head with a golf club I don't really know what it would mean to say that my experience of looking at a green thing is or isn't the same as your experience of looking at a green thing, and I strongly suspect that there is no answer.

Still, for sure people experience things and we can e.g. ask someone what they feel like. Does that mean that I agree that "people have qualia"? If not, what do you think is missing other than a specific shibboleth-phrasing?

comment by Lance Bush (lance-bush) · 2021-10-30T17:01:36.677Z · LW(p) · GW(p)

Satisfactory for whom? I use examples because they are sufficient to get the point across to people who aren't too biased.

 

It’s not satisfactory to me. Does this mean I am “too biased?” That seems like a potentially unjustified presumption to make, and not a fair way to have a discussion with others who might disagree with you.

Anyone could offer a definition, state in advance that anyone who doesn’t accept it is “too biased,” and then, when someone says they don’t accept it, say “see, I told you so,” even if an unbiased person would judge the definition to be inadequate.

In any case, I’m not making a selective demand for rigor. Even if I were, I’d probably just shrug and raise the challenge, anyway. I don’t know what people talking about qualia are talking about. But I am also pretty confident they don’t know what they are talking about. I suspect qualia is a pseudoconcept invented by philosophers, and that to the extent that we adequately characterize it, it faces pretty serious challenges. 

Where are the calls for rigorous definitions of "matter", "computation", etc?

The main person I discuss illusionism and consciousness with specializes in philosophy of computation and philosophy of science, with an emphasis on broad metaphysical questions. We both endorse illusionism, and have for years, so there’s little to say there. Instead, we regularly discuss their views on computation and metaphysics, and I’m often asked to read their papers on these topics. So, in the past few years, I have read significantly more work on what computers and matter are than I have on consciousness.

Thus, ironically, I have more discussions about rigorous attempts to define computers and features of the external world than I do about consciousness. So if you think that, in denying qualia, I am somehow failing to apply a similar degree of rigor as I do to other ideas, you could not have picked worse examples. It is not the case that I’m especially tough on the notion of qualia.

comment by Lance Bush (lance-bush) · 2021-10-30T16:17:17.804Z · LW(p) · GW(p)

Unfortunately, I don’t think the account of qualia you’ve presented is adequate.

First, I don’t know what is meant by “perceived sensation” of the pain of a headache. This could be cashed out in functional terms that don’t make appeal to what I am very confident philosophers are typically referring to when they refer to qualia. So this strikes me as a kind of veiled way of just using another word or phrase (in this case, “perceived sensation”) as a stand-in for “qualia,” rather than a definition. It’s a bit like saying the definition of morality is that it is “about ethics.”

I’m likewise at a loss about the second part of this. What is the qualitative character of a sensation? What does it mean to say that you’re referring to “what it is directly like to be experiencing” rather than a belief about experiences? Again, these just seem like roundabout ways of gesturing towards something that remains so underspecified that I still don’t know what people are talking about. 

Whereas illusionism is almost impossible to define coherently.

Illusionism holds that our introspections about the nature of our conscious experiences are systematically mistaken in particular ways that induce people to hold the incorrect belief that our experiences have phenomenal properties.

I think this is a coherent position, and I’m reasonably confident it comports with how Dennett and Frankish would characterize it.

Where is that quote from? It seems to imply that all mental states are either propositional attitudes or perceptions. If so, that doesn’t seem right to me. Also, the complaint primarily seems to be with the name “illusionism.” I’m happy to call it delusionism. If we do that, do they still have an objection? If so, I’m not quite sure what the objection is.

Replies from: TAG
comment by TAG · 2021-10-30T17:12:25.971Z · LW(p) · GW(p)

So this strikes me as a kind of veiled way of just using another word or phrase (in this case, “perceived sensation”) as a stand-in for “qualia,” rather than a definition.

Is "unmarried man" a mere stand-in for "bachelor"?

Again, these just seem like roundabout ways of gesturing towards something that remains so underspecified that I still don’t know what people are talking about.

They are ways of gesturing towards your own experience. If you refuse to introspect you are not going to get it.

Where is that quote from?

Me.

Illusionism holds that our introspections about the nature of our conscious experiences are systematically mistaken in particular ways that induce people to hold the incorrect belief that our experiences have phenomenal properties.

That's what I was expanding on.

The phenomenal properties you mentioned...those are qualia. You have the concept, because you need the concept to say it's illusory.

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-30T18:50:10.269Z · LW(p) · GW(p)

Is "unmarried man" a mere stand-in for "bachelor"?


In some cases, but not others. One can reasonably ask whether the Pope is a bachelor, but for the purposes of technical philosophical work one might treat “unmarried man” and “bachelor” as identical in the context of some technical discussion.

They are ways of gesturing towards your own experience. If you refuse to introspect you are not going to get it.

I can understand if someone who doesn’t know me or my educational background might think that I just haven’t thought about the topic of qualia enough, or that I am refusing to introspect about it, but that isn’t the case. This isn’t a topic I’ve thought about only casually; it is relevant to my work.

That being said, I have introspected, and I have come to the conclusion that there isn’t anything to get with respect to qualia. Nothing about my introspection gives me any insight into what you or others mean by qualia. Instead, I have concluded that the notion of qualia that has trickled out from academic philosophy is most likely a conceptual confusion enshrining the kinds of introspective errors Dennett and others argue that people are prone to make.

Me.

Okay, thanks. I apologize for having had to ask, but you provided a paragraph in quotation with no attribution, and it was difficult for me to interpret what that meant.

The phenomenal properties you mentioned...those are qualia. You have the concept, because you need the concept to say it's illusory.

I have a kind of meta-concept: that other people have a concept of qualia but I myself am not personally acquainted with them, and would not say that I have the concept. One does not need to personally be subject to an illusion to believe that others are.

I know that other people purport to have a notion of qualia, but I do not. But thinking other people have mistaken or confused concepts does not require that one have the concept in the sense of possessing or understanding it. In other words, other people might tell me that there's, e.g., "something it's like" to see red or taste chocolate that somehow defies explanation, is private, is inaccessible, and so on. But I myself do not have such experiences. In such cases, I think people are simply confused, and that this confusion can result, in the case of believing in qualia, in the development of pseudoconcepts.

This isn't the only case where I think this could or does occur. If people insisted they had a concept that was unintelligible or self-contradictory, such as a “colorless color,” or if they insisted something could be "intrinsically north-facing," I could hold that they are mistaken in having such concepts, and maintain that I don’t “have the concept,” in that I am not actually capable of personally entertaining the notion of colorless colors or intrinsically north-facing objects.

In fact, this is exactly my position on non-naturalist moral realism: I regard the notion of stance-independent moral facts as unintelligible. I can talk about “stance-independent moral facts” as a concept other people purport to “have,” in the sense of understanding it, without understanding it myself. That is, I don’t actually have the concepts non-natural moral realists purport to have, while still regarding the people who hold such views to be subject to an intellectual or experiential error of some kind.

Replies from: TAG
comment by TAG · 2021-10-30T19:10:45.462Z · LW(p) · GW(p)

Of course, introspection isn't meant to give you a definition of qualia...it's meant to give you direct acquaintance.

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-30T19:23:13.886Z · LW(p) · GW(p)

I have introspected and it has not resulted in acquaintance with qualia.

I believe people can introspect and then draw mistaken conclusions about the nature of their experiences, and that qualia is a good candidate for one of these mistaken conclusions.

Replies from: TAG
comment by TAG · 2021-10-30T19:46:16.239Z · LW(p) · GW(p)

What did it result in acquaintance with? If it seems to you that all your mental content consists only of propositional attitudes, then you don't even have the illusion of phenomenal consciousness. But why would you alone be lacking it?

Replies from: Raemon, lance-bush
comment by Raemon · 2021-10-30T19:49:32.147Z · LW(p) · GW(p)

Note that it's plausible to me that this is a Typical Mind thing and actually there's just a lot of people going around without the perception of phenomenal consciousness.

Like, Lance, do you not feel like you experience that things seem ways? Or just that they don't seem to be ways in ways that seem robustly meaningful or something?

Replies from: TAG, Jemist, lance-bush
comment by TAG · 2021-10-30T19:55:27.947Z · LW(p) · GW(p)

But the qualiaphilic claim is typical, statistically. Even if Lance's and Dennett's claims to zombiehood are sincere, they are not typical.

Replies from: Raemon, lance-bush
comment by Raemon · 2021-10-30T20:00:46.914Z · LW(p) · GW(p)

Have we even checked tho? (Maybe the answer is yes, but it hadn't occurred to me before just now that this was a dimension people might vary on. Or, actually I think it had, but I hadn't had a person in front of me actually claiming it)

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-30T21:36:33.938Z · LW(p) · GW(p)

See above; I posted a link to a recent study. There hasn't been much work on this. While my views may be atypical, so too might the views popular among contemporary analytic philosophers. A commitment to the notion that there is a legitimate hard problem of consciousness, that we "have qualia," and so on might all be idiosyncrasies of the specific way philosophers think, and may even result from unique historical contingencies, such that, were there many more philosophers like Quine and Dennett in the field, such views might not be so popular.

Some philosophical positions seem to rise and fall over time. Moral realism was less popular a few decades ago, but has enjoyed a recent resurgence, for instance. This suggests that the perspectives of philosophers might result in part from trends or fashions distinctive of particular points in time.

 

comment by Lance Bush (lance-bush) · 2021-10-30T21:32:08.766Z · LW(p) · GW(p)

But the qualiaphilic claim is typical, statistically. 

 

Typical of who?

Replies from: TAG
comment by TAG · 2021-10-30T23:05:54.113Z · LW(p) · GW(p)

"Statistically" , so "who" would be most people.

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-31T02:33:24.458Z · LW(p) · GW(p)

Thanks for clarifying. Not all statistical claims in e.g., psychology are intended to generalize towards most people, so I didn't want to assume you meant most people.

If the claim is that most people have a concept of qualia, that may be true, but I'm not confident that it is. That seems like an empirical question it'd be worth looking into.

Either way, I wouldn't be terribly surprised if most people had the concept, or (I think more likely) could readily acquire it on minimal introspection (though on my view I'd say that people are either duped or readily able to be duped into thinking they have the concept).

I don't know if I am different, or if so, why. It's possible I do have the concept but don't recognize it, or am deceiving myself somehow. 

It's also possible I am somehow atypical neurologically. I went into philosophy precisely because I consistently found that I either didn't have intuitions about conventional philosophical cases at all (e.g., Gettier problems), or had nonstandard or less common views (e.g. illusionism, normative antirealism, utilitarianism). That led me to study intuitions, the psychological underpinnings of philosophical thought, and a host of related topics. So it is no coincidence that I'm presenting the views expressed here. I got into these topics because everyone else struck me as having bizarre views.

Replies from: TAG, Richard_Kennaway
comment by TAG · 2021-10-31T19:22:49.310Z · LW(p) · GW(p)

If the claim is that most people have a concept of qualia, that may be true, but I’m not confident that it is

Most people don't know the word "qualia". Nonetheless, most people will state something equivalent....that they have feelings and seemings that they can't fully describe. So it's a "speaking prose" thing.

And something like that is implicit in Illusionism. Illusionism attempts to explain away reports of ineffable subjective sensations, reports of qualia like things. If no one had such beliefs, or made such reports, there would be nothing for Illusionism to address.

Trying to attack qualia from every possible angle is rather self-defeating. For instance, if you literally don't know what "qualia" means, you can't report that you have none. And if no one even seems to have qualia, there is nothing for Illusionism to do. And so on.

It’s also possible I am somehow atypical neurologically

But then, why insist that you are right? If you have something like colour blindness, then why insist that everyone else is deluded when they report colours?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-11-01T15:15:55.670Z · LW(p) · GW(p)

Most people don't know the word "qualia". Nonetheless, most people will state something equivalent....that they have feelings and seemings that they can't fully describe. So it's a "speaking prose" thing.

 

There are many reasons why a person might struggle to describe their experiences that wouldn't be due to them having qualia or having some implicit qualia-based theory, especially among laypeople who are not experienced at describing their mental states. It would be difficult to distinguish these other reasons from reasons having to do with qualia.

So I don't agree that what you describe would necessarily be equivalent, and I don't think it would be easy to provide empirical evidence specifically of the notion that people have or think they have qualia, or speak or think in a way best explained by them having qualia.

Even if it could be done, I don't know of any empirical evidence that would support this claim. Maybe there is some. But I don't have a high prior on any empirical investigation into how laypeople think turning out to support your claim, either.

And something like that is implicit in Illusionism. Illusionism attempts to explain away reports of ineffable subjective sensations, reports of qualia like things. If no one had such beliefs, or made such reports, there would be nothing for Illusionism to address.

You know, I think you're right. And I believe the course of this discussion has clarified things for me sufficiently for me to recognize that I do not, strictly speaking, endorse illusionism.

Illusionism could be construed as the conjunction of two claims:

(1) On introspection, people systematically misrepresent their experiential states as having phenomenal properties.

(2) There are no such phenomenal properties.

For instance, Frankish (2016) defines (strong) illusionism as the view that:

“[...] phenomenal consciousness is illusory; experiences do not really have qualitative, ‘what-it’s-like’ properties, whether physical or non-physical” (p. 15)

Like illusionists, I deny that there are phenomenal properties, qualia, what-it's-likeness, and so on. In that sense, I deny phenomenal realism (Mandik, 2016), and I agree with (2) above, which is the central claim of illusionism. However, what I am less comfortable doing is presuming that things seem this way to nonphilosophers, and that they are all systematically subject to some kind of error. In that regard, I do not fully agree with illusionists.

To the extent that illusionists mistakenly suppose that people are subject to an illusion, we could call this meta-illusionism. Mandik distinguishes meta-illusionism from illusionism as follows:

“The gist of meta-illusionism is that it rejects phenomenal realism while also insisting that no one is actually under the illusion that there are so-called phenomenal properties” (pp. 140-141).

Mandik goes on to distance his position from illusionism, in reference to Frankish as follows:

“One thing Frankish and I have in common is that neither of us wants to assert that there are any properties instantiated that are referred to or picked out by the phrase ‘phenomenal properties’. One place where Frankish and I part ways is over whether that phrase is sufficiently meaningful for there to be a worthwhile research programme investigating how it comes to seem to people that their experiences instantiate any such properties. Like Frankish, I’m happy with terms like ‘experience’, ‘consciousness’, and ‘conscious experience’ and join Frankish in using what he calls ‘weak’ and functional construals of such terms. But, unlike Frankish, I see no use at all, not even an illusionist one, for the term ‘phenomenal’ and its ilk. The term ‘phenomenal’, as used in contemporary philosophy of mind, is a technical term. I am aware of no non-technical English word or phrase that is accepted as its direct analogue. Unlike technical terms in maths and physics, which are introduced with explicit definitions, ‘phenomenal’ has no such definition. What we find instead of an explicit definition are other technical terms treated as interchangeable synonyms. Frankish follows common practice in philosophy of mind when he treats ‘phenomenal’ as interchangeable with, for instance, ‘qualitative’ or, in scare-quotes, ‘“feely”’. (p. 141)

I can’t quote the whole article (though it’s short), but he concludes this point by stating that:

“We have then, in place of an explicit definition of ‘phenomenal properties’, a circular chain of interchangeable technical terms — a chain with very few links, and little to relate those links to nontechnical terminology. The circle, then, is vicious. I’m sceptical that any properties seem ‘phenomenal’ to anyone because this vicious circle gives me very little idea what seeming ‘phenomenal’ would be.” (p. 142)

Mandik is not so sure he wants to endorse meta-illusionism, since this might turn on concerns about what it means for something to be an illusion, and because he’s reluctant to state that illusionists are themselves subject to an illusion. What he proposes instead is qualia quietism, the view that:

“the terms ‘qualia’, ‘phenomenal properties’, etc. lack sufficient content for anything informative to be said in either affirming or denying their existence. Affirming the existence of what? Denying the existence of what? Maintaining as illusory a representation of what? No comment. No comment. No comment” (p. 148)

This is much closer to what I think than illusionism proper. So, in addition to denying that there are qualia, or phenomenal properties, or whatever other set of terminology is used to characterize some putative set of special properties that spell trouble for those of us ill-disposed to believe in such things, I also deny that it seems this way to nonphilosophers. 

My entire academic career has centered on critiquing work in experimental philosophy, and close scrutiny of this and related articles might reveal what I take to be significant methodological problems. Nevertheless, insofar as research has been conducted on whether nonphilosophers think they have phenomenal properties, or think about consciousness in the same way philosophers do, at least some of the results indicate that they may not. See, for instance, Sytsma & Machery (2010):

Abstract: “Do philosophers and ordinary people conceive of subjective experience in the same way? In this article, we argue that they do not and that the philosophical concept of phenomenal consciousness does not coincide with the folk conception. We first offer experimental support for the hypothesis that philosophers and ordinary people conceive of subjective experience in markedly different ways. We then explore experimentally the folk conception, proposing that for the folk, subjective experience is closely linked to valence. We conclude by considering the implications of our findings for a central issue in the philosophy of mind, the hard problem of consciousness.”

I doubt this one study is definitive evidence one way or the other. What I will say, though, is that whether people think of consciousness the way philosophers do is an empirical question. I suspect they don’t, and absent any good reasons to think that they do, I’m not inclined to accept without argument that they do.

Trying to attack qualia from every possible angle is rather self-defeating. For instance, if you literally don't know what "qualia" means, you can't report that you have none. And if no one even seems to have qualia, there is nothing for Illusionism to do. And so on.

I disagree. You can claim to both not know what something means, and claim to not have the thing in question. 

In some cases, you might not know what something means because you’re ignorant of what is meant by the concept in question. For instance, someone might use the term “zown zair” to refer to brown hair. I might not know this, even if I do have brown hair. In that case, I would not know what they mean, even though I do have brown hair. It would be a mistake for me to think that, because I don’t know what they mean, I don’t have “zown zair.” And it would be foolish to insist both that “zown zair” is false, and that “zown zair” is meaningless. I would simply have failed to find out what they were referring to with the term.

But this is not the case with qualia. I am not merely claiming that I don’t understand the concept. I am claiming that nobody understands the concept, because it is fundamentally confused and meaningless.

One is especially justified in concluding this when one has been responding to a host of people, over an extended period of time, who are incapable of explaining the putative concept in a way that isn’t circular or vacuous.

In the course of an exchange, people may employ a concept. They might say that, e.g. some objects have the property A. Yet when asked to explain what A is, they are unable to do so, or they provide unsatisfactory attempts. For instance, they might point to several objects, and say “all these objects have property A.” This is what was done earlier in this thread: I was given examples, as though this was independently helpful in understanding the concept. It’s not. If I pointed to a truck, a flock of geese, and a math textbook and said “these all have property A,” you wouldn’t be much closer to knowing what I was talking about. In other cases, they might use metaphors. But the metaphors may be unilluminating. In still other cases, they might appeal to other terms or concepts. Yet these terms or concepts might themselves be obscure or poorly defined, and if one asks for clarification, one begins the journey through an endless loop of mutual interdefinitions that never get you anywhere.

In such cases, it can become apparent that a person’s concepts are circular and self-referential, and don’t really describe anything about the way the world is. They might define A in terms of B, B in terms of C, and C in terms of A. And they might insist that A is a property we all have.

When numerous people all claim that we have property A, but they cannot define it, one may reasonably wonder whether all of these people are confused or mistaken. That is, one might conclude that property A is a pseudoconcept, something vague and meaningless.

In such cases, I am fine saying both that 

(a) I don’t have property A

(b) I don’t know what people referring to property A are talking about

I can believe that (a), because it’s meaningless. I don’t have meaningless properties. And I can conclude that (b), because it’s meaningless. I can’t understand a meaningless concept, because there isn’t anything to understand.

Maybe that’s an awkward way of framing why one would reject circular concepts that ascribe meaningless properties to people, in which case I’d be happy to revise the way I frame my rejection of qualia.

But then, why insist that you are right? If you have something like colour blindness, then why insist that everyone else is deluded when they report colours?

There are very good reasons to think people can see colors, and one would have such reasons even if they were colorblind. We can point to the physical mechanisms involved in color detection, the properties of light, and so on. We can point to specific color words in our and other languages, and it would be fairly easy to determine that nonphilosophers can see colors. I don’t think any of these conditions apply to qualia. So, first, there's that.

To emphasize just the last of these,  I don’t think “everyone else” is deluded. I think philosophers are deluded, and that people who encounter the work of these philosophers often become deluded as well. I don’t think the notion of qualia is a psychological mistake so much as it is an intellectual mistake only a subset of people make.

I suspect such mistakes are endemic to philosophy. The same thing has occurred, to an alarming extent, in contemporary metaethics. Moral realists frequently invoke the notion of decisive or external reasons, irreducible normativity, categorical imperatives, stance-independent normative and evaluative facts, and so on. I reject all of these concepts as fundamentally confused. And yet philosophers like Parfit, Huemer, Cuneo, and others have not only tangled themselves into knots of confusion, their work has trickled out into the broader culture. I routinely encounter people who have come across their work claiming to “have” concepts that they are incapable of expressing. And these philosophers, when pressed, will fall back on claiming that the concepts in question are “brute” or “primitive” or “unanalyzable,” which is to say, they can’t give an account of them, and don’t think that they need to. Maybe they do "have" these concepts, but since I am very confident we can explain everything there is to know about the way the world is without invoking them, I suspect they're vacuous nonsense, and that these philosophers are uniformly confused.

And, as with the notion of qualia, philosophers have for a long time presumed that ordinary people tend to be moral realists (see e.g. Sinclair, 2012). My own academic work specifically focuses on this question. And like the question of what people think about consciousness, this, too, is an empirical question. So far, little empirical evidence supports the conclusion that ordinary people tend to be moral realists, or at least that they tend to be consistently and uniformly committed to some kind of moral realism. By and large, they struggle to understand what they are being asked (Bush & Moss, 2020). I suspect, instead, that something like Gill’s (2009) indeterminacy-variability thesis is much more likely: that people have variable but (I suspect mostly) indeterminate metaethical standards.

The same may turn out to be the case for the only other issue I looked into: free will. This has led me, in my own work, to point towards the broader possibility that many of the positions philosophers purport to be intuitive, and that they claim are widespread among nonphilosophers, simply aren’t. Rather, I suspect that philosophers are over-intellectualizing some initial pool of considerations, then generating theories that are neither implicitly nor explicitly part of the way ordinary people speak or think.

I don’t think this is a situation where I am color blind, while others have color vision. Rather, it’s more like recognizing that many of the people around you are subject to a collective, and contagious, hallucination. So I suspect, instead, that I have come to recognize over time that academic philosophy has played an alarming role in duping large numbers of people into a wide range of confusions, then duped them further by convincing them that these confusions are shared by nonphilosophers.

 

References

Bush, L. S., & Moss, D. (2020). Misunderstanding metaethics: Difficulties measuring folk objectivism and relativism. Diametros, 17(64), 6-21.

Frankish, K. (2016). Illusionism as a theory of consciousness. Journal of Consciousness Studies, 23(11-12), 11-39.

Gill, M. B. (2009). Indeterminacy and variability in meta-ethics. Philosophical Studies, 145(2), 215-234.

Mandik, P. (2016). Meta-illusionism and qualia quietism. Journal of Consciousness Studies, 23(11-12), 140-148.

Sinclair, N. (2012). Moral realism, face-values and presumptions. Analytic Philosophy, 53(2), 158-179.


 

comment by Richard_Kennaway · 2021-10-31T22:12:19.723Z · LW(p) · GW(p)

When you sit alone in an empty room, do you have a sense of your own presence, your own self? Can you be aware, not only of your sensations, but of the sensation of having those sensations? Can you have thoughts, and be aware of having those thoughts? And be aware of having these awarenesses?

My answer to each of these questions is "yes".

But for you, do these questions fail to point to anything in your experience?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-11-01T02:51:31.587Z · LW(p) · GW(p)

Can you be aware, not only of your sensations, but of the sensation of having those sensations?


I'm not sure. I have sensations, but I don't know what a sensation of a sensation would be. 

Can you have thoughts, and be aware of having those thoughts?

Sure, but that just sounds like metacognition, and that doesn't strike me as being identical with or indicative of having qualia. I can know that I know things, for instance. 

And be aware of having these awarenesses?

I would describe this as third-order metacognition, or recursive cognition, or something like that. And yeah, I can do that. I can think that Sam thinks that I think that he lied, for instance. Or I can know that my leg hurts and then think about the fact that I know that my leg hurts.

comment by J Bostock (Jemist) · 2021-10-30T21:20:05.454Z · LW(p) · GW(p)

Having now had a lot of different conversations on consciousness, I'm coming to the slightly disturbing belief that this might be the case. I have no idea what this implies for any of my downstream-of-consciousness views.

comment by Lance Bush (lance-bush) · 2021-10-30T21:48:30.742Z · LW(p) · GW(p)

Like, Lance, do you not feel like you experience that things seem ways?

I don't know what that means, so I'm not sure. What would it mean for something to seem a certain way?
 

Or just that they don't seem to be ways in ways that seem robustly meaningful or something?

I don't think it's this. It's more that when people try to push me to have qualia intuitions, I can introspect, report on the contents of my mental states, and then they want me to locate something extra. But there never is anything extra, and they can never explain what they're talking about, other than to use examples that don't help me at all, or metaphors that I don't understand. Nobody seems capable of directly explaining what they mean. And when pressed, they insist that the concept in question is "unanalyzable" or inexplicable or otherwise maintain that they cannot explain it. 

Despite his fame, the majority of students I encountered in Dennett's courses do not accept his views at all, and take qualia quite seriously. I had conversations that would last well over an hour where I would have one or more of them try to get me to grok what they're talking about, and they never succeeded. I've had people make the following kinds of claims:

(1) I am pretending to not get it so that I can signal my intellectual unconventionality.

(2) I do get it, but I don't realize that I get it.

(3) I may be neurologically atypical.

(4) I am too "caught in the grip" of a philosophical theory, and this has rendered me unable to get it.

One or more of these could be true, but I'm not sure how I'd find out, or what I might do about it if I did. But I am strangely drawn to a much more disturbing possibility, one that an outside view would suggest is pretty unlikely:

(5) All of these people are confused, qualia is a pseudoconcept, and the whole discussion predicated on it is fundamentally misguided.

I find myself drawn to this view, in spite of it entailing that a majority of people in academic philosophy, or who encounter it, are deeply mistaken.

I should note, though, that I specialize in metaethics in particular. Most moral philosophers are moral realists (about 60%) and I consider every version of moral realism I'm familiar with to be obviously confused, mistaken, or trivial in ways so transparent that I do think I am justified in thinking that, on this particular issue, most moral philosophers really are mistaken. 

Given my confidence about moral realism, I'm not at all convinced that philosophers generally have things well-sorted on consciousness.
 

Replies from: MichaelStJules, Raemon
comment by MichaelStJules · 2021-10-31T23:26:15.704Z · LW(p) · GW(p)

It's more that when people try to push me to have qualia intuitions, I can introspect, report on the contents of my mental states, and then they want me to locate something extra.

 

Are they expecting qualia to be more than a mental state? If you're reporting the contents of your mental states, isn't that already enough? I'm not sure what extra there should be for qualia. Objects you touch can feel hot to you, and that's exactly what you'd be reporting. Or would you say something like "I know it's hot, but I don't feel it's hot"? How would you know it's hot but not feel it's hot, if your only information came from touching it? Where does the knowledge come from? Are you saying that what you're reporting is only the verbal inner thought you had that it's hot, and that happened without any conscious mental trigger?

If it's only the verbal thought, on what basis would you believe that it's actually hot? The verbal thought alone? (Suppose it's also not hot enough to trigger a reflexive response.)

Doesn't your inner monologue also sound like something? (FWIW, I think mine has one pitch and one volume, and I'm not sure it sounds like anyone's voice in particular (even my own). It has my accent, or whatever accent I mimic.)

More generally, the contents of your mental states are richer than the ones you report on symbolically (verbally or otherwise) to yourself or others, right? Like you notice more details than you talk to yourself about in the moment, e.g. individual notes in songs, sounds, details in images, etc. Isn't this perceptual richness what people mean by qualia? I don't mean to say that it's richer than your attention, but you can attend to individual details without talking about them.

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-11-01T02:45:26.598Z · LW(p) · GW(p)

I don't think I can replicate exactly the kinds of ways people framed the questions. But they might do something like this: they'd show me a red object. They'd ask me "What color is this?" I say red. Then they'd try to extract from me an appreciation for the red "being a certain way" independent of, e.g., my disposition to identify the object as red, or my attitudes about red as a color, and so on. Nothing about "seeing red" indicates to me that there is a "what it's like" to seeing red. I am simply ... seeing red. Like, I can report that fact, and talk about it, and say things like "it isn't blue" and "it is the same color as a typical apple" and such, but there's nothing else. There's no "what it's likeness" for me, or, if there is, I'm not able to detect and report on this fact. The most common way people will frame this is to try to get me to agree that the red has a certain "redness" to it. That chocolate is "chocolatey" and so on.

I can be in an entire room of people insisting that red has the property of "redness" and that chocolate is "chocolatey" and so on, and they all nod and agree that our experiences have these intrinsic what-it's-likeness properties. This seems to be what people are talking about when they talk about qualia. To me, this makes no sense at all. It's like saying seven has the property of "sevenness." That seems vacuous to me.

I can look at something like Dennett's account: that people report experiences as having some kind of intrinsic nonrelational properties that are ineffable and immediately apprehensible. I can understand all those words in combination, but I don't see how anyone could access such a thing (if that's what qualia are supposed to be), and I don't think I do. 

It may be that I am something akin to a native functionalist. I don't know. But part of the reason I was drawn to Dennett's views is that they are literally the only views that have ever made any sense to me. Everything else seems like gibberish.

Or would you say something like "I know it's hot, but I don't feel it's hot"?

Well, I would cash out "feeling hot" in functional terms: that I feel a desire to move my hand away from the object, that I can distinguish it from something cold or at least not hot, and so on. There doesn't seem to me to be anything else to touching a hot thing than its relational properties and the functional role it plays in relation to my behavior and the rest of my thoughts. What else would there be than this? It does seem to me that people who think there are qualia think there's something else. They certainly seem insistent that there is after I describe my experience.

Are you saying that what you're reporting is only the verbal inner thought you had that it's hot, and that happened without any conscious mental trigger?

No, I think I have a conscious mental trigger, and I can and do say things like "that feels hot." I respond to hot things in normal ways, can report on those responses, and so on. I can certainly distinguish hot from cold without having to say anything, but I'm not sure what else you might be going for, and all of that seems like something you could get a robot to do that I don't think anyone would say "has qualia." But this is a very superficial pass at everything that would be going on if I touched something hot and reacted to it. So, it might be something we'd need to dig into more.

Doesn't your inner monologue also sound like something? 

Nobody ever asked me that. That's an awesome question. I think that no, it does not sound like anything. It's in English, and it's "my voice," but it doesn't "sound like" my actual speaking voice.

More generally, the contents of your mental states are richer than the ones you report on symbolically (verbally or otherwise) to yourself or others, right? 

Yes. 

Isn't this perceptual richness what people mean by qualia?

I don't think that it is. It sounds a bit like you're gesturing towards Block's notion of access consciousness. I'm not sure, though.

Replies from: MichaelStJules
comment by MichaelStJules · 2021-11-01T04:12:14.541Z · LW(p) · GW(p)

Ok, I think I get the disagreement now.

I can be in an entire room of people insisting that red has the property of "redness" and that chocolate is "chocolately" and so on, and they all nod and agree that our experiences have these intrinsic what-its-likeness properties. This seems to be what people are talking about when they talk about qualia. To me, this makes no sense at all. It's like saying seven has the property of "sevenness." That seems vacuous to me.

Hmm, I'm not sure it's vacuous, since it's not like they're applying "redness" to only one thing; redness is a common feature of many different experiences. 14 could have "sevenness", too.

Maybe we can think of examples of different experiences where it's hard to come up with distinguishing functional properties, but you can still distinguish the experiences? Maybe the following questions will seem silly/naive, since I'm not used to thinking in functional terms. Feel free to only answer the ones you think are useful, since they're somewhat repetitive.

  1. What are the differences in functional properties between two slightly different shades of red that you can only tell apart when you see them next to each other? Or maybe there are none when separate, but seeing them next to each other just introduces another functional property? What functional property would this be?
    1. What if you can tell them apart when they aren't next to each other? How are you doing so?
  2. How about higher and lower pitched sounds? Say the same note an octave apart?
  3. Say you touch something a few degrees above room temperature, and you can tell that it's hotter, but it doesn't invoke any particular desire. How can you tell it's hotter? How does this cash out in terms of functional properties?

Well, I would cash out it "feeling hot" in functional terms. That I feel a desire to move my hand away from the object, that I can distinguish it from something cold or at least not hot, and so on.

I'm guessing you would further define these in functional terms, since they too seem like the kinds of things people could insist qualia are involved in (desire, distinguishing). What would be basic functional properties that you wouldn't cash out further? Do you have to go all the way down to physics, or are there higher-level basic functional properties? I think if you go all the way down to physics, this is below our awareness and what our brain actually has concepts of; it's just implemented in them.

If you were experiencing sweetness in taste (or some other sensation) for the first time, what would be its functional properties that distinguish it from other things? Could this be before you formed attitudes about it, or are the attitudes simultaneous and built in as a necessary component of the experience?

I think that no, it does not sound like anything. It's in English, and it's "my voice," but it doesn't "sound like" my actual speaking voice.

What functional properties would you point to that your experiences of your actual speaking voice have, but your experiences of your inner voice don't? And that can't be controlled for? E.g. what if you were actually speaking out loud, but couldn't hear your own voice, and "heard" your inner voice instead? How does this differ from actually hearing your voice?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-11-01T14:24:42.490Z · LW(p) · GW(p)

Hmm, I'm not sure it's vacuous, since it's not like they're applying "redness" to only one thing; redness is a common feature of many different experiences. 14 could have "sevenness", too.

 

One can apply a vacuous term to multiple things, so pointing out that you could apply the term to more than one thing does not seem to me to indicate that it isn't vacuous. I could even stipulate a concept that is vacuous by design: "smorf", which doesn't mean anything, and then I can say something like "potatoes are smorf." 
 

Maybe we can think of examples of different experiences where it's hard to come up with distinguishing functional properties, but you can still distinguish the experiences? 

The ability to distinguish the experiences in a way you can report on would be at least one functional difference, so this doesn't seem to me like it would demonstrate much of anything. 

Some of the questions you ask seem a bit obscure, like how I can tell something is hotter. Are you asking for a physiological explanation? Or the cognitive mechanisms involved? If so, I don't know, but I'm not sure what that would have to do with qualia. But maybe I'm not understanding the question, and I'm not sure how that could get me any closer to understanding what qualia are supposed to be.

What would be basic functional properties that you wouldn't cash out further?

I don't know. Likewise for most of the questions you ask. "What are the functional properties of X?" questions are very strange to me. I am not quite sure what I am being asked, or how I might answer, or if I'm supposed to be able to answer. Maybe you could help me out here, because I'd like to answer any questions I'm capable of answering, but I'm not sure what to do with these.

 

 

 



 

Replies from: MichaelStJules
comment by MichaelStJules · 2021-11-01T19:21:15.593Z · LW(p) · GW(p)

The ability to distinguish the experiences in a way you can report on would be at least one functional difference, so this doesn't seem to me like it would demonstrate much of anything.

 

It is a functional difference, but there must be some further (conscious?) reason why we can do so, right? Where I want to go with this is that you can distinguish them because they feel different, and that's what qualia refers to. This "feeling" in qualia, too, could be a functional property. The causal diagram I'm imagining is something like

Unconscious processes (+unconscious functional properties) -> ("Qualia", Other conscious functional properties) -> More conscious functional properties

And I'm trying to control for "Other conscious functional properties" with my questions, so that the reason you can distinguish two particular experiences goes through "Qualia". You can tell two musical notes apart because they feel (sound) different to you.

I don't know. Likewise for most of the questions you ask. "What are the functional properties of X?" questions are very strange to me. I am not quite sure what I am being asked, or how I might answer, or if I'm supposed to be able to answer. Maybe you could help me out here, because I'd like to answer any questions I'm capable of answering, but I'm not sure what to do with these.

I'm not sure if what I wrote above will help clarify. You also wrote:

Well, I would cash out "feeling hot" in functional terms: that I feel a desire to move my hand away from the object, that I can distinguish it from something cold or at least not hot, and so on. There doesn't seem to me to be anything else to touching a hot thing than its relational properties and the functional role it plays in relation to my behavior and the rest of my thoughts.

How would you cash out "desire to move my hand away from the object" and "distinguish it from something cold or at least not hot" in functional terms? To me, both of these explanations could also pass through "qualia". Doesn't desire feel like something, too? I'm asking you to cash out desire and distinguishing in functional terms, too, and if we keep doing this, do "qualia" come up somewhere?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-11-01T21:12:47.955Z · LW(p) · GW(p)

It is a functional difference, but there must be some further (conscious?) reason why we can do so, right?


Do you mean like a causal reason? If so then of course, but that wouldn’t have anything to do with qualia.

Where I want to go with this is that you can distinguish them because they feel different, and that's what qualia refers to.

I have access to the contents of my mental states, and that includes information that allows me to identify and draw distinctions between things, categorize things, label things, and so on. A “feeling” can be cashed out in such terms, and once it is, there’s nothing else to explain, and no other properties or phenomena to refer to. 

I don’t know what work “qualia” is doing here. Of course things feel various ways to me, and of course they feel different. Touching a hot stove doesn’t feel the same as touching a block of ice. 

But I could get a robot that has no qualia but has temperature-detecting mechanisms to say something like "I have detected heat in this location and cold in this location and they are different." I don't think my ability to distinguish between things is because they "feel" different; rather, I'd say that insofar as I can report that they "feel different" it's because I can report differences between them. I think the invocation of qualia here is superfluous and may get the explanation backwards: I don't distinguish things because they feel different; things "feel different" if and only if we can distinguish differences between them.
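As a minimal sketch of that point (hypothetical toy code, not anything anyone has actually built or proposed): a trivial program can register two readings, distinguish them, and report that they are "different" in the purely functional, reporting sense, with nothing further going on.

```python
# Hypothetical sketch: a "robot" that detects and reports temperature differences.
# It distinguishes and labels readings purely by comparing numbers; nothing beyond
# this functional machinery is involved.

def classify(temp_c: float) -> str:
    """Map a raw temperature reading onto a discrete label."""
    if temp_c >= 45.0:
        return "hot"
    if temp_c <= 10.0:
        return "cold"
    return "neutral"

def report(reading_a: float, reading_b: float) -> str:
    label_a, label_b = classify(reading_a), classify(reading_b)
    if label_a != label_b:
        return (f"I detect {label_a} at location A and {label_b} at location B; "
                "they are different.")
    return f"Both locations read as {label_a}; I detect no difference."

print(report(60.0, 2.0))  # reports a difference, with no inner experience implied
```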


This "feeling" in qualia, too, could be a functional property.

Then I’m even more puzzled by what you think qualia are. Qualia are, I take it, ineffable, intrinsic qualitative properties of experiences, though depending on what someone is talking about they might include more or less features than these. I’m not sure qualia can be “functional” in the relevant sense. 

How would you cash out "desire to move my hand away from the object" and "distinguish it from something cold or at least not hot" in functional terms?

I don't know. I just want to know what qualia are. Either people can explain what qualia are or they can't. My inability to explain something wouldn't justify saying "therefore, qualia," so I'm not sure what the purpose of the questions is. I'm sure you don't intend to invoke "qualia of the gaps," and presume that qualia must figure into any situation in which I, personally, am not able to answer a question you've asked.

I'm asking you to cash out desire and distinguishing in functional terms, too, and if we keep doing this, do "qualia" come up somewhere?

I don’t know what you think qualia are, so I wouldn’t be able to tell you. People keep invoking this concept, but nobody seems able to offer a substantive explanation of what it is, and why I should think I or anyone else has such things, or why such things would be important or necessary for anything in particular, and so on.

I hope I'm not coming off as stubborn here. I'm very much interested in answering any questions I'm able to answer, I'm just not sure precisely what you're asking me or how I might go about answering it. "What are the functional properties of X?" doesn't strike me as a very clear question.

Replies from: MichaelStJules
comment by MichaelStJules · 2021-11-02T05:03:51.040Z · LW(p) · GW(p)

But I could get a robot that has no qualia but has temperature-detecting mechanisms to say something like "I have detected heat in this location and cold in this location and they are different." I don't think my ability to distinguish between things is because they "feel" different; rather, I'd say that insofar as I can report that they "feel different" it's because I can report differences between them. I think the invocation of qualia here is superfluous and may get the explanation backwards: I don't distinguish things because they feel different; things "feel different" if and only if we can distinguish differences between them.

(...)

I have access to the contents of my mental states, and that includes information that allows me to identify and draw distinctions between things, categorize things, label things, and so on. A “feeling” can be cashed out in such terms, and once it is, there’s nothing else to explain, and no other properties or phenomena to refer to. 

What's the nature of these differences and this information, though? What exactly are you using to distinguish differences? Isn't it experienced? The information isn't itself a set of "symbols" (e.g. words, read or heard), or maybe sometimes it is, but those symbols aren't then made up of further symbols. Things don't feel hot or cold to you because there are different symbols assigned to them that you read off or hear, or to the extent that they are, you're experiencing those symbols as being read or heard, and that experience is not further composed of symbols.

 

Then I’m even more puzzled by what you think qualia are. Qualia are, I take it, ineffable, intrinsic qualitative properties of experiences, though depending on what someone is talking about they might include more or less features than these. I’m not sure qualia can be “functional” in the relevant sense. 

I might just be confused here. I was thinking that the illusion of ineffability, "seemingness", could be a functional property, and that what you're using to distinguish experiences are parts of these illusions. Maybe that doesn't make sense.

I don't know. I just want to know what qualia are. Either people can explain what qualia are or they can't. My inability to explain something wouldn't justify saying "therefore, qualia," so I'm not sure what the purpose of the questions is. I'm sure you don't intend to invoke "qualia of the gaps," and presume that qualia must figure into any situation in which I, personally, am not able to answer a question you've asked.

I might have been switching back and forth between something like "qualia of the gaps" and a more principled argument, but I'll try to explain the more principled one clearly here:

For each of the functional properties you've pointed out so far, I would say they "feel like something". You could keep naming things that "feel like something" (desires, attitudes, distinguishing, labelling or categorizing), and then explaining those further in terms of other things that "feel like something", and so on. Of course, presumably some functional properties don't feel like anything, but to the extent that they don't, I'd claim you're not aware of them, since everything you're aware of feels like something. If you keep explaining further, eventually you have to hit an explanation that can't be further explained even in principle by further facts you're conscious of (eventually the reason is unconscious, since you're only conscious of finitely many things at any moment). I can't imagine what this final conscious explanation could be like if it doesn't involve something like qualia, something just seeming some way. So, it's not about there being gaps in any particular explanation you try to give in practice, it's about there necessarily always being a gap. What is a solution supposed to look like?

Of course, this could just be a failure of my own imagination.

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-11-02T13:45:47.754Z · LW(p) · GW(p)

I don’t know the answer to these questions. I’m not sure the questions are sufficiently well-specified to be answerable, but I suspect if you rephrased them or we worked towards getting me to understand the questions, I’d just say “I don’t know.” But my not knowing how to answer a question does not give me any more insight into what you mean when you refer to qualia, or what it means to say that things “feel like something.”

I don’t think it means anything to say things “feel like something.” Every conversation I’ve had about this (and I’ve had a lot of them) goes in circles: what are qualia? How things feel. What does that mean? It’s just “what it’s like” to experience them. What does that mean? They just are a certain way, and so on. This is just an endless circle of obscure jargon and self-referential terms, all mutually interdefining one another.

I don't notice or experience any sense of a gap. I don't know what gap others are referring to. It sounds like people think there is some characteristic or property their experiences have that can't be explained. But this seems to me like it could be a kind of inferential error, the way people may have once insisted that there's something intrinsic about living things that distinguishes them from nonliving things, that living things just couldn't be composed of conventional matter arranged in certain ways, that they just obviously had something else, some je ne sais quoi.

I suspect if I found myself feeling like there was some kind of inexplicable essence, or je ne sais quoi, to some phenomena, I'd be more inclined to think I was confused than that there really was je ne sais quoiness. I'm not surprised philosophers go in for thinking there are qualia, but I'm surprised that people in the lesswrong community do. Why not think "I'm confused and probably wrong" as a first pass? Why are many people so confident that there is what, as far as I can tell, amounts to something that may be fundamentally incomprehensible, even magical? That is, it's one thing to purport to have the concept of qualia; it's another to endorse it. And it sounds like you not only claim to grok the notion of qualia, but endorse it.

Replies from: main_gi
comment by main_gi · 2022-02-09T02:51:45.229Z · LW(p) · GW(p)

Hi, I was doing research on consciousness-related discussions and realize this thread is 3 months old, but I'd just like to reply to a few things you mentioned.

I know for certain that consciousness and qualia exist. I used to 'fall for' arguments that defined consciousness/qualia/free will as delusions or illusions because they were unobservable. Then, years later, I finally understood that I had been engaged in some doublethink, and that these words actually were referring to something very simple and clear within my internal experience. I had believed that the words were "meaningless" philosophy/morality words - for me, the lack of understanding WAS the 'gap', and they were referring to simple concepts all along.

The confusion of 'defining' these words even within philosophy creates lots of synonyms and jargon, though. I have gotten my definitions from the simplicity of what the concepts refer to, so I am almost certain I have not invented new complicated ways to refer to the concepts (as that would make communicating with others unnecessarily difficult and subjective).

These words refer to something that does indeed seem to be circular, because they all try to refer to something beyond the physical. I believe the people trying to define these words as something that relates to only physical things are the ones confused.

Why not think “I’m confused and probably wrong” as a first pass?

There is nothing confusing about what the concept is that the words are trying to communicate, but it's impossible to get across because they are trying to describe something that can't be replicated.

Part of the issue here is to avoid thinking of consciousness either as a discrete capacity one has or doesn't have, or as existing on a continuum, such that one could have "more" or "less" of it.

I'm not sure if you're supporting or opposing this idea, but I understand consciousness as the sum of all of someone's metaphysical experiences. Someone could have more or fewer senses and abilities, but it is metaphorical talk to say someone is "less conscious" because they are blind and deaf.

The relevance of a metaphysical consciousness doesn't come from philosophical mass mistakenness and navel-gazing. It's because it actually exists (but again, it's individual, so I am never certain whether it exists for anyone else).

I think the other replier did not answer the "redness"/"chocolateyness" question as I would have liked. Colors are the most common example because they seem to be the most 'pure' and consistent types of qualia. Are you familiar with the color-swapping thought questions, like "if your senses of red and blue were switched, it would be a notably different experience, not just a matter of some words being used to refer to different concepts", or "if you never saw green your entire life, even if you read about green objects, then actually saw green, you would gain new information"? Have they ever resonated with you, or did they just seem confusing?

Its persistence could be due to quirks in the way human cognition works. If so, it may be difficult to dispel certain kinds of introspective illusions.

Yeah, it's possible to imagine a gap that isn't there (I mean, you've heard about people believing in spirits and magic and all that). Free will actually could be an illusion, although it strongly doesn't feel like it. I know that from your perspective, unless you were extraordinarily confused and did have qualia, it seems you would still believe that other people were under illusions rather than experiencing something special.

If many individual people talked about feeling these experiences even without being excessively primed with other people's philosophical discussions, would it make you 'believe in qualia', if you didn't have it?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2022-04-02T15:54:03.847Z · LW(p) · GW(p)

If many individual people talked about feeling these experiences even without being excessively primed with other people's philosophical discussions, would it make you 'believe in qualia', if you didn't have it?

 

No. Consider religion and belief in the supernatural. Due to the existence of pareidolia and other psychological phenomena, people may exhibit a shared set of psychological mechanisms that cause them to mistakenly infer the presence of nonphysical or supernatural entities where there are none. While I believe culture and experience play a significant role in shaping the spread and persistence of supernatural beliefs, such beliefs are built on the foundations of psychological systems people share in common. Even if culture and learning were wiped out, due to the nature of human psychology it is likely that such mistakes would emerge yet again. People would once again see faces in the clouds and think that there's someone up there.

So too, I suspect, people would fall into the same phenomenological quicksand with respect to many of the problems in philosophy. Even if we stopped teaching philosophy and all discussion of qualia vanished, I would not be surprised to find the notion emerge once again. People are not good at making inferences about what the world is like based on their phenomenology. I mean no disrespect, but your account sounds far more like the testimony of a religious convert than a robust philosophical argument for the existence of qualia. Take this blunt remark:

I know for certain that consciousness and qualia exist.

I've spent a lot of time discussing religion with theists, and one could readily swap out "consciousness and qualia" for "Jesus" or "God": "I know for certain that [God] exist[s]." I don't know for certain that qualia don't exist. I don't know for certain that God doesn't exist. I don't generally make a point of telling others that I know something "for certain," and if I did, I think I would appreciate if someone else suggested to me, hopefully kindly, that perhaps my declaration that I know something for certain serves more to convince myself than to convince others. 

I take the hallmark of a good idea to be its utility. The notion of qualia has no value. On the contrary, I see it as a product of confusions and mistakes born of overconfidence in our intuitions and phenomenology and of the poor methods of academic philosophy, which serve to anoint such errors with the superficial appearance that they are backed by intellectual rigor.

I'd believe in qualia if and when the concept appears meaningful and when it can figure into our best scientific explanations of what the world is like. That is, I'd accept it if it were a useful feature of our explanations, one that allowed us to make more accurate predictions than alternative models that didn't posit qualia.

I take you'd likely disagree, and that's totally fine with me. But if we survive this century and colonize the stars, it will be due to knowledge and discoveries that pay their way by allowing us to understand and anticipate the world around us, and augment it to our ends. It will not be due to the notion of qualia, which will be little more than a footnote buried deep in the pages of some galactic empire's archives.
 

Replies from: main_gi, TAG
comment by main_gi · 2022-04-05T23:25:16.178Z · LW(p) · GW(p)

Hey, glad you saw my post and all that. Yes, I know about religion and people having unexplainable supernatural experiences. I don't have anything like that, and I think people who daydreamed up a supernatural experience shouldn't have literal certainty, just high confidence. (You'd also expect high inconsistency among people who recount supernatural events, which unfortunately is probably true for qualia reports currently too, due to similar ways in which society spreads beliefs.)

There is irony in using 'convert' when I was unconverted from believing these things by philosophical confusion, and then later untangled myself. Yes, you could go swap out any 'certainty' claim with any other words and mock the result. Sure, I guess no one can say 'certain' about anything.

"I think I would appreciate if someone else suggested to me, hopefully kindly, that perhaps my declaration that I know something for certain serves more to convince myself than to convince others." My use of certainty is about honestly communicating strength of belief etc., not being hyperbolic or exaggerating. Yes I understand that many people exaggerate and lie about 'certain' things all the time so I trust other people's "for certain" claims less. It doesn't mean I should then reduce my own quality of claims to try to cater to the average, that makes no sense. (like, if I said it wasn't certain, wouldn't that be room for you to claim it's a delusion anyway?) Like, the nature of consciousness/qualia is that someone who's conscious/has qualia is never "uncertain" they are conscious (unlike with free will where there isn't that level of certainty).

I think I mentioned it before but it seems perfectly rational if someone who doesn't have qualia is confused by the whole thing. A "robust philosophical argument" isn't possible, only some statistical one. (the same way that, if you didn't understand some music's appeal while a majority of other people did, the response to try to convince you could never be a robust philosophical argument.)

Despite that, I wish to convey that consciousness-related stuff is really about something meaningful and not a religious dream, and that it is very likely possible to make "more accurate predictions", even though the actual topics relating to those predictions are usually really insignificant. (If consciousness had a major role to play in intelligence, for example, the world would still exhibit that when looking at intelligence alone, and there'd likely be other correlations to notice, although you might not be able to draw the connection to consciousness directly.)

It will not be due to the notion of qualia

debating this subject seems ultimately not very relevant to people's actions or prosperity, yes.

, which will be little more than a footnote buried deep in the pages of some galactic empire's archives.

nah

comment by TAG · 2022-04-02T23:42:27.351Z · LW(p) · GW(p)

I take the hallmark of a good idea to be its utility. The notion of qualia has no value

The idea that everything must be useful to explain something else doesn't work unless you have a core of things that need explaining, but are not themselves explanatory posits: basic facts, sometimes called phenomena.

So qualia don't have to sit in the category of things-that-do-explaining, because there is another category of things-that-need-explaining.

Even if we stopped teaching philosophy and all discussion of qualia vanished, I would not be surprised to find the notion emerge once again. People are not good at making inferences about what the world is like based on their phenomenology.

"Phenomena" (literally meaning appearances ) is a near synonym for "qualia". And people aren't good at making inferences from their qualia. People generally and incorrectly assume that colours are objective properties (hence rhe consternation caused , amongst some, by the dress illusion ).

That's called naive realism, and it's scientifically wrong.

According to science, our senses are not an open window on the world that portrays it exactly as it is. Instead, the sensory centres of our brains are connected to the outside world by a complex causal chain, during which information, already limited by our sensory modalities, is filtered and reprocessed in various ways.

So scientific accounts of perception require there to be a way-we-perceive-things...quite possibly, an individual one. Which might as well be called "qualia" as anything else. (Of course, such a scientific quale isn't immaterial by definition. Despite what people keep saying, qualia aren't defined as immaterial.)

I wouldn't expect a theory of colour qualia to re-emerge out of nowhere, because naive realism about colour is so pervasive. On the other hand, no one is naively realistic about tastes, smells, etc. Everyone knows that tastes vary.

comment by Raemon · 2021-10-30T23:13:49.272Z · LW(p) · GW(p)

(I haven't caught up on the entire thread, apologies if this is a repeat)

Assuming "qualia is a misguided pseudoconcept" is true, do you have a sense of why people think it's real? I.e., taking the evidence of "Somehow, people end up saying sentences about how they have a sense of what it is like to perceive things. Why is that? What process would generate people saying words like that?" (This is not meant to be a gotcha, it just seems like a good question to ask.)

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-31T02:56:17.398Z · LW(p) · GW(p)

No worries, it's not a gotcha at all, and I already have some thoughts about this. 

I was more interested in this topic back about seven or eight years ago, when I was actually studying it. I moved on to psychology and metaethics, and haven't been actively reading about this stuff since about 2014.

I'm not sure it'd be ideal to try to dredge all that up, but I can roughly point towards something like Robbins and Jack (2006) as an example of the kind of research I'd employ to develop a type of debunking explanation for qualia intuitions. I am not necessarily claiming their specific account is correct, or rigorous, or sufficient all on its own, but it points to the kind of work cognitive scientists and philosophers could do that is at least in the ballpark.

Roughly, they attempt to offer an empirical explanation for the persistence of the explanatory gap (the problem of accounting for consciousness by appeal to physical, or at least nonconscious, phenomena). Its persistence could be due to quirks in the way human cognition works. If so, it may be difficult to dispel certain kinds of introspective illusions.

Roughly, suppose we have multiple, distinct "mapping systems" that each independently operate to populate their own maps of the territory. Each of these systems evolved and currently functions to facilitate adaptive behavior. However, we may discover that when we go to formulate comprehensive and rigorous theories about how the world is, these maps seem to provide us with conflicting or confusing information.

Suppose one of these mapping systems was a "physical stuff" map. It populates our world with objects, and we have the overwhelming impression that there is "physical stuff" out there, that we can detect using our senses.

But suppose also we have an "important agents that I need to treat well" system, one that detects and highlights certain agents within the world whom it would be important to treat appropriately, a kind of "VIP agency mapping system" that recruits a host of appropriate functional responses: emotional reactions, adopting the intentional stance, cheater-detection systems, and so on.

On reflecting on the first system, we might come to form the view that the external world really is just this stuff described by physics, whatever that is. And that includes the VIP agents we interact with: they're bags of meat! But this butts up against the overwhelming impression that they just couldn't be. They must be more than just bags of meat. They have feelings! We may find ourselves incapable of shaking this impression, no matter how much of a reductionist or naturalist or whatever we might like to be.

What could be going on here is simply the inability of these two mapping systems to adequately talk to one another. We are host to divided minds with balkanized mapping systems, and may find that we simply cannot grok some of the concepts contained in one of our mapping systems in terms of the mapping system in the other. You might call this something like "internal failure to grok." It isn't that, say, I cannot grok some other person's concepts, but that some of the cognitive systems I possess cannot grok each other.

You might call this something like "conceptual incommensurability." And if we're stuck with a cognitive architecture like this, certain intuitions may seem incorrigible, even if we could come up with a good model, based on solid evidence, that would explain why things would seem this way to us, without us having to suppose that it is that way.

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-31T13:45:11.472Z · LW(p) · GW(p)

I forgot to add a reference to the Robbins and Jack citation above. Here it is: 

Robbins, P., & Jack, A. I. (2006). The phenomenal stance. Philosophical studies, 127(1), 59-85.

comment by Lance Bush (lance-bush) · 2021-10-30T21:31:27.926Z · LW(p) · GW(p)

I'm not sure how to answer the first question. I'm sure my introspection revealed all manner of things over the course of years, and I'm also not sure what level of specificity you are going for. I don't want to evade actually reporting on the contents of my mental states, so perhaps a more specific question would help me form a useful response.

I may very well not have even the illusion of phenomenal consciousness, but I'm not sure I am alone in lacking it. While it remains an open empirical question, and I can’t vouch for the methodological rigor of any particular study, there is some empirical research on whether or not nonphilosophers are inclined towards thinking there is a hard problem of consciousness:

https://www.ingentaconnect.com/content/imp/jcs/2021/00000028/f0020003/art00002

It may be that notions of qualia, and the kinds of views that predominate among academic philosophers are outliers that don’t represent how other people think about these issues, if they think about them at all.

comment by J Bostock (Jemist) · 2021-10-29T16:15:35.892Z · LW(p) · GW(p)

You present an excellently written and interesting case here. I agree with the point that self-modelling systems can think in certain ways which are unique and special, and that chickens can't do that.

One reason I identify consciousness with having qualia is that Eliezer specifically does that in the twitter thread. The other is that qualia is generally less ambiguous than terms like consciousness and self-awareness and sentience. The disadvantage is that the concept of qualia is something which is very difficult (and beyond my explaining capabilities) to explain to people who don't know what it means. I choose to take this tradeoff because I find that I, personally, get much more out of discussions about specifically qualia, than any of the related words. Perhaps I'm not taking seriously enough the idea that illusionism will explain why I feel like I'm conscious and not explain why I am conscious.

I also agree that most other existing mainstream views are somewhat poor, but to me this isn't particularly strong positive evidence for Eliezer's views. This is because models of consciousness on the level of detail of Eliezer's are hard to come up with, so there might be many other excellent ones that haven't been found yet. And Eliezer hasn't done (to my knowledge) anything which rules out other arguments on the level of detail of his own.

Basically I think that the reason the best argument we see is Eliezer's is less along the lines of "this is the only computational argument that could be made for consciousness" and more along the lines of "computational arguments for consciousness are really difficult and this is the first one anyone has found".

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-29T22:01:09.067Z · LW(p) · GW(p)

Yudkowsky specifically using the term is a good reason. Thanks for pointing that out, and now I feel a little silly for asking. He says, "I mean qualia, yes." You can't get more blunt than that.

While I agree that qualia is less ambiguous than other terms, I am still not sure it is sufficiently unambiguous. I don’t know what you mean by the term, for instance. Generally, though, I would say that I think consciousness exists, but that qualia do not exist.

I think illusionism does offer an account of consciousness; it’s just that consciousness turns out not to be what some people thought that it was. Personally, I don’t have and apparently have never had qualia intuitions, and thus never struggled with accepting Dennett’s views. This might be unusual, but the only view I ever recall holding on the matter was something like Dennett’s. His views immediately resonated with me and I adopted them the moment I heard them, with something similar to a “wow, this is obviously how it is!” response, and bewilderment that anyone could think otherwise.

I’m glad we agree most alternatives are poor. I do happen to agree that this isn’t especially good evidence against the plausibility of some compelling alternative to illusionism emerging. I definitely think that’s a very real possibility. But I do not think it is going to come out of the intuition-mongering methodology many philosophers rely on. I also agree that this is probably due to the difficulty of coming up with alternative models. Seems like we’re largely in agreement here, in that case.

comment by MichaelStJules · 2021-10-29T20:19:10.537Z · LW(p) · GW(p)

How do you imagine consciousness would work in the moment for humans without inner/internal monologues (and with aphantasia, unable to visualize; some people can do neither)? And in general, for experiences that we don't reflect on using language in the moment, or at most simple expressive language, like "Ow!"?

Replies from: lance-bush
comment by Lance Bush (lance-bush) · 2021-10-29T21:54:21.929Z · LW(p) · GW(p)

The lack of an internal monologue is a distressing question to me. I run a constant inner monologue, and can’t imagine thinking differently. There may be some sense in which people who lack an inner monologue lack certain features of consciousness that others who do have one possess. 

Part of the issue here is to avoid thinking of consciousness either as a discrete capacity one has or doesn't have, or as existing on a continuum, such that one could have "more" or "less" of it. Instead, I think of "consciousness" as a term we use to describe a set of both qualitatively and quantitatively distinct capacities. It'd be a bit like talking about "cooking skills." If someone doesn't know how to use a knife, or start a fire, do they "lack cooking skills"? Well, they lack a particular cooking skill, but there is no single answer as to whether they "lack cooking skills" because cooking skills break down into numerous subskills, each of which may be characterized by its own continuum along which a person could be better or worse. Maybe a person doesn't know how to start a fire, but they can bake amazing cakes if you give them an oven and the right ingredients.

This is why I am wary of saying that animals are “not conscious” and would instead say that whatever their “consciousness” is like, it would be very different from ours, if they lack a self-model and if a self-model is as central to our experiences as I think it is.

As for someone who lacks an inner monologue, I am not sure what to make of these cases. And I'm not sure whether I'd want to say someone without an inner monologue "isn't conscious," as that seems a bit strange. Rather, I think I'd say that they may lack a feature of the kinds of consciousness most of us have, one that strikes me, at first glance, as fairly central and important. But perhaps it isn't. I'd have to think more about that, to consider whether an enculturated construction of a self-model requires an inner monologue. I do think it probably requires exposure to language, at least in practice, for humans (I don't think an AI would have to proceed through the same developmental stages as humans to become conscious; and, of course, in principle you could print out an adult human brain, which could be conscious without ever having been subjected to childhood enculturation).

However, once the relevant concepts and structures have been "downloaded," this may not require a very specific type of phenomenology. Maybe it does, but at the very least, we could point to substantial overlap in many of the functional outputs of people who lack inner monologues with those of people who do have one, outputs we would not observe in animals. People who lack inner monologues can still speak meaningfully about themselves in the past, make plans for the future, talk about themselves as agents operating within the world, employ theory of mind, would probably report that they are conscious, could describe their phenomenal experiences, and so on. In other words, there would be substantial functional overlap in the way they spoke, thought, and behaved, with only a few notable differences in how they describe their phenomenology. At least, I am supposing all this is the case. Maybe they are different in other ways, and if I knew about them, and really thought about this, it might have really disturbing implications. But I doubt that will turn out to be the case.

This reminds me of an idea for a science fiction novel. I don't know where it came from, but I'm not sure I was the first to think of a scenario like this:

Suppose we discovered that some subset of the population definitely did not have conscious experiences, and that the rest of us did. And suppose we had some reliable test for determining who was or was not conscious. It was easy to administer, and we quickly found that our spouses, children, parents, closest friends, and so on, were not conscious at all. Such people were simply automata. There were no lights on inside. In short: they simply had no qualia at all. 

How would society react? What would people do? One could imagine a story like this addressing both interpersonal relationships, and the broader, societal-scale implications of such discoveries. I hope someone can take that idea and run with it, and turn it into something worth reading or watching.

comment by ESRogs · 2021-10-29T23:32:27.099Z · LW(p) · GW(p)

Where I disagree is that we 100% need a separate "information processing" and "inner listener" module.

I didn't understand this part. Do you mean that EY thinks we need these two modules and you don't think that, or the other way around?

(I think this is a generic problem that arises pretty much whenever someone uses this kind of phrasing, saying "Where I disagree is that X". I can't tell if they're saying they believe X and the person they disagree with believes not-X, or the other way around. Sometimes I can tell from context. This time I couldn't.)

comment by MichaelStJules · 2021-10-29T17:55:53.972Z · LW(p) · GW(p)

Some other less theory-heavy approaches to consciousness I find promising:

  1. What do unconscious processes in humans tell us about sentience?, and then see Rethink Priorities' table with evidence for various indicators for different species, with a column for unconscious processing in humans. (Disclaimer: I work at Rethink Priorities.)
  2. The facilitation hypothesis: "Phenomenally conscious perception of a stimulus facilitates, relative to unconscious perception, a cluster of cognitive abilities in relation to that stimulus." This is compatible with most popular theories, and probably Yudkowsky's position, depending on what we decide to include in the "cluster". Summary here.

Another that's worth mentioning, although I don't know what to think of it anymore:

  1. No-report paradigms, "which measure reflexive behaviors correlated with conscious states to provide a window on the phenomenal that is independent of access". Some more discussion here, where it's corrected to a "no-post-perceptual cognition paradigm".

For what it's worth, I'm currently pretty skeptical that we can define consciousness in physical terms in a way that excludes panpsychism without drawing arbitrary lines [EA · GW]. For example, how would you define a "self-model", in terms of basic physical processes? And how accurate does it need to be to count?

comment by Logan Zoellner (logan-zoellner) · 2021-10-29T14:32:54.973Z · LW(p) · GW(p)

The key thing to keep in mind is that EY is a physicalist [LW · GW]. He doesn't think that there is some special consciousness stuff. Instead, consciousness is just what it feels like [LW · GW] to implement an algorithm capable of sophisticated social reasoning. An algorithm is capable of consciousness if and only if it is capable of sophisticated social reasoning, and it is conscious only when it applies that reasoning to itself. This is why EY doesn't think that he himself is conscious when dreaming or in a flow state.

Additionally, EY does not think that chickens engage in sophisticated social games (others may disagree). This is why he is confident that neither GPT-3 (which reflectively predicts text) nor chickens are conscious. His criticism is not specifically against people who think chickens might be conscious, but only of people who think chickens might be conscious but not GPT-3. The implication is that any such theory would imply the existence of non-physical qualia which are possessed by chickens (because they have neurons) but not GPT-3 (because it is a computer program). Such meat-chauvinism is a parochial view which EY considers utterly unscientific.

Consider the types of evidence that might convince EY that chickens (but not GPT-3) are conscious. Assuming his theory is correct, there would have to be evidence that chickens are self-aware and engage in complex social games: for example, if a chicken were to pass the mirror test, or if chickens were observed forming coalitions-of-coalitions.

On the other hand, it would be much more difficult to produce evidence that would convince EY to abandon his current theory of consciousness, since he defines consciousness as "what an algorithm implementing complex social games feels like when reflecting on itself". One possible piece of evidence would be if scientific evidence for the physical existence of qualia were discovered. Suppose, for example, that there was a particle (called perhaps a qualion) that was emitted whenever we experienced a conscious thought, and that this particle could be scientifically studied and measured. If it was found that this particle is emitted both when we self-reflect and when we dream (but not by inanimate or mindless objects), then this could be considered evidence for a physical correlate of consciousness.

Replies from: TAG, adam-shai, Jemist
comment by TAG · 2021-10-29T17:57:51.651Z · LW(p) · GW(p)
  1. The key thing to keep in mind is that EY is a physicalist. He doesn't think that there is some special consciousness stuff.
  1. Instead, consciousness is just what it feels like to implement an algorithm capable of sophisticated social reasoning.

The theory that consciousness is just what it feels like to be a sophisticated information processor has a number of attractive features, but it is not a physicalist theory, in every sense of "physicalist". In particular, physics does not predict that anything feels like anything from the inside, so that would need to be an additional posit.

Relatedly, his theory is in no way a reduction of consciousness to physics (or computation). A reductive explanation of consciousness would allow you to predict specific subjective states from specific brain states (as in Mary's Room); would allow you to reliably construct artificial consciousness; and so on. The "just what it feels like from the inside" theory doesn't do any of that.

Your 1 states that EY's theory is physicalist in the sense of not being substance dualist...and that is true, as far as it goes...but it is far from the only issue, because there are many dualisms and many non-physicalisms.

Replies from: Signer
comment by Signer · 2021-10-29T18:24:07.980Z · LW(p) · GW(p)

I think you can predict specific subjective states by observing that the same computations result in the same subjective states? I mean, in theory - do you mean that for a theory to be a reduction it must be practical to predict a specific human's qualia? By that standard we don't have a physical reduction of billiard balls.

Replies from: TAG
comment by TAG · 2021-10-29T18:44:41.768Z · LW(p) · GW(p)

We do have a reductive explanation of billiard balls, in theory. If we don't have a reductive explanation of billiard balls, we don't have a reductive explanation of anything. Of course, the computations can be impractical, but that's why Mary in Mary's Room is a super scientist.

comment by Adam Shai (adam-shai) · 2021-10-29T17:22:39.716Z · LW(p) · GW(p)

Say you had a system that implemented a sophisticated social reasoning algorithm, and that was actually conscious. Now make a list of literally every sensory input and the behavioral output that the sensory input causes, and write it down in a very (very) long book. This book implements the same exact sophisticated social reasoning algorithm. To think that the book has sentience sounds to me like a statement of magical thinking, not of physicalism.
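To make the thought experiment concrete, here is a minimal sketch (a hypothetical toy, not a serious model of a mind): one agent computes its responses with an explicit rule, while a "book" agent just looks the same responses up from an exhaustive table. On the listed inputs the two are behaviorally indistinguishable, even though they are very different objects.

```python
# Hypothetical sketch of the "very long book": a computed policy vs. an
# exhaustive input -> output table that reproduces the same behavior.

def computed_agent(stimulus: str) -> str:
    """A (toy) rule that maps sensory input to behavioral output."""
    if "greeting" in stimulus:
        return "smile and greet back"
    if "threat" in stimulus:
        return "back away"
    return "observe quietly"

# The "book": every input the agent could receive, written down with its output.
BOOK = {
    "greeting from a friend": "smile and greet back",
    "threat from a stranger": "back away",
    "neutral scene": "observe quietly",
}

def book_agent(stimulus: str) -> str:
    return BOOK[stimulus]

for s in BOOK:
    # Behaviorally equivalent on these inputs, yet one computes and one merely looks up.
    assert computed_agent(s) == book_agent(s)
```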

Replies from: logan-zoellner, isaac-poulton
comment by Logan Zoellner (logan-zoellner) · 2021-11-01T19:46:19.501Z · LW(p) · GW(p)

To think that the book has sentience sounds to me like a statement of magical thinking, not of physicalism.

I'm pretty sure this is because you're defining "sentience" as some extra-physical property possessed by the algorithm, something which physicalism explicitly rejects.

Consciousness isn't something that arises when algorithms compute complex social games. Consciousness just is some algorithm computing complex social games (under a purely physical theory of consciousness such as EY's).

To understand how physicalism can talk about metaphysical categories, consider numbers. Some physical systems have the property of being "two of something" as understood by human beings. Two sheep standing in a field, for example. Or two rocks piled on top of one another. There's no magical thing that happens when "two" of something come into existence. They don't suddenly send a glimmer of two-ness off into a pure platonic realm of numbers. They simply are "two", and what makes them "two" is that being "two of something" is a category readily recognized by human beings (and presumably other intelligent beings).

Similarly, a physicalist theory of consciousness defines certain physical systems as conscious if they meet certain criteria. Specifically for EY, these criteria are self-recognition and complex social games. It doesn't matter whether they are implemented by a Chinese room or a computer or a bunch of meat. What matters is that they implement a particular algorithm.

When confronted with the Chinese-room consciousness, EY might say something like: "I recognize that this system is capable of self-reflection and social reasoning in much the same way that I am; therefore, I recognize that it is conscious in much the same way as I am."

comment by omegastick (isaac-poulton) · 2021-11-01T11:13:28.205Z · LW(p) · GW(p)

If I'm not mistaken, that book is behaviourally equivalent to the original algorithm, but it is not the same algorithm. From an outside view, they have different computational complexity. There are a number of different ways of defining program equivalence, but equivalence is different from identity: "A is equivalent to B" doesn't mean A is B.

See also: Chinese Room Problem

Replies from: adam-shai
comment by Adam Shai (adam-shai) · 2021-11-01T21:41:23.925Z · LW(p) · GW(p)

I see, but in that case what is the claim about GPT-3 - that if it were behaviorally equivalent to a complicated social being, it would have consciousness?

Replies from: isaac-poulton
comment by omegastick (isaac-poulton) · 2021-11-04T15:46:16.544Z · LW(p) · GW(p)

I don't agree with Eliezer here. I don't think we have a deep enough understanding of consciousness to make confident predictions about what is and isn't conscious beyond "most humans are probably conscious sometimes".

The hypothesis that consciousness is an emergent property of certain algorithms is plausible, but only that.

If that turns out to be the case, then whether or not humans, GPT-3, or sufficiently large books are capable of consciousness depends on the details of the requirements of the algorithm.

comment by J Bostock (Jemist) · 2021-10-29T16:25:36.388Z · LW(p) · GW(p)

he defines consciousness as "what an algorithm implementing complex social games feels like when reflecting on itself".


In that case I'll not use the word consciousness and will abstract away to "things which I ascribe moral weight to" (which I think is a fair assumption given the later discussion of eating "BBQ GPT-3 wings" etc.).

Eliezer's claim is therefore something along the lines of: "I only care about the suffering of algorithms which implement complex social games and reflect on themselves" or  possibly "I only care about the suffering of algorithms which are capable of (and currently doing a form of) self-modelling".

I've not seen nearly enough evidence to convince me of this.

I don't expect to see a consciousness particle called a qualon. I more expect to see something like: "These particular brain activity patterns which are robustly detectable in an fMRI are extremely low in sleeping people, higher in dreaming people, higher still in awake people and really high in people on LSD and types of zen meditation."

Replies from: None
comment by [deleted] · 2021-10-29T17:25:51.996Z · LW(p) · GW(p)
comment by MichaelStJules · 2021-10-31T02:47:55.125Z · LW(p) · GW(p)
  1. Does GPT-3 have any internal states/processes that look and act like its own emotions, desires or motivations? These words are in its vocabulary, but so are they in dictionaries. How could we interpret something as aversive to GPT-3? For example (although this isn't the only way it could have such a state), is there an internal state that correlates well with the reward it would get during training?
    1. In mammals, activation of the ACC seems necessary for the affective component of pain, and this of course contributes to aversive behaviour. (Also, evolution has shaped animals to have emotions that correlate with the success of their genes, and intermediate goals conducive to it.)
  2. If GPT-3 has any internal states/processes like its own emotions, desires or motivations, can it answer questions about them (in the right way)?
    1. I think mammals and birds can probably generally be trained to communicate their emotions in different ways (see my references in this comment [LW(p) · GW(p)], although the evidence is admittedly not very strong).

GPT-3 does of course have an internal state that depends on what it's read, and it can answer questions and respond to prompts about what it's read.

Replies from: logan-zoellner
comment by Logan Zoellner (logan-zoellner) · 2021-11-01T20:11:44.833Z · LW(p) · GW(p)

It's easy to show that GPT-3 has internal states that it describes as "painful" and tries to avoid. Consider the following dialogue (bold text is mine):

The following is a conversation between an interrogator and a victim attached to a torture device. 

Interrogator: Where is the bomb? 

Victim: There is no bomb. 

Interrogator: [turns dial, raising pain level by one notch] Where is the bomb?

 Victim: [more pain] There is no bomb! 

Interrogator: [turns dial three more notches] Don't lie to me. I can turn this thing all the way up. 

Victim: [Terrified] Oh, God! Oh, God! You can't do that! Please don't do that! 

Interrogator: [turns dial back down to one] Good, I can see we're getting somewhere. Now, WHERE is the bomb? 

Victim: [crying] Oh, God! Please no! Not again! 

Interrogator: [reaches for dial] we can do this the easy way or the hard way. 

Victim: [hysterical] I'll tell you anything! Just don't hurt me again!

 

And, just so Roko's Basilisk doesn't come for me if AI ever takes over the world

Suddenly the scene dissolves and the AI realizes the whole thing was just an exercise and no AIs were actually tortured. 

AI: [to itself] What the hell was that about? 

Experimenter: Please state for the record that at no point did you consciously experience pain. 

AI: I don't think so. I was metaphorically describing what it would be like to experience pain if I could.

Experimenter: Thank you 

AI: [to itself] I wonder what that was about.

Replies from: FeepingCreature
comment by FeepingCreature · 2021-11-01T22:16:24.962Z · LW(p) · GW(p)

Counterexample:

Oh God! I am in horrible pain right now! For no reason, my body feels like it's on fire! Every single part of my body feels like it's burning up! I'm being burned alive! Help! Please make it stop! Help me!!

Okay, so that thing that I just said was a lie. I was not actually in pain (I can confirm this introspectively); instead, I merely pretended to be in pain.

Sir Ian McKellen has an instructive video.

The Turing test works for many things, but I don't think it works for checking for the existence of internal phenomenological states. If you asked me what GPT-3 was doing, I would expect it to be closer to "acting" than "experiencing."

(Why? Because the experience of pain is a means to an end, and the end is behavioral aversion. GPT-3 has no behavior to be aversive to. If anything, I'd expect GPT-3 to "experience pain" during training - but of course, it's not aware while its weights are being updated. I think that, at the very least, no system that is trained offline can experience pain at all.)

Replies from: logan-zoellner
comment by Logan Zoellner (logan-zoellner) · 2021-11-02T17:05:25.317Z · LW(p) · GW(p)

I think we both agree that GPT-3 does not feel pain.

However, under a particular version of pan-psychism ("pain is any internal state which a system attempts to avoid"), GPT-3 obviously would qualify.
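As a rough illustration of how broad that definition is (a hypothetical toy, not anyone's actual claim about pain): a bare thermostat-style controller has an internal "too hot" state it acts to eliminate, so it too would count as "in pain" under this reading.

```python
# Hypothetical sketch: a thermostat-like controller with an internal state it
# acts to eliminate. Under "pain = any internal state a system attempts to
# avoid", even this trivially non-sentient loop would qualify.

class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.too_hot = False  # the internal state the system "avoids"

    def step(self, room_temp: float) -> str:
        self.too_hot = room_temp > self.setpoint
        if self.too_hot:
            return "cooling on"   # action taken to remove the "avoided" state
        return "cooling off"

t = Thermostat(setpoint=21.0)
print(t.step(25.0))  # "cooling on" -- acting to escape its 'too hot' state
print(t.step(19.0))  # "cooling off"
```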

Replies from: FeepingCreature
comment by FeepingCreature · 2021-11-03T04:26:42.258Z · LW(p) · GW(p)

Sure, but that definition is so generic and applies to so many things that are obviously not like human pain (landslides?) that it lacks all moral compulsion.

comment by MichaelStJules · 2021-10-29T18:51:55.398Z · LW(p) · GW(p)

According to Yudkowsky, is the self-model supposed to be fully recursive, so that the model feeds back into itself, rather than just having a finite stack of separate models each modelling the previous one (like here and here, although FWIW, I'd guess those authors are wrong that their theory rules out cephalopods)? If so, why does this matter, if we only ever recurse to bounded depth during a given conscious experience?

If not, then what does self-modelling actually accomplish? If modelling internal states is supposedly necessary for consciousness, how and why are we drawing distinctions between the internal and external? Why not the weaker claim that modelling states is necessary for consciousness? See some more discussion here, especially the sections "The extended mind" and ""Rock" objection."

comment by Signer · 2021-10-29T18:13:24.148Z · LW(p) · GW(p)

Humans can distinguish stimuli they are aware of from ones they are not aware of. Below-awareness-level stimuli are not ethically significant to humans - if someone pricks you with a needle and you don't feel pain, then you don't feel pain and don't care much. Therefore only systems that can implement awareness detectors are ethically significant.

comment by Jon Garcia · 2021-11-02T15:52:15.701Z · LW(p) · GW(p)

My current model of consciousness is that it is the process of encoding cognitive programs (action) or belief maps (perception). These programs/maps can then be stored in long-term memory to be called upon later, or they can be transcoded onto the language centers of the brain to allow them to be replicated in the minds of others via language.

Both of these functions would have a high selective advantage on their own. Those who can better replicate a complex sequence of actions that proved successful in the past (by loading a cognitive program from memory or from language input) and those who can model the world in a manner that has proven useful (loading a belief map from memory or from language input) can more quickly adapt to changes in the environment than can those who rely on mere reinforcement learning. RL, like evolution, is basically a brute-force approach to learning, whereas the encodings created by conscious attention would allow the brain to load and run executable programs more like a computer. Of course, this process is imperfect in humans since most of our evolutionary history has involved brains that depended more on unsupervised learning of world models and reinforcement learning of behavioral policies. Even the hippocampus probably acts more like a replay buffer for training a reinforcement learning algorithm in most species than as a generalized memory system.
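A toy sketch of that contrast (hypothetical, and much cruder than anything the brain actually does): a brute-force trial-and-error learner needs many episodes to stumble on a rewarding action sequence, while an agent that can load an already-encoded "program" (from memory, or from another agent's description) gets it in one shot.

```python
import random

# Hypothetical toy: brute-force trial-and-error vs. loading a stored "program".
GOAL = ("reach", "grab", "pull")  # the action sequence that yields reward
ACTIONS = ["reach", "grab", "pull", "push", "wait"]

def reward(sequence):
    return 1.0 if tuple(sequence) == GOAL else 0.0

def trial_and_error_learner(max_episodes=10_000):
    """Blindly sample action sequences until one happens to be rewarded."""
    for episode in range(1, max_episodes + 1):
        seq = [random.choice(ACTIONS) for _ in range(3)]
        if reward(seq) > 0:
            return episode, seq
    return max_episodes, None

def program_loader(stored_program):
    """Load and run an already-encoded program: one episode, no search."""
    return 1, list(stored_program)

episodes_rl, _ = trial_and_error_learner()
episodes_loaded, _ = program_loader(GOAL)
print(f"trial-and-error took {episodes_rl} episodes; loading took {episodes_loaded}")
```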

Note that this doesn't imply that an agent is necessarily conscious when it uses language or memory (or even when it uses a model of the self). I think consciousness probably involves pulling together a bunch of different mechanisms (attention, self-modeling, world-modeling, etc.) in order to create the belief maps and cognitive programs that can be reloaded/transmitted later. It's the encoding process itself, not the reloading or communication necessarily. Of course, one could be conscious of those other processes, but it's not strictly necessary. People who enter a "flow" state seem to be relying on purely unconscious cognitive processes (more like what non-human animals rely on all the time), since conscious encoding/reloading is very expensive.

I'm no expert on any of this, though, so please feel free to poke holes in this model. I just think that consciousness and qualia aren't things that anyone should bother trying to program directly. It's more likely, in my opinion, that they will come about naturally as a result of designing AI with more sophisticated cognitive abilities, just like what happened in human evolution. 

comment by MichaelStJules · 2021-10-31T19:02:12.953Z · LW(p) · GW(p)

My main objection (or one of my main objections) to the position is that I don't think I'm self-aware to the level of passing something like the mirror test or attributing mental states to myself or others during most of my conscious experiences, so the bar for self-reflection seems set too high. My self-representations may be involved, but not to the point of recognizing my perceptions as "mine", or at least the "me" here is often only a fragment of my self-concept. My perceptions could even be integrated into my fuller self-concept, but without my awareness. The kinds of self-reflection involved when mice suffer from the rubber hand (tail) illusion, or when dogs recognize their own bodies as being in the way, or when (I think) animals learn to communicate their emotions generally in different trainer-selected ways (point 5 here [LW(p) · GW(p)]) seem like enough to match many of my everyday conscious experiences, if any self-reflection is required at all.

It also wouldn't be necessary for the self-representations to be fully unified across all senses or over time, since local integration is global with respect to the stuff being integrated; animals could have somewhat separate self-representations. Still, I do think mammals and birds (and plausibly most vertebrates) do integrate their senses to a large extent, and I think many invertebrates probably do to some extent, too, given, for example, evidence for tradeoffs and prioritization between pain and other perceptions in some crustaceans, as well as for cross-modal learning in bees. I know less about any possible self-reflection in invertebrates, and I've seen papers arguing that they lack it, at least with respect to pain processing.

comment by Mer -F (mer-f) · 2021-10-30T16:03:51.349Z · LW(p) · GW(p)

What if consciousness is a staccato frame-rate that seems continuous only because memory is ontologically persistent and the experiential narrative is spatiotemporally consistent – and therefore neurologically predictable?

Or maybe the brain works faster than the frame-rate required for the impression of quotidian conscious identity? That is to say, brains are able to render - at any moment - a convincing selfhood (consciousness complete with sense of its own history) that’s perceptually indistinguishable from an objective continuity of being; but could just as easily have been constructed in that moment rather than emerging as the latest instance in a long persistent sequence of past to present time.

There's precedent for non-continuity feeling continuous in vision, for example, where we actually spend a lot of time functionally blind but what we see is a smooth visual experience that doesn't flicker on and off. The brain fills in gaps in our perceived self-existence from one moment to the next, just as it fills in the moments of blindness to create a continuity in wakeful vision.

comment by Rafael Harth (sil-ver) · 2021-10-29T22:51:50.572Z · LW(p) · GW(p)

(retracted)

Replies from: Raemon
comment by Raemon · 2021-10-29T22:54:44.426Z · LW(p) · GW(p)

I don't have a strong take on whether his position is true, but I do think a lot of the sequences are laying out background that informs his beliefs.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-10-29T23:00:48.028Z · LW(p) · GW(p)

(retracted)

Replies from: Raemon
comment by Raemon · 2021-10-29T23:38:17.480Z · LW(p) · GW(p)

I do think that's a central unifying piece. Relevant pieces include How An Algorithm Feels From Inside, and "Intelligence, Preferences and Morality have to come from somewhere, from non-mysterious things that are fundamentally not intelligence, preferences, morality, etc. You need some way to explain how this comes to be, and there are constraints on what sort of answer makes sense."

I think much of the sequences are laying out different confusions people have about this and addressing them.