Why it's so hard to talk about Consciousness

post by Rafael Harth (sil-ver) · 2023-07-02T15:56:05.188Z · LW · GW · 159 comments

Contents

  The Two Intuition Clusters
    Characteristics
    The Generator
    Representations in the literature
  Why it matters
  Discussion/Conclusions

[Thanks to Charlie Steiner [LW · GW], Richard Kennaway [LW · GW], and Said Achmiz [LW · GW] for helpful discussion. Extra special thanks to the Long-Term Future Fund for funding research related to this post.]

[Epistemic status: my best guess after having read a lot about the topic, including all LW posts and comment sections with the consciousness tag]

There's a common pattern in online debates about consciousness. It looks something like this:

One person will try to communicate a belief or idea to someone else, but they cannot get through no matter how hard they try. Here's a made-up example:


"It's obvious that consciousness exists."

**-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so-**

"I'm not just talking about the computational process. I mean qualia obviously exist."

**-Define qualia.**

"You can't define qualia; it's a primitive. But you know what I mean."

**-I don't. How could I if you can't define it?**

"I mean that there clearly is some non-material experience stuff!"

**-Non-material, as in defying the laws of physics? In that case, I do get it, and I super don't-**

"It's perfectly compatible with the laws of physics."

**-Then I don't know what you mean.**

"I mean that there's clearly some experiential stuff accompanying the physical process."

**-I don't know what that means.**

"Do you have experience or not?"

**-I have internal representations, and I can access them to some degree. It's up to you to tell me if that's experience or not.**

"Okay, look. You can conceptually separate the information content from how it feels to have that content. Not physically separate them, perhaps, but conceptually. The what-it-feels-like part is qualia. So do you have that or not?"

**-I don't know what that means, so I don't know. As I said, I have internal representations, but I don't think there's anything in addition to those representations, and I'm not sure what that would even mean.**


and so on. The conversation can also get ugly, with the boldface author accusing the quotation author of being unscientific and/or the quotation author accusing the boldface author of being willfully obtuse.

On LessWrong, people are arguably pretty good at not talking past each other, but the pattern above still happens. So what's going on?

The Two Intuition Clusters

The basic model I'm proposing is that core intuitions about consciousness tend to cluster into two camps, with most miscommunication being the result of someone failing to communicate with the other camp. For this post, we'll call the camp of the boldface author Camp #1 and the camp of the quotation author Camp #2.

Characteristics

Camp #1 tends to think of consciousness as a non-special high-level phenomenon. Solving consciousness is then tantamount to solving the Meta-Problem of consciousness, which is to explain why we think/claim to have consciousness. (Note that this means explaining the full causal chain in terms of the brain's physical implementation.) In other words, once we've explained why people keep uttering the sounds kon-shush-nuhs, we've explained all the hard observable facts, and the idea that there's anything else seems dangerously speculative/unscientific. No complicated metaphysics is required for this approach.

Conversely, Camp #2 is convinced that there is an experience thing that exists in a fundamental way. There's no agreement on what this thing is – some postulate causally active non-material stuff, whereas others agree with Camp #1 that there's nothing operating outside the laws of physics – but they all agree that there is something that needs explaining. Moreover, even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding. A complete solution (if it is even possible) may also have a nontrivial metaphysical component.

The camps are ubiquitous; once you have the concept, you will see it everywhere consciousness is discussed. Even single comments often betray allegiance to one camp or the other. Apparent exceptions are usually from people who are well-read on the subject and may have optimized their communication to make sense to both sides.

The Generator

With the description out of the way, let's get to the interesting question: why is this happening? I don't have a complete answer, but I think we can narrow down the disagreement. Here's a somewhat indirect explanation of the proposed crux.

Suppose your friend John tells you he has a headache. As an upstanding citizen and Bayesian agent, how should you update your beliefs here? In other words, what is the explanandum – the thing-your-model-of-the-world-needs-to-explain?

You may think the explanandum is "John has a headache", but that's smuggling in some assumptions. Perhaps John was lying about the headache to make sure you leave him alone for a while! So a better explanandum is "John told me he's having a headache", where the truth value of the claim is unspecified.

(If we want to get pedantic, the claim that John told you anything is still smuggling in some assumptions since you could have also hallucinated the whole thing. But this class of concerns is not what divides the two camps.)
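To make the update concrete, here is a toy Bayes calculation (the numbers are invented purely for illustration). Write \(H\) for "John has a headache" and \(R\) for "John reports a headache":

\[
P(H \mid R) \;=\; \frac{P(R \mid H)\,P(H)}{P(R \mid H)\,P(H) + P(R \mid \neg H)\,P(\neg H)}
\]

With, say, \(P(H) = 0.1\), \(P(R \mid H) = 0.9\), and \(P(R \mid \neg H) = 0.05\) (lying to be left alone, joking, and so on), the posterior is \(0.09/(0.09 + 0.045) = 2/3\). The report raises the probability of the headache substantially without settling it, which is exactly what you get by conditioning on the claim rather than on the claimed fact.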

Okay, so if John tells you he has a headache, the correct explanandum is "John claims to have a headache", and the analogous thing holds for any other sensation. But what if you yourself seem to experience something? This question is what divides the two camps:

  1. For Camp #1, the explanandum keeps the same form: "my brain represents/reports a headache" – the same standard you apply to John.

  2. For Camp #2, the explanandum is the experience itself: "I am experiencing a headache", full stop.

In other words, the two camps disagree about the epistemic status of apparently perceived experiences: for Camp #2, they're epistemic bedrock, whereas for Camp #1, they're model outputs of your brain, and like all model outputs of your brain, they can be wrong. The axiom of Camp #1 can be summarized in one sentence as "you should treat your own claims of experience the same way you treat everyone else's".

From the perspective of Camp #1, Camp #2 is quite silly. People have claimed that fire is metaphysically special, then intelligence, then life, and so on, and their success rate so far is 0%. Consciousness is just one more thing on this list, so the odds that they are right this time are pretty slim.

From the perspective of Camp #2, Camp #1 is quite silly. Any apparent evidence against the primacy of consciousness necessarily backfires as it must itself be received as a pattern of consciousness. Even in the textbook case where you're conducting a scientific experiment with a well-defined result, you still need to look at your screen (or other output device) to read the result, so even science bottoms out in predictions about future states of consciousness!

An even deeper intuition may be what precisely you identify with. Are you identical to your physical brain or body (or the program/algorithm implemented by your brain)? If so, you're probably in Camp #1. Are you a witness of/identical to the set of experiences exhibited by your body at any moment? If so, you're probably in Camp #2. That said, this paragraph is pure speculation, and the two-camp phenomenon doesn't depend on it.

Representations in the literature

If you ask GPT-4 about the two most popular academic books about consciousness, it usually responds with

  1. Consciousness Explained by Daniel Dennett; and

  2. The Conscious Mind by David Chalmers.

If the camps are universal, we'd expect the two books to represent one camp each because economics. As it happens, this is exactly right!

Dennett devotes an entire chapter to the proper evaluation of experience claims, and the method he champions (called "heterophenomenology") is essentially a restatement of the Camp #1 axiom. He suggests that we should treat experience claims like fictional worldbuilding, where such claims are then "in good standing in the fictional world of your heterophenomenology". Once this fictional world is complete, it's up to the scientist to evaluate how its components map to the real world. Crucially, you're supposed to apply this principle even to yourself, so the punchline is again that the epistemic status of experience claims is always up for debate.

Conversely, Chalmers says this in the introductory chapter of his book (emphasis added):

Some say that consciousness is an "illusion," but I have little idea what this could even mean. It seems to me that we are surer of the existence of conscious experience than we are of anything else in the world. I have tried hard at times to convince myself that there is really nothing there, that conscious experience is empty, an illusion. There is something seductive about this notion, which philosophers throughout the ages have exploited, but in the end it is utterly unsatisfying. I find myself absorbed in an orange sensation, and something is going on. There is something that needs explaining, even after we have explained the processes of discrimination and action: there is the experience.

True, I cannot prove that there is a further problem, precisely because I cannot prove that consciousness exists. We know about consciousness more directly than we know about anything else, so "proof" is inappropriate. The best I can do is provide arguments wherever possible, while rebutting arguments from the other side. There is no denying that this involves an appeal to intuition at some point; but all arguments involve intuition somewhere, and I have tried to be clear about the intuitions involved in mine.

In other words, Chalmers is having none of this heterophenomenology stuff; he wants to condition on "I experience X" itself.

Why it matters

While my leading example was about miscommunication, I think the camps have consequences in other areas as well, which are arguably more significant. To see why, suppose we could map out a brain as one big causal network of neurons and were asked to point to the consciousness in this picture.

For someone in Camp #1, the answer has to be something like this:

Consciousness is [the part of our brain that creates a unified narrative and produces our reports about "consciousness"].[1] So consciousness will be a densely connected part of this network – that is, unless you dispute that it's even possible to restrict it to just a part of the network, in which case it's more "some of the activity of the full network". Either way, consciousness is identified with its functional role. If we built an AI with a similar architecture, we'd probably say it also had consciousness – but if someone came along and claimed, "wait a minute, that's not consciousness!", there'd be no fact of the matter as to who is correct, any more than there's a fact of the matter about the precise number of pebbles required to form a heap. The concept is inherently fuzzy, so there's no right or wrong here.

Conversely, Camp #2 views consciousness as a precisely defined phenomenon. And if this phenomenon is causally responsible for our talking about it,[2] then you can see how this view suggests a very different picture: consciousness is now a specific thing in the brain (which may or may not be physically identifiable with a part of the network), and the reason we talk about it is that we have it – we're reporting on a real thing.
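To caricature the contrast in code (a toy sketch of my own; the function names and the numeric ramp are invented, and nothing here is a serious model of either camp): a Camp #1-style concept admits only graded, convention-dependent membership, while a Camp #2-style concept has a definite answer for every input.

```python
def heap_degree(pebbles: int) -> float:
    """Graded membership in 'heap': the cutoffs and the ramp are conventions."""
    if pebbles <= 2:
        return 0.0
    if pebbles >= 100:
        return 1.0
    return (pebbles - 2) / 98  # arbitrary interpolation between the clear cases

def is_even(n: int) -> bool:
    """Crisp predicate: a definite fact of the matter for every input."""
    return n % 2 == 0

print(heap_degree(30))  # ~0.29 -- "sort of a heap"; no further fact decides it
print(is_even(30))      # True  -- nothing conventional about it
```

On the Camp #1 picture, "is this system conscious?" behaves like heap_degree; on the Camp #2 picture, it behaves like is_even.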

These two views suggest substantially different approaches to studying the phenomenon – whether or not something has clear boundaries is an important property! So the camps don't just matter for esoteric debates about qualia but also for attempts to reverse-engineer consciousness, and to a lesser extent, for attempts to reverse-engineer the brain...

... and also for morality, which is a case where the camps are often major players even if consciousness isn't mentioned. Camp #2 tends to view moral value as mostly or entirely reducible to conscious states, an intuition so powerful that they sometimes don't realize it's controversial. But the same reduction is problematic for Camp #1 since consciousness is now an inherently fuzzy phenomenon – and there's no agreed-upon way to deal with this problem. Some want to tie morality to consciousness anyway, which can arguably work under a moral anti-realist framework. Others deny that morality should be about consciousness to begin with. And some bite the bullet and accept that their views imply moral nihilism. I've seen all three views (plus the one from Camp #2) expressed on LessWrong.

Discussion/Conclusions

Given the gulf between the two camps, how does one avoid miscommunication?

The answer may depend on which camp you're in. For the reasons we've discussed, it tends to be easier for ideas from Camp #1 to make sense to Camp #2 than vice versa. If you study the brain looking for something fuzzy, there's no reason you can't still make progress if the thing actually has crisp boundaries – but if you bake the assumption of crisp boundaries into your approach, your work will probably not be useful if the thing is fuzzy. Once again, we need only look at the two most prominent theories in the literature for an example of this. Global Workspace Theory is peak Camp #1 stuff,[3] but it tends to be at least interesting to most people in Camp #2. Integrated Information Theory is peak Camp #2 stuff,[4] and I've yet to meet a Camp #1 person who takes it seriously. Global Workspace Theory is also the more popular of the two, even though Camp #1 is supposedly in the minority among researchers.[5]
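For readers who want a more concrete handle on the two theories, here are two heavily simplified sketches of my own; neither is a faithful rendition of any published formalism. First, a cartoon of a global-workspace-style architecture: specialist modules run in parallel and compete for access to a shared workspace, and the winning content is broadcast back to every module. The module names and salience numbers below are invented.

```python
from typing import Callable, Optional

# Each module receives the last broadcast (or None) and returns a
# (salience, content) bid for access to the workspace.
Module = Callable[[Optional[str]], tuple[float, str]]

def vision(broadcast: Optional[str]) -> tuple[float, str]:
    return (0.8, "red apple ahead")

def interoception(broadcast: Optional[str]) -> tuple[float, str]:
    return (0.3, "mild hunger")

def language(broadcast: Optional[str]) -> tuple[float, str]:
    # Verbal report is driven by whatever was last globally broadcast.
    return (0.1, f"I notice: {broadcast}")

def workspace_step(modules: list[Module], broadcast: Optional[str]) -> str:
    # All modules run "in parallel"; the most salient content wins the
    # competition and becomes globally available on the next step.
    bids = [module(broadcast) for module in modules]
    _, winner = max(bids, key=lambda bid: bid[0])
    return winner

broadcast: Optional[str] = None
for _ in range(3):
    broadcast = workspace_step([vision, interoception, language], broadcast)
    print(broadcast)  # "red apple ahead" wins each cycle
```

Second, the schematic shape of Integrated Information Theory's central quantity (a gloss that suppresses nearly all of the actual machinery, which also differs between versions of the theory): Φ is a minimum over partitions of a distance between the cause-effect structure of the whole system and that of the partitioned system,

\[
\Phi(S) \;=\; \min_{P \in \mathcal{P}(S)} D\big(\mathcal{C}(S),\, \mathcal{C}(S_P)\big),
\]

where \(\mathcal{C}(S)\) is the cause-effect structure of the whole, \(S_P\) is the system with partition \(P\) cut, and \(D\) is a distance between such structures. A system that can be cut apart without loss has \(\Phi = 0\), which is the formal sense in which IIT ties consciousness to integration.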

The same pattern seems to hold on LessWrong across the board: Consciousness Explained gets brought up a lot more than The Conscious Mind, Global Workspace Theory gets brought up a lot more than Integrated Information Theory, and most high karma posts [? · GW] (modulo those of Eliezer) are Camp #1 adjacent – even though there are definitely a lot of Camp #2 people here. Kaj Sotala's Multi Agent Models of Mind [? · GW] series is a particularly nice example of a Camp #1 idea[6] with cross-camp appeal, and there's nothing analogous out of Camp #2.

So if you want to share ideas about this topic, it's probably a good idea to be in Camp #1. If that's not possible, I think just having a basic understanding of how ~half your audience thinks is helpful. There are a lot of cases where asking, "does this argument make sense to people with the other epistemic starting point?" is all you need to avoid the worst misunderstandings.

You can also try to convince the other side to switch camps, but this tends to work only around 0% of the time, so it may not be the best practical choice.


  1. This doesn't mean anything that claims to be conscious is conscious. Under this view, consciousness is about the internal organization of the system, not just about its output. After all, a primitive chatbot can be programmed to make arbitrary claims about consciousness. ↩︎

  2. This assumption is not trivial. For example, David Chalmers' theory suggests that consciousness has little to no impact on whether we talk about it. The class of theories that model consciousness as causally passive is called epiphenomenalism. ↩︎

  3. Global Workspace Theory is an umbrella term for a bunch of high-level theories that attempt to model the observable effects of consciousness under a computational lens. ↩︎

  4. Integrated Information Theory holds that consciousness is identical to the integrated information of a system, modeled as a causal network. There are precise rules to determine which part(s) of a network are conscious, and there is a scalar quantity Φ (called "big phi") that determines the amount of consciousness of a system, as well as a much more complex object (something like a set of points in high-dimensional Euclidean space) that determines its character. ↩︎

  5. According to David Chalmers' book, the proportion skews about 2/3 vs. 1/3 in favor of Camp #2, though he provides no source for this, merely citing "informal surveys". The phenomenon he describes isn't exactly the same as the two-camp model, but it's so similar that I expect high overlap. ↩︎

  6. I'm calling it a Camp #1 idea because Kaj defines consciousness as synonymous with attention for the purposes of the sequence. Of course, this is just a working definition. ↩︎

159 comments

Comments sorted by top scores.

comment by GeorgeWilfrid · 2023-07-03T05:05:00.172Z · LW(p) · GW(p)

As someone who could be described as "pro-qualia": I think there are still a number of fundamental misconceptions and confusions that people bring to this debate.  We could have a more productive dialogue if these confusions were cleared up.  I don't think that clearing up these confusions will make everyone agree with me on everything, but I do think that we would end up talking past each other less if the confusions were addressed.

First, a couple of misconceptions:

1.) Some people think that part of the definition of qualia is that they are necessarily supernatural or non-physical.  This is false.  A qualia is just a sense perception.  That's it.  The definition of "qualia" is completely, 100% neutral as to the underlying ontological substrate.  It could certainly be something entirely physical.  By accepting the existence of qualia, you are not thereby committing yourself to anti-physicalism.

2.) An idea I sometimes see repeated is that qualia are this sort of ephemeral, ineffable "feeling" that you get over and above your ordinary sense perception.  It's as if, you see red, and the experience of seeing red gives you a certain "vibe", and this "vibe" is the qualia.  This is false.  Maybe someone did explain it that way to you once, but if they did, then they were wrong.  Qualia is nothing over and above your ordinary sense perception.  It's not seeing red plus something else.  It's just seeing red.  That's it.

Those are what I would call the "objective" misconceptions.  Past this point, our intuitions may start coming apart, and it may become harder to communicate exactly what my position is.  But I can still try.

When defining qualia as a "sense perception", something crucial that's implicit in the definition is that it is your first-person experience of the sense perception.  It's what you actually perceive.  Some people may be thinking at this point, "well, I don't know what this 'first-person experience' is.  There is the data I take as input, there is the processing that my brain does on it, there is the behavior that I emit as a result, and I suppose you could call the totality of this whole thing a 'sense perception', but I don't know what this 'first-person experience' component is supposed to add to the story."  For people who are in this position, I would add two more arguments to help clarify what's going on:

1.) Do you know what it feels like to feel pain?  Then congratulations, you know what it feels like to have qualia.  Pain is a qualia.  It's that simple.  If I told you that I was going to put you in intense pain for an hour, but I assured you there would be no physical damage or injury to you whatsoever, you would still be very much not ok with that.  You would want to avoid that experience.  Why?  Because pain hurts!  You're not afraid of the fact that you're going to have an "internal representation" of pain, nor are you worried about what behavior you might display as a result of the pain.  You're worried first and foremost about the fact that it's going to hurt!  The "hurt" is the qualia.

2.) Imagine that you have a very boring and unpleasant task to do.  It could be your day job, it could be a social gathering that you would rather not attend, whatever.  Imagine I offer you a proposition: while you are performing this unpleasant task, I can put you into a state that you will subjectively experience as deep sleep.  You will experience exactly what you experience when you are asleep but not dreaming: i.e., exactly nothing.  The catch is, your body will continue to function as though you were wide awake and functioning.  Your body will move around, your eyes will be open, you will talk to people, you will do everything exactly as you would normally do.  But you will experience none of it.  It sounds like an enticing proposition, right?  You get all the benefit of doing the work without the pain of actually having to experience the work.  It doesn't matter if you think this isn't actually possible to achieve in the real world: it's just a thought experiment to get you to understand the difference between your internal experience and your outward behavior.  What you're essentially being offered in the thought experiment is the ability to "turn off your qualia" for a span of time.

If you read all that and you think "ok, I have a better idea of what you mean by 'qualia' now, but I still don't see why it's a big deal or why it should be hard to explain with standard physics", then that's ok.  That's a reasonable position that's shared by a number of experts in this area.  I'm not trying to make you into a Believer In The Hard Problem, I'm just trying to make you understand what "qualia" means.

If you still think "I still have no idea what qualia is and I think you're delusional"... well, that sort of makes me think that I still just haven't found the right way to explain it.  But, I suppose at some point we just have to accept that people will have wildly divergent intuitions and ways of modeling the world, and there's only so much we can do to bridge the gap.

Replies from: Ape in the coat, LatticeDefect
comment by Ape in the coat · 2023-07-03T17:56:38.063Z · LW(p) · GW(p)

Hard upvote for taking the time to describe the concept explicitly and comprehensibly, highlighting the possible places of confusion - the non-physical aspect of qualia that is occasionally smuggled into the definition.

When you define qualia like you do, I (a Camp 1 person, as it turned out) am completely on board with you. Indeed I expect them to be explained with neuroscience, but that's, as you've noticed yourself, a bit of a different story.

comment by LatticeDefect · 2023-07-05T17:05:27.447Z · LW(p) · GW(p)

The thought experiments suggest that qualia are tied to memory formation. If your nociceptors are firing like crazy but the CNS never updates on it, was there any pain?

Then the obvious next question is what distinguishes qualia from memory formation?

Replies from: army1987, gilch
comment by A1987dM (army1987) · 2024-07-17T08:27:52.812Z · LW(p) · GW(p)

Yep, the first thing I thought after reading "this isn't actually possible to achieve in the real world" was "Yes it is!  See https://en.wikipedia.org/wiki/Highway_hypnosis, or that time I played in a concert while blackout drunk and I can only actually remember playing half of the set list."  The second thing I thought was "But did I actually have no qualia, or do I just not remember them?"  The third thing I thought was "Is there any way I could possibly tell, even in principle?  If there isn't, doesn't that mean that there's no actual difference between qualia and the formation of memories of qualia?"

comment by gilch · 2024-04-07T23:44:22.659Z · LW(p) · GW(p)

I'm pretty sure I'm in the Chalmers camp if I'm in either (because qualia are obviously epistemically primitive, and Dennett is being silly), and I've had the same thought about memory formation. Not from the above thought experiments, just from earlier musings on the topic. It seems possible that memory formation is required somehow, but it also seems possible that it isn't, and I have yet to come up with a thought experiment to distinguish them.

I'm not ready to call a camera conscious just because it is saving data (although I can't totally rule out panpsychism yet, I think we currently probably have no nonliving examples of things with consciousness), so I don't know that memory formation is identical to qualia (but maybe). Maybe memory formation is a necessary, but not sufficient condition?

Or maybe the only methods we currently have to directly observe consciousness are internal to ourselves and happen to go through memory formation before we can report on it (certainly to ourselves). I believe things exist that can't be interacted with [LW · GW], so the inability to observe (past) qualia without going through memory formation doesn't prove that (present) qualia can't exist in the moment without forming memory, but should we care? Midazolam, for example, is a drug that causes sedation and anterograde amnesia, but (reportedly) not unconsciousness. Does a sedated patient have qualia? They seem to act like they do. Is memory formation not happening at all? Or is it just not lasting? Is working memory alone sufficient? I don't know.

comment by Carl Feynman (carl-feynman) · 2023-07-02T23:44:44.717Z · LW(p) · GW(p)

I have a simple, yet unusual, explanation for the difference between camp #1 and camp #2: we have different experiences of consciousness.  Believing that everyone has our kind of consciousness, of course we talk past each other.

I’ve noticed that in conversations about qualia, I’m always in the position of Mr Boldface in the example dialog: I don’t think there is anything that needs to be explained, and I’m puzzled that nobody can tell me what qualia are using sensible words.  (I’m not particularly stupid or ignorant; I got a degree in philosophy and linguistics from MIT.)  I suggest a simple explanation: some of us have qualia and some of us don’t.  I’m one of those who don’t.  And when someone tries to point at them, all I can do is to react with obtuse incomprehension, while they point at the most obvious thing in the world.  It apparently is the most obvious thing in the world, to a lot of people.

Obviously I have sensory impressions; I can tell you when something looks red.  And I have sensory memories; I can tell you when something looked red yesterday.  But there isn’t any hard-to-explain extra thing there.

One might object that qualia are so obvious that everyone must have them.  But there are many cases where people differ in their mental faculties, which can be determined only by careful comparison, and which provoke amazement when revealed.  Some people have no visual experience of imagined objects at all.  Some people can’t rotate an imagined object to see it from the other side.  Some people maintain a continuous internal narration.  We all get through life.

Replies from: solvalou, adele-lopez-1, TAG, sil-ver, Ape in the coat, Signer, Mitchell_Porter, None, green_leaf
comment by solvalou · 2023-07-17T19:49:20.646Z · LW(p) · GW(p)

Alternative explanation: everyone has qualia, but some people lack the mental mechanism that makes them feel like qualia require a special metaphysical explanation. Since qualia are almost always represented as requiring such an explanation (or at least as ineffable, mysterious and elusive), these latter people don't recognize their own qualia as that which is being talked about.

How can people lack such a mental mechanism? Either

  1. they simply have never done the particular kind of introspection that's needed to realize the weirdness of qualia, or
  2. there is a correct reductive explanation for qualia, and some people's naive intuition just happens to naturally coincide with this explanation, or
  3. same as 2 except that the explanation is (partially or wholly) incorrect. Presumably, sufficient introspection of the right type would move these people to either 1 or 2 (edit: or to the category of people who are puzzled about qualia, of course).

I don't have a clue about the relative prevalences of these groups, nor do I mean to make a claim about which group you personally are in.

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-18T17:53:12.497Z · LW(p) · GW(p)

You've summarized this more elegantly than I can.  Let me rewrite your explanation into my slightly different terminology: "everyone has ~~qualia~~ sensations, but some people lack the mental mechanism that makes them feel like there are also qualia requiring a special metaphysical explanation. Since qualia are almost always represented as requiring such an explanation (or at least as ineffable, mysterious and elusive), these latter people don't recognize their own ~~qualia~~ sensations as that which is being talked about." I would agree with this rephrasing as describing my experience.  I think the rephrasing is harmless, just that what I'm calling (sensation + qualia) is what you're calling (qualia + the mental mechanism etc.)

As for how I can lack such a mental mechanism, I don't think you're on the right track.  Taking the points in order:

  1. I've done plenty of introspection.  I suppose I might be doing 'the wrong kind', but until someone tells me how to do 'the right kind', I doubt it.
  2. This might be the case for me.  But if it is, I don't know what the 'correct explanation' is.  When I introspect, I simply don't experience anything 'requiring a metaphysical explanation', or that is 'mysterious, ineffable or elusive', to use your terminology.
  3. I'd want to hear from someone who had actually done this before I think it's possible.
comment by Adele Lopez (adele-lopez-1) · 2023-07-03T04:08:56.576Z · LW(p) · GW(p)

That's interesting, but I doubt it's what's going on in general (though maybe it is for some camp #1 people). My instinct is also strongly camp #1, but I feel like I get the appeal of camp #2 (and qualia feel "obvious" to me on a gut level). The difference between the camps seems to me to have more to do with differences in philosophical priors.

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-03T23:37:49.768Z · LW(p) · GW(p)

Oh, I don’t think it’s the only difference between Camp #1 and Camp #2.  But it certainly creates a pre-philosophical bias toward Camp #1, for those of us who don’t have qualia. 
I suspect Daniel Dennett is also in the no-qualia camp, given the arguments advanced in his paper “Quining Qualia”.

comment by TAG · 2023-07-05T14:43:42.316Z · LW(p) · GW(p)

There are less drastic ways of explaining qualiaphobia.

Firstly, to get qualia you have to stop believing in naive realism. Naive realism means that colours are taken to be painted on the surfaces of objects and perceived exactly as they are. People vary a lot in how easy they find it to get away from naive realism.

Secondly, subjective feelings are what scientists are trained to ignore in favour of the 3rd-person perspective. That's a perfectly good methodological rule in most areas of science, but it tends to get exaggerated into a fact of reality -- "feels don't real". Consciousness isn't a typical scientific field -- subjectivity is central.

Replies from: carl-feynman, carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-05T15:58:41.921Z · LW(p) · GW(p)

First, a side note: I don’t like the word “qualiaphobia” for what we’re discussing here, because (a) I’m not afraid of qualia, I just don’t think I have them, and (b) it smacks of homophobia or transphobia, which have a negative connotation.

More later— your comments provoke me to have many thoughts, which I’ll have to finish thinking later, because I have to go to work now.

comment by Carl Feynman (carl-feynman) · 2023-07-06T15:28:20.390Z · LW(p) · GW(p)

“To get qualia you have to stop believing in naive realism.”  Does “get” mean “experience” or “acquire”?  In any event, I don’t believe in naive realism (if I have correctly understood what naive realism means). I am quite aware of the enormous processing it takes to keep object colors constant under changes in illumination.  I further believe that many things that we feel are “out there” are in fact concocted by our brain to make the world easier to understand.  That includes the ideas of objects that have properties, kinds of objects, people who have beliefs, desires and intentions, and the passage of time.  None of these appear in true reality, but everybody thinks with them, because otherwise it’s too hard.

“Feelings are what scientists are trained to ignore.”  It’s true that I was raised as a scientist, but I’ve believed in the validity of subjective evidence since my sophomore year at college, when I took a cognitive science class and had my mind expanded.  That was also about the time people tried to explain qualia to me, and my first experience of completely failing to get the point.

Replies from: TAG
comment by TAG · 2023-07-07T09:40:06.224Z · LW(p) · GW(p)

> Does “get” mean “experience” or “acquire”?

Neither, it means "understand semantically".

> That was also about the time people tried to explain qualia to me, and my first experience of completely failing to get the point

What does "get the point" mean? Are you saying you failed to understand what "qualia" means, or failed to understand why qualia are significant?

Replies from: carl-feynman, carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-08T15:42:54.439Z · LW(p) · GW(p)

I failed to understand what qualia were.  Their attempts at explanation failed to engage with anything in my introspection, and in some cases seemed like word salad.  I was eventually led to the conclusion that one of the following was true: (a) I am too dumb to understand qualia.  Probably not true, since I am smart enough for most things.  (b) It’s one of those wooly concepts that continental philosophers like, and doesn’t actually have a referent.  Probably not true, since down-to-earth philosophers, like Dennett or Ned Block, talk about it.  (c) My cognition is such that I don’t have what they were trying to point at.

Replies from: TAG, awg
comment by TAG · 2023-07-08T16:24:36.760Z · LW(p) · GW(p)

(d) The idea that the word must mean something weird, since it is a strange word -- it cannot be an unfamiliar term for something familiar.

You said you had the experience of redness. I told you that's a quale. Why didn't that tell you what "qualia" means?

comment by awg · 2023-07-08T15:48:47.427Z · LW(p) · GW(p)

When you see the color red, what is that like? When you run your hand over something rough and bumpy, what is that like? When you taste salt, what is that like?

comment by Carl Feynman (carl-feynman) · 2023-07-08T16:56:53.254Z · LW(p) · GW(p)

Replies from: Raemon
comment by Raemon · 2023-07-08T17:47:51.320Z · LW(p) · GW(p)

I'm not actually sure I'd argue qualia are particularly different from "the experience of sensation" (but I think they are different from "sensation").

(I notice other people in this thread, who are talking about qualia and asking you questions, seem to be asking different questions than the ones I'd ask, so I'm still not sure even the "obviously qualia!" people are talking about the same thing)

Some quotes of yours I wanted to respond to:

> So what happens if you hallucinate a color? When that happens, is there anything red, any "redness" or "experience of redness" there? 

> There is nothing red, there is no redness, but there is an experience of redness.  It’s just another case of my brain lying to me, like telling me I don’t have a blind spot, or have color vision all the way to the periphery.

and

> qualia are a kind of tag on top of perceptions, that says “This is real, reason on that basis.” I don’t have that tag, so it’s easier for me to believe that my mind has constructed reality from sense data, rather than that I directly perceive it.

Note that I don't think of qualia as having anything to do with things being real. I think qualia is pretty close to just meaning "experience of sensation". Insofar as I have a tag-connected-with-my-perceptions, it's more like "it matters to me that I experience perceiving this." (I usually think of this as most important for "I experience perceiving happiness, excitement, sadness, fear", i.e. emotions with positive or negative valence.)

I think sensation is different from experience-of-sensation. A thermostat has sensation of temperature, but I would be very surprised if it had an experience of sensation (I think when I feel "hot" or "cold", there is an experience of what-that-feels-like that I think requires some kind of mental representation, and I don't think thermostats can have temperature representations).

Replies from: Signer
comment by Signer · 2023-07-09T04:54:34.651Z · LW(p) · GW(p)

> so I’m still not sure even the “obviously qualia!” people are talking about the same thing

To be clear, the thing the zombies argument is about is explicitly not the thing that is caused (only) by the ability to have a mental representation.

comment by Rafael Harth (sil-ver) · 2023-07-03T10:02:32.728Z · LW(p) · GW(p)

I tend to think that, regardless of which camp is correct, it's unlikely that the difference is due to different experiences, and more likely that one of the two sides is making a philosophical error. Reason being that experience itself is a low-level property, whereas judgments about experience are a high-level property, and it generally seems to be the case that the variance in high-level properties is way way higher.

E.g., it'd be pretty surprising if someone claimed that red is more similar to green than to orange, but less surprising if they had a strange idea about the meaning of life, and that's pretty much true regardless of what exactly they think about the meaning of life. We've just come to expect that pretty much any high-level opinion is possible.

comment by Ape in the coat · 2023-07-03T18:09:36.230Z · LW(p) · GW(p)

I've heard this approach to the question multiple times and I must say I really dislike it. 

Because 

  1. It's an attempt to sidestep the philosophical disagreement instead of resolving it
  2. It makes us even more map-territory confused, as now we conflate absence of belief in qualia with absence of qualia
  3. Most obviously, it fails to acknowledge that people do change their views on the subject. I used to be a subjective idealist and now I'm a reductive materialist. Did I lose my qualia in the process?
Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-04T00:01:56.707Z · LW(p) · GW(p)
  1. The existence of people without qualia might be a way to displace the question from philosophy to cognitive psychology, where at least we have some ways to answer questions.  I don’t think it’s illegitimate for me to say what I say; I think it’s fascinating additional data.
  2. Well, we have to be careful to keep the two concepts separate.  I don’t think I have qualia, but I’m sure other people do.  They’ve claimed to on many occasions, and I don’t think they’re lying or deceived.  From my point of view, other people have some extra thing on top of their sensations, which produces philosophical conundrums when they try to think about it.
  3. You tell me! People say qualia are the most obvious thing in the world.  Do you feel like you have them?
Replies from: particularuniversalbeing, Ape in the coat
comment by particularuniversalbeing · 2023-07-04T01:58:53.468Z · LW(p) · GW(p)

> From my point of view, other people have some extra thing on top of their sensations, which produces philosophical conundrums when they try to think about it.

As someone who definitely has qualia (and believes that you do too), no, that's not what's going on. There's some confusing extra thing on top of behavior - namely, sensations. There would be no confusion if the world were coupled differential equations all the way down (and not just because there would be no one home to be confused), but instead we're something that acts like a collection of coupled differential equations but also, unlike abstract mathematical structures, is like something to be. 

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-04T20:52:29.824Z · LW(p) · GW(p)

“There’s some confusing extra thing on top of behavior, namely sensations.”  Wow, that’s a fascinating notion.  But presumably if we didn’t have visual sensations, we’d be blind, assuming the rest of our brain worked the same, right?  So what exactly requires explanation?  You’re postulating something that acts just like me but has no sensations, i.e. is blind, deaf, etc.  I don’t see how that can be a coherent thing you’re imagining.

When I read you saying “is like something to be,” I get the same feeling I get when someone tries to tell me what qualia are— it’s a peculiar collection of familiar words.  It seems to me that you’re trying to turn a two-place predicate “A imagines what it feels like to be B” into a one-place predicate “B is like something to be”, where it’s a pure property of B.  

Replies from: TAG
comment by TAG · 2023-07-07T14:44:27.533Z · LW(p) · GW(p)

> Wow, that’s a fascinating notion. But presumably if we didn’t have visual sensations, we’d be blind, assuming the rest of our brain worked the same, right?

If you lacked information about your environment, you would be functionally impaired. Information about your environment doesn't have to be visual...it could be sonar or something. It doesn't have to be sensory either...you could just somehow know that there is a door ahead of you, and a turning to the left. Presumably, that's how Dennett thinks it works. For everybody else, the difference between different ways-of-experiencing shows that ways-of-experiencing are more than just information.

> I get the same feeling I get when someone tries to tell me what qualia are— it’s a peculiar collection of familiar words

"Time and space are the same, and they can bend and warp" is a peculiar combination of familiar words.

comment by Ape in the coat · 2023-07-04T05:48:33.088Z · LW(p) · GW(p)

> The existence of people without qualia might be a way to displace the question from philosophy to cognitive psychology,

There are both philosophical (What are qualia? What does having or not having qualia imply?) and neuroscientific (How exactly does the closest referent to "qualia" actually work?) aspects to the problem. Both require an answer. Substituting one for another won't do. The issue with the philosophical aspect isn't that we can't get an answer. It's that we get too many answers, incompatible with each other, and it's hard to use definitions consistently in such a situation.

I agree that there may be fascinating additional data in the realm of neuroscience. I wouldn't be much surprised if some people indeed have much more impressive subjective experiences than others. It's legitimate to talk about it as a possibility, and yet it's only tangential to the philosophical questions at hand.

> I don’t think I have qualia, but I’m sure other people do.  They’ve claimed to on many occasions, and I don’t think they’re lying or deceived.

As you may see from the comments, these people also claim that you misunderstand them with such an interpretation. I don't think they are lying either.

> You tell me! People say qualia are the most obvious thing in the world.  Do you feel like you have them?

See my reply [LW · GW] to GeorgeWilfrid and his original comment. I have qualia defined the way he did, and I expect you to have them too. Let's call these weak qualia (wq). On the other hand, if qualia are defined as irreducible and non-physical - hard qualia (hq) - then I believe that I don't have them, that I didn't have them in my subjective idealist days, and that no one does, no matter how awesome their subjective experience is.

The problem, however, is that there is a motte-and-bailey [LW · GW] dynamic going on. Some people confuse wq with hq, some people think that wq imply hq. People who think they have hq often use the same language as people who think they have only wq. People arguing past each other often use different definitions. And so on.

When we've fixed the definitions, I believe we can properly solve the philosophical aspect. The question is reduced to whether wq indeed imply hq. I think the argument for that position works like this (if there is someone who holds the wq->hq position here, please correct me):

I have direct access to experience. My experience is different from matter. Thus the fact that I have experience at all means that it's not material.

The mistake here is a failure to account for the map-territory distinction. What if you have direct access only to your experience of experience, and not experience itself? Then

My experience of experience is different from my experience of matter. Which doesn't necessarily mean that experience is not material, only that I feel this way even if it's not true.

comment by Signer · 2023-07-03T05:25:28.926Z · LW(p) · GW(p)

What do you think about zombies? Can you imagine something like you, that doesn't feel anything, when looking anywhere?

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-03T23:29:53.719Z · LW(p) · GW(p)

So the philosophical zombie is a person who reports a completely normal set of sensations and emotions, while actually having none of them, right?  I think zombies would be a ridiculous way to build an organism.  Much easier to build something that reported the truth, rather than build a perfect liar.  I could imagine such a thing, but that doesn’t say much about whether a zombie could exist.  I read a lot of science fiction and can imagine six impossible things before breakfast.

Replies from: Signer
comment by Signer · 2023-07-04T05:40:13.345Z · LW(p) · GW(p)

The point is not that zombies exist. The point is that "it's a ridiculous way to build an organism" is not a physical law, and actual physical laws don't seem to specify that our world is not a zombie-world. For anything else from science fiction you can in principle check the corresponding physical equation and conclude that this thing is impossible. How do you do it for the difference between our world and zombie-world?

Replies from: Ape in the coat
comment by Ape in the coat · 2023-07-04T06:36:33.053Z · LW(p) · GW(p)

> The point is that "it's a ridiculous way to build an organism" is not a physical law

It kind of is. An organism evolved to be a perfect liar about having consciousness has to have a different causal history than an organism evolved to have consciousness and tell about it, so the physical laws that provided these histories have to be different too.

Also, notice that what you are talking about here isn't a classical PZ as originally stated: an entity that does everything that a conscious human does for exactly the same reasons, up to every elementary particle in the brain, but still lacks consciousness. It's a "zombie master" scenario where there are some other causes that make the zombie pretend that it has consciousness. Confusion between these two scenarios is common and misleading.

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-04T21:10:32.101Z · LW(p) · GW(p)

Well, looks like I misremembered what a P-zombie is.  I think the notion of “an entity that does everything a human being does for exactly the same reasons […] but lacks consciousness” is completely absurd.  Obviously someone who lacks consciousness is asleep or comatose.  I don’t see how someone who’s walking around, talking about past experience, reporting sensations, etc, could fail to be conscious.

This has always seemed perfectly obvious to me, but it’s not obvious to other highly sensible people.  Could it be they’re experiencing some extra thing in their sensations, that says “this could be dispensed with, you would have the same sensations, but then you wouldn’t be conscious”?  If so, I’m here to tell you the good news that your brain is lying about that.

Replies from: TAG, Signer
comment by TAG · 2023-07-07T14:48:41.948Z · LW(p) · GW(p)

A p-zombie is supposed to lack qualia, not consciousness in the medical sense.

comment by Signer · 2023-07-05T08:05:12.452Z · LW(p) · GW(p)

Absurd why? What physical law prevents walking around, talking about past experience and reporting sensations from feeling like being comatose?

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-05T15:16:08.256Z · LW(p) · GW(p)

Well, it’s fascinating the extent to which we each find the other’s position completely unrealistic.  I think we’re getting closer to a crux, which is good.

I presume you’re not talking about Cotard’s delusion, which can result in people walking around and talking while claiming they’re dead.  That’s just a delusion.

We measure comatoseness with the Glasgow Coma Scale, which ranges from 3 (eyes closed, no speech, motionless even under painful stimuli) to 15 (normal). You’re talking about people who feel comatose while still scoring 15 on the Glasgow Coma Scale?  How can someone be comatose and still respond to stimuli, report memories, and perform voluntary action?  It seems implicit in the definition of comatose that that’s impossible.  It may not be a physical law, but it’s certainly a medical one.

Replies from: Signer
comment by Signer · 2023-07-06T03:05:30.573Z · LW(p) · GW(p)

(For the record, I don't find your position completely unrealistic).

> How can someone be comatose and still respond to stimuli, report memories, and perform voluntary action?

Not "be comatose" - "feel comatose". No one is disputing medical knowledge - it certainly works in our world. But, regardless of how much it contradicts usual science heuristics, how unlikely it is to actually work like that in reality - can you imagine that the world could be different in only "feeling" aspect? Where zombie-you is looking at the blue sky and doesn't feel like you in the same situation, but feels like you imagine feeling when comatose. If you don't immediately reject that idea as implausible, do you have a concept for it at all?

If you do, then the problem is that, regardless of how absurd it is heuristically, actual laws of physics don't seem to specify that our world is not a zombie-world.

Replies from: hastings-greer, carl-feynman
comment by Hastings (hastings-greer) · 2023-07-08T03:51:37.949Z · LW(p) · GW(p)

Crucially, in a world with only these zombies - where no one has ever had qualia - the zombies start arguing about the existence of qualia. (Otherwise, this would be a way to distinguish zombies from people using a physical test.)

comment by Carl Feynman (carl-feynman) · 2023-07-06T14:55:10.908Z · LW(p) · GW(p)

That’s just unimaginably weird.   In my experience of feeling comatose, having no vision and not laying down any memories were notable features. There’s no way I can experience a blue sky while simultaneously not experiencing it.  Nor can I report on my recent experiences while being unable to form memories.

See, this is why I think qualia are a thing on top of sensation.  You experience qualia, and feel that without them, something vital would be missing, and it would be like feeling comatose.   And I’m here to tell you that life without qualia is pretty sweet.  

Replies from: Signer
comment by Signer · 2023-07-06T16:39:23.644Z · LW(p) · GW(p)

Zombie-you wouldn't experience blue sky - they would always only experience being comatose. They would behave like you behave down to the level of neurons and atoms, but they would not experience what you experience when you are seeing a blue sky. I understand that this may sound unlikely and, yes, weird, but what's so hard to imagine? You just imagine feeling comatose, nothing more. Sure you can imagine feeling angry, when in reality you would feel sad - how is this different?

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-08T16:09:33.028Z · LW(p) · GW(p)

That is, from my point of view, asking me to have two contradictory experiences at once: being normal and being comatose.  And you’re going to say, “not being comatose, feeling comatose.” And I will say, I can’t imagine acting awake and also feeling comatose.
Let’s look at a particular feature of coma: not being able to stand upright.  I would feel like I was unable to stand, while in fact standing up whenever appropriate.  And this is not some crazy delusion— in fact my brain is operating normally.  No, I can’t imagine what that would feel like.
We’re both intelligent persons, not trying to be deceptive.  And yet we have a large difference in what we can imagine ourselves being like when we introspect.  I claim this is due to an actual difference in the structure of our cognition, best summed up as “I don’t have qualia, you do.”

Replies from: Signer
comment by Signer · 2023-07-09T04:18:08.982Z · LW(p) · GW(p)

> No, I can’t imagine what that would feel like.

That would feel like being comatose. Again, I could understand if you said "it's unlikely to happen", but I still don't understand how not being able to even imagine it would work. Some similar things can even happen in the real world: you can not consciously see anything, and not feel like you can move your hand, but still move your hand. You can just extrapolate from this to not feeling anything. You can say that feelings about being comatose are delusional in that case.

Or, can you imagine that it's not you that experiences the blue sky - your copy does - when the actual you is a comatose ghost? Like, you don't even need to have qualia to imagine qualia - they can be modeled by just adding a node to your causal graph that includes neurons or whatever. You can do that with your models, right?

Replies from: solvalou
comment by solvalou · 2023-07-17T18:51:25.608Z · LW(p) · GW(p)

Your disagreement is mirrored almost exactly in Yudkowsky's post Zombies Redacted [LW · GW]. The crucial point (as mentioned also in Hastings' sister comment) is that the thought experiment breaks down as soon as you consider the zombies making just the same claims about consciousness as we do, while not actually having any coherent reason for making such claims (as they are defined to not have consciousness in the first place). I guess you can imagine, in some sense, a scenario like that, but what's the point of imagining a hypothetical set of physical laws that lack internal coherence?

Replies from: Signer, tslarm
comment by Signer · 2023-07-18T05:20:49.411Z · LW(p) · GW(p)

Zombies being wrong is not a problem for the experiment's coherence - their reasons for making claims about consciousness are just terminated on the level of physical description. The point is that the laws of physics don't seem to prohibit a scenario like this: for other imagined things you can in principle run the calculations and say "no, evolution on earth would not produce talking unicorns", but where is the part that says that we are not zombies? There are reasons to not believe in zombies and more reasons to not believe in epiphenomenalism, like "it would be a coincidence for us to know about epiphenomenal consciousness", but the problem is that these reasons seem to be outside of physical laws.

comment by tslarm · 2023-07-18T05:30:08.130Z · LW(p) · GW(p)

> what's the point of imagining a hypothetical set of physical laws that lack internal coherence?

I don't think they lack internal coherence; you haven't identified a contradiction in them. But one point of imagining them is to highlight the conceptual distinction between, on the one hand, all of the (in principle) externally observable features or signs of consciousness, and, on the other hand, qualia. The fact that we can imagine these coming completely apart, and that the only 'contradiction' in the idea of zombie world is that it seems weird and unlikely, shows that these are distinct (even if closely related) concepts.

This conceptual distinction is relevant to questions such as whether a purely physical theory could ever 'explain' qualia, and whether the existence of qualia is compatible with a strictly materialist metaphysics. I think that's the angle from which Yudkowsky was approaching it (i.e. he was trying to defend materialism against qualia-based challenges). My reading of the current conversation is that Signer is trying to get Carl to acknowledge the conceptual distinction, while Carl is saying that while he believes the distinction makes sense to some people, it really doesn't to him, and his best explanation for this is that some people have qualia and some don't.

comment by Mitchell_Porter · 2023-07-03T00:33:51.752Z · LW(p) · GW(p)

> Obviously I have sensory impressions; I can tell you when something looks red.  And I have sensory memories; I can tell you when something looked red yesterday.  But there isn’t any hard-to-explain extra thing there.

What is "looking red", in terms of something physical?

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-03T01:43:35.906Z · LW(p) · GW(p)

The brevity of your question makes me suspect that I am about to fall into a philosophical trap.  But I will go ahead and answer it.

There’s one particular interface in my brain.  It’s got some kind of reference to the thing in question, bound to a representation for the color I’ve been trained to call ‘red’.  This color is mostly detected for objects that mostly reflect the longest wavelengths of visible light.  Is that the kind of ‘physical’ you were looking for?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2023-07-03T02:34:05.697Z · LW(p) · GW(p)

a philosophical trap

Let's just say it's a test to see whether you have qualia in your worldview after all. 

I'll try not to get stuck on terms like interface, reference, and binding, and will focus for now just on this entity called a "representation". Is that the thing which is red, or which looks red? And if so, could you remind us what it is, physically? 

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-03T23:17:06.587Z · LW(p) · GW(p)

Nope, it’s the object in the world, an apple or whatever, that looks red and (usually) is red. The representation in my brain usually responds to a red object in the world, but it can be fooled by psychedelics or clever illumination.  I don’t know how data structures are represented in my brain, so I can’t answer “what it is, physically”. If I knew more neuroscience, I might be able to localize it to a particular brain area, but no more (given my current understanding of what we know).

I hope you’re going to tell me some way to tell if I have qualia :-).

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2023-07-04T05:03:07.272Z · LW(p) · GW(p)

So what happens if you hallucinate a color? When that happens, is there anything red, any "redness" or "experience of redness" there? 

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-04T20:13:04.824Z · LW(p) · GW(p)

There is nothing red, there is no redness, but there is an experience of redness.  It’s just another case of my brain lying to me, like telling me I don’t have a blind spot, or that I have color vision all the way to the periphery.

Replies from: TAG, Mitchell_Porter
comment by TAG · 2023-07-07T14:53:11.229Z · LW(p) · GW(p)

there is an experience of redness

That's exactly a quale.

comment by Mitchell_Porter · 2023-07-05T10:05:32.817Z · LW(p) · GW(p)

What about when you're not hallucinating? On that occasion, is there redness as well as an experience of redness?

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-05T14:49:04.250Z · LW(p) · GW(p)

The object is red; I experience it as red.  I suppose you could say there “is redness”, but I find that a strange way to put it.  

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2023-07-06T17:35:05.246Z · LW(p) · GW(p)

I have been mulling over this discussion, trying to identify the best ways to move it forward - focus on the case of an object that isn't red but still looks red? focus on the relationship between representation and experience? - not just because the nature of reality is interesting, but because getting the nature of consciousness right is potentially central to alignment of superintelligence (what OpenAI is now calling "superalignment"). I was also interested in exploring your hypothesis that some substantial difference (of phenomenology, cognition, and/or metacognition), maybe even a phenotypic difference, might be the reason why some naturally favor qualia, and others don't. 

However, in another comment [LW(p) · GW(p)] you have declared that along with qualia, you also disbelieve in properties, kinds, people, and time. These are all concocted by our brains. So your ontology seems to be one in which there's physics, and then there are brains, which fabricate an entire fake reality, which is nonetheless the reality that we live in. 

At this point, I have to conclude that I'm not dealing with a subtly different phenomenology, but rather with the effects of a philosophical belief system. There's no reason to suppose that your skepticism towards qualia has a special subtle cause, when you deny the reality of so much else. 

Maybe we could call it Democritus Syndrome, since he had a very similar outlook. In reality, there's just atoms and the void; but "by convention", we also say there's color and taste and everything else. Interestingly, the fragment which reports this proposition (fragment 125) actually attributes it to the intellect, and also presents a riposte from the senses, who say, how can you deny us when you rely on our evidence? 

But Democritus is just one of the first known examples of this stance. When Locke distinguishes secondary qualities from primary qualities, it's a step in the same direction. One response to that distinction is found in doctrines like property dualism and emergentism, in which one acknowledges that physics portrays the world of the primary qualities as causally closed and self-sufficient, but one is unable to deny the reality of the secondary qualities, and therefore regards them as equally real, and either correlated with, or dependent on, the primary qualities. 

Then there's the response which denies the reality of the secondary qualities, or at least regards them as having a lesser reality. You seem to fall into this category. I don't know if you adhere to a specific philosophical school of this type, or if you're just a scientific materialist who is agnostic about the details. (I also want to note something Dennett said about eliminativism, that eliminativists can be selective; they can believe in some aspects of reported phenomenology, and disbelieve in others.)

Then we have various ontological doctrines for which robust realism regarding the phenomenal is all but axiomatic. There is the idealist option which says, if I am faced with a dilemma between physics and phenomenology, of course I choose phenomenology, that's what I actually and directly know about reality. And then there's a kind of qualic realism which says, physics only provides a relational or structural description of reality, but phenomenology gives us a glimpse of the things in themselves. The experienced world, the "lifeworld", is the actual nature of the conscious part of the brain. That is my position, more or less. 

So if I try to imagine what it's like to be you, I imagine a phenomenology which is a normal human phenomenology, but then (when ontological issues arise) there's an internal commentary which reminds you that all this is a construction of the brain, and the true reality is the wavefunction of the universe (or however it is that you conceive of fundamental physics). 

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-08T17:34:22.505Z · LW(p) · GW(p)

I first noticed my inability to understand qualia in 1981 or 1982, when I was an undergraduate in Ned Block’s Philosophy of Mind class.  That wasn’t a big deal, I didn’t understand lots of things as an undergraduate.  But it was a niggling problem. It wasn’t until the ‘90s sometime that I came up with my “I don’t have qualia” theory.  And it wasn’t until 2016, when I read The Thing Itself by Adam Roberts, and other Kantian philosophy, that I realized that many things whose reality I accepted were actually constructed by my mind for my convenience.  
That's a problem for the theory that Democritus Syndrome causes claimed disbelief in qualia, since I claimed not to have qualia before I caught Democritus Syndrome.

Here’s an alternate theory.  Qualia are a kind of tag on top of perceptions that says “This is real, reason on that basis.” I don’t have that tag, so it’s easier for me to believe that my mind has constructed reality from sense data, rather than that I directly perceive it.  The direction of causality is reversed from your theory.

Replies from: TAG
comment by TAG · 2023-07-08T17:43:36.986Z · LW(p) · GW(p)

Saying that qualia don't really exist, but only appear to, solves nothing. For one thing, qualia are definitionally appearances, so you haven't eliminated them. For another, the physicalist still needs to explain how and why the brain produces such appearances. For a third, you have to know what "qualia" means to express a sceptical theory about them.

Replies from: Signer
comment by Signer · 2023-07-09T05:11:37.749Z · LW(p) · GW(p)

For a third, you have to know what “qualia” means to express a sceptical theory about them.

I mean, by your definition the experience of red is a quale; by their definition experience is some neural activity, and then there is nothing else to explain. The sceptical theory is only sceptical about "but experience is not neural activity!", and for that "qualia, as a thing that is not neural activity, only appear to exist" is a reasonable answer when appearances are defined to be some neural activity.

Replies from: TAG
comment by TAG · 2023-07-09T12:04:14.857Z · LW(p) · GW(p)

by their definition experience is some neural activity,

That's a theory, not a definition. Confusion between theories and definitions is one of the persistent problems in this debate.

Replies from: Signer
comment by Signer · 2023-07-10T04:19:51.471Z · LW(p) · GW(p)

The way I see it, every definition comes embedded in a theory, and the model that includes these definitions describes reality better or worse. Otherwise they are just empty words. So "qualia are experiences" or just "there are such things as qualia" are also implicitly low-resolution theories. Experiences are privileged only under misguided theories of knowledge (which are theories because it's in the name) which make experiences axiomatically true. Otherwise just gesturing to "you know, experiences, you obviously see some things" is not fundamentally different from gesturing to neural activity, and the one about neural activity is more precise.

So, I don't understand which part of the above you have a problem with. You don't disbelieve in the theoretical ability of neuroscience to show on a screen what you are seeing, right? Because all that talk about reductive explanation may give that impression. So it's all about Mary? That even after we obtain a precise theory of what you see, it still wouldn't make you see, and that... "seems necessary" or something.

I don't mind corrections to specific steps, but would appreciate you confirming that yes, you think Mary is a strong argument. And then it would be nice to have a better justification for this than "seems necessary".

Replies from: TAG
comment by TAG · 2023-07-19T14:38:12.680Z · LW(p) · GW(p)

>Experiences are privileged only under misguided theories of knowledge (which are theories because it’s in the name) which make experiences axiomatically true.

Science regards experiences as probably correct about their causes, because you can't do empiricism without that assumption. "Qualia are axiomatically true" is not something you need to claim to define qualia, and not something that is always claimed about qualia, and not central to the problem of qualia.

> Otherwise just gesturing to “you know, experiences, you obviously see some things” is not fundamentally different from gesturing to neural activity, and the one about neural activity is more precise.


It's different because we don't experience neural activity as neural activity. That doesn't rule out neural activity being causal or constitutive of qualia. But what the camp #2 person wants is an explanation of how neural activity constitutes the experience. Asserting, as a definition, that it does isn't a persuasive explanation...and is talking-past.


>So, I don’t understand which part of the above you have a problem with. You don’t disbelieve in the theoretical ability of neuroscience to show on a screen what you are seeing, right? 

That's ambiguous in just the way that Mary's Room is supposed to disambiguate. Mary is able to tell what someone is seeing in the third-person, reading-the-label sense, just not in the first-person, drinking-the-wine sense.


>Because all that talk about reductive explanation may give that impression. So it’s all about Mary? That even after we obtain a precise theory of what you see, it still wouldn’t make you see, and that… “seems necessary” or something.

An objective explanation of seeing red doesn't make you personally see red *and* personally seeing red is necessary to know what red looks like... i.e. the explanation is incomplete. 

Physicalists sometimes respond to Mary's Room by saying that one cannot expect Mary to actually instantiate Red herself just by looking at a brain scan. It seems obvious to them that a physical description of a brain state won't convey what that state is like, because it doesn't put you into that state. As an argument for physicalism, the strategy is to accept that qualia exist, but argue that they present no unexpected behaviour, or other difficulties for physicalism.

That is correct as stated but somewhat misleading: the problem is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it. Obviously, it is true that a description of a brain state won't put you into that brain state. But that doesn't show that there is nothing unusual about qualia. The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.

>I don’t mind corrections to specific steps, but would appreciate you confirming that yes, you think Mary is a strong argument. And then it would be nice to have a better justification for this than “seems necessary”.

It's tautologous that an explanation of subjective experience needs to be about subjective experience. If subjective experience is reducible to brain states, then an explanation should be able to predict qualia, including novel ones... given a brain state as an input, it predicts a set of qualia as an output. 
But what does that mean? How can an entirely objective theory produce such an output? You say: well, naturally it doesn't, because it doesn't put you into the brain state. But even though you have excused the shortcoming, it is still there. You have a meta-explanation for why the explanation fails, not a successful explanation. And "Mary needs to actually instantiate Red herself" concedes that there are some things that are intrinsically subjective.
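
Schematically, the demand is for a map of the form below (the notation $E$, $\mathcal{B}$, $\mathcal{Q}$ is an illustrative gloss, not anything in the original comment):

$$E : \mathcal{B} \to \mathcal{P}(\mathcal{Q})$$

where $\mathcal{B}$ is the space of brain states, $\mathcal{Q}$ the space of qualia, and $\mathcal{P}(\mathcal{Q})$ the set of subsets of $\mathcal{Q}$. The complaint is that an entirely objective theory only tells you how brain states map to other brain states and to behaviour, and gives no procedure for computing $E$.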

Replies from: hairyfigment, Signer
comment by hairyfigment · 2023-07-20T08:52:46.458Z · LW(p) · GW(p)

Have you actually seen orthonormal's sequence [LW · GW] on this exact argument? My intuitions say the "Martha" AI described therein, which imitates "Mary," would in fact have qualia; this suffices to prove that our intuitions are unreliable (unless you can convincingly argue that some intuitions are more equal than others). Moreover, it suggests a credible answer to your question: integration is necessary in order to "understand experience" because we're talking about a kind of "understanding" which necessarily stems from the internal workings of the system, specifically the interaction of the "conscious" part with the rest.

(I do note that the addendum to the sequence's final post should have been more fully integrated into the sequence from the start.)

Replies from: TAG
comment by TAG · 2023-07-20T19:30:00.900Z · LW(p) · GW(p)

Have you actually seen orthonormal’s sequence [LW · GW] on this exact argument?

Yes.

My intuitions say the “Martha” AI described therein, which imitates “Mary,” would in fact have qualia;

Obviously, both arguments rely on intuition.

this suffices to prove that our intuitions are unreliable

I don't think intuitions are 100% reliable. I do think we are stuck with them.

(unless you can convincingly argue that some intuitions are more equal than others).

I have been addressing the people who have the expected response to Mary's Room... I can't do much about the rest.

Moreover, it suggests a credible answer to your question: integration is necessary in order to “understand experience” because we’re talking about a kind of “understanding” which necessarily stems from the internal workings of the system, specifically the interaction of the “conscious” part with the rest.

I think that sort of objection just pushes the problem back. If "integration" is a fully physical and objective process, and if Mary is truly a superscientist, then Mary will fully understand how her subject "integrated" their sense experience, and won't be surprised by experiencing red.

comment by Signer · 2023-07-20T06:27:56.297Z · LW(p) · GW(p)

Thank you for clarifying things.

It’s different because we don’t experience neural activity as neural activity. That doesn’t rule out neural activity being causal or constitutive of qualia. But what the camp #2 person wants is an explanation of how neural activity constitutes the experience. Asserting, as a definition, that it does isn’t a persuasive explanation...and is talking-past.

Yeah, that's what I mean when I talk about axiomatically privileging experience and what I explicitly disagree with - we don't experience experiences as experiences either. It's not different. Describing things as "I'm seeing blue" or having similar internal thoughts is not inherently better. In fact it's worse, because it's less precise. There is no strong foundation for preferring such theory/definitions and so there is no reason to demand that a better theory logically derive concepts from a worse one - it's not how reductionism works[1].

And as to why Mary doesn't provide such a foundation...

That is correct as stated but somewhat misleading: the problem is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it.

...it's not necessary. At the point where physical theory fully describes both the knowledge and the state of Mary, there is no argument for why you must define knowledge in a way that leads to contradictions. And there are arguments why you shouldn't - we understand how knowledge works physically, so you can't just say that "not fully understand" feels appropriate here and treat it as enough of a justification.

And again, experience is not the only case - if you told someone to look at Mary falling from a bicycle and asked them whether she knows how to ride a bicycle, they would say that she doesn't.

So, considering that the meta-explanation is correct in identifying the demand to use the bad definitions as wrong, why would someone not be persuaded? What is the argument for the necessity of instantiating experience for knowledge that keeps you persuaded of it?

  1. ^

    It's tautologous that an explanation of subjective experience needs to be about subjective experience.

    It doesn't need to be. An explanation of fire is not about fire in a logical sense - it's about atoms.

Replies from: TAG
comment by TAG · 2023-07-22T12:53:44.985Z · LW(p) · GW(p)

Yeah, that’s what I mean when I talk about axiomatically privileging experience and what I explicitly disagree with—we don’t experience experiences as experiences either.

Of course we do.

It’s not different. Describing things as “I’m seeing blue” or having similar internal thoughts is not inherently better. In fact it’s worse, because it’s less precise.

It would be a less accurate way of defining the same thing, if we already knew that experiences are fully identifiable with neural activity. But we don't know that... that is what the whole debate is about.

Once you have a successful theory, it is reasonable to change a definition accordingly. For instance, knowing that water ("wet stuff in rivers, seas, and lakes") is H2O, you can define water as H2O.

You can't make the arrow go in the other direction. Defining a tail as a leg doesn't prove a dog has five legs.

Would you concede that it's ever possible to misuse arbitrary redefinitions?

There is no strong foundation for preferring such theory/definitions and so there is no reason to demand that a better theory logically derive concepts from a worse one—it’s not how reductionism works[1].

Definitions aren't theories. Preferring "precise", objective, etc., definitions of words doesn't prove everything is objective, because it's just your own preference. What you are not doing is investigating reality in an unbiased way... instead you have placed yourself in the driving seat.

An explanation of fire is not about fire in a logical sense—it’s about atoms.

It is, of course, about both. A reductive explanation relates a higher-level phenomenon to a lower-level one. If you insist on ignoring the higher-level phenomenon because it is "bad" or "imprecise", you can't achieve an explanation. You have to have the vague understanding of water as wet stuff before you can have the precise understanding of water as H2O.

...it’s not necessary. At the point where physical theory fully describes both knowledge and state of Mary, there is no argument for why you must define knowledge in a way that leads to contradictions.

What contradiction? If something is contradictory, you need to show it.

Physical theory doesn't fully describe the knowledge and the state of Mary, because physical theory can't describe sensations. That's the whole point. There is an argument against physical theory being fully adequate, and since the theory isn't known to be correct, we shouldn't change the definition of "quale".

And there are arguments why you shouldn’t—we understand how knowledge works physically,

We understand how some kinds of knowledge do, but maybe not all kinds. People have believed in knowledge-by-acquaintance for a long time.

so you can’t just say that “not fully understand” feels appropriate here and treat it as enough of a justification.

You can't just say "fully understand" feels appropriate here and treat it as enough of a justification. It's intuitions either way.

And again, experience is not the only case—if you told someone to look at Mary falling from a bicycle and asked them whether she knows how to ride a bicycle, they would say that she doesn’t.

You're not the first person to think that knowledge-by-acquaintance is the same thing as know-how. But... consider showing it, not just telling it.

So, considering that the meta-explanation is correct in identifying the demand to use the bad definitions as wrong, why would someone not be persuaded?

You haven't shown anything is bad.

What is the argument for the necessity of instantiating experience for knowledge that keeps you persuaded of it?

The Mary's Room argument is not a logical proof. It is nonetheless persuasive to a lot of people because a lot of people find that experiencing something personally is more informative than hearing about it at second hand.

Replies from: Signer
comment by Signer · 2023-08-04T11:18:58.512Z · LW(p) · GW(p)

To be clear, I don't argue for physicalism about qualia in general here, only against Mary.

Would you concede that it’s ever possible to misuse arbitrary redefinitions?

Yes, of course, it's possible to use a definition from an incomplete or wrong theory, among other things.

What contradiction?

The contradiction with the physical description of knowledge.

You can’t just say “fully understand” feels appropriate here and treat it as enough of a justification. It’s intuitions either way.

It's not - it's intuitions and a precise description of everything about the situation (Which you agree with, right? That it's not surprising for an image of a brain scan to have a different effect on Mary from seeing something red, that physicalism predicts the difference) on the one side and just intuition on the other. So...

It would be a less accurate way of defining the same thing, if we already knew that experiences are fully identifiable with neural activity. But we don’t know that... that is what the whole debate is about.

...we (or we from the future, or Mary) do know this by observing that neural activity works the same way the thing that you call "experience" works. The argument for identifying experiences with neural activity works as much now as arguments for reductive explanation of the trajectory of a falling leaf, but even if you want to check whether it works in the future and imagine Mary, you would still discover that at best it's slightly unintuitive.

There is an argument against physical theory being fully adequate, and since the theory isn’t known to be correct, we shouldn’t change the definition of “quale”.

The problem is that the whole argument is "it feels unintuitive", when the theory is known to be correct to the level of precisely describing everything about the situation.

We understand how some kinds of knowledge do, but maybe not all kinds. People have believed in knowledge-by-acquaintance for a long time.

We also understand how knowledge-by-acquaintance works physically - it just changes your brain. There is nothing problematic on the knowledge level.

If you insist on ignoring the higher level phenomeneon because it is “bad” or “imprecise”, you can’t achieve an explanation.

The only part being ignored in the physical description of knowledge-by-acquaintance is the feeling of it being unintuitive.

a lot of people find that experiencing something personally is more informative than hearing about it at second hand.

Which is explained physically. What's the argument for demanding more?

You’re not the first person to think that knowledge-by-acquaintance is the same thing as know-how. But... consider showing it, not just telling it.

They are not precisely the same thing - they are different neural processes. But yes, they are both harder to obtain with just a description. What's there to show? The argument was that experiences are the only kind of knowledge that requires something besides physical description. Do you disagree that Mary can have all physical knowledge but still not know how to ride a bike? The thing we can deduce from this is that such a definition of physical knowledge is bad.

Of course we do.

And why do you think this is true? All definitions bottom out somewhere and there is no reason to stop at experiences specifically.

The only way Mary can work as an argument is if you give the "we experience experiences as experiences"-assumption a special status: if you have an axiom of "I know what it's like to see red", then you can build on that the justification for why it's so important to preserve all aspects of your assumed knowledge, including what must be called knowledge and the intuitions about knowledge-by-acquaintance.

Replies from: TAG
comment by TAG · 2023-08-10T13:26:44.297Z · LW(p) · GW(p)

To be clear, I don’t argue for physicalism about qualia in general here, only against Mary.

You're arguing against Mary's Room on the basis of physicalism:-

It’s not—it’s intuitions and a precise description of everything about the situation

The idea that a complete physical explanation captures "everything" is a claim equivalent to physicalism.

The contradiction with the physical description of knowledge.

Of course "Mary doesn't know what Red looks like" contradicts "physical descriptions in the form of detailed brain scans capture everything"...and vice versa. That's the point. An argument for X contradicts not X. That's not the same as a self contradiction.

(Which you agree with, right? That it’s not surprising for an image of a brain scan to have a different effect on Mary from seeing something red, that physicalism predicts the difference)

The point is not just that seeing a tomato has a different effect, the point is that Mary learns something. And physicalism does not predict that, because it would imply that complete physical descriptions leave something out.

on the one side and just intuition on the other. So...

It would be a less accurate way of defining the same thing, if we already knew that experiences are fully identifiable with neural activity.

We don't know that. Assuming it is equivalent to assuming physicalism, which begs the question against Mary's Room.

..we (or we from the future, or Mary) do know this by observing that neural activity works the same way the thing that you call “experience” works.

No. One of them is only knowable by personal instantiation...as you concede...kind of.

The argument for identifying experiences with neural activity works as much now as arguments for reductive explanation of the trajectory of a falling leaf,

Every individual reductive argument has to pay its own way. There's no global argument for reductionism.

but even if you want to check whether it works in the future and imagine Mary, you would still discover that at best it’s slightly unintuitive.

Since we don't actually have a reductive explanation of conscious experience, it's intuition telling you that we will or should or must.

There is an argument against physical theory being fully adequate, and since the theory isn’t known to be correct, we shouldn’t change the definition of “quale”.

The problem is that the whole argument is “it feels unintuitive”, when the theory is known to be correct to the level of precisely describing everything about the situation.

No it isn't.

a lot of people find that experiencing something personally is more informative than hearing about it at second hand.

Which is explained physically. What’s the argument for demanding more?

Are you saying:-

  1. we don't learn from acquaintance...

  2. but we have a false intuition we do...

  3. and science can definitely predict 2.

...?

I.e., something like illusionism. Because I'm pretty sure 3 is false.

You’re not the first person to think that knowledge-by-acquaintance is the same thing as know-how. But... consider showing it, not just telling it.

They are not precisely the same thing—they are different neural processes. But yes, they are both harder to obtain with just a description. What’s there to show? The argument was that experiences are the only kind of knowledge that requires something besides physical description. Do you disagree that Mary can have all physical knowledge but still not know how to ride a bike?

I agree, but I don't see how that makes both things the same.

Of course we do.

And why do you think this is true? All definitions bottom out somewhere and there is no reason to stop at experiences specifically.

I didn't say there was. I'm calling for experiences to be accepted as having some sort of existence, and explained somehow. To not be ignored.

The only way Mary can work as an argument is if you give the “we experience experiences as experiences”-assumption a special status: if you have an axiom of “I know what it’s like to see red”,

I agree with “I know what it’s like to see red”, but I don’t see how it equates to “we experience experiences as experiences”. What else would we experience our own experiences as? Brain scans?

then you can build on that the justification for why it’s so important to preserve all aspects of your assumed knowledge

It's important not to disregard things while claiming that you have a "complete" explanation.

Replies from: Signer
comment by Signer · 2023-08-22T00:15:31.713Z · LW(p) · GW(p)

I'm saying that it's ok to beg the question here, because, as you say, Mary is not a logical argument: if there is no contradiction either way, then physicalism wins by precision. And you don't need to explicitly assume "physical descriptions in the form of detailed brain scans capture everything" - you only need to consistently use one of the common-sense-to-someone-who-knows-physics definitions of knowledge.

I.e., something like illusionism. Because I’m pretty sure 3 is false.

Yes, I'm saying that you can non-contradictorily choose your definitions of knowledge in such a way that 1 is true, and so 2 is also true because an intuition asserting a non-true proposition is wrong, and 3 is true because intuition is just neural activity and science predicts all of it. And yes, that means that illusionism is right in that you can be wrong about your (knowledge of) experiences.

I agree with “I know what it’s like to see red”, but I don’t see how it equates to “we experience experiences as experiences”. What else would we experience our own experiences as? Brain scans?

As neural signals. There is no justification to start from a model that includes experiences. If Mary is not an argument for adding experiences to a physical model, then it's not an argument for not ignoring (contradictory aspects of) them when reducing a high-level description to a physical model.

I’m calling for experiences to be accepted as having some sort of existence, and explained somehow. To not be ignored.

They are not ignored, they're represented by corresponding neural processes. Like, what is ignored and not explained by a physical description? It's not the need for instantiation - it's predicted by experiences being a separate neural process. You can't say "it ignores qualia" - that would be ignoring the whole Mary setup and begging the question - as far as Mary goes there is no problem with "qualia are neural processes". So it leaves only intuition about knowledge - about a high-level concept which you can define however you want.

No. One of them is only knowable by personal instantiation...as you concede...kind of.

Under a definition of knowledge that calls experiences "knowledge", knowing some of your own neural activity also requires instantiating that neural activity.

Replies from: TAG
comment by TAG · 2023-09-27T21:18:59.978Z · LW(p) · GW(p)

I’m saying that it’s ok to beg the question here, because, as you say, Mary is not a logical argument: if there is no contradiction either way, then physicalism wins by precision.

There is a fact of the matter about whether physical descriptions are exhaustive, even if Mary's Room doesn't prove it. If physical descriptions don't convey experiences as such, they are fundamentally flawed, and the precision isn't much compensation.

And you don’t need to explicitly assume “physical descriptions in the form of detailed brain scans capture everything”—you only need to consistently use one of the common-sense-to-someone-who-knows-physics definitions of knowledge.

Defining knowledge as purely physical doesn't prove anything about the world. (But you are probably using "definition" to mean "theory".)

Are you saying:-

1. we don’t learn from acquaintance...

2. but we have a false intuition we do...

3. and science can definitely predict 2.

Yes, I’m saying that you can non-contradictorily

Lots of things are non-contradictory. Non-circularity is more of an achievement.

choose your definitions of knowledge in such a way that 1 is true, and so 2 is also true because an intuition asserting a non-true proposition is wrong, and 3 is true because intuition is just neural activity and science predicts all of it. And yes, that means that illusionism is right in that you can be wrong about your (knowledge of) experiences.

Again, you can't prove things by adopting definitions. If we had a detailed understanding of neuroscience that predicted an illusion of knowledge-by-acquaintance specifically, you'd be onto something. But illusionist claims are philosophical theories, not scientific facts.

What else would we experience our own experiences as? Brain scans?

As neural signals.

We don't experience experiences as neural signals. A person can spend their life with no idea that there is such a thing as a neural signal.

There is no justification to start from a model that includes experiences

Experiences need to be explained because everything needs to be explained. Experiences need not end up in the final ontological model, because sometimes an explanation explains away rather than explains.

If Mary is not an argument for adding experiences to a physical model, then it’s not an argument for not ignoring (contradictory aspects of) them when reducing a high-level description to a physical model. I’m calling for experiences to be accepted as having some sort of existence, and explained somehow. To not be ignored.

They are not ignored, they’re represented by corresponding neural processes. Like, what is ignored and not explained by a physical description?

The experience itself.

It’s not the need for instantiation—it’s predicted by experiences being a separate neural process. You can’t say “it ignores qualia”—that would be ignoring the whole Mary setup and begging the question—as far as Mary goes there is no problem with “qualia are neural processes”.

That would be the case if physicalism is true, but you don't know that physicalism is true. You basically assumed it, by assuming that physical explanations are complete. That's circular.

Under a definition of knowledge that calls experiences “knowledge”, knowing some of your own neural activity also requires instantiating that neural activity.

So maybe I could arbitrarily assume that definition?

Replies from: Signer
comment by Signer · 2023-09-28T17:54:29.313Z · LW(p) · GW(p)

Lots of things are non-contradictory. Non-circularity is more of an achievement.

Such physical definitions of knowledge are not more circular than anything else, I think?

So maybe I could arbitrarily assume that definition?

I mean, go ahead - then Mary would just be able to imagine red.

Again, you can’t prove things by adopting definitions.

Exactly - that's why Mary doesn't work.

If we had a detailed understanding of neuroscience that predicted an illusion of knowledge-by-acquaintance specifically, you’d be onto something. But illusionist claims are philosophical theories, not scientific facts.

There is no need for additional scientific facts. There are enough scientific facts to accept a physical explanation of the whole Mary setup. That's why people mostly seek philosophical problems with physicalism and why physicalists answer with philosophical theories - if physicalism is philosophically coherent, then it is undoubtedly true.

That would be the case if physicalism is true, but you don’t know that physicalism is true. You basically assumed it, by assuming that physical explanations are complete. That’s circular.

Mary's Room was supposed to be an argument against physicalism. If there are no philosophical problems in the setup after you assume physicalism, then the argument fails. It is equivalent to disagreeing with some step of an argument, like "Mary gets new knowledge" - you can't just disallow disagreeing with this because it's logically equivalent to assuming physicalism - that would be assuming the non-physicalism that the argument was about. Of course, I don't just assume physicalism - you need to satisfy the "no philosophical problems" condition, so I talk about why "Mary gets new knowledge" is just trying to prove things by adopting definitions. I don't see how you think it can work otherwise - you can't derive "physicalism is true" from Mary's assumptions alone. Obviously, assuming physicalism doesn't prove that physicalism is true. But again, I don't argue that physicalism is true, I'm arguing that Mary is a bad argument.

There is a fact of the matter about whether physical descriptions are exhaustive, even if Mary’s Room doesn’t prove it. If physical descriptions don’t convey experiences as such, they are fundamentally flawed, and the precision isn’t much compensation.

Sure. So you do agree now that talking about Mary or knowledge is unnecessary?

They are not ignored, they’re represented by corresponding neural processes. Like, what is ignored and not explained by a physical description?

The experience itself.

So, what is your argument against "experience itself is explained by "human experiences are neural processes"", if it's not Mary?

Experiences need to be explained because everything needs to be explained. Experiences need not end up in the final ontological model, because sometimes an explanation explains away rather than explains.

If you don't demand specific experiences to be in the final ontological model, they are explained the same way fire is explained. The explanation of fire does not usually set you on fire. What you call "I'm seeing blue" is actually "your neurons are activated in a way similar to the way they are activated when blue light is directed at your eyes". On what basis, then, do you say that these 3gb of numbers from a simulation do not explain fire?

Replies from: TAG
comment by TAG · 2023-09-29T17:24:50.261Z · LW(p) · GW(p)

Such physical definitions of knowledge are not more circular than anything else, I think?

I don't know what you mean. I wasn't intentionally saying anything physical or non-physical.

So maybe I could arbitrarily assume that definition?

I mean, go ahead—then Mary would just be able to imagine red.

No, because you can't prove things through definitions.

Again, you can’t prove things by adopting definitions.

Exactly—that’s why Mary doesn’t work.

The Mary's Room argument is not an argument from definitions.

If we had a detailed understanding of neuroscience that predicted an illusion of knowledge-by-acquaintance specifically, you’d be onto something. But illusionist claims are philosophical theories, not scientific facts.

There is no need for additional scientific facts. There are enough scientific facts to accept a physical explanation of the whole Mary setup.

Show me a prediction of a novel quale!

That’s why people mostly seek philosophical problems with physicalism and why physicalists answer with philosophical theories—if physicalism is philosophically coherent, then it is undoubtedly true.

No. Consistency is necessary for truth, but nowhere near sufficient.

That would be the case if physicalism is true, but you don’t know that physicalism is true. You basically assumed it, by assuming that physical explanations are complete. That’s circular.

Mary’s Room was supposed to be an argument against physicalism. If there are no philosophical problems in the setup after you assume physicalism,

It's supposed to be an argument against physicalism, so you can't refute it by assuming physicalism.

then the argument fails. It is equivalent to disagreeing with some step of an argument, like “Mary gets new knowledge”—you can’t just disallow disagreeing with this

I don't disallow disagreeing with it. I disallow assuming physicalism. The point is to think about what would happen in the situation whilst suspending judgement about the ontology the world works on.

because it’s logically equivalent to assuming physicalism—that would be assuming the non-physicalism that the argument was about.

Non-physicalism doesn't imply "Mary would not know what Red looks like".

Of course, I don’t just assume physicalism—you need to satisfy the “no philosophical problems” condition, so I talk about why “Mary gets new knowledge” is just trying to prove things by adopting definitions. I don’t see how you think it can work otherwise—you can’t derive “physicalism is true” from Mary’s assumptions alone. Obviously, assuming physicalism doesn’t prove that physicalism is true. But again, I don’t argue that physicalism is true, I’m arguing that Mary is a bad argument.

So you do agree now that talking about Mary or knowledge is unnecessary?

No.

They are not ignored, they’re represented by corresponding neural processes.

There's no fact of the matter about that. If they are fully represented , then Mary would know what red looks like, otherwise not. If we could perform M's R as a rela experiment, we would not need it as a thought experiment.

Like, what is ignored and not explained by a physical description?

The experience itself.

So, what is your argument against “experience itself is explained by “human experiences are neural processes”″, if it’s not Mary?

There's no reason it shouldn't be Mary. Mary's Room isn't a proof, but there is no proof of the contrary. Arguments that start "assuming physicalism" are not proofs, because they are circular and therefore invalid.

If you don’t demand specific experiences to be in the final ontological model, they are explained the same way fire is explained.

We have a detailed gears-level explanation of fire; we do not have one of conscious experience. There are three possibilities, not two:

  1. X is explained, and survives the explanation as part of ontology.
  2. X is explained away.
  3. X is not explained at all.

Merely saying that "X is an emergent, high-level phenomenon... but don't ask me how or why" is not an explanation, despite what many here think.

The explanation of fire does not usually set you on fire.

You only need to instantiate something yourself if it is fundamentally subjective.

Physicalists sometimes respond to Mary's Room by saying that one cannot expect Mary to actually instantiate Red herself just by looking at a brain scan. It seems obvious to them that a physical description of a brain state won't convey what that state is like, because it doesn't put you into that state. As an argument for physicalism, the strategy is to accept that qualia exist, but argue that they present no unexpected behaviour, or other difficulties for physicalism.

That is correct as stated but somewhat misleading: the problem is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it. Obviously, it is true that a description of a brain state won't put you into that brain state. But that doesn't show that there is nothing unusual about qualia. The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.

If another version of Mary were shut up to learn everything about, say, nuclear fusion, the question "would she actually know about nuclear fusion" could only be answered "yes, of course... didn't you just say she knows everything?" The idea that she would have to instantiate a fusion reaction within her own body in order to understand fusion is quite counterintuitive. Similarly, a description of photosynthesis will not make you photosynthesise, and photosynthesising would not be needed for a complete understanding of photosynthesis.

There seem to be some edge cases: for instance, would an alternative Mary know everything about heart attacks without having one herself? Well, she would know everything except what a heart attack feels like, and what it feels like is a quale. The edge cases, like that one, are just cases where an element of knowledge-by-acquaintance is needed for complete knowledge. Even other mental phenomena don't suffer from this peculiarity. Thoughts and memories are straightforwardly expressible in words, so long as they don't involve qualia.

So: is the response "well, she has never actually instantiated colour vision in her own brain" one that lays to rest the challenge posed by the Knowledge argument, leaving physicalism undisturbed? The fact that these physicalists feel it would be in some way necessary to instantiate colour, but not other things, like photosynthesis or fusion, means they subscribe to the idea that there is something epistemically unique about qualia/experience, even if they resist the idea that qualia are metaphysically unique.

What you call “I’m seeing blue” is actually “your neurons are activated in a way similar to the way they are activated when blue light is directed at your eyes”.

Says who? You can't actually show me the explanation, and you can't prove it by assuming physicalism.

On what basis, then, do you say that these 3gb of numbers from a simulation do not explain fire?

I didn't say fire doesn't have an explanation. Note that explanations have nothing to do with simulations. Explanations have to do with:

i) showing that two things are necessarily, not arbitrarily, linked.

ii) making predictions, especially novel ones.

An explanation of conscious experience would render zombies unimaginable (because of i) and allow you to predict novel qualia (because of ii).

Replies from: Signer
comment by Signer · 2023-09-30T11:35:12.752Z · LW(p) · GW(p)

That is correct as stated but somewhat misleading: the problem is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it.

Here - "fully understand" depends on definition of "understand". What you understand is not a matter of fact, it's a matter of definition. All you talk about is how it is "counterintuitive" to call instantiating nuclear reaction in yourself "understanding". "It's intuitive to call new experience "additional knowledge"" is an argument from definitions.

There seem to be some edge cases: for instance, would an alternative Mary know everything about heart attacks without having one herself? Well, she would know everything except what a heart attack feels like, and what it feels like is a quale. The edge cases, like that one, are just cases where an element of knowledge-by-acquaintance is needed for complete knowledge. Even other mental phenomena don’t suffer from this peculiarity. Thoughts and memories are straightforwardly expressible in words, so long as they don’t involve qualia.

They are only edge cases of specific definitions of knowledge. There is no fundamental reason why you must call a heart attack's effect on your brain "knowledge" and not call fire's effect on your hand "knowledge".

The fact that these physicalists feel it would be in some way necessary to instantiate colour, but not other things, like photosynthesis or fusion, means they subscribe to the idea that there is something epistemically unique about qualia/experience, even if they resist the idea that qualia are metaphysically unique.

"Necessary" for what? Judging from "epistemically unique" it is implied that it is necessary for knowledge? Then it's certainly incorrect - it's either not necessary, because Mary can have a more compact representation of knowledge about color, or it's necessary for all things, if Mary supposed to have all representations of knowledge. It may be necessary for satisfying Mary's preferences to have qualia independently of their epistemic value - that's your perfectly physicalist source of subjectivity.

If you only care about matters of fact, then there are no problems for physicalism in the fact that human qualia are unusual - it predicts that different neural processes are different. And predicts that it's useful to see things for yourself. And that it will feel intuitive for some people to say "Mary gets new knowledge". I think it even follows from causal closure that it doesn't make sense for there to be an unphysical explanation for intuitions? If your intuition is not predicted by physics, then atoms somewhere have to be unexpectedly nudged - is that what you propose? I... don't really understand the argument here? Physicalism doesn't say that all things that it is intuitive to call "knowledge" are equally easy to get from books, or something - so why exactly is it an argument against physicalism that Mary gets what it predicts?

No, because you can’t prove things through definitions.

There’s no fact of the matter about that. If they are fully represented, then Mary would know what red looks like, otherwise not. If we could perform Mary’s Room as a real experiment, we would not need it as a thought experiment.

Wait, is the problem that you actually think that it is not obviously physically possible to imagine red without seeing it? Like, knowing everything plausibly includes having all permutations of neuron states, including the state where you are seeing red. Does your "matter of fact" about knowing what it is like to see consider the possibility that without actually seeing, Mary could only simulate zombie-red or something?

Oh, I finally got why you are talking about predicting novel qualia - you are saying that physicalism doesn't predict Mary seeing red, right? Because it only predicts neural activity. My point is that this complaint doesn't have anything to do with Mary or knowledge. If you only talk about Mary, then there is no motivation to doubt physicalism from the experiment. The point of Mary is that she gains knowledge, and physicalism predicts gaining knowledge. There is no need to talk about novel qualia, because physical knowledge contains knowledge about differences between different, old and novel, qualia. You agree that physicalism at least (allows a definition of knowledge where it) predicts gaining some knowledge from instantiation when Mary leaves the room, right? Then even if you have doubts about this predicted knowledge being incomplete, Mary doesn't provide anything that justifies this doubt - your arguments about insufficient gears-level explanations would work the same way in situations without novel qualia or complete physical knowledge. Or do you have an example of a specific difference between qualia that is not predicted by physicalism and uniquely depends on the whole instantiation thing? I mean, my position is that there are no differences between qualia that are not predicted by physicalism at all, so any examples would be appreciated.

Show me a prediction of a novel quale!

I predict that if you open this link, you will experience red hair: https://www.youtube.com/watch?v=AHPzikH0tXE

Novel relative to what epistemic state? Sure, we probably can't ethically and consistently make a human say "wow, it was neither sight nor hearing" now, but I really don't get what the justification is for ignoring other facts about qualia that physicalism can predict? Some of them were novel for humanity in their time.

Says who?

Induction.

We have a detailed gears-level explanation of fire; we do not have one of conscious experience.

We don't usually have very detailed explanations of specific fires. And we have a detailed explanation of conscious experience - physics equations^^. But ok, there is space for more useful theories. The thing I don't understand is how this is an argument against physicalism - do you expect us not to get a gears-level explanation in the future? The whole point of doing Mary is that no one expects it.

Merely saying that “X is an emergent, high-level phenomenon... but don’t ask me how or why” is not an explanation, despite what many here think.

Yes, but that would just mean that the correct position is "physicalism is right, but the detailed explanation is in the works". The lack of a detailed-enough explanation at the present moment is just one of the factors you weigh, along with "physicalism has detailed explanations about physics, neurons and all other things" - not something that logically prohibits believing in physicalism. Again, that's not what mainstream arguments against physicalism are? It's always "physicalism can't possibly explain consciousness even if its explanation were detailed".

i) showing that two things are necessarily, not arbitrarily, linked.

That's what I am against - it's not justified, depending on what you mean by "necessarily" - atoms are not necessarily linked to fire. In the end, we just arbitrarily call some atoms "fire". So why demand this only for qualia? If it's only "as necessary as the reduction of fire", then it is already that necessary - the expectation that you will get a neurological explanation in the future is the same kind of inductive reasoning you do when you decide that correlations between atoms and fire are enough to believe an explanation in terms of atoms.

comment by [deleted] · 2023-07-05T09:49:57.234Z · LW(p) · GW(p)

Do you think you have experienced a dissociative crisis at any point in your life? I mean the sensations of derealisation/depersonalisation, not other symptoms, and it doesn't need to have been 'strong' at all. 

I ask because those sensations are not in any obvious way about processing sensory data, and because of the feeling of detachment from reality that comes with them. So I was curious if you could identify anything like that. 

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-05T14:43:11.662Z · LW(p) · GW(p)

I have on three occasions experienced a state where I can still perceive shapes, but they don’t have any meaning, don’t feel real, and do not resolve into separate objects.  It only lasted a few seconds in each case, and was not distressing.  In fact it was fascinating and I wished it lasted longer so I could gather more data. I’m still capable of voluntary action during these spells— I know this because I once said “Oh hey, I’m derealized!” (or something like that) while it was happening.

I used to experience a phenomenon that I privately call ‘paralysis of the will’, which lasted about ten seconds, and during which I was incapable of willing any new voluntary action, but could continue with my present activity.  For example, it happened when I was driving, and I continued to drive, halted at a stop sign, and then proceeded.  But if someone had asked me a question during that time, I wouldn’t have been able to reply.  It’s never been a problem for me, since it looks like absent-mindedness or preoccupation and doesn’t last long.  I used to get it every few months, but not for the last ten years.  It’s not an absence seizure, because my memory is continuous through it.

I don’t know if this tells you anything.  I might be typical-minding here, but I think lots of people get various brief funny mental phenomena, and most people just shrug it off.

comment by green_leaf · 2023-07-04T06:36:14.250Z · LW(p) · GW(p)

Do you simultaneously know what it's like when something looks red, and also believe that you don't have qualia?

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-07-04T21:30:05.156Z · LW(p) · GW(p)

Yes.  
If qualia is defined as George Wilfrid describes it elsewhere in this thread, as nothing more than sensation, then I definitely have it.  But I suspect there’s something more— plenty of people have tried to point to it, using phrases like “conceptually separate the information content from what it feels like”.  Well, I can’t.  That phrase doesn’t mesh with a phenomenon in my mind.  The information content is what it feels like.

Replies from: TAG, green_leaf
comment by TAG · 2023-07-07T14:56:00.452Z · LW(p) · GW(p)

It's not more than sensation. It's just the subjective aspect without the behavioural aspect.

comment by green_leaf · 2023-08-10T21:18:24.968Z · LW(p) · GW(p)

That depends on how we define "information" - for one definition of information, qualia are information (and also everything else is, since we can only recognize something by the pattern it presents to us).

But for another definition of information, there is a conceptual difference - for example, morphine users report knowing they are in pain, but not feeling the quale of pain.

comment by mishka · 2023-07-02T18:31:33.449Z · LW(p) · GW(p)

Integrated Information Theory is peak Camp #2 stuff

As a Camp #2 person, I just want to remark that from my personal viewpoint, Integrated Information Theory shares the key defect of Global Workspace Theory, and hence is no better.

Namely, I think that the Hard Problem of Consciousness has a Hard Part: the Hard Problem of Qualia. As soon as the Hard Problem of Qualia is solved, the rest of the Hard Problem of Consciousness is much less mysterious (perhaps the rest can be treated in the spirit of the "Easy Problems of Consciousness"; e.g. the question of why I am me and not another person might be treatable as a symmetry violation, a standard mechanism in physics, and the question of why human qualia seem to normally cluster into belonging to a particular subject (my qualia vs. all other qualia) might not be excessively mysterious either).

So the theory purporting to actually solve the Hard Problem of Consciousness needs to shed some light onto the nature and the structure of the space of qualia, in order to be a viable contender from my personal viewpoint.

Unfortunately, I am not aware of any such viable contenders, i.e. of any theories shedding much light onto the nature and the structure of the space of qualia. Essentially, I suspect we are still mostly at square zero in terms of our progress towards solving the Hard Problem of Qualia (I should be able to talk about this particular shade of red, and this particular smell of coffee, and this particular psychedelic sound and what they are or what they are made of, so that the ways they are perceived do come through, and if I can't, a candidate theory has not even started to address what matters personally to me).

So, as a temporary measure, I am just adjoining qualia as primitives to my overall world view and building on top of that. For example, as a temporary measure, I would model a subjective reality as a neural machine processing vectors representing linear combinations of qualia among other things, so I am talking about "a vector space generated by qualia and, perhaps, also by other base elements". This is not a pathological assumption; if we eventually understand the nature of qualia, it is still a mathematically legitimate construction to consider a vector space generated by them.
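A minimal sketch of what I mean, in Python; the quale names are of course just opaque placeholder labels, since we don't yet know what qualia are:

```python
import numpy as np

# Treat each quale as an opaque atomic basis element and represent a moment of
# subjective experience as a linear combination of them. All names here are
# purely illustrative placeholders.
QUALIA = ["this_shade_of_red", "smell_of_coffee", "psychedelic_sound_X"]
INDEX = {q: i for i, q in enumerate(QUALIA)}

def experience(weights):
    """A vector in the free vector space generated by QUALIA."""
    v = np.zeros(len(QUALIA))
    for quale, w in weights.items():
        v[INDEX[quale]] = w
    return v

# e.g. an experience dominated by the red quale, with a faint coffee smell:
e = experience({"this_shade_of_red": 1.0, "smell_of_coffee": 0.2})
```

Nothing here explains what the basis elements are; it just shows that, once adjoined as primitives, they support perfectly ordinary linear-algebraic modeling.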

As long as one takes that leap of faith and adjoins qualia in this fashion, classical neuroscience theories (e.g. the old 40hz Crick-Koch theory, or Global Workspace Theory) might just work for a Camp #2 person.

:-) But yes, I certainly do agree with the main punchline of this post :-)

Replies from: sil-ver, Signer, particularuniversalbeing
comment by Rafael Harth (sil-ver) · 2023-07-02T19:00:45.890Z · LW(p) · GW(p)

I think a lot of Camp #2 people would agree with you that IIT doesn't make meaningful progress on the hard problem. As far as I remember, it doesn't even really try to; it just states that consciousness is the same thing as integrated information and then argues why this is plausible based on intuition/simplicity/how it applies to the brain and so on.

I think IIT "is Camp #2 stuff" in the sense that being in Camp #2 is necessary to appreciate IIT - it's definitely not sufficient. But it does seem necessary because, for Camp #1, the entire approach of trying to find a precise formula for "amount of consciousness" is just fundamentally doomed, especially since the math doesn't require any capacity for reporting on your conscious states, or really any of the functional capabilities of human consciousness. In fact, Scott Aaronson claims (haven't read the construction myself) here that

the system that simply applies the matrix W to an input vector x—has an enormous amount of integrated information Φ
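As a toy illustration of why this kind of construction bites (a stand-in proxy of my own, not IIT's actual Φ computation): for a dense random W, every bipartition of the units has maximal cross-dependence, even though computing y = Wx does nothing remotely cognition-like.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(size=(n, n))  # dense coupling matrix; the system computes y = W @ x

def cross_rank(W, part):
    """Rank of the block coupling one half's outputs to the other half's inputs."""
    rest = [i for i in range(W.shape[0]) if i not in part]
    return np.linalg.matrix_rank(W[np.ix_(part, rest)])

# Every half/half bipartition carries the maximum possible cross-dependence.
for part in ([0, 1, 2, 3], [0, 2, 4, 6], [0, 3, 5, 6]):
    print(part, cross_rank(W, part))  # prints 4 (= n/2, the maximum) each time
```

Any measure that rewards "no bipartition can cut the system cheaply" will score such a system highly, which is the unintuitive result.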

So yeah, Camp #2 is necessary but not sufficient. I had a line in an older version of this post where I suggested that the Camp #2 memeplex is so large that, even if you're firmly in Camp #2, you'll probably find some things in there that are just as absurd to you as the Camp #1 axiom.

Replies from: mishka
comment by mishka · 2023-07-02T19:18:59.483Z · LW(p) · GW(p)

Yes, I agree with all this.

(Some years ago I tried searching for "qualia" in IIT texts, and I think I got literally zero results; I was super disappointed to discover that indeed "it doesn't even really try to make meaningful progress on the hard problem".

I was particularly disappointed because IIT came from Christof Koch, and the "40hz paper" from 1990 (Francis Crick and Christof Koch, "Towards a neurobiological theory of consciousness") had been a revelation and a remarkable conceptual breakthrough, so I had all those hopes and expectations for IIT because it was from Koch :-) :-( )

comment by Signer · 2023-07-03T06:06:54.052Z · LW(p) · GW(p)

So the theory purporting to actually solve the Hard Problem of Consciousness needs to shed some light onto the nature and the structure of the space of qualia, in order to be a viable contender from my personal viewpoint

I get "the nature" part, but why is "the structure" part of the Hard Problem? It sure would be nice to have more advanced neuroscience, but we already have working theories about structure and can make you see blue instead of red. So it's not a square-zero situation.

Replies from: mishka
comment by mishka · 2023-07-03T07:21:04.325Z · LW(p) · GW(p)

Because qualia are related to each other. We want to understand those relations at least to some extent (otherwise it is unlikely that we'll understand what qualia are reasonably well). Our subjective reality is made out of them, and their interrelations probably matter a lot.

can make you see blue instead of red

But this example is about the relationship between physical stimuli and qualia (in this particular instance, "red" is not a quale; only "blue" is a quale, and "red" is a physical stimulus which would result in a red quale under different conditions).

But yes, we do understand quite a bit about color qualia. E.g. we understand that we can structure them in a three-dimensional space, if we want to do so, based on how mixed colors are perceived in various psychophysical experiments (so that's a parametrization of them by a subset of physical stimuli); or we can consider them independent and consider an infinite-dimensional space generated by them as atomic primitives. And it's not at all clear which of these ways is more adequate for describing the structure of subjective experience (which is likely to be much older, historically, than humanity's experience with deliberately mixing colors).

Quoting my old write-up:

what is the dimension of the space of subjective colors? This depends quite a bit on whether a particular person wants to state that brown is a kind of dark orange (then one is probably heading towards a three-dimensional space of subjective colors), or if, contrary to that, one wants to state that brown is a very particular sensation, quite independent from orange (then one is probably heading towards a high-dimensional or, perhaps, even infinite-dimensional space of subjective colors).

However, if one chooses the infinite-dimensional space, some colors are still similar to each other, they do change gradually, there is still a non-trivial metric between them, and this space is not well understood...

But when I say "we are at square zero", "color qualia" are not just "abstract colors", but "subjectively perceived colors", and we just don't understand what this means... Like, at all... We do understand quite a bit about the "physical colors => neural processing" chain, but that chain is as "subjectively colorless" as any theoretical model, as "subjectively colorless" as words "red" and "blue" in this text (in my perception at the moment)...

I do hope we'll start making progress in this direction sooner rather than later...

comment by particularuniversalbeing · 2023-07-04T01:27:26.070Z · LW(p) · GW(p)

So the theory purporting to actually solve the Hard Problem of Consciousness needs to shed some light onto the nature and the structure of the space of qualia, in order to be a viable contender from my personal viewpoint.

Unfortunately, I am not aware of any such viable contenders, i.e. of any theories shedding much light onto the nature and the structure of the space of qualia. Essentially, I suspect we are still mostly at square zero in terms of our progress towards solving the Hard Problem of Qualia (I should be able to talk about this particular shade of red, and this particular smell of coffee, and this particular psychedelic sound and what they are or what they are made of, so that the ways they are perceived do come through, and if I can't, a candidate theory has not even started to address what matters personally to me).

As another Camp #2 person, I mostly agree - IIT is at best barking up a different wrong tree from the functionalist accounts - but Russellian monism[1] gets at least part of the way to square 1. The elevator pitch goes like this:

  • On the one hand, we know an enormous amount about what physical entities do, and nothing whatsoever about what they are. The electromagnetic field is the field that couples to charges in such and such a way; charge is the property in virtue of which particles couple to the electromagnetic field. To be at some point X in space is to interact with things in the neighborhood of X; to be in the neighborhood of X is (among other things) to interact with things at X. For all we know there might not be such things as things at all: nothing (except perhaps good taste) compels us to believe that "electrons" are anything more than a tool for making predictions about future observations.
  • On the other hand, we're directly acquainted with the intrinsic nature of at least some qualia, but know next to nothing about their causal structure. I know what red is like, I know what blue is like, I know what high pitches are like, and I know what low pitches are like, but nothing about those experiences seems sufficient to explain why we experience purple but not highlow. 
  • So we have lawful relations of opaque relata and directly accessible relata with inexplicable relations: maybe there's just the one sort of stuff, which simultaneously grounds physics and constitutes experience.

Is it right? No clue. I doubt we'll ever know. But it's at least the right sort of theory.

  1. ^

    As in Bertrand Russell

Replies from: Kenku, Signer, mishka
comment by Kenku · 2023-07-04T21:15:40.700Z · LW(p) · GW(p)

If your intuitions about the properties of qualia are the same as mine, you might appreciate this schizo theory pattern-matching them to known physics.

Replies from: mishka
comment by mishka · 2024-07-28T15:04:58.263Z · LW(p) · GW(p)

The link disappeared, but is available on the Wayback Machine: http://web.archive.org/web/20230706153511/https://www.burntcircuit.blog/untangling-consciousness/

The long paper, On the Psycho-Physical Parallelism, it references is still available: https://s3-us-west-2.amazonaws.com/psyphy/PsyPhy_latest.pdf?ref=burntcircuit.blog

comment by Signer · 2023-07-04T05:46:35.842Z · LW(p) · GW(p)

Is it right?

Yes [LW · GW].

comment by mishka · 2023-07-04T02:11:41.256Z · LW(p) · GW(p)

Neutral monism does sound like a good direction to probe further.

I doubt we'll ever know.

If we survive long enough, we might live to see a convincing solution for the "Hard Problem".

If we don't solve this ourselves, then I expect that advanced AIs will get very curious about what this thing ("subjective experience", "qualia") those humans keep talking about actually is, and they will get very curious about finding ways to experience it themselves. And being very smart, they might have better chances of solving this.

But groups of humans might also try to organize to solve this themselves (I think not nearly enough is done at present, both theoretically and empirically; for example, people often tend to assume that Neuralink-style interfaces are absolutely necessary to explore hybrid consciousness between biological entities and electronics, but I strongly suspect that a lot can be done with non-invasive interfaces (which is much cheaper/easier/quicker to accomplish and also somewhat safer (although still not quite safe) for participating biological entities))...

That's for experiments. For theory, we just need to demand what we usually demand of novel physics: non-trivial novel experimental predictions of subjectively observable effects. Some highly non-standard ways to obtain strange qualia or to synchronize two minds, something like that. Something we don't expect, and which a new candidate theory predicts, and which turns out to be correct... That's how we'll know that a particular candidate theory in question is more than just a philosophical take...

comment by Valentine · 2023-07-03T02:38:08.197Z · LW(p) · GW(p)

This is a really clear breakdown. Thank you for writing it up!

I'm struck by the symmetry between (a) these two Camps and (b) the two hemispheres of the brain as depicted by Iain McGilchrist. Including the way that one side can navigate the relationship between both sides while the other thinks the first is basically just bonkers!

It's a strong enough analogy that I wonder if it's causal. E.g., I expect someone from Camp 1 to have a much harder time "vibing". I associate Camp 1 folk with rejection of ineffable insights, like "The Tao that can be said is not the true Tao" sounding to them like "the Tao" is just incoherent gibberish.

In which case the "What is this 'qualia' thing you're talking about?" has an awful lot in common with the daughter's arm phenomenon [LW · GW]. The whole experience of seeing a rainbow, knowing it's beautiful, and then witnessing the thought "Wow, what a beautiful rainbow!" would be hard to pin down because the only way to pin it down in Camp 1 is by modeling the experiential stream and then talking about the model. The idea that there could be a direct experience that is itself being modeled and is thus prior to any thoughts about it… just doesn't make sense to the left hemisphere. It's like talking about the Tao.

I don't know how big a factor, if at all, this plays in the two camps thing. It's just such a striking analogy that it seems worth bringing up.

comment by LatticeDefect · 2023-07-05T17:16:08.272Z · LW(p) · GW(p)

I wonder how much camp #1 correlates with aphantasia.

Replies from: elityre
comment by Eli Tyre (elityre) · 2024-07-19T05:50:29.651Z · LW(p) · GW(p)

This is empirically testable!
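A hedged sketch of what such a test could look like: survey people on their camp intuitions and on aphantasia (e.g. via a VVIQ-style questionnaire), then check the two for association. The counts below are made up purely to show the shape of the analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical survey counts: rows = camp, columns = aphantasia status.
#                 aphantasic   non-aphantasic
table = np.array([[30,          70],    # Camp #1
                  [10,          90]])   # Camp #2

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")  # a small p would suggest a real correlation
```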

comment by Shmi (shminux) · 2023-07-02T20:18:03.634Z · LW(p) · GW(p)

Good writeup, I certainly agree with the frustration of people talking past each other with no convergence in sight.

First, I don't understand why IIT is still popular, Scott Aaronson showed its fatal shortcomings 10 years ago, as soon as it came out. 

Second, I do not see any difference between experiencing something and claiming to experience something, outside of intentionally trying to deceive someone. 

Third, I don't know which camp I am in, beyond "of course consciousness is an emergent concept, like free will and baseball". Here by emergence I mean the Sean Carroll version: https://www.preposterousuniverse.com/blog/2020/08/11/the-biggest-ideas-in-the-universe-21-emergence/

I have no opinion on whether some day we will be able to model consciousness and qualia definitively, like we do with many other emergent phenomena. Which may or may not be equivalent to them being outside the "laws of physics", if one defines laws of physics as models of the universe that a human can comprehend. I can certainly imagine a universe where some of the more complicated features just do not fit into the tiny minds of the tiny parts of the universe that we are. In that kind of a universe there would be equivalents of "magic" and "miracles" and "paranormal", i.e. observations that are not explainable scientifically. Whether our world is like that, who knows.

Replies from: sil-ver, Ilio
comment by Rafael Harth (sil-ver) · 2023-07-02T20:48:05.509Z · LW(p) · GW(p)

Thanks!

First, I don't understand why IIT is still popular, Scott Aaronson showed its fatal shortcomings 10 years ago, as soon as it came out

Well, Scott constructed an example for which the theory gives a highly unintuitive result. This isn't obviously a fatal critique; you could always argue that a lot of theories give some unintuitive results. It's also the kind of thing you could maybe fix by tweaking the math,[1] rather than tossing out the entire approach.

I believe Tononi is on record somewhere biting the bullet on that point (i.e., agreeing that Scott's construction would indeed have high Φ, and that that's okay). But I don't know where, and I think I already searched for it a few months ago (probably right after IIT 4.0 was dropped) and couldn't find it.

Second, I do not see any difference between experiencing something and claiming to experience something, outside of intentionally trying to deceive someone.

Third, I don't know which camp I am in, beyond "of course consciousness is an emergent concept, like free will and baseball". Here by emergence I mean the Sean Carroll version: https://www.preposterousuniverse.com/blog/2020/08/11/the-biggest-ideas-in-the-universe-21-emergence/

I think this puts you firmly into Camp #1 (though you saying this proves that, at a minimum, the idea wasn't communicated as clearly as I'd hoped). Like, the introductory dialogue shows someone failing to communicate the difference, so if this difference isn't intuitively obvious to you, this would be a Camp #1 characteristic.

And like, since the whole point was that [trying to articulate what exactly it means for experience to exist independently of the report] is extremely difficult and usually doesn't work, I'm not gonna attempt it here.


  1. Though as mentioned in another comment, I haven't actually read through the construction -- I always just trusted Scott here -- so maybe I'm wrong. ↩︎

Replies from: Ilio
comment by Ilio · 2023-09-30T14:07:02.467Z · LW(p) · GW(p)

I believe Tononi is on record somewhere biting the bullet on that point (i.e., agreeing that Scott's construction would indeed have high Φ, and that that's okay). But I don't know where, and I think I already searched for it a few months ago (probably right after IIT4.0 was dropped) and couldn't find it.

There's a link in SA's last post on the topic: https://scottaaronson.blog/?p=1823

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2023-09-30T14:50:53.486Z · LW(p) · GW(p)

Thanks! Sooner or later I would have searched until I found it; now you've saved me the time.

comment by Ilio · 2023-09-30T14:09:48.032Z · LW(p) · GW(p)

First, I don't understand why IIT is still popular

Imho comment #41 from SA's last post on the topic (link above) explains the appeal, plus the smart/sneaky move of forever saying the theory is not finished yet.

It seems to [Darrell Burgan] that the IIT theorists should be applauded for even attempting to come up with a mathematical definition of consciousness. Their current theory is flawed, but early models of the atom were similarly naive. Here’s hoping that IIT leads to a much more effective model of consciousness. It certainly seems like an important field of study.

comment by wilkox · 2023-07-03T06:29:27.432Z · LW(p) · GW(p)

This is a clear and convincing account of the intuitions that lead to people either accepting or denying the existence of the Hard Problem. I’m squarely in Camp #1, and while I think the broad strokes are correct there are two places where I think this account gets Camp #1 a little wrong on the details.

According to Camp #1, the correct explanandum is still "I claim to have experienced X" (where X is the apparent experience). After all, if we can explain exactly why you, as a physical system, uttered the words "I experienced X", then there's nothing else to explain. […] In other words, the two camps disagree about the epistemic status of apparently perceived experiences: for Camp #2, they're epistemic bedrock, whereas for Camp #1, they're model outputs of your brain, and like all model outputs of your brain, they can be wrong.

I think this is conflating two different senses of ‘claim’. The first sense is the interpersonal or speech sense: John makes a claim to you about his internal experience, in the form of speech. In this sense, ‘John claims to have a headache’ is the correct explanandum, in the Camp #1 view, of John telling you he has a headache, because it’s the closest thing to John’s actual experience that you have access to.

However, there is something different going on in the case where you yourself seem to have had an experience. You can believe you have had a certain experience without telling anybody about it, or without even uttering the words ‘I experienced X’ into an empty room, so the interpersonal or speech sense of ‘claim’ doesn’t really seem to apply. This only leaves us with the sense of ‘making a claim to yourself’, which might more precisely be called ‘thinking’ or ‘believing’.

Even in the Camp #1 view, there really is something different about a claim you make to yourself. You have privileged access to the contents of your own mind that you don’t have to the contents of other people’s minds, by virtue of the mundane physical fact that the neurones in your brain are connected to the other neurones in your brain but not to the neurones in other people’s brains. Even if you don’t utter the words ‘I experienced X’, there is still something to be explained that lies between ‘actually experiencing X’ and ‘claiming in speech to have experienced X’: why did you have the thought or belief ‘I experienced X’, instead of ‘I didn’t experience X but it would be useful for me to lie about it’? The explanandum in the case of your own experience is located a little deeper than it is in the case of the experiences of others. You can still be wrong about the underlying reality of your experiences – perhaps the memory of having a headache was falsely implanted with nefarious technology – but you have access to a type of evidence about it that John does not.

(I've never been able to figure out if Thomas Nagel, in ‘What is it like to be a bat?’, believes that the mere existence of this sort of privileged evidence about one's own experiences tells us something about the nature of  qualia/subjectivity. He says ‘The point of view in question is not one accessible only to a single individual. Rather it is a type.’ But, from my Camp #1 perspective, he never seems to explain what the difference is.)

So consciousness will be a densely connected part of this network – no more, no less – and it will have fuzzy boundaries because there is, ultimately, no ground truth as to what does or doesn't constitute consciousness.

Perhaps this is overly nit-picky, but I don’t believe Camp #1 intuitions imply that consciousness is or arises from a particular ‘part’ of the brain, in the sense that you could say ‘it comes from the neurones in this region’ or ‘it comes from the subset of neurones lighting up on this fMRI’, even allowing fuzzy boundaries. There’s no reason to expect the physical substrate of the brain, or even the network topology of its connections, to always map straightforwardly to some feature or property of the mind, and particularly not for more abstract and higher-level properties. Sometimes there is such an obvious mapping (e.g. visual pathways), but there’s no more reason to expect that there is a ‘consciousness part of the brain’ than a ‘reasoning part of the brain’ or an ‘optimising part of the brain’; it might just be a thing that the whole brain is or does. By analogy, you might be able to point to a particular bit of circuitry in a computer that processes raw data from a camera sensor, but you can’t point to any one part and say ‘this is where the operating system comes from’.

The upshot is the same: Camp #1 will view consciousness as an ‘inherently fuzzy phenomenon’. We might just find it to be even fuzzier than you suggest here.

Replies from: cubefox, TAG, sil-ver
comment by cubefox · 2023-07-04T17:19:29.081Z · LW(p) · GW(p)

You can still be wrong about the underlying reality of your experiences – perhaps the memory of having a headache was falsely implanted with nefarious technology – but you have access to a type of evidence about it that John does not.

But presumably everyone in camp 2 will agree that memories are not perfectly reliable and that memories of experiences are different from those experiences themselves. We could be misremembering. The actually interesting case is whether you can be wrong about having certain experiences now, such that no memory is involved.

Say, you are having a strong headache. Here the headache itself seems to be the evidence. Which seems to mean you can't be mistaken about currently having a headache.

Replies from: wilkox, Ape in the coat
comment by wilkox · 2023-07-05T00:55:26.643Z · LW(p) · GW(p)

You’re absolutely right that this is the more interesting case. I intentionally chose the past tense to make it easier to focus on the details of the example rather than the Camp #1/Camp #2 distinction per se. For completeness, I'll try to recapitulate my understanding of Rafael's account for the present-tense case ‘I have a headache right now’.

From my Camp #1 perspective, any mechanistic description of the brain that explained why it generated the thought/belief/utterance ‘I have a headache right now’ instead of ‘I don’t have a headache right now’ in response to a given set of inputs would be a fully satisfying explanation. Perhaps it really is impossible for a human brain to generate the output ‘I have a headache right now’ without meeting some objective definition of a headache (some collection of facts about sensory inputs and brain state that distinguishes a headache from e.g. a stubbed toe), but there doesn’t seem to be any reason why this impossibility could not be a mundane fact conditional on the physical details of human brains. The brain is taking some combination of inputs, which might include external sensory data as well as introspective data about its own state, and generating a thought/belief/utterance output. It doesn’t seem impossible in principle that, by tweaking certain connections or using TMS or whatever, the mapping between these inputs and outputs could be altered such that the brain reliably generates the output ‘I don’t have a headache right now’ in situations where the chosen objective definition of ‘having a headache’ holds true. So, for Camp #1 the explanandum really is the output ‘I have a headache right now’. (The purpose of my comment was to expand the definition of ‘output’ to explicitly include thoughts and beliefs as well as utterances, and to acknowledge that the inputs in the case ‘I have a headache’ really are different to those in the case ‘John says he has a headache’.)

Camp #2 would say that it is impossible even in principle to be mistaken about the experience of having a headache. They might say it is impossible to meaningfully define ‘having a headache’ only in terms of sensory and/or introspective inputs to the brain. In their view, there is a sort of hard, irreducible kernel of experiencing-a-headache-subjective-qualia-stuff which is closely entangled with the objective inputs and outputs (they would agree that you are more likely to experience a headache if you were hit on the head with a hammer, and more likely to say ‘I have a headache’ if you were experiencing a headache), but nevertheless exists independent from and in addition to these objective facts and is not reducible to an account of only the inputs, outputs, and mapping between them. The explanandum, in their view, is the subjective-qualia-stuff. Camp #2 would fully admit that it's really difficult to pin down the nature of the subjective-qualia-stuff; that's why it's a Hard Problem.

I've done my best here to represent Camp #2 accurately, but it's difficult because their perspective is very alien to me. Apologies in advance to any Camp #2 people and happy to hear your corrections.

Replies from: cubefox
comment by cubefox · 2023-07-05T13:52:48.295Z · LW(p) · GW(p)

Okay, so you are saying that in the first-person case, the evidence for having a headache is not itself the experience of having a headache, but the belief that you have the experience of having a headache. So according to you, one could be wrong about currently having a headache, namely when the aforementioned belief is false, when you have the belief but not the experience. Is this right?

If so, I see two problems with this.

  • Intuitively it doesn't seem possible to be wrong about one's own current mental states. Imagine a patient complains to a doctor about having a terrible headache. The doctor replies: "You may be sure you are having a terrible headache, but maybe you are wrong and actually don't have a headache at all." Or a psychiatrist: "I'm sure you aren't lying, but you may yourself be mistaken about being depressed right now, maybe you are actually perfectly happy". These cases seem absurd. I don't remember any case where I considered myself being wrong about a current mental state. We don't say: I just thought I was feeling pain, but actually I didn't.

  • A belief seems to be itself a mental state. So even if you add the belief as an intermediary layer of evidence between the agent and their experience, you still have something which the agent is infallible about: their belief. The evidence for having a belief would be the belief itself. Beliefs seem to be different from utterances, in that the latter are mechanistically describable third-person events (sound waves), while beliefs seem to be just as mental as experiences. So the explanandum, the evidence, would in both cases be something mental. But it seems you require the explanandum to be something "objective", like an utterance.

Replies from: wilkox, Signer
comment by wilkox · 2023-07-05T16:32:41.462Z · LW(p) · GW(p)

Okay, so you are saying that in the first-person case, the evidence for having a headache is not itself the experience of having a headache, but the belief that you have the experience of having a headache.

Not quite. I would say that in the first-person case, the explanandum – the thing that needs to be explained – is the belief (or thought, or utterance) that you have the experience of having a headache. Once you have explained how some particular set of inputs to the brain led to that particular output, you have explained everything that is going on, in the Camp #1 view. Quoting the original post, in the Camp #1 view ‘if we can explain exactly why you, as a physical system, uttered the words "I experienced X", then there's nothing else to explain.’

So according to you, one could be wrong about currently having a headache, namely when the aforementioned belief is false, when you have the belief but not the experience. Is this right?

I would actually agree that ’you can't be mistaken about your own current experiences’, but I think the problem Rafael's post points out is that Camp #1 and Camp #2 would understand that to mean different things.

Intuitively it doesn't seem possible to be wrong about one's own current mental states.

I'm a bit confused about what you mean by ‘mental states’. It's certainly possible to be wrong about one's own current mental state, as I understand the term; people experiencing psychosis usually firmly believe they are not psychotic. I don't think the two Camps would disagree on this.

The three examples you mention, of having a headache, being depressed (by which I assume you mean feeling down rather than the psychiatric condition specifically), and feeling pain, all seem like examples of subjective experiences. Insofar as this paragraph is saying ‘it's not possible to be wrong about your own subjective experience’, I would agree, with the caveat as above that what I think this means might be different to what a Camp #2 person thinks this means.

So the explanandum, the evidence, would in both cases be something mental. But it seems you require the explanandum to be something "objective", like an utterance.

I don't require the explanandum to be an utterance, and I don't think there's any important sense in which an utterance is more objective than a thought or belief. My original comment was intended only to point out that in the first-person case you have privileged access to certain data, namely the contents of your own mind, that you don't have in the third-person case. The reasons for this are completely mundane and conditional on the current state of affairs, namely that we currently have no practical way of accessing the semantic content inside each other's skulls other than via speech. It's possible to imagine technology that might change this state of affairs, like a highly accurate thought-reading device for example.

I do think the explanandum is required to be an output, because being able to explain or predict the output is the test of your model of what is going on. If you predict ‘this person is going to say they don't have a headache’, and the person says ‘I have a headache’, then there's something wrong with your model.

Replies from: cubefox
comment by cubefox · 2023-07-05T17:55:19.291Z · LW(p) · GW(p)

I don't require the explanandum to be an utterance, and I don't think there's any important sense in which an utterance is more objective than a thought or belief.

I think this is the crucial point of contention. I find the following obvious: thoughts or beliefs are on the same subjective level as experiences, which is quite different from utterances, which are purely mechanical third-person events, similar to the movement of a limb. In your view however, if I'm not misunderstanding you, beliefs are more similar to utterances than to experiences. So while I think beliefs are equally hard to explain as experiences, in your view beliefs are about as easy to explain as utterances. Is this a fair characterization?

The reason I think utterances are "easy" to explain is that they are physical events and therefore obviously allow for a mechanistic third-person explanation. The explanation would not in principle be different from explaining a simple spinal reflex. Nerve inputs somehow cause nerve outputs, except that for an utterance there are orders of magnitude more neurons involved, which makes the explanation much harder in practice. But the principle is the same.

For subjective attitudes like beliefs and experiences the explanandum is not just a mouth movement (as in the case of utterances) which would be directly caused by nervous signals. It is unclear how to even grasp subjective beliefs and experiences in a mechanical language of cause and effect. As an illustration, it is not obvious why an organism couldn't theoretically be a p-zombie -- have the usual neuronal configuration, behave completely normally, produce all the same utterances -- without having any subjective beliefs or experiences.

(It seems vaguely plausible to me that for beliefs and experiences, a reductive, rather than causal, explanation would be needed. Yet the model of other reductive explanations in science, like explaining the temperature of a gas with the average kinetic energy of the particles it is made of, doesn't obviously fit what would be needed in the case of mental states. But this is a longer story.)

Replies from: wilkox
comment by wilkox · 2023-07-05T22:45:28.710Z · LW(p) · GW(p)

Huh, this is interesting. I wouldn't have suspected this to be the crux. I'm not sure how well this maps to the Camp 1 vs 2 difference as opposed to idiosyncratic differences in our own views.

In your view however, if I'm not misunderstanding you, beliefs are more similar to utterances than to experiences. So while I think beliefs are equally hard to explain as experiences, in your view beliefs are about as easy to explain as utterances. Is this a fair characterization?

This is a fair characterisation, though I don't think ease of explanation is a crucial point. I would certainly say that beliefs are more similar to utterances than to experiences. To illustrate this, sitting here now on the surface of Earth I think it's possible for me to produce an utterance that is about conditions at the centre of Jupiter, and I think it's possible for me to have a belief or a thought that is about conditions at the centre of Jupiter, and all of these could stand in a truth relation to what conditions are actually like at the centre of Jupiter. I don't think I can have an experience that is about conditions at the centre of Jupiter. Strictly, I don't think I can have an experience that is ‘about’ anything. I don't think experiences are models of the world, in the way that utterances, beliefs, and thoughts can be. This is why I would agree that it is not possible to be mistaken about an experience, though in everyday language we often elide experiences with claims about the world that do have truth values (‘it looks red’ almost always means ‘I believe it is actually red’, not ‘when I look at it I experience seeing red but maybe that's just a hallucination’).

I find the following obvious: thoughts or beliefs are on the same subjective level as experiences,

What do you see as the important difference between ‘subjective’ and ‘objective’? Is subjectivity about who has access to a phenomenon, or is it a quality of the phenomenon itself?

The reason I think utterances are "easy" to explain is that they are physical events and therefore obviously allow for a mechanistic third-person explanation. The explanation would not in principle be different from explaining a simple spinal reflex. Nerve inputs somehow cause nerve outputs, except that for an utterance there are orders of magnitude more neurons involved, which makes the explanation much harder in practice. But the principle is the same.

I agree with this.

It is unclear how to even grasp subjective beliefs and experiences in a mechanical language of cause and effect.

If for the sake of argument we strike out ‘beliefs’ here and make it just about experiences, this seems to be a restatement of the Camp 1 vs 2 distinction. As a Camp 1 person, a mechanical explanation of whatever chain of events leads me to think or say that I have a headache would fully dissolve the question [? · GW]. I wouldn't feel that there is anything left to explain. From what I understand of Camp 2, even given such an explanation they would still feel there is something left to explain, namely how these objective facts come together to produce subjective experience.

Replies from: cubefox
comment by cubefox · 2023-07-06T00:03:37.363Z · LW(p) · GW(p)

Mental states do not need to be "about" something, but it is pretty clear they can be. One can be just happy, but it seems one can also be happy about something. One certainly can wish for something, or fear that something is the case, or hope for it, etc. The form in the following is the same: the belief that x, the desire that x, the fear that x, the hope that x. Here x is a proposition. In the case of e.g. loving x or hating x, x is an object, not a proposition, but again the mental state is about something. All these states seem hard to explain in a way that utterances aren't.

What do you see as the important difference between ‘subjective’ and ‘objective’? Is subjectivity about who has access to a phenomenon, or is it a quality of the phenomenon itself?

The relevant difference here is the access. The "subjective" is exactly that which an agent is directly acquainted with, while the "objective" stuff is only inferred indirectly. It is unclear how one could explain one with the other.

As a Camp 1 person, a mechanical explanation of whatever chain of events leads me to think or say that I have a headache would fully dissolve the question. I wouldn't feel that there is anything left to explain.

As I said, it is unclear what such a mechanical explanation of a thought or belief would look like. It is clear that utterances are caused by mouth movements which are caused by neurons firing, but it is not clear how neurons could "cause" a belief, or how to otherwise (e.g. reductively) explain a belief. It is not clear how to distinguish p-zombies from normal people, or explain why they wouldn't be possible.

Replies from: wilkox
comment by wilkox · 2023-07-06T01:04:28.103Z · LW(p) · GW(p)

Mental states do not need to be "about" something, but it is pretty clear they can be.

I'm still a bit confused by what you mean by ‘mental states’. My best guess is that you are using it as a catch-all term for everything that is or might be going on in the brain, which includes experiences, beliefs, thoughts, and more general states like ‘psychotic’ or ‘relaxed’.

I agree that mental states do not need to be about something, but I think beliefs do need to be about something and thoughts can be about something (propositional in the way you describe). I don't think an experience can be propositional. I don't understand how this relates to whether these particular mental states are able to be explained.

It is clear that utterances are caused by mouth movements which are caused by neurons firing, but it is not clear how neurons could "cause" a belief, or how to otherwise (e.g. reductively) explain a belief.

My best account for what is going on here is that we have two interacting intuitive disagreements:

  1. The ‘ordinary’ Camp 1 vs 2 disagreement, as outlined in Rafael's post, where we disagree where the explanandum lies in the case of subjective experience.
  2. A disagreement over whether whatever special properties subjective experience has also extend to other mental phenomena like beliefs, such that in the Camp 2 view there would be a Hard Problem of why and how we have beliefs analogous to or identical with the Hard Problem of why and how we have subjective experience.

Does this account seem accurate to you?

Replies from: cubefox
comment by cubefox · 2023-07-06T01:47:22.633Z · LW(p) · GW(p)

I'm still a bit confused by what you mean by ‘mental states’. My best guess is that you are using it as a catch-all term for everything that is or might be going on in the brain, which includes experiences, beliefs, thoughts, and more general states like ‘psychotic’ or ‘relaxed’.

I would not count "psychotic" here, since one is not necessarily directly acquainted with it (one doesn't necessarily know one has it).

I don't think an experience can be propositional. I don't understand this relates to whether these particular mental states are able to be explained.

I thought you saw the fact that beliefs are about something as evidence that they are easier to explain than experiences, or that they at least more similar to utterances than to experiences. I responded that aboutness (technical term: intentionality) doesn't matter, as several things that are commonly regarded as qualia, just like experiences, can be about something, e.g. loves or fears. So accepting beliefs as explanandum would not be in principle different from accepting fears or experiences as explanandum, which would seem to put you more in Camp #2 rather than #1.

I think the main disagreement is actually just one, the above: What counts as a simple explanandum such that we would not run into hard explanatory problems? My position is that only utterances act as such a simple explanandum, and that no subjective mental state (things we are directly acquainted with, like intentional states, emotions and experiences) is simple in this sense, since they are not obviously compatible with any causal explanation.

Replies from: wilkox
comment by wilkox · 2023-07-06T03:43:59.864Z · LW(p) · GW(p)

I would not count "psychotic" here, since one is not necessarily directly acquainted with it (one doesn't necessarily know one has it).

Would it be fair to say then that by ‘mental states’ you mean ‘everything that the brain does that the brain can itself be aware of’?

I thought you saw the fact that beliefs are about something as evidence that they are easier to explain than experiences

I don't think there is any connection between whether a thought/belief/experience is about something and whether it is explainable. I'm not sure about ‘easier to explain’, but it doesn't seem like the degree of easiness is a key issue here. I hold the vanilla Camp 1 view that everything the brain is doing is ultimately and completely explainable in physical terms.

or that they at least more similar to utterances than to experiences

I do think beliefs are more similar to utterances than experiences. If we were to draw an ontology of ‘things brains do’, utterances would probably be a closer sibling to thoughts than to beliefs, and perhaps a distant cousin to experiences. A thought can be propositional (‘the sky is blue’) or non-propositional (‘oh no!’), as can an utterance, but a belief is only propositional, while an experience is never propositional. I think an utterance could be reasonably characterised as a thought that is not content to stay swimming around in the brain but for whatever reason escapes out through the mouth. To be clear though, I don't think any of this maps on to the question of whether these phenomena are explicable in terms of the physical implementation details of the brain.

So accepting beliefs as explanandum would not be in principle different from accepting fears or experiences as explanandum, which would seem to put you more in Camp #2 rather than #1.

I think there is an in-principle difference between Camp 1 ‘accepting beliefs [or utterances] as explanandum’ and Camp 2 ‘accepting experiences as explanandum’. When you ask ‘What counts as a simple explanandum such that we would not run into hard explanatory problems?’, I think the disagreement between Camp 1 and Camp 2 in answering this question is not over ‘where the explanandum is’ so much as ‘what it would mean to explain it’.

It might help here to unpack the phrase ‘accepting beliefs as explanandum’ from the Camp 1 viewpoint. In a way this is a shorthand for ‘requiring a complete explanation of how the brain as a physical system goes from some starting state to the state of having the belief’. The belief or utterance as explanandum works as a shorthand for this for the reasons I mentioned above, i.e. that any explanation that does not account for how the brain ended up having this belief or generating this utterance is not a complete and satisfactory explanation. This doesn't privilege either beliefs or utterances as special categories of things to be explained; they just happen to be end states that capture everything we think is worth explaining about something like ‘having a headache’ in particular circumstances like ‘forming a belief that I have a headache’ or ‘uttering the sentence “I have a headache”’.

By analogy, suppose that I was an air safety investigator investigating an incident in which the rudder of a passenger jet went into a sudden hardover. The most appropriate explanandum in this case is ‘the rudder going into a sudden hardover’, because any explanation that doesn't end with ‘...and this causes the rudder to go into a sudden hardover’ is clearly unsatisfactory for my purposes. Suppose I then conduct a test flight in which the aircraft's autopilot is disconnected from the rudder, and discover a set of conditions that reliably causes the autopilot to form an incorrect model of the state of the aircraft, such that if the autopilot was connected to the rudder it would command a sudden hardover to ‘correct’ the situation. It seems quite reasonable in this case for the explanandum to be ‘the autopilot forming an incorrect model of the state of the aircraft’. There is no conceptual difference in the type of explanation required in the two cases. They can both in principle be explained in terms of a physical chain of events, which in both cases would almost certainly include some sequence of computations inside the autopilot. The fact that the explanandum in the second case is a propositional representation internal to the autopilot rather than a physical movement of a rudder doesn't pose any new conceptual mysteries. We're just using the explanandum to define the scope of what we're interested in explaining.

This is distinct from the Camp 2 view, in which even if you had a complete description of the physical steps involved in forming the belief or utterance ‘I have a headache’, there would still be something left to explain, that is the subjective character of the experience of having a headache. When the Camp 2 view says that the experience itself is the explanandum, it does privilege subjective experience as a special category of things to be explained. This view asserts that experience has a property of subjectiveness that in our current understanding cannot be explained in terms of the physical steps, and it is this property of subjectiveness itself that demands a satisfactory explanation. When Camp 2 point to experience as explanandum, they're not saying ‘it would be useful and satisfying to have an explanation of the physical sequence of events that lead up to this state’; they're saying ‘there is something going on here that we don't even know how to explain in terms of a physical sequence of events’. Quoting the original post, in this view ‘even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding.’

Replies from: cubefox
comment by cubefox · 2023-07-06T12:05:12.574Z · LW(p) · GW(p)

Would it be fair to say then that by ‘mental states’ you mean ‘everything that the brain does that the brain can itself be aware of’?

Yeah, aware of, or conscious of. Psychosis seems to be less a mental state in this sense than a disposition to produce certain mental states.

Suppose I then conduct a test flight in which the aircraft's autopilot is disconnected from the rudder, and discover a set of conditions that reliably causes the autopilot to form an incorrect model of the state of the aircraft, such that if the autopilot was connected to the rudder it would command a sudden hardover to ‘correct’ the situation. It seems quite reasonable in this case for the explanandum to be ‘the autopilot forming an incorrect model of the state of the aircraft’.

What you call "model" here would presumably correspond only to the externally observable neural correlate of a belief, not to the belief. The case would be the same for a neural correlate of an experience, so this doesn't provide a difference between the two. Explaining the neural correlate is of course just as "easy" as explaining an utterance. The hard problem is to explain actual mental states with their correlates. So the case doesn't explain the belief/experience in question in terms of this correlate. It would be unable to distinguish between a p-zombie, which does have the neural correlate but not the belief/experience, and a normal person. So it seems that either the explanation is unsuccessful as given, since it stops at the neural correlate and doesn't explain the belief or experience, or you assume the explanandum is actually just the neural correlate, not the belief.

Replies from: wilkox
comment by wilkox · 2023-07-07T09:27:59.827Z · LW(p) · GW(p)

Apologies for the repetition, but I'm going to start by restating a slightly updated model of what I think is going on, because it provides the context for the rest of my comment. Basically I still think there are two elements to our disagreement:

  1. The Camp 1 vs Camp 2 disagreement. Camp 1 thinks that a description of the physical system would completely and satisfactorily explain the nature of consciousness and subjective experience; Camp 2 thinks that there is a conceptual element of subjective experience that we don't currently know how to explain in physical terms, even in principle. Camp 2 thinks there is a capital-H Hard Problem of consciousness, the ‘conceptual mystery’ in Rafael's post; Camp 1 does not. I am in Camp 1, and as best I can tell you are in Camp 2.
  2. You think that all(?) ‘mental states’ pose this conceptual Hard Problem, including intentional phenomena like thoughts and beliefs as well as more ‘purely subjective’ phenomena like experiences. My impression is that this is a mildly unorthodox position within Camp 2, although as I mentioned in my original comment I've never really understood e.g. what Nagel was trying to say about the relationship between mental phenomena being only directly accessible to a single mind and them being Hard to explain, so I might be entirely wrong about this. In any case, because I don't believe that there is a conceptual mystery in the first place, the question of (e.g.) whether the explanandum is an utterance vs a belief means something very different to me than it does to you. When I talk about locating the explanandum at utterances vs beliefs, I'm talking about the scope of the physical system to be explained. When you talk about it, you're talking about the location(s) of the conceptual mystery.

What you call "model" here would presumably correspond only to the the externally observable neural correlate of a belief, not to the belief. The case would be the same for a neural correlate of an experience, so this doesn't provide a difference between the two. Explaining the neural correlate is of course just as "easy" as explaining an utterance. The hard problem is to explain actual mental states with their correlates. So the case doesn't explain the belief/experience in question in terms of this correlate.

As a Camp 1 person, I don't think that there is any (non-semantic) difference between the observable neurological correlates of a belief or any other mental phenomenon and the phenomenon itself.  Once we have a complete physical description of the system, we Camp 1-ites might bicker over exactly which bits of it correspond to ‘experience’ and ‘consciousness’, or perhaps claim that we have reductively dissolved such questions [LW · GW] entirely; but we would agree that these are just arguments over definitions rather than pointing to anything actually left unexplained. I don't think there is a Hard Problem.

It would be unable to distinguish between a p-zombie, which does have the neural correlate but not the belief/experience, and a normal person.

I take Dennett's view on p-zombies, i.e. they are not conceivable.

So it seems that either the explanation is unsuccessful as given, since it stops at the neural correlate and doesn't explain the belief or experience, or you assume the explanandum is actually just the neural correlate, not the belief.

In the Camp 1 view, once you've explained the neural correlates, there is nothing left to explain; whether or not you have ‘explained the belief’ becomes an argument over definitions [? · GW].

comment by Signer · 2023-07-06T02:46:15.661Z · LW(p) · GW(p)

Intuitively it doesn’t seem possible to be wrong about one’s own current mental states. Imagine a patient complains to a doctor about having a terrible headache. The doctor replies: “You may be sure you are having a terrible headache, but maybe you are wrong and actually don’t have a headache at all.”

Of course it's possible, at least in principle: the doctor could have connected all your neurons that detect a headache and generate thoughts about it to another person's neurons that generate the headache. Then you would be sure that you are having a headache, but actually it is the other person who is having it.

comment by Ape in the coat · 2023-07-04T17:51:53.537Z · LW(p) · GW(p)

You can definitely be mistaken regarding what the headache means. When the headache is extreme you may feel as if you are dying. Yet, despite feeling this way, you may not actually die.

Likewise you may feel as if your feelings are immaterial even though they are not. As soon as the question isn't just about your immediate experience but also about how this experience is related to the world, you may very well be wrong.

comment by TAG · 2023-07-08T11:13:35.151Z · LW(p) · GW(p)

You have privileged access to the contents of your own mind that you don't have to the contents of other people's minds, by virtue of the mundane physical fact that the neurones in your brain are connected to the other neurones in your brain but not to the neurones in other people's brains.

You don't just have a level of access, you have a type of access. Your access to your own mind isn't like looking at a brain scan.

I’ve never been able to figure out if Thomas Nagel, in ‘What is it like to be a bat?’, believes that the mere existence of this sort of privileged evidence about one’s own experiences tells us something about the nature of qualia/subjectivity.

The Mary's Room thought experiment brings it out. Mary has complete access to someone else's mental state, from the outside, but still doesn't experience it from the inside.

Replies from: wilkox
comment by wilkox · 2023-07-08T13:31:34.622Z · LW(p) · GW(p)

You don't just have a level of access, you have a type of access. Your access to your own mind isn't like looking at a brain scan.

From my Camp 1 perspective, this just seems like a restatement of what I wrote. My direct access to my own mind isn't like my indirect access to other people's minds; to understand another person's mind, I can at best gather scraps of sensory data like ‘what that person is saying’ and try to piece them together into a model. My direct access to my own mind isn't like looking at a brain scan of my own mind; to understand a brain scan, I need to gather sensory data like ‘what the monitor attached to the brain scanner shows’ and try to piece them into a model. This seems to be completely explained by the fact that my brain can only gather data about the external world through a handful of imperfect sensory channels, while it can gather data about its own internal processes through direct introspection. To make things worse, my brain is woefully underpowered for the task of modelling complex things like brains, so it's almost inevitable that any model I construct will be imperfect. Even a scan of my own brain would give me far less insight into my mind than direct introspection, because brains are hideously complicated and I'm not well-equipped to model them.

Whether you call that a ‘level’ or ‘type’ of access, I'm still no closer to understanding how Nagel relates the (to me mundane) fact that these types of access exist to the ‘conceptual mystery’ of qualia or consciousness.

The Mary's Room thought experiment brings it out. Mary has complete access to someone else's mental state, from the outside, but still doesn't experience it from the inside.

Imagine a one-in-a-million genetic mutation that causes a human brain to develop a Simulation Centre. The Simulation Centre might be thought of as a massively overdeveloped form of whatever circuitry gives people mental imagery. It is able to simulate real-world physics with the fidelity of state-of-the-art computer physics simulations, video game 3D engines, etc. The Simulation Centre has direct neural connections to the brain's visual pathways that, under voluntary control, can override the sensory stream from the eyes. So, while a person with strong mental imagery might be able to fuzzily visualise something like a red square, a person with the Simulation Centre mutation could examine sufficiently detailed blueprints for a building and have a vivid photorealistic visual experience of looking at it, indistinguishable from reality.

Poor Mary, locked in her black-and-white room, doesn't have a Simulation Centre. No matter how much information she is given about what wavelengths correspond to the colour blue, she will never have the visual experience of looking at something blue. Lucky Sue, Mary's sister, was born with the Simulation Centre mutation. Even locked in a neighbouring black-and-white room, when she learns about the existence of materials that don't reflect all wavelengths of light but only some wavelengths, Sue decides to model such a material in her Simulation Centre, and so is able to experience looking at the colour blue.

In other words: the Mary's Room thought experiment seems to me (again, from a Camp 1 perspective) to illustrate that our brains lack the machinery to turn a conceptual understanding of a complex physical system into subjective experience.[1] This seems like a mundane fact about our brains (‘we don't have Simulation Centres’) rather than pointing to any fundamental conceptual mystery.

  1. ^

    This might just be a matter of degree. Some people apparently can do things like visualise a red square, and it seems reasonable that a person who had seen shapes of almost every colour before but had never happened to see a red square could nevertheless visualise one if given the concept.

Replies from: TAG
comment by TAG · 2023-07-08T18:41:35.209Z · LW(p) · GW(p)

From my Camp 1 perspective, this just seems like a restatement of what I wrote. My direct access to my own mind isn’t like my indirect access to other people’s minds; to understand another person’s mind, I can at best gather scraps of sensory data like ‘what that person is saying’ and try to piece them together into a model

At this point, I can prove to you that you are actually in Camp #2. All I have to do is point out that the kind of access you have to your mind is (or rather includes) qualia!

I’m still no closer to understanding how Nagel relates the (to me mundane) fact that these types of access exist to the ‘conceptual mystery’ of qualia or consciousness

The mystery relates entirely to the expectation that there should be a reductive physical explanation of qualia.

The Hard Problem of Qualia

Whilst science has helped with some aspects of the mind-body problem, it has made others more difficult, or at least exposed their difficulty. In pre-scientific times, people were happy to believe that the colour of an object was an intrinsic property of it, which was perceived to be as it was. This "naive realism" was disrupted by a series of discoveries, such as the absence of anything resembling subjective colour in scientific descriptions, and a slew of reasons for recognising a subjective element in perception.

A philosopher's stance on the fundamental nature of reality is called an ontology. The success of science in the twentieth and twenty-first centuries has led many philosophers to adopt a physicalist ontology, basically the idea that the fundamental constituents of reality are what physics says they are. (It is a background assumption of physicalism that the sciences form a sort of tower, with psychology and sociology near the top, biology and chemistry in the middle, and physics at the bottom. The higher and intermediate layers don't have their own ontologies -- mind-stuff and elan vital are outdated concepts -- everything is either a fundamental particle, or an arrangement of fundamental particles.)

So the problem of mind is now the problem of qualia, and the way philosophers want to explain it is physicalistically. However, the problem of explaining how brains give rise to subjective sensation, of explaining qualia in physical terms, is now considered to be The Hard Problem. In the words of David Chalmers:

" It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.”

What is hard about the hard problem is the requirement to explain consciousness, particularly conscious experience, in terms of a physical ontology. It's the combination of the two that makes it hard. Which is to say that the problem can be sidestepped by either denying consciousness, or adopting a non-physicalist ontology.

Examples of non-physical ontologies include dualism, panpsychism and idealism. These are not faced with the Hard Problem, as such, because they are able to say that the subjective, qualia, just are what they are, without facing any need to offer a reductive explanation of them. But they have problems of their own, mainly that physicalism is so successful in other areas.

Eliminative materialism and illusionism, on the other hand, deny that there is anything to be explained, thereby implying there is no problem. But these approaches also remain unsatisfactory because of the compelling subjective evidence for consciousness.

Now, maybe Nagel doesn't say all that, but he's not the only occupant of camp #2.

Poor Mary, locked in her black-and-white room, doesn’t have a Simulation Centre. No matter how much information she is given about what wavelengths correspond to the colour blue, she will never have the visual experience of looking at something blue. Lucky Sue, Mary’s sister, was born with the Simulation Centre mutation. Even locked in a neighbouring black-and-white room, when she learns about the existence of materials that don’t reflect all wavelengths of light but only some wavelengths, Sue decides to model such a material in her Simulation Centre, and so is able to experience looking at the colour blue.

That doesn't prove anything relevant, because Mary's sister is not creating or using a reductive physical explanation. Maybe her visualisation abilities, and everybody else's, use non-physical pixie dust. Nothing about her ability refutes that claim, because it's an ability, not an explanation.

Physicalists sometimes respond to Mary's Room by saying that one cannot expect Mary to actually instantiate Red herself just by looking at a brain scan. It seems obvious to them that a physical description of a brain state won't convey what that state is like, because it doesn't put you into that state. As an argument for physicalism, the strategy is to accept that qualia exist, but argue that they present no unexpected behaviour, or other difficulties for physicalism.

That is correct as stated but somewhat misleading: the problem is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it. Obviously, it is true that a description of a brain state won't put you into that brain state. But that doesn't show that there is nothing unusual about qualia. The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.

If another version of Mary were shut up to learn everything about, say, nuclear fusion, the question "would she actually know about nuclear fusion" could only be answered "yes, of course... didn't you just say she knows everything?" The idea that she would have to instantiate a fusion reaction within her own body in order to understand fusion is quite counterintuitive. Similarly, a description of photosynthesis will not make you photosynthesise, and photosynthesising would not be needed for a complete understanding of photosynthesis.

In other words: the Mary’s Room thought experiment seems to me (again, from a Camp 1 perspective) to illustrate that our brains lack the machinery to turn a conceptual understanding of a complex physical system into subjective experience.[1] [LW(p) · GW(p)] This seems like a mundane fact about our brains

The fact that we have experience at all is mundane...yet it has no explanation. Mundane and mysterious just aren't opposites. We experience gravity all the time, but it's still hard to understand.

Replies from: Signer, Signer
comment by Signer · 2023-07-09T06:25:17.517Z · LW(p) · GW(p)

All I have to do is point out that the kind of access you have to your mind is (or rather includes) qualia!

And because there is a physicalist explanation for the difference of access, there is a physicalist explanation for qualia, and the problem is solved.

Replies from: TAG
comment by TAG · 2023-07-09T11:56:22.747Z · LW(p) · GW(p)

It is not an explanation to predict that one thing is different from another in an unspecified way.

Replies from: Signer
comment by Signer · 2023-07-09T18:24:01.770Z · LW(p) · GW(p)

Yes, but the actual explanation is obviously possible. One access is different from another because one is between regions of the brain via neurons, and the other is between brain and brain scan via vision. What part do you think is impossible to specify?

Replies from: TAG
comment by TAG · 2023-07-10T18:45:05.954Z · LW(p) · GW(p)

The qualia. How does a theory describe a subjective sensation?

comment by Signer · 2023-07-09T06:20:49.564Z · LW(p) · GW(p)

The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.

Riding a bicycle. And you need to instantiate a brain state to know anything: instantiating brain states is what it means for a brain to know something. The explanation for "why it seems to be unnecessary in other cases" is "people are bad at physics".

Or you can use a sensible theory of knowledge where Mary understands everything about seeing red without seeing it, and the explanation for "why it seems that she doesn't understand" is "people are bad at distinguishing between being and knowing".

I mean, there is a physicalist explanation of everything about this scenario. You could make arguments on the level of "but people find it confusing for a couple of seconds!" against the physicality of anything from mirrors to levers.

Replies from: TAG
comment by TAG · 2023-07-09T11:58:36.608Z · LW(p) · GW(p)

And you need to instantiate a brain state to know anything

No, knowledge can be stored outside brains.

Mary understands everything about seeing red without seeing it and the explanation for “why it seems that she doesn’t understand” is “people are bad at distinguishing between being and knowing”.

Or people insist by fiat that they are the same, when they are plainly different.

comment by Rafael Harth (sil-ver) · 2023-07-03T07:22:19.722Z · LW(p) · GW(p)

Yeah, I agree with both points. I edited the post to reflect it; for the whole brain vs parts thing I just added a sentence; for the kind of access thing I made it a footnote and also linked to your comment. As you said, it does seem like a refinement of the model rather than a contradiction, but it's definitely important enough to bring up.

comment by Richard_Kennaway · 2023-07-04T09:14:06.980Z · LW(p) · GW(p)

I think there's a pervasive error being made by both camps, although more especially Camp 2 (and I count myself in Camp 2). There is a frantic demand for and grasping after explanations, to the extent of counting the difficulty of the problem as evidence for this or that solution. "What else could it be [but my theory]?"

We are confronted with the three buttons labelled "Explain", "Ignore", and "Worship" [LW · GW]. A lot of people keep on jabbing at the "Explain" button, but (in my view) none of the explanations get anywhere. Some press the "Ignore" button and proclaim it an inherently unsolvable problem. Some press "Worship" and declare consciousness to be the fundamental reality of the universe.

I recognise my inability to hit "Explain" and the futility of "Ignore" and "Worship". I leave the buttons floating there. On the one hand, I have subjective experience, but on the other, nothing else we yet know about the world seems capable even in principle of providing an explanation. None that I have seen from either camp seems to me to even touch the problem. Nevertheless, I recognise that the problem is still there.

comment by omegastick (isaac-poulton) · 2023-07-03T00:04:32.472Z · LW(p) · GW(p)

This probably isn't the case, but I secretly wonder if the people in camp #1 are p-zombies.

Replies from: Richard_Kennaway, Ape in the coat
comment by Richard_Kennaway · 2023-07-04T08:51:46.437Z · LW(p) · GW(p)

They wouldn't strictly be p-zombies, because by definition, p-zombies display behaviour indistinguishable from non-zombies. Instead, Camp 1 people notably talk about consciousness differently from Camp 2 — as one would expect if they have different experiences of their own consciousness, or none at all.

So Camp 1 are just ordinary zombies.

ETA: It's not just you and me. Some actual psychologists have speculated that grand psychological theories are nothing more than accounts of their creators' subjective experiences of themselves. Radical behaviourists are the ones without such experience. I don't have easily findable references, but I just found a mention of this book, whose title is suggestive: "Psychology's Grand Theorists: How Personal Experiences Shaped Professional Ideas".

Replies from: Ape in the coat
comment by Ape in the coat · 2023-07-04T09:33:33.437Z · LW(p) · GW(p)

Wait a second! I think you are onto something. What if it's Camp 2 people who are p-zombies? Lacking the ability to experience things but pretending that they do, they overcompensate with singing dithyrambs to qualia and subjective experience, proclaiming its obvious fundamentality due to how awesome the experience allegedly is!

While regular people, who have consciousness, notice that while it's curious and an obvious starting point for epistemology, after some amount of evidence it becomes very likely that the same material laws that work for everything else work for consciousness as well, and that it will eventually be explained by matter interactions.

Joking, of course.

comment by Ape in the coat · 2023-07-04T09:19:25.508Z · LW(p) · GW(p)

See this comment [LW(p) · GW(p)] and my ongoing discussion with Carl Feynman.

comment by Noosphere89 (sharmake-farah) · 2023-09-30T21:52:11.643Z · LW(p) · GW(p)

I'm going to argue a complementary story: the basic reason why it's so hard to talk about consciousness has to do with two issues present in consciousness research, both of which make productive research impossible:

  1. Extraordinarily terrible feedback loops, almost reminiscent of the pre-deep-learning alignment work on LW (I'm looking at you, MIRI; albeit even then it achieved more than consciousness research has to date, and LW is slowly shifting to a mix of empirical and governance work, which is quite a lot faster than any consciousness-related work has managed, and its transition is more complete than the consciousness field's.)

  2. To the extent that we have feedback loops, we basically only have one: competing introspections/intuition pumps. And the problem here is that, for our purposes, humans are extraordinarily terrible at reporting their internal states accurately, so self-reports are basically worthless. In essence, you can't get any useful evidence at all, and thus consciousness discourse goes nowhere useful.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2023-10-01T08:02:36.055Z · LW(p) · GW(p)

Agreed. My impression has been for a while that there's a super weak correlation (if any) between whether an idea goes in the right direction and how well it's received. Since there's rarely empirical data, one would hope for an indirect correlation where correctness correlates with argument quality, and argument quality correlates with reception, but the second one is almost non-existent in academia.

comment by Seth Herd · 2023-07-04T18:39:25.198Z · LW(p) · GW(p)

Great post! I think this captures most of the variance in consciousness discussions.

I've been interested in consciousness through a 23-year career in computational cognitive neuroscience. I think making progress on bridging the gap between camp 1 and camp 2 requires more detailed explanations of neural dynamics. Those can be inferred from empirical data, but not easily, so I haven't seen any explanations similar to the one I've been developing in my head. I haven't published on the topic because it's more of a liability for a neuroscience career than an asset. Now that I'm working on AI safety, consciousness seems like a distraction. It's tempting to write a long post about it, since this community seems substantially better at engaging with the topic than neuroscientists are; but the time cost is still substantial. If I do write such a post, I'll cite this one in framing the issue.

One possible route to people actually caring about explanations of consciousness is in public debates on AI consciousness. People are already questioning whether LLMs might have some sort of consciousness and might therefore be worthy of ethical consideration the way people are. It won't take much more person-like behavior (I think just a consistent memory) before that debate becomes really interesting to me, and I think to the public at large. That wouldn't give an LLM phenomenal consciousness much like a human's, but it would give enough self-awareness to strike a lot of people as worthy of ethical consideration.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2023-07-04T19:16:01.391Z · LW(p) · GW(p)

Thanks!

I've been interested in consciousness through a 23-year career in computational cognitive neuroscience. I think making progress on bridging the gap between camp 1 and camp 2 requires more detailed explanations of neural dynamics. Those can be inferred from empirical data, but not easily, so I haven't seen any explanations similar to the one I've been developing in my head. I haven't published on the topic because it's more of a liability for a neuroscience career than an asset. Now that I'm working on AI safety, consciousness seems like a distraction. It's tempting to write a long post about it, since this community seems substantially better at engaging with the topic than neuroscientists are; but the time cost is still substantial. If I do write such a post, I'll cite this one in framing the issue.

If you do, and if you're interested in exchanging ideas, feel free to reach out. I've been thinking about this topic for several years now and am also planning to write more about it, though that could take a while.

comment by Steven Byrnes (steve2152) · 2023-07-03T13:30:24.115Z · LW(p) · GW(p)

Strong agree all around—this post echoes a comment I made here [LW(p) · GW(p)] (me in Camp #1, talking to someone in Camp #2):

If you ask me a question about, umm, I’m not sure the exact term, let’s say “3rd-person-observable properties of the physical world that have something to do with the human brain”…then I feel like I’m on pretty firm ground, and that I’m in my comfort zone, and that I’m able to answer such questions, at least in broad outline and to some extent at a pretty gory level of detail. (Some broad-outline ingredients are in my old post here [LW · GW], and I’m open to further discussion as time permits.)

BUT, I feel like that’s probably not the game you want to play here. My guess is that, even if I perfectly nail every one of those “3rd-person” questions above, you would still say that I haven’t even begun to engage with the nature of qualia, that I’m missing the forest for the trees, whatever. (I notice that I’m putting words in your mouth; feel free to disagree.)

If I’m correct so far, then this is a more basic disagreement about the nature of consciousness and how to think about it and learn about it etc. You can see my “wristwatch” discussion here [LW · GW] for basically where I’m coming from. But I’m not too interested in hashing out that disagreement, sorry. For me, it’s vaguely in the same category as arguing with a theology professor about whether God exists (I’m an atheist): My position is “Y’know, I really truly think I’m right about this, but there’s a gazillion pages of technical literature on this topic, and I’ve read practically none of it, and my experience strongly suggests that we’re not going to make any meaningful progress on this disagreement in the amount of time that I’m willing to spend talking about it.” :-P Sorry!

comment by TAG · 2023-07-03T11:56:50.825Z · LW(p) · GW(p)

-It's obvious that conscious experience exists.

-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so

-You mean, it looks from the outside. But I'm not just talking about the computational process, which I am not even aware of as such, I am talking about conscious experience.

-Define qualia

-Look at a sunset. The way it looks is a quale. Taste some chocolate. The way it tastes is a quale.

-Well, I got my experimental subject to look at a sunset and taste some chocolate, and wrote down their reports. What's that supposed to tell me?

-No, I mean you do it.

-OK, but I don't see how that proves the existence of non-material experience stuff!

-I didn't say it does!

-But you qualophiles are all the same -- you're all dualists and you all believe in zombies!

-Sigh....!

In other words, the two camps disagree about the epistemic status of apparently perceived experiences: for Camp #2, they’re epistemic bedrock, whereas for Camp #1, they’re model outputs of your brain, and like all model outputs of your brain, they can be wrong.

Both camps are broad. You don't have to regard qualia as incorrigible to belong in camp #2. You don't have to believe in zombies, either.

Something that is very important for camp #2, but not mentioned in the OP, is the irreducibility of qualia. We don't have a reductive explanation of qualia, and according to the intended conclusion of the Mary's Room thought experiment, we can't -- complete physical knowledge just isn't enough. The belief in irreducibility is much more of a sine qua non of qualia[philia].

Consciousness Explained gets brought up a lot more than The Conscious Mind.

And on top of that, Chalmers's views are regularly strawmanned and misrepresented in a way that Dennett's aren't.

Eg.

https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies?commentId=mDcrepyDkhyoZX9JJ [LW(p) · GW(p)]

ETA:

https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies?commentId=kZ57nbWk8SennPuzm [LW(p) · GW(p)]

And of course,

https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies?commentId=5qKe5gQ8HWgfRq9Dw [LW(p) · GW(p)]

Replies from: torekp, Signer
comment by torekp · 2023-07-04T19:29:38.925Z · LW(p) · GW(p)

The belief in irreducibility is much more of a sine qua non of qualiaphobia,

Can you explain that?  It seems that plenty of qualiaphiles believe they are irreducible, epistemically if not metaphysically.  (But not all:  at least some qualiaphiles think qualia are emergent metaphysically.  So, I can't explain what you wrote by supposing you had a simple typo.)

comment by Signer · 2023-07-03T13:43:08.547Z · LW(p) · GW(p)

What is misrepresented in the linked comment?

Replies from: TAG
comment by TAG · 2023-07-03T14:15:10.289Z · LW(p) · GW(p)

It isn't an example of misrepresentation: it points out a misrepresentation. As in the first sentence.

Replies from: Signer
comment by Signer · 2023-07-03T18:11:31.028Z · LW(p) · GW(p)

Ok, then I don't get what misinterpretation is not addressed in https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies?commentId=chZLkQ8Piu4J5ibC9 [LW(p) · GW(p)]. Or is it just that the post itself presents Chalmers as believing in epiphenomenalism (which he shouldn't do), when he actually believes in epiphenomenalism|dualism|monism (which he also shouldn't do)?

comment by torekp · 2023-07-04T20:28:10.686Z · LW(p) · GW(p)

There are many features you get right about the stubbornness of the problem/discussion.  Certainly, modulo the choice to stop the count at two camps, you've highlighted some crucial facts about these clusters.  But now I'm going to complain about what I see as your missteps.

Moreover, even if consciousness is compatible with the laws of physics, ... [camp #2 holds] it's still metaphysically tricky, i.e., it poses a conceptual mystery relative to our current understanding.

I think we need to be careful not to mush together metaphysics and epistemics.  A conceptual mystery, a felt lack of explanation - these are epistemic problems.  That's not sufficient reason to infer distinct metaphysical categories.  Particular camp #2 philosophers sometimes have arguments that try to go from these epistemic premises, plus additional premises, to a metaphysical divide between mental and physical properties.  Those arguments fail, but aside from that, it's worthwhile to distinguish their starting points from their conclusions.

Secondly, you imply that according to camp #2, statements like "I experienced a headache" cannot be mistaken.  As TAG already pointed out, the claim of incorrigibility is not necessary.  As soon as one uses a word or concept, one is risking error.  Suppose you are at a new restaurant, and you try the soup, and you say, "this soup tastes like chicken."  Your neighbor says, "no, it tastes like turkey."  You think about it, the taste still fresh in your mind, and realize that she is right.  It tastes (to you) like turkey, you just misidentified it.

Finally, a bit like shminux, I don't know which camp I'm in - except that I do, and it's neither.  Call mine camp 1.5 + 3i.  It's sort of in-between the main two (hence 1.5) but accuses both 1 + 2 of creating imaginary barriers (hence the 3i).

Replies from: TAG, sil-ver
comment by TAG · 2023-07-05T14:35:18.998Z · LW(p) · GW(p)

I think we need to be careful not to mush together metaphysics and epistemics. A conceptual mystery, a felt lack of explanation—these are epistemic problems. That’s not sufficient reason to infer distinct metaphysical categories.

It's nonetheless the best reason. The number of times you should add new ontological categories isn't zero, ever -- even if you shouldn't add a category every time you are confused. Physicists were not wrong to add the nuclear forces to gravity and electromagnetism.

Unfortunately, there is no simple algorithm to tell you when you should add categories.

Particular camp #2 philosophers sometimes have arguments that try to go from these epistemic premises, plus additional premises, to a metaphysical divide between mental and physical properties. Those arguments fail,

Do they? Camp #1 is generally left with denialism about qualia (including illusionism), or promissory physicalism, neither of which is hugely attractive. Regarding promissory physicalism, it's a subjective judgement, not a proof, that we will have a full reductive explanation of consciousness one day, so it is quite cheeky to call the other camp "wrong" because they have a subjective judgement that we won't.

Fair point about the experience itself vs its description. But note that all the controversy is about the descriptions.

No, it's about the implications. People are quite explicit that they don't want to believe in qualia because they don't want to have to believe in epiphenomenalism, zombies, non-physical properties, etc.

Of course, rejecting evidence because it doesn't fit a theory is the opposite of rationality.

In general, you can also just hold that consciousness is a different way to look at the same process, which is sometimes called dual-aspect monism, and that’s physicalist, too.

Well, materialist -- it doesn't require immaterial substances or non-physical properties, but it also denies that all facts are physical facts, contra strong physicalism.

I don't see DANM as a radical third option to the two camps, I see it as the lightweight or minimalist position in camp #2.

comment by Rafael Harth (sil-ver) · 2023-07-04T20:43:05.537Z · LW(p) · GW(p)

I think we need to be careful not to mush together metaphysics and epistemics. A conceptual mystery, a felt lack of explanation - these are epistemic problems. That's not sufficient reason to infer distinct metaphysical categories. Particular camp #2 philosophers sometimes have arguments that try to go from these epistemic premises, plus additional premises, to a metaphysical divide between mental and physical properties. Those arguments fail, but aside from that, it's worthwhile to distinguish their starting points from their conclusions.

Agreed; too tired right now but will think about how to rewrite this part.

Secondly, you imply that according to camp #2, statements like "I experienced a headache" cannot be mistaken. As TAG already pointed out, the claim of incorrigibility is not necessary. As soon as one uses a word or concept, one is risking error. Suppose you are at a new restaurant, and you try the soup, and you say, "this soup tastes like chicken." Your neighbor says, "no, it tastes like turkey." You think about it, the taste still fresh in your mind, and realize that she is right. It tastes (to you) like turkey, you just misidentified it.

I don't think I said that. I think I said that Camp #2 claims one cannot be wrong about the experience itself. I agree (and I don't think the post claims otherwise) that errors can come in during the step from the experience to the task of finding a verbalization of the experience. You chose an example where that step is particularly risky, hence it permits a larger error.

Note that for Camp #2, you can draw a pretty sharp line between conscious and unconscious modules in your brain, and finding the right verbalization is mostly an unconscious process.

Replies from: torekp
comment by torekp · 2023-07-05T00:32:59.504Z · LW(p) · GW(p)

Fair point about the experience itself vs its description.  But note that all the controversy is about the descriptions.  "Qualia" is a descriptor, "sensation" is a descriptor, etc.  Even "illusionists" about qualia don't deny that people experience things.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2023-07-05T10:13:55.079Z · LW(p) · GW(p)

Alright, so I changed the paragraph into this:

Conversely, Camp #2 is convinced that there is an experience thing that exists in a fundamental way. There's no agreement on what this thing is – some postulate causally active non-material stuff, whereas others agree with Camp #1 that there's nothing operating outside the laws of physics – but they all agree that there is something that needs explaining. Moreover, even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding. A complete solution (if it is even possible) may also have a nontrivial metaphysical component.

I think a lot of Camp #2 people want to introduce new metaphysics, which is why I don't want to take out the last sentence.

But note that all the controversy is about the descriptions. "Qualia" is a descriptor, "sensation" is a descriptor, etc. Even "illusionists" about qualia don't deny that people experience things.

I don't think this is true. E.g., Dennett has these bits in Consciousness Explained: 1, 2, 3, 4.

Of course, the issue is still tricky, and you're definitely not the only one who thinks it's just a matter of description, not existence. Almost everyone agrees that something exists, but Camp #2 people tend to want something to exist over and above the reports of that thing, and Dennett seems to deny this. And (as I mentioned in some other comment) part of the point of this post is that you empirically cannot nail down exactly what this thing is in a way that makes sense to everyone. But I think it's reasonable to say that Dennett doesn't think people experience things.

Also, Dennett in particular says that there is no ground truth as to what you experience, and this is arguably a pretty well-defined property that's in contradiction with the idea that the experience itself exists. Like, I think Camp #2 people will generally hold that, even if errors can come in during the reports of experience, there is still always a precise fact of the matter as to what is being experienced. And depending on their metaphysics, it would be possible to figure out what exactly that is with the right neurotech.

And another reason why I don't think it's true is because then I think illusionism wouldn't matter for ethics, but as I mentioned in the post, there are some illusionists who think their position implies moral nihilism. (There are also people who differentiate illusionism and eliminativism based on this point, but I'm guessing you didn't mean to do that.)

Replies from: torekp
comment by torekp · 2023-07-05T23:49:44.812Z · LW(p) · GW(p)

this [that there is no ground truth as to what you experience] is arguably a pretty well-defined property that's in contradiction with the idea that the experience itself exists.

I beg to differ.  The thrust of Dennett's statement is easily interpreted as the truth of a description being partially constituted by the subject's acceptance of the description.  E.g., in one of the snippets/bits you cite, "I seem to see a pink ring."  If the subject said "I seem to see a reddish oval", perhaps that would have been true.  But compare:

My freely drinking tea rather than coffee is partially constituted by saying to my host "tea, please."  Yet there is still an actual event of my freely drinking tea.  Even though if I had said "coffee, please" I probably would have drunk coffee instead.

We are getting into a zone where it is hard to tell what is a verbal issue and what is a substantive one.  (And in my view, that's because the distinction is inherently fuzzy.)  But that's life.

comment by Ape in the coat · 2023-07-03T11:58:27.646Z · LW(p) · GW(p)

I think you are a bit off the mark.

As a reductive materialist, expecting to find a materialistic explanation for consciousness, in your model I'd be Camp 2. And yet in the dialogue

"It's obvious that consciousness exists."

-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so-

"I'm not just talking about the computational process. I mean qualia obviously exists."

-Define qualia.

"You can't define qualia; it's a primitive. But you know what I mean."

-I don't. How could I if you can't define it?

"I mean that there clearly is some non-material experience stuff!"

-Non-material, as in defying the laws of physics? In that case, I do get it, and I super don't-

"It's perfectly compatible with the laws of physics."

-Then I don't know what you mean.

"I mean that there's clearly some experiential stuff accompanying the physical process."

-I don't know what that means.

I much more relate to the position expressed via the bold text. Because the Camp 2 person here smuggles in the assumption that qualia are non-material and that experience stuff is separate from the physical process. Some definitions of consciousness/experience/qualia are incoherent, and we may not even know it yet because we do not know all the related physics. Using the Socratic method here and refusing to accept vague handwaving aka "you know what I mean" is a valid strategy to get rid of the confusion around the topic.

People are confused about consciousness due to their initial intuitions, and then they find full-fledged philosophies built around this confusion, created long before we made substantial progress in neuroscience. Thus people get validation of their confusion and may form their identities around it, instead of trying to resolve it. And for these historic reasons we have completely different narratives around consciousness, not unlike political ones, among those who try to resolve their confusion and those who don't.

According to Camp #1, the correct explanandum is still "I claim to have experienced X"

I wonder: can we mend the rift by introducing a bit of recursion? Consider:

I experience having experienced X.

Now I can make statements like: 

My experience of experience is different from my experience of matter.

Without smuggling in the assumption that experience is indeed different from matter. It's curious how many philosophical problems can be deconfused by a proper understanding of the map-territory framework.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2023-07-03T12:10:56.840Z · LW(p) · GW(p)

Thanks for that comment. Can you explain why you think you're Camp #2 according to the post? Because based on this reply, you seem firmly (in fact, quite obviously) in Camp #1 to me, so there must be some part of the post where I communicated very poorly.

( ... guessing for the reason here ...) I wrote in the second-last section that consciousness, according to Camp #1, has fuzzy boundaries. But that just means that the definition of the phenomenon has fuzzy boundaries, meaning that it's unclear when consciousness would stop being consciousness if you changed the architecture slightly (or built an AI with similar architecture). I definitely didn't mean to say that there's fuzziness in how the human brain produces consciousness; I think Camp #1 would overwhelmingly hold that we can, in principle, find a full explanation that precisely maps out the role of every last neuron.

Was that section the problem? Or something else?

Replies from: Ape in the coat
comment by Ape in the coat · 2023-07-03T12:44:51.630Z · LW(p) · GW(p)

At first I also thought that I'm a central example of Camp 1 based on the general vibes, but then I reread the descriptions. I've bolded the things that I agree with in both of them.

Camp #1 tends to think of consciousness as a non-special high-level phenomenon. Solving consciousness is then tantamount to solving the Meta-Problem of consciousness, which is to explain why we think/claim to have consciousness. In other words, once we've explained why people keep uttering the sounds kon-shush-nuhs, we've explained all the hard observable facts, and the idea that there's anything else seems dangerously speculative/unscientific. No complicated metaphysics is required for this approach.

Conversely, Camp #2 is convinced that there is an experience thing that exists in a fundamental way. There's no agreement on what this thing is – theories range anywhere from hardcore physicalist accounts to substance dualists that postulate causally active non-material stuff – but they all agree that there is something that needs explaining. Also, getting your metaphysics right is probably a part of making progress.

I do not think that explaining why people talk about consciousness is the same as explaining what consciousness is. People talk about "consciousness" because they possess some mental property that they call "consciousness". What exactly this is is still an open problem. I expect to find something like a specific encoding that my brain uses to translate signals from my body to the interface that the central planning agent interacts with. And while I agree that no complicated metaphysics is required, discarding metaphysics still counts as getting it exactly right. I do not think that consciousness is fundamental, but as you've included hardcore physicalist accounts in Camp 2, I'm definitely Camp 2.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2023-07-03T13:11:32.034Z · LW(p) · GW(p)

Okay, that makes a lot of sense. I'm still pretty sure that you're a central example of what I meant by Camp #1, and that the problem was how I described them. In particular,

  • Solving consciousness = solving the Meta Problem: what I meant by "solving the meta problem" here entails explaining the full causal chain. So if you say "People talk about 'consciousness' because they possess some mental property that they call 'consciousness'", then this doesn't count as a solution until you also recursively unpack what this mental property is, until you've reduced it to the brain's physical implementation. So I think you agree with this claim as it was intended. The way someone might disagree is if they hold something like epiphenomenalism, where the laws of physics are not enough and additional information is required. Or, if they are physicalists, they might still hold that additional conceptual/philosophical/metaphysical work is required from our part.

  • hardcore physicalist accounts: I think virtually everyone in Camp #1 is a physicalist, whereas camp #2 is split. So this doesn't put you in camp #2.

  • getting your metaphysics right: well, this formulation was dumb since, as you say, needing to not bring strange metaphysics into the picture is also one way of getting it right. What I meant was that the metaphysics is nontrivial.

I've just rewritten the descriptions of the two camps. Ideally, you should now fully identify with the first. (Edit: I also rewrote the part about consciousness being fuzzy, since I think that was poorly phrased even if it didn't cause issues here.)

Replies from: Ape in the coat
comment by Ape in the coat · 2023-07-03T13:53:26.017Z · LW(p) · GW(p)

Okay, now Camp 1 feels more like home. Yet, I notice that I'm confused. How can anyone in Camp 2 be a physicalist then? Can you give me an example?

So if you say "People talk about 'consciousness' because they possess some mental property that they call 'consciousness'", then this doesn't count as a solution until you also recursively unpack what this mental property is, until you've reduced it to the brain's physical implementation.

Sounds about right. But just to be clear, it doesn't mean that "consciousness" equals "talks about consciousness". It's just that by explaining a bigger thing (consciousness) we will also explain the smaller one (talks about consciousness) that depends on it. I expect consciousness to be related to many other things, with talk about it being just an obvious example of a thing that wouldn't happen without consciousness.

I was under the impression that your camps were mostly about whether a person thinks there is a Hard Problem of Consciousness or not. But now it seems that they are more about whether the person includes idealism, in some sense, in their worldview? I suppose you are trying to compress both these dimensions (idealism/non-idealism, HP/non-HP) into one. And if so, I'm afraid your model is going to miss a lot of nuances.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2023-07-03T14:20:43.993Z · LW(p) · GW(p)

Sounds about right. But just to be clear, it doesn't mean that "consciousness" equals "talks about consciousness". It's just that by explaining a bigger thing (consciousness) we will also explain the smaller one (talks about consciousness) that depends on it. I expect consciousness to be related to many other things, with talk about it being just an obvious example of a thing that wouldn't happen without consciousness.

Yes, this is also how I meant it. Never meant to suggest that the consciousness phenomenon doesn't have other functional roles.

Okay, now Camp 1 feels more like home. Yet, I notice that I'm confused. How can anyone in Camp 2 be a physicalist then? Can you give me an example?

So first off, using the word physicalist in the post was very stupid since people don't agree on what it means, and the rewrite I made before my previous comment took the term out. So what I meant, and what the text now says without the term, is "not postulating causal power in addition to the laws of physics".

With that definition, lots of Camp #2 people are physicalists -- and on LW in particular, I'd guess it's well over 80%. Even David Chalmers is an example; consciousness doesn't violate the laws of physics under his model, it's just that you need additional -- but non-causally-relevant -- laws to determine how consciousness emerges from matter. In general, you can also just hold that consciousness is a different way to look at the same process, which is sometimes called dual-aspect monism, and that's physicalist, too.

I was under the impression that your camps were mostly about whether a person thinks there is a Hard Problem of Consciousness or not. But now it seems that they are more about whether the person includes idealism, in some sense, in their worldview? I suppose you are trying to compress both these dimensions (idealism/non-idealism, HP/non-HP) into one. And if so, I'm afraid your model is going to miss a lot of nuances.

I mean, I don't think it's just about the hard problem; otherwise, the post wouldn't be necessary. And I don't think you can say it's about idealism because people don't agree what idealism means. Like, the post is about describing what the camps are, I don't think I can do it better here, and I don't think there's a shorter description that will get everyone on board.

In general, another reason why it's hard to talk about consciousness (which was in a previous version of this post but I cut it) is that there's so much variance in how people think about the problem, and in what they think terms mean. Way back, gwern said about LLMs that "Sampling can prove the presence of knowledge but not the absence". The same thing is true about the clarity of concepts; discussion can prove that they're ambiguous, but never that they're clear. So you may talk to someone, or even to a bunch of people, and you'll communicate perfectly, and you may think "hooray, I have a clear vocabulary, communication is easy!". And then you talk to someone else the next day and you're talking way past each other. And it's especially problematic if you pre-select people who already agree with you.

Overall, I suspect the Camp #1/Camp #2 thing is the best (as in, the most consistently applicable and most informative) axis you'll find. Which is ultimately an empirical question, and you could do polls to figure it out. I suspect asking about the hard problem is probably pretty good (but significantly worse than the camps) and asking about idealism is probably a disaster. I also think the camps get at a more deeply rooted intuition compared to the other stuff.

Replies from: Ape in the coat
comment by Ape in the coat · 2023-07-03T17:46:07.715Z · LW(p) · GW(p)

So what I meant, and what the text now says without the term, is "not postulating causal power in addition to the laws of physics".

 

Oh I see. Yeah, that's an unconventional use of "physicalism"; I don't think I've ever seen it before.

Using the conventional philosophical language, or at least the one supported by Wikipedia and search engines, Camp 1 maps pretty well to monist materialism aka physicalism, while Camp 2 is everything else: all kinds of metaphysical pluralism, dualism, idealism, and more exotic types of monism.

Anyway, then indeed, Camp 1 is all the way for me. While I'm still a bit worried that people using such broad definitions will miss important nuance, it's a very good first approximation.

comment by Eli Tyre (elityre) · 2024-12-12T17:55:47.615Z · LW(p) · GW(p)

I think this post cleanly and accurately elucidates a dynamic in conversations about consciousness. I hadn't put my finger on this before reading this post, and I now think about it every time I hear or participate in a discussion about consciousness.

comment by Gordon Seidoh Worley (gworley) · 2023-07-02T23:46:08.986Z · LW(p) · GW(p)

I don't feel like I fall super hard into one of these camps or the other, although I agree they exist. I think from the outside folks would probably say I'm a very camp 2 person, but as I see it that's only insofar as I'm not willing to give up and say that there's nothing of value beyond the camp 1 approach.

This is perhaps reflected in my own thinking about "consciousness". I think the core thing going on is not complex, but instead quite simple: negative feedback loops that create information that's internal to a system. I identify signals within a feedback circuit as the core thing we care about when we talk about qualia (see the sketch after this comment). Humans just happen to be made up of very complex circuits layered on top of each other, creating complex, multi-circuit signals that combine to produce the relatively complex sorts of experiences we have, and this causes all kinds of confusion.

I think folks of both camps dislike this sort of thing though because, if we reduce consciousness to this, it implies panpsychism, but not panpsychism of the kind most panpsychist proponents put forward. Thus, everyone is pissed off. Still, I don't see a better option.

I wrote about these ideas in some depth a while back [? · GW] as part of trying to figure out some things about AI alignment. I'm not sure I was very successful on the figuring-out-things-about-AI-alignment part, but I feel pretty good about the understanding-consciousness stuff 5 years later, especially given everything we've since seen about the mind thanks to evidence suggesting something like perceptual control theory is probably close to right.
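
To make the feedback-loop picture above concrete, here is a minimal sketch in Python. It is an editorial illustration, not the commenter's actual model: the class name, the gain parameter, and the set-point framing are all assumptions chosen for brevity. All it shows is that a negative feedback loop necessarily carries an error signal that exists only inside the system.

```python
# A minimal sketch (illustrative only, not the commenter's model) of a
# negative feedback loop whose error signal is internal to the system --
# the kind of system-internal information the comment identifies with
# the simplest case of the signals we care about when talking about qualia.

class FeedbackLoop:
    def __init__(self, reference: float, gain: float = 0.5):
        self.reference = reference  # internal set point
        self.gain = gain            # how strongly the loop corrects
        self.state = 0.0            # the externally visible regulated quantity
        self.error = 0.0            # internal signal: exists only inside the loop

    def step(self, disturbance: float) -> float:
        self.state += disturbance
        # The error is information *about* the state relative to the reference;
        # an outside observer sees only its effects, never the signal itself.
        self.error = self.reference - self.state
        self.state += self.gain * self.error  # corrective action
        return self.state

loop = FeedbackLoop(reference=1.0)
for push in [0.4, -0.2, 0.1]:
    print(f"state after correction: {loop.step(push):.3f}")
```

On this picture, a mind would be an enormous stack of such loops whose internal signals interact; the sketch shows only the base case.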

comment by Signer · 2023-07-03T07:39:39.372Z · LW(p) · GW(p)

I agree that the epistemic status of experience is important, but... First of all, does anyone actually disagree with concrete things that Dennett says? That people are often wrong about their experiences is obviously true. If that were the core disagreement, it would be easy to persuade people. The only persistent disagreement seems to be about whether there is something additional to the physical explanation of experience (hence the zombies argument) or whether fundamental consciousness is even a coherent concept at all - just replacing absolute certainty with uncertainty wouldn't solve it, when you can't even communicate what your evidence is.

Replies from: TAG, sil-ver
comment by TAG · 2023-07-03T12:57:15.981Z · LW(p) · GW(p)

The disagreement is about whether qualia exist enough to need explaining. A rainbow is ultimately explained as a kind of illusion, but to arrive at the explanation, you have to accept that they appear to exist, that people aren't lying about them.

Dennett doesn't just think you can be wrong about what's going on in your mind; he thinks qualia don't exist at all, and that he is a zombie... but his opponents don't all think that qualia are fundamental, indefinable, non-physical etc. It's important to remember that the camp #2 argument given here is very exaggerated.

comment by Rafael Harth (sil-ver) · 2023-07-03T08:35:49.770Z · LW(p) · GW(p)

Yes; there are definitely people who disagree with most things Dennett says, including how exactly you can be wrong about your experience. Don't really want to get into the details here since that's not part of the post.

comment by AlphaAndOmega · 2023-07-02T22:46:48.419Z · LW(p) · GW(p)

Great post, I felt it really defined and elaborated on a phenomenon I've seen recur on a regular basis.

It's funny how consciousness is so difficult to understand, to the point that it seems pre-paradigmatic to me. At this point I, like presumably many others, evaluate claims of consciousness by setting the prior that I'm personally conscious to near 1, and then evaluating the consciousness of other entities primarily by their structural similarity to my own computational substrate, the brain (a toy sketch of this heuristic follows after this comment).

So another human is almost certainly conscious, most mammals are likely conscious, and so on; and while I wouldn't go so far as to say that novel or unusual computational substrates, such as, say, an octopus, aren't conscious, I strongly suspect their consciousness is internally different from ours.

Or more precisely, it's not really the substrate but the algorithm running on it that's the crux; it's just that the substrate's arrangement constrains our expectations of what kind of algorithm runs on it. I expect a human brain's consciousness to be radically different from an octopus's, because the different structure requires a different algorithm, in the latter case a far more diffuse one.

I'd go so far as to say that I think substrate can be irrelevant in practice, since I think that a human brain emulation experiences consciousness near identical to one running on head cheese, and not akin to an octopus or some AI that was trained by modern ML.

Do I know this for a fact? Hell no, and at this point I expect it to be an AGI-complete problem to solve; it's just that I need an operational framework to live by in the meantime, and this is the best I've got.
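
As a toy rendering of the heuristic described in the comment above: anchor the credence that you yourself are conscious near 1, then scale it down by a structural-similarity score for other systems. This is an editorial sketch, not the commenter's procedure; the similarity numbers and the simple multiplicative rule are made-up stand-ins for whatever real likelihood model one would want.

```python
# A toy sketch (editorial illustration, not the commenter's actual procedure)
# of "prior near 1 for myself, scaled by structural similarity for others".
# All similarity scores below are invented for illustration.

PRIOR_SELF = 0.999  # near-certainty about one's own consciousness

similarity = {  # hypothetical structural-similarity scores in [0, 1]
    "another human": 0.98,
    "dog": 0.75,
    "octopus": 0.40,
    "brain emulation": 0.95,
    "modern-ML AI": 0.10,
}

def consciousness_credence(system: str) -> float:
    # Crude stand-in for a Bayesian update: credence falls off with
    # dissimilarity to the one substrate we are (nearly) sure about.
    return PRIOR_SELF * similarity[system]

for system in similarity:
    print(f"{system}: credence {consciousness_credence(system):.2f}")
```

Nothing hangs on the particular numbers; the sketch just makes explicit that, under this heuristic, everything beyond the self-prior is carried by the similarity measure.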

comment by Eli Tyre (elityre) · 2024-07-19T04:41:46.498Z · LW(p) · GW(p)

All this sounds correct to me.

Reflecting on some previous conversations which parallel the opening vignette, I now suspect that many people are just not conscious in the way that I am / seem to be.

comment by Review Bot · 2024-07-26T01:00:08.217Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

comment by awg · 2023-07-05T17:49:21.318Z · LW(p) · GW(p)

I'm wondering where Biological Naturalism[1] falls within these two camps? It seems like sort of a "third way" in between them, and incidentally, is the explanation that I personally have found most compelling.

Here's GPT-4's summary:

Biological Naturalism is a theory of mind proposed by philosopher John Searle. It is a middle ground between two dominant but opposing views of the mind: materialism and dualism. Materialism suggests that the mind is completely reducible to physical processes in the brain, while dualism posits that the mind and body are distinct and separate.

Searle's Biological Naturalism, on the other hand, asserts that while mental processes are caused by physical processes in the brain, they are not reducible to them. This means that while consciousness and other mental phenomena are rooted in the physical workings of the brain, they also have their own first-person ontology that is not captured by third-person descriptions of the brain's workings.

Searle uses the analogy of water and H2O to explain this concept. Just as water is composed of H2O molecules but has properties (like wetness) that are not properties of individual H2O molecules, consciousness is caused by physical processes in the brain but has properties (like subjectivity) that are not properties of individual neurons or neural networks.

In other words, Biological Naturalism acknowledges the physical basis of consciousness while also recognizing that consciousness has a subjective character that is not explained by purely physical descriptions. This theory allows for the study of the mind as part of the natural biological sciences, while also acknowledging the unique properties of mental phenomena.

Setting aside how problematic an individual Searle is, this theory has always struck me as the most cogent, and it has stood the test of time in my own ontology.

Taking it a step further into my own theorizing: I suspect consciousness is a natural feature of all systems and exists on a spectrum from very-low-consciousness systems (individual atoms, stars, clouds of gas, rocks) to very-high-consciousness systems (animals). My pet theory is that we will one day find out that everything is conscious and it's just a matter of "how much." Hmm, maybe this indicates I'm a Camp #2 person? I'm finding it hard to classify myself. Maybe someone else will find it easier.

  1. ^

    Despite its name, I don't think there's anything in the theory that says consciousness has to arise from biological components per se, just that consciousness is a natural byproduct of at least some information-processing systems, most notably the biological ones that exist in our skulls.

Replies from: sil-ver, Dagon
comment by Rafael Harth (sil-ver) · 2023-07-05T18:23:30.793Z · LW(p) · GW(p)

I'm wondering where Biological Naturalism[1] falls within these two camps? It seems like sort of a "third way" in between them, and incidentally, is the explanation that I personally have found most compelling.

My take based on the summary is that it's squarely in Camp #2.

In particular, I think this part seals the deal:

This means that while consciousness and other mental phenomena are rooted in the physical workings of the brain, they also have their own first-person ontology that is not captured by third-person descriptions of the brain's workings.

According to Camp #1, there's nothing ontologically special about consciousness, so as soon as you give it its own ontology, you've decided which camp you're in.

comment by Dagon · 2023-07-05T20:54:24.915Z · LW(p) · GW(p)

Wait, isn't that just dualism with hand-waving about complexity?  

The analogy of water and H2O is a good one: the property of wetness IS measurable, via surface tension, viscosity, and adhesion to various surfaces. And those are absolutely caused by interactions at the level of molecules (or lower down, but definitely physics). "Wetness" is not easily CALCULABLE from first principles, but that's a failing of us as modelers and of our computational power, not a distinct category of properties.