Seth Explains Consciousness
post by Jacob Falkovich (Jacobian) · 2023-08-22T18:06:42.653Z · LW · GW · 125 comments
This is a link post for https://putanumonit.com/2023/08/19/seth-explains-consciousness/
The Real Problem
For as long as there have been philosophers, they have loved philosophizing about what life really is. Aristotle focused on nutrition and reproduction as the core features of living organisms; others claimed that it was ultimately about resisting perturbations. In the East the focus was less on function and more on essence: the Chinese posited ethereal fractions of qi as the animating force, similar to the Sanskrit prana or the Hebrew neshama. This lively debate kept rolling for 2,500 years — élan vital is a 20th century coinage — accompanied by the sense of an enduring mystery, a fundamental inscrutability about life that would not yield.
And then, suddenly, this debate dissipated. This wasn’t caused by a philosophical breakthrough, by some clever argument or incisive definition that satisfied all sides and deflected all counters. It was the slow accumulation of biological science that broke “Life” down into digestible components, from the biochemistry of living bodies to the thermodynamics of metabolism to genetics. People may still quibble about how to classify a virus that possesses some but not all of life’s properties, but these semantic arguments aren’t the main concern of biologists. Even among the general public who can’t tell a phospholipid from a possum there’s no longer a sense that there’s some impenetrable mystery regarding how life can arise from mere matter.
In Being You, Anil Seth is doing the same to the mystery of consciousness. Philosophers of consciousness have committed the same sins as “philosophers of life” before them: they have mistaken their own confusion for a fundamental mystery, and, as with élan vital, they smuggled in foreign substances to cover the gaps. The archetypal such substance is René Descartes’ res cogitans, a mental substance separate from the material.
This Cartesian dualism in various disguises is at the heart of most “paradoxes” of consciousness. P-zombies are beings materially identical to humans but lacking this special res cogitans sauce, and their conceivability requires accepting substance dualism. The famous “hard problem of consciousness” asks how a “rich inner life” (i.e., res cogitans) can arise from mere “physical processing” and claims that no study of the physical could ever give a satisfying answer.
Being You by Anil Seth answers these philosophical paradoxes by refusing to engage in all but the minimum required philosophizing. Seth’s approach is to study this “rich inner life” directly, as an object of science, instead of musing about its impossibility. After all, phenomenological experience is what’s directly available to any of us to observe.
As with life, consciousness can be broken into multiple components and aspects that can be explained, predicted, and controlled. If we can do all three we can claim a true understanding of each. And after we’ve achieved it, this understanding of what Seth calls “the real problem of consciousness” directly answers or simply dissolves enduring philosophical conundrums such as:
- What is it like to be a bat?
- How can I have free will in a deterministic universe?
- Why am I me and not Britney Spears?
- Is “the dress” white and gold or blue and black?
Or at least, these conundrums feel resolved to me. Your experience may vary, which is also one of the key insights about experience that Being You imparts.
The original photograph of “the dress”
Seeing a Strawberry
On a plate in front of you is a strawberry. Inside your skull is a brain, a collection of neurons that have direct access only to the electrochemical state of other neurons, not to strawberries. How does the strawberry out there create the perception of redness in the brain?
In the common view of perception, red light from the strawberry hits the red-sensitive cones in your retina. These cones are wired into other neurons that detect edges, then shapes, and finally these are combined into an image of a red strawberry. This view is intuitively appealing: when we see a strawberry we perceive that a strawberry is right there, and the extant strawberry intuitively seems to be the sole and sufficient cause of our perception of redness.
But if we study any element of this perception closely, we find the common-sense intuition immediately challenged.
You may see a strawberry up close or far away, at different angles, partially obscured, in dim light, etc. The perception of it as red, roughly conical, and about an inch across doesn’t change even though the light hitting your retina is completely different in each case: different parts of your visual field, different wavelengths, and so on. In fact, you will perceive a red strawberry in the absence of any red light at all, as in the following image, which contains nary a single red-hued pixel:
You can zoom in to check: the red-seeming pixels are all gray, with R < G, B
You (well, some of you) can simply visualize a red strawberry with your eyes closed, or in a dream, or on acid. People can perceive redness through grapheme-color synaesthesia, including people who have been blind for decades. You easily perceive magenta, a color which has no associated wavelength at all, while the exact same wavelengths coming from different parts of the same image can produce perceptions of entirely different colors, as in Adelson’s checker shadow illusion. Wherever the perception of color is coming from, it is certainly not the mere bottom-up decoding of wavelengths of light.
And again, this redness is somehow perceived by a collection of 86 billion neurons, none of which come labeled “red” or “strawberry” or even “part of the visual system”. They just are. To understand seeing a strawberry we need to ask: how could you derive strawberries if all you had access to are the states of 86 billion simple variables and no prior idea of the connection between them?
After observing patterns of neurons firing for a while, you will notice that the states of some neurons are entirely determined by the states of others. Others appear more independent, with states that can’t be conclusively derived from the state of the rest of the brain. These independent neurons have non-random patterns — you may notice that some of them statistically tend to fire together, for example. You could infer that there are hidden causes outside the brain that affect these, and consider the state of independent neurons to be a sensory input effected by these hidden causes. To make sense of your senses, you must model this hidden world external to the brain.
Perhaps these sensory neurons that fired together are red-sensing cones in your retina. Their congruence implies that the hidden cause of their firing (let’s call it “colored light”, although it isn’t labeled as such in the brain itself) comes from continuous “surfaces” that reflect a similar hue throughout. This isn’t a given — “color” could be distributed randomly pixel by pixel throughout space — but it’s a reasonable inference from the neuron states as they keep coming in. Your model doesn’t contain the labels “colored light” and “surface” but it does contain objects with the property of stably and homogeneously colored surfaces.
You also notice that if some blue-sensing cones suddenly activate (perhaps you went from a warmly illuminated room to stand under a blue sky) this dampens the activation of the red-sensing ones elsewhere. Thus, the best model that predicts the state of all your retinal neurons is that surfaces have a fixed property that affects the relative activation of your cones. “Red” is your model of a property of a surface that activates red-sensing cones if illuminated by warm light but will activate all cones similarly (as a gray surface would normally) if the illuminant is cool. This explains the red-appearing gray strawberries in the greenish image above, and the general property of “discounting the illuminant” in human vision.
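To make “discounting the illuminant” concrete, here is a tiny numeric sketch (my own illustration with invented numbers, roughly a von-Kries-style chromatic adaptation step, not a model taken from the book): divide each cone channel by the brain’s estimate of the illuminant, and a pixel that is physically gray gets modeled as a red surface once the scene is assumed to be lit by cyan light.

```python
# Minimal "discounting the illuminant" sketch (illustrative numbers only).
gray_pixel = (120, 120, 120)            # R = G = B: no red light reaches the eye
estimated_illuminant = (0.6, 1.0, 1.0)  # scene modeled as lit by a cyan/teal light

# Inferred surface color = cone response divided by the estimated illuminant.
perceived_surface = tuple(round(c / w) for c, w in zip(gray_pixel, estimated_illuminant))
print(perceived_surface)  # (200, 120, 120): the model treats the surface as reddish
```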
We have also solved the mystery of “the dress”: if you perceive it as white and gold it’s because your brain models it as being illuminated by a blue light — perhaps you trained it by shopping for clothes often in outdoor bazaars. If you see it as blue and black, your brain must be modeling it in a warmly lit indoor space. Spend more time outdoors and you may start to perceive it differently!
The important point here is that “redness” is a property of your brain’s best model for predicting the states of certain neurons. Redness is not “objective” in the sense of being “in the object”; it only exists in the models generated by the sort of brains which are hooked up to eyes. It feels objective because 92% of people will share your assessment (8% are colorblind), as opposed to ~50% agreement on the dress, but both kinds of percept have the same status of being generated by your brain. We intuitively separate “objective” properties of a strawberry (red, real, occupying a volume) from “subjective” ones (good in salads, pretty, evocative of spring), but all of these are properties of your brain’s predictive model of strawberries; they’re not out there to be perceived in a brain-independent way.
These models are “predictive” in the important sense that they capture not just how things are at the moment but also anticipate how your sensory inputs would change under various conditions and as a consequence of your own actions. Thus:
- Red = would create a perception of a warmer color relative to the illuminant even if the illumination changes.
- Good in salads = would create perceptions of deliciousness and positive affect if consumed alongside arugula and goat cheese.
- Real = would look like a strawberry from a different angle if you walked around it, and would generate perceptions of solidity and weight if you picked it up. An image of a strawberry on a screen generates almost the same visual input as a physical strawberry, but you perceive it as very different (unreal) because you predict different consequences to trying to grab it.
This, in broad strokes, is the predictive processing theory of perception. But Being You doesn’t just lay out the how of predictive processing, it also answers the how come and the so what.
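Before moving on, here is a loose illustration of the “how” (a toy example of my own, not code from the book): a one-variable perception-as-inference loop. The observer only ever receives cone activations, which confound the surface with the lighting, yet by repeatedly predicting its input and nudging an internal guess to shrink the prediction error, it converges on a stable estimate of the hidden surface property, the “redness”, across changing illumination.

```python
import random

rng = random.Random(1)
true_reflectance = 0.8            # hidden cause: how "red" the surface really is

def sense(illuminant):
    """Cone activation: depends on both the surface and the current light, plus noise."""
    return true_reflectance * illuminant + rng.gauss(0.0, 0.02)

belief = 0.5                      # the brain's prior guess about the surface
learning_rate = 0.2
for _ in range(500):
    illuminant = 0.5 + rng.random()               # the lighting keeps changing
    prediction = belief * illuminant              # top-down prediction of the input
    error = sense(illuminant) - prediction        # bottom-up prediction error
    belief += learning_rate * error * illuminant  # update the generative model

print(round(belief, 2))           # ~0.8: a stable percept despite ever-changing input
```

The percept in this sketch is the belief variable, not the raw input: it stays put while the sensory signal varies with the light, which is the sense in which “redness” lives in the model rather than in the wavelengths.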
Cybernetic Organism
Why is our brain “trying to predict its sensory inputs”? What does it even mean for it to have a goal?
Seth draws the insightful comparison to cybernetic systems, systems that control some variable of interest using feedback loops. These range in complexity from a thermostat that turns a heater on and off to regulate temperature in a room to an ecosystem where plant and animal populations are balanced through complicated interactions. A conspicuous feature of cybernetic systems is that they usually appear to have a “purpose”, like a self-guided missile aiming at a target or your thermostat aiming at a temperature goal.
An important insight about cybernetic systems, the Good Regulator Theorem, was formulated by Roger Conant and W. Ross Ashby: “every good regulator of a system must be (or contain) a model of that system”. A thermostat that has access only to a thermometer will do a worse job at regulating a room’s temperature than one that also has access to weather forecasts, the thermal properties of the room and the heater, and its own tolerances and inaccuracies. The more mutual information there is between the regulator and its environment, the better it can exert control over it.
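A toy simulation of the Conant-Ashby point (my own sketch with invented numbers, not from the book): a regulator that only reads the thermometer reacts to errors after they appear, while one that contains a crude model of the room’s heat loss and the outside temperature can cancel the disturbance before it ever registers as an error.

```python
import random

TARGET = 21.0                      # degrees C the regulator is trying to hold

def run(controller, steps=5000):
    """Toy room that leaks heat toward the outside temperature each step."""
    rng = random.Random(0)         # same disturbance sequence for every controller
    room, total_error = TARGET, 0.0
    for _ in range(steps):
        outside = 14.0 + 5.0 * rng.uniform(-1.0, 1.0)  # the disturbance
        heat = controller(room, outside)
        room = room + heat + 0.3 * (outside - room)    # invented room physics
        total_error += abs(room - TARGET)
    return total_error / steps

def thermometer_only(room, outside):
    """Reacts to the error it can already see; ignores everything else."""
    return max(0.0, TARGET - room)

def contains_a_model(room, outside):
    """Knows the leak coefficient and reads the 'forecast' (outside temperature),
    so it supplies exactly the heat the room is about to lose."""
    return (TARGET - room) - 0.3 * (outside - room)

print("thermometer only:", round(run(thermometer_only), 2))  # typically a degree or two off
print("contains a model:", round(run(contains_a_model), 2))  # essentially zero error
```

The difference is not cleverness but information: the second regulator’s parameters just are a (crude) model of the room and its environment, which is the sense in which every good regulator must contain one.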
Your brain is a cybernetic regulator controlling your body into the state of being alive.
The Terminator, a cybernetic organism of a more metal variety
This is a consequence of evolution and, more broadly, thermodynamics. Being alive is a very low entropy state — your body temperature is in a narrow range around 98.6°F, your organs are neatly arranged — in a high entropy world that inexorably tries to dissolve you into room-temperature mush. You can’t persist long in your special state of living by being passive, you have to actively regulate every aspect of yourself and your environment that impacts your vitals. You have to take action. As Ashby and Conant said, to regulate yourself and your environment you must comprehensively model both, and in particular the consequences of your actions on both.
Thus, while all of your perceptions are subjective in the sense that they are features of your mind’s map and not of any independent territory, they are not arbitrary or subject to your whims. The core of your model is an unmodifiable prediction, a fixed “hyperprior”, of remaining alive. Its subjective flavor is the base inchoate sense of simply being a living organism, and the survival instinct that overrides all other perceptions when the prediction of staying alive is threatened with disconfirmation. One level above the prediction of just living are predictions of your body’s vitals, from its basic integrity to control of variables like heart rate or blood sugar and oxygenation. Your most important and vivid perceptions — embodiment, pain, emotions, moods — are interoceptive experiences that have more to do with your body than with the world outside.
Finally, exteroceptive sensations are your model of the outside world, primarily as you can act upon it to impact your body. The strawberry comes with an immediate perception of edibility along with redness. This is because eating stuff is an action potentially available to you, and discriminating between edible and inedible things is vital to staying vital.
Self Image
To summarize so far: the contents of your consciousness, your perceptions, are features of a best-guess model concocted by your brain to predict its own present and future states. It’s driven by an overriding prediction of staying alive, which impacts your brain through a rich and vivid channel of interoceptive sensation.
While it’s easy to see how perceptions like redness, hunger, and edibility fit into this picture it may not seem immediately applicable to more complex conscious experiences that have to do with your selfhood. This is the biggest section of the book, disentangling selfhood into components such as ownership of a body, a first person view, a continuous narrative history, a sense of volition, and more. It details how all these are explained through the lens of a generative model keeping your body alive, and also the clever experimental setups used by Seth and his fellow scientists to understand each one (and mess with it at will). I can’t do this section full justice in a brief review, but we can take a whirlwind tour of what it means not just to be but to be you.
Body Ownership
To a collection of neurons locked inside a skull, the rest of the body is as much “out there” as any other object. And yet, it doesn’t feel that way: you have a strong sense of the exact extent of your body and police this boundary rigorously. You apply this even to something like saliva, which turns from a normal part of your body into a yucky foreign substance the moment it crosses an invisible line somewhere in the vicinity of your teeth.
The main determinant of what feels like your body is that your body elicits interoceptive sensations that match external ones. This can be demonstrated by the rubber hand illusion: a detached rubber hand feels like a mere object if you just look at it, but if it is stroked with a brush at the same time as your real hand, the consilience between seeing and feeling the strokes creates a strong feeling that it is part of you. You will instinctively flinch in horror if “your” rubber hand is suddenly threatened.
Illustration of the rubber hand illusion experiment from The Scientist magazine
Your map of the world is chiefly a model of the sensory consequences of your actions, and these sensory consequences are what distinguishes your body from everything else.
First-person Perspective
The experience of observing the world from a single point somewhere between your eyes and slightly behind your forehead comes from your generative model of vision. As you move around the world, this first-person POV is the prediction that surfaces will be visible if they are facing that point with no obstruction, and not visible otherwise.
Illustration from Steven Lehar’s “The Boundaries of Human Knowledge” demonstrating that although you model objects as occupying volume, you are also aware that you are only seeing a 2D surface that faces a single point
But as we’ve seen, visual-based perceptions aren’t as fundamental and stable as embodied ones. In a 2007 experiment, subjects wore a VR headset showing a real-time video of their own body filmed from a few feet behind their back. When they were stroked with a brush and saw this happening synchronously in the video feed, they reported feeling that the “virtual” body standing a few feet in front of them was in fact theirs and that they were observing it from a third-person POV.
Of course, people have forever reported having “out of body” experiences. These experiences are almost certainly real, even if explaining them via demonic possession or astral projection is clearly bunk. In a survey I ran myself, 27% of respondents said that they see themselves in the third person when visualizing entering a familiar room. Coincidentally, this is similar to the percentage of people who apply makeup daily, an activity that involves looking at yourself in the third person (through a mirror) for some time while stroking yourself with a brush.
Narrative Self
An important component of selfhood is having a narrative about yourself, your continuous life history and the type of person you are. This self-story is not strictly necessary for a lot of normal functioning; Being You recounts the story of a music producer suffering from total amnesia who lives permanently with no memory beyond the last few seconds. And yet he is able to play the piano perfectly and even rekindle love with his wife.
What is your narrative self predicting and controlling? Most likely: your social self, how other people perceive you, predict you, and treat you. This is of course of vital importance to social creatures like us! Hanson and Simler’s The Elephant in the Brain meticulously demonstrates that our story of who we are and why we do the things we do often has little to do with our real motives and a lot to do with securing assistance from others and avoiding punishment.
Free Will
For many people, the aspect of selfhood they cling to most tightly is their volition, the feeling of being the originator and director of their own actions. And if philosophically inclined, they may worry about reconciling this volition with a deterministic universe. Are you truly exercising free will or merely following the laws of physics?
This question betrays that same dualistic map-territory confusion that asks how the material redness of a strawberry could cause the phenomenological redness in your mind. Redness, free will, belief in deterministic physics — these are all features of your generative model. There is no “spooky free will” that violates the laws of physics in our common model of them, but the experience of free will certainly exists and is informative.
Imagine that you are making a cup of tea. When did it feel like you exercised free will? Likely more so at the start of the process, when you observed that all tea-making tools are available to you and contemplated alternatives like coffee or wine. Once you’re deep in the process of making the cup the subsequent actions feel less volitional, executed on autopilot if making tea is a regular habit of yours. What is particular about that moment of initiation?
One particularity is the perception that you are able to predict and control many degrees of freedom: observe and move several objects around, react in complex ways to setbacks, etc. This separates free will from actions that feel “forced” by the configuration of the world outside, like slipping on a wet surface, and reinforces the sensation that free will comes from within.
The experience of volition is also a useful flag for guiding future behavior. If the universe were arranged again in the same exact configuration as when you made tea, you would always end up making tea. But the universe (in particular, the state of your brain) never repeats. The feeling of “I could have done otherwise” is the experience of paying attention to the consequences of your action so that in a subjectively similar but not perfectly identical situation you could act differently if the consequences were not as you predicted. If the tea didn’t satisfy as expected, the experience of free will you had when you made it shall guide you the next day to the cold beer you should have drunk instead.
Being a Map
Being You is a science book, covering the results of research in various fields and presenting a comprehensive model of what your phenomenology is and how it works. But I suspect that it’s impossible to read it without it actually changing what being you is like — if all you are is a generative model of the world then enhancing this model with new insights will surely affect it.
For one, the decoupling of conscious experience from deterministic external causes implies that there’s truly no such thing as a “universal experience”. Our experiences are shared by virtue of being born with similar brains wired to similar senses and observing a similar world of things and people, but each of us infers a generative model all of our own. For every single perception mentioned in Being You, the book also notes a condition of having a different one, from color blindness to somatoparaphrenia — the experience that one of your limbs belongs to someone else. The typical mind fallacy [? · GW] goes much deeper than mere differences in politics or abstract beliefs.
The subjective nature of all experience also offers a way to purposefully change yourself that falls between utter fatalism and “it’s all in your head” solipsistic voluntarism. Your brain is always making its best predictions; these can’t be changed by a single act of will but can be updated with enough evidence. Almost everything you know and perceive was inferred from scratch, and what was trained can be retrained. If you want to build habits, just observing yourself doing the thing for whatever reason is more useful than any story or incentive structure you can come up with. In particular, do things with your body to learn them, that’s the part of the universe your brain pays the most attention to.
The intimate connection of our consciousness to our living body also implies that we shouldn’t blithely assume that it can be easily disembodied. Robin Hanson posits digital emulations that have a similar basic consciousness to biological humans, like an emulated Elon Musk who shares a sense of selfhood with the biological version and enjoys virtual feasts or Ferraris. But it is not clear at all that such a being could even exist in principle. A digital mind that lacks a body it is trying to keep alive, that has entirely different senses than our interoceptive and exteroceptive ones, and that has an entirely different repertoire of actions available to it will have an entirely alien generative model of its world, and thus an entirely alien phenomenology — if it even has one. We can guess what it’s like to be a bat: its reliance on sonar to navigate the world likely creates a phenomenology of “auditory colors” that track how surfaces reflect sound waves similar to our perception of visual color. It’s much harder to guess what it’s like, if anything, to be an “em”.
In general, the fact that our consciousness has a lot to do with living and little with intelligence implies that we should more readily ascribe it to animals and less readily to AI. Eliezer seems to think [? · GW] that selfhood is necessary for conscious experience, and that babies and animals aren’t sentient. But selfhood is itself just a bundle of perceptions, separable from each other and from experiences like pain or pleasure. An animal with no selfhood cannot report “it is I, Bambi, who is suffering”, but that doesn’t mean there is no suffering happening when you harm it. And as for AI, I believe that Eliezer and Seth are in agreement that world-optimizing intelligence and what-it’s-like-to-be-you consciousness are quite orthogonal.
But there is something interesting still about intelligence: how is it that reading a book of declarative knowledge can change how I perceive the world, how I think of my own self, how I relate to my mortality? This all happened to me after reading Being You and again upon rereading it. Yet almost everything the book talks about is the domain of “system 1”, our intuitive and automatic perception. I had the chance to ask this of Seth personally and he sent me a link to a paper on how “system 2” function relates to perceiving the contents of working memory. But it still feels to me that there is something magic about our ability to reason explicitly [LW · GW] and how it fits into this new understanding of consciousness.
Since you are reading this review you are likely interested in the same things as well. I highly recommend that you read Being You, and then that you spend a good while thinking about it.
125 comments
Comments sorted by top scores.
comment by ShardPhoenix · 2023-08-23T05:18:49.656Z · LW(p) · GW(p)
Some interesting examples but this seems to be yet another take that claims to solve/dissolve consciousness by simply ignoring the Hard Problem.
Replies from: gworley, caleb-reske, Jacobian, tangerine
↑ comment by Gordon Seidoh Worley (gworley) · 2023-08-23T23:38:41.827Z · LW(p) · GW(p)
It sounds like Seth's position is that the hard problem of consciousness is the result of confusion, so he's not ignoring it, but saying that it only appears to exist because it's asked within the context of a confused frame.
Seth seems to be suggesting that the hard problem of consciousness is a bit like asking why don't people fall off the edge of the Earth? We think of this question as confused because we believe the Earth is round. But if you start from the assumption that the Earth is flat, then this is a reasonable question, and no amount of explanation will convince you otherwise.
The reason these two situations look different is that it's now easy for us to verify that the Earth is not flat, but it's hard for us to verify what's going on with consciousness. Seth's book is making a bid, by presenting the work of many others, to say that what we think of as consciousness is explainable in ways that make the Hard Problem a nonsensical question.
That seems quite a bit different from "simply ignoring the Hard Problem", though I admit Jacob does not go into great detail about Seth's full arguments for this. But I'd posit that if you want to disagree with something, you need to disagree with the object-level claims Seth makes first, and only after reaching a point where you have no more disagreements is it worth considering whether or not the Hard Problem still makes sense; and if you think it does, then it should be possible to make a specific argument about where you think the Hard Problem arises and what it looks like in terms of the presented model.
Replies from: tslarm
↑ comment by tslarm · 2023-08-24T04:04:45.441Z · LW(p) · GW(p)
Without reading the book we can't be sure. But the trouble is that this claim has been made a million times, and in every previous case the author has turned out to be either ignoring the hard problem, misunderstanding it, or defining it out of existence. So if a longish, very positive review with the title 'x explains consciousness' doesn't provide any evidence that x really is different this time, it's reasonable to think that it very likely isn't.
The reason these two situations look different is that it's now easy for us to verify that the Earth is not flat, but it's hard for us to verify what's going on with consciousness.
Even if I had no way of verifying it, "the earth is (roughly) spherical and thus has no edges, and its gravity pulls you toward its centre regardless of where you are on its surface" would clearly be an answer to my question, and a candidate explanation pending verification. My question was only 'confused' in the sense that it rested on a false empirical assumption; I would be perfectly capable of understanding your correction to this assumption. (Not necessarily accepting it -- maybe I think I have really strong evidence that the earth is flat, or maybe you haven't backed up your true claim with good arguments -- but understanding what it means and why it would resolve my question).
Are you suggesting that in the case of the hard problem, there may be some equivalent of the 'flat earth' assumption that the hard-problemists hold so tightly that they can't even comprehend a 'round earth' explanation when it's offered?
Replies from: gworley, Signer
↑ comment by Gordon Seidoh Worley (gworley) · 2023-08-24T16:22:08.794Z · LW(p) · GW(p)
Are you suggesting that in the case of the hard problem, there may be some equivalent of the 'flat earth' assumption that the hard-problemists hold so tightly that they can't even comprehend a 'round earth' explanation when it's offered?
Yes. Dualism is deeply appealing because most humans, or at least most of the humans who care about the Hard Problem, seem to experience themselves in dualistic ways (i.e. experience something like the self residing inside the body). So even if it becomes obvious that there's no "consciousness sauce" per se, the argument is that the Problem seems to exist only because there are dualistic assumptions implicit in the worldview that thinks the Problem exists.
I'd go on to say that if we address the Meta Hard Problem like this in such a way that it shows the Hard Problem to be the result of confusion, then there's nothing to say about the Hard Problem, just like there's nothing interesting to say about why ships never sail off the edge of the Earth.
Replies from: Shiroe
↑ comment by Shiroe · 2023-08-24T20:20:25.195Z · LW(p) · GW(p)
So you don't believe there is such a thing as first-person phenomenal experiences, sort of like Brian Tomasik? Could you give an example or counterexample of what would or wouldn't qualify as such an experience?
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2023-08-25T02:14:18.910Z · LW(p) · GW(p)
I think that there's a process we can meaningfully point to and call qualia, and it includes all the things we think of as qualia, but qualia is not itself a thing per se but rather the reification of observations of mental processes that allows us to make sense of them.
I have theories of what these processes are and how they work and they mostly line up with what's pointed at by this book. In particular I think cybernetic models are sufficient to explain most of the interesting things going on with consciousness, and we can mostly think of qualia as the result of neurons in the brain hooked up in loops so that their inputs include information not only from other neurons but also from themselves, and these self-sensing loops provide the input stream of data that other neurons interpret as self-experience/qualia/consciousness.
Replies from: TAG
↑ comment by TAG · 2023-10-08T19:37:11.944Z · LW(p) · GW(p)
but qualia is not itself a thing per se but rather the reification of observations of mental processes
I don't see how that helps. We don't have a reductive explanation of consciousness as a thing, and we don't have a reductive explanation of consciousness as a process.
↑ comment by Signer · 2023-08-24T09:10:10.103Z · LW(p) · GW(p)
Are you suggesting that in the case of the hard problem, there may be some equivalent of the ‘flat earth’ assumption that the hard-problemists hold so tightly that they can’t even comprehend a ‘round earth’ explanation when it’s offered?
I wouldn't say "can’t even comprehend" but my current theory is that one such detrimental assumption is "I have direct knowledge of content of my experiences".
Replies from: Shiroe, sharmake-farah, TAG
↑ comment by Shiroe · 2023-08-24T15:33:52.479Z · LW(p) · GW(p)
but my current theory is that one such detrimental assumption is "I have direct knowledge of content of my experiences"
It's true this is the weakest link, since instances of the template "I have direct knowledge of X" sound presumptuous and have an extremely bad track record.
The only serious response in favor of the presumptuous assumption [edit] that I can think of is epiphenomenalism in the sense of "I simply am my experiences", with self-identity (i.e. X = X) filling the role of "having direct knowledge of X". For explaining how we're able to have conversations about "epiphenomenalism" without it playing any local causal role in us having these conversations, I'm optimistic that observation selection effects could end up explaining this.
Replies from: Signer, TAG
↑ comment by Noosphere89 (sharmake-farah) · 2023-08-26T15:57:59.266Z · LW(p) · GW(p)
Similarly, I think that one inapplicable assumption is the idea that people can reliably self-analyze and come to accurate conclusions, thus being presumed reliable in their reports, including consciousness. I remember reading something that people's ability to self-analyze correctly is basically 0, that is people are pretty much always incorrect about their own traits and thoughts.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2023-08-26T16:48:04.859Z · LW(p) · GW(p)
Interpret things strictly enough and everyone is always wrong about everything. They can still be usefully right.
Replies from: sharmake-farah
↑ comment by Noosphere89 (sharmake-farah) · 2023-08-26T18:05:02.227Z · LW(p) · GW(p)
The point is that they're usually not even that useful, as bringing in an outsider would probably help the situation more, and therefore one of the basic assumptions of a lot of consciousness discourse and intuitions is false, and they don't know this. In particular, it's why I now dislike a lot of consciousness intuitions, but this goes especially for dualism.
The fact that we are so bad at self-analysis is why we need outsider help so much.
↑ comment by TAG · 2023-08-24T16:06:31.360Z · LW(p) · GW(p)
Is there a reason why it is detrimental? Note that "I have direct knowledge of content of my experiences" doesn't imply certain knowledge, a non-physical ontology, or epiphenomenalism...
Replies from: Shiroe, Signer
↑ comment by Signer · 2023-08-25T09:51:08.302Z · LW(p) · GW(p)
I think it's detrimental because "direct" there prevents people from accepting weak forms of illusionism, and that creates problems additional to The Hard Problem, like Mary or Chalmers' conceivability of qualia's structure. And because... I don't want to say "the assumption is wrong", because knowledge is an arbitrary high-level concept, but you can formulate a theory of knowledge where it doesn't hold and that theory is better.
↑ comment by Caleb Reske (caleb-reske) · 2023-08-23T06:28:58.159Z · LW(p) · GW(p)
Agreed! These topics - the narrative self, the perception of free will, the predictive-processing theory, etc. - are all incredibly interesting and worth studying. But what has been explained in the book doesn't seem to come close to what consciousness is at all - rather, how our perceptions in consciousness are influenced by our sense of self and story, something that has already been well-studied. I'm fairly convinced by the predictive-processing theory of self and cognition - but I don't treat this as an explanation for the existence of experience itself. An artificial generative model can have "predictive processing," but does this give it a subjective, conscious experience? What would it mean, exactly, if it did?
I'm reminded of this post [LW · GW] - the reason the two "consciousness camps" seem to be talking past each other might be because we have different intuitions about what needs explaining. To me, what really needs explaining is the fact of consciousness - its existence, not its qualities. Why "feel" at all? This book, while it looks interesting, doesn't look like it touches that question.
↑ comment by Jacob Falkovich (Jacobian) · 2023-08-24T18:48:21.838Z · LW(p) · GW(p)
I tried to communicate a psychological process that occurred for me: I used to feel that there's something to the Hard Problem of Consciousness, then I read this book explaining the qualities of our phenomenology, now I don't think there's anything to HPoC. This isn't really ignoring HPoC, it's offering a way out that seems more productive than addressing it directly. This is in part because terms HPoC insists on for addressing it are themselves confused and ambiguous.
With that said, let me try to actually address HPoC directly although I suspect that this will not be much more convincing.
HPoC roughly asks "why is perceiving redness accompanied by the quale of redness". This can be interpreted in one of two ways.
1. Why this quale and not another?
This isn't a meaningful question because the only thing that determines a quale as being a "quale of redness" is that it accompanies a perception of something red. I suspect that when people read these words they imagine something like looking at a tomato and seeing blue, but that's incoherent — you can't perceive red but have a "blue" quale.
2. Why this quale and not nothing?
Here it's useful to separate the perception of redness, i.e. a red object being part of the map, and the awareness of perceiving redness, i.e. a self that perceives a red object being part of the map. These are two separate perceptions. I suspect that when people think about p-zombies or whatever they imagine experiencing nothingness or oblivion, and not a perception unaccompanied by experience, or they imagine some subliminal "red" making them hungry similar to how it would affect a p-zombie. There is no coherent way to imagine being aware of perceiving red, and this being different from just perceiving red, without this awareness being an experience. All you have is experience.
HPoC is demanding a justification of experience from within a world in which everything is just experiences. Of course it can't be answered! If it could formulate a different world that was even in principle conceivable, it would make sense to ask why we're in world A and not in world B. But this second world isn't really conceivable if you focus on what it would mean. The things you're actually imagining are seeing a blue tomato or seeing nothing or seeing a tomato without being aware of it, you're not actually imagining an awareness of seeing a red tomato that isn't accompanied by experience.
↑ comment by TAG · 2023-10-08T19:18:20.566Z · LW(p) · GW(p)
Why this quale and not another?
This isn’t a meaningful question because the only thing that determines a quale as being a “quale of redness” is that it accompanies a perception of something red.
Edit: It's a meaningful question because, as far as we are concerned, it could have been different: we don't have a way of predicting it. Moreover, it quite possibly does vary between individuals, because red-green colour blindness is a thing. What determines, in the sense of pinning down, a quale is a combination of the external stimulus, eg. 600nm light, and the subject.
But that isn't the relevant sense of "determines". It isn't causal determinism, and it isn't the kind of "vertical" determinism that arises from having a reductive explanation. If subjective red is an entirely physical phenomenon, then it should be determined by, and predictable from, the underlying physics. This we cannot do--we cannot predict non-human qualia, or novel human qualia. If there is a set of facts that cannot be deduced from physics, physicalism is wrong.
Reductionism allows some basic facts, about fundamental laws and primitive entities to go unreduced, but not high level phenomena, which includes consciousness.
HPoC is demanding a justification of experience from within a world in which everything is just experiences.
No, it demands a justification of experience on the basis of a physical world, if you assume you are in one. There is no HP in an Idealist ontology, because there is no longer a need to explain one thing on terms of another. It's unlikely that Seth is an idealist.
The success of science in the twentieth and twenty-first centuries has led many philosophers to adopt a physicalist ontology, basically the idea that the fundamental constituents of reality are what physics says they are. (It is a background assumption of physicalism that the sciences form a sort of tower, with psychology and sociology near the top, biology and chemistry in the middle, and physics at the bottom. The higher and intermediate layers don't have their own ontologies -- mind-stuff and elan vital are outdated concepts -- everything is either a fundamental particle, or an arrangement of fundamental particles.)
So the problem of mind is now the problem of qualia, and the way philosophers want to explain it is physicalistically. However, the problem of explaining how brains give rise to subjective sensation, of explaining qualia in physical terms, is now considered to be The Hard Problem. In the words of David Chalmers:
" It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.”
What is hard about the hard problem is the requirement to explain consciousness, particularly conscious experience, in terms of a physical ontology. It's the combination of the two that makes it hard. Which is to say that the problem can be sidestepped by either denying consciousness, or adopting a non-physicalist ontology.
Examples of non-physical ontologies include dualism, panpsychism and idealism. These are not faced with the Hard Problem, as such, because they are able to say that subjective qualia just are what they are, without facing any need to offer a reductive explanation of them. But they have problems of their own, mainly that physicalism is so successful in other areas.
Eliminative materialism and illusionism, on the other hand, deny that there is anything to be explained, thereby implying there is no problem. But these approaches also remain unsatisfactory because of the compelling subjective evidence for consciousness.
So you can sidestep the Hard Problem by denying that there is anything to be explained, or by denying that conscious experience needs to be explained in physical terms.
The third approach to the Hard Problem is to answer it in its own terms.
↑ comment by ShardPhoenix · 2023-08-25T02:36:00.632Z · LW(p) · GW(p)
HPoC is demanding a justification of experience from within a world in which everything is just experiences. Of course it can't be answered!
I think I see what you're saying and I do suspect that experience might be too fundamentally subjective to have a clear objective explanation, but I also think it's premature to give up on the question until we've further investigated and explained the objective correlates of consciousness or lack thereof - like blindsight, pain asymbolia, or the fact that we're talking about it right now.
And does "everything is just experiences" mean that a rock has experiences? Does it have an infinite number of different ones? Is your red, like, the same as my red, dude? Being able to convincingly answer questions like these is part of what it would mean to me to solve the Hard Problem.
Replies from: Jacobian
↑ comment by Jacob Falkovich (Jacobian) · 2023-08-25T04:04:47.728Z · LW(p) · GW(p)
By "everything is just experiences" I mean that all I have of the rock are experiences: its color, its apparent physical realness, etc. As for the rock itself, I highly doubt that it experiences anything.
As for your red being my red, we can compare the real phenomenology of it: does your red feel closer to purple or orange? Does it make you hungry or horny? But there's no intersubjective realm in which the qualia themselves of my red and your red can be compared, and no causal effect of the qualia themselves that can be measured or even discussed.
I feel that understanding that "is your red the same as my red" is a question-like sentence that doesn't actually point to any meaningful question is equivalent to understanding that HPoC is a confusion, and it's perhaps easier to start with this.
Here's a koan: WHO is seeing two "different" blues in the picture below?
↑ comment by TAG · 2023-10-08T19:42:52.171Z · LW(p) · GW(p)
By “everything is just experiences” I mean that all I have of the rock are experiences: its color, its apparent physical realness, etc.
Presumably you mean all you have epistemically... in your other comments, it doesn't sound like you are solving the HP with idealism.
↑ comment by ShardPhoenix · 2023-08-25T05:57:31.552Z · LW(p) · GW(p)
- In general how can you know whether and how much something has experiences?
- I think with things like the nature of perception you could say there's a natural incomparability because you couldn't (seemingly) experience someone else's perceptions without translating them into structures your brain can parse. But I'm not very sure on this.
↑ comment by tangerine · 2023-08-23T11:54:48.240Z · LW(p) · GW(p)
The Hard Problem doesn’t exist.
Do you believe that all your beliefs are represented only in the structure of your brain? Then changing the structure of your brain changes your beliefs; this way you could theoretically be made to believe anything, including, of course, things that are false. Some false beliefs are useful, such as some optical illusions and illusions in general, such as the belief that “you” are “experiencing” things. (I once interviewed a man who had had a stroke and reported feeling like “he wasn’t there” anymore and he would look at his own hand and say, “it’s like it’s not mine”. This caused difficulties with locomotion and knowing when “he” had to go to the bathroom, because it was hard for him to realize it was actually “his” bladder being full and “him” having to make a decision to relieve it.) It’s a useful, evolved structure in your brain that makes you believe that. But it’s technically still false.
Similarly, you could hallucinate that a dragon is standing in front of you; some people actually have such hallucinations. The rational thing to do at such a moment is to disbelieve your direct experience, based on all the other things you know about the world; you know that the evidence against the existence of dragons is overwhelming and you know hallucinations happen. However, disbelieving that what you’re experiencing is real doesn’t make you suddenly not experience it, that’s why hallucinations can be so debilitating.
Ever had déjà vu? When I have déjà vu, I get an overwhelming sense of having experienced something before and my mind starts racing, trying to explain it. I usually recognize rationally that this is probably a déjà vu, but that explanation feels very unsatisfying in the moment because the feeling of recognition is so convincing. It’s only when this sensation subsides about ten seconds later that I can put the matter to rest, assured that it was just a glitch in my brain. But what if it never subsided? What if you had a déjà vu that lasted the rest of your life? It would be very hard to ignore the constant sensation of recognition. This is exactly the kind of thing that “believing you are experiencing” (and many associated beliefs) is. In both cases it’s false, but in the latter it’s there because it’s useful.
The point is that sometimes the structure of our brains can reach a state in which certain things seem to be true, even though rationally we should disbelieve it based on all the other things we know. The evidence for the existence of the Hard Problem is extremely thin. In fact, it’s unfalsifiable—not even wrong. Everything we know about science, truth and knowledge points to the Hard Problem not existing. What actually needs to be explained is not why we experience things, but why we think that we experience things and we have perfectly good (physical) explanations for that.
Replies from: gilch, Slapstick, quetzal_rainbow, TAG, shankar-sivarajan
↑ comment by gilch · 2023-08-23T16:14:46.213Z · LW(p) · GW(p)
This is again simply ignoring the Hard Problem. Your supporting paragraphs seem both true and irrelevant. You're equivocating, conflating consciousness with self-awareness. Consciousness is not the sense-of-self. That is merely one of many things that one can be conscious of.
Replies from: tangerine, tangerine
↑ comment by tangerine · 2023-08-24T11:57:37.188Z · LW(p) · GW(p)
The burden of proof is on those who assert that the Hard Problem is real. You can say what consciousness is not, but can you say what it is? As it stands, no explanation of the Hard Problem is possible, because the Hard Problem has no criteria for what would comprise a satisfying explanation; no way to distinguish a correct explanation from an incorrect one. All real science has such criteria, yet even David Chalmers has none. Until those criteria are established, the existence of the Hard Problem will forever remain unfalsifiable, unscientific and belief in it irrational.
Unfortunately, proper criteria for explanations always involve (physical) observations and their predictions. Therefore any attempt to establish criteria for explanations of the Hard Problem is met with the criticism that, because it refers to physical aspects of consciousness, it ignores the Hard Problem. Evidently, proponents of the Hard Problem have backed themselves into a contradictory corner; the Hard Problem is unfalsifiable and any attempt to make it falsifiable makes it not the Hard Problem.
If the Hard Problem is “above” science (i.e., not science), as it seems to be, then it is above inquiry and if it’s above inquiry, why inquire?
The naked truth is that belief in the existence of the Hard Problem fetishizes mystery; it abhors actual explanation and therefore scrambles to keep its suppositions immune to it.
Belief in the Hard Problem, being unscientific and therefore not real, begs the question why such beliefs can nonetheless take root in the face of overwhelming contrary evidence, which is what my earlier post attempted to explain.
I’m experiencing just like you, but the Hard Problem doesn’t jibe at all with the rest of my beliefs (and I have seen many attempts to reconcile them, all unsuccessful). Therefore I choose to accept the benefits of the sensation of experience and accept the Easy Problem of consciousness as the overwhelmingly likely Only Problem of consciousness.
Replies from: TAG, gilch, Shiroe
↑ comment by TAG · 2023-10-08T19:29:05.960Z · LW(p) · GW(p)
You can say what consciousness is not, but can you say what it is?
The fact that we can't fully explain consciousness is a point in favour of the HP.
Of course, any statement of a problem has to state something about what it is ... but something isn't everything.
The burden of proof is on those who assert that the Hard Problem is real.
Yes, and they have arguments you haven't addressed.
As it stands, no explanation of the Hard Problem is possible, because the Hard Problem has no criteria for what would comprise a satisfying explanation; no way to distinguish a correct explanation from an incorrect one.
I use the criterion of being able to make novel predictions. We clearly don't have a solution that reaches that criterion.
Replies from: tangerine
↑ comment by tangerine · 2023-10-09T16:36:29.629Z · LW(p) · GW(p)
The fact that we can't fully explain consciousness is a point in favour of the HP.
But my question was, what exactly can’t we fully explain? What are you referring to when you say “consciousness” and what about it can’t we explain?
they have arguments you haven't addressed.
Such as?
I use the criterion of being able to make novel predictions. We clearly don't have a solution that reaches that criterion.
Agreed, but what exactly should it predict? General relativity made novel predictions when it was first formulated, but about the movement of planets and so forth, so I presume that doesn’t count as a solution to the Hard Problem of consciousness.
Replies from: TAG
↑ comment by gilch · 2023-08-25T01:19:08.758Z · LW(p) · GW(p)
Hard Problem has no criteria for what would comprise a satisfying explanation; no way to distinguish a correct explanation from an incorrect one.
I feel like most of your comment is unfair, except for this part. Let me attempt to make it more concrete for you.
Suppose a future scientist offers you technological immortality, but the procedure will physically destroy your brain over time, replacing it with synthetic parts. Your new synthetic brain won't fail from old age and unlike a biological brain, can be backed up (and reconstructed) to protect against its inevitable accidental destruction over the coming eons. Do you take his offer? What assurances do you need? If you're wrong about certain details and accept, you die (brain destroyed) so you'd better get it right.
I expect that assurances one could rationally accept would constitute a solution to the Hard Problem. But maybe you'll surprise me. This scenario is a crux for me (well, one of a few perhaps) such that were they addressed, I would either consider the Hard Problem solved, or else decide that I have no reason left to care about the Hard Problem.
The scenario has a number of assumptions that may not hold for you. But I can only guess. Can we agree on the following?
- Humans do not have ectoplasmic-ghost souls or the like. Rather the mind more directly inhabits the brain, and if it's destroyed, you have permanently died. Gradually replacing your brain with the wrong sort of synthetic parts (such as plastic) will kill you.
- The physical molecules of the brain are completely replaced by natural biological processes over time, i.e., your mind is not your brain's atoms, but rather something about their structure, and therefore a procedure like the offer could (in principle) work.
- There are physical structures, including complex (even biological) ones that are not alive in the sense of being conscious/aware. I.e., panpsychism is false.
- Automatons (chatbots?) can say they are conscious when they are not. I.e., zombies can be constructed, in principle, and a procedure like this could replace you with one. (This is not a Chalmers "p-zombie". Its brain is synthetic, thus physically distinguishable from a normal biological human.)
↑ comment by tangerine · 2023-08-25T13:59:33.672Z · LW(p) · GW(p)
Thank you for this reply, I think this helps to pin down where our disagreement comes from.
Technically I don’t disagree with your assumptions, because I think it’s equally valid to say they’re true as that they’re false, which is exactly the issue I have with them. There doesn’t seem to be a fact of the matter about them (i.e., there’s no way to experimentally distinguish a world in which any of these assumptions holds from one in which it does not), so if the existence of the Hard Problem is derived from them, then that doesn’t alleviate the issue of its unfalsifiability.
The cause of this issue is that (from my point of view) many of the words you’re using don’t have clear definitions in the domain that you’re trying to use them in. I don’t mean to be a pedant, but if we’re really trying to use language for extraordinary investigations like these, then I think precision is warranted. For now, let me just focus on the thought experiment you posed. The way I see it, it’s equivalent to the Ship of Theseus. I think what we’re ultimately trying to grapple with is how best to model reality and it seems to me that we actually already have a perfectly good model to solve the Ship of Theseus and your thought experiment, namely particle physics. If you look at the Ship of Theseus or a person’s brain or body (or a piece of text they wrote), these are collections of particles that create a causal chain to somebody saying “Hey, it’s the Ship of Theseus!” or “Hey, gilch wrote a reply!” Over time, some of those particles may get swapped for others and cause us to still use the same name or maybe not. There’s no mystery or contradiction there, it’s a bunch of particles doing their thing and names are patterns in those particles, for example in the air when we speak them or in silicon when we’re writing it on a phone.
Do we think about the world in terms of fundamental particles? No, it’s wildly impractical, so we’ve been forced to resort, through our evolution and the evolution of language, to much simpler models/heuristics. Daniel Dennett has this idea of “folk psychology”, which talks specifically about how we model other people’s behaviors, by talking about things like “belief”, “desire”, “fear” and “hope”. This model works most of the time, but it breaks down when you try to use it to model, for example, the behavior of a schizophrenic person, or the behavior of a dead person. You can extend this idea to a kind of “folk reality”, where we model the world in terms of “people”, “alive”, “dead”, “conscious”, “justice”, “love” and pretty much all other words, which can similarly break down when trying to use them to communicate about things that they’re not normally used to communicate about.
If you like, I could go into detail how this applies to each of your assumptions, but I’ll do so just for your last assumption for now. Consciousness in normal usage is a word that evolved to mean something like “able to respond appropriately to its surroundings”, so a person who is sleeping or knocked out is basically unconscious; that’s enough for practical, daily usage. Similarly, we say humans normally are conscious, other primates and mammals maybe a little less, insects maybe and plants not really, i.e., the fewer traits it has that we recognize in humans the less conscious it is; this is already a bit less practical and more academic, but it affects how we behave. (For example, vegans claim eating animals is bad, while eating plants is okay, even though they’re absurdly glossing over whether plants can feel pain, which is not clear at all.) Over time, the evolution of language (which is a product of both chance and deliberate human decisions) adapted the meaning of words like consciousness to remain a useful part of folk reality. Our intuitions about the meanings of words and in turn about reality depend on how we see these words being used as we grow up, even if they don’t model reality correctly; we always end up with somewhat mistaken intuitions, because folk reality does not model reality exactly. And now, quite suddenly, we’ve ended up in a situation where there are machines that can behave in a way that we’re only used to recognizing in humans and so there’s a lot of confusion over whether they are conscious. Again, from a particle physics perspective it’s clear what’s going on; it’s particles doing their thing like they always have. Some particles are arranged in a structure we haven’t seen before, so what? However, our folk reality model breaks down because it’s imprecise and not adapted to this new situation. That’s also not an issue in itself; language and intuitions just have to adapt. Maybe we’ll come to a consensus that they are just as conscious as we are, or maybe we’ll see them as inferior and therefore treat them with greater indifference, even though how we describe them doesn’t actually change their nature, just our perception and treatment of them.
The real problems begin when people assume that their intuitions are true and fail to recognize that our intuitions and language are models of reality (largely inherited from cultures before us who had much less experience with the world) and that they frequently don’t generalize well. So when I encounter something like the Hard Problem, I throw my intuitions about how “I really feel like I’m experiencing things, so I can’t just be an automaton” out the window, because going down that road just leads to a bunch of useless contradictions and I conclude that whatever is going on must be made possible by particles doing their thing and nothing else, at least until I encounter a better model.
As for whether I would choose to undergo the procedure, I probably would. I don’t see any meaningful difference between my brain being replaced by new synthetic or biological material. In fact, according to my intuition (perhaps mistaken), my future counterpart with a 100% biological brain would be just as much a different person from me as my alternative future counterpart with a 100% synthetic brain.
↑ comment by Shiroe · 2023-08-24T15:05:28.491Z · LW(p) · GW(p)
The burden of proof is on those who assert that the Hard Problem is real. You can say what consciousness is not, but can you say what it is?
In the sense that you mean this, this is a general argument against the existence of everything, because ultimately words have to be defined either in terms of other words or in terms of things that aren't words. Your ontology has the same problem, to the same degree or worse. But we only need to give particular examples of conscious experience, like suffering. There's no need to prove that there is some essence of consciousness. Theories that deny the existence of these particular examples are (at best) at odds with empiricism.
Therefore I choose to accept the benefits of the sensation of experience and accept the Easy Problem of consciousness as the overwhelmingly likely Only Problem of consciousness.
It's deeply unclear to me what you mean by this. If you're denying that you have phenomenal experiences like suffering (i.e. negative valences), your rational decision making should be strongly affected by this belief. In the same way that someone who has stopped believing in Hell and Heaven should change their behavior to account for this radical change in their ontology.
Replies from: tangerine↑ comment by tangerine · 2023-08-26T14:24:50.661Z · LW(p) · GW(p)
Hi, please see my reply to gilch above.
To add to that reply, an explanation only ever serves one function, namely to aid in prediction; every moment of our life, we try to achieve outcomes by predicting which action will lead to which outcome. An explanation to the Hard Problem doesn’t do that. Any state of consciousness that I try to achieve I do so with concepts related to the Easy Problem. I do have experiences (I don’t know what the word “phenomenal” would add to that), such as pain, but to the extent that I can predict and control these, I do so purely with solutions to the Easy Problem. And in my book, concepts that exist only in explanations that don’t aid in prediction are by definition not real. But the Hard Problem is even worse than that; it’s set up so that we can’t tell the difference between a correct and incorrect explanation in the first place, which means literally anything could be an explanation, which is equivalent to no explanation at all. Sure, you can choose to believe that something like panpsychism is real or that it’s not real, but because neither belief adds any predictive power, you’re better off just cutting it out, as per Occam’s Razor.
Replies from: Shiroe↑ comment by Shiroe · 2023-09-09T17:26:50.395Z · LW(p) · GW(p)
You seem to be claiming that you have experiences, but that their role is purely functional. If you were to experience all tactile sensations as degrees of being burnt alive, but you could still make predictions just as well as before, it wouldn't make any difference to you?
Replies from: tangerine↑ comment by tangerine · 2023-09-09T19:34:35.816Z · LW(p) · GW(p)
It doesn’t make sense to say that I could make predictions just as well as before if I experienced all tactile sensations as degrees of being burnt alive, because such sensations would be equivalent to predictions that I would be burning alive, which would be false and therefore interfere with my functioning. You can’t separate experience from its consequences. That’s also why philosophical zombies are impossible; if you could have a body which doesn’t experience, then it’s not going to function as normal.
If I were to experience all tactile sensations as degrees of being burnt alive, I would assume something was wrong with my body and would want to alleviate that situation by making predictions about which actions would help, wielding only concepts related to the Easy Problem. How would the Hard Problem help me in that situation?
Replies from: Shiroe↑ comment by Shiroe · 2023-09-26T05:23:57.304Z · LW(p) · GW(p)
because such sensations would be equivalent to predictions that I would be burning alive, which would be false and therefore interfere with my functioning
I don't see a necessary equivalence here. You could be fully aware that the sensations were inaccurate, or hallucinated. But it would still hurt just as much.
if you could have a body which doesn’t experience, then it’s not going to function as normal.
A human body, or any kind of body? It seems like a robot could engage in the same self-preservation behavior as a human without needing to have anything like burning sensations. I can imagine a sort of AI prosthesis for people born with congenital insensitivity to pain that would make their hand jerk away from a burning hot surface, despite them not ever experiencing pain or even knowing what it is.
Replies from: tangerine↑ comment by tangerine · 2023-09-27T19:01:42.900Z · LW(p) · GW(p)
You could be fully aware that the sensations were inaccurate, or hallucinated. But it would still hurt just as much.
The experience of hurting makes you respond as if you really were hurting; you have some voluntary control over your response by the frontal cortex’s modulation of pain signals, but it is very limited. Any control we exert over our experiences corresponds to physical interventions. The Hard Problem simply does not add anything of value here.
I can imagine a sort of AI prosthesis for people born with congenital insensitivity to pain that would make their hand jerk away from a burning hot surface, despite them not ever experiencing pain or even knowing what it is.
That you can imagine such a prosthesis does not mean that it could exist. It depends on how such a prosthesis would work exactly. I suspect that the more such a prosthesis were able to mimic the normal response, the more its wearer would experience pain, i.e., inducing the normal response is equivalent to inducing the normal experience.
Replies from: Shiroe↑ comment by Shiroe · 2023-10-13T14:21:59.768Z · LW(p) · GW(p)
Here are some cruxes, stated from what I take to be your perspective:
- That there's nothing at stake whether or not we have first-person experiences of the kind that eliminativists deny; it makes no practical difference to our lives whether we're so-called "automatons" or "zombies", such terms being only theoretical distinctions. Specifically, it should make no difference to a rational ethical utilitarian whether or not eliminativism happens to be true. Resources should be allocated the same way in either case, because there's nothing at stake.
- Eliminativism is a more parsimonious theory than non-eliminativism, and is strictly better than it for scientific purposes; eliminativism already explains all of the facts about our world, and adding so-called "first-person experiences" is just a cog which won't connect to anything else; removing it wouldn't require arbitrary double standards for the validity of evidence.
- There's no way of separating experience from functionality in a system. If an organism manifests consistent and enduring behaviors of self-preservation, goal-seeking, etc. then it must have experiences, regardless of how the organism itself happens to be constructed.
I'm looking for double cruxes now. The first two don't seem very useful to me as double cruxes, but maybe the last one is. Any ideas?
Replies from: tangerine↑ comment by tangerine · 2023-10-18T20:04:32.680Z · LW(p) · GW(p)
From my point of view, much or all of the disagreement around the existence of the Hard Problem seems to boil down to the opposition between nominalism and philosophical realism. I’ll discuss how I think this opposition applies to consciousness, but let me start by illustrating it with the example of money having value.
In one sense, the value of money is not real, because it's just a piece of paper or metal or a number in a bank’s database. We have systems in place such that we can track relatively consistently that if I work some number of hours, I get some of these pieces of paper or metal or the numbers on my bank account change in some specific way and I can go to a store and give them some of these materials or connect with the bank’s database to have the numbers decrease in some specific way, while in exchange I get a coffee or a t-shirt or whatever. But this is a very obtuse way of communicating, so we just say that “money has value” and everybody understands that it refers to this system of exchanging materials and changing numbers. So in the case of money, we are pretty much all nominalists; we say that money has value as a shorthand and in that sense the value of money is real. On the other hand, a philosophical realist would say that actually the value of money is real independently from our definition of the words. (I view this idea similarly to how Eliezer Yudkowsky talks about buckets being “magical” in this story [LW · GW].)
In the case of the value of money, philosophical realism does not seem to be a common position. However, when it comes to consciousness, the philosophical realist position seems much more common. This strikes me as odd, since both value and consciousness appear to me to originate in the same way; there is some physical system which we, through the evolution of language and culture generally, come to describe with shorthands (i.e., words), because reality is too complicated to talk about exhaustively and in most practical matters we all understand what we mean anyway. However, for philosophical realists, such words appear to take on a life of their own, perhaps because existing words are simply passed down to younger generations as if they were the only way to think about the world, without mentioning that those words happen to conceptualize the world in one specific way out of an infinite and diverse number of ways and, importantly, that that conceptualization oversimplifies to a large extent. This oversimplification is not something we can escape. Any language, including any internal one, has to cope with the fact that reality is too complicated to capture in an exhaustive way. Even if we’re rationalists, we’re severely bounded ones. We can’t see reality for what it is and compare it to how we think about it to see where the differences are; we only see our own thoughts.
Following the Hard Problem through to its logical conclusions seems to lead to contradictions such as the interaction problem. None of the solutions proposed by myself or any Hard Problem enthusiast dissolve these contradictions in a way that satisfies me; therefore I conclude that my mind’s conception of my own consciousness is flawed. I’ll nonetheless stick to that conception because it’s useful, but I have no illusions that it is universally correct; this last step is one that proponents of the Hard Problem seem not to be prepared to take. From my point of view it looks like they conclude that, because their minds conceptualize the world in a certain way, this conception must somehow correspond exactly to reality. However, the map is not the territory.
P.S. I would not call myself an eliminativist. I consider “experience” and “consciousness” and related terms as real as the value of money.
Replies from: Shiroe↑ comment by Shiroe · 2023-10-20T23:38:16.719Z · LW(p) · GW(p)
I appreciate hearing your view; I don't have any comments to make. I'm mostly interested in finding a double crux.
This isn't really a double crux, but it could help me think of one:
If someone becomes convinced that there isn't any afterlife, would this rationally affect their behavior? Can you think of a case where someone believed in Heaven and Hell, had acted rationally in accordance with that belief, then stopped believing in Heaven and Hell, but still acted just the same way as they did before? We're assuming their utility function hasn't changed, just their ontology.
Replies from: tangerine↑ comment by tangerine · 2023-10-21T07:08:24.808Z · LW(p) · GW(p)
Well, for me, one crux is this question of nominalism vs philosophical realism. One way to investigate this question for yourself is to ask whether mathematics is invented (nominalism) or discovered (philosophical realism). I don’t often like to think in terms of -isms, but I have to admit I fall pretty squarely in the nominalist camp, because while concepts and words are useful tools, I think they are just that: tools, that we invented. Reality is only real in a reductionist sense; there are no people, no numbers and no consciousness, because those are just words that attempt to cope with the complexity of reality, so we just shouldn’t take them so seriously. If you agree with this, I don’t see how you can think the Hard Problem is worth taking seriously. If you disagree, I’m interested to see why. If you could convince me that there is merit to the philosophical realist position, I would strongly update towards the Hard Problem being worth taking seriously.
Replies from: TAG, Shiroe↑ comment by TAG · 2023-10-21T15:16:32.489Z · LW(p) · GW(p)
Reality is only real in a reductionist sense; there are no people, no numbers and no consciousness, because those are just words that attempt to cope with the complexity of reality,
That isn't what reductionism says. Reduction is a form of explanation. It is not elimination. What is reductively explained still exists -- heat still exists -- it is just not different to its reduction base.
If you agree with this, I don’t see how you can think the Hard Problem is worth taking seriously.
The hard problem emerges from the requirement to explain consciousness reductively.
Examples of non-physical ontologies include dualism, panpsychism and idealism. These are not faced with the Hard Problem, as such, because they are able to say that subjective qualia just are what they are, without facing any need to offer a reductive explanation of them.
Replies from: tangerine↑ comment by tangerine · 2023-10-22T08:28:44.854Z · LW(p) · GW(p)
heat still exists
Saying that something “exists” is more subtle than that. In everyday life we don’t have to be pedantic about it, but in this discussion, I think we do.
There are lots of different ontologies which explain how certain parts of reality work. The concept of heat is one that most people include in their ontologies, because it’s just very useful most of the time, though not always. For example, there’s not much sense in asking what the temperature is of a single particle. Virtually every ontology breaks down in such a way at some point, which is to say that in certain situations it does not describe what happens in reality closely enough to be of practical value in that situation.
In pagan cultures, there were ontologies containing gods which ostensibly influenced certain parts of reality. There’s a storm? Zeus must be angry. To these cultures, Zeus existed, because it seemed to explain what was happening. It wasn’t a very good explanation from our perspective because it didn’t bestow great power in predicting storms.
But also in modern science, we have had and still do have theories which explain reality only partially. Newtonian mechanics describes the world very accurately, but not quite exactly. Einstein’s general relativity filled in some of the gaps, but we’re pretty sure that that is not exactly right either, because it’s not a quantum theory, which we think a better theory should be. Given that we know our theories are wrong, does inertia exist? Does spacetime exist? Do points of infinite density exist?
The hard problem emerges from the requirement to explain consciousness reductively.
You could similarly say that any valid ontology has the requirement to explain heat reductively, but then the pagan could also say that any ontology has the requirement to explain Zeus reductively. Seeing reality through the lens of ontologies, which we all have no choice but to do, colors the perception of what you think exists and needs to be explained. True, “heat” needs to be explained insofar as it does correspond to reality, but we might Pareto-improve our understanding of reality by using an entirely different ontology which doesn’t contain the concept of heat at all, which is pretty much what happened to the concept of Zeus. The concept of consciousness must be held to the same standard. We have to take a step back from our ontologies and ask what parts are actually useful and what exactly it means for them to be useful. The litmus test of modern science is that it must add predictive power. The problem is that consciousness, as described by the Hard Problem, is an ontological outgrowth (derived analytically from an existing ontology) that does not have any predictive power. Even worse, consciousness as described by the Hard Problem is unfalsifiable, meaning it has been by definition pre-empted from having predictive power (otherwise it could potentially have been falsified by comparing its predictions to some outcome), so why should I include it in my ontology?
Replies from: TAG↑ comment by TAG · 2023-10-29T15:22:51.251Z · LW(p) · GW(p)
There are lots of different ontologies which explain how certain parts of reality work. The concept of heat is one that most people include in their ontologies, because it’s just very useful most of the time, though not always. For example, there’s not much sense in asking what the temperature is of a single particle. Virtually every ontology breaks down in such a way at some point,
But that's a completely general argument. If the worst thing you can say about phenomenal consciousness is that it is occasionally inapplicable, it is no worse off than heat.
which is to say that in certain situations it does not describe what happens in reality closely enough to be of practical value in that situation.
Yep. Hardly a damning indictment.
In pagan cultures, there were ontologies containing gods which ostensibly influenced certain parts of reality. There’s a storm? Zeus must be angry. To these cultures, Zeus existed, because it seemed to explain what was happening. It wasn’t a very good explanation from our perspective because it didn’t bestow great power in predicting storms.
Note the difference between the phenomenon being explained, the explanandum, and the explanation. There is not much doubt that thunder and lightning exist, but there is much doubt that Zeus or Thor causes them.
But also in modern science, we have had and still do have theories which explain reality only partially. Newtonian mechanics describes the world very accurately, but not quite exactly. Einstein’s general relativity filled in some of the gaps, but we’re pretty sure that that is not exactly right either, because it’s not a quantum theory, which we think a better theory should be. Given that we know our theories are wrong, does inertia exist? Does spacetime exist? Do points of infinite density exist?
That's another completely general argument.
The hard problem emerges from the requirement to explain consciousness reductively.
You could similarly say that any valid ontology has the requirement to explain heat reductively, but then the pagan could also say that any ontology has the requirement to explain Zeus reductively.
That's the wrong way round: Zeus is posited, doubtfully, to explain something for which there is clear evidence. Consciousness is equivalent to the thunder, not the thunder god (particularly under a minimal definition ... it's important not to get misled by the idea that qualia are necessarily nonphysical or something).
Seeing reality through the lens of ontologies, which we all have no choice but to do, colors the perception of what you think exists and needs to be explained.
In general. Still not a specific point against consciousness.
True, “heat” needs to be explained insofar as it does correspond to reality,
It needs to be explained inasmuch as it appears to exist. Corresponding to reality is setting the bar far too high -- we won't know what is real until we have complete explanations. Rainbows are a useful example: they are worth explaining, and we have an explanation, and the explanation tells us they don't literally exist as arches in the sky.
but we might Pareto-improve our understanding of reality by using an entirely different ontology which doesn’t contain the concept of heat at all, which is pretty much what happened to the concept of Zeus.
That only tells me consciousness might not exist. It doesn't tell me that the problem of consciousness is easy or a non problem.
The concept of consciousness must be held to the same standard.
Who said otherwise?
We have to take a step back from our ontologies and ask what parts are actually useful and what exactly it means for them to be useful.
Useful for what?
The litmus test of modern science is that it must add predictive power.
The litmus test of philosophy is that it must tell the truth. If prediction isn't available, you should accept that. You shouldn't argue against X on the basis that it prevents prediction, because you have no reason to believe that the universe is entirely predictable. Science is based on the hope that things are predictable and comprehensible, but not on the certainty. They are falsifiable claims.
The problem is that consciousness, as described by the Hard Problem, is an ontological outgrowth (derived analytically from an existing ontology) that does not have any predictive power.
Why? Where is that proven?
Even worse, consciousness as described by the Hard Problem is unfalsifiable, meaning it has been by definition pre-empted from having predictive power
Consciousness as described by the HP is not necessarily epiphenomenal.
Now, Chalmers set out the HP, and Chalmers *might* be an epiphenomenalist, but it doesn't follow that the HP requires epiphenomenalism (the field is rife with that kind of misunderstanding).
Also, the argument from epiphenomenalism is quite different to the argument from nominalism.
Replies from: tangerine↑ comment by tangerine · 2023-10-29T17:42:55.009Z · LW(p) · GW(p)
But that's a completely general argument. If the worst thing you can say about phenomenal consciousness is that it is occasionally inapplicable, it is no worse off than heat.
Unlike heat, I can’t imagine any situation in which consciousness as described by the Hard Problem is applicable. Can you give me a situation in which you can make better predictions using the concept?
Note the difference between the phenomenon being explained, the explanandum, and the explanation. There is not much doubt that thunder and lightning exist, but there is much doubt that Zeus or Thor causes them.
Zeus is posited, doubtfully, to explain something for which there is clear evidence. Consciousness is equivalent to the thunder, not the thunder god (particularly under a minimal definition ... it's important not to get misled by the idea that qualia are necessarily nonphysical or something).
We can agree that thunder and lightning exist and that Zeus and Thor do not, but not that consciousness exists as posed by the Hard Problem. To resolve that disagreement we need to agree on what it means for something to exist. I proposed this litmus test of additive predictive power.
The litmus test of philosophy is that it must tell the truth. If prediction isn't available, you should accept that. You shouldn't argue against X on the basis that it prevents prediction, because you have no reason to believe that the universe is entirely predictable. Science is based on the hope that things are predictable and comprehensible, but not on the certainty. They are falsifiable claims.
How does one test that a statement is true (or at least not false)? I accept that there may be things that are true that I can’t know are true, but there is an infinite number of such possible things. How would I decide which to believe and which not? And if I did, what would that get me?
The problem is that consciousness, as described by the Hard Problem, is an ontological outgrowth (derived analytically from an existing ontology) that does not have any predictive power.
Why? Where is that proven?
Consciousness as described by the Hard Problem is not derived from any observation that can be independently corroborated. When you claim to observe your own consciousness, you are not observing reality directly, you are observing your own ontology. Your ontology contains consciousness as described by the Hard Problem and that is why you’re seeing it.
Replies from: TAG↑ comment by TAG · 2023-10-31T16:20:29.913Z · LW(p) · GW(p)
Unlike heat, I can’t imagine any situation in which consciousness as described by the Hard Problem is applicable.
Applicable to what? As I said, it is an explanandum, not an explanation. We have prima facie evidence of consciousness because we are conscious. Also, I don't buy that "consciousness as described by the Hard Problem" is epiphenomenal.
Can you give me a situation in which you can make better predictions using the concept?
I can give you situations where I can make equally good predictions of external behaviour. Pain is a quale: if I feel pain, I grimace, go "ouch!", etc.
Note the difference between the phenomenon being explained, the explanandum, and the explanation. There is not much doubt that thunder and lightning exist, but there is much doubt that Zeus or Thor causes them. Zeus is posited, doubtfully, to explain something for which there is clear evidence. Consciousness is equivalent to the thunder, not the thunder god (particularly under a minimal definition … it's important not to get misled by the idea that qualia are necessarily nonphysical or something).
We can agree that thunder and lightning exist and that Zeus and Thor do not, but not that consciousness exists as posed by the Hard Problem.
What does "as posed by the HP" mean? Again, the HP does not state that consciousness is epiphenomenal or nonphysical.
To resolve that disagreement we need to agree on what it means for something to exist. I proposed this litmus test of additive predictive power.
I propose that it is predictive power, or explanatory power, or prima facie evidence.
You can't just have a closed loop of explanatory posits explaining each other. Explanations are explanations of something.
The litmus test of philosophy is that it must tell the truth. If prediction isn't available, you should accept that. You shouldn't argue against X on the basis that it prevents prediction, because you have no reason to believe that the universe is entirely predictable. Science is based on the hope that things are predictable and comprehensible, but not on the certainty. They are falsifiable claims.
How does one test that a statement is true (or at least not false)?
It's complicated. But persistent failure to explain thing X using method Y is a hint of falsehood.
I accept that there may be things that are true that I can’t know are true, but there is an infinite number of such possible things. How would I decide which to believe and which not?
No one is asking you to believe in things which are invisible and non predictive.
And if I did, what would that get me?
The problem is that consciousness, as described by the Hard Problem, is an ontological outgrowth (derived analytically from an existing ontology) that does not have any predictive power.
Why? Where is that proven?
Consciousness as described by the Hard Problem is not derived from any observation that can be independently corroborated.
No. So what? If you had certain a priori knowledge that everything is objective, that would be a problem for consciousness. If there are fundamentally subjective perceptions, that's a problem for physicalism.
You have evidence of your own perceptions, whether or not you can corroborate them. You can make predictions from your own perceptions.
Insisting on objective evidence is looking for the key under the lamppost.
When you claim to observe your own consciousness, you are not observing reality directly,
When you claim to observe the outside world, you are not observing reality directly.
you are observing your own ontology. Your ontology contains consciousness as described by the Hard Problem and that is why you’re seeing it.
Unless it's really there. You can't assume that something necessarily doesn't exist in the territory, just because it does feature in the map.
Well, if you did, scientific realism would be dead.
Replies from: tangerine↑ comment by tangerine · 2023-10-31T19:18:28.550Z · LW(p) · GW(p)
As I said, it is an explanandum, not an explanation. We have prima facie evidence of consciousness because we are conscious.
I believe consciousness exists and that we both have it, but I don’t think either of us have the kind of consciousness that you claim you have, namely consciousness as described by the Hard Problem. By consciousness as described by the Hard Problem I mean the kind of consciousness that is not fully explained by solutions to the Easy Problem.
Why do you believe that solutions to the Easy Problem are not sufficient? Conversely, why do you believe that heat is a sufficient explanation for what happens to one’s finger when touching fire? What does the latter do that the former does not? How do you in general decide that an explanation is sufficient?
Replies from: TAG↑ comment by TAG · 2023-11-01T09:44:45.495Z · LW(p) · GW(p)
I believe consciousness exists and that we both have it, but I don’t think either of us have the kind of consciousness that you claim you have, namely consciousness as described by the Hard Problem.
I believe that consciousness as described by the HP is just phenomenal consciousness...not epiphenomenal consciousness. Phenomenal consciousness is actually mundane...it's just how things seem to you, how it feels to be sitting in a seat, looking at a screen, reading these words, right here, right now.
I have that, and I'm pretty sure you do too.
By consciousness as described by the Hard Problem I mean the kind of consciousness that is not fully explained by solutions to the Easy Problem.
The kind of consciousness the HP is about isn't defined as inexplicable. It's defined as phenomenal ... and then noticed as being unexplained.
Why do you believe that solutions to the Easy Problem are not sufficient?
They are not sufficient to explain phenomenal consciousness...i.e., my own experience. They may well be sufficient to explain others' behaviour.
Conversely, why do you believe that heat is a sufficient explanation for what happens to one’s finger when touching fire?
What happens is an objective process (my finger gets hotter) and a subjective sensation. The latter is not explained by the reductive explanations of heat. Reductive explanations are able to predict and test their explananda. One can predict a temperature, and confirm the prediction, because one can measure temperature.
Replies from: tangerine↑ comment by Shiroe · 2023-10-24T13:08:37.967Z · LW(p) · GW(p)
I don't think I fall into either camp because I think the question is ambiguous. It could be talking about the natural structure of space and time ("mathematics") or it could be talking about our notation and calculation methods ("mathematics"). The answer to the question is "it depends what you mean".
The nominalist vs realist issue doesn't appear very related to my understanding of the Hard Problem, which is more about the definition of what counts as valid evidence. Eliminativism says that subjective observations are problematic. But all observations are subjective (first person), so defining what counts as valid evidence is still unresolved.
Replies from: tangerine↑ comment by tangerine · 2023-10-24T16:47:08.087Z · LW(p) · GW(p)
the natural structure of space and time ("mathematics")
What exactly do you mean by this? That nature is mathematical?
all observations are subjective (first person)
This sounds like it could be a double crux, because if I believed this the Hard Problem would follow trivially, but I don’t believe it.
Replies from: Shiroe↑ comment by Shiroe · 2023-10-25T18:02:05.149Z · LW(p) · GW(p)
You don't believe that all human observations are necessarily made from a first-person viewpoint? Can you give a counter-example? All I can think of are claims that involve the paranormal or supernatural.
Replies from: TAG, tangerine↑ comment by TAG · 2023-10-25T18:26:07.694Z · LW(p) · GW(p)
Observations being made from a first-person perspective is a rather trivial definition of subjective, because it's quite possible for different observers to agree on observations. (And for some aspects of the perspective to be predictable from objective facts.) The forms of subjectivity that count are where they disagree, or where they can't even express their perceptions.
Replies from: Shiroe↑ comment by tangerine · 2023-10-25T21:12:03.202Z · LW(p) · GW(p)
Like TAG said, in a trivial sense human observations are made from a first-person, subjective viewpoint. But all of these observations are also happening from a third-person perspective, just like the rest of reality. The way I see it, the third-person perspective is basically the default, i.e., reality is like a list of facts, from which the first-person view emerges. Then of course the question is, how is that emergence possible? I can understand the intuition that the third-person and first-person view seem to be fundamentally different, but I think of it this way: all the thoughts you think and the statements you make are happening in reality and the structure of that reality determines your thoughts. This is where the illusion arguments become relevant; illusions, such as optical ones, demonstrate clearly that you can be made to believe things that are wrong because of convenience or simply brain malfunction. Changing the configuration of your brain matter can make you believe absolutely anything. The belief in the first-person perspective has evolved because it’s just very useful for survival and you can’t choose to disbelieve what your brain makes you believe.
Given the above, to say that the first-person perspective is fundamentally different seems like the more supernatural claim to me.
Replies from: TAG, Shiroe↑ comment by TAG · 2023-11-01T10:59:01.966Z · LW(p) · GW(p)
the thoughts you think and the statements you make are happening in reality and the structure of that reality determines your thoughts.
The HP is about sensory qualities, not thoughts. Are you saying that we have, today, a theory which can predict the nature of sensory qualities from objective facts? (Which would be a solution to the HP, which would imply there ever was an HP.)
Replies from: tangerine↑ comment by tangerine · 2023-11-01T14:09:49.052Z · LW(p) · GW(p)
Are you saying that we have, today, a theory which can predict the nature of sensory qualities from objective facts?
Yes, for example, if blood flow to the brain is decreased, you can use that to correctly predict a decrease in consciousness. If I show you a red piece of paper, you will experience red, if the paper is green, you experience green, etc.
Replies from: TAG↑ comment by TAG · 2023-11-01T16:29:57.171Z · LW(p) · GW(p)
Now, how about the hard problems: can you predict novel qualia? Can you predict non human qualia? Can you explain why red looks like that?
(And are you admitting that there ever was a problem?)
Replies from: tangerine↑ comment by tangerine · 2023-11-01T16:49:01.513Z · LW(p) · GW(p)
Sure, change the neural patterns in a person’s brain and they’ll get new experiences. As far as non-humans are concerned, if you punch them in the face they’ll experience pain and fear or anger. Red looks like that because that’s what we mean when we say red. If a cup breaks, can you explain where its cupness has gone?
Replies from: TAG↑ comment by TAG · 2023-11-02T14:02:08.399Z · LW(p) · GW(p)
Can you really not imagine why I put forward those particular examples?
Sure, change the neural patterns in a person’s brain and they’ll get new experiences.
What new experiences? That's the hard problem.
As far as non-humans are concerned, if you punch them in the face they’ll experience pain and fear or anger.
Yes, non human animals have some experiences in common with humans. They also have some that are different, like dolphin sonar. That's the other hard problem?
Could you really not see what I was getting at?
And are you now admitting that there ever was a problem?
Replies from: tangerine↑ comment by tangerine · 2023-11-02T17:30:16.740Z · LW(p) · GW(p)
>What new experiences? That's the hard problem.
Sure, that’s a hard problem, but it’s not the hard problem. You can go through the usual scientific process and identify what neural patterns correlate with which experiences, but that’s all doable with solutions to the Easy Problem.
>Yes, non human animals have some experiences in common with humans. They also have some that are different, like dolphin sonar. That's the other hard problem?
Again, sure, a hard problem, but to explain such things you can go through the usual scientific process and come up with new ontologies to describe new kinds of experiences.
In contrast, the problem with the Hard Problem is that you can’t even begin the scientific process. It looks to me like what you’re trying to get at is that, for example, if there is a cup, we can both acknowledge that there are physical constituents that make up the cup, but you seem to posit that in addition to this there is a “cupness” to the cup. This is basically the essentialist position, which is related to philosophical realism.
In terms of consciousness, you seem to be saying that there is something it is like to be conscious, in addition to what the brain is doing from an objective standpoint. I deny that this is the case and therefore I deny that there is a problem that needs explanation. What does need explanation is why some people such as yourself claim that it requires an explanation, which I have tried to explain earlier.
Replies from: TAG↑ comment by TAG · 2023-11-02T19:44:36.757Z · LW(p) · GW(p)
You can go through the usual scientific process and identify what neural patterns correlate with which experiences
Why would you? The point of using reductive explanation is that it identifies phenomenal consciousness with neural activity, and therefore supports physicalism. On the other hand, you would still be able to find correlations in a universe where dualism holds.
You can't prove that physicalism, or anything else, is true just by assuming it.
In contrast, the problem with the Hard Problem is that you can’t even begin the scientific process.
So what? You can't assume that only things you want to explain in a particular way exist. Why would the universe care?
You seem to be assuming that science/physicalism is unfalsifiable -- that if something defies scientific epistemology or physical ontology, then it can be dismissed for that reason.
In terms of consciousness, you seem to be saying that there is something it is like to be conscious, in addition to what the brain is doing from an objective standpoint.
I'm not positing that there is: I have subjective conscious experience because I'm a subject. I'm not looking at myself from the outside. Are you?
Qualia don't go away when you stop believing in them: pains still hurt , tomatoes are still red.
In terms of consciousness, you seem to be saying that there is something it is like to be conscious, in addition to what the brain is doing from an objective standpoint
I am saying that there is something that is not, currently, explained from an objective viewpoint. It's not the problem that assumes a non-physicalist ontology, it's the lack of a solution that implies it.
Replies from: tangerine↑ comment by tangerine · 2023-11-02T20:55:08.211Z · LW(p) · GW(p)
>Why would you? The point of using reductive explanation is that it *identifies* phenomenal consciousness with neural activity, and therefore supports physicalism. On the other hand, you would still be able to find correlations in a universe where dualism holds
You asked how changing neural patterns in a person’s brain can be linked to which experiences. You can use the scientific process to establish those links insofar as they can be linked.
There is no possible universe in which dualism holds due to the interaction problem, unless you use a very narrow definition of what’s physical. (For example, I have encountered people who claimed that light is not physical. That’s fair, but that’s not the definition of physical that I or the vast majority of physicists and scientists use.)
>So what? You can't assume that only things you want to explain in a particular way exist. Why would the universe care?
The point is that you can’t say anything meaningful about things you can’t explain using the scientific process. You can’t even say they exist. They may well exist, but you can’t tell something that doesn’t exist apart from something that can’t be explained scientifically. The scientific process is not just a particular way to explain things; indeed, the universe does not care to what degree you can know things; it just so happens that falsifying theories through predictions is the only way to know things. If horoscopes or dowsing rods were a way to know what’s true they would be science, but they aren’t so they're not.
>I'm not positing that there is: I have subjective conscious experience because I'm a subject. I'm not looking at myself from the outside. Are you?
You are an object. Of course it looks like you’re a subject because that’s what your brain (i.e., “you”) looks like to that brain.
>I am saying that there is something that is not, currently, explained from an objective viewpoint. I've said so several times.
How do you decide whether a candidate explanation is sufficient to explain this something?
Replies from: Mitchell_Porter, TAG↑ comment by Mitchell_Porter · 2023-11-03T01:25:00.863Z · LW(p) · GW(p)
There is no possible universe in which dualism holds due to the interaction problem
Hopefully making this point won't derail your discussion:
Quantum mechanics is not deterministic. This is a gap which dualists can use. The major names here might be John Eccles (in neuroscience) and Henry Stapp (in physics).
Replies from: gilch, tangerine↑ comment by tangerine · 2023-11-03T09:28:50.851Z · LW(p) · GW(p)
This is a gap which dualists can use.
How could dualists use a random process?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2023-11-03T10:16:27.967Z · LW(p) · GW(p)
How could dualists use a random process?
If a theory says that something is fundamentally random - e.g. when a nucleus undergoes radioactive decay - then the event had no cause. It just happened. This is an opportunity to extend the theory by introducing a cause. In these dualistic quantum mind theories, a nonphysical mind is added to quantum mechanics as an additional causal factor that determines some of the randomness.
Replies from: tangerine↑ comment by tangerine · 2023-11-06T18:04:09.162Z · LW(p) · GW(p)
This is an opportunity to extend the theory by introducing a cause. In these dualistic quantum mind theories, a nonphysical mind is added to quantum mechanics as an additional causal factor that determines some of the randomness.
Firstly, how would we know that the correct way to extend the theory was to introduce a nonphysical mind as a cause? How would we tell the difference between the validity of this hypothesis and that of the infinite other possible causes?
Secondly, what is the difference between something physical and nonphysical? I hope I can assume that you agree that if something exists, then it behaves in some way. It is then up to us to try to describe that behavior as far as we can. Whether or not something is physical seems meaningless at this point. Quarks might as well be considered supernatural, magical, nonphysical objects whose behavior we happen to be able to describe, including how our mundane, physical reality emerges from them.
Supernatural, magical and nonphysical are contradictions in terms unless one decides on some arbitrary distinction between behaviors that are such and those that are not, because they will regardless behave in some way and we can predict that behavior insofar as we can describe it.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2023-11-07T08:04:08.322Z · LW(p) · GW(p)
what is the difference between something physical and nonphysical?
In naive, pre-scientific, pre-philosophical experience, there's a world of things that we know through the senses, and a world of our own thoughts and feelings that we know in some other way. That is the root of physical versus non-physical, or matter versus mind.
Once science and philosophy get involved, the dividing line between physical and nonphysical can shift away from its naive starting point. Idealist philosophy can try to claim everything for the mind, physical science can try to claim everything for matter.
In the case of these quantum mind theories which have a Cartesian kind of dualism (mind and matter as distinct kinds of "substance"), there is no attempt to assimilate the mental world of thoughts and feelings, to the material world of things.
For example, in the Eccles theory, apparently thoughts and feelings in some way set the probabilities of quantum events in the synapse, and that's how the interaction of substances occurs. Thoughts and feelings may fairly be called non-physical in such a theory, because there is no attempt to identify them with attributes of the physical brain. The mental realm is made of thoughts and feelings, the physical realm is made of particles with mass and spin, they are separate kinds of entity that interact in a specific way, and that's it.
How would we tell the difference between the validity of this hypothesis and that of the infinite other possible causes?
Any theory (whether dualist, monist, or something else) that includes both mind and matter, is constrained by two kinds of data: introspective observation of thoughts and feelings, and physical observation of the material world. So you test it against the facts, like any other theory. Facts are not always easy to ascertain, they may be ambiguous, disputed, or denied, but they are still the touchstone of truth.
↑ comment by TAG · 2023-11-03T16:29:38.132Z · LW(p) · GW(p)
You asked how changing neural patterns in a person’s brain can be linked to which experiences. You can use the scientific process to establish those links insofar as they can be linked.
Which isn't enough to exclude dualism.
There is no possible universe in which dualism holds due to the interaction problem,
There are lots of logically possible universes where the interaction problem doesn't apply.
If there is complete determinism, or physical closure, then interaction implies overdetermination. But complete determinism and physical closure aren't logical implications of just having some sort of physics.
So what? You can’t assume that only things you want to explain in a particular way exist. Why would the universe care?
The point is that you can’t say anything meaningful about things you can’t explain using the scientific process. You can’t even say they exist. They may well exist, but you can’t tell something that doesn’t exist
Sure you can. I have qualia right now, and so do you.
apart from something that can’t be explained scientifically. The scientific process is not just a particular way to explain things; indeed, the universe does not care to what degree you can know things; it just so happens that falsifying theories through predictions is the only way to know things.
Of course not. People knew things before Francis Bacon.
If horoscopes or dowsing rods were a way to know what’s true they would be science, but they aren’t so they’re not.
Horoscopes and dowsing rods aren't the only alternative to science.
I’m not positing that there is: I have subjective conscious experience because I’m a subject. I’m not looking at myself from the outside. Are you?
You are an object.
I'm an object to you, and a subject to me. You can doubt my qualia, but you can't doubt your own. You would notice your own if you did not insist on looking at yourself from the outside.
Of course it looks like you’re a subject because that’s what your brain (i.e., “you”) looks like to that brain.
Of course it looks to you like I'm an object because that’s what my brain looks like to your brain.
So far, so symmetrical.
You're not showing that the object perspective is the only possible one. You could show that by showing that all subjective phenomena reduce to objective ones, since reduction is asymmetrical. But that involves actually solving the HP, not just writing down correlations.
I am saying that there is something that is not , currently, explained from an objective viewpoint. I’ve said so several times.
How do you decide whether a candidate explanation is sufficient to explain this something?
Using the same criteria I apply to anything else, such as falsifiability and the ability to make novel predictions. You are the one who is special-pleading for a lower bar.
Replies from: tangerine↑ comment by tangerine · 2023-11-03T17:43:32.580Z · LW(p) · GW(p)
How do you decide that an explanation specifically for this something (that is not currently explained from an objective viewpoint) is falsified?
Replies from: TAG↑ comment by TAG · 2023-11-04T15:08:51.166Z · LW(p) · GW(p)
If it mispredicts a quale, I suppose. Of course, I don't know how an equation describes a quale, and I also don't know how to build a qualiometer. But then I'm not on the side that thinks the HP can be solved by ordinary scientific means.
Replies from: tangerine↑ comment by tangerine · 2023-11-07T10:05:06.066Z · LW(p) · GW(p)
I don't know how an equation describes a quale, and I also don't know how to build a qualiometer.
When you find an explanation, how will you know that that was the explanation you were looking for?
If as you say you don’t know in advance how to describe qualia, that means you won’t be able to recognize that an explanation actually describes qualia, which in turn means you don’t actually know what you mean when you talk about qualia.
If as you say you don’t know in advance how to measure qualia, that means the explanation’s predictions can’t be tested against observations because we won’t know whether we are actually measuring qualia, which in turn means any explanation is a priori unfalsifiable.
You need to know in advance how to describe and measure what you’re seeking to explain in such a way that a third party can use those descriptions and measurements to falsify an explanation, otherwise the falsity of any explanation depends on your personal sensibilities; somebody else may have different sensibilities and come to an equally legitimate yet contradictory decision. Presumably, we are in a shared reality where it is an objective matter of fact that we either have qualia or we don’t; it can’t be subjectively true and false at the same time, depending on who you are.
I’m not saying qualia don’t exist, but I am saying that without objective descriptions of qualia and the ability to measure them objectively we can’t tell the difference between qualia and something that doesn’t exist.
Replies from: tslarm, TAG↑ comment by tslarm · 2023-11-07T10:45:00.582Z · LW(p) · GW(p)
That's the thing, though -- qualia are inherently subjective. (Another phrase for them is 'subjective experience'.) We can't tell the difference between qualia and something that doesn't exist, if we limit ourselves to objective descriptions of the world.
Replies from: TAG, tangerine↑ comment by tangerine · 2023-11-07T18:05:25.555Z · LW(p) · GW(p)
That's the thing, though -- qualia are inherently subjective. (Another phrase for them is 'subjective experience'.) We can't tell the difference between qualia and something that doesn't exist, if we limit ourselves to objective descriptions of the world.
That doesn’t mean qualia can be excused and are to be considered real anyway. If we don’t limit ourselves to objective descriptions of the world then anyone can legitimately claim that ghosts exist because they think they’ve seen them, or similarly that gravity waves are transported across space by angels, or that I’m actually an attack helicopter even if I don’t look like one, or any other unfalsifiable claim, including the exact opposite claims, such as that qualia actually don’t exist. You won’t be able to disagree on any grounds except that you just don’t like it, because you sacrificed the assumptions to do so in order to support your belief in qualia.
Replies from: TAG, tslarm↑ comment by TAG · 2023-11-07T21:17:26.392Z · LW(p) · GW(p)
The claim is that there is a hard problem...that qualia exist enough to need explaining...not that they are ultimately real.
At one time, the existence of meteorites was denied because it didn't fit with what people "knew" to be true.
There's a problem in taking scattered subjective reports as establishing some conclusion definitively...but there's an equal and opposite problem in rejecting reports because they don't fit a prevailing dogma.
↑ comment by tslarm · 2023-11-08T03:40:28.913Z · LW(p) · GW(p)
That doesn’t mean qualia can be excused and are to be considered real anyway. If we don’t limit ourselves to objective descriptions of the world then anyone can legitimately claim that ghosts exist because they think they’ve seen them, or similarly that gravity waves are transported across space by angels, or that I’m actually an attack helicopter even if I don’t look like one, or any other unfalsifiable claim, including the exact opposite claims, such as that qualia actually don’t exist. You won’t be able to disagree on any grounds except that you just don’t like it, because you sacrificed the assumptions needed to do so in order to support your belief in qualia.
Those analogies don't hold, because you're describing claims I might make about the world outside of my subjective experience ('ghosts are real', 'gravity waves are carried by angels', etc.). You can grant that I'm the (only possible) authority on whether I've had a 'seeing a ghost' experience, or a 'proving to my own satisfaction that angels carry gravity waves' experience, without accepting that those experiences imply the existence of real ghosts or real angels.
I wouldn't even ask you to go that far, because -- even if we rule out the possibility that I'm deliberately lying -- when I report those experiences to you I'm relying on memory. I may be mistaken about my own past experiences, and you may have legitimate reasons to think I'm mistaken about those ones. All I can say with certainty is that qualia exist, because I'm (always) having some right now.
I think this is one of those unbridgeable or at least unlikely-to-be-bridged gaps, though, because from my perspective you are telling me to sacrifice my ontology to save your epistemology. Subjective experience is at ground level for me; its existence is the one thing I know directly rather than inferring in questionable ways.
Replies from: tangerine↑ comment by tangerine · 2023-11-08T09:17:41.588Z · LW(p) · GW(p)
Those analogies don't hold, because you're describing claims I might make about the world outside of my subjective experience ('ghosts are real', 'gravity waves are carried by angels', etc.).
The analogies do hold, because you don’t get to do special pleading and claim ultimate authority about what’s real inside your subjective experience any more than about what’s real outside of it. Your subjective experience is part of our shared reality, just like mine.
People are mistaken all the time about what goes on inside their mind, about the validity of their memories, or about the real reasons behind their actions. So why should I take at face value your claims about the validity of your thoughts, especially when those thoughts lead to logical contradictions?
Replies from: tslarm↑ comment by tslarm · 2023-11-12T17:12:55.577Z · LW(p) · GW(p)
I think we're mostly talking past each other, but I would of course agree that if my position contains or implies logical contradictions then that's a problem. Which of my thoughts lead to which logical contradictions?
Replies from: tangerine↑ comment by tangerine · 2023-11-12T21:12:41.005Z · LW(p) · GW(p)
Let’s say the Hard Problem is real. That means solutions to the Easy Problem are insufficient, i.e., the usual physical explanations.
But when we speak about physics, we’re really talking about making predictions based on regularities in observations in general. Some observations we could explain by positing the force of gravity. Newton himself was not satisfied with this, because how does gravity “know” to pull on objects? Yet we were able to make very successful predictions about the motions of the planets and of objects on the surface of the Earth, so we considered those things “explained” by Newton’s theory of gravity. But then we noticed a slight discrepancy between some of these predictions and our observations, so Einstein came up with General Relativity to correct those predictions and now we consider these discrepancies “explained”, even though the reason why that particular theory works remains mysterious, e.g., why does spacetime exist? In general, when a hypothesis correctly predicts observations, we consider these observations scientifically explained.
Therefore to say that solutions to the Easy Problem are insufficient to explain qualia indicates (at least to me) one of two things.
- Qualia have no regularity that we can observe. If they really didn’t have regularities that we could observe, we wouldn’t be able to observe that they exist, which contradicts the claim that they do exist. However, they do have regularities! We can predict qualia! Which means solutions to the Easy Problem are sufficient after all, which contradicts the assumption that they’re insufficient.
- We’re aspiring to a kind of explanation for qualia over and above the scientific one, i.e., just predicting is not enough. You could posit any additional requirements for an explanation to qualify, but presumably we want an explanation to be true. You can’t know beforehand what’s true, so you can’t know that such additional requirements don’t disqualify the truth. There is only one thing that we know will be true however, namely that whatever we will observe in the future is what we will observe in the future. Therefore as long as the predictions of a theory don’t deviate from future observations, we can’t rule out that it’s accurately describing what’s actually going on, i.e., we can’t falsify it. In a way it’s a low bar, but it’s the best we can do. However, if a hypothesis makes predictions that are compatible with any and all observations, i.e., it’s unfalsifiable, then we can’t ever gain any information about its validity from any observations even in principle, which directly contradicts the assumption that you can find an explanation.
↑ comment by TAG · 2023-11-11T20:35:39.687Z · LW(p) · GW(p)
When you find an explanation, how will you know that that was the explanation you were looking for?
I'm describing how the process of explaining qualia scientifically would look.
Science isn't based on exactly predetermining an explanation before you have it.
which in turn means you don’t actually know what you mean when you talk about qualia.
Says who? If I can tell that qualia are indescribable or undetectable, I must know something of what "qualia" means. One of the problems with the Logical Positivist theory of meaning is that it can't be the whole story.
If as you say you don’t know in advance how to measure qualia, that means the explanation’s predictions can’t be tested against observations because we won’t know whether we are actually measuring qualia,
I said we don't know how to measure qualia, not that some device might be measuring something else. One could test a qualiometer on oneself.
which in turn means any explanation is a priori unfalsifiable.
If you exclude qualiometers, entirely subjective approaches, and a few other things, you can't have a standard scientific explanation. You can still have a philosophical explanation, such as qualia don't exist, qualia are non-physical, etc.
You need to know in advance how to describe and measure what you’re seeking to explain in such a way that a third party can use those descriptions and measurements to falsify an explanation, otherwise the falsity of any explanation depends on your personal sensibilities; somebody else may have different sensibilities and come to an equally legitimate yet contradictory decision.
But I'm not saying there is a scientific explanation of qualia.
Presumably, we are in a shared reality where it is an objective matter of fact that we either have qualia or we don’t;
And if it is an objective fact that there is some irreducible subjectivity, it is an objective fact that it doesn't have a full scientific explanation.
it can’t be subjectively true and false at the same time, depending on who you are.
I don't know who is suggesting that.
I’m not saying qualia don’t exist, but I am saying that without objective descriptions of qualia and the ability to measure them objectively we can’t tell the difference between qualia and something that doesn’t exist.
But you can notice your own qualia.. anaesthesia makes a difference.
Replies from: tangerine↑ comment by tangerine · 2023-11-11T21:58:41.114Z · LW(p) · GW(p)
Science isn't based on exactly predetermining an explanation before you have it.
But then how would you know that a given explanation, scientific or not, explains qualia to your satisfaction? How will you be able to tell that that explanation is indeed what you were looking for before?
If I can tell that qualia are indescribable or undetectable, I must know something of what "qualia" means.
People have earnestly claimed the same thing about various deities. Do you believe in those? Why would your specific belief be true if theirs weren’t? Why are you so sure you’re not mistaken?
And if it is an objective fact that there is some irreducible subjectivity
Could be, but we don’t know that.
One could test a qualiometer on oneself.
How would you determine that it is working? That if you’re seeing something red, the qualiometer says “red”? If so, how would that show that there is something more going on than what’s explained with solutions to the Easy Problem?
it can’t be subjectively true and false at the same time, depending on who you are.
I don't know who is suggesting that.
It’s a logical consequence of claiming there is no objective fact about something.
But you can notice your own qualia.. anaesthesia makes a difference.
Again, I agree with you that subjective experience exists, but I don’t see why solutions to the Easy Problem wouldn’t satisfy you. There’s something mysterious about subjective experience, but that’s true for everything, including atoms and electromagnetic waves and chairs and the rest of objective reality. Why does anything in the universe exist? It’s “why?” all the way down.
Replies from: TAG↑ comment by TAG · 2023-11-12T15:55:21.388Z · LW(p) · GW(p)
But then how would you know that a given explanation, scientific or not, explains qualia to your satisfaction?
You keep asking the same question. If we are talking about scientific explanation: a scientific explanation of X succeeds if it is able to predict X's, particularly novel ones, and it doesn't mispredict X's.
A scientific explanation of qualia is exactly that with X=qualia. It's not a different style of explanation. It may well be impossible, but that's another story.
As for a philosophical explanation... well, how do you know? You have some philosophical account, probably along the lines of qualia don't exist or aren't meaningful, although you refuse to say which. So you have some criteria for judging that to be the best explanation.
If I can tell that qualia are indescribable or undetectable, I must know something of what “qualia” means.
People have earnestly claimed the same thing about various deities.
Of course they have. To believe in Zeus you must know what "Zeus" means, and likewise to disbelieve in Zeus.
Do you believe in those?
Huh? Why are you asking? I said "qualia" is meaningful. I also believe in qualia, but I don't believe in qualia just because "qualia" is meaningful; I believe in qualia because I have them, as I have stated many times.
Why would your specific belief be true if theirs weren’t?
I don't see Zeus, I do see colours. Do you find that confusing?
How would you determine that it is working? That if you’re seeing something red, the qualiometer says “red”?
Whatever. I am not saying there is a scientific explanation of qualia.
it can’t be subjectively true and false at the same time, depending on who you are.
I don’t know who is suggesting that.
It’s a logical consequence of claiming there is no objective fact about something.
That's conflating two senses of "subjective". Qualia are subjective in the sense that subjects can access their own qualia, but not other people's.
But you can notice your own qualia.. anaesthesia makes a difference.
Again, I agree with you that subjective experience exists, but I don’t see why solutions to the Easy Problem wouldn’t satisfy you.
They don't explain subjective experience. The Easy Problem is everything except subjective experience.
There’s something mysterious about subjective experience, but that’s true for everything, including atoms and electromagnetic waves and chairs and the rest of objective reality.
The fact that qualia are physically mysterious can't be predicted from physics... if physicalism is true, they should be as explicable as the Easy Problem stuff. That suggests physicalism is wrong.
Replies from: tangerine↑ comment by tangerine · 2023-11-12T17:33:01.836Z · LW(p) · GW(p)
You say you see colors and have other subjective experiences and you call those qualia and I can accept that, but when I ask why solutions to the Easy Problem wouldn’t be sufficient you say it’s because you have subjective experiences, but that’s circular reasoning. You haven’t said why exactly solutions to the Easy Problem don’t satisfy you, which is why I keep asking what kind of explanation would satisfy you. I genuinely do not know, based on what you have said. It doesn’t have to be scientific.
If we are talking about scientific explanation: a scientific explanation of X succeeds if it is able to predict X's, particularly novel ones, and it doesn't mispredict X's.
But it’s not clear to me how you would judge that any explanation, scientific or not, does these things for qualia, because it seems to me that solutions to the Easy Problem do exactly this; I can already predict what kind of qualia you experience, even novel ones. If I show you a piece of red paper, you will experience the qualia of red. If I give you a drink or a drug you haven’t had before I can predict that you will have a new experience. I may not be able to predict quite exactly what those experiences will be in a given situation because I don’t have complete information, but that’s true for virtually any explanation, even when using quantum mechanics.
I suspect you may now object again and say, “but that doesn’t explain subjective experience”. Then I will object again and say, “what explanation would satisfy you?”, to which you will again say, “if it predicts qualia”, to which I will say, “but we can already predict what qualia you will have in a given situation”. Then you will again object and say, “but that doesn’t explain subjective experience”. And so on.
It looks to me like you’re holding out for something you don’t know how to recognize. True, maybe an explanation is impossible, but you don’t know that either. When some great genius finally does explain it all, how will you know he’s right? You wouldn’t want to miss out, right?
They don't explain subjective experience. The Easy Problem is everything except subjective experience.
But this is the very thing in question. Can you explain to me how exactly you come to this conclusion? Having subjective experience does not in itself imply that it’s not physical.
The fact that qualia are physically mysterious can't be predicted from physics
I’m genuinely curious what you mean by this. Can you expand on this?
Replies from: TAG↑ comment by TAG · 2023-11-13T15:50:18.788Z · LW(p) · GW(p)
You say you see colors and have other subjective experiences and you call those qualia and I can accept that, but when I ask why solutions to the Easy Problem wouldn’t be sufficient you say it’s because you have subjective experiences,
No, it's because the Easy Problem is, by definition, everything except subjective experience. It's [consciousness minus experience] explained [however], not [consciousness] explained [physically]. It happens to be the case that easy problems can be explained physically, but it's not built into the definition.
Can you explain to me how exactly you come to this conclusion? Having subjective experience does not in itself imply that it’s not physical.
Because I've read the passages where Chalmers defines the Easy/ Hard distinction.
"What makes the hard problem hard and almost unique is that it goes beyond problems about the performance of functions. To see this, note that even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience? (1995, 202, emphasis in original)."
See? It's not defined in terms of physicality!
Have you even read that passage before?
but that’s circular reasoning. You haven’t said why exactly solutions to the Easy Problem don’t satisfy you, which is why I keep asking what kind of explanation would satisfy you.
...an EP explanation isn't even trying to be an explanation of X for X=qualia.
It doesn’t have to be scientific. If we are talking about scientific explanation: a scientific explanation of X succeeds if it is able to predict X’s, particularly novel ones, and it doesn’t mispredict X’s. But it’s not clear to me how you would judge that any explanation, scientific or not, does these things for qualia, because it seems to me that solutions to the Easy Problem do exactly this;
Only by lowering the bar.
I can already predict what kind of qualia you experience, even novel ones.
Of course not... you can't even express them.
If I show you a piece of red paper, you will experience the qualia of red.
I'm a colour-blind super-scientist, what is this Red?
If I give you a drink or a drug you haven’t had before I can predict that you will have a new experience. I may not be able to predict quite exactly what those experiences will be
Unfortunately, that's what "predict novel experiences" means. Cf. other areas of science: you don't get Nobels for saying "I predict some novel effect I can't describe or quantify".
in a given situation because I don’t have complete information, but that’s true for virtually any explanation, even when using quantum mechanics.
The problem isn't that you don't have infinite information, it's that you are not reaching the baseline of every other scientific theory, because "novel qualia, don't ask me what" isn't a meaningful prediction.
I suspect you may now object again and say, “but that doesn’t explain subjective experience”. Then I will object again and say, “what explanation would satisfy you?”, to which you will again say, “if it predicts qualia”, to which I will say, “but we can already predict what qualia you will have in a given situation”.
Not in a good enough way, you can't.
Then you will again object and say, “but that doesn’t explain subjective experience”. And so on. It looks to me like you’re holding out for something you don’t know how to recognize. True, maybe an explanation is impossible, but you don’t know that either. When some great genius finally does explain it all, how will you know he’s right? You wouldn’t want to miss out, right? They don’t explain subjective experience.
The fact that qualia are physically mysterious can’t be predicted from physics
I’m genuinely curious what you mean by this. Can you expand on this?
I explained in the next clause, which you snipped:-
if physicalism is true, they [qualia] should be as explicable as the Easy Problem stuff.
Replies from: tangerine↑ comment by tangerine · 2023-11-13T19:29:05.066Z · LW(p) · GW(p)
The core issue is that there’s an inference gap between having subjective experience and the claim that it is non-physical. One doesn’t follow from the other. You can define subjective experience as non-physical, as Chalmers’s definition of the Hard Problem does, but that’s not justified. I can just as legitimately define subjective experience as physical.
I can understand why Chalmers finds subjective experience mysterious, but it’s not more mysterious than the existence of something physical such as gravity or the universe in general. Why is General Relativity enough for you to explain gravity, even though the reason for the existence of gravity is mysterious?
Replies from: TAG↑ comment by TAG · 2023-11-13T20:02:10.173Z · LW(p) · GW(p)
The core issue is that there’s an inference gap between having subjective experience and the claim that it is non-physical.
Of course there is. There is no reason there should not be. Who told you otherwise? Chalmers takes hundreds of pages to set out his argument.
Physical reductionism is compatible with the idea that the stuff at the bottom of the stack is irreducible, but consciousness appears to be a high level phenomenon.
Replies from: tangerine↑ comment by tangerine · 2023-11-13T20:19:09.872Z · LW(p) · GW(p)
Chalmers takes hundreds of pages to set out his argument.
His argument does not bridge that gap. He, like you, does not provide objective criteria for a satisfying explanation, which means by definition you do not know what the thing is that requires explanation, no matter how many words are used trying to describe it.
Replies from: TAG↑ comment by TAG · 2023-11-14T00:24:43.427Z · LW(p) · GW(p)
The discussion was about whether there is a Hard Problem , not whether Chalmers or I have solved it.
Replies from: tangerine↑ comment by tangerine · 2023-11-14T17:17:48.280Z · LW(p) · GW(p)
I know. Like I said, neither Chalmers nor you nor anyone else has shown it plausible that subjective experience is non-physical. Moreover, you repeatedly avoid giving an objective description of what you’re looking for.
Until either of the above change, there is no reason to think there is a Hard Problem.
Replies from: TAG↑ comment by TAG · 2023-11-14T18:13:29.981Z · LW(p) · GW(p)
Like I said, I don't have to justify non-physicalism when that is not what the discussion is about.
Also, the existence of a problem does not depend on the existence of a solution.
Replies from: tangerine↑ comment by tangerine · 2023-11-14T20:11:07.763Z · LW(p) · GW(p)
Also, the existence of a problem does not depend on the existence of a solution.
Agreed, but even if no possible solution can ultimately satisfy objective properties, until those properties are defined the problem itself remains undefined. Can you define these objective properties?
Replies from: TAG↑ comment by TAG · 2023-11-14T21:06:45.838Z · LW(p) · GW(p)
We've been through this.
- You don't have a non-circular argument that everything is objective.
- It can be an objective fact that subjectivity exists.
↑ comment by tangerine · 2023-11-14T22:41:18.876Z · LW(p) · GW(p)
All I’m asking for is a way for other people to determine whether a given explanation will satisfy you. You haven’t given enough information to do that. Until that changes we can’t know that we even agree on the meaning of the Hard Problem.
Replies from: TAG↑ comment by Shiroe · 2023-10-26T15:11:43.600Z · LW(p) · GW(p)
But all of these observations are also happening from a third-person perspective, just like the rest of reality.
This is a hypothesis, based on information in your first-person perspective. To make arguments about a third-person reality, you will always have to start with first-person facts (and not the other way around). This is why the first person is epistemologically more fundamental.
It's possible to doubt that there is a third-person perspective (e.g. to doubt that there's anything like being God). But our first person perspective is primary, and cannot be escaped from. Optical illusions and stage tricks aren't very relevant to this, except in showing that even our errors require a first-person perspective to occur.
EDIT: The third-person perspective being epistemologically more/less fundamental than the first-person perspective could work as a double crux with me. Does it work on your end as well?
Replies from: tangerine↑ comment by tangerine · 2023-10-29T18:29:42.647Z · LW(p) · GW(p)
Would I be correct to say that you think the third-person perspective emerges from the first-person perspective? Or would you say that they’re simply separate?
Replies from: Shiroe↑ comment by Shiroe · 2023-11-04T12:59:45.214Z · LW(p) · GW(p)
If I had to choose between those two phrasings I would prefer the second one, for being the most compatible between both of our notions. My notion of "emerges from" is probably too different from yours.
The main difference seems to be that you're a realist about the third-person perspective, whereas I'm a nominalist about it, to use your earlier terms. Maybe "agnostic" or "pragmatist" would be good descriptors too. The third-person is a useful concept for navigating the first-person world (i.e. the one that we are actually experiencing). But that it seems useful is really all that we can say about it, due to the epistemological limitations we have as human observers.
I think this is why it would be a good double crux if we used the issue of epistemological priority: I would think very differently about Hard Problem related questions if I became convinced that the 3rd person had higher priority than the 1st person perspective. Do you think this works as a double crux? Is it symmetrical for you as well in the right way?
Replies from: tangerine↑ comment by tangerine · 2023-11-11T21:11:26.927Z · LW(p) · GW(p)
If I had to choose between those two phrasings I would prefer the second one, for being the most compatible between both of our notions. My notion of "emerges from" is probably too different from yours. The main difference seems to be that you're a realist about the third-person perspective, whereas I'm a nominalist about it, to use your earlier terms.
That actually sounds more like the first phrasing to me. If you are a nominalist about the third-person perspective, then it seems that you think the third-person perspective does not actually exist and the concept of the third-person perspective is borne of the first-person perspective.
Do you think this works as a double crux?
I’m not sure whether this is a good double crux, because it’s not clear enough to me what we mean by first- and third-person perspectives. It seems conceivable to me that my conception of the third-person perspective is functionally equivalent to your conception of the first-person perspective. Let me expand on that below.
If only the first-person perspective exists, then presumably you cannot be legitimately surprised, because that implies something was true outside of your first-person perspective prior to your experiencing it, unless you define that as being part of your first-person perspective, which seems contradictory to me, but functionally the same as just defining everything from the third-person perspective. The only alternative possibility that seems available is that there are no external facts, which would mean reality is actually an inconsistent series of experiences, which seems absurd; then we wouldn’t even be able to be sure of the consistency of our own reasoning, including this conversation, which defeats itself.
Replies from: Shiroe↑ comment by Shiroe · 2023-11-21T18:56:29.630Z · LW(p) · GW(p)
I'm sorry that comparing my position to yours led to some confusion: I don't deny the reality of 3rd person facts. They probably are real, or at least it would be more surprising if they weren't than if they were. (If not, then where would all of the apparent complexity of 1st person experience come from? It seems positing an external world is a good step in the right direction to answering this). My comparison was about which one we consider to be essential. If I had used only "pragmatist" and "agnostic" as descriptors, it would have been less confusing.
Again, I think the main difference between our positions is how we define standards of evidence. To me, it would be surprising if someone came to know 3rd person facts without using 1st person facts in the process. If the 1st person facts are false, this casts serious doubt on the 3rd person facts which were implied. At our stage of the conversation, it seems like we can start proposing far more effective theories, like that nothing exists at all, which explains just as much of the evidence we would still have if we had no 1st person facts.
You seem to believe we can get at the true third person reality directly, maybe imagining we are equivalent to it. You can imagine a robot (i.e. one of us) having its pseudo-experiences and pseudo-observations all strictly happening in the 3rd person, even engaging in scientific pursuits, without needing to introduce an idea like the 1st person. But as you said earlier, just because you can imagine something, doesn't mean that it's possible. You need to start with the evidence available to you, not what sounds reasonable to you. The idea of that robot is occurring in your 1st person perspective as a mental experience, which means it counts as evidence for the 1st person perspective at least as much as it counts as evidence for the 3rd. So does what it feels like to think eliminativism is possible, and so does what it feels like to chew 5 Gum® and etc, and etc.
To me, all of this is a boring tautology. For you, it's more like a boring absurdity, or rather it's the truth turned upside down and pulled inside out. This is why I'm more interested in finding a double crux, something that would reveal the precise points where our thinking diverges and reconverges. There are already some parallels that we've both noticed, I think. I would say that you believe in the 1st person but with only one authentic observer: God, who is and who sees everything with perfect indifference, like in Spinoza's idea. You could also reframe my notion of the 1st person to be a kind of splintered or shattered 3rd person reality, one which can never totally connect itself back together all at once. Our ways of explaining away the problems are essentially the same: we both stress that our folk theoretic concepts are untrustworthy, that we are deceiving ourselves, that we apply a theory which shapes our interpretations without us realizing it. We are also both missing quite a few teeth, from biting quite a few bullets.
There must be some precise moment where our thinking diverges. What is that point? It seems like something we need to use a double crux to find. Do you have any ideas?
Replies from: tangerine↑ comment by tangerine · 2023-10-07T10:48:36.975Z · LW(p) · GW(p)
You're equivocating, conflating consciousness with self-awareness. Consciousness is not the sense-of-self.
I agree those are separate, but the (useful, evolved) sense-of-self leads to a belief in consciousness. Disproving the reality of the self (i.e., the sense of self being illusory) removes the logical support for consciousness.
Moreover, proponents of the Hard Problem often say “If consciousness is illusory, who is experiencing the illusion?”, thereby revealing their belief that a self is required for consciousness.
So, ostensibly, consciousness and a sense of self are not the same but do imply each other. However, I argue that proponents of the Hard Problem confuse the existence of the sense of self with the existence of an actual self, which leads to erroneous conclusions.
↑ comment by Slapstick · 2023-08-23T15:02:06.700Z · LW(p) · GW(p)
It just seems to me like there's a deeper level of explanation required to conclude the experience of consciousness is a false belief, relative to things like deja vu.
It seems like you're using terms that presume experience in order to explain how experience is false via analogy.
You experience deja vu, and the experience lends itself to false beliefs about previous experiences. How can that have explanatory power in concluding experience itself is false? Wouldn't that conclusion undermine the premises of the comparison?
↑ comment by quetzal_rainbow · 2023-08-23T12:17:08.634Z · LW(p) · GW(p)
I'm usually very much on board with the claim that the Hard Problem is a non-problem, but I've always struggled to understand the illusionist point of view. When I see the illusion of white spots appearing in a grid illusion, I know that it is an illusion in the sense that there is a reality in the form of a constant state of pixels that doesn't change. If "experiencing" is an "illusion", what is the reality?
(My position on the Hard Problem is that subjective experience exists and has a completely mundane physical/computational explanation.)
Replies from: TAG, None↑ comment by TAG · 2023-08-26T14:57:10.394Z · LW(p) · GW(p)
My position on the Hard Problem is that subjective experience exists and has a completely mundane physical/computational explanation.
Which is what?
Replies from: quetzal_rainbow↑ comment by quetzal_rainbow · 2023-08-27T08:42:12.668Z · LW(p) · GW(p)
I don't know! I don't like the whole formulation of the "Hard Problem" because it looks so arrogant: it is assumed that all possible computational solutions to the question "how can conscious behaviour be produced?" are so trivial that they can't provide an answer to "why do we have subjective experience?". Let's find a computational solution for conscious behaviour first and check whether we really don't find any unexpected insights that make us say "wow, this is a really obvious, mundane way to produce conscious experience".
Replies from: TAG↑ comment by TAG · 2023-08-27T21:38:07.983Z · LW(p) · GW(p)
The hard problem argument only says that some problems are harder than others, not that they are impossible.
The mundane approach has been tried over and over. When you say that a mundane solution exists, you seem to mean that the thing that hasn't worked for decades will start working.
Replies from: quetzal_rainbow, Shiroe↑ comment by quetzal_rainbow · 2023-09-22T12:47:51.963Z · LW(p) · GW(p)
By "mundane solution" I mean "deep gear-level understanding of functional aspects of consciousness, such as someone who has such understanding can program functionally-conscious entity from scratch". Claim of "Hard Problem" is "even if you have such understanding, you can't explain subjective experience" and I consider this claim to be false.
Replies from: TAG↑ comment by TAG · 2023-09-23T17:46:56.409Z · LW(p) · GW(p)
I agree that if you can use a non trial-and-error method to build consciousness, then you understand it well enough.
But do you have a non trial-and-error method for building something that has conscious experience? Or are you assuming you get it for free with the rest of the functionality?
↑ comment by [deleted] · 2023-08-23T12:24:08.281Z · LW(p) · GW(p)
I don't like the word "illusionism" here because people just get caught on the obvious semantic 'contradiction' and always complain about it.
The arguments based on perceptual illusions in general are meant to show that our perception is highly constructed by the brain, it's not something 'simple'. The point of illusionism is just to say that we are confused about what the phenomenological properties of qualia really are qua qualia because of wrong ideas that come from introspection.
Replies from: Shiroe↑ comment by Shiroe · 2023-08-23T21:05:46.575Z · LW(p) · GW(p)
I don't like "illusionism" either, since it makes it seem like illusionists are merely claiming that consciousness is an illusion, i.e., it is something different than what it seems to be. That claim isn't very shocking or novel, but illusionists aren't claiming that. They're actually claiming that you aren't having any internal experience in the first place. There isn't any illusion.
"Fictionalism" would be a better term than "illusionism": when people say they are having a bad experience, or an experience of saltiness, they are just describing a fictional character.
↑ comment by Shankar Sivarajan (shankar-sivarajan) · 2023-08-23T18:26:11.918Z · LW(p) · GW(p)
Of course it exists! It's like the Harder Problem of Zelionicity. And the Impossible Problem of Allimoxing.
Replies from: Shiroe↑ comment by Shiroe · 2023-08-23T21:14:20.365Z · LW(p) · GW(p)
Are you saying that you don't think there's any fact of the matter whether or not you have phenomenal experiences like suffering? Or do you mean that phenomenal experience is unreal in the same way that the hellscape described by Dante is unreal?
Replies from: shankar-sivarajan↑ comment by Shankar Sivarajan (shankar-sivarajan) · 2023-08-24T15:27:49.392Z · LW(p) · GW(p)
Closer to the latter, though I wouldn't call it "unreal." The experience of suffering exists in the same sense that my video game character is down to one life and has low HP: that state is legibly inspectable on screen, and is encoded in silicon. I only scorn the term "consciousness" to the extent it is used as woo.
I think some version of the Hard Problem really was meaningful in the past, and that it was hard: it's far from obvious how "mere matter" could encode states as rich as what one … perceives, experiences, receives as the output of the brain's processes. Mills and clocks, as sophisticated as they may have been, were still far too simple. I consider modern technology to have demonstrated in sufficient detail precisely how it's possible. It didn't require anything conceptually new, so I also understand why some don't find the answer satisfying.
comment by solvalou · 2023-08-24T08:44:06.251Z · LW(p) · GW(p)
This bit was very interesting to me:
These models are “predictive” in the important sense that they perceive not just how things are at the moment but also anticipate how your sensory inputs would change under various conditions and as a consequence of your own actions. Thus:
- Red = would create a perception of a warmer color relative to the illuminant even if the illumination changes.
My current pet theory of qualia is that there is an illusion that they are a specific thing (e.g. the redness of red) when in reality there are only perceived relations between a quale and other qualia, and a perceived identity between that quale and memories of that quale. But the sense of identity (or constancy through time) is not caused by an actual specific thing (the "redness" that one erroneously tries to grasp but always seems just beyond reach), but by a recurrence of those relations.
Why I like the quoted part is because it can be read as a predictive processing-flavoured version of the same theory. The illusion (that there is a reified thing instead of only a jumble of relationships) is strengthened by the fact that we not only recognize the cluster of qualia relationships and can correctly identify it, but furthermore predict how it will behave. Framing a "quale" as an ability to predict how a sense impression will change with varying sensory (or imaginary) impressions seems to make the definition both richer (it is not just an isolated flash of recognition of what's in front of your eyes, but a set of predictions of how it might behave) and more coherent (different experiences of "redness" are tied together by the same quale because they could be "transformed" into each other while adhering to the changes that the quale "redness" says are applicable to itself).
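A minimal toy sketch of this relational framing, purely illustrative and not from the book or the comment above (the `RelationalQuale` class, its fields, and the transformation names are invented for the example): a "quale" is represented only by the changes it predicts under various transformations, so "recognizing the same redness" is nothing more than the same relational signature recurring.

```python
# Toy sketch: a "quale" as a bundle of predicted relations rather than an
# intrinsic property. All names and dynamics here are made up for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class RelationalQuale:
    name: str
    # Pairs of (transformation of the viewing conditions, predicted change in the impression).
    predictions: tuple

    def matches(self, observed: dict) -> bool:
        """An impression counts as this quale when the observed changes
        recur in the same pattern the quale predicts."""
        return all(observed.get(t) == change for t, change in self.predictions)

# A hypothetical "red", defined purely by its relations, echoing the quoted
# bullet: red stays a relatively warm colour even when the illuminant changes.
red = RelationalQuale(
    name="red",
    predictions=(
        ("dim_illuminant", "stays warmer than surround"),
        ("move_to_shadow", "stays warmer than surround"),
        ("mix_with_blue", "shifts toward purple"),
    ),
)

observed_patch = {
    "dim_illuminant": "stays warmer than surround",
    "move_to_shadow": "stays warmer than surround",
    "mix_with_blue": "shifts toward purple",
}

print(red.matches(observed_patch))  # True: the relational signature recurs
```

On this toy reading there is no intrinsic "redness" stored anywhere; only the recurring pattern of predicted relations does any work, which is the sense in which the reified thing is an illusion.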
comment by gilch · 2023-08-23T17:19:23.199Z · LW(p) · GW(p)
But it is not clear at all that such a being could even exist in principle. A digital mind that lacks a body it is trying to keep alive, that has entirely different senses than our interoceptive and exteroceptive ones, and that has an entirely different repertoire of actions available to it will have an entirely alien generative model of its world, and thus an entirely alien phenomenology — if it even has one. We can guess what it’s like to be a bat: its reliance on sonar to navigate the world likely creates a phenomenology of “auditory colors” that track how surfaces reflect sound waves similar to our perception of visual color. It’s much harder to guess what it’s like, if anything, to be an “em”.
Sure they could. In principle, if a brain could be emulated virtually, why not a virtual body and virtual environment? That was always my understanding of what an "em" would be, not this disembodied straw-em.
Replies from: sharmake-farah↑ comment by Noosphere89 (sharmake-farah) · 2023-08-23T21:51:11.460Z · LW(p) · GW(p)
Similarly, the consciousness explanation that Jacob Falkovich gives would give a lot of things consciousness, though not everything. In particular, the improved Good Regulator Theorem, which proposes that at least one part of consciousness is essentially having models of the world, is applicable to any capable system. Similarly, I expect the cybernetic model to have widespread use, in the sense that a lot of things will find it useful to regulate something.
I think the strongest takeaway from Anil Seth's model is that future consciousness could be very, very alien, especially once we take away certain parts of it.
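To make the regulator-with-a-model intuition concrete, here is a rough sketch under assumed toy dynamics, not the Good Regulator Theorem's actual statement or Seth's formalism (`ThermostatModel`, `ModelBasedRegulator`, and the numbers are invented for the example): the controller only succeeds because it carries a predictive model of the system it regulates, which is roughly the ingredient the comment above says any capable system can have.

```python
# Toy sketch: model-based regulation. The regulator keeps a variable near a
# set point by consulting an internal predictive model of the system.

class ThermostatModel:
    """Internal model: predicts the next temperature from the current one and a heating action."""
    def predict(self, temp: float, heating: float) -> float:
        # Made-up dynamics: the room drifts toward 10 degrees and warms with heating.
        return temp + 0.1 * (10.0 - temp) + 0.5 * heating

class ModelBasedRegulator:
    def __init__(self, model: ThermostatModel, target: float):
        self.model = model
        self.target = target

    def act(self, temp: float) -> float:
        """Pick the heating level whose predicted outcome lands closest to the target."""
        candidates = [i * 0.5 for i in range(0, 11)]  # heating levels 0.0 .. 5.0
        return min(candidates, key=lambda h: abs(self.model.predict(temp, h) - self.target))

world = ThermostatModel()  # in this toy, the world follows the same dynamics as the model
regulator = ModelBasedRegulator(ThermostatModel(), target=21.0)
temp = 15.0
for _ in range(20):
    temp = world.predict(temp, regulator.act(temp))
print(round(temp, 1))  # settles near the 21-degree target
```

The point of the sketch is only that carrying a model of what you regulate is cheap and generic, which is why, as the comment notes, this ingredient would be shared by a great many systems.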
comment by Rafael Harth (sil-ver) · 2023-08-23T10:31:35.706Z · LW(p) · GW(p)
Having not read the book yet, I'm kind of stumped at how different this review is to the one from Alexander [LW · GW]. The two posts make it sound like a completely different book, especially with respect to the philosophical questions, and especially especially with respect to the expressed confidence. Is this book a neutral review of the leading theories that explicitly avoids taking sides, or is it a pitch for another I-solved-the-entire-problem theory? It can't really be both.
Replies from: Ilio