Book Review: Being You by Anil Seth

post by Alexander (alexander-1) · 2021-10-26T05:44:10.865Z · LW · GW · 23 comments

Contents

  Overview
  Philosophical Zombies!?
  Measuring Consciousness
  Locked-in Syndrome
  Inference
  Emotions
  Non-human Consciousness
    Animal Consciousness
    Machine Consciousness
  Final Thoughts

Overview

Consciousness is "deeply inscribed into the wider patterns of nature."

This book is a good non-technical synopsis of the cutting edge of consciousness research, though it offers little new insight. Seth manages to represent all sides of the argument fairly without giving up his adherence to physicalism. Throughout the book, it is apparent that Seth is a proponent of Embodied Cognition, the idea that our bodies—not just our brains as passive information processors—play a crucial role in forming our conscious experiences.

Remark: When talking about consciousness, adherence to physicalism is necessary. Information is fundamentally physical (refer to papers by R Landauer and D Deutsch). There is nothing magical about information, emotions, intuitions [LW · GW] and consciousness. They all obey the laws of physics. DHCA is hard evidence against the existence of an immaterial soul that leaves the body after death:

Deep hypothermic circulatory arrest (DHCA) is a surgical technique that induces deep medical hypothermia. It involves cooling the body to temperatures between 20 °C and 25 °C, and stopping blood circulation and brain function for up to one hour. It is used when blood circulation to the brain must be stopped because of delicate surgery within the brain, or because of surgery on large blood vessels that lead to or from the brain. DHCA is used to provide a better visual field during surgery due to the cessation of blood flow. DHCA is a form of carefully managed clinical death in which heartbeat and all brain activity cease.

This book might frustrate some readers because Seth is agnostic about several contentious open problems in the consciousness space, such as whether consciousness is substrate-independent. Seth is prudent and reverent. I have seen many consciousness researchers treat the more outlandish hypotheses with ridicule, but he mostly remains agnostic as long as the jury is still out on the matter. Knowledge creation requires idea generation, not just idea judgement. Therefore, we should be more tolerant of the more outlandish ideas as long as there are no knock-down arguments or evidence against them.

This book covers the whole gamut of consciousness research, from philosophical zombies and measures of consciousness to inference, emotions, animal minds and machine minds.

Philosophical Zombies!?

Seth addresses the distinction between beliefs and reality early in the book. I was happy to see this because confusion about this distinction can lead to an infinite regress, where clever people get lost in their own beliefs about their own beliefs [LW · GW] ad infinitum and forget that reality is the ultimate judge of the accuracy of our beliefs.

Can you imagine an A380 flying backwards? Of course you can. Just imagine a large plane in the air, moving backwards. Is such a scenario really conceivable? Well, the more you know about aerodynamics and aeronautical engineering, the less conceivable it becomes. It just cannot be done.

Seth follows this up with a knock-down argument against Chalmers' philosophical zombies:

In one sense it's trivial to imagine a philosophical zombie. I just picture a version of myself wandering around without having any conscious experiences. But can I really conceive this? What I'm being asked to do, really, is to consider the capabilities and limitations of a vast network of many billions of neurons and gazillions of synapses (the connections between neurons), not to mention glial cells and neurotransmitter gradients and other such neurobiological goodies, all wrapped into a body interacting with a world which includes other brains in other bodies. Can I do this? Can anyone do this? I doubt it.

This part of the book had me reminiscing about Zombies: The Movie [LW · GW], which was one of the highlights of my Rationality: A-Z [? · GW] experience. I had a really good giggle reading that post.

Scott Aaronson has argued in his seminal paper Why Philosophers Should Care About Computational Complexity that philosophers should seriously consider the limits imposed by computation and the laws of physics. Physics constrains everything else in a way that everything else does not constrain physics. We cannot choose to change the laws of physics. The tendency to take armchair intuitions too seriously and privilege them above reality is a relic from the unavailing philosophies of Hegel and his ilk. This quote from Roger Penrose captures the relationship between physics and consciousness nicely:

We have a closed circle of consistency here: the laws of physics produce complex systems, and these complex systems lead to consciousness, which then produces mathematics, which can then encode in a succinct and inspiring way the very underlying laws of physics that gave rise to it.

Trying to explain consciousness via introspection is like trying to explain how Google works by doing Google searches. As Dennett puts it, we are the end-user of consciousness. Therefore, what we experience is a highly abstract user interface. In Every Thing Must Go, James Ladyman and Don Ross stake a strong claim about the uselessness of "esoteric debates based on prioritising armchair intuitions about the nature of the universe over scientific discoveries." Armchair intuitions about reality are unlikely to have predicted any of the results of quantum physics, such as the counterintuitive Bell inequality.

That being said, Seth does present the reader with the good old cloning thought experiment. Seth asks the reader to imagine a machine capable of creating a perfect clone of someone and then asks if the clone is the same person. He completely ignores the no-cloning theorem in quantum mechanics. I don't understand the point of these "thought experiments" when physics has ruled them out. Seth criticises Chalmers' philosophical zombies but then makes the same mistake.

Remark: Seth mentions the word 'emergence' a total of only 6 times in the entire main text, which I thought was impressive for a book on consciousness. Calling something we do not yet understand 'emergent' doesn't help us understand it any better than saying nothing.

Measuring Consciousness

Seth points out that to measure something, we must first find a fixed point: an unchanging reference against which to measure it. For temperature, that fixed point is absolute zero.

Seth refers to Tononi's Integrated Information Theory (IIT) as a candidate for a measure of consciousness, using the quantity Phi, denoted Φ. Phi is poorly defined, and its definition keeps changing. Vagueness and change are characteristic of poor theories. A good theory should be rigid and precise in its assertions about reality, making itself vulnerable to a critical test. IIT offers little explanatory power, and I was disappointed to see Seth ignore Scott Aaronson's incisive knock-down of IIT.

Aaronson asserts that IIT has already been falsified: it has failed to explain the very things it purports to explain. According to IIT, a large grid of XOR gates, or a network implementing error-correcting codes, would be conscious. When Aaronson pointed this out to Tononi, Tononi "didn't only bite the bullet, he devoured it." Tononi's response to Aaronson was something like: "Yes, a large grid of XOR gates is conscious, and your intuition about consciousness is wrong and needs to change." Tononi is retrofitting consciousness to his theory instead of providing a good explanation of consciousness. This is the equivalent of a theory of temperature according to which lava doesn't burn, and when you point this out, you get told: "Your intuition about what burning feels like is wrong."

Locked-in Syndrome

The next part of the book blew my mind. I think I found an answer to the question, "What is your biggest fear?" It is called locked-in syndrome. This is a freaky condition, and I cannot begin to imagine what such an experience would be like. It is terrifying.

Locked-in syndrome [is] where consciousness is fully present despite total paralysis of the body. This rare affliction can follow damage to the brainstem, a region at the base of the brain (and at the top of the spinal cord) which, among other roles, mediates control of muscles in the body and in the face.

Some locked-in syndrome patients maintain the ability to make limited eye movement, offering a narrow channel for communication (the book The Diving Bell and the Butterfly was written this way!) and diagnosis, while others are entirely locked-in. It is estimated that one to two thousand undiagnosed locked-in patients languish forgotten in nursing homes and hospital wards worldwide.

Inference

I am going to treat these complicated topics with very broad strokes for the sake of keeping this review easy to read. Seth gives an excellent overview of the Free Energy Principle (FEP), Active Inference, Bayesian Inference and the Good Regulator Theorem, which I view as being closely related concepts.

I am going to skip giving an overview of Bayesian inference for a rationality community. There are plenty of excellent posts [? · GW] about Bayesian inference on LessWrong. Why is Bayesian inference relevant to life and cognition? Karl Friston thinks that all the processes of living systems boil down to solving inference problems to minimise predictive error and maximise evidence for the organism's own existence.

Let's take a top-down approach and start with this elementary question: if a system exists and actively maintains its own existence for periods of time, what must it be doing? Before tackling that, consider a more concrete case. Suppose you have an air-conditioning system whose goal is to maintain the temperature of a building. The Good Regulator Theorem asserts that "every good regulator of a system must be a model of that system." Therefore, the air-conditioning system must have some heat-map of the building (e.g. via thermostats), i.e. a model. Similarly, for an organism to maintain its existence, it must have a model of the system it is trying to sustain: a model of itself and its surrounding environment. This way, the organism can remain within a narrow set of favourable physical states (the organism's attractor set), which allow it to stay alive.
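The thermostat analogy can be sketched in a few lines of Python. This is a toy I made up for illustration (the rooms, drift, and gain are invented numbers): the regulator only ever acts through `model`, its internal heat-map of the rooms, which is the Good Regulator Theorem's point.

```python
# Toy illustration of the Good Regulator Theorem: the controller's
# internal state is itself a (crude) model of the system it regulates.
# All names and dynamics here are invented for illustration.

def regulate(initial_temps, setpoint=21.0, steps=50, drift=0.5, gain=0.3):
    """Drive each room's temperature toward `setpoint`.

    The regulator maintains `model` -- its own estimate of each room's
    temperature (a heat-map, in the post's terms) -- and computes its
    corrections from the model, not from the rooms directly.
    """
    model = dict(initial_temps)  # the regulator's internal model
    temps = dict(initial_temps)  # the actual rooms
    for _ in range(steps):
        for room in temps:
            temps[room] += drift                       # environment heats up
            model[room] = temps[room]                  # sensor updates the model
            correction = gain * (setpoint - model[room])
            temps[room] += correction                  # act to shrink the error
    return temps

rooms = {"lobby": 18.0, "office": 26.0}
final = regulate(rooms)
# Both rooms converge to the same temperature near the setpoint
# (with the small steady-state offset typical of proportional control).
```

Note that without the model (i.e. with stale sensor readings) the corrections would be computed against the wrong state and the regulator would fail, which is the theorem's claim in miniature.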

Karl Friston is widely considered to be the most influential and cited cognitive scientist alive. Friston is a physicist at heart, and his theories about biology and cognition are therefore grounded in physics. He originally found the inspiration for FEP when he was eight years old, watching woodlice in a garden. He observed that the woodlice moved faster in sunlight and slower in shade, and hypothesised that the reason is literally that in the sunlight the lice have more energy! The observation is obvious in hindsight, but it proved fertile: it set Friston on the pursuit of a ground-up, physics-based explanation for life.

FEP is remarkably elegant. In very broad terms, FEP says that all the processes of living organisms are solving inference problems in order to move uphill on the probability distribution for the evidence of the organism's existence. In other words, organisms are attracted (both in the literal sense and in the sense of a dynamical attractor) to resolving uncertainty about their own existence. FEP is not falsifiable, and that is OK: it is more like a law of physics, a deduction from the fact that living systems exist and persist over time. We can ask questions like, "Does this system conform to FEP?" It nevertheless provides immense explanatory power, and theories built on it will yield falsifiable predictions.

Bayesian optimality becomes computationally intractable as you throw large volumes of data at it, yet an organism must model its environment efficiently. This can be achieved by minimising variational free energy, which provides an algorithm-agnostic and computationally tractable way of approximating Bayesian optimality. Richard Feynman introduced variational free energy in 1972 to convert an intractable integration problem into a tractable optimisation problem while working on quantum electrodynamics.
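As a concrete toy (my own example, not from the book): for a Gaussian prior and Gaussian likelihood, the variational free energy of a Gaussian approximation has a closed form, and plain gradient descent on it recovers the exact Bayesian posterior. This is the "integration problem becomes optimisation problem" move in miniature.

```python
import math

# Toy variational inference. Prior x ~ N(0, 1), likelihood y ~ N(x, 1),
# one observation y = 2. The exact Bayesian posterior is N(1, 0.5).
# We recover it by minimising the free energy of q = N(mu, sigma^2).
# All numbers here are invented for illustration.

def free_energy(mu, log_sigma, y=2.0):
    """F(q) = KL(q || prior) + E_q[-log p(y|x)] (up to an additive constant)."""
    sigma2 = math.exp(2 * log_sigma)
    kl = 0.5 * (sigma2 + mu**2 - 2 * log_sigma - 1)   # KL(q || N(0,1))
    expected_nll = 0.5 * ((y - mu) ** 2 + sigma2)      # Gaussian likelihood term
    return kl + expected_nll

def minimise(steps=2000, lr=0.05, eps=1e-5):
    mu, log_sigma = 0.0, 0.0
    for _ in range(steps):
        # central-difference gradients keep the sketch dependency-free
        dmu = (free_energy(mu + eps, log_sigma)
               - free_energy(mu - eps, log_sigma)) / (2 * eps)
        dls = (free_energy(mu, log_sigma + eps)
               - free_energy(mu, log_sigma - eps)) / (2 * eps)
        mu -= lr * dmu
        log_sigma -= lr * dls
    return mu, math.exp(2 * log_sigma)

mu, var = minimise()
# mu converges to roughly 1.0 and var to roughly 0.5: the exact posterior,
# found by optimisation rather than by integrating over all hypotheses.
```

In this conjugate case the exact answer is available anyway; the payoff of the variational trick is that the same optimisation recipe still runs when the model is too complicated to integrate.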

Remark: Friston likes to joke that maximising Bayesian Model Evidence is the solution to all problems. This is so true. Whatever your goal, you have a better chance of achieving it by maximising the evidence that you are achieving it (easier said than done).

FEP can be generalised to action through Active Inference. According to Active Inference, actions are a form of self-fulfilling perceptual prediction. When you go around searching for something, you are moving uphill on the probability distribution of finding what you are looking for. We use actions to manipulate our surrounding environments to maximise the evidence for our existence. Thinking about action in this way underlines how action and perception are two sides of the same coin. Rather than perception being the input and action being the output, with the brain being a passive information processor, action and perception work together to solve inference problems.

Seth is interested in marrying together the Free Energy Principle and Integrated Information Theory, but he thinks we are far from achieving such a feat. "FEP starts from the simple statement that 'things exist' and derives from this the whole of neuroscience and biology, but not consciousness. IIT starts from the simple statement 'consciousness exists' and launches a direct assault on the hard problem. It's not surprising that they often talk past each other."

Emotions

Emotions are not magic. Again and again, I come across people who think emotions somehow reveal the truth of the universe to them. Everything Seth says about emotions (like most of the rest of the book) is a well-articulated summary of other people's ideas. His treatment of emotions is based chiefly on the works of Antonio Damasio and Lisa Feldman Barrett, who have put emotions on solid grounds.

Confusion about emotions arises because what science tells us about how emotions work appears counterintuitive with respect to what emotions feel like from the inside. For example, our intuition about emotions tells us that we cry because we are sad, but science tells us that "we are sad because we perceive our bodily state in the condition of crying." Science has, again and again, subverted our intuitive, how-things-seem notions about the world and ourselves. Our intuitions about emotions tell us that emotions cause bodily responses, but science asserts that the relationship is the other way around.

Emotions are closely linked with changes in bodily state. However, emotions are not as simple as just expressions of distinct bodily states. Appraisal theories of emotions claim that a context-based cognitive inference process takes place, such that the same bodily state can result in different emotions based on context. Barrett's research asserts that emotions are best-guesses that compromise accuracy for speed. Furthermore, according to Barrett, we can control, to some extent, what emotions we create. Given that emotions are inferences, they can readily be explained by FEP (but is there anything FEP doesn't purport to explain?).

Non-human Consciousness

Animal Consciousness

The chapter on animal consciousness was exciting and insightful. This was one of the funniest things I've ever read:

Plagues of rodents, locusts, weevils, and other such smaller animals were less easy to deal with via legal proceedings. In one celebrated sixteenth-century case, the French lawyer Bartholomew Chassenée successfully exonerated some rats with the clever argument that they could not reasonably be expected to turn up to trial, given the dangers posed to them by the many cats lying in wait along the route. In other cases, including various weevil infestations, the offending animals were issued with written orders to leave a property or a barley crop, often on a specific day and even by a specific hour.

Seth talks about the mirror test, which I found insightful. In this experiment, animals are anaesthetised, marked with a dye somewhere on their bodies, and then, upon waking, placed in front of a mirror. Animals that, upon seeing the mark in the mirror, examine it on their own bodies pass the test.

Who passes the mirror test? Among mammals, some great apes, a few dolphins and killer whales, and a single Eurasian elephant. A parade of other mammalian creatures, including pandas, dogs, and various monkeys, have failed—at least so far. Given how intuitive mirror self-recognition is for us humans, and how otherwise cognitively competent many of these non-self-recognising mammals seem to be, this pass list is remarkably short. There is no convincing evidence that any non-mammal passes the mirror test...

The rest of this chapter is dedicated to octopuses, which are of great interest to consciousness researchers. Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness is an entire book dedicated to exploring octopus consciousness. This is because octopuses and mammals diverged a long evolutionary time ago: our last common ancestor is hypothesised to have been a flatworm that trawled the seafloor some 750 million years ago. Octopus consciousness is therefore the most alien kind of nontrivial consciousness we can find on this planet. Octopuses use remarkably different biological infrastructure to instantiate their consciousness; their "brains" are distributed through their bodies, which leads Seth to speculate about "what it is like to be an octopus arm."

Machine Consciousness

Seth doesn't say a lot about machine intelligence. I thought this was a mistake. There is a lot we can learn about consciousness from the cutting edge of machine intelligence. I bet that AI will help us make the most significant advances towards understanding consciousness. As Feynman said, "What I cannot create, I do not understand."

Seth mentions in passing that he believes intelligence and consciousness are distinct, and that you can have an "intelligent" system that is not "conscious" and vice versa. I found this to be poorly justified. There are good reasons to believe that both consciousness and intelligence are processes of inference/modelling. My hunch is that intelligence and consciousness are deeply intertwined (but Seth is the world-class cognitive scientist, not me). This distinction is a debate about the definitions of words, not about the workings of intelligence or consciousness, and is therefore unlikely to be fruitful.

I thought this was an exciting way to capture our experience of the exponentials: "Where are we on this exponential curve? The problem with exponential curves—as many of us learned during the recent coronavirus pandemic—is that wherever you stand on them, what's ahead looks impossibly steep and what's behind looks irrelevantly flat. The local view gives no clue to where you are." This makes sense because the derivative of e^x is e^x itself. Hence you cannot tell where you are on the curve.
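The quoted point can be checked directly. A tiny sketch (names invented): rescale the exponential by its current value, and the "local view" is the same everywhere on the curve.

```python
import math

# Because d/dx e^x = e^x, the curve near any point, normalised by its
# value at that point, looks identical: e^(x0 + t) / e^(x0) = e^t,
# independent of x0. The local view carries no clue to where you stand.

def local_view(x0, window=(-1.0, 0.0, 1.0)):
    """The exponential near x0, rescaled by its value at x0."""
    return [math.exp(x0 + t) / math.exp(x0) for t in window]

early = local_view(0.0)    # near the "flat"-looking start
late = local_view(50.0)    # far up the "steep"-looking part
# both equal [1/e, 1, e]: the two views are indistinguishable
```

This is special to the exponential; for, say, a quadratic, the rescaled local view changes as you move along the curve, so you could tell where you are.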

Final Thoughts

Overall, this is a decent book. It offers no new insights, but it is nevertheless a good non-technical synopsis. Seth covers a wide range of topics in depth while presenting the reader with easy-to-grasp explanations. The lack of conclusive answers might disappoint some, but I don't expect us to find ultimate theories explaining the whole of consciousness any time soon. That being said, we shouldn't treat every momentary difficulty as insurmountable, or as evidence that consciousness is somehow non-physical. Instead, let's keep working towards a precise explanation.

If you are interested in a more technical synopsis of consciousness research, The rise of machine consciousness: Studying consciousness with computational models is a good survey paper.

23 comments

Comments sorted by top scores.

comment by Steven Byrnes (steve2152) · 2021-10-26T13:10:58.438Z · LW(p) · GW(p)

That being said, Seth does present the reader with the good old cloning thought experiment. Seth asks the reader to imagine a machine that is capable of creating a perfect clone of someone and then asks if the clone is the same person or someone else. He completely ignores the No-cloning theorem in quantum mechanics. I don't understand the point of these "thought experiments" when physics has ruled them out. Seth criticises Chalmers' philosophical zombies but then makes the same mistake.

I think you're nitpicking here about no-cloning. Here are his exact words:

Imagine that a future version of me, perhaps not so far away, offers you the deal of a lifetime. I can replace your brain with a machine that is its equal in every way, so that from the outside, nobody could tell the difference. This new machine has many advantages – it is immune to decay, and perhaps it will allow you to live forever.

But there’s a catch. Since even future-me is not sure how real brains give rise to consciousness, I can’t guarantee that you will have any conscious experiences at all, should you take up this offer. Maybe you will, if consciousness depends only on functional capacity, on the power and complexity of the brain’s circuitry, but maybe you won’t, if consciousness depends on a specific biological material – neurons, for example. Of course, since your machine-brain leads to identical behaviour in every way, when I ask new-you whether you are conscious, new-you will say yes. But what if, despite this answer, life – for you – is no longer in the first person?

Maybe he shouldn't have said "equal in every way" because that kinda implies "down to the last quantum fluctuation of the last quark". But in context, I think it's pretty clear that he means that the operations only need to be similar enough to get out indistinguishable-from-the-outside behavior. Most people (including me … in fact including almost everyone but Roger Penrose) think that there's no important information stored in the exact quantum state of a biological system at a particular moment. After all, everything is getting randomly knocked around by water molecules every nanosecond, which changes the quantum state in random ways, but the system still functions fine. So if you (in principle) measure the state of the brain at or near the quantum limit (but not beyond the quantum limit), and then run that simulation forward using the most accurate microscopic physics simulation currently available, I'm confident that this would be good enough to get indistinguishable-from-the-outside simulated behavior. (In fact, I think it would be massive overkill.)

Replies from: alexander-1, alexander-1
comment by Alexander (alexander-1) · 2021-10-26T20:52:39.458Z · LW(p) · GW(p)

On some further thought, although the quote you shared is relevant, it is not exactly the part of the book that I was referring to. I was referring to the teleportation thought experiment in chapter 8 "Expect Yourself":

One day, there’s a hitch. The vaporisation module in London malfunctions and Eva – the Eva who is in London, anyway – feels like nothing’s happened and that she’s still in the transportation facility. A minor inconvenience. They’ll have to reboot the machine and try again, or maybe leave it until the following day. But then a technician shuffles into the room, carrying a gun. He mumbles something along the lines of ‘Don’t worry, you’ve been safely teletransported to Mars, just like normal, it’s just that the regulations say that we still need to … and, look here, you signed this consent form …’ He slowly raises his weapon and Eva has a feeling she’s never had before, that maybe this teletransportation malarkey isn’t quite so straightforward after all.

The point of this thought experiment, which is called the ‘teletransportation paradox’, is to unearth some of the biases most of us have when we think about what it means to be a self.

...
Is the Eva on Mars (let’s call her Eva2) the same person as Eva1 (the Eva still in London)? It’s tempting to say, yes, she is: Eva2 would feel in every way as Eva1 would have felt had she actually been transported instantaneously from London to Mars. What seems to matter for this kind of personal identity is psychological continuity, not physical continuity.* But then if Eva1 has not been vaporised, which is the real Eva?
I think the correct – but admittedly strange – answer is that both are the real Eva.

My disagreement relating to the no-cloning theorem aside, I have another disagreement about Seth's conclusion here. Claiming that the "correct" answer is that they are both the same person really stretches the idea of selfhood. Seth doesn't justify convincingly why he thinks this is the "correct" answer.

If the teleportation paradox is physically possible (if this imaginary machine must destroy the body to clone it, then how could it malfunction and still perform the cloning?), then I find Derek Parfit's answer (YouTube version) to the teleportation paradox more persuasive.

Parfit argues that any criteria we attempt to use to determine sameness of person will be lacking, because there is no further fact. What matters, to Parfit, is simply "Relation R", psychological connectedness, including memory, personality, and so on.

Replies from: JBlack
comment by JBlack · 2021-10-29T03:48:50.401Z · LW(p) · GW(p)

The "selfhood" relation doesn't necessarily have to be symmetric or transitive, but the term is used as if it is, and I think this causes a lot of problems in discussion.

Eva1 and Eva2 likely both consider Eva0 (who walked into the machine) to be their past self, but that doesn't mean that they must automatically consider themselves to be the same person as each other. It also doesn't mean that Eva0 would agree with one or both of them.

I also think there is not any objective, external way to determine this relation: it's purely psychological.

However, if I think further into a future where people could copy themselves and later psychologically integrate both sets of memories, behaviour, and so on, then Eva1 and Eva2 in such a world may well consider themselves to be the same person as each other, and also as some future Eva3, Eva4, and so on. The thought of this few-minutes branch of herself not contributing to her future self's memories might not be so horrible, but I don't think she'd merely take a technician's word that Eva2 actually exists to carry on her survival.

Replies from: alexander-1
comment by Alexander (alexander-1) · 2021-10-29T09:31:20.436Z · LW(p) · GW(p)

Excellent points. I hadn’t given much thought to the psychological vs external sameness of selfhood.

One is naturally led to wonder how such dilemmas would be dealt with in legal proceedings. Your assertion that there is no external, objective criterion for the sameness of selfhood implies that if Eva1 committed a crime, then we cannot reasonably convict Eva2 for it.

Replies from: michielper, JBlack
comment by michielper · 2023-01-19T15:45:12.142Z · LW(p) · GW(p)

As Seth justly states, immediately after the cloning, all the Evas become different persons because they acquire different experiences. They do share a common history, but they will soon start telling different stories about this history, just as different people do.

comment by JBlack · 2021-10-31T05:27:03.873Z · LW(p) · GW(p)

Yes, legal identity is an even bigger can of worms. Even in some cases in the real world, you can already lose your continuity of "legal identity" in some corner cases. Being able to duplicate people would just make it even messier.

Do duplicates "inherit" into some sort of joint ownership of property? Is the property divided like inheritance? Are they new people entirely with no claims on property at all? What about citizenship? If Eva0 committed a crime, should we hold both Eva1 and Eva2 responsible for it? If after duplication Eva2 committed a crime that strongly benefits Eva1, but killed herself before conviction, can the prosecution go after Eva1? Do they need to prove beyond reasonable doubt that the intent was in the mind of Eva0 before duplication?

Being able to "merge" mind states would make it very much messier still.

Replies from: alexander-1
comment by Alexander (alexander-1) · 2021-10-31T22:16:56.130Z · LW(p) · GW(p)

Do they need to prove beyond reasonable doubt that the intent was in the mind of Eva0 before duplication?

That's gnarly.

Another aspect that I'm led to contemplate is the ease of collusion with your clone. It's reasonable to believe that Eva1 would collude with Eva2 more easily than with an entirely different person.

comment by Alexander (alexander-1) · 2021-10-26T19:57:22.786Z · LW(p) · GW(p)

Very insightful comment, Steven. Putting it that way, I agree with you that the quantum fluctuations (most likely) don’t matter for our experiences.

I was indeed nitpicking, but the broader point I'm interested in is about the futility of thought experiments that ignore the constraints imposed by physics rather than about whether quantum fluctuations play a role in how consciousness works.

This quote from Frank Wilczek claims that we are yet to attribute any high-level phenomena to quantum fluctuations:

Consistency requires the metric field to be a quantum field, like all the others. That is, the metric field fluctuates spontaneously. We do not have a satisfactory theory of these fluctuations. We know that the effects of quantum fluctuations in the metric field are usually—in our experience so far, always—small in practice, simply because we get very successful theories by ignoring them! From delicate biochemistry to exotic goings-on at accelerators to the evolution of stars and the early moments of the big bang, we’ve been able to make precise predictions, and have seen them accurately verified, while ignoring possible quantum fluctuations in the metric field. Moreover, the modern GPS system maps out space and time directly. It doesn’t allow for quantum gravity, yet it works very well. Experimenters have worked very hard to discover any effect that could be ascribed to quantum fluctuations in the metric field, or, in other words, to quantum gravity. Nobel Prizes and everlasting glory would attend such a discovery. So far, it hasn’t happened.

comment by Tamir · 2021-11-30T02:02:21.649Z · LW(p) · GW(p)

I am just about to finish Being You and had a rising frustration which I did not quite know where to take... so I hope I am not bothering anyone by raising it here.

Seth's hope that his account of consciousness will dissolve the "Hard Problem of Consciousness" into the "Real Problem of Consciousness" did not work for me at all. He frequently uses terms like 'causation' and 'correlation' to describe the relationship between physical states of bodies and brains, on the one hand, and mental phenomena, on the other. The more I think about it, the more that just seems wrong.

Please bear with me a moment for an analogy.

If I imagine having complete information about the physical state of a billiard ball moving toward two other billiard balls that it will soon strike, sending them off in their respective (different) directions, it is accurate to say: 1) the first billiard ball "caused" the movement of the other two; 2) the movements of the other two are "correlated"; and 3) if I also have an account of the heat generated by the collision, such that all of the energy present in the state at which I had complete information is accounted for in the collision, then I have a complete description of the causal effects of the collision (and so too the resulting correlations between the objects affected by that 'causal' event).

So the same ought to be true if I had a complete description of a neuron which is about to impart an electrical charge to two other previously uncharged neurons. The activity of the two other neurons will be correlated (as between each other) because their energized states would have been 'caused' by the activity of the first neuron. And if I also account for any additional energy present at the time at which I had complete information (e.g. some heat), then there can be no other causes arising from the activity of the first neuron.

The problem is that, unlike in the case of the billiard balls, there can be phenomena in the consciousness of a person whose brain houses the three neurons. The mental phenomena associated with the first neuron's activity can't stand in a relationship of causation to that neuron, because all of the energy present in the state at which I had complete information has already been accounted for. Correlation also doesn't work as a term, because if we assume that there are also mental phenomena associated with the activity of the two previously uncharged neurons, they would have to be the result of a prior causal effect on those two neurons; but we just said that the first neuron caused their energetic states, and its energy has already been entirely accounted for.

So what is the  right English word for the relationship at issue?

Up until today I thought 'constitutive' was the right word to describe the relationship; but I just realized that doesn't quite work either, at least not in the ordinary sense. For example, if I were to see a black billiard ball against a white background from a distance, I might initially mistake it for a two-dimensional picture of a billiard ball. Then, when I got closer, I would notice it is a three-dimensional object (an actual billiard ball). Its depth is a constituent part of it which I had not initially noticed. But going back to the idea of having complete information about the physical state of such an object: if I had had that when I saw it from a distance, I would never have made the mistake of treating it as two-dimensional in the first place.

But that, again, isn't what is going on with the appearance of phenomena in consciousness because, again, it is not described or anticipated by the dynamical states of the neurons any more than it would be for billiard balls.

The only way 'constitutive' seems to work as a description of what is going on is if we treat the appearance in consciousness as another dimension, in a similar way to how I just treated depth as an additional dimension I came to notice about the billiard ball. It is just that the dimension of mental phenomena can't be a 'physical' one ... because, again, all the physical consequences and correlates of the neurons' dynamical activity have already been accounted for.

That seems to be a really hard problem which Seth -- mistakenly to my mind -- believes he has dissolved.

comment by Steven Byrnes (steve2152) · 2021-10-26T13:13:05.697Z · LW(p) · GW(p)

I was disappointed to see Seth ignore Scott Aaronson's incisive knock-down of IIT.

Yes, and also, here's a different incisive knock-down of IIT that I found very compelling.

Replies from: Julian Bradshaw
comment by Julian Bradshaw · 2021-10-27T05:33:38.098Z · LW(p) · GW(p)

That's easier for me to understand than Aaronson's, thanks. Interestingly, the author of that blog post (Jake R. Hanson) seems to have just published a version of it as a proper scientific paper in Neuroscience of Consciousness this past August... a journal whose editor-in-chief is Anil Seth, the author of the book reviewed above! Not sure if it comes up in the book or not; considering the book was published just this September, it's probably too recent, unfortunately.

Replies from: alexander-1
comment by Alexander (alexander-1) · 2021-10-27T07:01:05.151Z · LW(p) · GW(p)

I love the title of that paper. Formalising falsification for theories of consciousness is exactly what the consciousness space needs to maximise signal and minimise noise. Thank you for sharing it! I’m going to give that paper a read. I’m very curious about how J R Hanson defines “consciousness”. To falsify a theory, we first need to be precise about what it must predict.

I am fairly certain that Anil Seth did not mention either of these incisive knock-downs of IIT in the book, but I could've missed it. The reason I'm so certain is that Seth spoke about IIT with admiration and approval. I'm sure he would've updated.

comment by Matt Sigl (matt-sigl) · 2021-12-17T22:13:54.662Z · LW(p) · GW(p)

Also, Phi is not at all poorly defined. You can analyze any system, find the spatio-temporal scale at which that system is most integrated (the scale at which the behavior of the system is more than the sum of its parts and therefore fully analyzable only as a single whole), and then calculate the exact value of Phi, a measure of the amount of integrated information in the system, using either the Kullback-Leibler divergence or the Earth Mover's Distance (Wasserstein metric); different versions of the theory use different statistical methods. The fact that the theory is being "refined" is a testament to its appeal. The original paper, "A Provisional Manifesto," is still the best overall description of the theory, and one that actually goes into some (very complicated) detail about how the "architecture" of the n-dimensional information space implied by the theory maps onto phenomenology.
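A rough numerical illustration of one ingredient in such measures (my own toy sketch, not the actual IIT algorithm, which searches over all partitions of a system's cause-effect structure): the KL divergence between the joint distribution of two coupled binary units and the product of their marginals. A positive value means the whole carries information that the parts, taken separately, do not.

```python
import math

# Toy sketch of integration-as-irreducibility (my own illustration, not
# the real Phi calculation): KL divergence between a joint distribution
# over two correlated binary units and the independent-parts prediction.

def kl_divergence(p, q):
    """KL(p || q) in bits, skipping zero-probability states."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

joint = [0.4, 0.1, 0.1, 0.4]                 # states 00, 01, 10, 11
p_a = [joint[0] + joint[1], joint[2] + joint[3]]   # marginal of unit A
p_b = [joint[0] + joint[2], joint[1] + joint[3]]   # marginal of unit B
product = [a * b for a in p_a for b in p_b]  # what independent parts predict

integration = kl_divergence(joint, product)  # > 0: whole exceeds its parts
```

For these toy numbers the divergence comes out to roughly 0.28 bits; a fully factorizable system would score zero.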

FWIW, I found Scott Aaronson's analysis of IIT intellectually uncharitable. He didn't seem interested in understanding the theory on its own terms, and I found his main criticism, that a grid of logic gates could be conscious if IIT is correct, unconvincing. If neurons can be conscious, why not grids? There is even some neurological evidence that the brain is actually organized in grid structures, especially the cortex, but the tangled and coiled-up physiology of the brain obscures this fact.

comment by Ape in the coat · 2021-10-26T09:00:38.754Z · LW(p) · GW(p)

The Good Regulator Theorem asserts that "every good regulator of a system must be a model of that system." Therefore, the air conditioning system must have some heat-map (e.g. via thermostats) of the building (i.e. a model). Similarly, for an organism to maintain its existence, it must have a model of the system it is trying to sustain, i.e. a model of itself and its surrounding environment. This way, the organism can remain within a narrow set of favourable physical states (the organism's attractor set) which allow it to stay alive.

 

This was insightful. I hadn't previously thought of my consciousness as being part of the same continuum as a bunch of thermostats, but in hindsight it's very obvious.

I notice that my ability to conceive of philosophical zombies has decreased even more.
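The thermostat analogy from the quoted passage can be made concrete with a toy sketch (my own illustration; the parameters and dynamics are invented): a regulator succeeds precisely because its internal reading is a running model of the variable it controls, as the Good Regulator Theorem requires.

```python
# Toy illustration of the Good Regulator Theorem (my own sketch).
# A thermostat keeps a leaky room near a setpoint because its internal
# reading -- its "model" of the room -- tracks the room's actual state.

def regulate(temp, setpoint=21.0, heater_power=1.5, leak=0.1, steps=100):
    """Simulate a leaky room: temperature drifts toward 10 C unless heated."""
    history = []
    for _ in range(steps):
        model = temp                            # the regulator's model of the room
        heating = heater_power if model < setpoint else 0.0
        temp += heating - leak * (temp - 10.0)  # simple room dynamics
        history.append(temp)
    return history

history = regulate(temp=15.0)  # settles into a narrow band near the setpoint
```

If the thermostat's reading were decoupled from the room (no model), regulation would fail; the theorem says this is not an accident but a necessity for any good regulator.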

Replies from: alexander-1
comment by Alexander (alexander-1) · 2021-10-26T11:04:27.374Z · LW(p) · GW(p)

If I recall correctly, I was first introduced to the map-territory meme via LessWrong, and I've found it a useful idea in that it has helped me conceptualise the world and my place in it more clearly (as far as I can tell). I hear with great interest that you, too, have found this perspective insightful!

[The following are speculative ramblings.]

I wonder what the limits of map-territory convergence are and what those limits tell us about the limits of intelligence. Is complete convergence possible? Or is the limit determined by computational irreducibility (the idea that you cannot model some systems perfectly, you simply have to watch them unfold to find out what they do)? Is the universe a map that perfectly reflects the territory (itself)? Or is the universe yet another map of a yet deeper reality? I guess these questions belong to the realm of metaphysics.

Replies from: Ape in the coat
comment by Ape in the coat · 2021-10-27T07:50:32.577Z · LW(p) · GW(p)

Well, of course I was already familiar with the map-territory distinction, and while it is insightful in itself, it wasn't the insight I grasped from that paragraph.

The new insight is a deeper understanding of the degree to which consciousness is functionally necessary for human behaviour. Literally as necessary as thermostats are for an air conditioning system. Also, while I understood that I have maps of reality in my consciousness, I suppose I wasn't explicitly thinking that my consciousness is itself a map.

Replies from: alexander-1
comment by Alexander (alexander-1) · 2021-10-27T08:31:26.256Z · LW(p) · GW(p)

Indeed! The good regulator theorem certainly gives concreteness to the abstract notion of a map. I find clarity in viewing intelligence/consciousness as analogous to the processes of mapmaking—walking around, surveying the territory, recording observations, and so on—rather than simply the map.  In my view, this analogy to mapmaking makes more explicit the relationship between physical processes and intelligence/consciousness and the ever-changing nature of the map. I find it a little mind-blowing to conceptualise the map as the territory modelling itself.

The Wheeler Eye

I recommend chapter 5 (and related chapters) of A Thousand Brains by Jeff Hawkins for a physiological explanation of the idea of a map and how it manifests in the brain's structure.

...every cortical column learns models of objects. The columns do this using the same basic method that the old brain uses to learn models of environments. Therefore, we proposed that each cortical column has a set of cells equivalent to grid cells, another set equivalent to place cells, and another set equivalent to head direction cells, all of which were first discovered in parts of the old brain.

[Disclaimer: I have not completed reading A Thousand Brains and I have not scrupulously scrutinised it yet.]

I’ve come across this idea about the similarities between brain structure and the structure of our physical environment in several places now (both K Friston and J Hawkins talk about it).

  • Brains have thin and long connections between neurons, which we can compare to forces that appear to act at a distance, such as light reflecting off an object and reaching our eyes almost instantaneously or gravity acting on a falling apple.
  • The deeply nested hierarchical structure of the connectome is analogous to the hierarchical nature of physical systems (composition and abstraction).
  • The human brain processes whatness and whereness in different regions. If we lived in a reality where an object changed its nature every time it moved, it would be more efficient to combine whatness and whereness processing into the same region.

All this is highly speculative but hints at a map-territory correspondence between brains and their environments. 

comment by Matt Sigl (matt-sigl) · 2021-12-17T21:45:33.760Z · LW(p) · GW(p)

Seems that Seth doesn’t understand the Zombie argument at all. Assuming Seth believes in the causal closure of the physical world (I don’t think he believes consciousness is an immaterial force “filling in” the causal gaps of indeterminate physical processes in the brain), he should take Zombies more seriously. The Zombie argument applies to any physical process no matter how “complex” since physical processes can always be conceived to happen exactly the same way “in the dark”, as a zombie. If the physical world is causally closed, all the causal “work” is done physically in the brain in a coherent, intelligible way and consciousness is only assumed because we know about it from first person experience. Zombies are a convincing way to make the Hard Problem explicit via a thought experiment. His example of imagining a A380 moving backwards is irrelevant because the incoherency there is implied by the non-controversial ontological character of the matter that constitutes it: given that matter is what is and if when I’m imagining a A380 I’m really imagining a physical object, then I can’t “actually” imagine it moving backward because it wouldn’t really be actual matter I’m imagining. (What I could imagine is the phenomenal experience of seeing something “like” that happen, like a special effect in a movie. I’m actually imagining a potential possible experience.) Zombies are a different kind of conceivability question altogether. It’s precisely consciousness’ radically different ontological nature that the Zombie argument is attempting to bring to fore. To argue against zombies you’d have to demonstrate why physical processes MUST be conscious, (probably impossible given the fundamental modality of “physical” explanation itself) or introduce a new fundamental ontology of the world such that zombies are impossible because the concept of the physical world, as implied by zombie dualism, doesn’t exist. (IIT actually veers in this direction.)

Replies from: weightt-an
comment by weightt an (weightt-an) · 2022-06-02T17:54:19.901Z · LW(p) · GW(p)

Isn't consciousness just a "read-only access thing to the world" then? Like, is there some reason why dualism is not isomorphic to parallelism?

comment by TAG · 2023-08-26T15:12:16.882Z · LW(p) · GW(p)

Can you imagine an A380 flying backwards? Of course you can. Just imagine a large plane in the air, moving backwards. Is such a scenario really conceivable? Well, the more you know about aerodynamics and aeronautical engineering, the less conceivable it becomes. It just cannot be done.

Seth follows this up with a knock-down argument against Chalmers’ philosophical zombies:

In one sense it’s trivial to imagine a philosophical zombie. I just picture a version of myself wandering around without having any conscious experiences. But can I really conceive this? What I’m being asked to do, really, is to consider the capabilities and limitations of a vast network of many billions of neurons and gazillions of synapses (the connections between neurons), not to mention glial cells and neurotransmitter gradients and other such neurobiological goodies, all wrapped into a body interacting with a world which includes other brains in other bodies. Can I do this? Can anyone do this? I doubt it.

That's a disanalogous analogy. In the first case, we can't imagine the plane flying backwards in terms of our knowledge of aerodynamics, because aerodynamics makes it impossible; in the second case, we don't have a theory that makes it inevitable that neural activity must be accompanied by phenomenal consciousness -- such a theory would be an answer to the hard problem.

comment by Don Salmon (don-salmon) · 2022-03-01T16:49:49.503Z · LW(p) · GW(p)

Do you have a definition of the word "physical" as used in physicalism?

Replies from: alexander-1
comment by Alexander (alexander-1) · 2022-06-17T00:15:18.234Z · LW(p) · GW(p)

I would define "physical" as the set of rigid rules governing reality that exist beyond our experience and that we cannot choose to change.

I can cause water to freeze to form ice using my agency, but I cannot change the fundamental rules governing water, such as its freezing point. These rules go beyond my agency and, in fact, constrain my agency.

Physics constrains everything else in a way that everything else does not constrain physics, and thus the primacy of physics.

Replies from: TAG
comment by TAG · 2023-08-26T15:16:13.925Z · LW(p) · GW(p)

If there were a rigid law that the wicked are punished karmically, would that count as physical?