Thoughts on How Consciousness Can Affect the World

post by DavidPlumpton · 2014-03-20T08:32:09.473Z · LW · GW · Legacy · 11 comments

Contents

  Overview
  Assumptions
  Tentative Lemmas
  Likely Complications
  Main Argument
11 comments

Overview

In "The Generalized Anti-Zombie Principle" Eliezer mentioned that one aspect of consciousness is that it can affect the world, e.g. by making us say out loud "I feel conscious", or deciding to read (or write) this article. However my current working hypothesis is that consciousness is a feature of information and computation. So how can pure information affect the world? It is this aspect of consciousness that I intend to explore.
Assumptions

A1: Let's go all in on Reductionism. Nothing else is involved other than the laws of physics (even if we haven't discovered them all yet).

A2: Consciousness exists (or at least some illusion of experiencing it exists).


Tentative Lemmas

TL1: A simulation of the brain (if necessary down to quantum fields) would seem to itself to experience consciousness in whatever way it is that we seem to ourselves to experience consciousness.

TL2: If an electronic (or even pen-and-paper) simulation of a brain is conscious, then consciousness is "substrate independent", i.e. not requiring a squishy brain, at least in principle.

TL3: If consciousness is substrate independent, then the best candidate for the underlying mechanism of consciousness is computation and information.

TL4: "You" and "I" are information and algorithms, implemented by our brain tissue. By this I mean that somehow information and computation can be made to seem to experience consciousness. So somehow the information can be computed into a state where it contains information relating to the experience of consciousness.

Likely Complications

LC1: State transitions may involve some random component.

LC2: It seems likely that a static state is insufficient to experience consciousness, and that change of state (i.e. processing) is required. Could a single unchanging state experience consciousness forever? Hmmmm....

Main Argument

What does it mean for matter such as neurons to "implement" consciousness? For information to be stored, the neurons need to be in some state that can be discriminated from other states. The act of computation is changing the current state of information into a new state, based on the current state. Let's say we start in state S0 and then neurons fire in some way that we label state S1, ignoring LC1 for now. We have thus "computed" state S1 from the previous state S0, which we can write as:

S0 -> S1
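
(As a toy sketch, and nothing more: computation in this sense is just a rule that maps the current state to the next one. The Python below and its transition table are invented purely for illustration; they are not a model of real neurons.)

    # Toy model: a "computation" is nothing but a rule taking the
    # current state to the next state. The table is illustrative.
    def step(state: str) -> str:
        transitions = {"S0": "S1", "S1": "S2*"}
        return transitions.get(state, state)

    print(step("S0"))  # prints "S1": we have "computed" S1 from S0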

From our previous set of assumptions we can conclude that S0 -> S1 may involve some sensation of consciousness for the system represented by the neurons, i.e. that the consciousness would be present regardless of whether a physical set of neurons was firing, a computer simulation was running, or we were writing it out slowly on paper. So we can consider the physical state S0 to be "implementing" a conscious sensation that we can label C0. Let's describe that as:

S0 (C0) -> S1 (C1)

Note that the parens do not indicate function calls; they just describe correlations between the two state flavours. As for LC1, if we instead get S0 -> S(random other), then it would simply seem that a conscious decision had been made not to take the action. Now let's consider some computation that follows on from S1:

S1 (C1) -> S2* (C2)

I label S2* with a star to indicate that it is slightly special: it happens to fire some motor neurons that are connected to muscles, and we notice some effect on the world. So a conscious being experiences the chain of computation S0 (C0) -> S1 (C1) -> S2* (C2) as feeling like they have made a decision to take some real-world action and then performed that action. From TL2 we see that in a simulation the same feeling of making a decision and acting on it will be experienced.
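
(Extending the toy sketch from above, again purely for illustration: the MOTOR_STATES set below is an invented stand-in for "states whose computation happens to drive muscles"; nothing here comes from neuroscience.)

    # Extends the toy model: some states, like S2*, are wired to
    # effectors, so computing them has a visible effect on the world.
    TRANSITIONS = {"S0": "S1", "S1": "S2*"}
    MOTOR_STATES = {"S2*"}

    state = "S0"
    while state in TRANSITIONS:
        state = TRANSITIONS[state]       # the next state is computed...
        if state in MOTOR_STATES:        # ...and this one fires motor neurons
            print(state, "-> muscles move, the world changes")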

And now the key point... each information state change occurs in lockstep with patterns of neurons firing. The real or simulated state transitions S0 -> S1 -> S2* correspond to the perceived C0 -> C1 -> C2. But of course C2 can be considered to be C2*; the conscious being will soon have sensory feedback to indicate that C2 seemed to cause a change in the world.

So S0 (C0) -> S1 (C1) -> S2* (C2) is equivalent to S0 (C0) -> S1 (C1) -> S2* (C2*).

The result is another one of those "dissolve the question" situations. Consciousness affects the real world because "real actions" and information changes occur in lockstep together, such that the conscious feeling of causality in the decision C0 -> C1 -> C2* is really a perfect correlation with neurons firing in the specific sequence S0 -> S1 -> S2*. In fact it now seems like a philosophical point as to whether there is any difference between causation and correlation in this specific situation.

Can LC2 change the conclusion at all? I suspect not; perhaps S0 may consist of a set of changing sub-states (even if C0 does not), until a new state occurs that is within a set S1 instead of S0.

The argument concerning consciousness would also seem to apply to qualia (I suspect many smaller-brained animals experience qualia but no consciousness).

11 comments

Comments sorted by top scores.

comment by TheAncientGeek · 2014-03-20T11:24:42.004Z · LW(p) · GW(p)

Pure information does not exist. It is an abstract perspective on concrete physical states. Information processing cannot fail to stay in lockstep with the causal activity of its implementation because it is not ontologically a separate thing.

The idea that consciousness supervenes on any functional equivalent to neural activity is problematic because the range of possible concrete implementations is so large, e.g. Blockheads.

If you take the view that the more problematical aspects of consciousness, such as qualia, supervene directly on physics, and not on the informational layer, you can avoid both Blockheads and p-zombies.

Replies from: Thomas
comment by Thomas · 2014-03-20T15:25:25.592Z · LW(p) · GW(p)

If you take the view that the more problematical aspects of consciousness, such as qualia, supervene directly on physics

Without any informational process going on there?

comment by Shmi (shminux) · 2014-03-20T17:28:28.908Z · LW(p) · GW(p)

Consciousness exists

If you are trying to be all formal about it, it's good to start by defining your terminology. What do you mean by Consciousness and what do you mean by existence? And one of the best ways to define what you mean by a commonly used term is to delineate its boundaries. For example, what is not-consciousness? Not-quite-consciousness? Give an example or ten. Same with existence. What does it mean for something to not exist? Can you list a dozen non-existing things?

For example, do pink unicorns exist? If not, how come they affect reality (you see a sentence about them on your computer monitor)? How is consciousness different from pink unicorns? Please do not latch onto this one particular example; make up your own.

I am pretty sure you have no firm understanding of what you are talking about, even though it feels in your gut like you do, "but it is hard to explain". If you do not have a firm grasp of the basics, writing fancy lemmas and theorems may help you publish a philosophy paper, but it does not get you anywhere closer to understanding the issues.

Replies from: DavidPlumpton
comment by DavidPlumpton · 2014-03-20T19:12:23.838Z · LW(p) · GW(p)

If you are trying to be all formal about it, it's good to start by defining your terminology. What do you mean by Consciousness and what do you mean by existence?

I'm trying to be slightly formal, but without getting too bogged down. Instead I would prefer to take a few shortcuts to see if the road ahead looks promising at all. So far I feel that the best I've managed is to say "If a system seems to itself to experience consciousness in the same way that we seem to experience it, then we can call it conscious".

I am pretty sure you have no firm understanding of what you are talking about,

Not as sure as I am ;-) But I am trying to improve my understanding, and have no intention of writing philosophy papers.

comment by [deleted] · 2014-03-20T12:19:29.469Z · LW(p) · GW(p)

I am not convinced by arguments for Sir Karl Popper's three worlds model of existence. Similar to what TheAncientGeek said, I am not convinced mental objects exist. But I suggest that what you and EY are writing about, Popper wrote about in the 1970s.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-20T12:37:03.371Z · LW(p) · GW(p)

David could benefit from reading Chalmers' The Conscious Mind, if he has not. Chalmers thinks consciousness supervenes on information processing, but his model is dualistic. David might want to avoid that.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-03-20T12:56:21.011Z · LW(p) · GW(p)

Not up on my Chalmers, but is it really fair to call it dualism? I thought his view was that there is a subjective/physical duality in Nature just as there is a particle/wave duality. That's not very Cartesian, really.

Replies from: pragmatist, TheAncientGeek
comment by pragmatist · 2014-03-20T13:19:57.257Z · LW(p) · GW(p)

Chalmers' view is usually referred to as property dualism, because it says that brains (and perhaps other physical systems) have certain properties (subjective experience, for instance) that are not reducible to fundamental physical properties. This is not really like particle/wave duality, because in that case both particle-like and wave-like aspects of the behavior of matter are unified by a deeper theory. Chalmers doesn't believe we will see any such unification of mental and physical properties.

Descartes, on the other hand, was a substance dualist. He didn't just believe that mental properties are irreducible to physical properties; he also believed that the bearer of mental properties is non-physical, i.e. not the brain but the non-physical mind.

So Chalmers is a dualist, according to contemporary philosophical parlance, in that he thinks that our fundamental ontology must include the mental as well as the physical, but he's not a substance dualist.

comment by TheAncientGeek · 2014-03-20T13:07:03.351Z · LW(p) · GW(p)

He says it's dualism, and it requires non-physical properties. It's not Cartesianism because minds aren't detachable, but Cartesianism is a subset of dualism.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-03-20T13:07:34.915Z · LW(p) · GW(p)

Ok.

comment by Squark · 2014-03-23T19:40:34.247Z · LW(p) · GW(p)

As for LC1, if we instead get S0 -> S(random other), then it would simply seem that a conscious decision had been made not to take the action.

Are you equating decision-making with physical randomness? This seems to be an error. When you "make a decision", there is a reason you made that particular decision. See also what Russell has to say.